Super Resolution Algorithm for CCTVs
Gohshi, Seiichi
2015-03-01
Recently, security cameras and CCTV systems have become an important part of our daily lives. The rising demand for such systems has created business opportunities in this field, especially in big cities. Analogue CCTV systems are being replaced by digital systems, and HDTV CCTV has become quite common. HDTV CCTV can produce images with high contrast and good quality when they are captured in daylight. However, images captured at night do not always have sufficient contrast and resolution because of poor lighting conditions. CCTV systems depend on infrared light at night to compensate for the insufficient lighting, producing monochrome images and videos. However, these images and videos have low contrast and are blurred. We propose a nonlinear signal processing technique that significantly improves the visual quality (contrast and resolution) of low-contrast infrared images. The proposed method enables the use of infrared cameras for various purposes, such as night shots and other poor lighting environments.
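The abstract does not give the exact nonlinear operator, so the sketch below only illustrates this class of enhancement: an odd (cubic) nonlinearity applied to the high-pass component of a frame, which generates harmonics beyond the original band, followed by contrast stretching. All parameter values are assumptions.

```python
# Illustrative nonlinear contrast/resolution enhancement for a
# low-contrast infrared frame; not the paper's exact operator.
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_ir(frame, sigma=1.5, gain=4.0):
    f = frame.astype(np.float64)
    low = gaussian_filter(f, sigma)              # low-pass (blurred) component
    high = f - low                               # high-pass detail component
    norm = high.std() + 1e-9
    detail = gain * (high / norm) ** 3 * norm    # odd nonlinearity adds harmonics
    out = f + detail
    out = (out - out.min()) / (out.max() - out.min() + 1e-9) * 255.0  # stretch contrast
    return out.clip(0, 255).astype(np.uint8)
```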
Anaphora Resolution Algorithm for Sanskrit
Pralayankar, Pravin; Devi, Sobha Lalitha
This paper presents an algorithm that identifies different types of pronominals and their antecedents in Sanskrit, an Indo-European language. The computational grammar implemented here uses familiar concepts such as clause, subject and object, which are identified with the help of morphological information and the concepts of precede and follow. It is well known that natural languages contain anaphoric expressions, gaps and elliptical constructions of various kinds, and that understanding natural languages involves assigning interpretations to these elements. It is therefore to be expected that natural language understanding systems must have the necessary mechanisms to resolve them. The method we adopt here for resolving anaphors exploits the morphological richness of the language. The system gives encouraging results when tested on a small corpus.
A Simple Two Aircraft Conflict Resolution Algorithm
Chatterji, Gano B.
2006-01-01
Conflict detection and resolution methods are crucial for distributed air-ground traffic management, in which the crew in the cockpit, dispatchers in operation control centers, and traffic controllers in ground-based air traffic management facilities share information and participate in the traffic flow and traffic control functions. This paper describes a conflict detection method and a conflict resolution method. The conflict detection method predicts the minimum separation and the time-to-go to the closest point of approach by assuming that both aircraft will continue to fly at their current speeds along their current headings. The conflict resolution method described here is motivated by the proportional navigation algorithm, which is often used for missile guidance during the terminal phase. It generates speed and heading commands to rotate the line-of-sight either clockwise or counter-clockwise for conflict resolution. Once the aircraft achieve a positive range-rate and no further conflict is predicted, the algorithm generates heading commands to turn the aircraft back to their nominal trajectories. The speed commands are set to the optimal pre-resolution speeds. Six numerical examples are presented to demonstrate the conflict detection and resolution methods.
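The detection step described above reduces to a closed-form computation. A minimal sketch, with illustrative units (positions in NM, speeds in knots, so time-to-go is in hours):

```python
# Predict minimum separation and time-to-go to the closest point of
# approach (CPA), assuming both aircraft hold current speed and heading.
import numpy as np

def cpa(p1, v1, p2, v2):
    """p1, p2: position vectors; v1, v2: velocity vectors (consistent units)."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)   # relative position
    dv = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity
    dv2 = dv.dot(dv)
    t_cpa = 0.0 if dv2 == 0 else max(0.0, -dp.dot(dv) / dv2)  # time-to-go
    d_min = np.linalg.norm(dp + dv * t_cpa)                   # minimum separation
    return t_cpa, d_min

# Example: head-on geometry; a conflict exists if d_min < required separation.
t, d = cpa([0, 0], [480, 0], [40, 8], [-480, 0])
print(f"time-to-go {t * 60:.1f} min, min separation {d:.2f} NM")
```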
Improved Interpolation Kernels for Super-resolution Algorithms
DEFF Research Database (Denmark)
Rasti, Pejman; Orlova, Olga; Tamberg, Gert
2016-01-01
Super resolution (SR) algorithms are widely used in forensic investigations to enhance the resolution of images captured by surveillance cameras. Such algorithms usually use a common interpolation algorithm to generate an initial guess for the desired high resolution (HR) image. This initial guess...... when their original interpolation kernel is replaced by the ones introduced in this work....
An Airborne Conflict Resolution Approach Using a Genetic Algorithm
Mondoloni, Stephane; Conway, Sheila
2001-01-01
An airborne conflict resolution approach is presented that is capable of providing flight plans forecast to be conflict-free with both area and traffic hazards. This approach is capable of meeting constraints on the flight plan such as required times of arrival (RTA) at a fix. The conflict resolution algorithm is based upon a genetic algorithm, and can thus seek conflict-free flight plans meeting broader flight planning objectives such as minimum time, fuel or total cost. The method has been applied to conflicts occurring 6 to 25 minutes in the future in climb, cruise and descent phases of flight. The conflict resolution approach separates the detection, trajectory generation and flight rules function from the resolution algorithm. The method is capable of supporting pilot-constructed resolutions, cooperative and non-cooperative maneuvers, and also providing conflict resolution on trajectories forecast by an onboard FMC.
Maximum likelihood positioning algorithm for high-resolution PET scanners
International Nuclear Information System (INIS)
Gross-Weege, Nicolas; Schug, David; Hallen, Patrick; Schulz, Volkmar
2016-01-01
Purpose: In high-resolution positron emission tomography (PET), lightsharing elements are incorporated into typical detector stacks to read out scintillator arrays in which one scintillator element (crystal) is smaller than the size of the readout channel. In order to identify the hit crystal by means of the measured light distribution, a positioning algorithm is required. One commonly applied positioning algorithm uses the center of gravity (COG) of the measured light distribution. The COG algorithm is limited in spatial resolution by noise and intercrystal Compton scatter. The purpose of this work is to develop a positioning algorithm which overcomes this limitation. Methods: The authors present a maximum likelihood (ML) algorithm which compares a set of expected light distributions given by probability density functions (PDFs) with the measured light distribution. Instead of modeling the PDFs by using an analytical model, the PDFs of the proposed ML algorithm are generated assuming a single-gamma-interaction model from measured data. The algorithm was evaluated with a hot-rod phantom measurement acquired with the preclinical HYPERION II D PET scanner. In order to assess the performance with respect to sensitivity, energy resolution, and image quality, the ML algorithm was compared to a COG algorithm which calculates the COG from a restricted set of channels. The authors studied the energy resolution of the ML and the COG algorithm regarding incomplete light distributions (missing channel information caused by detector dead time). Furthermore, the authors investigated the effects of using a filter based on the likelihood values on sensitivity, energy resolution, and image quality. Results: A sensitivity gain of up to 19% was demonstrated in comparison to the COG algorithm for the selected operation parameters. Energy resolution and image quality were on a similar level for both algorithms. Additionally, the authors demonstrated that the performance of the ML
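A minimal sketch of the two positioning rules being compared, assuming `light` holds the measured photon counts per readout channel and `pdfs` holds measured single-gamma PDFs (one row per crystal); both inputs and the multinomial likelihood model are illustrative assumptions:

```python
import numpy as np

def position_cog(light, channel_pos):
    """Center of gravity of the measured light distribution.
    channel_pos: (n_channels, 2) readout channel coordinates."""
    w = light / light.sum()
    return w @ channel_pos

def position_ml(light, pdfs):
    """Maximum likelihood: crystal whose PDF best explains the light.
    Multinomial log-likelihood: sum_ch n_ch * log p(ch | crystal)."""
    logp = np.log(pdfs + 1e-12)           # (n_crystals, n_channels)
    return int(np.argmax(logp @ light))   # index of the most likely crystal
```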
Resolution recovery for Compton camera using origin ensemble algorithm.
Andreyev, A; Celler, A; Ozsahin, I; Sitek, A
2016-08-01
Compton cameras (CCs) use electronic collimation to reconstruct images of activity distributions. Although this approach can greatly improve imaging efficiency, due to the complex geometry of the CC principle, image reconstruction with standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of CC data. Here we propose a method of extending our OE algorithm to include RR. To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data reconstructed without resolution recovery, and (c) blurred data reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to OSEM would increase reconstruction times by 2-3 orders of magnitude per iteration. The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions
A Denotational Semantics for Logic Programming
DEFF Research Database (Denmark)
Frandsen, Gudmund Skovbjerg
A fully abstract denotational semantics for logic programming has not been constructed yet. In this paper we present a denotational semantics that is almost fully abstract. We take the meaning of a logic program to be an element in a Plotkin power domain of substitutions. In this way our result...... shows that standard domain constructions suffice, when giving a semantics for logic programming. Using the well-known fixpoint semantics of logic programming we have to consider two different fixpoints in order to obtain information about both successful and failed computations. In contrast, our...... semantics is uniform in that the (single) meaning of a logic program contains information about both successful, failed and infinite computations. Finally, based on the full abstractness result, we argue that the detail level of substitutions is needed in any denotational semantics for logic programming....
The Chorus Conflict and Loss of Separation Resolution Algorithms
Butler, Ricky W.; Hagen, George E.; Maddalon, Jeffrey M.
2013-01-01
The Chorus software is designed to investigate near-term, tactical conflict and loss of separation detection and resolution concepts for air traffic management. This software is currently being used in two different problem domains: en-route self-separation and sense and avoid for unmanned aircraft systems. This paper describes the core resolution algorithms that are part of Chorus. The combination of several features of the Chorus program distinguishes this software from other approaches to conflict and loss of separation resolution. First, the program stores a history of state information over time, which enables it to handle communication dropouts and take advantage of previous input data. Second, the underlying conflict algorithms find resolutions that solve the most urgent conflict, but also seek to prevent secondary conflicts with the other aircraft. Third, if the program is run on multiple aircraft, and the two aircraft maneuver at the same time, the result will be implicitly coordinated. This implicit coordination property is established by ensuring that a resolution produced by Chorus will comply with mathematically defined criteria whose correctness has been formally verified. Fourth, the program produces both instantaneous solutions and kinematic solutions, which are based on simple acceleration models. Finally, the program provides resolutions for recovery from loss of separation. Different versions of this software are implemented in Java and C++.
A Denotational Semantics for Communicating Unstructured Code
Directory of Open Access Journals (Sweden)
Nils Jähnig
2015-03-01
An important property of programming language semantics is that they should be compositional. However, unstructured low-level code contains goto-like commands, making it hard to define a semantics that is compositional. In this paper, we follow the ideas of Saabas and Uustalu to structure low-level code. This gives us the possibility to define a compositional denotational semantics based on least fixed points to allow for the use of inductive verification methods. We capture the semantics of communication using finite traces similar to the denotations of CSP. In addition, we examine properties of this semantics and give an example that demonstrates reasoning about communication and jumps. With this semantics, we lay the foundations for a proof calculus that captures both the semantics of unstructured low-level code and communication.
Super-Resolution Algorithm in Cumulative Virtual Blanking
Montillet, J. P.; Meng, X.; Roberts, G. W.; Woolfson, M. S.
2008-11-01
The proliferation of mobile devices and the emergence of wireless location-based services have generated consumer demand for precise positioning. In this paper, the MUSIC super-resolution algorithm is applied to time delay estimation for positioning purposes in cellular networks. The goal is to position a Mobile Station with UMTS technology. The problem of Base-Station hearability is solved using Cumulative Virtual Blanking. A simple simulator using DS-SS signals is presented. The results show that the MUSIC algorithm improves the time delay estimation in both cases, with and without Cumulative Virtual Blanking.
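A minimal sketch of MUSIC applied to time delay estimation: steering vectors parameterized by delay are projected onto the noise subspace of a frequency-domain covariance matrix. The signal model and parameters are assumptions, not the paper's simulator:

```python
import numpy as np

def music_delays(R, freqs, delays, n_paths):
    """R: covariance of frequency-domain snapshots; freqs: bin frequencies (Hz);
    delays: candidate delays to scan (s); n_paths: assumed model order."""
    w, V = np.linalg.eigh(R)                   # eigenvalues in ascending order
    En = V[:, :-n_paths]                       # noise subspace (smallest eigenvalues)
    spectrum = []
    for tau in delays:
        a = np.exp(-2j * np.pi * freqs * tau)  # steering vector for delay tau
        denom = np.linalg.norm(En.conj().T @ a) ** 2
        spectrum.append(1.0 / denom)           # pseudospectrum peaks at path delays
    return np.array(spectrum)
```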
A Turn-Projected State-Based Conflict Resolution Algorithm
Butler, Ricky W.; Lewis, Timothy A.
2013-01-01
State-based conflict detection and resolution (CD&R) algorithms detect conflicts and resolve them on the basis of current state information, without the use of additional intent information from aircraft flight plans. The prediction of the trajectory of an aircraft is therefore based solely upon the position and velocity vectors of the traffic aircraft. Most CD&R algorithms project the traffic state using only the current state vectors. However, past state vectors can be used to make a better prediction of the future trajectory of the traffic aircraft. This paper explores the idea of using past state vectors to detect traffic turns and resolve conflicts caused by these turns using a non-linear projection of the traffic state. A new algorithm based on this idea is presented and validated using a fast-time simulator developed for this study.
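A minimal sketch of the turn-projection idea under stated assumptions: a turn rate is estimated from two consecutive heading observations and the traffic state is projected along the resulting circular arc rather than a straight line (headings measured clockwise from north, in radians; the two-sample turn-rate estimate is an illustrative choice):

```python
import numpy as np

def project_with_turn(pos, speed, hdg_now, hdg_prev, dt_obs, t):
    omega = (hdg_now - hdg_prev) / dt_obs        # estimated turn rate (rad/s)
    if abs(omega) < 1e-6:                        # straight-line fallback
        return pos + speed * t * np.array([np.sin(hdg_now), np.cos(hdg_now)])
    hdg_t = hdg_now + omega * t                  # heading after time t
    r = speed / omega                            # signed turn radius
    dx = r * (np.cos(hdg_now) - np.cos(hdg_t))   # arc displacement, east
    dy = r * (np.sin(hdg_t) - np.sin(hdg_now))   # arc displacement, north
    return pos + np.array([dx, dy])
```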
Denotative and connotative meanings of paintings
Directory of Open Access Journals (Sweden)
Vasić Sandra
2007-01-01
In this study the relationships between judgments of paintings' denotative and connotative meanings were investigated. The denotative domain was defined as motif (represented object, e.g. portrait, landscape, etc.) and message (information carried by the painting, e.g. celebration of patriotism). The connotative domain was defined as subjective experience, i.e. the affective or metaphoric impression produced by a painting (e.g. a feeling of pleasure, an impression of dynamics, and so on). In a preliminary study a list of 39 motifs was specified empirically. The four dimensions of pictorial message were taken from a previous study (Marković, 2006): Subjectivism, Ideology, Decoration and Constructivism vs. Realism. The four dimensions of paintings' subjective experience were taken from a previous study as well (Radonjić and Marković, 2005): Regularity, Attraction, Arousal and Relaxation. In Experiment 1 subjects were asked to associate 39 motifs with 18 paintings. In Experiment 2 subjects were asked to judge 24 paintings on the four dimensions of pictorial message. Results from Experiment 1 showed that the dimensions of paintings' subjective experience were significantly correlated with only five motifs (e.g. everyday life was negatively correlated with Arousal, battle was negatively correlated with Relaxation, and so on). Results from Experiment 2 showed that Subjectivism and Constructivism are negatively correlated with Regularity, and positively correlated with Arousal. Decoration is negatively correlated with Arousal and positively with Attraction and Relaxation.
Vela, Adan Ernesto
2011-12-01
From 2010 to 2030, the number of instrument flight rules aircraft operations handled by Federal Aviation Administration en route traffic centers is predicted to increase from approximately 39 million flights to 64 million flights. The projected growth in air transportation demand is likely to result in traffic levels that exceed the abilities of the unaided air traffic controller in managing, separating, and providing services to aircraft. Consequently, the Federal Aviation Administration, and other air navigation service providers around the world, are making several efforts to improve the capacity and throughput of existing airspaces. Ultimately, the stated goal of the Federal Aviation Administration is to triple the available capacity of the National Airspace System by 2025. In an effort to satisfy air traffic demand through the increase of airspace capacity, air navigation service providers are considering the inclusion of advisory conflict-detection and resolution systems. In a human-in-the-loop framework, advisory conflict-detection and resolution decision-support tools identify potential conflicts and propose resolution commands for the air traffic controller to verify and issue to aircraft. A number of researchers and air navigation service providers hypothesize that the inclusion of combined conflict-detection and resolution tools into air traffic control systems will reduce or transform controller workload and enable the required increases in airspace capacity. In an effort to understand the potential workload implications of introducing advisory conflict-detection and resolution tools, this thesis provides a detailed study of the conflict event process and the implementation of conflict-detection and resolution algorithms. Specifically, the research presented here examines a metric of controller taskload: how many resolution commands an air traffic controller issues under the guidance of a conflict-detection and resolution decision-support tool. The goal
Airport Traffic Conflict Detection and Resolution Algorithm Evaluation
Jones, Denise R.; Chartrand, Ryan C.; Wilson, Sara R.; Commo, Sean A.; Ballard, Kathryn M.; Otero, Sharon D.; Barker, Glover D.
2016-01-01
Two conflict detection and resolution (CD&R) algorithms for the terminal maneuvering area (TMA) were evaluated in a fast-time batch simulation study at the National Aeronautics and Space Administration (NASA) Langley Research Center. One CD&R algorithm, developed at NASA, was designed to enhance surface situation awareness and provide cockpit alerts of potential conflicts during runway, taxi, and low altitude air-to-air operations. The second algorithm, Enhanced Traffic Situation Awareness on the Airport Surface with Indications and Alerts (SURF IA), was designed to increase flight crew awareness of the runway environment and facilitate an appropriate and timely response to potential conflict situations. The purpose of the study was to evaluate the performance of the aircraft-based CD&R algorithms during various runway, taxiway, and low altitude scenarios, multiple levels of CD&R system equipage, and various levels of horizontal position accuracy. Algorithm performance was assessed through various metrics including the collision rate, nuisance and missed alert rate, and alert toggling rate. The data suggests that, in general, alert toggling, nuisance and missed alerts, and unnecessary maneuvering occurred more frequently as the position accuracy was reduced. Collision avoidance was more effective when all of the aircraft were equipped with CD&R and maneuvered to avoid a collision after an alert was issued. In order to reduce the number of unwanted (nuisance) alerts when taxiing across a runway, a buffer is needed between the hold line and the alerting zone so alerts are not generated when an aircraft is behind the hold line. All of the results support RTCA horizontal position accuracy requirements for performing a CD&R function to reduce the likelihood and severity of runway incursions and collisions.
A Super-resolution Reconstruction Algorithm for Surveillance Video
Directory of Open Access Journals (Sweden)
Jian Shao
2017-01-01
Recent technological developments have resulted in surveillance video becoming a primary method of preserving public security. Many city crimes are observed in surveillance video, and the most abundant evidence collected by the police is also acquired through surveillance video sources. Surveillance video footage offers very strong support for solving criminal cases; therefore, creating an effective policy and applying useful methods to the retrieval of additional evidence is becoming increasingly important. However, surveillance video has its failings, namely footage captured in low resolution (LR) and with bad visual quality. In this paper, we discuss the characteristics of surveillance video and combine manual feature registration, maximum a posteriori estimation, and projection onto convex sets to develop a super-resolution reconstruction method that improves the quality of surveillance video. With this method, we not only make optimal use of the information contained in the LR video images, but also preserve clear image edges and ensure the convergence of the algorithm. Finally, we make a suggestion on how to adjust the algorithm's adaptability by analyzing the prior information of the target image.
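As a rough illustration of the projection-onto-convex-sets component only, the sketch below projects an HR estimate onto the data-consistency set of each LR frame, assuming a simple block-averaging decimation and ignoring the registration, blur, and MAP prior used in the paper:

```python
import numpy as np

def pocs_refine(hr, lr_frames, scale=2, n_iter=10):
    for _ in range(n_iter):
        for lr in lr_frames:
            # simulate acquisition: block-average decimation of the HR estimate
            sim = hr.reshape(lr.shape[0], scale, lr.shape[1], scale).mean(axis=(1, 3))
            err = lr - sim                                   # violation of this frame's set
            hr = hr + np.kron(err, np.ones((scale, scale)))  # orthogonal projection back
    return hr
```

Adding the per-block error to every HR pixel in the block restores the block mean exactly, which is the orthogonal projection onto that frame's consistency set under this decimation model.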
Principle and Reconstruction Algorithm for Atomic-Resolution Holography
Matsushita, Tomohiro; Muro, Takayuki; Matsui, Fumihiko; Happo, Naohisa; Hosokawa, Shinya; Ohoyama, Kenji; Sato-Tomita, Ayana; Sasaki, Yuji C.; Hayashi, Kouichi
2018-06-01
Atomic-resolution holography makes it possible to obtain the three-dimensional (3D) structure around a target atomic site. Translational symmetry of the atomic arrangement of the sample is not necessary, and a 3D atomic image can be measured when the local structure of the target atomic site is oriented. Therefore, 3D local atomic structures such as dopants and adsorbates are observable. Here, atomic-resolution holography methods, comprising photoelectron holography, X-ray fluorescence holography, neutron holography, and their inverse modes, are treated. Although the measurement methods are different, they can be handled with a unified theory. The algorithm for reconstructing 3D atomic images from holograms plays an important role. Although Fourier transform-based methods have been proposed, they require multiple-energy holograms. In addition, they cannot be directly applied to photoelectron holography because of the phase shift problem. We have developed fitting-based methods for reconstruction from single-energy and photoelectron holograms. The developed methods are applicable to all types of atomic-resolution holography.
Extension of least squares spectral resolution algorithm to high-resolution lipidomics data
International Nuclear Information System (INIS)
Zeng, Ying-Xu; Mjøs, Svein Are; David, Fabrice P.A.; Schmid, Adrien W.
2016-01-01
Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools that handle the high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including the data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high and low resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis. - Highlights: • A flexible strategy for analyzing MS and LC-MS data of lipid molecules is proposed. • Isotope distribution spectra of theoretically possible compounds were generated. • High resolution MS and LC-MS data were resolved by least squares spectral resolution. • The method proposed compounds that are likely to occur in the analyzed samples. • The proposed compounds matched results from manual interpretation of fragment spectra.
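The core resolution step can be illustrated with a toy library of theoretical isotope patterns and a nonnegative least squares solve; the matrix values below are made up for illustration and do not come from the paper:

```python
# Observed spectra modeled as nonnegative combinations of theoretical
# isotope-distribution spectra, resolved by least squares.
import numpy as np
from scipy.optimize import nnls

# columns = theoretical isotope patterns of two candidate lipid species
library = np.array([[1.00, 0.00],
                    [0.55, 1.00],
                    [0.17, 0.48],
                    [0.03, 0.12]])
observed = np.array([0.98, 1.60, 0.83, 0.27])    # measured intensity vector

amounts, residual = nnls(library, observed)      # abundance of each species
print(amounts, residual)
```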
Denotational Aspects of Untyped Normalization by Evaluation
DEFF Research Database (Denmark)
Filinski, Andrzej; Rohde, Henning Korsholm
2005-01-01
of soundness (the output term, if any, is in normal form and ß-equivalent to the input term); identification (ß-equivalent terms are mapped to the same result); and completeness (the function is defined for all terms that do have normal forms). We also show how the semantic construction enables a simple yet...... formal correctness proof for the normalization algorithm, expressed as a functional program in an ML-like, call-by-value language. Finally, we generalize the construction to produce an infinitary variant of normal forms, namely Böhm trees. We show that the three-part characterization of correctness...
SURF IA Conflict Detection and Resolution Algorithm Evaluation
Jones, Denise R.; Chartrand, Ryan C.; Wilson, Sara R.; Commo, Sean A.; Barker, Glover D.
2012-01-01
The Enhanced Traffic Situational Awareness on the Airport Surface with Indications and Alerts (SURF IA) algorithm was evaluated in a fast-time batch simulation study at the National Aeronautics and Space Administration (NASA) Langley Research Center. SURF IA is designed to increase flight crew situation awareness of the runway environment and facilitate an appropriate and timely response to potential conflict situations. The purpose of the study was to evaluate the performance of the SURF IA algorithm under various runway scenarios, multiple levels of conflict detection and resolution (CD&R) system equipage, and various levels of horizontal position accuracy. This paper gives an overview of the SURF IA concept, simulation study, and results. Runway incursions are a serious aviation safety hazard. As such, the FAA is committed to reducing the severity, number, and rate of runway incursions by implementing a combination of guidance, education, outreach, training, technology, infrastructure, and risk identification and mitigation initiatives [1]. Progress has been made in reducing the number of serious incursions - from a high of 67 in Fiscal Year (FY) 2000 to 6 in FY2010. However, the rate of all incursions has risen steadily over recent years - from a rate of 12.3 incursions per million operations in FY2005 to a rate of 18.9 incursions per million operations in FY2010 [1, 2]. The National Transportation Safety Board (NTSB) also considers runway incursions to be a serious aviation safety hazard, listing runway incursion prevention as one of their most wanted transportation safety improvements [3]. The NTSB recommends that immediate warning of probable collisions/incursions be given directly to flight crews in the cockpit [4].
Airport Traffic Conflict Detection and Resolution Algorithm Evaluation
Jones, Denise R.; Chartrand, Ryan C.; Wilson, Sara R.; Commo, Sean A.; Otero, Sharon D.; Barker, Glover D.
2012-01-01
A conflict detection and resolution (CD&R) concept for the terminal maneuvering area (TMA) was evaluated in a fast-time batch simulation study at the National Aeronautics and Space Administration (NASA) Langley Research Center. The CD&R concept is being designed to enhance surface situation awareness and provide cockpit alerts of potential conflicts during runway, taxi, and low altitude air-to-air operations. The purpose of the study was to evaluate the performance of aircraft-based CD&R algorithms in the TMA, as a function of surveillance accuracy. This paper gives an overview of the CD&R concept, simulation study, and results. The Next Generation Air Transportation System (NextGen) concept for the year 2025 and beyond envisions the movement of large numbers of people and goods in a safe, efficient, and reliable manner [1]. NextGen will remove many of the constraints in the current air transportation system, support a wider range of operations, and provide an overall system capacity up to three times that of current operating levels. Emerging NextGen operational concepts [2], such as four-dimensional trajectory based airborne and surface operations, equivalent visual operations, and super density arrival and departure operations, require a different approach to air traffic management and as a result, a dramatic shift in the tasks, roles, and responsibilities for the flight deck and air traffic control (ATC) to ensure a safe, sustainable air transportation system.
International Nuclear Information System (INIS)
Kacarska, Marija; Loskovska, Suzana
2002-01-01
In this paper a comparative analysis of different EIT algorithms is presented. The analysis considers the spatial and temporal resolution of the images obtained by several different algorithms, including the dependence of spatial resolution on the data acquisition method. The results show that conventional applied-current EIT is more powerful than induced-current EIT. (Author)
The Effect of Swarming on a Voltage Potential-Based Conflict Resolution Algorithm
Maas, J.B.; Sunil, E.; Ellerbroek, J.; Hoekstra, J.M.; Tra, M.A.P.
2016-01-01
Several conflict resolution algorithms for airborne self-separation rely on principles derived from the repulsive forces that exist between similarly charged particles. This research investigates whether the performance of the Modified Voltage Potential algorithm, which is based on this principle,
International Nuclear Information System (INIS)
Oh, Yu Whan; Kim, Jung Kyuk; Suh, Won Hyuck
1994-01-01
To date, the high spatial frequency algorithm (HSFA), which reduces image smoothing and increases spatial resolution, has been used for the evaluation of parenchymal lung diseases in thin-section high-resolution CT. In this study, we compared the ultrahigh spatial frequency algorithm (UHSFA) with the high spatial frequency algorithm in the assessment of thin-section images of the lung parenchyma. Three radiologists compared the UHSFA and HSFA on identical CT images in a line-pair resolution phantom, one lung specimen, 2 patients with normal lungs and 18 patients with abnormal lung parenchyma. Scanning of a line-pair resolution phantom demonstrated no difference in resolution between the two techniques, but it showed that the outer lines of the line pairs with maximal resolution looked thicker on UHSFA than on HSFA. Lung parenchymal detail with UHSFA was judged equal or superior to HSFA in 95% of images. Lung parenchymal sharpness was improved with UHSFA in all images. Although UHSFA resulted in an increase in visible noise, observers did not find that image noise interfered with image interpretation. The visual CT attenuation of normal lung parenchyma was minimally increased in images with HSFA. The overall visual preference of the images reconstructed with UHSFA was considered equal to or greater than that of those reconstructed with HSFA in 78% of images. The ultrahigh spatial frequency algorithm improved the overall visual quality of the images in pulmonary parenchymal high-resolution CT.
Scalable Algorithms for Large High-Resolution Terrain Data
DEFF Research Database (Denmark)
Mølhave, Thomas; Agarwal, Pankaj K.; Arge, Lars Allan
2010-01-01
In this paper we demonstrate that the technology required to perform typical GIS computations on very large high-resolution terrain models has matured enough to be ready for use by practitioners. We also demonstrate the impact that high-resolution data has on common problems. To our knowledge, so...
DENOTATIVE ORIGINS OF ABSTRACT IMAGES IN LINGUISTIC EXPERIMENT
Directory of Open Access Journals (Sweden)
Elina, E.
2017-03-01
The article discusses the refusal of denotation (the subject) as the basic principle of abstract images, and the semiotic problems arising in connection with this principle: how can the contradiction between the non-objectivity and the iconic nature of the image be resolved? Is it correct, in the absence of denotation, to regard an abstract representation as a single-level entity? It is proposed to address these questions with the help of a psycholinguistic experiment in which the verbal interpretation of abstract images by both experienced and "naive" recipients demonstrates the objectivity of perception of denotative "traces" and the presence of a denotative invariant in an abstract form.
Nuclear denotation: a topic for global public health concern
International Nuclear Information System (INIS)
Wiwanitkit, Viroj
2011-01-01
In mid-March 2011, a major tsunami struck Japan and caused serious destruction. In addition to the destroyed infrastructure, disruption of nuclear plants occurred, and this is the origin of the big problem of nuclear denotation that is of present concern. Nuclear denotation is an interesting new problem that affects a large part of the world population. The situation is new and requires our attention at a global level. In this article, the author summarizes and discusses this important topic.
A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video
Directory of Open Access Journals (Sweden)
Zhang Liangpei
2007-01-01
Super-resolution (SR) reconstruction is capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, total variation (TV) regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem in SR. The proposed algorithm has been tested in several cases of motion and degradation. It is also compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate the effectiveness of the proposed algorithm.
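A minimal sketch of one gradient step for the underlying TV-regularized model min ||DHx - y||^2 + lambda * TV(x), with an assumed Gaussian blur H and decimation D; the paper instead solves the Euler-Lagrange equations by fixed-point iteration with preconditioning:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tv_grad(x, eps=1e-3):
    gx = np.gradient(x, axis=1)
    gy = np.gradient(x, axis=0)
    mag = np.sqrt(gx**2 + gy**2 + eps)           # smoothed gradient magnitude
    # negative divergence of the normalized gradient field
    return -(np.gradient(gx / mag, axis=1) + np.gradient(gy / mag, axis=0))

def sr_step(x, y, scale, sigma, lam=0.05, step=0.5):
    sim = gaussian_filter(x, sigma)[::scale, ::scale]   # D H x
    resid = np.zeros_like(x)
    resid[::scale, ::scale] = sim - y                   # D^T (D H x - y)
    data_grad = gaussian_filter(resid, sigma)           # H^T D^T (D H x - y)
    return x - step * (data_grad + lam * tv_grad(x))
```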
Othman, Khairulnizam; Ahmad, Afandi
2016-11-01
In this research we explore the application of newly denoted normalization techniques in an advanced fast fuzzy c-means algorithm for the problem of segmenting different breast tissue regions in mammograms. The goal of the segmentation algorithm is to determine whether the new denoted fuzzy c-means algorithm can separate the different densities of the different breast patterns. The density segmentation is applied with multi-selection of seed labels to provide the hard constraint, where the seed labels are user defined. The new denoted fuzzy c-means has been explored on images of various imaging modalities, but not yet on large-format digital mammograms. Therefore, this project is mainly focused on using the normalization techniques employed in fuzzy c-means to perform segmentation and increase the visibility of different breast densities in mammography images. Segmentation of the mammogram into different mammographic densities is useful for risk assessment and quantitative evaluation of density changes. Our proposed methodology for the segmentation of mammograms into different density-based categories has been tested on the MIAS database and the Trueta database.
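A minimal sketch of the fuzzy c-means clustering that underlies this kind of density segmentation, omitting the seed-based hard constraints and the normalization step described above; all parameters are illustrative:

```python
import numpy as np

def fcm(x, c, m=2.0, n_iter=50, seed=0):
    """Soft-cluster pixel intensities into c density classes."""
    rng = np.random.default_rng(seed)
    x = x.reshape(-1, 1).astype(float)               # pixel feature column
    v = rng.choice(x[:, 0], size=c).reshape(-1, 1)   # initial class centers
    for _ in range(n_iter):
        d = np.abs(x - v.T) + 1e-9                   # (n_pixels, c) distances
        u = 1.0 / (d ** (2.0 / (m - 1.0)))           # unnormalized memberships
        u /= u.sum(axis=1, keepdims=True)            # soft membership matrix
        um = u ** m
        v = (um.T @ x) / um.T.sum(axis=1, keepdims=True)  # update centers
    return u, v                                      # memberships, class centers
```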
A Novel Method to Implement the Matrix Pencil Super Resolution Algorithm for Indoor Positioning
Directory of Open Access Journals (Sweden)
Tariq Jamil Saifullah Khanzada
2011-10-01
This article presents the estimation results for algorithms implemented to estimate delays and distances for an indoor positioning system. Data sets for the transmitted and received signals were captured in typical outdoor and indoor areas. Different state-of-the-art and super-resolution techniques are applied to obtain optimal estimates of the delays and distances between the transmitted and received signals, and a novel method for the Matrix Pencil algorithm is devised. The algorithms perform variably in different scenarios of transmitter and receiver positions. Two scenarios are examined: in the single-antenna scenario, super-resolution techniques such as ESPRIT (Estimation of Signal Parameters via Rotational Invariance Technique) and the Matrix Pencil algorithm give optimal performance compared to the conventional techniques. In the two-antenna scenario, Root-MUSIC and the Matrix Pencil algorithm performed better than the other algorithms for distance estimation; however, the accuracy of all the algorithms is worse than in the single-antenna scenario. In all cases our devised Matrix Pencil algorithm achieved the best estimation results.
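A compact sketch of the generic Matrix Pencil method for delay estimation from frequency-domain samples h[k] = sum_i a_i * exp(-j*2*pi*k*df*tau_i); the pencil parameter and model order are illustrative, and this is the textbook formulation rather than the authors' devised variant:

```python
import numpy as np

def matrix_pencil_delays(h, df, n_paths, L=None):
    N = len(h)
    L = L or N // 2                                       # pencil parameter
    Y = np.array([h[i:i + L + 1] for i in range(N - L)])  # Hankel data matrix
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)
    V = Vh.conj().T[:, :n_paths]                          # signal subspace
    V1, V2 = V[:-1, :], V[1:, :]                          # shifted submatrices
    z = np.linalg.eigvals(np.linalg.pinv(V1) @ V2)        # poles exp(-j2*pi*df*tau)
    return np.sort(-np.angle(z) / (2 * np.pi * df))       # estimated delays
```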
A New Block Processing Algorithm of LLL for Fast High-dimension Ambiguity Resolution
Directory of Open Access Journals (Sweden)
LIU Wanke
2016-02-01
Due to the high dimension and precision of the ambiguity vector under GNSS observations of multiple frequencies and systems, a major limit on the computational efficiency of ambiguity resolution is the long reduction time of the conventional LLL algorithm. To address this problem, a new block processing LLL algorithm is proposed, based on an analysis of the relationship between the reduction time and the dimension and precision of the ambiguity. The new algorithm shortens the reduction time, and thereby improves the computational efficiency of ambiguity resolution, by block processing the ambiguity variance-covariance matrix, which decreases the dimension of each single reduction matrix. The new algorithm is validated with two groups of measured data. The results show that its computational efficiency increased by 65.2% and 60.2%, respectively, compared with that of the LLL algorithm when a reasonable number of blocks is chosen.
Multi-resolution inversion algorithm for the attenuated radon transform
Barbano, Paolo Emilio
2011-09-01
We present a FAST implementation of the Inverse Attenuated Radon Transform which incorporates accurate collimator response, as well as artifact rejection due to statistical noise and data corruption. This new reconstruction procedure is performed by combining a memory-efficient implementation of the analytical inversion formula (AIF [1], [2]) with a wavelet-based version of a recently discovered regularization technique [3]. The paper introduces all the main aspects of the new AIF, as well as numerical experiments on real and simulated data. These display a substantial improvement in reconstruction quality when compared to linear or iterative algorithms. © 2011 IEEE.
An Example-Based Super-Resolution Algorithm for Selfie Images
Directory of Open Access Journals (Sweden)
Jino Hans William
2016-01-01
A selfie is typically a self-portrait captured using the front camera of a smartphone. Most state-of-the-art smartphones are equipped with a high-resolution (HR) rear camera and a low-resolution (LR) front camera. As selfies are captured by the front camera with limited pixel resolution, the fine details in them are explicitly missed. This paper aims to improve the resolution of selfies by exploiting the fine details in HR images captured by the rear camera using an example-based super-resolution (SR) algorithm. HR images captured by the rear camera carry significant fine details and are used as an exemplar to train an optimal matrix-value regression (MVR) operator. The MVR operator serves as an image-pair prior which learns the correspondence between the LR-HR patch-pairs and is effectively used to super-resolve LR selfie images. The proposed MVR algorithm avoids vectorization of image patch-pairs and preserves image-level information during both the learning and recovering processes. The proposed algorithm is evaluated for its efficiency and effectiveness, both qualitatively and quantitatively, against other state-of-the-art SR algorithms. The results validate that the proposed algorithm is efficient, as it requires less than 3 seconds to super-resolve an LR selfie, and effective, as it preserves sharp details without introducing counterfeit fine details.
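As a rough, vectorized stand-in for the exemplar-learning step (the paper's MVR operator specifically avoids vectorizing patches, which this sketch does not reproduce), a linear operator mapping LR patch columns to HR patch columns can be fit by ridge-regularized least squares:

```python
import numpy as np

def learn_operator(X, Y, lam=1e-3):
    """X: (d_lr, n) LR patch columns; Y: (d_hr, n) HR patch columns,
    both extracted from rear-camera exemplar images."""
    d = X.shape[0]
    W = Y @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(d))
    return W                                   # apply with: hr_patch = W @ lr_patch
```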
HWDA: A coherence recognition and resolution algorithm for hybrid web data aggregation
Guo, Shuhang; Wang, Jian; Wang, Tong
2017-09-01
Aiming at the object conflict recognition and resolution problem in hybrid distributed data stream aggregation, a distributed data stream object coherence solution is proposed. First, a framework for object coherence conflict recognition and resolution, named HWDA, is defined. Second, an object coherence recognition technique is proposed based on formal-language description logic and the hierarchical dependency relationships between logic rules. Third, a conflict traversal recognition algorithm is proposed based on the defined dependency graph. Next, conflict resolution based on resolution pattern matching is presented, including the definition of the three types of conflict, the conflict resolution matching patterns, and an arbitration resolution method. Finally, experiments on two kinds of web test data sets validate the effectiveness of the conflict recognition and resolution technology of HWDA.
Denotational semantics of recursive types in synthetic guarded domain theory
DEFF Research Database (Denmark)
Møgelberg, Rasmus Ejlers; Paviotti, Marco
2016-01-01
typed lambda calculus with fixed points). This model was intensional in that it could distinguish between computations computing the same result using a different number of fixed point unfoldings. In this work we show how also programming languages with recursive types can be given denotational...
High-resolution finite-difference algorithms for conservation laws
International Nuclear Information System (INIS)
Towers, J.D.
1987-01-01
A new class of Total Variation Diminishing (TVD) schemes for 2-dimensional scalar conservation laws is constructed using either flux-limited or slope-limited numerical fluxes. The schemes are proven to have formal second-order accuracy in regions where neither u_x nor u_y vanishes. A new class of high-resolution large-time-step TVD schemes is constructed by adding flux-limited correction terms to the first-order accurate large-time-step version of the Engquist-Osher scheme. The use of the transport-collapse operator in place of the exact solution operator for the construction of difference schemes is studied. The production of spurious extrema by difference schemes is studied, and a simple condition guaranteeing the nonproduction of spurious extrema is derived. A sufficient class of entropy inequalities for a conservation law with a flux having a single inflection point is presented. Finite-difference schemes satisfying a discrete version of each entropy inequality are only first-order accurate.
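A 1D analogue of the flux-limited construction, assuming linear advection u_t + a*u_x = 0 with a > 0, periodic boundaries, and a minmod limiter; this illustrates the TVD mechanism rather than the 2D schemes of the thesis:

```python
import numpy as np

def tvd_step(u, nu):
    """One update of u_t + a*u_x = 0 (a > 0); nu = a*dt/dx in (0, 1], periodic."""
    du = np.roll(u, -1) - u                        # forward difference u_{i+1}-u_i
    prev = u - np.roll(u, 1)                       # backward difference u_i-u_{i-1}
    r = np.where(du != 0, prev / np.where(du == 0, 1.0, du), 0.0)
    phi = np.maximum(0.0, np.minimum(1.0, r))      # minmod limiter
    F = u + 0.5 * (1.0 - nu) * phi * du            # limited numerical flux (scaled by 1/a)
    return u - nu * (F - np.roll(F, 1))            # conservative update
```

With phi = 0 the scheme reduces to first-order upwind; with phi = 1 it is Lax-Wendroff, so the limiter blends second-order accuracy with the nonproduction of spurious extrema.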
Generalized Nonlinear Chirp Scaling Algorithm for High-Resolution Highly Squint SAR Imaging.
Yi, Tianzhu; He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing
2017-11-07
This paper presents a modified approach for high-resolution, highly squint synthetic aperture radar (SAR) data processing. Several nonlinear chirp scaling (NLCS) algorithms have been proposed to solve the azimuth variance of the frequency modulation rates that is caused by the linear range walk correction (LRWC). However, the azimuth depth of focusing (ADOF) is not handled well by these algorithms. The generalized nonlinear chirp scaling (GNLCS) algorithm proposed in this paper uses the method of series reversion (MSR) to improve the ADOF and focusing precision. It also introduces a high order processing kernel to avoid range block processing. Simulation results show that the GNLCS algorithm can enlarge the ADOF and improve the focusing precision for high-resolution highly squint SAR data.
A Novel Range Compression Algorithm for Resolution Enhancement in GNSS-SARs
Directory of Open Access Journals (Sweden)
Yu Zheng
2017-06-01
In this paper, a novel range compression algorithm for enhancing the range resolution of a passive Global Navigation Satellite System-based Synthetic Aperture Radar (GNSS-SAR) is proposed. In the proposed algorithm, within each azimuth bin, range compression is first carried out by correlating the reflected GNSS intermediate frequency (IF) signal with a synchronized direct GNSS base-band signal in the range domain. Thereafter, spectrum equalization is applied to the compressed results to suppress side lobes and obtain the final range-compressed signal. Both theoretical analysis and simulation results demonstrate that significant range resolution improvement in GNSS-SAR images can be achieved by the proposed range compression algorithm, compared to the conventional range compression algorithm.
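A minimal sketch of the two-stage idea under stated assumptions: FFT-based correlation of the demodulated reflected signal with the direct base-band replica, followed by an inverse-filter style spectrum equalization (the paper's exact equalization window is not given in the abstract, so a regularized whitening term stands in for it):

```python
import numpy as np

def range_compress(reflected_if, direct_bb, fs, f_if):
    n = np.arange(len(reflected_if))
    baseband = reflected_if * np.exp(-2j * np.pi * f_if * n / fs)  # IF -> base band
    S = np.fft.fft(direct_bb)
    R = np.fft.fft(baseband) * np.conj(S)        # correlation (matched filtering)
    mag = np.abs(S) ** 2
    eq = R / (mag + 1e-3 * mag.max())            # spectrum equalization (regularized)
    return np.fft.ifft(eq)                       # range-compressed profile
```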
Improving the resolution for Lamb wave testing via a smoothed Capon algorithm
Cao, Xuwei; Zeng, Liang; Lin, Jing; Hua, Jiadong
2018-04-01
Lamb wave testing is promising for damage detection and evaluation in large-area structures. The dispersion of Lamb waves is often unavoidable, restricting testing resolution and making the signal hard to interpret. A smoothed Capon algorithm is proposed in this paper to estimate the accurate path length of each wave packet. In the algorithm, frequency-domain whitening is first used to obtain the transfer function in the bandwidth of the excitation pulse. Subsequently, wavenumber-domain smoothing is employed to reduce the correlation between wave packets. Finally, the path lengths are determined by distance-domain searching based on the Capon algorithm. Simulations are used to optimize the number of smoothing iterations. Experiments are performed on an aluminum plate containing two simulated defects. The results demonstrate that spatial resolution is improved significantly by the proposed algorithm.
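A minimal sketch of the final Capon search, assuming whitened wavenumber-domain snapshots have already been produced by the smoothing step and that a dispersion curve supplies the wavenumbers; names and the diagonal-loading regularization are illustrative:

```python
import numpy as np

def capon_distance_spectrum(snapshots, k, distances):
    """snapshots: (m, n_snap) smoothed wavenumber-domain segments;
    k: wavenumbers of the m bins (rad/m, from the dispersion curve)."""
    m = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # covariance
    Rinv = np.linalg.inv(R + 1e-6 * np.trace(R).real / m * np.eye(m))
    P = []
    for d in distances:
        a = np.exp(-1j * k * d)                  # steering vector for path length d
        P.append(1.0 / np.real(a.conj() @ Rinv @ a))   # Capon spectrum P(d)
    return np.array(P)                           # peaks mark packet path lengths
```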
High resolution reconstruction of PET images using the iterative OSEM algorithm
International Nuclear Information System (INIS)
Doll, J.; Bublitz, O.; Werling, A.; Haberkorn, U.; Semmler, W.; Adam, L.E.; Pennsylvania Univ., Philadelphia, PA; Brix, G.
2004-01-01
Aim: Improvement of the spatial resolution in positron emission tomography (PET) by incorporating the image-forming characteristics of the scanner into the process of iterative image reconstruction. Methods: All measurements were performed on the whole-body PET system ECAT EXACT HR+ in 3D mode. The acquired 3D sinograms were sorted into 2D sinograms by means of the Fourier rebinning (FORE) algorithm, which allows the use of 2D algorithms for image reconstruction. The scanner characteristics were described by a spatially variant line-spread function (LSF), which was determined from activated copper-64 line sources. This information was used to model the physical degradation processes of PET measurements during the course of 2D image reconstruction with the iterative OSEM algorithm. To assess the performance of the high-resolution OSEM algorithm, phantom measurements performed on a cylinder phantom, the hot-spot Jaszczak phantom, and the 3D Hoffman brain phantom as well as different patient examinations were analyzed. Results: The scanner characteristics could be described by a Gaussian-shaped LSF with a full-width at half-maximum increasing from 4.8 mm at the center to 5.5 mm at a radial distance of 10.5 cm. Incorporation of the LSF into the iteration formula resulted in a markedly improved resolution of 3.0 and 3.5 mm, respectively. The evaluation of phantom and patient studies showed that the high-resolution OSEM algorithm led not only to a better contrast resolution in the reconstructed activity distributions but also to an improved accuracy in the quantification of activity concentrations in small structures, without amplifying image noise or introducing image artifacts. Conclusion: The spatial and contrast resolution of PET scans can be markedly improved by the presented image restoration algorithm, which is of special interest for the examination of both patients with brain disorders and small animals. (orig.)
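A minimal sketch of one MLEM/OSEM-style update with a Gaussian LSF folded into the system model, for a 1D toy geometry with a spatially invariant kernel (the paper's LSF is spatially variant); the system matrix and LSF width are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def mlem_update(x, y, A, fwhm_pix):
    """x: current image; y: measured projections; A: (n_bins, n_pixels) system matrix."""
    sigma = fwhm_pix / 2.355                        # FWHM -> Gaussian sigma
    blur = lambda v: gaussian_filter1d(v, sigma)    # LSF (resolution) model
    fp = A @ blur(x)                                # resolution-modelled forward projection
    ratio = y / np.maximum(fp, 1e-12)
    sens = blur(A.T @ np.ones_like(y))              # sensitivity image
    return x * blur(A.T @ ratio) / np.maximum(sens, 1e-12)
```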
Single image super resolution algorithm based on edge interpolation in NSCT domain
Zhang, Mengqun; Zhang, Wei; He, Xinyu
2017-11-01
In order to preserve texture and edge information and to improve the spatial resolution of a single frame, a super-resolution algorithm based on the nonsubsampled contourlet transform (NSCT) is proposed. The original low-resolution image is transformed by the NSCT, and the directional sub-band coefficients of the transform domain are obtained. According to the scale factor, the high-frequency sub-band coefficients are interpolated to the desired resolution with an edge-direction-based interpolation method. For high-frequency sub-band coefficients containing noise and weak targets, Bayesian shrinkage is used to calculate the threshold value; coefficients below the threshold are classified as noise or signal according to the correlation among sub-bands at the same scale, and denoised accordingly. An anisotropic diffusion filter is used to enhance weak targets in regions of low contrast between target and background. Finally, the low-frequency sub-band is interpolated to the desired resolution with the bilinear method and combined with the high-frequency sub-band coefficients after de-noising and small-target enhancement, and the inverse NSCT is applied to obtain the image at the desired resolution. To verify the effectiveness of the proposed algorithm, it was compared with several common image reconstruction methods on synthetic, motion-blurred and hyperspectral images. The experimental results show that, compared with traditional single-image super-resolution algorithms, the proposed algorithm obtains smooth edges and good texture features; the reconstructed image structure is well preserved and noise is suppressed to some extent.
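For the shrinkage step, a standard BayesShrink-style threshold (not necessarily the authors' exact variant) can be computed per sub-band as below; the MAD-based noise estimate and the soft-threshold pairing are common conventions assumed here for illustration.

```python
import numpy as np

def bayes_shrink_threshold(coeffs):
    """BayesShrink threshold for one high-frequency sub-band:
    t = sigma_noise**2 / sigma_signal, with a robust noise estimate."""
    sigma_n = np.median(np.abs(coeffs)) / 0.6745        # MAD noise estimate
    var_signal = max(np.mean(coeffs ** 2) - sigma_n ** 2, 1e-12)
    return sigma_n ** 2 / np.sqrt(var_signal)

def soft_threshold(coeffs, t):
    """Soft shrinkage used to suppress coefficients classified as noise."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```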
DEFF Research Database (Denmark)
Cappellin, C.; Pivnenko, Sergey; Jørgensen, E.
2013-01-01
This paper focuses on three important features of the 3D reconstruction algorithm of DIATOOL: the identification of improperly functioning and failed array elements, the obtainable spatial resolution of the reconstructed fields and currents, and the filtering of undesired radiation and scattering...
Super-resolution reconstruction of MR image with a novel residual learning network algorithm
Shi, Jun; Liu, Qingping; Wang, Chaofeng; Zhang, Qi; Ying, Shihui; Xu, Haoyu
2018-04-01
Spatial resolution is one of the key parameters of magnetic resonance imaging (MRI). The image super-resolution (SR) technique offers an alternative approach to improve the spatial resolution of MRI due to its simplicity. Convolutional neural networks (CNN)-based SR algorithms have achieved state-of-the-art performance, in which the global residual learning (GRL) strategy is now commonly used due to its effectiveness for learning image details for SR. However, the partial loss of image details usually happens in a very deep network due to the degradation problem. In this work, we propose a novel residual learning-based SR algorithm for MRI, which combines both multi-scale GRL and shallow network block-based local residual learning (LRL). The proposed LRL module works effectively in capturing high-frequency details by learning local residuals. One simulated MRI dataset and two real MRI datasets have been used to evaluate our algorithm. The experimental results show that the proposed SR algorithm achieves superior performance to all of the other compared CNN-based SR algorithms in this work.
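To make the two residual strategies concrete, the sketch below contrasts local residual learning (a skip around a shallow block) with global residual learning (the network predicts only the detail added to the interpolated input). This is a generic PyTorch illustration, not the authors' multi-scale architecture; channel counts and depths are placeholders.

```python
import torch
import torch.nn as nn

class LocalResidualBlock(nn.Module):
    """Shallow block with a local skip connection (LRL)."""
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)          # local residual learning

class ResidualSRNet(nn.Module):
    """Global + local residual learning for single-image SR (sketch)."""
    def __init__(self, ch=64, n_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(1, ch, 3, padding=1)
        self.blocks = nn.Sequential(*[LocalResidualBlock(ch) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, lr_upsampled):
        # Global residual learning: predict only the detail image and add
        # it back to the interpolated low-resolution input.
        return lr_upsampled + self.tail(self.blocks(self.head(lr_upsampled)))
```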
Huang, De-tian; Huang, Wei-qin; Huang, Hui; Zheng, Li-xin
2017-11-01
To exploit the prior knowledge of the image more effectively and restore more details of edges and structures, a novel sparse coding objective function is proposed by applying the principles of non-local similarity and manifold learning to a super-resolution algorithm based on sparse representation. First, a non-local similarity regularization term is constructed from similar image patches to preserve edge information. Then, a manifold learning regularization term is constructed using the locally linear embedding approach to enhance structural information. The experimental results validate that the proposed algorithm achieves a significant improvement over several super-resolution algorithms in terms of both subjective visual effect and objective evaluation indices.
Image Super-Resolution Algorithm Based on an Improved Sparse Autoencoder
Directory of Open Access Journals (Sweden)
Detian Huang
2018-01-01
Full Text Available Due to the limited resolution of the imaging system and the influence of scene changes and other factors, sometimes only low-resolution images can be acquired, which cannot satisfy the requirements of practical applications. To improve the quality of low-resolution images, a novel super-resolution algorithm based on an improved sparse autoencoder is proposed. First, in the training-set preprocessing stage, the high- and low-resolution image training sets are constructed using the high-frequency information of the training samples as the characterization, and the zero-phase component analysis (ZCA) whitening technique is then utilized to decorrelate the joint training set and reduce its redundancy. Second, a constructed sparse regularization term is added to the cost function of the traditional sparse autoencoder to further strengthen the sparseness constraint on the hidden layer. Finally, in the dictionary learning stage, the improved sparse autoencoder is adopted for unsupervised dictionary learning to improve the accuracy and stability of the dictionary. Experimental results validate that the proposed algorithm outperforms the existing algorithms in terms of both subjective visual perception and objective evaluation indices, including the peak signal-to-noise ratio and the structural similarity measure.
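The ZCA whitening step mentioned above can be written compactly as an eigen-decomposition of the patch covariance. The sketch below is a generic implementation under the assumption that patches arrive as row vectors; the names and `eps` are illustrative.

```python
import numpy as np

def zca_whiten(patches, eps=1e-5):
    """ZCA (zero-phase component analysis) whitening of patch vectors.

    patches : (n_samples, n_features) joint high/low-resolution patch matrix.
    Returns the decorrelated patches plus the whitening transform and mean.
    """
    mean = patches.mean(axis=0)
    X = patches - mean
    cov = X.T @ X / X.shape[0]
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Rotate to the eigenbasis, rescale, rotate back (zero phase).
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return X @ W, W, mean
```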
Low-Cost Super-Resolution Algorithms Implementation Over a HW/SW Video Compression Platform
Directory of Open Access Journals (Sweden)
Llopis Rafael Peset
2006-01-01
Full Text Available Two approaches are presented in this paper to improve the quality of digital images beyond the sensor resolution using super-resolution techniques: an iterative super-resolution (ISR) and a noniterative super-resolution (NISR) algorithm. The results show important improvements in image quality, assuming that sufficient sample data and a reasonable amount of aliasing are available in the input images. These super-resolution algorithms have been implemented on a co-design video compression platform developed by Philips Research, requiring minimal changes to the overall hardware architecture. In this way, a novel and feasible low-cost implementation has been obtained by using the resources found in a generic hybrid video encoder. Although a specific video codec platform has been used, the methodology presented in this paper is easily extendable to other video encoder architectures. Finally, a comparison in terms of memory, computational load, and image quality for both algorithms, as well as some general statements about the final impact of the sampling process on the quality of the super-resolved (SR) image, is also presented.
Energy Technology Data Exchange (ETDEWEB)
Fan, Chengguang [College of Mechatronic Engineering and Automation, National University of Defense Technology, Changsha 410073, PR China and Department of Mechanical Engineering, University of Bristol, Queen's Building, University Walk, Bristol BS8 1TR (United Kingdom); Drinkwater, Bruce W. [Department of Mechanical Engineering, University of Bristol, Queen's Building, University Walk, Bristol BS8 1TR (United Kingdom)
2014-02-18
In this paper the performance of the total focusing method is compared with the widely used time-reversal MUSIC super-resolution technique. The algorithms are tested with simulated and experimental ultrasonic array data, each containing different noise levels. The simulated time-domain signals allow the effects of array geometry, frequency, scatterer location, scatterer size, scatterer separation and random noise to be carefully controlled. The performance of the imaging algorithms is evaluated in terms of resolution and sensitivity to random noise. It is shown that for the low-noise situation, time-reversal MUSIC provides enhanced lateral resolution when compared to the total focusing method. However, for higher noise levels, the total focusing method remains robust, whilst the performance of time-reversal MUSIC is significantly degraded.
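For reference, the total focusing method is a delay-and-sum over full matrix capture data: every pixel is focused using the transmit-receive time of flight for each element pair. The sketch below is a minimal nearest-sample implementation; the array geometry, variable names, and the absence of interpolation or apodization are simplifying assumptions.

```python
import numpy as np

def tfm_image(fmc, t, coords, grid_x, grid_z, c):
    """Total focusing method (delay-and-sum) on full matrix capture data.

    fmc    : (n_el, n_el, n_t) array of tx/rx time traces.
    t      : (n_t,) sample times; coords : (n_el,) element x-positions.
    c      : wave speed in the medium.
    """
    n_el, dt = len(coords), t[1] - t[0]
    tx = np.arange(n_el)[:, None]
    rx = np.arange(n_el)[None, :]
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            tof = np.sqrt((coords - x) ** 2 + z ** 2) / c    # element-to-pixel time
            idx = ((tof[:, None] + tof[None, :] - t[0]) / dt).astype(int)
            idx = np.clip(idx, 0, len(t) - 1)                # nearest-sample lookup
            img[iz, ix] = abs(fmc[tx, rx, idx].sum())        # coherent sum over pairs
    return img
```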
International Nuclear Information System (INIS)
Fan, Chengguang; Drinkwater, Bruce W.
2014-01-01
In this paper the performance of the total focusing method is compared with the widely used time-reversal MUSIC super-resolution technique. The algorithms are tested with simulated and experimental ultrasonic array data, each containing different noise levels. The simulated time-domain signals allow the effects of array geometry, frequency, scatterer location, scatterer size, scatterer separation and random noise to be carefully controlled. The performance of the imaging algorithms is evaluated in terms of resolution and sensitivity to random noise. It is shown that for the low-noise situation, time-reversal MUSIC provides enhanced lateral resolution when compared to the total focusing method. However, for higher noise levels, the total focusing method remains robust, whilst the performance of time-reversal MUSIC is significantly degraded.
Experimental Performance of a Genetic Algorithm for Airborne Strategic Conflict Resolution
Karr, David A.; Vivona, Robert A.; Roscoe, David A.; DePascale, Stephen M.; Consiglio, Maria
2009-01-01
The Autonomous Operations Planner, a research prototype flight-deck decision support tool to enable airborne self-separation, uses a pattern-based genetic algorithm to resolve predicted conflicts between the ownship and traffic aircraft. Conflicts are resolved by modifying the active route within the ownship's flight management system according to a predefined set of maneuver pattern templates. The performance of this pattern-based genetic algorithm was evaluated in the context of batch-mode Monte Carlo simulations running over 3600 flight hours of autonomous aircraft in en-route airspace under conditions ranging from typical current traffic densities to several times that level. Encountering over 8900 conflicts during two simulation experiments, the genetic algorithm was able to resolve all but three conflicts, while maintaining a required time of arrival constraint for most aircraft. Actual elapsed running time for the algorithm was consistent with conflict resolution in real time. The paper presents details of the genetic algorithm's design, along with mathematical models of the algorithm's performance and observations regarding the effectiveness of using complementary maneuver patterns when multiple resolutions by the same aircraft were required.
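To make the pattern-based idea concrete, the sketch below evolves (template, parameter) pairs under a caller-supplied fitness function that would score conflict-freedom, route deviation, and arrival-time compliance. The template set, the crossover/mutation scheme, and all parameter values are illustrative assumptions rather than the tool's actual design.

```python
import random

def genetic_resolve(fitness, templates, pop_size=30, generations=50, p_mut=0.1):
    """Minimal pattern-based GA sketch: each individual is a maneuver
    template plus one continuous parameter (e.g., a heading offset in
    degrees). `fitness` scores an individual; higher is better.
    """
    pop = [(random.choice(templates), random.uniform(-40, 40))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                 # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a[0], 0.5 * (a[1] + b[1]))       # parameter crossover
            if random.random() < p_mut:               # mutation
                child = (random.choice(templates),
                         child[1] + random.gauss(0, 5))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```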
High-resolution studies of the structure of the solar atmosphere using a new imaging algorithm
Karovska, Margarita; Habbal, Shadia Rifai
1991-01-01
The results of the application of a new image restoration algorithm developed by Ayers and Dainty (1988) to the multiwavelength EUV/Skylab observations of the solar atmosphere are presented. The application of the algorithm makes it possible to reach a resolution better than 5 arcsec, and thus study the structure of the quiet sun on that spatial scale. The results show evidence for discrete looplike structures in the network boundary, 5-10 arcsec in size, at temperatures of 100,000 K.
ON SOME TERMS DENOTING CREW MEMBERS ON DUBROVNIK SHIPS
Directory of Open Access Journals (Sweden)
Ariana Violić-Koprivec
2015-01-01
Full Text Available The paper discusses selected terms denoting crew members on Dubrovnik ships throughout history. The titles of the most important crew members are analyzed based on a corpus of 18th-century documents, literary works, and technical literature. The goal is to determine which terms are typical of the Dubrovnik area, whether their meanings have become restricted or extended, and how they have disappeared or remained in use over the centuries. It is evident that the importance of individual crew members and their positions changed with time. Their responsibilities occasionally overlapped, and certain terms for their positions coexisted as synonyms, belonging either to standard or to local, i.e. colloquial, use. A comparative analysis has revealed some specific features of the Dubrovnik maritime terminology referring to the ship's crew. The terms škrivan, nokjer, nostromo, pilot, gvardijan and dispensjer are lexemes specific to this area. This is confirmed by their use in literary works.
A research of road centerline extraction algorithm from high resolution remote sensing images
Zhang, Yushan; Xu, Tingfa
2017-09-01
Satellite remote sensing has become one of the most effective methods for land-surface monitoring in recent years, owing to advantages such as a short revisit period, large coverage and rich information content. Road extraction is an important application of high-resolution remote sensing images, and an intelligent, automatic road extraction algorithm with high precision has great significance for transportation, road network updating and urban planning. Fuzzy c-means (FCM) clustering segmentation algorithms have been used for road extraction, but the traditional algorithms do not consider spatial information. An improved fuzzy c-means clustering algorithm combined with spatial information (SFCM) is proposed in this paper, which is shown to be effective for noisy image segmentation. First, the image is segmented using the SFCM. Second, the segmentation result is processed by mathematical morphology to remove spurious connected regions. Third, the road centerlines are extracted by morphological thinning and burr trimming. The average integrity of the centerline extraction algorithm is 97.98%, the average accuracy is 95.36% and the average quality is 93.59%. Experimental results show that the proposed method is effective for road centerline extraction.
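The thinning stage can be sketched with standard morphological tools: below, small objects are removed and the road mask is skeletonized (a burr-trimming pass that prunes short spurs would follow, omitted here). The function and parameter names are illustrative, not the paper's implementation.

```python
from skimage.morphology import skeletonize, remove_small_objects

def extract_centerline(road_mask, min_size=64):
    """Centerline extraction from a binary road mask (sketch of the
    morphological thinning stage; `min_size` is an assumed parameter)."""
    mask = remove_small_objects(road_mask.astype(bool), min_size=min_size)
    return skeletonize(mask)       # morphological thinning to 1-px centerlines
```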
Action Algebras and Model Algebras in Denotational Semantics
Guedes, Luiz Carlos Castro; Haeusler, Edward Hermann
This article describes some results concerning the conceptual separation of model dependent and language inherent aspects in a denotational semantics of a programming language. Before going into the technical explanation, the authors wish to relate a story that illustrates how correctly and precisely posed questions can influence the direction of research. By means of his questions, Professor Mosses aided the PhD research of one of the authors of this article and taught the other, who at the time was a novice supervisor, the real meaning of careful PhD supervision. The student’s research had been partially developed towards the implementation of programming languages through denotational semantics specification, and the student had developed a prototype [12] that compared relatively well to some industrial compilers of the PASCAL language. During a visit to the BRICS lab in Aarhus, the student’s supervisor gave Professor Mosses a draft of an article describing the prototype and its implementation experiments. The next day, Professor Mosses asked the supervisor, “Why is the generated code so efficient when compared to that generated by an industrial compiler?” and “You claim that the efficiency is simply a consequence of the Object-Orientation mechanisms used by the prototype programming language (C++); this should be better investigated. Pay more attention to the class of programs that might have this good comparison profile.” As a result of these aptly chosen questions and comments, the student and supervisor made great strides in the subsequent research; the advice provided by Professor Mosses made them perceive that the code generated for certain semantic domains was efficient because it mapped to the “right aspect” of the language semantics. (Certain functional types, used to represent mappings such as Stores and Environments, were pushed to the level of the object language (as in gcc). This had the side-effect of generating code for arrays in
Super resolution reconstruction of μ-CT image of rock sample using neighbour embedding algorithm
Wang, Yuzhu; Rahman, Sheik S.; Arns, Christoph H.
2018-03-01
X-ray computed micro-tomography (μ-CT) is considered to be the most effective way to obtain the inner structure of a rock sample without destruction. However, its limited resolution hampers its ability to probe sub-micron structures, which are critical for flow transport in rock samples. In this study, we propose a methodology to improve the resolution of a μ-CT image using a neighbour embedding algorithm, where the low-frequency information is provided by the μ-CT image itself while the high-frequency information is supplemented by a high-resolution scanning electron microscopy (SEM) image. To obtain a prior for reconstruction, a large number of image patch pairs containing high- and low-resolution patches are extracted from a Gaussian image pyramid generated from the SEM image. These image patch pairs carry abundant information about the appearance of local porous structures at different resolutions. Relying on the assumption of self-similarity of the porous structure, this prior information can be used to supervise the reconstruction of the high-resolution μ-CT image effectively. The experimental results show that the proposed method achieves state-of-the-art performance.
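Neighbour embedding borrows from locally linear embedding: each low-resolution patch is reconstructed as an affine combination of its k nearest training patches, and the same weights are applied to the paired high-resolution patches. The sketch below shows the per-patch computation; the dictionary layout, `k`, and the regularizer are assumptions.

```python
import numpy as np

def ne_super_resolve(lr_patch, lr_dict, hr_dict, k=5, reg=1e-6):
    """Neighbour-embedding SR for one patch (LLE-style weights).

    lr_dict, hr_dict : paired (n_patches, d_lr) / (n_patches, d_hr)
    training patches, e.g. harvested from an SEM image pyramid.
    """
    d2 = ((lr_dict - lr_patch) ** 2).sum(axis=1)
    nn = np.argsort(d2)[:k]                    # k nearest LR neighbours
    G = lr_dict[nn] - lr_patch                 # local difference vectors
    C = G @ G.T + reg * np.eye(k)              # regularized local Gram matrix
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                               # affine reconstruction weights
    return w @ hr_dict[nn]                     # apply same weights to HR patches
```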
Shi, Junwei; Liu, Fei; Zhang, Guanglei; Luo, Jianwen; Bai, Jing
2014-04-01
Owing to the high degree of scattering of light through tissues, the ill-posedness of fluorescence molecular tomography (FMT) inverse problem causes relatively low spatial resolution in the reconstruction results. Unlike L2 regularization, L1 regularization can preserve the details and reduce the noise effectively. Reconstruction is obtained through a restarted L1 regularization-based nonlinear conjugate gradient (re-L1-NCG) algorithm, which has been proven to be able to increase the computational speed with low memory consumption. The algorithm consists of inner and outer iterations. In the inner iteration, L1-NCG is used to obtain the L1-regularized results. In the outer iteration, the restarted strategy is used to increase the convergence speed of L1-NCG. To demonstrate the performance of re-L1-NCG in terms of spatial resolution, simulation and physical phantom studies with fluorescent targets located with different edge-to-edge distances were carried out. The reconstruction results show that the re-L1-NCG algorithm has the ability to resolve targets with an edge-to-edge distance of 0.1 cm at a depth of 1.5 cm, which is a significant improvement for FMT.
Directory of Open Access Journals (Sweden)
K. Parvathi
2009-01-01
Full Text Available The watershed transformation is a useful morphological segmentation tool for a variety of grey-scale images. However, over-segmentation and under-segmentation have become the key problems for the conventional algorithm. In this paper, an efficient segmentation method for high-resolution remote sensing image analysis is presented. Wavelet analysis is one of the most popular techniques for detecting local intensity variation, and hence the wavelet transform is used to analyze the image. The wavelet transform is applied to the image, producing detail (horizontal, vertical, and diagonal) and approximation coefficients. The image gradient with selective regional minima is estimated with grey-scale morphology for the approximation image at a suitable resolution, and the watershed is then applied to the gradient image to avoid over-segmentation. The segmented image is projected up to high resolution using the inverse wavelet transform. Because the watershed segmentation is applied to a reduced-size image, it demands less computational time. We have applied our new approach to analyze remote sensing images. The algorithm was implemented in MATLAB. Experimental results demonstrate the method to be effective.
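The paper's implementation is in MATLAB; a rough Python equivalent of the core idea (watershed on the wavelet approximation image, then label upsampling) is sketched below. The percentile-based marker heuristic and upsampling by replication are stand-ins for the paper's choices.

```python
import numpy as np
import pywt
from scipy import ndimage
from skimage.segmentation import watershed

def wavelet_watershed(image, wavelet="db2"):
    """Watershed on the wavelet approximation sub-band (sketch).

    Segmenting the half-resolution approximation suppresses spurious
    local minima that cause over-segmentation; labels are then projected
    back up to full size.
    """
    approx, _ = pywt.dwt2(image, wavelet)                 # approximation sub-band
    gradient = ndimage.morphological_gradient(approx, size=3)
    # Markers from selective regional minima of the gradient image.
    markers, _ = ndimage.label(gradient < np.percentile(gradient, 10))
    labels = watershed(gradient, markers)
    return np.kron(labels, np.ones((2, 2), dtype=labels.dtype))  # project up
```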
Directory of Open Access Journals (Sweden)
Gustavo Sanchez
2012-01-01
Full Text Available This paper presents a new fast motion estimation (ME) algorithm targeting high-resolution digital videos, together with an efficient hardware architecture design. The new Dynamic Multipoint Diamond Search (DMPDS) algorithm is a fast algorithm that increases ME quality compared with other fast ME algorithms. The DMPDS achieves better digital video quality by reducing the occurrence of falls into local minima, especially in high-definition videos. The quality results show that the DMPDS is able to reach an average PSNR gain of 1.85 dB when compared with the well-known Diamond Search (DS) algorithm. When compared to the optimum results generated by the Full Search (FS) algorithm, the DMPDS shows a loss of only 1.03 dB in PSNR. On the other hand, the DMPDS reached a complexity reduction higher than 45 times when compared to FS. The quality gains over DS come with an expected increase in complexity: the DMPDS uses 6.4 times more calculations than DS. The DMPDS architecture was designed for high performance and low cost, targeting the processing of Quad Full High Definition (QFHD) videos in real time (30 frames per second). The architecture was described in VHDL and synthesized to Altera Stratix 4 and Xilinx Virtex 5 FPGAs. The synthesis results show that the architecture is able to achieve processing rates higher than 53 QFHD fps, meeting the real-time requirement. The DMPDS architecture achieved the highest processing rate when compared to related works in the literature. This high processing rate was obtained by designing an architecture with a high operating frequency and a low number of cycles needed to process each block.
International Nuclear Information System (INIS)
Gao Zhi; Shen Yi-Qing
2012-01-01
The high-resolution numerical perturbation (NP) algorithm is analyzed and tested using various convection-diffusion equations. The NP algorithm is constructed by splitting the second-order central difference schemes of both the convective and diffusion terms of the convection-diffusion equation into upstream and downstream parts; the perturbation reconstruction functions of the convective coefficient are then determined using a power series in the grid interval and by eliminating the truncation errors of the modified differential equation. An important property, upwind dominance, which is the basis for ensuring that the NP schemes are stable and essentially oscillation-free, is first presented and verified. Various numerical cases show that the NP schemes are efficient, robust, and more accurate than the original second-order central scheme.
Directory of Open Access Journals (Sweden)
Javier Marcello
2016-09-01
Full Text Available The precise mapping of vegetation covers in semi-arid areas is a complex task, as this type of environment consists of sparse vegetation mainly composed of small shrubs. The launch of high-resolution satellites, with additional spectral bands and the ability to alter the viewing angle, offers a useful technology for this objective. In this context, atmospheric correction is a fundamental step in the pre-processing of such remote sensing imagery and, consequently, different algorithms have been developed for this purpose over the years. They are commonly categorized as image-based methods or as more advanced physical models based on radiative transfer theory. Despite the relevance of this topic, few comparative studies covering several methods have been carried out using high-resolution data or specifically applied to vegetation covers. In this work, the performance of five representative atmospheric correction algorithms (DOS, QUAC, FLAASH, ATCOR and 6S) has been assessed, using high-resolution WorldView-2 imagery and field spectroradiometer data collected simultaneously, with the goal of identifying the most appropriate techniques. The study also included a detailed analysis of the influence of parameterization on the final results of the correction, the aerosol model and its optical thickness being important parameters to adjust properly. The effects of the corrections were studied on vegetation and soil sites belonging to different protected semi-arid ecosystems (high mountain and coastal areas). In summary, the superior performance of model-based algorithms, 6S in particular, has been demonstrated, achieving reflectance estimations very close to the in-situ measurements (RMSE between 2% and 3%). Finally, an example of the importance of atmospheric correction for vegetation estimation in these natural areas is presented, enabling the robust mapping of species and the analysis of multitemporal variations.
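Of the five algorithms compared, dark-object subtraction (DOS) is the simplest and can be sketched in a few lines: the darkest observed signal per band is taken as an estimate of atmospheric path radiance and removed. The percentile-based dark-object estimate below is one common variant, chosen here purely for illustration.

```python
import numpy as np

def dark_object_subtraction(radiance, percentile=0.01):
    """Dark-object subtraction (DOS), a basic image-based atmospheric
    correction: assume the darkest pixels should be near zero and
    attribute their residual signal to atmospheric path radiance.

    radiance : (rows, cols, bands) array of at-sensor values.
    """
    corrected = np.empty_like(radiance, dtype=float)
    for b in range(radiance.shape[-1]):                 # per spectral band
        dark = np.percentile(radiance[..., b], percentile)
        corrected[..., b] = np.clip(radiance[..., b] - dark, 0, None)
    return corrected
```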
International Nuclear Information System (INIS)
Roux, S.; Desbat, L.; Koenig, A.; Grangeat, P.
2005-01-01
In this paper we study a property of the filtering step of the multi-cycle reconstruction algorithms used in the field of cardiac CT. We show that the common filtering procedure is not optimal in the case of divergent geometry and slightly decreases the temporal resolution. We propose instead to use the filtering procedure related to the work of Noo et al. (F. Noo, M. Defrise, R. Clackdoyle, and H. Kudo, Image reconstruction from fan-beam projections on less than a short scan, Phys. Med. Biol., 47:2525-2546, July 2002) and show that this alternative reaches the optimal temporal resolution with the same computational effort. (N.C.)
Meerts, W.L.; Schmitt, M.
2006-01-01
This paper describes a numerical technique that has recently been developed to automatically assign and fit high-resolution spectra. The method makes use of genetic algorithms (GA). The current algorithm is compared with previously used analysing methods. The general features of the GA and its
Energy Technology Data Exchange (ETDEWEB)
Apfaltrer, Paul, E-mail: paul.apfaltrer@medma.uni-heidelberg.de [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Institute of Clinical Radiology and Nuclear Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim (Germany); Schoendube, Harald, E-mail: harald.schoendube@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Schoepf, U. Joseph, E-mail: schoepf@musc.edu [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Allmendinger, Thomas, E-mail: thomas.allmendinger@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Tricarico, Francesco, E-mail: francescotricarico82@gmail.com [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Department of Bioimaging and Radiological Sciences, Catholic University of the Sacred Heart, “A. Gemelli” Hospital, Largo A. Gemelli 8, Rome (Italy); Schindler, Andreas, E-mail: andreas.schindler@campus.lmu.de [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Vogt, Sebastian, E-mail: sebastian.vogt@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Sunnegårdh, Johan, E-mail: johan.sunnegardh@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); and others
2013-02-15
Objective: To evaluate the effect of a temporal resolution improvement method (TRIM) for cardiac CT on diagnostic image quality for coronary artery assessment. Materials and methods: The TRIM algorithm employs an iterative approach to reconstruct images from less than 180° of projections and uses a histogram constraint to prevent the occurrence of limited-angle artifacts. This algorithm was applied in 11 obese patients (7 men, 67.2 ± 9.8 years) who had undergone second-generation dual-source cardiac CT with 120 kV, 175–426 mAs, and 500 ms gantry rotation. All data were reconstructed with a temporal resolution of 250 ms using traditional filtered back projection (FBP) and of 200 ms using the TRIM algorithm. Contrast attenuation and contrast-to-noise ratio (CNR) were measured in the ascending aorta. The presence and severity of coronary motion artifacts was rated on a 4-point Likert scale. Results: All scans were considered of diagnostic quality. Mean BMI was 36 ± 3.6 kg/m². Average heart rate was 60 ± 9 bpm. Mean effective dose was 13.5 ± 4.6 mSv. When comparing FBP- and TRIM-reconstructed series, the attenuation within the ascending aorta (392 ± 70.7 vs. 396.8 ± 70.1 HU, p > 0.05) and the CNR (13.2 ± 3.2 vs. 11.7 ± 3.1, p > 0.05) were not significantly different. A total of 110 coronary segments were evaluated. All studies were deemed diagnostic; however, there was a significant (p < 0.05) difference in the severity score distribution of coronary motion artifacts between FBP (median = 2.5) and TRIM (median = 2.0) reconstructions. Conclusion: The algorithm evaluated here delivers diagnostic imaging quality of the coronary arteries despite 500 ms gantry rotation. Possible applications include improvement of cardiac imaging on slower gantry rotation systems or mitigation of the trade-off between temporal resolution and CNR in obese patients.
Directory of Open Access Journals (Sweden)
Andrej Bugajev
2018-01-01
Full Text Available In this article, the modelling of the judicial conflict-resolution process is considered from a construction investor's point of view. Such modelling is important for improving risk management for construction investors and supports sustainable city development by informing the rules that regulate the construction process. This raises the problem of evaluating different decisions and selecting the optimal one, followed by extraction of the profit distribution. First, an example of such a process is analysed and represented schematically. It is then formalised as a decision graph with cycles. We use some natural properties of the problem and provide an algorithm to convert this graph into a tree. We then propose an algorithm to evaluate profits for different scenarios, with time estimated by integrating an average daily-costs function. Finally, the optimisation problem is solved and the optimal investor strategy is obtained; this allows one to extract the construction project profit distribution, which can be used for further analysis by standard risk and other information-evaluation techniques. The overall algorithm complexity is analysed, a computational experiment is performed and conclusions are formulated.
Directory of Open Access Journals (Sweden)
Chong Fan
2017-03-01
Full Text Available A sub-block algorithm is usually applied in the super-resolution (SR) reconstruction of images because of limitations in computer memory. However, the sub-block SR images can hardly achieve seamless image mosaicking because of the uneven distribution of brightness and contrast among the sub-blocks. An improved weighted Wallis dodging algorithm is proposed, exploiting the fact that the SR reconstructed images are grey-level images of the same size with overlapping regions. This algorithm achieves consistency of image brightness and contrast. Meanwhile, a weighted adjustment sequence is presented to avoid the spatial propagation and accumulation of errors and the loss of image information caused by excessive computation. A seam-line elimination method distributes the residual dislocation at the seam line across the entire overlapping region, producing a smooth transition. Subsequently, the improved method is employed to remove the uneven illumination from 900 SR reconstructed images of ZY-3, and the overlapping image mosaic method is adopted to accomplish a seamless image mosaic based on the optimal seam line.
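The Wallis transform at the heart of dodging maps each sub-block toward target mean and standard deviation. A minimal unweighted form is sketched below (the paper's contribution, the weighted adjustment sequence across blocks, is omitted); the b/c parameter convention is a common one assumed here.

```python
import numpy as np

def wallis_adjust(block, target_mean, target_std, b=1.0, c=1.0):
    """Classic Wallis transform: bring one sub-block toward target
    brightness/contrast statistics.

    b : brightness-forcing factor (0..1); c : contrast-expansion factor.
    """
    m, s = block.mean(), block.std()
    gain = c * target_std / (c * s + (1 - c) * target_std + 1e-12)
    offset = b * target_mean + (1 - b) * m
    return (block - m) * gain + offset
```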
A Denotational Account of Untyped Normalization by Evaluation
DEFF Research Database (Denmark)
Filinski, Andrzej; Rohde, Henning Korsholm
2004-01-01
We show that the standard normalization-by-evaluation construction for the simply-typed λβη-calculus has a natural counterpart for the untyped λβ-calculus, with the central type-indexed logical relation replaced by a recursively defined invariant relation, in the style of Pitts. In fact, the cons...... proof for the normalization algorithm, expressed as a functional program in an ML-like call-by-value language. A version of this article with detailed proofs is available as a technical report [5].
MO-FG-204-05: Evaluation of a Novel Algorithm for Improved 4DCT Resolution
Energy Technology Data Exchange (ETDEWEB)
Glide-Hurst, C; Briceno, J; Chetty, I. J. [Henry Ford Health System, Detroit, MI (United States); Klahr, P [Philips Healthcare, Highland Heights, OH (United States)
2015-06-15
Purpose: Accurate tumor motion characterization is critical for increasing the therapeutic ratio of radiation therapy. To accommodate the divergent fan-beam geometry of the scanner, the current 4D-CT algorithm utilizes a larger temporal window to ensure that pixel values are valid throughout the entire FOV. To minimize the impact on temporal resolution, a cos² weighting is employed. We propose a novel exponential weighting (“exponential”) 4DCT reconstruction algorithm that has a sharper slope and provides better temporal resolution. Methods: A respiratory motion platform translated a lung-mimicking Styrofoam slab with several high- and low-contrast inserts 2 cm in the superior-inferior direction. Breathing rates (10–15 bpm) and couch pitch (0.06–0.1 A.U.) were varied to assess the interplay between parameters. Multi-slice helical 4DCTs were acquired with 0.5 s gantry rotation and the data were reconstructed with cos² and exponential weighting. Mean and standard deviation were calculated via region-of-interest analysis. Intensity profiles were used to evaluate object boundaries. Retrospective raw-data reconstructions were performed with both 4DCT algorithms for 3 liver and lung cancer patients. Image quality (temporal blurring/sharpness) and subtraction images were compared between reconstructions. Results: In the phantom, profile analysis revealed that sharper boundaries were obtained with exponential reconstructions at transitional breathing phases (i.e. mid-inhale or mid-exhale). Reductions in full-width at half-maximum were ∼1 mm in the superior-inferior direction and appreciable sharpening could be observed in difference maps. This reduction also yielded a slight reduction in target volume between reconstruction algorithms. For the patient cases, coronal views showed less blurring at object boundaries and local intensity differences near the tumor and diaphragm with the exponential-weighted reconstruction. Conclusion: Exponential-weighted 4DCT offers potential
A high-resolution neutron spectra unfolding method using the Genetic Algorithm technique
Mukherjee, B
2002-01-01
The Bonner sphere spectrometer (BSS) is commonly used to determine the neutron spectra within various nuclear facilities. Sophisticated mathematical tools are used to unfold the neutron energy distribution from the output data of the BSS. This paper highlights a novel high-resolution neutron spectrum-unfolding method using the Genetic Algorithm (GA) technique. The GA imitates the biological evolution process prevailing in nature to solve complex optimisation problems. The GA method was utilised to evaluate the neutron energy distribution, average energy, fluence and equivalent dose rates at important work places of a DIDO class research reactor and a high-energy superconducting heavy ion cyclotron. The spectrometer was calibrated with a ²⁴¹Am/Be (α,n) neutron standard source. The results of the GA method agreed satisfactorily with the results obtained by using the well-known BUNKI neutron spectra unfolding code.
International Nuclear Information System (INIS)
Chatziioannou, A.; Qi, J.; Moore, A.; Annala, A.; Nguyen, K.; Leahy, R.M.; Cherry, S.R.
2000-01-01
We have evaluated the performance of two three-dimensional reconstruction algorithms with data acquired from microPET, a high-resolution tomograph dedicated to small animal imaging. The first was a linear filtered-backprojection algorithm (FBP) with reprojection of the missing data, and the second was a statistical maximum a posteriori probability algorithm (MAP). The two algorithms were evaluated in terms of their resolution performance, both in phantoms and in vivo. Sixty independent realizations of a phantom simulating the brain of a baby monkey were acquired, each containing 3 million counts. Each of these realizations was reconstructed independently with both algorithms. The ensemble of the sixty reconstructed realizations was used to estimate the standard deviation as a measure of the noise for each reconstruction algorithm. More detail was recovered in the MAP reconstruction without an increase in noise relative to FBP. Studies in a simple cylindrical compartment phantom demonstrated improved recovery of known activity ratios with MAP. Finally, in vivo studies also demonstrated a clear improvement in spatial resolution using the MAP algorithm. The quantitative accuracy of the MAP reconstruction was also evaluated by comparison with autoradiography and direct well counting of tissue samples, and was shown to be superior.
Directory of Open Access Journals (Sweden)
T. Kim
2012-09-01
Full Text Available Automated generation of digital elevation models (DEMs) from high-resolution satellite images (HRSIs) has been an active research topic for many years. However, stereo matching of HRSIs, in particular based on image-space search, is still difficult due to occlusions and building facades within them. Object-space matching schemes, proposed to overcome these problems, are often very time-consuming and sensitive to the dimensions of the voxels. In this paper, we tried a new least squares matching (LSM) algorithm that works in a 3D object space. The algorithm starts with an initial height value at one location in the object space. From this 3D point, the left and right image points are projected. The true height is calculated by iterative least squares estimation based on the grey-level differences between the left and right patches centred on the projected left and right points. We tested the 3D LSM on the WorldView images over 'Terrassa Sud' provided by the ISPRS WG I/4. We also compared the performance of the 3D LSM with correlation matching based on 2D image space and with correlation matching based on 3D object space (3D COM). The accuracy of the DEM from each method was analysed against the ground truth. Test results showed that 3D LSM offers more accurate DEMs than the conventional matching algorithms. Results also showed that 3D LSM is sensitive to the accuracy of the initial height value used to start the estimation. We therefore combined the 3D COM and the 3D LSM for accurate and robust DEM generation from HRSIs. The major contribution of this paper is that we proposed and validated that LSM can be applied in object space and that the combination of 3D correlation and 3D LSM can be a good solution for automated DEM generation from HRSIs.
International Nuclear Information System (INIS)
Qi, Zhipeng; Li, Xiu; Lu, Xushan; Zhang, Yingying; Yao, Weihua
2015-01-01
We introduce a new and potentially useful method for wave field inverse transformation and its application in transient electromagnetic method (TEM) 3D interpretation. The diffusive EM field is known to have a unique integral representation in terms of a fictitious wave field that satisfies a wave equation. The continuous imaging of TEM can be accomplished using the imaging methods of seismic interpretation after the diffusion equation is transformed into a fictitious wave equation. The interpretation method based on the imaging of a fictitious wave field could be used as a fast 3D inversion method. Moreover, the fictitious wave field possesses wave-field features that make it possible to apply wave-field interpretation methods in TEM to improve the prospecting resolution. Wave field transformation is a key issue in the migration imaging of a fictitious wave field. The equation governing the wave field transformation is a Fredholm integral equation of the first kind, which is a typical ill-posed equation. Additionally, TEM has a large dynamic time range, which further aggravates this ill-posed problem. The wave field transformation is implemented using a preconditioned regularized conjugate gradient method. The continuous imaging of the fictitious wave field is implemented using Kirchhoff integration. A synthetic aperture and deconvolution algorithm is also introduced to improve the interpretation resolution. We interpreted field data by the method proposed in this paper, and obtained a satisfying interpretation result. (paper)
ELM: an Algorithm to Estimate the Alpha Abundance from Low-resolution Spectra
Bu, Yude; Zhao, Gang; Pan, Jingchang; Bharat Kumar, Yerra
2016-01-01
We have investigated a novel methodology using the extreme learning machine (ELM) algorithm to determine the α abundance of stars. Applying two methods based on the ELM algorithm—ELM+spectra and ELM+Lick indices—to the stellar spectra from the ELODIE database, we measured the α abundance with a precision better than 0.065 dex. By applying these two methods to the spectra with different signal-to-noise ratios (S/Ns) and different resolutions, we found that ELM+spectra is more robust against degraded resolution and ELM+Lick indices is more robust against variation in S/N. To further validate the performance of ELM, we applied ELM+spectra and ELM+Lick indices to SDSS spectra and estimated α abundances with a precision around 0.10 dex, which is comparable to the results given by the SEGUE Stellar Parameter Pipeline. We further applied ELM to the spectra of stars in Galactic globular clusters (M15, M13, M71) and open clusters (NGC 2420, M67, NGC 6791), and results show good agreement with previous studies (within 1σ). A comparison of the ELM with other widely used methods including support vector machine, Gaussian process regression, artificial neural networks, and linear least-squares regression shows that ELM is efficient with computational resources and more accurate than other methods.
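As a generic illustration of the ELM family used here, the sketch below draws a random hidden layer and solves for the output weights with a single least-squares fit. The feature dimension, activation, and regression target (e.g., an [α/Fe] label vector) are placeholders, not the paper's exact configuration.

```python
import numpy as np

def elm_train(X, y, n_hidden=200, seed=0):
    """Extreme learning machine regression sketch: random hidden layer,
    then one least-squares solve for the output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # random nonlinear feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```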
ELM: AN ALGORITHM TO ESTIMATE THE ALPHA ABUNDANCE FROM LOW-RESOLUTION SPECTRA
International Nuclear Information System (INIS)
Bu, Yude; Zhao, Gang; Kumar, Yerra Bharat; Pan, Jingchang
2016-01-01
We have investigated a novel methodology using the extreme learning machine (ELM) algorithm to determine the α abundance of stars. Applying two methods based on the ELM algorithm—ELM+spectra and ELM+Lick indices—to the stellar spectra from the ELODIE database, we measured the α abundance with a precision better than 0.065 dex. By applying these two methods to the spectra with different signal-to-noise ratios (S/Ns) and different resolutions, we found that ELM+spectra is more robust against degraded resolution and ELM+Lick indices is more robust against variation in S/N. To further validate the performance of ELM, we applied ELM+spectra and ELM+Lick indices to SDSS spectra and estimated α abundances with a precision around 0.10 dex, which is comparable to the results given by the SEGUE Stellar Parameter Pipeline. We further applied ELM to the spectra of stars in Galactic globular clusters (M15, M13, M71) and open clusters (NGC 2420, M67, NGC 6791), and results show good agreement with previous studies (within 1σ). A comparison of the ELM with other widely used methods including support vector machine, Gaussian process regression, artificial neural networks, and linear least-squares regression shows that ELM is efficient with computational resources and more accurate than other methods.
ELM: AN ALGORITHM TO ESTIMATE THE ALPHA ABUNDANCE FROM LOW-RESOLUTION SPECTRA
Energy Technology Data Exchange (ETDEWEB)
Bu, Yude [School of Mathematics and Statistics, Shandong University, Weihai, 264209, Shandong (China); Zhao, Gang; Kumar, Yerra Bharat [Key Laboratory for Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing, 100012 (China); Pan, Jingchang, E-mail: ydbu@bao.ac.cn, E-mail: gzhao@nao.cas.cn [School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, 264209, Shandong (China)
2016-01-20
We have investigated a novel methodology using the extreme learning machine (ELM) algorithm to determine the α abundance of stars. Applying two methods based on the ELM algorithm—ELM+spectra and ELM+Lick indices—to the stellar spectra from the ELODIE database, we measured the α abundance with a precision better than 0.065 dex. By applying these two methods to the spectra with different signal-to-noise ratios (S/Ns) and different resolutions, we found that ELM+spectra is more robust against degraded resolution and ELM+Lick indices is more robust against variation in S/N. To further validate the performance of ELM, we applied ELM+spectra and ELM+Lick indices to SDSS spectra and estimated α abundances with a precision around 0.10 dex, which is comparable to the results given by the SEGUE Stellar Parameter Pipeline. We further applied ELM to the spectra of stars in Galactic globular clusters (M15, M13, M71) and open clusters (NGC 2420, M67, NGC 6791), and results show good agreement with previous studies (within 1σ). A comparison of the ELM with other widely used methods including support vector machine, Gaussian process regression, artificial neural networks, and linear least-squares regression shows that ELM is efficient with computational resources and more accurate than other methods.
Wu, Wei; Zhao, Dewei; Zhang, Huan
2015-12-01
Super-resolution image reconstruction is an effective method to improve image quality and has important research significance in the field of image processing. However, the choice of dictionary directly affects the efficiency of image reconstruction. Sparse representation theory is introduced into the nearest-neighbour selection problem, and a super-resolution image reconstruction algorithm based on a multi-class dictionary is analysed within the sparse-representation SR framework. This method avoids the redundancy of training a single over-complete dictionary, makes each sub-dictionary more representative, and replaces the traditional Euclidean distance computation to improve the quality of the whole reconstructed image. In addition, non-local self-similarity regularization is introduced to handle the ill-posed problem. Experimental results show that the algorithm achieves much better results than state-of-the-art algorithms in terms of both PSNR and visual perception.
Indian Academy of Sciences (India)
polynomial) division have been found in Vedic Mathematics, which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.
Ly, Canh
2004-08-01
The Scan-MUSIC (SMUSIC) algorithm, developed by the U.S. Army Research Laboratory (ARL), improves angular resolution for target detection by using a single rotatable radar scanning the angular region of interest. The algorithm has been adapted and extended from the MUSIC algorithm used for linear sensor arrays. Previously, it was shown that the SMUSIC algorithm with a millimeter-wave radar can resolve two closely spaced point targets that exhibit constructive interference, but not targets that exhibit destructive interference; the algorithm therefore had some limitations for point targets. In this paper, the SMUSIC algorithm is applied to the problem of resolving real, complex scatterer-type targets, which is more useful and of greater practical interest, particularly for future Army radar systems. The paper presents results on the angular resolution of two targets, an M60 tank and an M113 Armored Personnel Carrier (APC), that are within the mainlobe of a Ka-band radar antenna. In particular, we applied the algorithm to resolve the centroids of the targets when they were placed within the beamwidth of the antenna. The coherent data collected using the stepped-frequency radar were converted to magnitude data for the SMUSIC calculation. Even though the signal returns differed significantly for different orientations and offsets of the two targets, we resolved the two target centroids when they were as close as about 1/3 of the antenna beamwidth.
Li, C.; Zhou, X.; Tang, D.; Zhu, Z.
2018-04-01
Resolution and sidelobe level are mutually constraining in SAR imaging: sidelobe suppression usually comes at the cost of reduced resolution. This paper provides a method for resolution enhancement that exploits the opposed sidelobe characteristics of the Hanning-windowed image and the original SAR image. The method maintains high resolution while suppressing sidelobes. Compared to the traditional method, it can enhance resolution by 50% at a sidelobe level of -30 dB.
Open-source algorithm for detecting sea ice surface features in high-resolution optical imagery
Directory of Open Access Journals (Sweden)
N. C. Wright
2018-04-01
Full Text Available Snow, ice, and melt ponds cover the surface of the Arctic Ocean in fractions that change throughout the seasons. These surfaces control albedo and exert tremendous influence over the energy balance in the Arctic. Increasingly available meter- to decimeter-scale resolution optical imagery captures the evolution of the ice and ocean surface state visually, but methods for quantifying coverage of key surface types from raw imagery are not yet well established. Here we present an open-source system designed to provide a standardized, automated, and reproducible technique for processing optical imagery of sea ice. The method classifies surface coverage into three main categories: snow and bare ice, melt ponds and submerged ice, and open water. The method is demonstrated on imagery from four sensor platforms and on imagery spanning from spring thaw to fall freeze-up. Tests show the classification accuracy of this method typically exceeds 96 %. To facilitate scientific use, we evaluate the minimum observation area required for reporting a representative sample of surface coverage. We provide an open-source distribution of this algorithm and associated training datasets and suggest the community consider this a step towards standardizing optical sea ice imagery processing. We hope to encourage future collaborative efforts to improve the code base and to analyze large datasets of optical sea ice imagery.
Open-source algorithm for detecting sea ice surface features in high-resolution optical imagery
Wright, Nicholas C.; Polashenski, Chris M.
2018-04-01
Snow, ice, and melt ponds cover the surface of the Arctic Ocean in fractions that change throughout the seasons. These surfaces control albedo and exert tremendous influence over the energy balance in the Arctic. Increasingly available meter- to decimeter-scale resolution optical imagery captures the evolution of the ice and ocean surface state visually, but methods for quantifying coverage of key surface types from raw imagery are not yet well established. Here we present an open-source system designed to provide a standardized, automated, and reproducible technique for processing optical imagery of sea ice. The method classifies surface coverage into three main categories: snow and bare ice, melt ponds and submerged ice, and open water. The method is demonstrated on imagery from four sensor platforms and on imagery spanning from spring thaw to fall freeze-up. Tests show the classification accuracy of this method typically exceeds 96 %. To facilitate scientific use, we evaluate the minimum observation area required for reporting a representative sample of surface coverage. We provide an open-source distribution of this algorithm and associated training datasets and suggest the community consider this a step towards standardizing optical sea ice imagery processing. We hope to encourage future collaborative efforts to improve the code base and to analyze large datasets of optical sea ice imagery.
10 CFR 1703.102 - Definitions; words denoting number, gender and tense.
2010-01-01
§ 1703.102 Definitions; words denoting number, gender and tense. Agency record is a record in the possession and control of the Board that is associated with Board business. Agency records do not include...
What's in a Name? Denotation, Connotation, and "A Boy Named Sue"
Lawton, Bessie
2011-01-01
Language choice--specifically word choice--is an important topic in a basic communication or public speaking course. One sub-topic under "Language" involves understanding the difference between denotation and connotation. Denotation refers to a word's definition, while connotation refers to the emotions associated with the word. Speakers need to…
18 CFR 1.102 - Words denoting number, gender and so forth.
2010-04-01
§ 1.102 Words denoting number, gender and so forth. In determining the meaning of...) Words of one gender include the other gender. [Order 225, 47 FR 19022, May 3, 1982]
Directory of Open Access Journals (Sweden)
C. Li
2018-04-01
Full Text Available Resolution and sidelobe level are mutually constraining in SAR imaging: sidelobe suppression usually comes at the cost of reduced resolution. This paper provides a method for resolution enhancement that exploits the opposed sidelobe characteristics of the Hanning-windowed image and the original SAR image. The method maintains high resolution while suppressing sidelobes. Compared to the traditional method, it can enhance resolution by 50% at a sidelobe level of −30 dB.
Indian Academy of Sciences (India)
to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...
An evaluation of SEBAL algorithm using high resolution aircraft data acquired during BEAREX07
Paul, G.; Gowda, P. H.; Prasad, V. P.; Howell, T. A.; Staggenborg, S.
2010-12-01
Surface Energy Balance Algorithm for Land (SEBAL) computes spatially distributed surface energy fluxes and evapotranspiration (ET) rates using a combination of empirical and deterministic equations executed in a strictly hierarchical sequence. Over the past decade SEBAL has been tested over various regions and has found application in solving water resources and irrigation problems. This research combines high-resolution remote sensing data and field measurements of surface radiation and agro-meteorological variables to review various SEBAL steps for mapping ET in the Texas High Plains (THP). High-resolution aircraft images (0.5–1.8 m) acquired during the Bushland Evapotranspiration and Agricultural Remote Sensing Experiment 2007 (BEAREX07), conducted at the USDA-ARS Conservation and Production Research Laboratory in Bushland, Texas, were utilized to evaluate SEBAL. The accuracy of individual relationships and of the predicted ET was investigated using observed hourly ET rates from 4 large weighing lysimeters, each located at the center of a 4.7 ha field. The uniqueness and strength of this study come from the fact that it evaluates SEBAL for irrigated and dryland conditions simultaneously, with the four lysimeter fields planted to irrigated forage sorghum, irrigated forage corn, dryland clumped grain sorghum, and dryland row sorghum. Improved coefficients for local conditions were developed for the computation of the roughness length for momentum transport. The decisions involved in the selection of dry and wet pixels, which essentially determine the partitioning of the available energy between sensible (H) and latent (LE) heat fluxes, are discussed. The difference in roughness lengths, referred to as the kB⁻¹ parameter, was modified in the current study. Performance of SEBAL was evaluated using the mean bias error (MBE) and root mean square error (RMSE). An RMSE of ±37.68 W m⁻² and ±0.11 mm h⁻¹ was observed for the net radiation and hourly actual ET, respectively.
International Nuclear Information System (INIS)
Lalush, D.S.; Tsui, B.M.W.; Karimi, S.S.
1996-01-01
We evaluate fast reconstruction algorithms including ordered subsets-EM (OS-EM) and Rescaled Block Iterative EM (RBI-EM) in fully 3D SPECT applications on the basis of their convergence and resolution recovery properties as iterations proceed. Using a 3D computer-simulated phantom consisting of 3D Gaussian objects, we simulated projection data that includes only the effects of sampling and detector response of a parallel-hole collimator. Reconstructions were performed using each of the three algorithms (ML-EM, OS-EM, and RBI-EM) modeling the 3D detector response in the projection function. Resolution recovery was evaluated by fitting Gaussians to each of the four objects in the iterated image estimates at selected intervals. Results show that OS-EM and RBI-EM behave identically in this case; their resolution recovery results are virtually indistinguishable. Their resolution behavior appears to be very similar to that of ML-EM, but accelerated by a factor of twenty. For all three algorithms, smaller objects take more iterations to converge. Next, we consider the effect noise has on convergence. For both noise-free and noisy data, we evaluate the log likelihood function at each subiteration of OS-EM and RBI-EM, and at each iteration of ML-EM. With noisy data, both OS-EM and RBI-EM give results for which the log-likelihood function oscillates. Especially for 180-degree acquisitions, RBI-EM oscillates less than OS-EM. Both OS-EM and RBI-EM appear to converge to solutions, but not to the ML solution. We conclude that both OS-EM and RBI-EM can be effective algorithms for fully 3D SPECT reconstruction. Both recover resolution similarly to ML-EM, only more quickly
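As context for the acceleration claim, the subset-by-subset update that distinguishes OS-EM from ML-EM can be sketched generically (a small dense system; the paper's 3D detector-response projector is not modeled here):

```python
import numpy as np

def os_em(A, y, n_iter=50, n_subsets=4):
    """Ordered-subsets EM for y = A @ x with nonnegative A and x. Each subset
    update is the ML-EM step restricted to a block of rows, which is what
    gives OS-EM its roughly n_subsets-fold acceleration over ML-EM."""
    m, n = A.shape
    x = np.ones(n)
    for _ in range(n_iter):
        for rows in np.array_split(np.arange(m), n_subsets):
            As = A[rows]
            ratio = y[rows] / np.maximum(As @ x, 1e-12)
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return x

# Tiny consistent test system with a known nonnegative solution.
rng = np.random.default_rng(0)
A = rng.random((64, 16))
x_true = rng.random(16)
x_hat = os_em(A, A @ x_true)
print(np.max(np.abs(x_hat - x_true)))   # small for noise-free, consistent data
```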
1981-06-01
The span required or allowed for each task in a single scan is outlined in Table 3-1. The executive program controls the initiation and termination of each task in a step-by-step manner throughout the ATARS process. At the same time, the executive program controls and determines when each task is ready to accept the next… [From an OCR-damaged DTIC report: Automatic Traffic Advisory and Resolution Service (ATARS) algorithm, MITRE Corp. (METREK Division), McLean, VA; R. H. Lentz, W. D. Love, et al., June 1981.]
Létourneau, Pierre-David
2016-09-19
We present a wideband fast algorithm capable of accurately computing the full numerical solution of the problem of acoustic scattering of waves by multiple finite-sized bodies such as spherical scatterers in three dimensions. By full solution, we mean that no assumption (e.g. Rayleigh scattering, geometrical optics, weak scattering, Born single scattering, etc.) is necessary regarding the properties of the scatterers, their distribution or the background medium. The algorithm is also fast in the sense that it scales linearly with the number of unknowns. We use this algorithm to study the phenomenon of super-resolution in time-reversal refocusing in highly-scattering media recently observed experimentally (Lemoult et al., 2011), and provide numerical arguments towards the fact that such a phenomenon can be explained through a homogenization theory.
Directory of Open Access Journals (Sweden)
Jean-François Mahfouf
2012-06-01
Full Text Available The performance of a new data assimilation algorithm called back and forth nudging (BFN) is evaluated using a high-resolution numerical mesoscale model and simulated wind observations in the boundary layer. This new algorithm, of interest for the assimilation of high-frequency observations provided by ground-based active remote-sensing instruments, is straightforward to implement in a realistic atmospheric model. The convergence towards a steady-state profile can be achieved after five iterations of the BFN algorithm, and the algorithm provides an improved solution with respect to direct nudging. It is shown that the contribution of the nudging term does not dominate over other model physical and dynamical tendencies. Moreover, by running backward integrations with an adiabatic version of the model, the nudging coefficients do not need to be increased in order to stabilise the numerical equations. The ability of BFN to produce model changes upstream from the observations, in a similar way to 4-D-Var assimilation systems, is demonstrated. The capacity of the model to adjust to rapid changes in wind direction with the BFN is a first encouraging step, for example, to improve the detection and prediction of low-level wind shear phenomena through high-resolution mesoscale modelling over airports.
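The forward/backward relaxation at the heart of BFN can be illustrated on a toy scalar model. The sketch below assumes observations at every time step and a constant nudging coefficient K; it is not the mesoscale configuration of the paper:

```python
import numpy as np

def bfn(f, y_obs, K, dt, n_sweeps=5, x0=0.0):
    """Back-and-forth nudging for dx/dt = f(x), observed at every step.
    The forward sweep relaxes the model towards the observations; the
    backward sweep does the same while integrating in reverse (the sign of
    the nudging term keeps the reverse integration dissipative)."""
    n = len(y_obs)
    x = x0
    for _ in range(n_sweeps):
        traj = [x]                                   # forward sweep
        for k in range(1, n):
            xk = traj[-1]
            traj.append(xk + dt * (f(xk) + K * (y_obs[k - 1] - xk)))
        x = traj[-1]                                 # backward sweep
        for k in range(n - 2, -1, -1):
            x = x - dt * (f(x) - K * (y_obs[k + 1] - x))
    return x                                         # estimated initial state

# Toy example: noisy observations of a linear decay, true x(0) = 2.0.
def decay(x):
    return -0.5 * x

rng = np.random.default_rng(1)
t = np.arange(0.0, 5.0, 0.01)
y = 2.0 * np.exp(-0.5 * t) + 0.05 * rng.standard_normal(t.size)
print(bfn(decay, y, K=5.0, dt=0.01))   # should land close to 2.0
```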
A Fast Algorithm for Image Super-Resolution from Blurred Observations
Directory of Open Access Journals (Sweden)
Ng Michael K
2006-01-01
Full Text Available We study the problem of reconstructing a high-resolution image from several blurred low-resolution image frames. The image frames consist of blurred, decimated, and noisy versions of a high-resolution image. The high-resolution image is modeled as a Markov random field (MRF), and a maximum a posteriori (MAP) estimation technique is used for the restoration. We show that with the periodic boundary condition, a high-resolution image can be restored efficiently by using fast Fourier transforms. We also apply the preconditioned conjugate gradient method to restore high-resolution images under the aperiodic boundary condition. Computer simulations are given to illustrate the effectiveness of the proposed approach.
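The key computational point is that, with periodic boundaries, the blur matrix is circulant and the regularized normal equations decouple frequency by frequency. A one-frame sketch with a quadratic smoothness prior standing in for the paper's MRF model (so a Tikhonov/Wiener-type filter, not the authors' exact estimator):

```python
import numpy as np

def map_restore(y, psf, lam=1e-3):
    """With periodic boundaries the blur is circulant, so the regularized
    normal equations diagonalize in the Fourier domain. A discrete
    Laplacian serves as the smoothness prior (Tikhonov/Wiener form)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    lap = np.zeros(y.shape)
    lap[0, 0] = 4.0
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1.0
    L = np.fft.fft2(lap)
    X = np.conj(H) * np.fft.fft2(y) / (np.abs(H) ** 2 + lam * np.abs(L) ** 2)
    return np.real(np.fft.ifft2(X))

# Blur a test image with a centered Gaussian PSF, then restore it.
rng = np.random.default_rng(2)
x = rng.random((128, 128))
g = np.exp(-0.5 * np.arange(-3, 4) ** 2)
k = np.outer(g, g) / np.outer(g, g).sum()
psf = np.zeros_like(x)
psf[61:68, 61:68] = k                       # 7x7 kernel centered at (64, 64)
y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(np.fft.ifftshift(psf))))
print("blurred MSE: ", np.mean((y - x) ** 2))
print("restored MSE:", np.mean((map_restore(y, psf, lam=1e-6) - x) ** 2))
```

With noise-free data the regularization weight can be tiny; noisy frames would need a larger lam, which is where the prior does its real work.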
Aircraft target detection algorithm based on high resolution spaceborne SAR imagery
Zhang, Hui; Hao, Mengxi; Zhang, Cong; Su, Xiaojing
2018-03-01
In this paper, an image classification algorithm for airport areas is proposed, based on the statistical features of synthetic aperture radar (SAR) images and the spatial information of pixels. The algorithm combines a Gamma mixture model and an MRF: the Gamma mixture model yields the initial classification result, which is then optimized with the MRF technique using the spatial correlation between pixels. Additionally, morphology methods are employed to extract the airport region of interest (ROI), where suspected aircraft target samples are screened to reduce false alarms and increase detection performance. Finally, this paper presents the aircraft target detection results, which have been verified by simulation tests.
Indian Academy of Sciences (India)
ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...
Studies of high resolution array processing algorithms for multibeam bathymetric applications
Digital Repository Service at National Institute of Oceanography (India)
Chakraborty, B.; Schenke, H.W.
In this paper a study is initiated to observe the usefulness of directional spectral estimation techniques for underwater bathymetric applications. High resolution techniques like the Maximum Likelihood (ML) method and the Maximum Entropy (ME...
Indian Academy of Sciences (India)
algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).
Indian Academy of Sciences (India)
algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where the maximum and minimum is ... It can be shown that the function T(n) = 3n/2 − 2 is the solution to the above ...
Directory of Open Access Journals (Sweden)
N. A. Petrov
2014-01-01
Full Text Available The paper outlines the formulation and solution of the problem of airplane trajectory control under dynamically changing flight conditions. The physical and mathematical formulation of the problem is justified, and algorithms are proposed to solve it using modern automated technologies.
Quan, Haiyang; Wu, Fan; Hou, Xi
2015-10-01
A new method for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution is proposed. It is based on a basic iterative scheme and accelerates the Gauss-Seidel method by introducing a relaxation parameter. This modified Successive Over-Relaxation (SOR) method is effective for solving for the rotationally asymmetric components at pixel-level spatial resolution, without the use of a fitting procedure. Compared to the Jacobi and Gauss-Seidel methods, the modified SOR method with an optimal relaxation factor converges much faster and saves computational cost and memory space without reducing accuracy. This has been confirmed with real experimental results.
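The relaxation idea is the textbook SOR acceleration. A generic sketch on a 1-D Poisson test system (not the authors' surface-reconstruction operator) shows how the relaxation factor omega cuts the sweep count relative to Gauss-Seidel (omega = 1):

```python
import numpy as np

def sor(A, b, omega=1.0, tol=1e-10, max_iter=20000):
    """Successive over-relaxation for A x = b. omega = 1 is Gauss-Seidel;
    1 < omega < 2 over-relaxes each update, which is the acceleration
    mechanism the abstract refers to."""
    x = np.zeros_like(b, dtype=float)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            sigma = A[i] @ x - A[i, i] * x[i]   # uses the freshest values
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            return x, it + 1
    return x, max_iter

# 1-D Poisson test system: a well-chosen omega cuts the sweep count by more
# than an order of magnitude relative to Gauss-Seidel.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
for omega in (1.0, 1.88):
    _, sweeps = sor(A, b, omega)
    print(f"omega = {omega}: {sweeps} sweeps")
```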
Lakshmi, V.; Mladenova, I. E.; Narayan, U.
2009-12-01
Soil moisture is known to be an essential factor in controlling the partitioning of rainfall into surface runoff and infiltration, and of solar energy into latent and sensible heat fluxes. Remote sensing has long proven its capability to obtain soil moisture in near real-time. However, at the present time the Advanced Microwave Scanning Radiometer (AMSR-E) on board NASA's AQUA platform is the only satellite sensor that supplies a soil moisture product, and its coarse spatial resolution (~50 km at 6.9 GHz) strongly limits its applicability to small-scale studies. A very promising technique for spatial disaggregation, combining radar and radiometer observations, has been demonstrated by the authors using a methodology based on the assumption that any change in measured brightness temperature and backscatter from one time step to the next is due primarily to a change in soil wetness. The approach uses radiometric estimates of soil moisture at a lower resolution to compute the sensitivity of radar to soil moisture at that resolution. This sensitivity estimate is then disaggregated using vegetation water content, vegetation type and soil texture information, which are the variables that determine the radar sensitivity to soil moisture and are generally available at the scale of the radar observation. This change detection algorithm is applied to several locations. We have used aircraft-observed active and passive data over the Walnut Creek watershed in central Iowa in 2002, the Little Washita watershed in Oklahoma in 2003, and the Murrumbidgee catchment in southeastern Australia in 2006. All of these locations have different soil and land cover conditions, which makes for a rigorous test of the disaggregation algorithm. Furthermore, we compare the derived high spatial resolution soil moisture to in-situ sampling and ground observation networks
Development of Signal Processing Algorithms for High Resolution Airborne Millimeter Wave FMCW SAR
Meta, A.; Hoogeboom, P.
2005-01-01
For airborne earth observation applications, there is a special interest in lightweight, cost effective, imaging sensors of high resolution. The combination of Frequency Modulated Continuous Wave (FMCW) technology and Synthetic Aperture Radar (SAR) techniques can lead to such a sensor. In this
Directory of Open Access Journals (Sweden)
N. Shahzad
2013-01-01
Full Text Available In 1994, Matthews introduced the notion of partial metric space with the aim of providing a quantitative mathematical model suitable for program verification. Concretely, Matthews proved a partial metric version of the celebrated Banach fixed point theorem, which has become an appropriate quantitative fixed point technique for capturing the meaning of recursive denotational specifications in programming languages. In this paper we show that a few assumptions in the statement of Matthews' fixed point theorem can be relaxed in order to provide a quantitative fixed point technique useful for analyzing the meaning of the aforementioned recursive denotational specifications in programming languages. In particular, we prove a new fixed point theorem for self-mappings between partial metric spaces in which completeness has been replaced by 0-completeness and the contractive condition has been weakened in such a way that the new one best fits the requirements of practical problems in denotational semantics. Moreover, we provide examples showing that the hypotheses in the statement of our new result cannot be weakened. Finally, we show the potential applicability of the developed theory by analyzing a few concrete recursive denotational specifications, some of them admitting a unique meaning and others supporting multiple ones.
Pascal Semantics by a Combination of Denotational Semantics and High-level Petri Nets
DEFF Research Database (Denmark)
Jensen, Kurt; Schmidt, Erik Meineche
1986-01-01
This paper describes the formal semantics of a subset of PASCAL, by means of a semantic model based on a combination of denotational semantics and high-level Petri nets. It is our intention that the paper can be used as part of the written material for an introductory course in computer science....
Indian Academy of Sciences (India)
will become clear in the next article when we discuss a simple Logo-like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to B using C as auxiliary rod. • move_disk(A, C); the (N0 + 1)th disk is moved from A to C directly ...
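The fragment is describing the Towers of Hanoi recursion. For completeness, here is the algorithm it alludes to as a minimal sketch (function names are illustrative, not the article's Logo-like notation):

```python
def move_disk(src, dst):
    print(f"move top disk from {src} to {dst}")

def hanoi(n, src, dst, aux):
    """Move n disks from src to dst using aux as the auxiliary rod:
    park n-1 disks on aux, move the largest disk directly, then
    bring the n-1 disks from aux onto dst."""
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst)
    move_disk(src, dst)
    hanoi(n - 1, aux, dst, src)

hanoi(3, "A", "C", "B")   # 2**3 - 1 = 7 moves
```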
International Nuclear Information System (INIS)
Pan Jun-Yang; Xie Yi
2015-01-01
With tremendous advances in modern techniques, Einstein's general relativity has become an inevitable part of deep space missions. We investigate the relativistic algorithm for time transfer between the proper time τ of the onboard clock and Geocentric Coordinate Time, which extends some previous works by including the effects of the propagation of electromagnetic signals. In order to evaluate the implicit algebraic equations and integrals in the model, we take an analytic approach to work out their approximate values. This analytic model might be used in an onboard computer, given the computer's limited capability to perform calculations. Taking an orbiter like Yinghuo-1 as an example, we find that the contributions of the Sun, the ground station and the spacecraft dominate the outcomes of the relativistic corrections to the model.
Semantic motivation for the denotational identity of arguments in predication structures
Directory of Open Access Journals (Sweden)
Viara Maldjieva
2015-11-01
Full Text Available Semantic motivation for the denotational identity of arguments in predication structures This text is an attempt at a preliminary outline of the factors that motivate the denotational identity of argument content in the predication structure, as well as the consequences of this identity for the shape of the sentence expression which realizes such a structure. The first question this analysis attempts to answer concerns the structure of the predicative concepts that constitute predication structures with arguments of identical content. The second question concerns the manner in which the identity existing at the level of semantic structure is signaled on the surface, in the formal structure.
Ramage, J. M.; Brodzik, M. J.; Hardman, M.; Troy, T. J.
2017-12-01
Snow is a vital part of the terrestrial hydrological cycle and a crucial resource for people and ecosystems. In mountainous regions snow is extensive, variable, and challenging to document. Snow melt timing and duration are important factors affecting the transfer of snow mass to soil moisture and runoff. Passive microwave brightness temperature (Tb) changes at 36 and 18 GHz are a sensitive way to detect snow melt onset due to their sensitivity to the abrupt change in emissivity, and they are widely used on large icefields and in high-latitude watersheds. The coarse resolution (~25 km) of historically available data has precluded effective use in high-relief, heterogeneous regions, and gaps between swaths also create temporal data gaps at lower latitudes. New enhanced-resolution data products generated with the radiometer version of the Scatterometer Image Reconstruction (rSIR) technique are available at the original frequencies. We use these Calibrated Enhanced-Resolution Brightness Temperature (CETB) Earth System Data Records (ESDR) to evaluate existing snow melt detection algorithms that have been used in other environments, including the cross-polarized gradient ratio (XPGR) and the diurnal amplitude variations (DAV) approaches. We use the 36/37 GHz (3.125 km resolution) and 18/19 GHz (6.25 km resolution) vertically and horizontally polarized datasets from the Special Sensor Microwave Imager (SSM/I) and the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) and evaluate them for use in this high-relief environment. The new data are used to assess glacier and snow melt records in the Hunza River Basin (area ~13,000 sq. km, located at 36°N, 74°E), a tributary of the Upper Indus Basin, Pakistan. We compare the melt timing results visually and quantitatively to the corresponding EASE-Grid 2.0 25-km dataset, SRTM topography, and surface temperatures from station and reanalysis data. The new dataset is coarser than the topography, but is able to differentiate signals of melt/refreeze timing for
Hoffmann, Mathias; Schulz-Hanke, Maximilian; Garcia Alba, Joana; Jurisch, Nicole; Hagemann, Ulrike; Sachs, Torsten; Sommer, Michael; Augustin, Jürgen
2016-04-01
Processes driving methane (CH4) emissions in wetland ecosystems are highly complex. In particular, the separation of CH4 emissions into ebullition- and diffusion-derived flux components, a prerequisite for mechanistic process understanding and the identification of potential environmental drivers, is rather challenging. We present a simple calculation algorithm, based on an adaptive R script, which separates open-water, closed-chamber CH4 flux measurements into diffusion- and ebullition-derived components. Hence, flux-component-specific dynamics are revealed and potential environmental drivers identified. Flux separation is based on a statistical approach, using ebullition-related sudden concentration changes obtained during high-resolution CH4 concentration measurements. By applying the lower and upper quartile ± the interquartile range (IQR) as a variable threshold, diffusion-dominated periods of the flux measurement are filtered. Subsequently, flux calculation and separation are performed. The algorithm was verified in a laboratory experiment and tested under field conditions, using flux measurement data (July to September 2013) from a flooded, former fen grassland site. Erratic ebullition events contributed 46% of total CH4 emissions, which is comparable to values reported in the literature. Additionally, a shift in the diurnal trend of diffusive fluxes throughout the measurement period, driven by the water temperature gradient, was revealed.
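The thresholding idea translates directly into code. A minimal sketch on synthetic data (the published R script is not reproduced; the quartile ± IQR threshold is applied to per-step concentration increments):

```python
import numpy as np

def separate_fluxes(conc, dt):
    """Split a closed-chamber CH4 concentration series into diffusion- and
    ebullition-derived parts: per-step increments outside the variable
    threshold [Q1 - IQR, Q3 + IQR] are treated as ebullition events, the
    remainder as diffusive build-up."""
    d = np.diff(conc)
    q1, q3 = np.percentile(d, [25, 75])
    iqr = q3 - q1
    ebull = (d < q1 - iqr) | (d > q3 + iqr)
    diffusive_slope = np.median(d[~ebull]) / dt    # robust diffusion rate
    ebullition_total = d[ebull].sum()              # sum of sudden jumps
    return diffusive_slope, ebullition_total

# Synthetic 10-minute chamber closure at 1 Hz: slow build-up + two bubbles.
rng = np.random.default_rng(3)
t = np.arange(0, 600, 1.0)
conc = 2.0 + 0.001 * t + 0.002 * rng.standard_normal(t.size)
conc[200:] += 0.15
conc[450:] += 0.08
slope, bubbles = separate_fluxes(conc, dt=1.0)
print(f"diffusive slope ~ {slope:.4f} ppm/s, ebullition total ~ {bubbles:.2f} ppm")
```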
Pi, Shaohua; Wang, Bingjie; Zhao, Jiang; Sun, Qi
2016-10-10
In the Sagnac fiber optic interferometer system, the phase difference signal can be described as a convolution of the waveform of the invasion with its occurring-position-associated transfer function h(t); deconvolution is introduced to improve the spatial resolution of the localization. In general, to get a 26 m spatial resolution at a sampling rate of 4×10⁶ s⁻¹, the algorithm should mainly go through three steps after the preprocessing operations. First, the decimated phase difference signal is transformed from the time domain into the real cepstrum domain, where a probable region of invasion distance can be ascertained. Second, a narrower region of invasion distance is acquired by coarsely assuming and sweeping a transfer function h(t) within the probable region and examining where the restored invasion waveform x(t) reaches its minimum standard deviation. Third, fine sweeping the narrow region point by point with the same criteria gives the final localization. Also, the original waveform of the invasion can be restored for the first time as a by-product, which provides more accurate and pure characteristics for further processing, such as subsequent pattern recognition.
Directory of Open Access Journals (Sweden)
Byongjun Hwang
2017-07-01
Full Text Available In this study, we present an algorithm for summer sea ice conditions that semi-automatically produces the floe size distribution of Arctic sea ice from high-resolution satellite Synthetic Aperture Radar data. Currently, floe size distribution data from satellite images are very rare in the literature, mainly due to the lack of a reliable algorithm to produce such data. Here, we developed the algorithm by combining various image analysis methods, including Kernel Graph Cuts, distance transformation and watershed transformation, and a rule-based boundary revalidation. The developed algorithm has been validated against ground truth extracted manually with the aid of 1-m resolution visible satellite data. Comprehensive validation analysis has shown both promise and limitations. The algorithm tends to fail to detect small floes (mostly less than 100 m in mean caliper diameter) compared to ground truth, which is mainly due to limitations in water-ice segmentation. Some variability in the power law exponent of the floe size distribution is observed due to the effects of control parameters in the process of de-noising, Kernel Graph Cuts segmentation, thresholds for boundary revalidation and image resolution. Nonetheless, for floes larger than 100 m, the algorithm has shown reasonable agreement with ground truth under various selections of these control parameters. Considering that the coverage and spatial resolution of satellite Synthetic Aperture Radar data have increased significantly in recent years, the developed algorithm opens a new possibility to produce large volumes of floe size distribution data, which is essential for improving our understanding and prediction of the Arctic sea ice cover.
Karthick, P A; Ghosh, Diptasree Maitra; Ramakrishnan, S
2018-02-01
Surface electromyography (sEMG) based muscle fatigue research is widely preferred in sports science and occupational/rehabilitation studies due to its noninvasiveness. However, these signals are complex, multicomponent and highly nonstationary, with large inter-subject variations, particularly during dynamic contractions. Hence, time-frequency based machine learning methodologies can improve the design of automated systems for these signals. In this work, analyses based on high-resolution time-frequency methods, namely the Stockwell transform (S-transform), B-distribution (BD) and extended modified B-distribution (EMBD), are proposed to differentiate dynamic muscle nonfatigue and fatigue conditions. The nonfatigue and fatigue segments of sEMG signals recorded from the biceps brachii of 52 healthy volunteers are preprocessed and subjected to the S-transform, BD and EMBD. Twelve features are extracted from each method and prominent features are selected using a genetic algorithm (GA) and binary particle swarm optimization (BPSO). Five machine learning algorithms, namely naïve Bayes, support vector machines (SVM) with polynomial and radial basis kernels, random forests and rotation forests, are used for the classification. The results show that all the proposed time-frequency distributions (TFDs) are able to capture the nonstationary variations of sEMG signals. Most of the features exhibit a statistically significant difference between the muscle fatigue and nonfatigue conditions. The largest feature reduction (66%) is achieved by GA and BPSO for the EMBD and BD TFDs, respectively. The combination of EMBD features and a polynomial-kernel SVM is found to be most accurate (91% accuracy) in classifying the conditions with the features selected using GA. The proposed methods are found to be capable of handling the nonstationary and multicomponent variations of sEMG signals recorded in dynamic fatiguing contractions. Particularly, the combination of EMBD features and a polynomial-kernel SVM could be used to
Directory of Open Access Journals (Sweden)
Jiaye Li
2018-04-01
Full Text Available River discharge, which represents the accumulation of surface water flowing into rivers and ultimately into the ocean or other water bodies, may have great impacts on water quality and the living organisms in rivers. However, global knowledge of river discharge is still poor and worth exploring. This study proposes an efficient method for mapping high-resolution global river discharge based on the algorithms of drainage network extraction. Using an existing global runoff map and digital elevation model (DEM) data as inputs, the method consists of three steps. First, the pixels of the runoff map and the DEM data are resampled to the same resolution (i.e., 0.01 degrees). Second, the flow direction of each pixel of the DEM data (identified by the optimal flow path method used in drainage network extraction) is determined and then applied to the corresponding pixel of the runoff map. Third, the river discharge of each pixel of the runoff map is calculated by summing the runoff of all the pixels upstream of this pixel, similar to the upslope area accumulation step in drainage network extraction. Finally, a 0.01-degree global map of mean annual river discharge is obtained. Moreover, a 0.5-degree global map of mean annual river discharge is produced to display the results with a more intuitive perception. Compared against existing global river discharge databases, the 0.01-degree map is of generally high accuracy for the selected river basins, especially for the Amazon River basin, with the lowest relative error (RE) of 0.3%, and the Yangtze River basin, within the RE range of ±6.0%. However, it is noted that the results for the Congo and Zambezi River basins are not satisfactory, with RE values over 90%, and it is inferred that there may be accuracy problems with the runoff map in these river basins.
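Steps two and three are the flow-routing and upslope-accumulation passes familiar from drainage-network extraction. A compact sketch of that core idea using plain D8 steepest descent on a toy grid (the paper's optimal-flow-path method and 0.01-degree global grids are not reproduced):

```python
import numpy as np

def discharge_map(dem, runoff):
    """Route each cell's runoff along the D8 steepest-descent direction and
    accumulate it downstream, mirroring the upslope-area step of drainage
    network extraction. Cells are processed from high to low elevation so
    every contribution is passed on before its receiver is visited."""
    rows, cols = dem.shape
    acc = runoff.astype(float).copy()
    order = np.argsort(dem, axis=None)[::-1]          # highest cells first
    for idx in order:
        r, c = divmod(int(idx), cols)
        best, target = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                    drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                    if drop > best:
                        best, target = drop, (rr, cc)
        if target is not None:
            acc[target] += acc[r, c]                  # pass flow downstream
    return acc  # per-cell discharge = accumulated upstream runoff

# Tiny tilted DEM with uniform runoff: discharge grows towards the outlet.
dem = np.add.outer(np.arange(5, 0, -1), np.arange(5, 0, -1)).astype(float)
print(discharge_map(dem, np.ones((5, 5))))
```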
Samanipour, Saer; Reid, Malcolm J; Bæk, Kine; Thomas, Kevin V
2018-04-17
Nontarget analysis is considered one of the most comprehensive tools for the identification of unknown compounds in a complex sample analyzed via liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS). Due to the complexity of the data generated via LC-HRMS, the data-dependent acquisition mode, which produces the MS2 spectra of a limited number of the precursor ions, has been one of the most common approaches used during nontarget screening. However, the data-independent acquisition mode produces highly complex spectra that require proper deconvolution and library search algorithms. We have developed a deconvolution algorithm and a universal library search algorithm (ULSA) for the analysis of complex spectra generated via data-independent acquisition. These algorithms were validated and tested using both semisynthetic and real environmental data. A total of 6000 randomly selected spectra from MassBank were introduced across the total ion chromatograms of 15 sludge extracts at three levels of background complexity for the validation of the algorithms via semisynthetic data. The deconvolution algorithm successfully extracted more than 60% of the added ions in the analytical signal for 95% of processed spectra (i.e., 3 complexity levels multiplied by 6000 spectra). The ULSA ranked the correct spectra among the top three for more than 95% of cases. We further tested the algorithms with 5 wastewater effluent extracts for 59 artificial unknown analytes (i.e., their presence or absence was confirmed via target analysis). These algorithms did not produce any cases of false identification while correctly identifying ∼70% of the total inquiries. The implications, capabilities, and limitations of both algorithms are further discussed.
Clay, M. P.; Buaria, D.; Gotoh, T.; Yeung, P. K.
2017-10-01
A new dual-communicator algorithm with very favorable performance characteristics has been developed for direct numerical simulation (DNS) of turbulent mixing of a passive scalar governed by an advection-diffusion equation. We focus on the regime of high Schmidt number (Sc), where because of low molecular diffusivity the grid-resolution requirements for the scalar field are stricter than those for the velocity field by a factor of √Sc. Computational throughput is improved by simulating the velocity field on a coarse grid of Nv³ points with a Fourier pseudo-spectral (FPS) method, while the passive scalar is simulated on a fine grid of Nθ³ points with a combined compact finite difference (CCD) scheme which computes first and second derivatives at eighth-order accuracy. A static three-dimensional domain decomposition and a parallel solution algorithm for the CCD scheme are used to avoid the heavy communication cost of memory transposes. A kernel is used to evaluate several approaches to optimize the performance of the CCD routines, which account for 60% of the overall simulation cost. On the petascale supercomputer Blue Waters at the University of Illinois, Urbana-Champaign, scalability is improved substantially with a hybrid MPI-OpenMP approach in which a dedicated thread per NUMA domain overlaps communication calls with computational tasks performed by a separate team of threads spawned using OpenMP nested parallelism. At a target production problem size of 8192³ (0.5 trillion) grid points on 262,144 cores, CCD timings are reduced by 34% compared to a pure-MPI implementation. Timings for 16384³ (4 trillion) grid points on 524,288 cores encouragingly maintain scalability greater than 90%, although the wall clock time is too high for production runs at this size. Performance monitoring with CrayPat for problem sizes up to 4096³ shows that the CCD routines can achieve nearly 6% of the peak flop rate. The new DNS code is built upon two existing FPS and CCD codes
Directory of Open Access Journals (Sweden)
Wenzhu Huang
2015-04-01
Full Text Available Static strain can be detected by measuring the cross-correlation of reflection spectra from two fiber Bragg gratings (FBGs). However, the static-strain measurement resolution is limited by the dominant Gaussian noise source when using this traditional method. This paper presents a novel static-strain demodulation algorithm for FBG-based Fabry-Perot interferometers (FBG-FPs). The Hilbert transform is proposed for changing the Gaussian distribution of the two FBG-FPs' reflection spectra, and a cross third-order cumulant is applied to the results of the Hilbert transform to obtain a group of noise-suppressed signals from which the wavelength difference of the two FBG-FPs can be accurately calculated. The benefit of these processes is that Gaussian noise in the spectra can, in theory, be suppressed completely, and a higher resolution can be reached. In order to verify the precision and flexibility of this algorithm, a detailed theoretical model and a simulation analysis are given, and an experiment is implemented. As a result, a static-strain resolution of 0.9 nε under laboratory conditions is achieved, showing a higher resolution than the traditional cross-correlation method.
Siddeq, M. M.; Rodrigues, M. A.
2015-09-01
Image compression techniques are widely used on 2D images, 2D video, 3D images and 3D video. There are many types of compression techniques, among which the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC- and AC-Matrix, holding the low- and high-frequency content, respectively; (2) apply a second-level DCT to the DC-Matrix to generate two arrays, namely a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search algorithm (FMS), is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities by using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC-values are combined with the decoded AC-coefficients in one matrix, followed by an inverse two-level DCT and two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
Yamamoto, M; Hayata, I; Furuta, S
1992-03-01
Since 1989 we have promoted a project to develop an automated scoring system for radiation-induced chromosome aberrations. As a first step, a high-resolution image processing system for study purposes, NIRS-1000: CHROMO STUDY, has been developed. It is composed of: (1) CHROMO MARKER, whose main purpose is to mark images to build an image database; (2) CHROMO ALGO, whose purpose is algorithm development; and (3) METAPHASE RANKER, whose purposes are metaphase finding and ranking with a high-power objective lens. METAPHASE RANKER is presently under development. The system utilizes a high-definition video system so as to realize the best spatial resolution achievable with an optical microscope using an objective lens (×100, numerical aperture 1.4). The video camera has 1024 effective scan lines, realizing 0.1 micron sampling on a specimen. The system resolution achieved on the hard copy is less than 0.3 microns on a specimen. A preliminary algorithm has been developed to classify the aberrations on the system using projection information of gray level. Preliminary test results on 10 excellent metaphases show that the correct classification ratio is 92.7%, the detection rate of the aberrations is 83.3%, and the false positive rate is 6.1%.
Tylen, Ulf; Friman, Ola; Borga, Magnus; Angelhed, Jan-Erik
2001-05-01
Emphysema is characterized by destruction of lung tissue with development of small or large holes within the lung. These areas have Hounsfield values (HU) approaching -1000. It is possible to detect and quantify such areas using a simple density mask technique. However, the edge enhancement reconstruction algorithm, gravity, and motion of the heart and vessels during scanning cause artefacts. The purpose of our work was to construct an algorithm that detects such image artefacts and corrects them. The first step is to apply inverse filtering to the image, removing much of the effect of the edge enhancement reconstruction algorithm. The next step involves computation of the antero-posterior density gradient caused by gravity, and correction for it. In a third step, motion artefacts are corrected using normalized averaging, thresholding and region growing. Twenty volunteers were investigated, 10 with slight emphysema and 10 without. Using the simple density mask technique it was not possible to separate persons with disease from those without. Our algorithm improved the separation of the two groups considerably. It needs further refinement, but may form a basis for further development of methods for computerized diagnosis and quantification of emphysema by HRCT.
Chemyakin, Eduard; Müller, Detlef; Burton, Sharon; Kolgotin, Alexei; Hostetler, Chris; Ferrare, Richard
2014-11-01
We present the results of a feasibility study in which a simple, automated, and unsupervised algorithm, which we call the arrange and average algorithm, is used to infer microphysical parameters (complex refractive index, effective radius, total number, surface area, and volume concentrations) of atmospheric aerosol particles. The algorithm uses backscatter coefficients at 355, 532, and 1064 nm and extinction coefficients at 355 and 532 nm as input information. Testing of the algorithm is based on synthetic optical data that are computed from prescribed monomodal particle size distributions and complex refractive indices that describe spherical, primarily fine mode pollution particles. We tested the performance of the algorithm for the "3 backscatter (β)+2 extinction (α)" configuration of a multiwavelength aerosol high-spectral-resolution lidar (HSRL) or Raman lidar. We investigated the degree to which the microphysical results retrieved by this algorithm depends on the number of input backscatter and extinction coefficients. For example, we tested "3β+1α," "2β+1α," and "3β" lidar configurations. This arrange and average algorithm can be used in two ways. First, it can be applied for quick data processing of experimental data acquired with lidar. Fast automated retrievals of microphysical particle properties are needed in view of the enormous amount of data that can be acquired by the NASA Langley Research Center's airborne "3β+2α" High-Spectral-Resolution Lidar (HSRL-2). It would prove useful for the growing number of ground-based multiwavelength lidar networks, and it would provide an option for analyzing the vast amount of optical data acquired with a future spaceborne multiwavelength lidar. The second potential application is to improve the microphysical particle characterization with our existing inversion algorithm that uses Tikhonov's inversion with regularization. This advanced algorithm has recently undergone development to allow automated and
W. Wang; J.J. Qu; X. Hao; Y. Liu
2009-01-01
In the southeastern United States, most wildland fires are of low intensity. A substantial number of these fires cannot be detected by the MODIS contextual algorithm. To improve the accuracy of fire detection for this region, the remote-sensed characteristics of these fires have to be...
International Nuclear Information System (INIS)
Wintersperger, Bernd J.; Nikolaou, Konstantin; Dietrich, Olaf; Reiser, Maximilian F.; Schoenberg, Stefan O.; Rieber, Johannes; Nittka, Matthias
2003-01-01
The purpose of this study was to test parallel imaging techniques for improvement of temporal resolution in multislice single breath-hold real-time cine steady-state free precession (SSFP) in comparison with standard segmented single-slice SSFP techniques. Eighteen subjects were examined on a 1.5-T scanner using a multislice real-time cine SSFP technique using the GRAPPA algorithm. Global left ventricular parameters (EDV, ESV, SV, EF) were evaluated and results compared with a standard segmented single-slice SSFP technique. Results for EDV (r=0.93), ESV (r=0.99), SV (r=0.83), and EF (r=0.99) of real-time multislice SSFP imaging showed a high correlation with results of segmented SSFP acquisitions. Systematic differences between both techniques were statistically non-significant. Single breath-hold multislice techniques using GRAPPA allow for improvement of temporal resolution and for accurate assessment of global left ventricular functional parameters. (orig.)
International Nuclear Information System (INIS)
Wang, Qi; Wang, Huaxiang; Xin, Shan
2011-01-01
The flow regimes are important characteristics to describe two-phase flows, and measurement of two-phase flow parameters is becoming increasingly important in many industrial processes. Computerized tomography (CT) has been applied to two-phase/multi-phase flow measurement in recent years. Image reconstruction of CT often involves repeatedly solving large-dimensional matrix equations, which are computationally expensive, especially for the case of online flow regime identification. In this paper, minimum cross entropy reconstruction based on multi-resolution processing (MRMCE) is presented for oil–gas two-phase flow regime identification. A regularized MCE solution is obtained using the simultaneous multiplicative algebraic reconstruction technique (SMART) at a coarse resolution level, where important information on the reconstructed image is contained. Then, the solution in the finest resolution is obtained by inverse fast wavelet transformation. Both computer simulation and static/dynamic experiments were carried out for typical flow regimes. Results obtained indicate that the proposed method can dramatically reduce the computational time and improve the quality of the reconstructed image with suitable decomposition levels compared with the single-resolution maximum likelihood expectation maximization (MLEM), alternating minimization (AM), Landweber, iterative least square technique (ILST) and minimum cross entropy (MCE) methods. Therefore, the MRMCE method is suitable for identification of dynamic two-phase flow regimes
Directory of Open Access Journals (Sweden)
Tingting Jin
2017-04-01
Full Text Available Multichannel synthetic aperture radar (SAR) is a significant breakthrough with respect to the inherent trade-off between high resolution and wide swath (HRWS) in conventional SAR. Moving target indication (MTI) is an important application of spaceborne HRWS SAR systems. In contrast to previous studies of SAR MTI, HRWS SAR mainly faces the problem of under-sampled data in each channel, making single-channel imaging and processing infeasible. In this study, the estimation of radial velocity is recast as estimation of the cone angle, using the relationship between the two. A maximum likelihood (ML) based algorithm is proposed to estimate the radial velocity in the presence of Doppler ambiguities. After that, signal reconstruction and compensation for the phase offset caused by the radial velocity are performed for a moving target. Finally, a traditional imaging algorithm is applied to obtain a focused moving-target image. Experiments are conducted to evaluate the accuracy and effectiveness of the estimator under different signal-to-noise ratios (SNR). Furthermore, the performance is analyzed with respect to a moving ship experiencing interference from different distributions of sea clutter. The results verify that the proposed algorithm is accurate and efficient, with low computational complexity. This paper aims at providing a solution to the velocity estimation problem in future HRWS SAR systems with multiple receive channels.
Duan, Limin; Fan, Keke; Li, Wei; Liu, Tingxi
2017-12-01
Daily precipitation data from 42 stations in Inner Mongolia, China for the 10-year period from 1 January 2001 to 31 December 2010 were utilized along with downscaled data from the Tropical Rainfall Measuring Mission (TRMM) with a spatial resolution of 0.25° × 0.25° for the same period, based on the statistical relationships between the normalized difference vegetation index (NDVI), meteorological variables, and digital elevation model (DEM) data, using the leave-one-out (LOO) cross-validation method and multivariate stepwise regression. The results indicate that (1) TRMM data can indeed be used to estimate annual precipitation in Inner Mongolia, and there is a linear relationship between annual TRMM and observed precipitation; (2) there is a significant relationship between TRMM-based precipitation and predicted precipitation, with a spatial resolution of 0.50° × 0.50°; (3) NDVI and temperature are important factors influencing the downscaling of TRMM precipitation data, while DEM slope is not the most significant factor affecting the downscaled TRMM data; and (4) the downscaled TRMM data reflect spatial patterns in annual precipitation reasonably well, showing less precipitation falling in west Inner Mongolia and more in the south and southeast. The new approach proposed here provides a useful alternative for evaluating spatial patterns in precipitation and can thus be applied to generate a more accurate precipitation dataset to support both irrigation management and the conservation of this fragile grassland ecosystem.
Komorkiewicz, Mateusz; Kryjak, Tomasz; Gorgon, Marek
2014-01-01
This article presents an efficient hardware implementation of the Horn-Schunck algorithm that can be used in an embedded optical flow sensor. An architecture is proposed that realises the iterative Horn-Schunck algorithm in a pipelined manner. This modification makes it possible to achieve a data throughput of 175 Mpixels/s and makes processing of a Full HD video stream (1920 × 1080 @ 60 fps) possible. The structure of the optical flow module as well as the pre- and post-filtering blocks and a flow reliability computation unit are described in detail. Three versions of the optical flow module, with different numerical precision, working frequency and result accuracy, are proposed. The errors caused by switching from floating- to fixed-point computations are also evaluated. The described architecture was tested on popular sequences from the Middlebury University optical flow dataset. It achieves state-of-the-art results among hardware implementations of single-scale methods. The designed fixed-point architecture achieves a performance of 418 GOPS with a power efficiency of 34 GOPS/W. The proposed floating-point module achieves 103 GFLOPS, with a power efficiency of 24 GFLOPS/W. Moreover, a 100-fold speedup compared to a modern CPU with SIMD support is reported. A complete, working vision system realized on a Xilinx VC707 evaluation board is also presented. It is able to compute optical flow for a Full HD video stream received from an HDMI camera in real time. The obtained results prove that FPGA devices are an ideal platform for embedded vision systems. PMID:24526303
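As a reference for what the pipeline implements, the single-scale Horn-Schunck iteration is only a few lines in floating point. This sketch uses simple gradient and four-neighbour averaging stencils and is not the paper's fixed-point FPGA architecture:

```python
import numpy as np

def avg4(f):
    """Four-neighbour average with wrap-around boundaries."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0

def horn_schunck(im1, im2, alpha=1.0, n_iter=200):
    """Single-scale Horn-Schunck: Jacobi-style iteration of
    u = u_avg - Ix (Ix u_avg + Iy v_avg + It) / (alpha^2 + Ix^2 + Iy^2),
    and likewise for v."""
    Ix = np.gradient(im1, axis=1)
    Iy = np.gradient(im1, axis=0)
    It = im2 - im1
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        ua, va = avg4(u), avg4(v)
        t = (Ix * ua + Iy * va + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u, v = ua - Ix * t, va - Iy * t
    return u, v

# Synthetic test: shift a smooth pattern one pixel to the right.
xx, yy = np.meshgrid(np.arange(64.0), np.arange(64.0))
im1 = np.sin(xx / 4.0) + np.cos(yy / 5.0)
im2 = np.sin((xx - 1.0) / 4.0) + np.cos(yy / 5.0)
u, v = horn_schunck(im1, im2)
print(u.mean(), v.mean())   # mean flow should be close to (1, 0)
```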
Kasumov, Takhar; Ilchenko, Serguey; Li, Ling; Rachdaoui, Nadia; Sadygov, Rovshan G; Willard, Belinda; McCullough, Arthur J; Previs, Stephen
2011-05-01
We recently developed a method for estimating protein dynamics in vivo with heavy water (²H₂O) using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) [16], and we confirmed that ²H labeling of many hepatic free amino acids rapidly equilibrates with body water. Although this is a reliable method, it required modest sample purification and necessitated the determination of tissue-specific amino acid labeling. Another approach for quantifying protein kinetics is to measure the ²H enrichments of body water (precursor) and protein-bound amino acid or proteolytic peptide (product) and to estimate how many copies of deuterium are incorporated into a product. In the current study, we used nanospray linear trap Fourier transform ion cyclotron resonance mass spectrometry (LTQ FT-ICR MS) to simultaneously measure the isotopic enrichment of peptides and protein-bound amino acids. A mathematical algorithm was developed to aid the data processing. The most notable improvement centers on the fact that the precursor/product labeling ratio can be obtained by measuring the labeling of water and a protein (or peptide) of interest, thereby minimizing the need to measure the amino acid labeling. As a proof of principle, we demonstrate that this approach can detect the effect of nutritional status on albumin synthesis in rats given ²H₂O.
Atoche, Alejandro Castillo; Castillo, Javier Vázquez
2012-01-01
A high-speed dual super-systolic core for reconstructive signal processing (SP) operations consists of a double parallel systolic array (SA) machine in which each processing element of the array is also conceptualized as another SA in a bit-level fashion. In this study, we addressed the design of a high-speed dual super-systolic array (SSA) core for the enhancement/reconstruction of remote sensing (RS) imaging of radar/synthetic aperture radar (SAR) sensor systems. The selected reconstructive SP algorithms are efficiently transformed in their parallel representation and then, they are mapped into an efficient high performance embedded computing (HPEC) architecture in reconfigurable Xilinx field programmable gate array (FPGA) platforms. As an implementation test case, the proposed approach was aggregated in a HW/SW co-design scheme in order to solve the nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) from a remotely sensed scene. We show how such dual SSA core, drastically reduces the computational load of complex RS regularization techniques achieving the required real-time operational mode. PMID:22736964
Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.
2018-04-01
Accurate positioning of optical satellite images without ground control is a precondition for remote sensing applications and small/medium-scale mapping over large foreign areas or with large volumes of images. In this paper, aiming at the geometric features of optical satellite images, and building on a widely used optimization method for constrained problems called the Alternating Direction Method of Multipliers (ADMM) together with RFM least-squares block adjustment, we propose a GCP-independent block adjustment method for large-scale domestic high-resolution optical satellite imagery, GISIBA (GCP-Independent Satellite Imagery Block Adjustment), which is easy to parallelize and highly efficient. In this method, virtual "average" control points are constructed to solve the rank-defect problem, and qualitative and quantitative analyses of block adjustment without ground control are carried out. The test results prove that the horizontal and vertical accuracies of multi-covered and multi-temporal satellite images are better than 10 m and 6 m, respectively. Meanwhile, the mosaic problem of adjacent areas in large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, through experiments using GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy and performance of the developed procedure are presented and studied.
Lavine, B K; Ritter, J; Moores, A J; Wilson, M; Faruque, A; Mayfield, H T
2000-01-15
Solid-phase microextraction (SPME), capillary column gas chromatography, and pattern recognition methods were used to develop a potential method for typing jet fuels so a spill sample in the environment can be traced to its source. The test data consisted of gas chromatograms from 180 neat jet fuel samples representing common aviation turbine fuels found in the United States (JP-4, Jet-A, JP-7, JPTS, JP-5, JP-8). SPME sampling of the fuel's headspace afforded well-resolved reproducible profiles, which were standardized using special peak-matching software. The peak-matching procedure yielded 84 standardized retention time windows, though not all peaks were present in all gas chromatograms. A genetic algorithm (GA) was employed to identify features (in the standardized chromatograms of the neat jet fuels) suitable for pattern recognition analysis. The GA selected peaks, whose two largest principal components showed clustering of the chromatograms on the basis of fuel type. The principal component analysis routine in the fitness function of the GA acted as an information filter, significantly reducing the size of the search space, since it restricted the search to feature subsets whose variance is primarily about differences between the various fuel types in the training set. In addition, the GA focused on those classes and/or samples that were difficult to classify as it trained using a form of boosting. Samples that consistently classify correctly were not as heavily weighted as samples that were difficult to classify. Over time, the GA learned its optimal parameters in a manner similar to a perceptron. The pattern recognition GA integrated aspects of strong and weak learning to yield a "smart" one-pass procedure for feature selection.
Directory of Open Access Journals (Sweden)
Hassan Mohamed
2018-05-01
Full Text Available Benthic habitat monitoring is essential for many applications involving biodiversity, marine resource management, and the estimation of variations over temporal and spatial scales. Nevertheless, both automatic and semi-automatic analytical methods for deriving ecologically significant information from towed camera images are still limited. This study proposes a methodology that enables a high-resolution towed camera with a Global Navigation Satellite System (GNSS) to adaptively monitor and map benthic habitats. First, the towed camera completes a pre-programmed initial survey to collect benthic habitat videos, which can then be converted to geo-located benthic habitat images. Second, an expert labels a number of benthic habitat images to classify habitats manually. Third, attributes for categorizing these images are extracted automatically using the Bag of Features (BOF) algorithm. Fourth, benthic cover categories are detected automatically using Weighted Majority Voting (WMV) ensembles of Support Vector Machine (SVM), K-Nearest Neighbor (K-NN), and Bagging (BAG) classifiers. Fifth, the trained WMV ensembles can be used to categorize further benthic cover images automatically. Finally, correctly categorized geo-located images can provide ground truth samples for benthic cover mapping using high-resolution satellite imagery. The proposed methodology was tested over Shiraho, Ishigaki Island, Japan, a heterogeneous coastal area. The WMV ensemble exhibited 89% overall accuracy for categorizing corals, sediments, seagrass, and algae species. Furthermore, the same WMV ensemble produced a benthic cover map using a Quickbird satellite image with 92.7% overall accuracy.
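The ensemble step can be sketched with off-the-shelf classifiers: each model votes with a weight tied to its validation accuracy. The sketch below substitutes random features for the Bag of Features descriptors, so everything except the voting scheme is an illustrative assumption:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Stand-in data for BOF descriptors of four benthic cover classes.
X, y = make_classification(n_samples=600, n_features=40, n_classes=4,
                           n_informative=12, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

# The three base learners named in the abstract: SVM, K-NN and Bagging.
models = [SVC(random_state=0),
          KNeighborsClassifier(n_neighbors=5),
          BaggingClassifier(random_state=0)]
weights = []
for m in models:
    m.fit(X_tr, y_tr)
    weights.append(m.score(X_va, y_va))   # validation accuracy as vote weight

def wmv_predict(X):
    """Weighted majority vote: each model adds its weight to its chosen class."""
    votes = np.zeros((len(X), 4))
    for w, m in zip(weights, models):
        votes[np.arange(len(X)), m.predict(X)] += w
    return votes.argmax(axis=1)

print("WMV accuracy:", (wmv_predict(X_va) == y_va).mean())
```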
Overview of efficient algorithms for super-resolution DOA estimates
Institute of Scientific and Technical Information of China (English)
闫锋刚; 沈毅; 刘帅; 金铭; 乔晓林
2015-01-01
Computationally efficient methods for super-resolution direction of arrival (DOA) estimation aim to reduce the complexity of conventional techniques, to economize on the costs of systems, and to enhance the robustness of DOA estimators against array geometries and other environmental restrictions, which has been an important topic in the field. According to the theory and elements of the multiple signal classification (MUSIC) algorithm and the primary derivations from MUSIC, state-of-the-art efficient super-resolution DOA estimators are classified into five different types. These five types of approaches reduce the complexity by real-valued computation, beam-space transformation, fast subspace estimation, rapid spectral search, and no spectral search, respectively. With such a classification, comprehensive overviews of each kind of efficient method are given and numerical comparisons among these estimators are conducted to illustrate their advantages. Future development trends of efficient algorithms for super-resolution DOA estimates are finally predicted with the basic requirements of real-world applications.
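As a baseline for the efficiency comparisons surveyed here, the classical spectral-search MUSIC estimator (whose cost the five families of methods aim to reduce) can be sketched as follows; the array size, source angles, and noise level are illustrative:

```python
import numpy as np
from scipy.signal import find_peaks

def music_spectrum(X, n_sources, scan_deg):
    """Classical spectral-search MUSIC for a half-wavelength uniform linear
    array: project steering vectors onto the noise subspace of the sample
    covariance and invert the residual."""
    R = X @ X.conj().T / X.shape[1]
    _, vecs = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = vecs[:, :-n_sources]                # noise subspace
    m = np.arange(X.shape[0])
    a = np.exp(1j * np.pi * np.outer(m, np.sin(np.deg2rad(scan_deg))))
    return 1.0 / np.real(np.sum(np.abs(En.conj().T @ a) ** 2, axis=0))

# Two sources at -20 and +30 degrees, 8 elements, 200 snapshots.
rng = np.random.default_rng(4)
m, n_snap = 8, 200
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(np.deg2rad([-20.0, 30.0]))))
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
N = 0.1 * (rng.standard_normal((m, n_snap)) + 1j * rng.standard_normal((m, n_snap)))
X = A @ S + N
scan = np.arange(-90.0, 90.0, 0.5)
p = music_spectrum(X, n_sources=2, scan_deg=scan)
idx, _ = find_peaks(p)
print(np.sort(scan[idx[np.argsort(p[idx])[-2:]]]))   # expect ~[-20, 30]
```

The per-angle projection in the scan loop is exactly the cost that real-valued, beam-space, and search-free variants attack.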
International Nuclear Information System (INIS)
Hani, Ahmad Fadzil M; Younis, M Shahzad; Halim, M Firdaus M
2009-01-01
A blind deconvolution technique using a modified higher order statistics (HOS)-based eigenvector algorithm (EVA) is presented in this paper. The main purpose of the technique is to enable the processing of low SNR short length seismograms. In our study, the seismogram is assumed to be the output of a mixed phase source wavelet (system) driven by a non-Gaussian input signal (due to earth) with additive Gaussian noise. Techniques based on second-order statistics are shown to fail when processing non-minimum phase seismic signals because they only rely on the autocorrelation function of the observed signal. In contrast, existing HOS-based blind deconvolution techniques are suitable in the processing of a non-minimum (mixed) phase system; however, most of them are unable to converge and show poor performance whenever noise dominates the actual signal, especially in the cases where the observed data are limited (few samples). The developed blind equalization technique is primarily based on the EVA for blind equalization, initially to deal with mixed phase non-Gaussian seismic signals. In order to deal with the dominant noise issue and small number of available samples, certain modifications are incorporated into the EVA. For determining the deconvolution filter, one of the modifications is to use more than one higher order cumulant slice in the EVA. This overcomes the possibility of non-convergence due to a low signal-to-noise ratio (SNR) of the observed signal. The other modification conditions the cumulant slice by increasing the power of eigenvalues of the cumulant slice, related to actual signal, and rejects the eigenvalues below the threshold representing the noise. This modification reduces the effect of the availability of a small number of samples and strong additive noise on the cumulant slices. These modifications are found to improve the overall deconvolution performance, with approximately a five-fold reduction in a mean square error (MSE) and a six
Nghiem, S. V.; Brakenridge, G. R.; Nguyen, D. T.
2017-12-01
Hurricane Harvey inflicted historical catastrophic flooding across extensive regions around Houston and southeast Texas after making landfall on 25 August 2017. The Federal Emergency Management Agency (FEMA) requested urgent support for flood mapping and monitoring in an emergency response to the extreme flood situation. An innovative satellite remote sensing method, called the Depolarization Reduction Algorithm for Global Observations of inundatioN (DRAGON), has been developed and implemented for use with Sentinel synthetic aperture radar (SAR) satellite data at a resolution of 10 meters to identify, map, and monitor inundation, including pre-existing water bodies and newly flooded areas. Results from this new method are hydrologically consistent and have been verified with known surface waters (e.g., coastal ocean, rivers, lakes, reservoirs, etc.), with clear-sky high-resolution WorldView images (where waves can be seen on surface water in inundated areas within a small spatial coverage), and with other flood maps from the consortium of the Global Flood Partnership derived from multiple satellite datasets (including clear-sky Landsat and MODIS at lower resolutions). Figure 1 is a high-resolution (4K UHD) image of a composite inundation map for the region around Rosharon (in Brazoria County, south of Houston, Texas). This composite inundation map reveals extensive flooding on 29 August 2017 (four days after Hurricane Harvey made landfall), and the inundation was still persistent in most of the west and south of Rosharon one week later (5 September 2017), while flooding was reduced in the east of Rosharon. Hurricane Irma brought flooding to a number of areas in Florida. As of 10 September 2017, Sentinel SAR flood maps reveal inundation in the Florida Panhandle and over lowland surfaces on several islands in the Florida Keys. However, Sentinel SAR results indicate that flooding along the Florida coast was not extreme even though Irma was a Category-5 hurricane that might
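The abstract does not spell out DRAGON's formulation, but the physical cue it names, reduced depolarization over smooth open water, can be illustrated with a simple two-threshold mask on Sentinel-1 dual-pol backscatter. The thresholds and variable names below are assumptions for illustration only, not the DRAGON algorithm itself.

```python
import numpy as np

def water_mask(vv, vh, vh_db_max=-22.0, depol_db_max=-15.0):
    """Illustrative open-water/inundation mask from VV/VH backscatter (linear power).

    Smooth water depolarizes little, so both the cross-pol return VH and the
    depolarization ratio VH/VV are low; we flag pixels where both fall below
    (assumed) thresholds. This is a generic stand-in, not DRAGON.
    """
    vv_db = 10.0 * np.log10(vv)
    vh_db = 10.0 * np.log10(vh)
    return (vh_db < vh_db_max) & ((vh_db - vv_db) < depol_db_max)
```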
DEFF Research Database (Denmark)
Danvy, Olivier
2008-01-01
We derive two big-step abstract machines, a natural semantics, and the valuation function of a denotational semantics based on the small-step abstract machine for Core Scheme presented by Clinger at PLDI'98. Starting from a functional implementation of this small-step abstract machine, (1) we fuse its transition function with its driver loop, obtaining the functional implementation of a big-step abstract machine; (2) we adjust this big-step abstract machine so that it is in defunctionalized form, obtaining the functional implementation of a second big-step abstract machine; (3) we refunctionalize this adjusted abstract machine, obtaining the functional implementation of a natural semantics in continuation style; and (4) we closure-unconvert this natural semantics, obtaining a compositional continuation-passing evaluation function which we identify as the functional implementation...
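Step (1), fusing the transition function with its driver loop, can be seen on a toy language of integer literals and addition (our example, not Clinger's Core Scheme): iterating the small-step function to a value gives the same result as the fused big-step evaluator.

```python
# Toy language: integer literals and ('add', l, r) nodes. This only
# illustrates step (1) of the derivation, on an assumed mini-language.

def step(term):
    """One small-step transition: reduce the leftmost redex."""
    if isinstance(term, int):
        return term                      # values take no step
    op, l, r = term
    if not isinstance(l, int):
        return (op, step(l), r)
    if not isinstance(r, int):
        return (op, l, step(r))
    return l + r

def drive(term):
    """Driver loop: iterate the transition function until a value appears."""
    while not isinstance(term, int):
        term = step(term)
    return term

def eval_big(term):
    """Big-step evaluator obtained by fusing `drive` with `step`."""
    if isinstance(term, int):
        return term
    _, l, r = term
    return eval_big(l) + eval_big(r)

assert drive(('add', ('add', 1, 2), 3)) == eval_big(('add', ('add', 1, 2), 3)) == 6
```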
Guilloteau, C.; Foufoula-Georgiou, E.; Kummerow, C.; Kirstetter, P. E.
2017-12-01
A multiscale approach is used to compare precipitation fields retrieved from GMI using the latest version of the GPROF algorithm (GPROF-2017) to the DPR fields all over the globe. Using a wavelet-based spectral analysis, which renders the multi-scale decompositions of the original fields independent of each other spatially and across scales, we quantitatively assess the various scales of variability of the retrieved fields, and thus define the spatially-variable "effective resolution" (ER) of the retrievals. Globally, a strong agreement is found between passive microwave and radar patterns at scales coarser than 80 km. Over oceans the patterns match down to the 20 km scale. Over land, comparison statistics are spatially heterogeneous. In most areas a strong discrepancy is observed between passive microwave and radar patterns at scales finer than 40-80 km. The comparison is also supported by ground-based observations over the continental US derived from the NOAA/NSSL MRMS suite of products. While larger discrepancies over land than over oceans are classically explained by the complex surface emissivity of land perturbing the passive microwave retrieval, other factors are investigated here, such as intricate differences in storm structure over oceans and land. Differences in terms of statistical properties (PDF of intensities and spatial organization) of precipitation fields over land and oceans are assessed from radar data, as well as differences in the relation between the 89 GHz brightness temperature and precipitation. Moreover, the multiscale approach allows quantifying the part of the discrepancies caused by mismatch in the location of intense cells and by instrument-related geometric effects. The objective is to diagnose shortcomings of current retrieval algorithms so that targeted improvements can be made to achieve over land the same retrieval performance as over oceans.
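A crude stand-in for the paper's wavelet spectral analysis can be assembled with the PyWavelets package: decompose both fields, correlate detail coefficients scale by scale, and read off the finest scale at which agreement stays high as an "effective resolution". The wavelet, level count, and correlation statistic here are illustrative choices, not the authors' exact procedure.

```python
import numpy as np
import pywt  # PyWavelets

def scale_by_scale_correlation(field_a, field_b, wavelet="haar", levels=4):
    """Correlate two precipitation fields scale by scale.

    Decompose both fields with a 2-D wavelet transform and correlate the
    detail coefficients level by level; the finest level where correlation
    stays high indicates an 'effective resolution'.
    """
    ca = pywt.wavedec2(field_a, wavelet, level=levels)
    cb = pywt.wavedec2(field_b, wavelet, level=levels)
    corr = {}
    for lev in range(1, levels + 1):   # index 1 = coarsest detail level
        da = np.concatenate([d.ravel() for d in ca[lev]])
        db = np.concatenate([d.ravel() for d in cb[lev]])
        corr[lev] = np.corrcoef(da, db)[0, 1]
    return corr
```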
Tamimi, E.; Ebadi, H.; Kiani, A.
2017-09-01
Automatic building detection from High Spatial Resolution (HSR) images is one of the most important issues in Remote Sensing (RS). Due to the limited number of spectral bands in HSR images, using other features will lead to improved accuracy. By adding these features, the presence probability of dependent features is increased, which leads to accuracy reduction. In addition, some parameters should be determined in Support Vector Machine (SVM) classification. Therefore, it is necessary to simultaneously determine the classification parameters and select independent features according to the image type. An optimization algorithm is an efficient method to solve this problem. On the other hand, pixel-based classification faces several challenges, such as producing salt-and-pepper results and high computational time for high-dimensional data. Hence, in this paper, a novel method is proposed to optimize object-based SVM classification by applying a continuous Ant Colony Optimization (ACO) algorithm. The advantages of the proposed method are a relatively high automation level, independence of image scene and type, reduced post-processing for building edge reconstruction, and accuracy improvement. The proposed method was evaluated against pixel-based SVM and Random Forest (RF) classification in terms of accuracy. In comparison with optimized pixel-based SVM classification, the results showed that the proposed method improved the quality factor and overall accuracy by 17% and 10%, respectively. Also, with the proposed method, the Kappa coefficient was improved by 6% relative to RF classification. The processing time of the proposed method was relatively low because of its unit of image analysis (the image object). These results show the superiority of the proposed method in terms of time and accuracy.
Shi, Lei; Wan, Youchuan; Gao, Xianjun
2018-01-01
In object-based image analysis of high-resolution images, the number of features can reach hundreds, so it is necessary to perform feature reduction prior to classification. In this paper, a feature selection method based on the combination of a genetic algorithm (GA) and tabu search (TS) is presented. The proposed GATS method aims to reduce the premature convergence of the GA by the use of TS. A prematurity index is first defined to judge the convergence situation during the search. When premature convergence does take place, an improved mutation operator is executed, in which TS is performed on individuals with higher fitness values. As for the other individuals with lower fitness values, mutation with a higher probability is carried out. Experiments using the proposed GATS feature selection method and three other methods, a standard GA, the multistart TS method, and ReliefF, were conducted on WorldView-2 and QuickBird images. The experimental results showed that the proposed method outperforms the other methods in terms of the final classification accuracy. PMID:29581721
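A compact sketch of the GATS idea follows: a GA over binary feature masks in which a prematurity index (here simply mean/max population fitness, an illustrative choice) triggers a tabu-search-flavoured best-single-flip move for fit individuals and heavier mutation for the rest. All constants and the demo fitness are made up for illustration; this is not the authors' full operator set.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_ts_select(fitness, n_features, pop_size=30, gens=50, threshold=0.95):
    """Toy GA feature selection with a tabu-search-flavoured escape (GATS-like).

    fitness : callable(bool mask) -> float, e.g. cross-validated accuracy.
    The prematurity index (mean/max population fitness) and all constants are
    illustrative; pop_size is assumed even.
    """
    pop = rng.random((pop_size, n_features)) < 0.5
    for _ in range(gens):
        fit = np.array([fitness(ind) for ind in pop])
        pop = pop[np.argsort(fit)[::-1]]            # fitter individuals first
        pop[pop_size // 2:] = pop[:pop_size // 2]   # selection: clone top half
        premature = fit.mean() / (abs(fit.max()) + 1e-12) > threshold
        for i in range(1, pop_size):                # keep the single best intact
            if premature and i < pop_size // 5:
                # local move for fit individuals: best single-bit flip
                cands = [pop[i].copy() for _ in range(n_features)]
                for j, cand in enumerate(cands):
                    cand[j] = ~cand[j]
                scores = [fitness(cand) for cand in cands]
                pop[i] = cands[int(np.argmax(scores))]
            else:                                   # ordinary (heavier) mutation
                pop[i] ^= rng.random(n_features) < (0.2 if premature else 0.05)
    fit = np.array([fitness(ind) for ind in pop])
    return pop[int(np.argmax(fit))]

# toy demo: the first three of ten features are informative
best_mask = ga_ts_select(lambda m: m[:3].sum() - 0.1 * m[3:].sum(), n_features=10)
```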
International Nuclear Information System (INIS)
Wennemers, Marloes; Bussink, Johan; Grebenchtchikov, Nicolai; Sweep, Fred C.G.J.; Span, Paul N.
2011-01-01
Background: Tribbles homolog 3 (TRIB3) is a pseudokinase involved in the regulation of several signaling pathways involved in cell survival and/or cell stress. Here, we determined the correlation between breast cancer prognosis and TRIB3 protein levels and established the role of TRIB3 in cell survival after hypoxia and/or radiotherapy. Material and methods: TRIB3 mRNA and protein were quantified in a new independent breast cancer patient cohort using QPCR and a new specific avian antibody against TRIB3. In addition, we used siRNA-mediated knockdown of TRIB3 in a colony-forming assay after hypoxia and radiotherapy. Results: TRIB3 mRNA and protein levels did not correlate in breast cancer cell lines or human breast cancer material. We validated our earlier finding that high TRIB3 mRNA denotes a poor prognosis, but found that high TRIB3 protein levels were associated with a good prognosis in breast cancer patients. We also show that knockdown of TRIB3 resulted in an increased survival under hypoxic conditions. Conclusion: Whereas mRNA levels of TRIB3 are related with a poor prognosis, TRIB3 protein is associated with a good prognosis in human breast cancer patients, possibly due to the fact that TRIB3 is involved in hypoxia tolerance.
International Nuclear Information System (INIS)
Moeslund, Jesper Erenskjold; Svenning, Jens-Christian; Boecher, Peder Klith; Moelhave, Thomas; Arge, Lars
2009-01-01
This study examines the potential impact of 21st-century sea-level rise on Aarhus, the second largest city in Denmark, emphasizing the economic risk to the city's real estate. Furthermore, it assesses which adaptation measures can be taken to prevent flooding in areas particularly at risk. We combine a new national Digital Elevation Model in very fine resolution (∼2 meter), a new highly computationally efficient flooding algorithm that accurately models the influence of barriers, and geospatial data on real-estate values to assess the economic real-estate risk posed by future sea-level rise to Aarhus. Under the A2 and A1FI (IPCC) climate scenarios we show that relatively large residential areas in the northern part of the city, as well as areas around the river running through the city, are likely to become flooded in the event of extreme but realistic weather events. In addition, most of the large Aarhus harbour would also risk flooding. As much of the area at risk represents high-value real estate, it seems clear that proactive measures other than simple abandonment should be taken in order to avoid heavy economic losses. Among the different possibilities for dealing with an increased sea level, the strategic placement of flood-gates at key potential water-inflow routes and the construction or elevation of dikes seem to be the most convenient, most socially acceptable, and maybe also the cheapest solutions. Finally, we suggest that high-detail flooding models similar to those produced in this study will become an important tool for climate-change-integrated planning of future city development as well as for the development of evacuation plans.
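The key property of the flooding algorithm described, correct handling of barriers, comes from requiring connectivity to the sea rather than thresholding elevation alone. A minimal breadth-first version over a DEM grid is sketched below; the authors' algorithm is engineered for massive terrains, which this toy version is not.

```python
import numpy as np
from collections import deque

def flood_mask(dem, sea_level, seeds):
    """Barrier-aware flooding of a DEM: a cell floods only if it lies below
    sea level AND is 4-connected to the sea, so dikes in the DEM block water.

    dem : 2-D elevation array; seeds : iterable of (row, col) ocean cells.
    """
    flooded = np.zeros(dem.shape, dtype=bool)
    queue = deque(seeds)
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]):
            continue
        if flooded[r, c] or dem[r, c] > sea_level:
            continue
        flooded[r, c] = True
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return flooded
```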
Stamnes, S; Hostetler, C; Ferrare, R; Burton, S; Liu, X; Hair, J; Hu, Y; Wasilewski, A; Martin, W; van Diedenhoven, B; Chowdhary, J; Cetinić, I; Berg, L K; Stamnes, K; Cairns, B
2018-04-01
We present an optimal-estimation-based retrieval framework, the microphysical aerosol properties from polarimetry (MAPP) algorithm, designed for simultaneous retrieval of aerosol microphysical properties and ocean color bio-optical parameters using multi-angular total and polarized radiances. Polarimetric measurements from the airborne NASA Research Scanning Polarimeter (RSP) were inverted by MAPP to produce atmosphere and ocean products. The RSP MAPP results are compared with coincident lidar measurements made by the NASA High-Spectral-Resolution Lidar HSRL-1 and HSRL-2 instruments. Comparisons are made of the aerosol optical depth (AOD) at 355 and 532 nm, lidar column-averaged measurements of the aerosol lidar ratio and Ångström exponent, and lidar ocean measurements of the particulate hemispherical backscatter coefficient and the diffuse attenuation coefficient. The measurements were collected during the 2012 Two-Column Aerosol Project (TCAP) campaign and the 2014 Ship-Aircraft Bio-Optical Research (SABOR) campaign. For the SABOR campaign, 73% of RSP MAPP retrievals fall within ±0.04 AOD at 532 nm as measured by HSRL-1, with an R value of 0.933 and root-mean-square deviation of 0.0372. For the TCAP campaign, 53% of RSP MAPP retrievals are within 0.04 AOD as measured by HSRL-2, with an R value of 0.927 and root-mean-square deviation of 0.0673. Comparisons with HSRL-2 AOD at 355 nm during TCAP result in an R value of 0.959 and a root-mean-square deviation of 0.0694. The RSP retrievals using the MAPP optimal estimation framework represent a key milestone on the path to a combined lidar + polarimeter retrieval using both HSRL and RSP measurements.
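MAPP is described as an optimal-estimation retrieval; the workhorse of that family is the Gauss-Newton update on a prior-regularized cost (Rodgers-style). A one-step sketch with generic symbols is shown below; the actual MAPP state vector, forward model, and covariances are not given in the abstract, so everything here is a placeholder.

```python
import numpy as np

def oe_gauss_newton_step(x, y, F, K, Sa_inv, Se_inv, xa):
    """One Gauss-Newton step of optimal estimation.

    x  : current state (aerosol + bio-optical parameters in the MAPP setting)
    y  : measured multi-angle total/polarized radiances
    F  : forward model, F(x) -> simulated measurements
    K  : Jacobian dF/dx evaluated at x
    Sa_inv, Se_inv : inverse prior and measurement-error covariances
    xa : prior state
    """
    A = K.T @ Se_inv @ K + Sa_inv
    b = K.T @ Se_inv @ (y - F(x)) - Sa_inv @ (x - xa)
    return x + np.linalg.solve(A, b)
```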
Katsura, Masaki; Sato, Jiro; Akahane, Masaaki; Mise, Yoko; Sumida, Kaoru; Abe, Osamu
2017-08-01
To compare image quality characteristics of high-resolution computed tomography (HRCT) in the evaluation of interstitial lung disease using three different reconstruction methods: model-based iterative reconstruction (MBIR), adaptive statistical iterative reconstruction (ASIR), and filtered back projection (FBP). Eighty-nine consecutive patients with interstitial lung disease underwent standard-of-care chest CT with 64-row multi-detector CT. HRCT images were reconstructed in 0.625-mm contiguous axial slices using FBP, ASIR, and MBIR. Two radiologists independently assessed the images in a blinded manner for subjective image noise, streak artifacts, and visualization of normal and pathologic structures. Objective image noise was measured in the lung parenchyma. Spatial resolution was assessed by measuring the modulation transfer function (MTF). MBIR offered significantly lower objective image noise (22.24±4.53) than ASIR (39.76±7.41) and FBP (51.91±9.71). MTF (spatial resolution) was increased using MBIR compared with ASIR and FBP. MBIR showed improvements in visualization of normal and pathologic structures over ASIR and FBP, while ASIR was rated quite similarly to FBP. MBIR significantly improved subjective image noise compared with ASIR and FBP.
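Spatial resolution in the study is summarized by the modulation transfer function. One common way to measure an MTF, differentiating an oversampled edge profile and Fourier-transforming the result, is sketched below; the abstract does not state which measurement technique the authors used, so this is a generic illustration.

```python
import numpy as np

def mtf_from_edge(esf, pixel_pitch_mm):
    """MTF from an oversampled edge spread function (ESF).

    Differentiate the ESF to get the line spread function, window it, and take
    the magnitude of its Fourier transform, normalised to 1 at zero frequency.
    """
    lsf = np.gradient(np.asarray(esf, dtype=float))
    lsf *= np.hanning(len(lsf))                          # limit spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)  # cycles per mm
    return freqs, mtf
```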
Directory of Open Access Journals (Sweden)
Timothy Dube
2014-08-01
Full Text Available The quantification of aboveground biomass using remote sensing is critical for better understanding the role of forests in carbon sequestration and for informed sustainable management. Although remote sensing techniques have been proven useful in assessing forest biomass in general, more is required to investigate their capabilities in predicting intra- and inter-species biomass, which is mainly characterised by non-linear relationships. In this study, we tested two machine learning algorithms, Stochastic Gradient Boosting (SGB) and Random Forest (RF) regression trees, to predict intra- and inter-species biomass using high-resolution RapidEye reflectance bands as well as the derived vegetation indices in a commercial plantation. The results showed that the SGB algorithm yielded the best performance for intra- and inter-species biomass prediction, using all the predictor variables as well as based on the most important selected variables. For example, using the most important variables the algorithm produced an R2 of 0.80 and RMSE of 16.93 t·ha−1 for E. grandis; an R2 of 0.79 and RMSE of 17.27 t·ha−1 for P. taeda; and an R2 of 0.61 and RMSE of 43.39 t·ha−1 for the combined species data sets. Comparatively, RF yielded plausible results only for E. dunii (R2 of 0.79; RMSE of 7.18 t·ha−1). We demonstrated that although the two statistical methods were able to predict biomass accurately, RF produced weaker results than SGB when applied to the combined species dataset. The result underscores the relevance of stochastic models in predicting biomass drawn from different species and genera using the new generation high-resolution RapidEye sensor with strategically positioned bands.
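Both learners are available off the shelf; a minimal comparison in the spirit of the study can be run with scikit-learn, where GradientBoostingRegressor with subsample < 1 plays the role of stochastic gradient boosting. The synthetic predictors and response below are placeholders for the RapidEye bands/indices and field-measured biomass, and the hyperparameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((100, 10))                        # stand-in for bands + vegetation indices
y = 50.0 * X[:, 0] + rng.normal(0.0, 5.0, 100)   # stand-in for biomass (t/ha)

sgb = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05, subsample=0.5)
rf = RandomForestRegressor(n_estimators=500)
for name, model in [("SGB", sgb), ("RF", rf)]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: cross-validated R2 = {r2:.2f}")
```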
Computational performance of a projection and rescaling algorithm
Pena, Javier; Soheili, Negar
2018-01-01
This paper documents a computational implementation of a projection and rescaling algorithm for finding most interior solutions to the pair of feasibility problems \[ \text{find } x \in L \cap \mathbb{R}^n_{+} \quad \text{and} \quad \text{find } \hat{x} \in L^{\perp} \cap \mathbb{R}^n_{+}, \] where $L$ denotes a linear subspace in $\mathbb{R}^n$ and $L^{\perp}$ denotes its orthogonal complement. The projection and rescaling algorithm is a recently developed method that combines a ...
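For orientation, the naive approach to the same feasibility problem is plain alternating projection between the subspace L and the nonnegative orthant; projection-and-rescaling methods improve on this kind of scheme by rescaling the space when progress stalls. The sketch below is only that baseline (with L given by an assumed basis matrix B), not the authors' algorithm.

```python
import numpy as np

def alternating_projections(B, n_iter=1000, tol=1e-9):
    """Naive baseline for `find x in L ∩ R^n_+`, with L = range(B).

    Alternately project onto the subspace L and the nonnegative orthant,
    renormalizing to stay away from 0.
    """
    Q, _ = np.linalg.qr(B)              # orthonormal basis of L
    x = np.ones(B.shape[0])
    for _ in range(n_iter):
        x = Q @ (Q.T @ x)               # project onto L
        x = np.maximum(x, 0.0)          # project onto the nonnegative orthant
        nrm = np.linalg.norm(x)
        if nrm < tol:                   # no strictly feasible direction found
            break
        x /= nrm
    return x
```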
Resolution Enhancement of Multilook Imagery
Energy Technology Data Exchange (ETDEWEB)
Galbraith, Amy E. [Univ. of Arizona, Tucson, AZ (United States)
2004-07-01
This dissertation studies the feasibility of enhancing the spatial resolution of multi-look remotely-sensed imagery using an iterative resolution enhancement algorithm known as Projection Onto Convex Sets (POCS). A multi-angle satellite image modeling tool is implemented, and simulated multi-look imagery is formed to test the resolution enhancement algorithm. Experiments are done to determine the optimal configuration and number of multi-angle low-resolution images needed for a quantitative improvement in the spatial resolution of the high-resolution estimate. The important topic of aliasing is examined in the context of the POCS resolution enhancement algorithm performance. In addition, the extension of the method to multispectral sensor images is discussed and an example is shown using multispectral confocal fluorescence imaging microscope data. Finally, the remote sensing issues of atmospheric path radiance and directional reflectance variations are explored to determine their effect on the resolution enhancement performance.
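The core POCS iteration is to project the current high-resolution estimate onto the data-consistency set of each low-resolution observation in turn. A minimal sketch with integer shifts and box down-sampling (simplifying assumptions; the dissertation's sensor model is far richer) follows.

```python
import numpy as np
from scipy.ndimage import zoom

def pocs_super_resolve(lr_images, shifts, scale, n_iter=20):
    """POCS-flavoured multi-frame super-resolution sketch.

    Each low-resolution frame defines a data-consistency set; we project the
    current high-resolution estimate onto each set in turn. Integer HR-pixel
    shifts and a box down-sampling model are simplifying assumptions.
    """
    hr = zoom(lr_images[0].astype(float), scale, order=1)  # initial estimate
    for _ in range(n_iter):
        for lr, (dy, dx) in zip(lr_images, shifts):
            sim = np.roll(hr, (-dy, -dx), axis=(0, 1))     # align to this frame
            sim = sim.reshape(lr.shape[0], scale, lr.shape[1], scale).mean(axis=(1, 3))
            resid = lr - sim                               # constraint violation
            up = np.kron(resid, np.ones((scale, scale))) / scale**2  # adjoint of box mean
            hr += np.roll(up, (dy, dx), axis=(0, 1))       # back-project the residual
    return hr
```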
Directory of Open Access Journals (Sweden)
Lorena Pradenas Rojas
2008-12-01
Full Text Available This study presents a mathematical model for a generic staff-allocation problem. A solution procedure based on the Tabu Search metaheuristic is implemented and evaluated. The proposed algorithm is used to solve a real case of assigning forestry supervisors. The results show that the developed algorithm is efficient in solving this kind of problem and has a wide range of application to other real situations.
Algorithms for worst-case tolerance optimization
DEFF Research Database (Denmark)
Schjær-Jacobsen, Hans; Madsen, Kaj
1979-01-01
New algorithms are presented for the solution of optimum tolerance assignment problems. The problems considered are defined mathematically as a worst-case problem (WCP), a fixed tolerance problem (FTP), and a variable tolerance problem (VTP). The basic optimization problem without tolerances is denoted the zero tolerance problem (ZTP). For solution of the WCP we suggest application of interval arithmetic and also alternative methods. For solution of the FTP an algorithm is suggested which is conceptually similar to algorithms previously developed by the authors for the ZTP. Finally, the VTP is solved by a double-iterative algorithm in which the inner iteration is performed by the FTP-algorithm. The application of the algorithm is demonstrated by means of relatively simple numerical examples. Basic properties, such as convergence properties, are displayed based on the examples.
Fast algorithm of adaptive Fourier series
Gao, You; Ku, Min; Qian, Tao
2018-05-01
Adaptive Fourier decomposition (AFD, precisely 1-D AFD or Core-AFD) was originated for the goal of positive frequency representations of signals. It achieved the goal and at the same time offered fast decompositions of signals. There then arose several types of AFDs. AFD merged with the greedy algorithm idea and, in particular, motivated the so-called pre-orthogonal greedy algorithm (Pre-OGA), which was proven to be the most efficient greedy algorithm. The cost of the advantages of the AFD-type decompositions is, however, the high computational complexity due to the involvement of maximal selections of the dictionary parameters. The present paper offers one formulation of the 1-D AFD algorithm by building the FFT algorithm into it. Accordingly, the algorithm complexity is reduced from the original $\mathcal{O}(MN^2)$ to $\mathcal{O}(MN\log_2 N)$, where $N$ denotes the number of discretization points on the unit circle and $M$ denotes the number of points in $[0,1)$. This greatly enhances the applicability of AFD. Experiments are carried out to show the high efficiency of the proposed algorithm.
Single Image Super Resolution via Sparse Reconstruction
Kruithof, M.C.; Eekeren, A.W.M. van; Dijk, J.; Schutte, K.
2012-01-01
High resolution sensors are required for recognition purposes. Low resolution sensors, however, are still widely used. Software can be used to increase the resolution of such sensors. One way of increasing the resolution of the images produced is using multi-frame super resolution algorithms.
Ballabriga, Rafael; Vilasís-Cardona, Xavier
2009-01-01
Advances in pixel detector technology are opening up new possibilities in many fields of science. Modern High Energy Physics (HEP) experiments use pixel detectors in tracking systems where excellent spatial resolution, precise timing and high signal-to-noise ratio are required for accurate and clean track reconstruction. Many groups are working worldwide to adapt the hybrid pixel technology to other fields such as medical X-ray radiography, protein structure analysis or neutron imaging. The Medipix3 chip is a 256x256 channel hybrid pixel detector readout chip working in Single Photon Counting Mode. It has been developed with a new front-end architecture aimed at eliminating the spectral distortion produced by charge diffusion in highly segmented semiconductor detectors. In the new architecture neighbouring pixels communicate with one another. Charges can be summed event-by-event and the incoming quantum can be assigned as a single hit to the pixel with the biggest charge deposit. In the case where incoming X-...
DEFF Research Database (Denmark)
Mahnke, Martina; Uprichard, Emma
2014-01-01
Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you've hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it's not the ocean, it's the internet we're talking about, and it's not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to 'tame the algorithmic tiger'. While this is a valuable and often inspiring approach, we...
Leblanc, Thierry; Sica, Robert J.; van Gijsel, Joanna A. E.; Haefele, Alexander; Payen, Guillaume; Liberti, Gianluigi
2016-08-01
A standardized approach for the definition, propagation, and reporting of uncertainty in the temperature lidar data products contributing to the Network for the Detection for Atmospheric Composition Change (NDACC) database is proposed. One important aspect of the proposed approach is the ability to propagate all independent uncertainty components in parallel through the data processing chain. The individual uncertainty components are then combined together at the very last stage of processing to form the temperature combined standard uncertainty. The identified uncertainty sources comprise major components such as signal detection, saturation correction, background noise extraction, temperature tie-on at the top of the profile, and absorption by ozone if working in the visible spectrum, as well as other components such as molecular extinction, the acceleration of gravity, and the molecular mass of air, whose magnitudes depend on the instrument, data processing algorithm, and altitude range of interest. The expression of the individual uncertainty components and their step-by-step propagation through the temperature data processing chain are thoroughly estimated, taking into account the effect of vertical filtering and the merging of multiple channels. All sources of uncertainty except detection noise imply correlated terms in the vertical dimension, which means that covariance terms must be taken into account when vertical filtering is applied and when temperature is integrated from the top of the profile. Quantitatively, the uncertainty budget is presented in a generic form (i.e., as a function of instrument performance and wavelength), so that any NDACC temperature lidar investigator can easily estimate the expected impact of individual uncertainty components in the case of their own instrument. Using this standardized approach, an example of uncertainty budget is provided for the Jet Propulsion Laboratory (JPL) lidar at Mauna Loa Observatory, Hawai'i, which is
Equalization Algorithm for Distributed Energy Storage Systems in Islanded AC Microgrids
DEFF Research Database (Denmark)
Aldana, Nelson Leonardo Diaz; Hernández, Adriana Carolina Luna; Quintero, Juan Carlos Vasquez
2015-01-01
This paper presents a centralized strategy for equalizing the state of charge of distributed energy storage systems in an islanded ac microgrid. The strategy is based on a simple algorithm, denoted the equalization algorithm, which modifies the charge or discharge rate over time for distributed...
A simple algorithm for computing positively weighted straight skeletons of monotone polygons
Biedl, Therese; Held, Martin; Huber, Stefan; Kaaser, Dominik; Palfrader, Peter
2015-01-01
We study the characteristics of straight skeletons of monotone polygonal chains and use them to devise an algorithm for computing positively weighted straight skeletons of monotone polygons. Our algorithm runs in O(n log n) time and O(n) space, where n denotes the number of vertices of the polygon. PMID:25648376
De Götzen , Amalia; Mion , Luca; Tache , Olivier
2007-01-01
We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
Effects of ray profile modeling on resolution recovery in clinical CT
International Nuclear Information System (INIS)
Hofmann, Christian; Knaup, Michael; Kachelrieß, Marc
2014-01-01
be able to focus on comparing spatial resolution. The authors use two different simulation settings. Both are based on the geometry of a typical clinical CT system (0.7 mm detector element size at isocenter, 1024 projections per rotation). Setting one has an exaggerated source width of 5.0 mm. Setting two has a realistically small source width of 0.5 mm. The authors also investigate the transition from setting one to two. To quantify image quality, the authors analyze line profiles through the resolution patterns to define a contrast factor (CF) for contrast-resolution plots, and they compare the normalized cross-correlation (NCC) with respect to the ground truth of the circular resolution patterns. To independently analyze whether RM is of advantage, the authors implemented several iterative reconstruction algorithms: the statistical iterative reconstruction algorithm OSC, the ordered subsets simultaneous algebraic reconstruction technique (OSSART), and another statistical iterative reconstruction algorithm, denoted ordered subsets maximum likelihood (OSML). All algorithms were implemented both without RM (denoted as OSC, OSSART, and OSML) and with RM (denoted as OSC-RM, OSSART-RM, and OSML-RM). Results: For the unrealistic case of a 5.0 mm focal spot the CF can be improved by a factor of two due to RM: the 4.2 LP/cm bar pattern, which is the first bar pattern that cannot be resolved without RM, can easily be resolved with RM. For the realistic case of a 0.5 mm focus, all results show approximately the same CF. The NCC shows no significant dependency on RM when the source width is smaller than 2.0 mm (as in clinical CT). From 2.0 mm to 5.0 mm focal spot size, increasing improvements can be observed with RM. Conclusions: Geometric RM in iterative reconstruction helps improve spatial resolution if the ray cross-section is significantly larger than the ray sampling distance. In clinical CT, however, the ray is not much thicker than the distance
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic
Resolution analysis by random probing
Fichtner, Andreas; van Leeuwen, T.
2015-01-01
We develop and apply methods for resolution analysis in tomography, based on stochastic probing of the Hessian or resolution operators. Key properties of our methods are (i) low algorithmic complexity and easy implementation, (ii) applicability to any tomographic technique, including full‐waveform
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
Tel, G.
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of
Large scale tracking algorithms
Energy Technology Data Exchange (ETDEWEB)
Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Algorithm for Compressing Time-Series Data
Hawkins, S. Edward, III; Darlington, Edward Hugo
2012-01-01
An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
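The essence of the scheme, fitting each fitting interval with a truncated Chebyshev series and transmitting only the coefficients, can be reproduced with numpy's Chebyshev module. Note that chebfit is a least-squares fit, a stand-in for the near-minimax approximation described, and the block length and degree below are arbitrary illustrative choices.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def compress_block(block, degree):
    """Fit one fitting interval with a Chebyshev series; keep only the coefficients."""
    t = np.linspace(-1.0, 1.0, len(block))   # map the fitting interval onto [-1, 1]
    return C.chebfit(t, block, degree)

def decompress_block(coeffs, n_samples):
    """Reconstruct the fitting interval from its Chebyshev coefficients."""
    return C.chebval(np.linspace(-1.0, 1.0, n_samples), coeffs)

# toy stream: a 256-sample block kept as 9 coefficients (~28x compression)
t = np.linspace(0.0, 1.0, 256)
block = np.sin(2 * np.pi * 3 * t) + 0.1 * t
coeffs = compress_block(block, degree=8)
recon = decompress_block(coeffs, len(block))
print(np.max(np.abs(recon - block)))  # error spread nearly uniformly over the interval
```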
International Nuclear Information System (INIS)
2003-05-01
The purpose of this text is to put a resolution to the meeting concerning the use of weapons made of depleted uranium. The text reviews France's use of depleted uranium during the Gulf War and in other recent conflicts. The resolution gives the strictest recommendations regarding the potential health and environmental risks of using such weapons. (N.C.)
Resolution enhancement of low quality videos using a high-resolution frame
Pham, T.Q.; Van Vliet, L.J.; Schutte, K.
2006-01-01
This paper proposes an example-based Super-Resolution (SR) algorithm of compressed videos in the Discrete Cosine Transform (DCT) domain. Input to the system is a Low-Resolution (LR) compressed video together with a High-Resolution (HR) still image of similar content. Using a training set of
A novel super-resolution camera model
Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli
2015-05-01
Aiming to realize super-resolution (SR) reconstruction of single images and video, a super-resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this function, a driving device such as a piezoelectric ceramic is placed in the camera. By controlling the driving device, a set of continuous low-resolution (LR) images can be obtained and stored instantaneously, reflecting both the randomness of the displacements and the real-time performance of the storage. The low-resolution image sequences contain different redundant information and particular prior information, so it is possible to restore a super-resolution image faithfully and effectively. The sampling method is used to derive the reconstruction principle of super resolution, which establishes the possible degree of resolution improvement in theory. A learning-based super-resolution algorithm is used to reconstruct single images, and the variational Bayesian algorithm is simulated to reconstruct the low-resolution images with random displacements; it models the unknown high-resolution image, motion parameters and unknown model parameters in one hierarchical Bayesian framework. Utilizing a sub-pixel registration method, a super-resolution image of the scene can be reconstructed. The results of reconstruction from 16 images show that this camera model can increase the image resolution by a factor of 2, obtaining images with higher resolution at currently available hardware levels.
A Cooperative Framework for Fireworks Algorithm.
Zheng, Shaoqiu; Li, Junzhi; Janecek, Andreas; Tan, Ying
2017-01-01
This paper presents a cooperative framework for the fireworks algorithm (CoFFWA). A detailed analysis of the existing fireworks algorithm (FWA) and its recently developed variants has revealed that (i) the current selection strategy has the drawback that the contribution of the firework with the best fitness (denoted as the core firework) overwhelms the contributions of all other fireworks (non-core fireworks) in the explosion operator, and (ii) the Gaussian mutation operator is not as effective as it is designed to be. To overcome these limitations, the CoFFWA is proposed, which significantly improves the exploitation capability by using an independent selection method and also increases the exploration capability by incorporating a crowdness-avoiding cooperative strategy among the fireworks. Experimental results on the CEC2013 benchmark functions indicate that CoFFWA outperforms the state-of-the-art FWA variants, artificial bee colony, differential evolution, and the standard particle swarm optimization SPSO2007/SPSO2011 in terms of convergence performance.
International Nuclear Information System (INIS)
Creutz, M.
1987-11-01
A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3).
Hu, T C
2002-01-01
Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9
Optimizing Raytracing Algorithm Using CUDA
Directory of Open Access Journals (Sweden)
Sayed Ahmadreza Razian
2017-11-01
The results show that one can generate at least 11 frames per second at HD (720p) resolution on the GPU of a GT 840M graphics card using the trace method. If a better graphics card is employed, this algorithm and program can be used to generate real-time animation.
Directory of Open Access Journals (Sweden)
Anna Bourmistrova
2011-02-01
Full Text Available The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increase of forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
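The kinematic heart of the method, placing the vehicle's rotation centre at the road's centre of curvature, reduces in a bicycle-model simplification to a pair of arctangents. The sketch below assumes the centre sits abeam the centre of gravity and uses illustrative wheelbase values; the paper's dynamic correction loop is not reproduced.

```python
import numpy as np

def fourws_steering(R, l_f=1.2, l_r=1.4):
    """Kinematic 4WS angles placing the rotation centre abeam the CG at radius R.

    Bicycle-model simplification: front wheels steer toward the road-curvature
    centre, rear wheels counter-steer. R > 0 is assumed (signed-curvature
    handling omitted); l_f and l_r are illustrative CG-to-axle distances in
    metres, not values from the paper.
    """
    delta_front = np.arctan(l_f / R)
    delta_rear = -np.arctan(l_r / R)
    return delta_front, delta_rear
```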
DEFF Research Database (Denmark)
Markham, Annette
This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.
Casanova, Henri; Robert, Yves
2008-01-01
""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi
DEFF Research Database (Denmark)
Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy
2007-01-01
We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n^2 variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...
Fourier rebinning algorithm for inverse geometry CT.
Mazin, Samuel R; Pelc, Norbert J
2008-11-01
Inverse geometry computed tomography (IGCT) is a new type of volumetric CT geometry that employs a large array of x-ray sources opposite a smaller detector array. Volumetric coverage and high isotropic resolution produce very large data sets and therefore require a computationally efficient three-dimensional reconstruction algorithm. The purpose of this work was to adapt and evaluate a fast algorithm based on Defrise's Fourier rebinning (FORE), originally developed for positron emission tomography. The results were compared with the average of FDK reconstructions from each source row. The FORE algorithm is an order of magnitude faster than the FDK-type method for the case of 11 source rows. In the center of the field-of-view both algorithms exhibited the same resolution and noise performance. FORE exhibited some resolution loss (and less noise) in the periphery of the field-of-view. FORE appears to be a fast and reasonably accurate reconstruction method for IGCT.
Denotational semantics in Synthetic Guarded Domain Theory
DEFF Research Database (Denmark)
Paviotti, Marco
In functional programming, features such as recursion, recursive types and general references are central. To define the semantics of this kind of language one needs to come up with certain definitions which may be non-trivial to show well-defined, because they are circular. Domain theory has been used to solve this kind of problem for specific languages; unfortunately, this technique does not scale to more featureful languages, which has prevented it from being widely used. Step-indexing is a more general technique that has been used to break the circularity of definitions. The idea is to tweak the definition by adding a well-founded structure that gives a handle for recursion. Guarded dependent Type Theory (gDTT) is a type theory which implements step-indexing via a unary modality used to guard recursive definitions. Every circular definition is well-defined as long as the recursive variable...
Denotational semantics for guarded dependent type theory
DEFF Research Database (Denmark)
Bizjak, Aleš; Møgelberg, Rasmus Ejlers
2018-01-01
We present a new model of Guarded Dependent Type Theory (GDTT), a type theory with guarded recursion and multiple clocks in which one can program with, and reason about, coinductive types. Productivity of recursively defined coinductive programs and proofs is encoded in types using guarded recursion…; crucial for programming with coinductive types, types must be interpreted as presheaves orthogonal to the object of clocks. In the case of dependent types, this translates to a unique lifting condition similar to the one found in homotopy-theoretic models of type theory. Since the universes defined… by inclusions of clock variable contexts commute on the nose with type operations on the universes.
A denotational theory of synchronous reactive systems
Benveniste , Albert; Le Guernic , Paul; Sorel , Yves; Sorine , Michel
1992-01-01
In this paper, systems which interact permanently with their environments are considered. Such systems are encountered, for instance, in real-time control or signal processing systems, C3-systems, and man-machine interfaces, to mention just a few cases. The design and implementation of such systems require a concurrent programming language which can be used to verify and synthesize the synchronization mechanisms, and to perform transformations of the concurrent source ...
Robust MST-Based Clustering Algorithm.
Liu, Qidong; Zhang, Ruisheng; Zhao, Zhili; Wang, Zhenghai; Jiao, Mengyao; Wang, Guangjing
2018-06-01
Minimax similarity stresses the connectedness of points via mediating elements rather than favoring high mutual similarity. The grouping principle yields superior clustering results when mining arbitrarily-shaped clusters in data. However, it is not robust against noise and outliers in the data. There are two main problems with the grouping principle: first, a single object that is far away from all other objects defines a separate cluster, and second, two connected clusters would be regarded as two parts of one cluster. In order to solve such problems, we propose a robust minimum spanning tree (MST)-based clustering algorithm in this letter. First, we separate the connected objects by applying a density-based coarsening phase, resulting in a low-rank matrix in which each element denotes a supernode formed by combining a set of nodes. Then a greedy method is presented to partition those supernodes by working on the low-rank matrix. Instead of removing the longest edges from the MST, our algorithm groups the data set based on the minimax similarity. Finally, the assignment of all data points can be achieved through their corresponding supernodes. Experimental results on many synthetic and real-world data sets show that our algorithm consistently outperforms compared clustering algorithms.
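The minimax similarity the letter builds on has a convenient MST characterization: the minimax (bottleneck) distance between two points equals the largest edge weight on their unique MST path. The sketch below computes that distance matrix with SciPy; it illustrates the similarity measure only, not the letter's coarsening and supernode-partitioning algorithm.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def minimax_distances(points):
    """All-pairs minimax (path-bottleneck) distances via the MST.

    The bottleneck cost is propagated outward from each node with a
    depth-first walk over the tree.
    """
    D = squareform(pdist(points))
    mst = minimum_spanning_tree(D).toarray()
    mst = np.maximum(mst, mst.T)                    # symmetrise the tree
    n = len(points)
    adj = [np.nonzero(mst[i])[0] for i in range(n)]
    mm = np.zeros((n, n))
    for s in range(n):
        stack, seen = [s], {s}
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    mm[s, v] = max(mm[s, u], mst[u, v])
                    stack.append(v)
    return mm
```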
Comparison of two global digital algorithms for Minkowski tensor estimation
DEFF Research Database (Denmark)
The geometry of real world objects can be described by Minkowski tensors. Algorithms have been suggested to approximate Minkowski tensors if only a binary image of the object is available. This paper presents implementations of two such algorithms. The theoretical convergence properties are confirmed by simulations on test sets, and recommendations for input arguments of the algorithms are given. For increasing resolutions, we obtain more accurate estimators for the Minkowski tensors. Digitisations of more complicated objects are shown to require higher resolutions.
SPECT imaging with resolution recovery
International Nuclear Information System (INIS)
Bronnikov, A. V.
2011-01-01
Single-photon emission computed tomography (SPECT) is a method of choice for imaging spatial distributions of radioisotopes. Many applications of this method are found in nuclear industry, medicine, and biomedical research. We study mathematical modeling of a micro-SPECT system by using a point-spread function (PSF) and implement an OSEM-based iterative algorithm for image reconstruction with resolution recovery. Unlike other known implementations of the OSEM algorithm, we apply an efficient computation scheme based on a useful approximation of the PSF, which ensures relatively fast computations. The proposed approach can be applied to data acquired with any type of collimator, including parallel-beam, fan-beam, cone-beam, and pinhole collimators. Experimental results obtained with a micro-SPECT system demonstrate high efficiency of the resolution recovery. (authors)
Energy Technology Data Exchange (ETDEWEB)
2017-04-25
Gap Resolution is a software package that was developed to improve Newbler genome assemblies by automating the closure of sequence gaps caused by repetitive regions in the DNA. This is done by performing the following steps: 1) Identify each gap and distribute its data into a sub-project. 2) Assemble the data associated with each sub-project using a secondary assembler, such as Newbler or PGA. 3) Determine whether any gaps are closed after reassembly, and either design fakes (consensus sequences of closed gaps) for those that closed, or design lab experiments for those that require additional data. The software requires as input a genome assembly produced by the Newbler assembler provided by Roche, and 454 data containing paired-end reads.
Computed laminography and reconstruction algorithm
International Nuclear Information System (INIS)
Que Jiemin; Cao Daquan; Zhao Wei; Tang Xiao
2012-01-01
Computed laminography (CL) is an alternative to computed tomography if large objects are to be inspected with high resolution. This is especially true for planar objects. In this paper, we set up a new scanning geometry for CL and study the algebraic reconstruction technique (ART) for CL imaging. We compare the results of ART with different weighting functions by computer simulation with a digital phantom. The results show that the ART algorithm is a good choice for the CL system. (authors)
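The core of ART is the Kaczmarz row-action update: each measured ray projects the current image estimate onto the hyperplane defined by its equation. The sketch below (Python/NumPy) shows only this generic update on a toy system; the paper's CL scanning geometry and its particular weighting functions are not modeled here.

```python
# Generic ART (Kaczmarz) iteration for A x ~= b: sweep over the rays (rows)
# and project the estimate onto each ray's hyperplane in turn.
import numpy as np

def art(A: np.ndarray, b: np.ndarray, sweeps: int = 500, lam: float = 1.0) -> np.ndarray:
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0.0:
                continue
            residual = b[i] - A[i] @ x          # mismatch along ray i
            x += lam * residual / row_norms[i] * A[i]
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 16))               # toy system: 40 rays, 4x4 image
x_true = rng.random(16)
print(np.allclose(art(A, A @ x_true), x_true, atol=1e-4))  # consistent data: True
```

The relaxation parameter lam plays a role loosely analogous to the weighting choices compared in the paper.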
A Robust Parallel Algorithm for Combinatorial Compressed Sensing
Mendoza-Smith, Rodrigo; Tanner, Jared W.; Wechsung, Florian
2018-04-01
In previous work two of the authors have shown that a vector $x \in \mathbb{R}^n$ with at most $k$ nonzero entries can be recovered from its sketch $Ax$ in $\mathcal{O}(\mathrm{nnz}(A))$ operations by the Parallel-$\ell_0$ decoding algorithm, where $\mathrm{nnz}(A)$ denotes the number of nonzero entries in $A \in \mathbb{R}^{m \times n}$. In this paper we present the Robust-$\ell_0$ decoding algorithm, which robustifies Parallel-$\ell_0$ when the sketch $Ax$ is corrupted by additive noise. This robustness is achieved by approximating the asymptotic posterior distribution of values in the sketch given its corrupted measurements. We provide analytic expressions that approximate these posteriors under the assumptions that the nonzero entries in the signal and the noise are drawn from continuous distributions. Numerical experiments presented show that Robust-$\ell_0$ is superior to existing greedy and combinatorial compressed sensing algorithms in the presence of small to moderate signal-to-noise ratios in the setting of Gaussian signals and Gaussian additive noise.
Energy Technology Data Exchange (ETDEWEB)
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
Fast generation of multiple resolution instances of raster data sets
Arge, L.; Haverkort, H.J.; Tsirogiannis, C.P.
2012-01-01
In many GIS applications it is important to study the characteristics of a raster data set at multiple resolutions. Often this is done by generating several coarser resolution rasters from a fine resolution raster. In this paper we describe efficient algorithms for different variants of this
Deconvolution algorithms applied in ultrasonics
International Nuclear Information System (INIS)
Perrot, P.
1993-12-01
In a complete system of acquisition and processing of ultrasonic signals, it is often necessary at one stage to use processing tools to remove the influence of the different elements of that system. By that means, the final quality of the signals in terms of resolution is improved. There are two main characteristics of ultrasonic signals which make this task difficult. Firstly, the signals generated by transducers are very often non-minimum phase. The classical deconvolution algorithms are unable to deal with such characteristics. Secondly, depending on the medium, the shape of the propagating pulse evolves. The spatial invariance assumption often used in classical deconvolution algorithms is rarely valid. Many classical algorithms, parametric and non-parametric, have been investigated: the Wiener-type, the adaptive predictive techniques, the Oldenburg technique in the frequency domain, and minimum variance deconvolution. All the algorithms were first tested on simulated data. One specific experimental set-up has also been analysed. Simulated and real data have been produced. This set-up demonstrated the benefit of applying deconvolution, in terms of the achieved resolution. (author). 32 figs., 29 refs
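Of the families investigated, the Wiener-type filter is the simplest to write down: it inverts the pulse spectrum while regularizing bands where the pulse carries little energy. A minimal sketch follows, assuming the transducer pulse h and a scalar noise-to-signal ratio are known; the adaptive, Oldenburg and minimum-variance variants studied in the report are not shown.

```python
# Frequency-domain Wiener deconvolution: X = conj(H) Y / (|H|^2 + NSR).
# The NSR term keeps the filter stable where the pulse spectrum is weak.
import numpy as np

def wiener_deconvolve(y: np.ndarray, h: np.ndarray, nsr: float = 1e-2) -> np.ndarray:
    n = len(y)
    H = np.fft.fft(h, n)
    X = np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(X))

rng = np.random.default_rng(0)
x = np.zeros(256); x[[60, 64, 150]] = [1.0, -0.8, 0.5]       # sparse reflectors
h = np.sin(2 * np.pi * np.arange(16) / 8) * np.hanning(16)   # toy pulse
y = np.convolve(x, h)[:256] + 0.01 * rng.standard_normal(256)
x_hat = wiener_deconvolve(y, h)            # peaks recovered near 60, 64, 150
```

Note that this filter assumes a spatially invariant pulse; as the abstract stresses, that assumption is often violated for ultrasonic data.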
Directory of Open Access Journals (Sweden)
汪耀華 Yao-Hua Wang
2018-03-01
Full Text Available This study explores the meanings embedded in instructors' experiences of teaching Massive Open Online Courses (MOOCs). Using phenomenological in-depth interviews and data analysis, the study distills, from the instructors' inner psychological experiences, meanings common to human experience, summarized in five statements: delivering a learner-centered message; scripting and directing instructional videos built around story scenarios; valuing diverse learning experiences over fragments of knowledge; setting aside concerns about fame and profit; and sustaining the passion and drive for innovative teaching. From these descriptions, five meanings of MOOC instructors' teaching experience emerge: (1) transforming professional practical experience: infusing practical experience into course materials that differ in content and form from regular school courses, so that students find them useful and interesting; (2) using multimedia to construct teaching materials: instructors script and direct videos or animations connected to everyday life, making content easy to understand and guiding students' imagination toward the future; (3) enriching diverse learning experiences: drawing on diverse teaching resources and inviting practitioners into the course to broaden students' horizons and knowledge; (4) refining digital teaching competence: setting aside personal fame and profit to experience MOOC teaching and to gain practice and reflection; (5) embodying the spirit of learning by doing: embracing the passion for innovative teaching and daring to experiment. Finally, reflections on the meanings of these instructors' teaching experiences yield implications and recommendations.
Evaluation of deep neural networks for single image super-resolution in a maritime context
Nieuwenhuizen, R.P.J.; Kruithof, M.; Schutte, K.
2017-01-01
High resolution imagery is of crucial importance for the performance on visual recognition tasks. Super-resolution (SR) reconstruction algorithms aim to enhance the image resolution beyond the capability of the image sensor being used. Traditional SR algorithms approach this inverse problem using
Comparison of SeaWinds Backscatter Imaging Algorithms
Long, David G.
2017-01-01
This paper compares the performance and tradeoffs of various backscatter imaging algorithms for the SeaWinds scatterometer when multiple passes over a target are available. Reconstruction methods are compared with conventional gridding algorithms. In particular, the performance and tradeoffs in conventional 'drop in the bucket' (DIB) gridding at the intrinsic sensor resolution are compared to high-spatial-resolution imaging algorithms such as fine-resolution DIB (fDIB) and the scatterometer image reconstruction (SIR) that generate enhanced-resolution backscatter images. Various options for each algorithm are explored, including computation in both linear and dB space. The effects of sampling density and reconstruction quality versus time are explored. Both simulated and actual data results are considered. The results demonstrate the effectiveness of high-resolution reconstruction using SIR as well as its limitations and the limitations of DIB and fDIB. PMID:28828143
Parallelization of a blind deconvolution algorithm
Matson, Charles L.; Borelli, Kathy J.
2006-09-01
Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image, depending on how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.
RNA folding kinetics using Monte Carlo and Gillespie algorithms.
Clote, Peter; Bayegan, Amir H
2018-04-01
RNA secondary structure folding kinetics is known to be important for the biological function of certain processes, such as the hok/sok system in E. coli. Although linear algebra provides an exact computational solution of secondary structure folding kinetics with respect to the Turner energy model for tiny (~20 nt) RNA sequences, the folding kinetics for larger sequences can only be approximated by binning structures into macrostates in a coarse-grained model, or by repeatedly simulating secondary structure folding with either the Monte Carlo algorithm or the Gillespie algorithm. Here we investigate the relation between the Monte Carlo algorithm and the Gillespie algorithm. We prove that asymptotically, the expected time for a K-step trajectory of the Monte Carlo algorithm is equal to ⟨D⟩ times that of the Gillespie algorithm, where ⟨D⟩ denotes the Boltzmann expected network degree. If the network is regular (i.e. every node has the same degree), then the mean first passage time (MFPT) computed by the Monte Carlo algorithm is equal to the MFPT computed by the Gillespie algorithm multiplied by ⟨D⟩; however, this is not true for non-regular networks. In particular, RNA secondary structure folding kinetics, as computed by the Monte Carlo algorithm, is not equal to the folding kinetics, as computed by the Gillespie algorithm, although the mean first passage times are roughly correlated. Simulation software for RNA secondary structure folding according to the Monte Carlo and Gillespie algorithms is publicly available, as is our software to compute the expected degree of the network of secondary structures of a given RNA sequence; see http://bioinformatics.bc.edu/clote/RNAexpNumNbors.
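The two simulators differ in how they advance time: a Monte Carlo step costs one unit of time regardless of the current state's neighborhood, while a Gillespie step draws an exponential waiting time governed by the total outflow rate. The toy sketch below (plain Python/NumPy on a hypothetical three-state rate network, not the authors' RNA software) contrasts the two step rules; the degree dependence of the Gillespie waiting time is what produces the ⟨D⟩ factor discussed above.

```python
# One step of each simulation scheme on a generic transition-rate network.
import numpy as np

def gillespie_step(rates: dict, state: str, rng) -> tuple:
    """Continuous time: waiting time ~ Exp(total rate out of the state)."""
    neighbors, ks = zip(*rates[state].items())
    total = float(sum(ks))
    dt = rng.exponential(1.0 / total)
    nxt = rng.choice(len(ks), p=np.array(ks) / total)  # pick move by its rate
    return neighbors[nxt], dt

def monte_carlo_step(rates: dict, state: str, rng) -> tuple:
    """Discrete time: uniform neighbor, toy Metropolis-style acceptance."""
    neighbors = list(rates[state])
    cand = neighbors[rng.integers(len(neighbors))]
    if rng.random() < min(1.0, rates[state][cand]):    # accept or stay put
        state = cand
    return state, 1.0                                  # one unit per step

rng = np.random.default_rng(1)
rates = {"A": {"B": 2.0, "C": 0.5}, "B": {"A": 1.0}, "C": {"A": 1.0}}
print(gillespie_step(rates, "A", rng), monte_carlo_step(rates, "A", rng))
```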
High-Resolution Sonars: What Resolution Do We Need for Target Recognition?
Directory of Open Access Journals (Sweden)
Pailhas Yan
2010-01-01
Full Text Available Target recognition in sonar imagery has long been an active research area in the maritime domain, especially in the mine-countermeasure context. Recently it has received even more attention as new sensors with increased resolution have been developed; new threats to critical maritime assets and a new paradigm for target recognition based on autonomous platforms have emerged. With the recent introduction of Synthetic Aperture Sonar systems and high-frequency sonars, sonar resolution has dramatically increased and noise levels decreased. Sonar images are distance images, but at high resolution they tend to appear visually as optical images. Traditionally, algorithms have been developed specifically for imaging sonars because of their limited resolution and high noise levels. With high-resolution sonars, algorithms developed in the image processing field for natural images become applicable. However, the lack of large datasets has hampered the development of such algorithms. Here we present a fast and realistic sonar simulator enabling development and evaluation of such algorithms. We develop a classifier and then analyse its performance using our simulated synthetic sonar images. Finally, we discuss sensor resolution requirements to achieve effective classification of various targets and demonstrate that with high-resolution sonars, target highlight analysis is the key for target recognition.
Speckle imaging algorithms for planetary imaging
Energy Technology Data Exchange (ETDEWEB)
Johansson, E. [Lawrence Livermore National Lab., CA (United States)
1994-11-15
I will discuss the speckle imaging algorithms used to process images of the impact sites of the collision of comet Shoemaker-Levy 9 with Jupiter. The algorithms use a phase retrieval process based on the average bispectrum of the speckle image data. High resolution images are produced by estimating the Fourier magnitude and Fourier phase of the image separately, then combining them and inverse transforming to achieve the final result. I will show raw speckle image data and high-resolution image reconstructions from our recent experiment at Lick Observatory.
Algorithmic randomness and physical entropy
International Nuclear Information System (INIS)
Zurek, W.H.
1989-01-01
Algorithmic randomness provides a rigorous, entropylike measure of disorder of an individual, microscopic, definite state of a physical system. It is defined by the size (in binary digits) of the shortest message specifying the microstate uniquely up to the assumed resolution. Equivalently, algorithmic randomness can be expressed as the number of bits in the smallest program for a universal computer that can reproduce the state in question (for instance, by plotting it with the assumed accuracy). In contrast to the traditional definitions of entropy, algorithmic randomness can be used to measure disorder without any recourse to probabilities. Algorithmic randomness is typically very difficult to calculate exactly but relatively easy to estimate. In large systems, probabilistic ensemble definitions of entropy (e.g., coarse-grained entropy of Gibbs and Boltzmann's entropy H = ln W, as well as Shannon's information-theoretic entropy) provide accurate estimates of the algorithmic entropy of an individual system or its average value for an ensemble. One is thus able to rederive much of thermodynamics and statistical mechanics in a setting very different from the usual. Physical entropy, I suggest, is a sum of (i) the missing information measured by Shannon's formula and (ii) the algorithmic information content (algorithmic randomness) present in the available data about the system. This definition of entropy is essential in describing the operation of thermodynamic engines from the viewpoint of information gathering and using systems. These Maxwell demon-type entities are capable of acquiring and processing information and therefore can "decide" on the basis of the results of their measurements and computations the best strategy for extracting energy from their surroundings. From their internal point of view the outcome of each measurement is definite.
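The remark that algorithmic randomness is "relatively easy to estimate" is usually made operational through compression: the compressed length of a microstate's description upper-bounds its algorithmic information content. A toy Python illustration, with zlib as a crude stand-in for the shortest program:

```python
# Estimate algorithmic information content by compressed size (an upper
# bound): a regular microstate compresses far below a disordered one.
import random
import zlib

def compressed_bits(s: bytes) -> int:
    return 8 * len(zlib.compress(s, level=9))

ordered = b"01" * 4096                                    # regular pattern
rng = random.Random(0)
disordered = bytes(rng.getrandbits(8) for _ in range(8192))
print(compressed_bits(ordered), compressed_bits(disordered))
# The ordered string needs only a few hundred bits; the random one stays
# close to its raw 65536 bits, mirroring low versus high physical entropy.
```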
Pseudo-deterministic Algorithms
Goldwasser , Shafi
2012-01-01
In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they can not be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...
Coordination Logic for Repulsive Resolution Maneuvers
Narkawicz, Anthony J.; Munoz, Cesar A.; Dutle, Aaron M.
2016-01-01
This paper presents an algorithm for determining the direction an aircraft should maneuver in the event of a potential conflict with another aircraft. The algorithm is implicitly coordinated, meaning that with perfectly reliable computations and information, it will independently provide directional information that is guaranteed to be coordinated without any additional information exchange or direct communication. The logic is inspired by the logic of TCAS II, the airborne system designed to reduce the risk of mid-air collisions between aircraft. TCAS II provides pilots with only vertical resolution advice, while the proposed algorithm, using a similar logic, provides implicitly coordinated vertical and horizontal directional advice.
Comparison of turbulence mitigation algorithms
Kozacik, Stephen T.; Paolini, Aaron; Sherman, Ariel; Bonnett, James; Kelmelis, Eric
2017-07-01
When capturing imagery over long distances, atmospheric turbulence often degrades the data, especially when observation paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios while others may only be suitable for postprocessing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005. We will compare techniques from the literature with our commercially available, real-time, GPU-accelerated turbulence mitigation software. These comparisons will be made using real (not synthetic), experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics will include image quality, video latency, computational complexity, and potential for real-time operation. Additionally, we will present a technique for quantitatively comparing turbulence mitigation algorithms using real images of radial resolution targets.
Fast generation of multiple resolution instances of raster data sets
DEFF Research Database (Denmark)
Arge, Lars; Haverkort, Herman; Tsirogiannis, Constantinos
2012-01-01
In many GIS applications it is important to study the characteristics of a raster data set at multiple resolutions. Often this is done by generating several coarser resolution rasters from a fine resolution raster. In this paper we describe efficient algorithms for different variants of this problem. For one variant we describe an algorithm that runs in O(U log N) time in internal memory, where U is the size of the output. We show how this algorithm can be adapted to perform efficiently in the external memory setting, that is, when the input raster is larger than the main memory of the computer, using O(sort(U)) data transfers from the disk. We also provide two algorithms that solve the problem in external memory; the first external algorithm is very easy to implement and requires O(sort(N)) data block transfers from/to the external memory. We have also implemented two of the presented algorithms....
LAI inversion algorithm based on directional reflectance kernels.
Tang, S; Chen, J M; Zhu, Q; Li, X; Chen, M; Sun, R; Zhou, Y; Deng, F; Xie, D
2007-11-01
Leaf area index (LAI) is an important ecological and environmental parameter. A new LAI algorithm is developed using the principles of ground LAI measurements based on canopy gap fraction. First, the relationship between LAI and gap fraction at various zenith angles is derived from the definition of LAI. Then, the directional gap fraction is acquired from a remote sensing bidirectional reflectance distribution function (BRDF) product. This is obtained by using a kernel-driven model and a large-scale directional gap fraction algorithm. The algorithm has been applied to estimate the LAI distribution in China in mid-July 2002. Ground data acquired from two field experiments in Changbai Mountain and Qilian Mountain were used to validate the algorithm. To resolve the scale discrepancy between high resolution ground observations and low resolution remote sensing data, two TM images with a resolution approaching the size of the ground plots were used to relate the coarse resolution LAI map to the ground measurements. First, an empirical relationship between the measured LAI and a vegetation index was established. Next, a high resolution LAI map was generated using this relationship. The LAI value of a low resolution pixel was then calculated as the area-weighted sum of the high resolution LAIs composing that pixel. The results of this comparison showed that the inversion algorithm has an accuracy of 82%. Factors that may influence the accuracy are also discussed in this paper.
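The ground-measurement principle the algorithm builds on is the Beer-Lambert gap-fraction relation P(θ) = exp(−G(θ)·LAI/cos θ), which can be inverted for LAI at a single view zenith angle. A minimal sketch is shown below; the kernel-driven BRDF step that supplies P(θ) from satellite data is not reproduced, and G = 0.5 assumes a spherical leaf-angle distribution.

```python
# Invert the gap-fraction relation P(theta) = exp(-G * LAI / cos(theta)).
import math

def lai_from_gap_fraction(p_gap: float, theta_deg: float, g: float = 0.5) -> float:
    """p_gap: directional gap fraction; g: leaf projection function G(theta)."""
    theta = math.radians(theta_deg)
    return -math.cos(theta) * math.log(p_gap) / g

print(lai_from_gap_fraction(p_gap=0.20, theta_deg=30.0))  # about 2.8
```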
Hamiltonian Algorithm Sound Synthesis
大矢, 健一
2013-01-01
Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.
Progressive geometric algorithms
Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.
2015-01-01
Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms
Progressive geometric algorithms
Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.
2014-01-01
Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms
Automated Conflict Resolution For Air Traffic Control
Erzberger, Heinz
2005-01-01
The ability to detect and resolve conflicts automatically is considered to be an essential requirement for the next generation air traffic control system. While systems for automated conflict detection have been used operationally by controllers for more than 20 years, automated resolution systems have so far not reached the level of maturity required for operational deployment. Analytical models and algorithms for automated resolution have yet to be tested under realistic traffic conditions to demonstrate that they can handle the complete spectrum of conflict situations encountered in actual operations. The resolution algorithm described in this paper was formulated to meet the performance requirements of the Automated Airspace Concept (AAC). The AAC, which was described in a recent paper [1], is a candidate for the next generation air traffic control system. The AAC's performance objectives are to increase safety and airspace capacity and to accommodate user preferences in flight operations to the greatest extent possible. In the AAC, resolution trajectories are generated by an automation system on the ground and sent to the aircraft autonomously via data link. The algorithm generating the trajectories must take into account the performance characteristics of the aircraft and the route structure of the airway system, and it must be capable of resolving all types of conflicts for properly equipped aircraft without requiring supervision and approval by a controller. Furthermore, the resolution trajectories should be compatible with the clearances, vectors and flight plan amendments that controllers customarily issue to pilots in resolving conflicts. The algorithm described herein, although formulated specifically to meet the needs of the AAC, provides a generic engine for resolving conflicts. Thus, it can be incorporated into any operational concept that requires a method for automated resolution, including concepts for autonomous air-to-air resolution.
DEFF Research Database (Denmark)
Bucher, Taina
2017-01-01
How does the awareness of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself....
Energy Technology Data Exchange (ETDEWEB)
Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30-60 in computing time and a factor of over 100 in matrix storage space.
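For reference, the QR algorithm that BR is benchmarked against repeatedly factors the matrix and multiplies the factors in reverse order, a similarity transform that drives the matrix toward triangular form. A toy unshifted version in Python/NumPy (production implementations use shifts and implicit bulge chasing, which is where BR's savings arise):

```python
# Unshifted QR iteration: A <- R Q preserves eigenvalues and converges
# toward (quasi-)triangular form for well-separated spectra.
import numpy as np

def qr_eigenvalues(H: np.ndarray, iters: int = 500) -> np.ndarray:
    A = H.copy()
    for _ in range(iters):
        Q, R = np.linalg.qr(A)
        A = R @ Q                  # similarity transform: same spectrum
    return np.sort(np.diag(A))     # diagonal holds the (real) eigenvalues

rng = np.random.default_rng(2)
M = rng.standard_normal((6, 6)); M = M + M.T   # symmetric => real spectrum
print(qr_eigenvalues(M))
print(np.sort(np.linalg.eigvalsh(M)))          # the two should agree closely
```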
Catheter Calibration Using Template Matching Line Interpolation Algorithm
National Research Council Canada - National Science Library
Nagy, L
2001-01-01
..., such as: image resolution, type of the calibration, algorithm used for contour detection, size of the FOV, and other parameters of the image. The studied calibration method is the one using catheter size...
Statistical Assessment of Gene Fusion Detection Algorithms using RNA Sequencing Data
Varadan, V.; Janevski, A.; Kamalakaran, S.; Banerjee, N.; Harris, L.; Dimitrova, D.
2012-01-01
The detection and quantification of fusion transcripts has both biological and clinical implications. RNA sequencing technology provides a means for unbiased and high-resolution characterization of fusion transcript information in tissue samples. We evaluated two fusion-detection algorithms,
Algorithmically specialized parallel computers
Snyder, Lawrence; Gannon, Dennis B
1985-01-01
Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor communication, pipelined architecture for search tree maintenance, and specialized computer organization for raster
Stratway: A Modular Approach to Strategic Conflict Resolution
Hagen, George E.; Butler, Ricky W.; Maddalon, Jeffrey M.
2011-01-01
In this paper we introduce Stratway, a modular approach to finding long-term strategic resolutions to conflicts between aircraft. The modular approach provides both advantages and disadvantages. Our primary concern is to investigate the implications for the verification of safety-critical properties of a strategic resolution algorithm. By partitioning the problem into verifiable modules, much stronger verification claims can be established. Since strategic resolution involves searching for solutions over an enormous state space, Stratway, like most similar algorithms, searches these spaces by applying heuristics, which present especially difficult verification challenges. An advantage of a modular approach is that it makes a clear distinction between the resolution function and the trajectory generation function. This allows the resolution computation to be independent of any particular vehicle. The Stratway algorithm was developed in both Java and C++ and is available through an open-source license. Additionally there is a visualization application that is helpful when analyzing and quickly creating conflict scenarios.
Very low resolution face recognition problem.
Zou, Wilman W W; Yuen, Pong C
2012-01-01
This paper addresses the very low resolution (VLR) problem in face recognition, in which the resolution of the face image to be recognized is lower than 16 × 16. With the increasing demand for surveillance-camera-based applications, the VLR problem occurs in many face application systems. Existing face recognition algorithms are not able to give satisfactory performance on VLR face images. While face super-resolution (SR) methods can be employed to enhance the resolution of the images, the existing learning-based face SR methods do not perform well on such VLR face images. To overcome this problem, this paper proposes a novel approach to learn the relationship between the high-resolution image space and the VLR image space for face SR. Based on this new approach, two constraints, namely, new data and discriminative constraints, are designed for good visual quality and for face recognition applications under the VLR problem, respectively. Experimental results show that the proposed SR algorithm based on relationship learning outperforms the existing algorithms on public face databases.
Learning from errors in super-resolution.
Tang, Yi; Yuan, Yuan
2014-11-01
A novel framework of learning-based super-resolution is proposed by employing the process of learning from the estimation errors. The estimation errors generated by different learning-based super-resolution algorithms are statistically shown to be sparse and uncertain. The sparsity of the estimation errors means that most estimation errors are small. The uncertainty of the estimation errors means that the locations of the pixels with larger estimation errors are random. Exploiting this prior information about the estimation errors, a nonlinear boosting process of learning from these estimation errors is introduced into the general framework of learning-based super-resolution. Within the novel framework of super-resolution, a low-rank decomposition technique is used to share the information of different super-resolution estimations and to remove the sparse estimation errors from different learning algorithms or training samples. The experimental results show the effectiveness and the efficiency of the proposed framework in enhancing the performance of different learning-based algorithms.
Yan, Kang K; Zhao, Hongyu; Pang, Herbert
2017-12-06
High-throughput sequencing data are widely collected and analyzed in the study of complex diseases in quest of improving human health. Well-studied algorithms mostly deal with a single data source and cannot fully utilize the potential of these multi-omics data sources. In order to provide a holistic understanding of human health and diseases, it is necessary to integrate multiple data sources. Several algorithms have been proposed so far; however, a comprehensive comparison of data integration algorithms for classification of binary traits is currently lacking. In this paper, we focus on two common classes of integration algorithms: graph-based algorithms, which depict relationships among subjects (denoted by nodes) via edges, and kernel-based algorithms, which can generate a classifier in feature space. Our paper provides a comprehensive comparison of their performance in terms of various measurements of classification accuracy and computation time. Seven different integration algorithms, including graph-based semi-supervised learning, graph sharpening integration, composite association network, Bayesian network, semi-definite programming-support vector machine (SDP-SVM), relevance vector machine (RVM) and Ada-boost relevance vector machine, are compared and evaluated with hypertension and two cancer data sets in our study. In general, kernel-based algorithms create more complex models and require longer computation time, but they tend to perform better than graph-based algorithms. Graph-based algorithms have the advantage of being computationally faster. The empirical results demonstrate that composite association network, relevance vector machine, and Ada-boost RVM are the better performers. We provide recommendations on how to choose an appropriate algorithm for integrating data from multiple sources.
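To make the kernel-based class concrete, the simplest integration scheme computes one kernel per omics source and feeds their unweighted sum to an SVM with a precomputed kernel. The sketch below uses scikit-learn on synthetic stand-ins for two data sources; the paper's SDP-SVM learns the combination weights instead, which is not shown here.

```python
# Multiple-kernel integration in its simplest form: sum per-source kernels,
# then train an SVM on the combined (precomputed) kernel.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(3)
y = rng.integers(0, 2, 100)                                  # binary trait
expr = rng.standard_normal((100, 500)) + 0.3 * y[:, None]    # "expression"
meth = rng.standard_normal((100, 200)) + 0.2 * y[:, None]    # "methylation"

K = rbf_kernel(expr) + rbf_kernel(meth)                      # integrated kernel
clf = SVC(kernel="precomputed").fit(K[:80, :80], y[:80])
print(clf.score(K[80:, :80], y[80:]))    # test rows vs. training columns
```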
Chandra ACIS Sub-pixel Resolution
Kim, Dong-Woo; Anderson, C. S.; Mossman, A. E.; Allen, G. E.; Fabbiano, G.; Glotfelty, K. J.; Karovska, M.; Kashyap, V. L.; McDowell, J. C.
2011-05-01
We investigate how to achieve the best possible ACIS spatial resolution by binning at the ACIS sub-pixel level and applying an event repositioning algorithm after removing pixel randomization from the pipeline data. We quantitatively assess the improvement in spatial resolution by (1) measuring point source sizes and (2) detecting faint point sources. The size of a bright (but not piled-up), on-axis point source can be reduced by about 20-30%. With the improved resolution, we detect 20% more faint sources when they are embedded in extended, diffuse emission in a crowded field. We further discuss the false source rate of about 10% among the newly detected sources, using a few ultra-deep observations. We also find that the new algorithm does not introduce a grid structure by an aliasing effect for dithered observations and does not worsen the positional accuracy.
Multi-resolution inversion algorithm for the attenuated radon transform
Barbano, Paolo Emilio; Fokas, Athanasios S.
2011-01-01
We present a FAST implementation of the Inverse Attenuated Radon Transform which incorporates accurate collimator response, as well as artifact rejection due to statistical noise and data corruption. This new reconstruction procedure is performed
Towards High Resolution Numerical Algorithms for Wave Dominated Physical Phenomena
2009-01-30
Energy Technology Data Exchange (ETDEWEB)
Li, Si; Xu, Yuesheng, E-mail: yxu06@syr.edu [Guangdong Provincial Key Laboratory of Computational Science, School of Mathematics and Computational Sciences, Sun Yat-sen University, Guangzhou 510275 (China); Zhang, Jiahan; Lipson, Edward [Department of Physics, Syracuse University, Syracuse, New York 13244 (United States); Krol, Andrzej; Feiglin, David [Department of Radiology, SUNY Upstate Medical University, Syracuse, New York 13210 (United States); Schmidtlein, C. Ross [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Vogelsang, Levon [Carestream Health, Rochester, New York 14608 (United States); Shen, Lixin [Guangdong Provincial Key Laboratory of Computational Science, School of Mathematics and Computational Sciences, Sun Yat-sen University, Guangzhou 510275, China and Department of Mathematics, Syracuse University, Syracuse, New York 13244 (United States)
2015-08-15
Purpose: The authors have recently developed a preconditioned alternating projection algorithm (PAPA) with total variation (TV) regularizer for solving the penalized-likelihood optimization model for single-photon emission computed tomography (SPECT) reconstruction. This algorithm belongs to a novel class of fixed-point proximity methods. The goal of this work is to investigate how PAPA performs while dealing with realistic noisy SPECT data, to compare its performance with more conventional methods, and to address issues with TV artifacts by proposing a novel form of the algorithm invoking high-order TV regularization, denoted as HOTV-PAPA, which has been explored and studied extensively in the present work. Methods: Using Monte Carlo methods, the authors simulate noisy SPECT data from two water cylinders; one contains lumpy “warm” background and “hot” lesions of various sizes with Gaussian activity distribution, and the other is a reference cylinder without hot lesions. The authors study the performance of HOTV-PAPA and compare it with PAPA using first-order TV regularization (TV-PAPA), the Panin–Zeng–Gullberg one-step-late method with TV regularization (TV-OSL), and an expectation–maximization algorithm with Gaussian postfilter (GPF-EM). The authors select penalty-weights (hyperparameters) by qualitatively balancing the trade-off between resolution and image noise separately for TV-PAPA and TV-OSL. However, the authors arrived at the same penalty-weight value for both of them. The authors set the first penalty-weight in HOTV-PAPA equal to the optimal penalty-weight found for TV-PAPA. The second penalty-weight needed for HOTV-PAPA is tuned by balancing resolution and the severity of staircase artifacts. The authors adjust the Gaussian postfilter to approximately match the local point spread function of GPF-EM and HOTV-PAPA. The authors examine hot lesion detectability, study local spatial resolution, analyze background noise properties, estimate mean
Quantum Computation and Algorithms
International Nuclear Information System (INIS)
Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.
1999-01-01
It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution.
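For the standard case of a single marked element among N, with k_t the amplitude of the marked state and l_t the common amplitude of the unmarked states, one Grover iteration gives the recursion below (the talk treats general initial distributions; this is the textbook special case):

```latex
\begin{align}
  k_{t+1} &= \frac{N-2}{N}\,k_t + \frac{2(N-1)}{N}\,l_t,\\
  l_{t+1} &= \frac{N-2}{N}\,l_t - \frac{2}{N}\,k_t.
\end{align}
```

Iterating from the uniform start k_0 = l_0 = 1/√N yields the familiar k_t = sin((2t+1)θ) with sin θ = 1/√N, and hence the O(√N) query count.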
Solving Hub Network Problem Using Genetic Algorithm
Directory of Open Access Journals (Sweden)
Mursyid Hasan Basri
2012-01-01
Full Text Available This paper addresses a network problem that is described as follows. There are n ports that interact, and p of those will be designated as hubs. All hubs are fully interconnected. Each spoke will be allocated to only one of the available hubs. Direct connection between two spokes is allowed only if they are allocated to the same hub. The latter is a distinct characteristic that differentiates it from a pure hub-and-spoke system, in which direct connection between two spokes is not allowed. The problem is where to locate the hub ports and to which hub each spoke should be allocated so that total transportation cost is minimized. In the first model, some additional aspects are taken into consideration in order to achieve a better representation of the problem. First, weekly service should be accomplished. Second, various vessel types should be considered. Last, a concept of an inter-hub discount factor is introduced. The last aspect represents a cost-reduction factor at hub ports due to economies of scale. In practice, it is common that the cost rate for inter-hub movement is less than the cost rate for movement between a hub and an origin/destination. In this first model, the inter-hub discount factor is assumed to be independent of the amount of flow on inter-hub links (denoted as the flow-independent discount policy). The results indicated that the patterns of enlargement of container ship size are, to some degree, similar to those in the Kurokawa study. However, with regard to hub locations, the results have not represented real practice. In the proposed model, the unsatisfactory result on hub locations is addressed. One aspect that could possibly be improved to find better hub locations is the inter-hub discount factor. The inter-hub discount factor is therefore assumed to depend on the amount of inter-hub flow (denoted as the flow-dependent discount policy). There are two discount functions examined in this paper. Both functions are characterized by
An interactive, web-based tool for genealogical entity resolution
Efremova, I.; Ranjbar-Sahraei, B.; Oliehoek, F.A.; Calders, T.G.K.; Tuyls, K.P.
2013-01-01
We demonstrate an interactive, web-based tool which helps historians to do Genealogical Entity Resolution. This work has two main goals. First, it uses Machine Learning (ML) algorithms to assist humanities researchers to perform Genealogical Entity Resolution. Second, it facilitates the generation
Generic Entity Resolution in Relational Databases
Sidló, Csaba István
Entity Resolution (ER) covers the problem of identifying distinct representations of real-world entities in heterogeneous databases. We consider the generic formulation of ER problems (GER) with exact outcome. In practice, input data usually resides in relational databases and can grow to huge volumes. Yet, typical solutions described in the literature employ standalone memory-resident algorithms. In this paper we utilize facilities of standard, unmodified relational database management systems (RDBMS) to enhance the efficiency of GER algorithms. We study and revise the problem formulation, and propose practical and efficient algorithms optimized for RDBMS external memory processing. We outline a real-world scenario and demonstrate the advantage of our algorithms by performing experiments on insurance customer data.
International Nuclear Information System (INIS)
Chandrasekharan, Shailesh
2000-01-01
Cluster algorithms have recently been used to eliminate sign problems that plague Monte Carlo methods in a variety of systems. In particular, such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions, we discuss the ideas underlying the algorithm.
Autonomous Star Tracker Algorithms
DEFF Research Database (Denmark)
Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren
1998-01-01
Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performance.
Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa
2018-01-01
The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as the LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation algorithm for the shortest vector problem (SVP), which is an NP-hard problem,
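In dimension two, basis reduction can be done exactly by the classical Lagrange-Gauss algorithm, of which LLL is the higher-dimensional (approximate) generalization. A compact Python sketch:

```python
# Lagrange-Gauss reduction of a 2D integer lattice basis: alternately
# size-reduce the longer vector against the shorter and swap, until ordered.
import numpy as np

def lagrange_gauss(u: np.ndarray, v: np.ndarray):
    u, v = u.astype(np.int64), v.astype(np.int64)
    if u @ u > v @ v:
        u, v = v, u
    while True:
        m = round((u @ v) / (u @ u))   # nearest-integer Gram coefficient
        v = v - m * u                  # size-reduction step
        if v @ v >= u @ u:             # ordered again => basis is reduced
            return u, v
        u, v = v, u

b1, b2 = np.array([201, 37]), np.array([1648, 297])
print(lagrange_gauss(b1, b2))          # short, nearly orthogonal basis
```

Here the first returned vector is a shortest nonzero lattice vector; in higher dimensions LLL guarantees only an exponential approximation factor for SVP.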
High resolution tomographic instrument development
International Nuclear Information System (INIS)
1992-01-01
Our recent work has concentrated on the development of high-resolution PET instrumentation reflecting in part the growing importance of PET in nuclear medicine imaging. We have developed a number of positron imaging instruments and have the distinction that every instrument has been placed in operation and has had an extensive history of application for basic research and clinical study. The present program is a logical continuation of these earlier successes. PCR-I, a single-ring positron tomograph, was the first demonstration of analog coding using BGO. It employed 4 mm detectors and is currently being used for a wide range of biological studies. These are of immense importance in guiding the direction for future instruments. In particular, PCR-II, a volume-sensitive positron tomograph with 3 mm spatial resolution, has benefited greatly from the studies using PCR-I. PCR-II is currently in the final stages of assembly and testing and will shortly be placed in operation for imaging phantoms, animals and ultimately humans. Perhaps the most important finding resulting from our previous study is that resolution and sensitivity must be carefully balanced to achieve a practical high resolution system. PCR-II has been designed to have the detection characteristics required to achieve 3 mm resolution in human brain under practical imaging situations. The development of algorithms by the group headed by Dr. Chesler is based on a long history of prior study including his joint work with Drs. Pelc, Reiderer and Stearns. This body of expertise will be applied to the processing of data from PCR-II when it becomes operational.
Environmental Systems Conflict Resolution
Hipel, K. W.
2017-12-01
The Graph Model for Conflict Resolution (GMCR) is applied to a real-life groundwater contamination dispute to demonstrate how one can realistically model and analyze the controversy in order to obtain an enhanced understanding and strategic insights for permitting one to make informed decisions. This highly divisive conflict is utilized to explain a rich range of inherent capabilities of GMCR, as well as worthwhile avenues for extensions, which make GMCR a truly powerful decision technology for addressing challenging conflict situations. For instance, a flexible preference elicitation method called option prioritization can be employed to obtain the relative preferences of each decision maker (DM) in the dispute over the states or scenarios which can occur, based upon preference statements regarding the options or courses of actions available to the DMs. Solution concepts, reflecting the way a chess player thinks in terms of moves and counter-moves, are defined to mirror the ways humans may behave under conflict, varying from short- to long-term thinking. After ascertaining the best outcome that a DM can achieve on his or her own in a conflict, coalition analysis algorithms are available to check if a DM can fare even better via cooperating with others. The ability of GMCR to take into account emotions, strength of preference, attitudes, misunderstandings (referred to as hypergames), and uncertain preferences (unknown, fuzzy, grey and probabilistic) greatly broadens its scope of applicability. Techniques for tracing how a conflict can evolve over time from a status quo state to a final specified outcome, as well as how to handle hierarchical structures, such as when a central government interacts with its provinces or states, further reinforce the comprehensive nature of GMCR. Within ongoing conflict research mimicking how physical systems are analyzed, methods for inverse engineering of preferences are explained for determining the preferences required by one or
A micro-hydrology computation ordering algorithm
Croley, Thomas E.
1980-11-01
Discrete-distributed-parameter models are essential for watershed modelling where practical consideration of spatial variations in watershed properties and inputs is desired. Such modelling is necessary for analysis of detailed hydrologic impacts from management strategies and land-use effects. Trade-offs between model validity and model complexity exist in resolution of the watershed. Once these are determined, the watershed is then broken into sub-areas which each have essentially spatially-uniform properties. Lumped-parameter (micro-hydrology) models are applied to these sub-areas and their outputs are combined through the use of a computation ordering technique, as illustrated by many discrete-distributed-parameter hydrology models. Manual ordering of these computations requires forethought, and is tedious, error prone, sometimes storage intensive and least adaptable to changes in watershed resolution. A programmable algorithm for ordering micro-hydrology computations is presented that enables automatic ordering of computations within the computer via an easily understood and easily implemented "node" definition, numbering and coding scheme. This scheme and the algorithm are detailed in logic flow-charts and an example application is presented. Extensions and modifications of the algorithm are easily made for complex geometries or differing micro-hydrology models. The algorithm is shown to be superior to manual ordering techniques and has potential use in high-resolution studies.
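One way to realize such automatic ordering is a topological sort of the drainage graph, computing each upstream sub-area before the sub-area it drains into. The sketch below uses Kahn's algorithm on a hypothetical watershed; it illustrates the ordering idea only, not the report's specific node numbering and coding scheme.

```python
# Order micro-hydrology computations so upstream sub-areas come first.
from collections import deque

def computation_order(drains_into: dict) -> list:
    """Kahn's topological sort over 'sub-area -> sub-area it drains into'."""
    nodes = set(drains_into) | {d for d in drains_into.values() if d}
    indeg = {n: 0 for n in nodes}
    for dst in drains_into.values():
        if dst:
            indeg[dst] += 1
    ready = deque(n for n, d in indeg.items() if d == 0)   # headwater areas
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        dst = drains_into.get(n)
        if dst:
            indeg[dst] -= 1
            if indeg[dst] == 0:        # all upstream inputs are computed
                ready.append(dst)
    return order

# Hypothetical watershed: A and B drain into C; C and D drain into outlet E.
print(computation_order({"A": "C", "B": "C", "C": "E", "D": "E", "E": None}))
```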
Della Ceca, Lara Sofia; Carreras, Hebe A.; Lyapustin, Alexei I.; Barnaba, Francesca
2016-04-01
Particulate matter (PM) is one of the pollutants most harmful to public health and the environment [1]. In developed countries, specific air-quality legislation establishes limit values for PM metrics (e.g., PM10, PM2.5) to protect citizens' health (e.g., European Commission Directive 2008/50, US Clean Air Act). Extensive PM measuring networks therefore exist in these countries to comply with the legislation. In less developed countries air quality monitoring networks are still lacking, and satellite-based datasets could represent a valid alternative to fill observational gaps. The main PM (or aerosol) parameter retrieved from satellite is the 'aerosol optical depth' (AOD), an optical parameter quantifying the aerosol load in the whole atmospheric column. Datasets from the MODIS sensors on board the NASA spacecraft TERRA and AQUA are among the longest records of AOD from space. However, although extremely useful in regional and global studies, the standard 10 km-resolution MODIS AOD product is not suitable for use at the urban scale. Recently, a new algorithm called Multi-Angle Implementation of Atmospheric Correction (MAIAC) was developed for MODIS, providing AOD at 1 km resolution [2]. In this work, the MAIAC AOD retrievals over the decade 2003-2013 were employed to investigate the spatiotemporal variation of atmospheric aerosols over the Argentinean city of Cordoba and its surroundings, an area where only a very scarce dataset of in situ PM data is available. The MAIAC retrievals over the city were first validated using a 'ground truth' AOD dataset from the Cordoba sunphotometer operating within the global AERONET network [3]. This validation showed the good performance of the MAIAC algorithm in the area. The satellite MAIAC AOD dataset was therefore employed to investigate the 10-year trend as well as seasonal and monthly patterns of particulate matter in the Cordoba city. The trend analysis showed a marked increase of AOD over time, particularly evident in
Nature-inspired optimization algorithms
Yang, Xin-She
2014-01-01
Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning
VISUALIZATION OF PAGERANK ALGORITHM
Perhaj, Ervin
2013-01-01
The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate PageRank values of web pages. The input of the algorithm is a list of web pages and links between them. The user enters the list through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process until the difference of PageRank va...
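The iteration the abstract describes is the standard PageRank power iteration; a compact sketch (the damping factor and convergence tolerance are conventional choices, not taken from the thesis):

```python
def pagerank(links, d=0.85, tol=1e-9):
    """links: dict page -> list of pages it links to (no dangling pages,
    for brevity). Iterates until successive PageRank values converge."""
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}
    while True:
        new = {p: (1 - d) / n + d * sum(pr[q] / len(links[q])
                                        for q in pages if p in links[q])
               for p in pages}
        if max(abs(new[p] - pr[p]) for p in pages) < tol:
            return new
        pr = new

web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
print(pagerank(web))   # C ranks highest: it is linked from both A and B
```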
Parallel Sorting Algorithms
Akl, Selim G
1985-01-01
Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms for a variety of architectures, such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where the algorithms can be applied is the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the
Recursive algorithms for phylogenetic tree counting.
Gavryushkina, Alexandra; Welch, David; Drummond, Alexei J
2013-10-28
In Bayesian phylogenetic inference we are interested in distributions over a space of trees. The number of trees in a tree space is an important characteristic of the space and is useful for specifying prior distributions. When all samples come from the same time point and no prior information is available on divergence times, the tree counting problem is easy. However, when fossil evidence is used in the inference to constrain the tree or data are sampled serially, new tree spaces arise and counting the number of trees is more difficult. We describe an algorithm that is polynomial in the number of sampled individuals for counting resolutions of a constraint tree, assuming that the number of constraints is fixed. We generalise this algorithm to counting resolutions of a fully ranked constraint tree. We describe a quadratic algorithm for counting the number of possible fully ranked trees on n sampled individuals. We introduce a new type of tree, called a fully ranked tree with sampled ancestors, and describe a cubic time algorithm for counting the number of such trees on n sampled individuals. These algorithms should be employed for Bayesian Markov chain Monte Carlo inference when fossil data are included or data are serially sampled.
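The "easy" case mentioned first (all samples from one time point, no divergence-time information) already illustrates the counting flavour: going back in time, the k extant lineages can coalesce in C(k, 2) ways, so fully ranked trees are counted by a simple product. A sketch of that base case only (the paper's constraint and sampled-ancestor algorithms go well beyond it):

```python
from math import comb

def n_ranked_trees(n):
    """Fully ranked binary trees on n contemporaneous tips: looking backwards
    in time, the k extant lineages can coalesce in C(k, 2) ways."""
    count = 1
    for k in range(2, n + 1):
        count *= comb(k, 2)
    return count

print([n_ranked_trees(n) for n in range(2, 7)])   # [1, 3, 18, 180, 2700]
```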
Modified Clipped LMS Algorithm
Directory of Open Access Journals (Sweden)
Lotfizad Mojtaba
2005-01-01
A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves the threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
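A sketch of the update rule as the abstract describes it: the regressor in the weight update is passed through a three-level quantizer, so each tap update reduces to adding or subtracting mu*e (or doing nothing). Filter length, step size and threshold below are illustrative, not taken from the paper:

```python
import numpy as np

def qclip(x, t):
    """Three-level quantizer: -1, 0 or +1, depending on threshold t."""
    return np.sign(x) * (np.abs(x) > t)

def mclms(x, d, n_taps=8, mu=0.01, t=0.5):
    """Clipped-input LMS: the weight update uses the quantized regressor."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]      # regressor, u[k] = x[n-k]
        e = d[n] - w @ u
        w += mu * e * qclip(u, t)              # only +/- mu*e per active tap
    return w

# Identify a toy FIR channel (illustrative setup).
rng = np.random.default_rng(0)
x = rng.standard_normal(20000)
h = np.array([0.6, -0.3, 0.1, 0.05, 0.0, 0.0, 0.0, 0.0])
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(np.round(mclms(x, d), 2))                # close to h
```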
DEFF Research Database (Denmark)
Nasrollahi, Kamal; Moeslund, Thomas B.
2014-01-01
Super-resolution, the process of obtaining one or more high-resolution images from one or more low-resolution observations, has been a very attractive research topic over the last two decades. It has found practical applications in many real world problems in different fields, from satellite...
Conflict Resolution in Computer Systems
Directory of Open Access Journals (Sweden)
G. P. Mojarov
2015-01-01
A shortcoming of impasse prevention is the need for a priori information on the future demand for resources, which is not always available. One way to "struggle" against impasses when there is no a priori information on the processes' demand for resources is to detect deadlocks: the periodic use of an algorithm that checks the current distribution of resources to reveal whether an impasse exists and, if it does, which processes are involved in it (detection alone does not yet lead to resolution). The objective of this work is to develop methods and algorithms that minimize the losses caused by impasses in computer systems (CS), using an optimal strategy of conflict resolution. The offered approach is especially effective for eliminating deadlocks in management (control) computer systems having a fixed set of programmes. The article offers an efficient strategy for managing information processes in multiprocessing CS, which detects and prevents impasses. The strategy is based on allocating indivisible resources to computing processes so that the losses caused by conflicts are minimized. The article studies a multi-criterion problem of allocating indivisible resources to processes, with the optimality principle expressed by a known binary relation over the set of average vectors of penalties for conflicts in each of the resources. It is shown that combining tools from decision theory with classical ones allows a more efficient solution of the deadlock elimination problem. A feature of the suggested methods and algorithms is that they can be used in CS development and operation in real time. The example given in the article shows that the proposed method and algorithm for impasse resolution in multiprocessing CS are workable and promising. The offered method and algorithm reduce the average number of CS conflicts by 30-40%.
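Deadlock detection of the kind described (periodically inspecting the current resource allocation) is commonly reduced to finding a cycle in a wait-for graph; a minimal sketch of that reduction (not the authors' optimal-strategy algorithm):

```python
def find_deadlock(wait_for):
    """wait_for: dict mapping a process to the set of processes it waits on.
    Returns a list of processes forming a cycle (a deadlock), or None."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {p: WHITE for p in wait_for}
    stack = []

    def dfs(p):
        colour[p] = GREY
        stack.append(p)
        for q in wait_for.get(p, ()):
            if colour.get(q, WHITE) == GREY:       # back edge: cycle found
                return stack[stack.index(q):]
            if colour.get(q, WHITE) == WHITE and (c := dfs(q)):
                return c
        stack.pop()
        colour[p] = BLACK
        return None

    for p in list(wait_for):
        if colour[p] == WHITE and (c := dfs(p)):
            return c
    return None

print(find_deadlock({"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}, "P4": set()}))
# ['P1', 'P2', 'P3']
```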
Semioptimal practicable algorithmic cooling
International Nuclear Information System (INIS)
Elias, Yuval; Mor, Tal; Weinstein, Yossi
2011-01-01
Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
Implementation of a Wavefront-Sensing Algorithm
Smith, Jeffrey S.; Dean, Bruce; Aronstein, David
2013-01-01
A computer program has been written as a unique implementation of an image-based wavefront-sensing algorithm reported in "Iterative-Transform Phase Retrieval Using Adaptive Diversity" (GSC-14879-1), NASA Tech Briefs, Vol. 31, No. 4 (April 2007), page 32. This software was originally intended for application to the James Webb Space Telescope, but is also applicable to other segmented-mirror telescopes. The software is capable of determining optical-wavefront information using, as input, a variable number of irradiance measurements collected in defocus planes about the best focal position. The software also uses input of the geometrical definition of the telescope exit pupil (otherwise denoted the pupil mask) to identify the locations of the segments of the primary telescope mirror. From the irradiance data and mask information, the software calculates an estimate of the optical wavefront (a measure of performance) of the telescope generally and across each primary mirror segment specifically. The software is capable of generating irradiance data, wavefront estimates, and basis functions for the full telescope and for each primary-mirror segment. Optionally, each of these pieces of information can be measured or computed outside of the software and incorporated during execution of the software.
An efficient and fast detection algorithm for multimode FBG sensing
DEFF Research Database (Denmark)
Ganziy, Denis; Jespersen, O.; Rose, B.
2015-01-01
We propose a novel dynamic gate algorithm (DGA) for fast and accurate peak detection. The algorithm uses a threshold-determined detection window and a center-of-gravity algorithm with bias compensation. We analyze the wavelength fit resolution of the DGA for different values of signal-to-noise ratio...... and different typical peak shapes. Our simulations and experiments demonstrate that the DGA method is fast and robust with higher stability and accuracy compared to conventional algorithms. This makes it very attractive for future implementation in sensing systems, especially those based on multimode fiber Bragg...
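The exact gating and bias-compensation rules of the DGA are not given in the abstract; a sketch of the underlying idea, a threshold-determined window followed by a baseline-corrected centre-of-gravity estimate, with illustrative parameters:

```python
import numpy as np

def cog_peak(wavelengths, intensities, gate_level=0.5):
    """Centre-of-gravity peak estimate inside a threshold-determined gate,
    with the baseline subtracted as a crude bias compensation."""
    baseline = np.median(intensities)
    top = intensities.max()
    gate = intensities > baseline + gate_level * (top - baseline)
    w, i = wavelengths[gate], intensities[gate] - baseline
    return np.sum(w * i) / np.sum(i)

# Synthetic FBG-like Gaussian peak at 1550.30 nm with noise.
rng = np.random.default_rng(8)
wl = np.linspace(1549.0, 1551.0, 500)
sig = np.exp(-((wl - 1550.30) / 0.10) ** 2) + 0.01 * rng.standard_normal(wl.size)
print(round(cog_peak(wl, sig), 3))   # approximately 1550.3
```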
Resolution enhancement of low-quality videos using a high-resolution frame
Pham, Tuan Q.; van Vliet, Lucas J.; Schutte, Klamer
2006-01-01
This paper proposes an example-based Super-Resolution (SR) algorithm for compressed videos in the Discrete Cosine Transform (DCT) domain. Input to the system is a Low-Resolution (LR) compressed video together with a High-Resolution (HR) still image of similar content. Using a training set of corresponding LR-HR pairs of image patches from the HR still image, high-frequency details are transferred from the HR source to the LR video. The DCT-domain algorithm is much faster than example-based SR in the spatial domain [6] because of a reduction in search dimensionality, which is a direct result of the compact and uncorrelated DCT representation. Fast searching techniques like tree-structured vector quantization [16] and coherence search [1] are also key to the improved efficiency. Preliminary results on an MJPEG sequence show promising results of the DCT-domain SR synthesis approach.
International Nuclear Information System (INIS)
Cheng Sheng-Yi; Liu Wen-Jin; Chen Shan-Qiu; Dong Li-Zhi; Yang Ping; Xu Bing
2015-01-01
Among all kinds of wavefront control algorithms in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes by pre-measuring the relational matrix between the deformable mirror actuators and the Hartmann wavefront sensor, with perfect real-time characteristics and stability. However, as the numbers of sub-apertures in the wavefront sensor and of deformable mirror actuators in adaptive optics systems increase, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control effect of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltages of each actuator are obtained through iterative arithmetic, which gains a great advantage in computation and storage. For an AO system with thousands of actuators, the computational complexity is about O(n^2) ~ O(n^3) for the direct gradient wavefront control algorithm, while it is about O(n) ~ O(n^(3/2)) for the iterative wavefront control algorithm, where n is the number of actuators of the AO system. The larger the numbers of sub-apertures and deformable mirror actuators, the more significant the advantage the iterative wavefront control algorithm exhibits. (paper)
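The paper's specific iteration is not given in the abstract; one standard iterative scheme with the stated cheap-per-sweep flavour for sparse response matrices is the Landweber/gradient iteration on the slope equations, sketched here:

```python
import numpy as np

def iterative_voltages(A, s, n_iter=300):
    """Landweber/gradient iteration for the slope equations A v ~ s.
    A: (n_slopes x n_actuators) response matrix, s: measured slopes.
    Each sweep costs one multiply by A and one by A.T -- proportional to
    the number of nonzeros for the sparse, localized influence functions
    of large AO systems."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2       # step size guaranteeing convergence
    v = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v += mu * (A.T @ (s - A @ v))
    return v

# Tiny toy system: 20 slope measurements, 10 actuators.
rng = np.random.default_rng(7)
A = rng.standard_normal((20, 10))
v_true = rng.standard_normal(10)
print(np.allclose(iterative_voltages(A, A @ v_true), v_true, atol=1e-3))  # True
```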
High-resolution and super stacking of time-reversal mirrors in locating seismic sources
Cao, Weiping
2011-07-08
Time reversal mirrors can be used to backpropagate and refocus incident wavefields to their actual source location, with the subsequent benefits of imaging with high-resolution and super-stacking properties. These benefits of time reversal mirrors have been previously verified with computer simulations and laboratory experiments but not with exploration-scale seismic data. We now demonstrate the high-resolution and the super-stacking properties in locating seismic sources with field seismic data that include multiple scattering. Tests on both synthetic data and field data show that a time reversal mirror has the potential to exceed the Rayleigh resolution limit by factors of 4 or more. Results also show that a time reversal mirror has a significant resilience to strong Gaussian noise and that accurate imaging of source locations from passive seismic data can be accomplished with traces having signal-to-noise ratios as low as 0.001. Synthetic tests also demonstrate that time reversal mirrors can sometimes enhance the signal by a factor proportional to the square root of the product of the number of traces, denoted as N, and the number of events in the traces. This enhancement property is denoted as super-stacking and greatly exceeds the classical signal-to-noise enhancement factor of √N. High-resolution and super-stacking are properties also enjoyed by seismic interferometry and reverse-time migration with the exact velocity model.
OTTER, Resolution Style Theorem Prover
International Nuclear Information System (INIS)
McCune, W.W.
2001-01-01
1 - Description of program or function: OTTER (Other Techniques for Theorem-proving and Effective Research) is a resolution-style theorem-proving program for first-order logic with equality. OTTER includes the inference rules binary resolution, hyper-resolution, UR-resolution, and binary paramodulation. These inference rules take a small set of clauses and infer a clause. If the inferred clause is new and useful, it is stored and may become available for subsequent inferences. Other capabilities are conversion from first-order formulas to clauses, forward and back subsumption, factoring, weighting, answer literals, term ordering, forward and back demodulation, and evaluable functions and predicates. 2 - Method of solution: For its inference process OTTER uses the given-clause algorithm, which can be viewed as a simple implementation of the set-of-support strategy. OTTER maintains three lists of clauses: axioms, sos (set of support), and demodulators. OTTER is not automatic. Even after the user has encoded a problem into first-order logic or into clauses, the user must choose inference rules, set options to control the processing of inferred clauses, and decide which input formulae or clauses are to be in the initial set of support and which, if any, equalities are to be demodulators. If OTTER fails to find a proof, the user may try again with different initial conditions. 3 - Restrictions on the complexity of the problem - Maxima of: 5000 characters in an input string, 64 distinct variables in a clause, 51 characters in any symbol. The maxima can be changed by finding the appropriate definition in the header.h file, increasing the limit, and recompiling OTTER. There are a few constraints on the order of commands
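A toy rendering of the given-clause loop for propositional binary resolution (OTTER itself works on first-order clauses with many more inference rules and redundancy checks):

```python
def resolvents(c1, c2):
    """Binary resolution on clauses given as frozensets of signed integers."""
    for lit in c1:
        if -lit in c2:
            yield frozenset((c1 - {lit}) | (c2 - {-lit}))

def given_clause(sos, usable):
    """Skeleton of the given-clause loop (set-of-support strategy)."""
    sos = [frozenset(c) for c in sos]
    usable = [frozenset(c) for c in usable]
    seen = set(sos) | set(usable)
    while sos:
        given = min(sos, key=len)              # lightest-clause pick (weighting)
        sos.remove(given)
        usable.append(given)
        for other in list(usable):
            for new in resolvents(given, other):
                if not new:                    # empty clause: contradiction
                    return "proof found"
                if new not in seen:            # naive redundancy check
                    seen.add(new)
                    sos.append(new)
    return "sos exhausted, no proof"

# Refute {p}, {-p or q}, {-q}, with literals encoded as signed ints.
print(given_clause(sos=[{1}], usable=[{-1, 2}, {-2}]))   # proof found
```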
National Research Council Canada - National Science Library
Sundareshan, Malur
2001-01-01
This project was primarily aimed at the design of novel algorithms for the restoration and super-resolution processing of imagery data to improve the resolution in images acquired from practical sensing operations...
Introduction to Evolutionary Algorithms
Yu, Xinjie
2010-01-01
Evolutionary algorithms (EAs) are becoming increasingly attractive for researchers from various disciplines, such as operations research, computer science, industrial engineering, electrical engineering, social science, economics, etc. This book presents an insightful, comprehensive, and up-to-date treatment of EAs, such as genetic algorithms, differential evolution, evolution strategy, constraint optimization, multimodal optimization, multiobjective optimization, combinatorial optimization, evolvable hardware, estimation of distribution algorithms, ant colony optimization, particle swarm opti
Recursive forgetting algorithms
DEFF Research Database (Denmark)
Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan
1992-01-01
In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied...... to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm...
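The general forgetting scheme is not reproduced in the abstract; its best-known special case, recursive least squares with a uniform exponential forgetting factor, can serve as a concrete reference point:

```python
import numpy as np

def rls_forgetting(X, y, lam=0.98, delta=100.0):
    """Recursive least squares with exponential forgetting factor lam:
    old data are down-weighted by lam per step (uniform in time and space,
    unlike the selective scheme analysed in the paper)."""
    n = X.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)                      # large initial 'covariance'
    for x, y_t in zip(X, y):
        Px = P @ x
        k = Px / (lam + x @ Px)                # gain vector
        theta = theta + k * (y_t - x @ theta)
        P = (P - np.outer(k, Px)) / lam        # valid since P stays symmetric
    return theta

rng = np.random.default_rng(6)
X = rng.standard_normal((2000, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + 0.05 * rng.standard_normal(2000)
print(np.round(rls_forgetting(X, y), 3))       # close to [1.0, -2.0, 0.5]
```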
A study of spatial resolution in pollution exposure modelling
Directory of Open Access Journals (Sweden)
Gustafsson Susanna
2007-06-01
Background: This study is part of several ongoing projects concerning epidemiological research into the effects on health of exposure to air pollutants in the region of Scania, southern Sweden. The aim is to investigate the optimal spatial resolution, with respect to temporal resolution, for a pollutant database of NOx values which will be used mainly for epidemiological studies with durations of days, weeks or longer periods. The fact that a pollutant database has a fixed spatial resolution makes the choice critical for the future use of the database. Results: The results from the study showed that the agreement between the modelled concentrations of the reference grid with high spatial resolution (100 m), denoted the fine grid, and the coarser grids (200, 400, 800 and 1600 meters) improved with increasing spatial resolution. When the pollutant values were aggregated in time (from hours to days and weeks) the disagreement between the fine grid and the coarser grids was significantly reduced. The results also illustrate a considerable difference in optimal spatial resolution depending on the characteristics of the study area (rural or urban). To estimate the accuracy of the modelled values, comparisons were made with measured NOx values. The mean difference between the modelled and the measured values was 0.6 μg/m3 and the standard deviation was 5.9 μg/m3 for the daily difference. Conclusion: The choice of spatial resolution should not considerably deteriorate the accuracy of the modelled NOx values. Considering the comparison between modelled and measured values, we estimate that an error due to coarse resolution greater than 1 μg/m3 is inadvisable if a time resolution of one day is used. Based on the study of different spatial resolutions we conclude that for urban areas a spatial resolution of 200–400 m is suitable, while for rural areas the spatial resolution could be coarser (about 1600 m). This implies that we should develop a pollutant
Explaining algorithms using metaphors
Forišek, Michal
2013-01-01
There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo
Algorithms in Algebraic Geometry
Dickenstein, Alicia; Sommese, Andrew J
2008-01-01
In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its
Woo, Andrew
2012-01-01
Digital shadow generation continues to be an important aspect of visualization and visual effects in film, games, simulations, and scientific applications. This resource offers a thorough picture of the motivations, complexities, and categorized algorithms available to generate digital shadows. From general fundamentals to specific applications, it addresses shadow algorithms and how to manage huge data sets from a shadow perspective. The book also examines the use of shadow algorithms in industrial applications, in terms of what algorithms are used and what software is applicable.
Spectral Decomposition Algorithm (SDA)
National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...
Quick fuzzy backpropagation algorithm.
Nikov, A; Stoeva, S
2001-03-01
A modification of the fuzzy backpropagation (FBP) algorithm called the QuickFBP algorithm is proposed, where the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and the FBP algorithm, respectively, are defined and proved for: (1) single output neural networks in the case of training patterns with different targets; and (2) multiple output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weights training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP compared to the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in areas such as adaptive and adaptable interactive systems and data mining.
Portfolios of quantum algorithms.
Maurer, S M; Hogg, T; Huberman, B A
2001-12-17
Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature, they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to NP-complete problems such as 3-satisfiability.
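A small Monte Carlo illustration of the portfolio effect for any stochastic algorithm with heavy-tailed runtimes (the runtime distribution below is hypothetical; the paper's setting is quantum algorithms on 3-SAT):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical heavy-tailed runtime distribution of one stochastic solver.
runtimes = rng.lognormal(mean=3.0, sigma=1.5, size=100_000)

def portfolio_stats(k, trials=50_000):
    """Run k independent copies in parallel and stop at the first to finish."""
    draws = rng.choice(runtimes, size=(trials, k))
    t = draws.min(axis=1)
    return t.mean(), t.std()

for k in (1, 2, 4, 8):
    print(k, portfolio_stats(k))   # both the mean and the spread shrink with k
```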
Fuzzy Genetic Algorithm Based on Principal Operation and Inequity Degree
Li, Fachao; Jin, Chenxia
In this paper, starting from the structure of fuzzy information and distinguishing principal indexes from assistant indexes, we give a comparison of fuzzy information based on synthesizing effect and an operation of fuzzy optimization based on principal-index transformation; further, we propose an axiom system of fuzzy inequity degree from the essence of constraints, and give an instructive metric method. Then, combining genetic algorithms, we give fuzzy optimization methods based on principal operation and inequity degree (denoted BPO&ID-FGA for short). Finally, we consider its convergence using Markov chain theory and analyze its performance through an example. All these indicate that BPO&ID-FGA can not only effectively merge decision consciousness into the optimization process, but also possesses better global convergence, so it can be applied to many fuzzy optimization problems.
Almagest, a new trackless ring finding algorithm
Energy Technology Data Exchange (ETDEWEB)
Lamanna, G., E-mail: gianluca.lamanna@cern.ch
2014-12-01
A fast ring finding algorithm is a crucial point in allowing the use of RICH detectors in on-line trigger selection. The present algorithms are either too slow (with respect to the incoming data rate) or need information coming from a tracking system. Digital image techniques assuming limited computing power (such as the Hough transform) are not perfectly robust as far as noise immunity is concerned. We present a novel technique based on Ptolemy's theorem for multi-ring pattern recognition. Starting from purely geometrical considerations, this algorithm (also known as "Almagest") allows fast and trackless ring reconstruction, with spatial resolution comparable with other offline techniques. Almagest is particularly suitable for parallel implementation on multi-core machines. Preliminary tests on GPUs (multi-core video card processors) show that, thanks to an execution time smaller than 10 μs per event, this algorithm could be employed for on-line selection in trigger systems. The use case of the NA62 RICH trigger, based on GPUs, will be discussed. - Highlights: • A new algorithm for fast multiple ring searching in RICH detectors is presented. • The Almagest algorithm exploits the computing power of graphics processors (GPUs). • A preliminary implementation for on-line triggering in the NA62 experiment shows encouraging results.
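The geometric core is Ptolemy's theorem: four points taken in cyclic order A, B, C, D lie on a common circle exactly when AC·BD = AB·CD + BC·AD. A sketch of that test (the full Almagest seeding, multi-ring logic and GPU parallelization are not shown):

```python
from math import atan2, cos, dist, sin

def concyclic(pts, tol=1e-6):
    """Ptolemy's criterion for four points roughly arranged on a ring:
    sort into cyclic order, then compare AC*BD with AB*CD + BC*AD."""
    cx = sum(p[0] for p in pts) / 4.0
    cy = sum(p[1] for p in pts) / 4.0
    A, B, C, D = sorted(pts, key=lambda p: atan2(p[1] - cy, p[0] - cx))
    lhs = dist(A, C) * dist(B, D)
    rhs = dist(A, B) * dist(C, D) + dist(B, C) * dist(A, D)
    return abs(lhs - rhs) <= tol * max(lhs, rhs)

ring = [(cos(t), sin(t)) for t in (0.1, 1.2, 2.8, 4.5)]   # on the unit circle
print(concyclic(ring))                                     # True
ring[0] = (1.1 * ring[0][0], 1.1 * ring[0][1])             # pushed off the circle
print(concyclic(ring))                                     # False
```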
Algorithm 426: Merge sort algorithm [M1]
Bron, C.
1972-01-01
Sorting by means of a two-way merge has a reputation of requiring a clerically complicated and cumbersome program. This ALGOL 60 procedure demonstrates that, using recursion, an elegant and efficient algorithm can be designed, the correctness of which is easily proved [2]. Sorting n objects gives
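The original is an ALGOL 60 procedure; the same recursive two-way merge idea in Python reads:

```python
def merge_sort(a):
    """Recursive two-way merge sort; stable, O(n log n)."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two sorted halves
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]
```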
Super resolution for astronomical observations
Li, Zhan; Peng, Qingyu; Bhanu, Bir; Zhang, Qingfeng; He, Haifeng
2018-05-01
In order to obtain detailed information from multiple telescope observations, a general blind super-resolution (SR) reconstruction approach for astronomical images is proposed in this paper. A pixel-reliability-based SR reconstruction algorithm is described and implemented, where the developed process incorporates flat field correction, automatic star searching and centering, iterative star matching, and sub-pixel image registration. Images captured by the 1-m telescope at Yunnan Observatory are used to test the proposed technique. The results of these experiments indicate that, following SR reconstruction, faint stars are more distinct, bright stars have sharper profiles, and the backgrounds show more detail; these results benefit from the high-precision star centering and image registration provided by the developed method. Application of the proposed approach not only provides more opportunities for new discoveries from astronomical image sequences, but will also contribute to enhancing the capabilities of most spatial or ground-based telescopes.
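The reconstruction step itself is not spelled out in the abstract; a naive shift-and-add sketch shows how registered sub-pixel offsets map low-resolution pixels onto a finer grid (the paper's pixel-reliability weighting is replaced here by plain averaging):

```python
import numpy as np

def shift_and_add(frames, shifts, scale):
    """Naive shift-and-add: deposit each registered LR pixel onto a grid
    finer by `scale` and average. frames: list of 2-D arrays; shifts: list of
    (dy, dx) sub-pixel offsets obtained from image registration."""
    H, W = frames[0].shape
    acc = np.zeros((H * scale, W * scale))
    cnt = np.zeros_like(acc)
    for f, (dy, dx) in zip(frames, shifts):
        ys = (np.arange(H)[:, None] * scale + round(dy * scale)).clip(0, H * scale - 1)
        xs = (np.arange(W)[None, :] * scale + round(dx * scale)).clip(0, W * scale - 1)
        acc[ys, xs] += f
        cnt[ys, xs] += 1
    return acc / np.maximum(cnt, 1)

frames = [np.random.rand(16, 16) for _ in range(4)]
shifts = [(0.0, 0.0), (0.0, 0.25), (0.25, 0.0), (0.25, 0.25)]
print(shift_and_add(frames, shifts, scale=4).shape)   # (64, 64)
```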
A medium resolution fingerprint matching system
Directory of Open Access Journals (Sweden)
Ayman Mohammad Bahaa-Eldin
2013-09-01
In this paper, a novel minutiae-based fingerprint matching system is proposed. The system is suitable for medium resolution fingerprint images obtained by low-cost commercial sensors. The paper presents a new thinning algorithm, a new feature extraction and representation, and a novel feature distance matching algorithm. The proposed system is rotation and translation invariant and is suitable for complete or partial fingerprint matching. The proposed algorithms are optimized to be executed in low-resource environments, both in CPU power and in memory space. The system was evaluated using a standard fingerprint dataset and good performance and accuracy were achieved under certain image quality requirements. In addition, the proposed system compared favorably with state-of-the-art systems.
DEFF Research Database (Denmark)
Petersen, Mette Højgaard; Edlund, Kristian; Hansen, Lars Henrik
2013-01-01
The word flexibility is central to Smart Grid literature, but a formal definition of flexibility is still pending. This paper presents a taxonomy for flexibility modeling, denoted Buckets, Batteries and Bakeries. We consider a direct control Virtual Power Plant (VPP), which is given the task...... of servicing a portfolio of flexible consumers by use of a fluctuating power supply. Based on the developed taxonomy we first prove that no causal optimal dispatch strategies exist for the considered problem. We then present two heuristic algorithms for solving the balancing task: Predictive Balancing...
Composite Differential Search Algorithm
Directory of Open Access Journals (Sweden)
Bo Liu
2014-01-01
Differential search algorithm (DS) is a relatively new evolutionary algorithm inspired by the Brownian-like random-walk movement which is used by an organism to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search schemes, namely "DS/rand/1," "DS/rand/2," "DS/current to rand/1," and "DS/current to rand/2," to search the new space and enhance the convergence rate for the global optimization problem. In order to verify the performance of the different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed schemes perform better than, or at least comparably to, the original algorithm when considering the quality of the solution obtained. However, these schemes still cannot achieve the best solution for all functions. In order to further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS) is proposed in this paper. This new algorithm combines three of the proposed search schemes, "DS/rand/1," "DS/rand/2," and "DS/current to rand/1," with three control parameters, using a random method to generate the offspring. Experimental results show that CDS has a faster convergence rate and better search ability on the 23 benchmark functions.
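The naming of the schemes mirrors differential-evolution mutation notation; a sketch of a "DS/rand/1"-style trial-vector step with greedy selection (the actual DS update also involves a Brownian-like scale factor, omitted here):

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: float(np.sum(x ** 2))            # toy objective (minimise)

def ds_rand_1(pop, i, F=0.8):
    """'DS/rand/1'-style trial vector: one random individual perturbed by a
    scaled difference of two others."""
    idx = rng.choice([j for j in range(len(pop)) if j != i], size=3, replace=False)
    r1, r2, r3 = pop[idx]
    return r1 + F * (r2 - r3)

pop = rng.uniform(-5, 5, size=(20, 2))
for _ in range(200):
    for i in range(len(pop)):
        trial = ds_rand_1(pop, i)
        if f(trial) < f(pop[i]):               # greedy one-to-one selection
            pop[i] = trial
print(min(f(x) for x in pop))                  # near 0
```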
Algorithms and Their Explanations
Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.
2014-01-01
By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of
Finite lattice extrapolation algorithms
International Nuclear Information System (INIS)
Henkel, M.; Schuetz, G.
1987-08-01
Two algorithms for sequence extrapolation, due to von den Broeck and Schwartz and to Bulirsch and Stoer, are reviewed and critically compared. Applications to three-state and six-state quantum chains and to the (2+1)D Ising model show that the algorithm of Bulirsch and Stoer is superior, in particular if only very few finite lattice data are available. (orig.)
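Both reviewed methods accelerate the convergence of a sequence of finite-lattice estimates toward the infinite-lattice limit; a generic stand-in with the same purpose is polynomial extrapolation in 1/L via Neville's algorithm (not either of the two specific schemes compared in the paper):

```python
def extrapolate(Ls, vals):
    """Neville polynomial extrapolation of finite-lattice data to 1/L -> 0."""
    x = [1.0 / L for L in Ls]
    T = list(vals)
    n = len(T)
    for k in range(1, n):
        for i in range(n - k):
            # T[i] becomes the degree-k interpolant through points i..i+k,
            # evaluated at the infinite-lattice point x = 0.
            T[i] = (x[i] * T[i + 1] - x[i + k] * T[i]) / (x[i] - x[i + k])
    return T[0]

Ls = [4, 8, 16, 32]
data = [1 + 2 / L + 3 / L**2 for L in Ls]      # synthetic finite-size data
print(extrapolate(Ls, data))                    # ~1.0, the L -> infinity value
```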
Recursive automatic classification algorithms
Energy Technology Data Exchange (ETDEWEB)
Bauman, E V; Dorofeyuk, A A
1982-03-01
A variational statement of the automatic classification problem is given. The dependence of the form of the optimal partition surface on the form of the classification objective functional is investigated. A recursive algorithm is proposed for maximising a functional of reasonably general form. The convergence problem is analysed in connection with the proposed algorithm. 8 references.
DEFF Research Database (Denmark)
Husfeldt, Thore
2015-01-01
This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...
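The simplest vertex-colouring algorithm with a worst-case guarantee of the kind discussed is the greedy algorithm, which never uses more than Δ+1 colours on a graph of maximum degree Δ:

```python
def greedy_colouring(adj, order=None):
    """Greedy vertex colouring: each vertex gets the smallest colour not used
    by its already-coloured neighbours."""
    order = order or sorted(adj, key=lambda v: -len(adj[v]))   # largest-first
    colour = {}
    for v in order:
        taken = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in taken:
            c += 1
        colour[v] = c
    return colour

cycle5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(greedy_colouring(cycle5))   # an odd cycle needs 3 colours
```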
8. Algorithm Design Techniques
Indian Academy of Sciences (India)
Resonance – Journal of Science Education, Volume 2, Issue 8. Series Article. R K Shyamasundar, Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.
System engineering approach to GPM retrieval algorithms
Energy Technology Data Exchange (ETDEWEB)
Rose, C. R. (Chris R.); Chandrasekar, V.
2004-01-01
System engineering principles and methods are very useful in large-scale complex systems for developing the engineering requirements from end-user needs. Integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. Ground validation (GV) systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data. These GV sites will consist of radars and DSD measurement systems and also have intrinsic constraints. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictate the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength surface-reference-technique (SRT) based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be successfully used without the use of the SRT. It uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both N0 and D0 at each range bin. More recently, Liao (2004) proposed a solution to the D0 ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting layer model based on stratified spheres. With the N0 and D0
Geometric approximation algorithms
Har-Peled, Sariel
2011-01-01
Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.
Group leaders optimization algorithm
Daskin, Anmer; Kais, Sabre
2011-03-01
We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique, which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single- and multi-dimensional optimization functions along with the energies and geometric structures of Lennard-Jones clusters are given, as well as the application of the algorithm to quantum circuit design problems. We show that, as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm, a quantum algorithm providing quadratic speedup over the classical counterpart.
International Nuclear Information System (INIS)
Noga, M.T.
1984-01-01
This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in the analysis of algorithms with those of classical geometry.
Totally parallel multilevel algorithms
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
Directory of Open Access Journals (Sweden)
Francesca Musiani
2013-08-01
Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as "encoded procedures for transforming input data into a desired output, based on specified calculations" (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawn from the internet and ICT realm: search engine queries and e-commerce websites' recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions' regulation of algorithms, and algorithms' regulation of our society.
Where genetic algorithms excel.
Baum, E B; Boneh, D; Garrett, C
2001-01-01
We analyze the performance of a genetic algorithm (GA) we call Culling, and a variety of other algorithms, on a problem we refer to as the Additive Search Problem (ASP). We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Noisy ASP is the first problem we are aware of where a genetic-type algorithm bests all known competitors. We generalize ASP to k-ASP to study whether GAs will achieve "implicit parallelism" in a problem with many more schemata. GAs fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a mean field theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GAs can beat competing methods.
DEFF Research Database (Denmark)
Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino
2016-01-01
A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network......-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem’s input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency. It is shown that for a wide class of network-oblivious algorithms, optimality...... of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed....
Texton-based super-resolution for achieving high spatiotemporal resolution in hybrid camera system
Kamimura, Kenji; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi
2010-05-01
Many super-resolution methods have been proposed to enhance the spatial resolution of images by using iteration and multiple input images. In a previous paper, we proposed an example-based super-resolution method that enhances an image through pixel-based texton substitution to reduce the computational cost. In that method, however, we only considered the enhancement of a texture image. In this study, we modified this texton substitution method for a hybrid camera to reduce the required bandwidth of a high-resolution video camera. We applied our algorithm to pairs of high- and low-spatiotemporal-resolution videos, which were synthesized to simulate a hybrid camera. The results showed that the fine detail of the low-resolution video can be reproduced better than with bicubic interpolation, and that the required bandwidth of the video camera could be reduced to about 1/5. It was also shown that the peak signal-to-noise ratios (PSNRs) of the images improved by about 6 dB in a trained frame and by 1.0-1.5 dB in a test frame, compared with images processed using bicubic interpolation, and the average PSNRs were higher than those obtained by the well-known Freeman patch-based super-resolution method. Compared with the Freeman patch-based method, the computational time of our method was reduced to almost 1/10.
International Nuclear Information System (INIS)
Minehara, E.; Kutschera, W.; Hartog, P.D.; Billquist, P.
1985-01-01
The ANL (Argonne National Laboratory) high-resolution injector has been installed to obtain higher mass resolution and higher preacceleration, and to utilize effectively the full mass range of ATLAS (Argonne Tandem Linac Accelerator System). Preliminary results of the first beam test are reported briefly. The design and performance, in particular a high-mass-resolution magnet with aberration compensation, are discussed. 7 refs., 5 figs., 2 tabs
Cost optimization model and its heuristic genetic algorithms
International Nuclear Information System (INIS)
Liu Wei; Wang Yongqing; Guo Jilin
1999-01-01
Interest and escalation account for a large proportion of the cost of nuclear power plant construction. In order to optimize the cost, a mathematical model of cost optimization for nuclear power plant construction is proposed, which takes the maximum net present value as the optimization goal. The model is based on the activity networks of the project and is an NP problem. Heuristic genetic algorithms (HGAs) for the model are introduced. In the algorithms, a solution is represented by a string of numbers, each of which denotes the priority of an activity for assigned resources. HGAs with this encoding method can overcome the difficulty of obtaining feasible solutions that arises when traditional GAs are applied to the model. The critical path of the activity networks is figured out with the concept of a predecessor matrix. An example was computed with the HGA programmed in C. The results indicate that the model is suitable for the objective and that the algorithm is effective in solving the model.
Genetic Algorithm Applied to the Eigenvalue Equalization Filtered-x LMS Algorithm (EE-FXLMS
Directory of Open Access Journals (Sweden)
Stephan P. Lovstedt
2008-01-01
The FXLMS algorithm, used extensively in active noise control (ANC), exhibits frequency-dependent convergence behavior. This leads to degraded performance for time-varying tonal noise and noise with multiple stationary tones. Previous work by the authors proposed the eigenvalue equalization filtered-x least mean squares (EE-FXLMS) algorithm. For that algorithm, magnitude coefficients of the secondary path transfer function are modified to decrease variation in the eigenvalues of the filtered-x autocorrelation matrix, while preserving the phase, giving faster convergence and increasing overall attenuation. This paper revisits the EE-FXLMS algorithm, using a genetic algorithm to find the magnitude coefficients that give the least variation in eigenvalues. This method overcomes some of the problems with implementing the EE-FXLMS algorithm that arise from the finite resolution of sampled systems. Experimental control results using the original secondary path model and a modified secondary path model are compared for both the previous implementation of EE-FXLMS and the genetic algorithm implementation.
Automated Verification of Spatial Resolution in Remotely Sensed Imagery
Davis, Bruce; Ryan, Robert; Holekamp, Kara; Vaughn, Ronald
2011-01-01
Image spatial resolution characteristics can vary widely among sources. In the case of aerial-based imaging systems, the image spatial resolution characteristics can even vary between acquisitions. In these systems, aircraft altitude, speed, and sensor look angle all affect image spatial resolution. Image spatial resolution needs to be verified with estimators that include the ground sample distance (GSD), the modulation transfer function (MTF), and the relative edge response (RER), all of which are key components of image quality, along with signal-to-noise ratio (SNR) and dynamic range. Knowledge of spatial resolution parameters is important to determine if features of interest are distinguishable in imagery or associated products, and to develop image restoration algorithms. An automated Spatial Resolution Verification Tool (SRVT) was developed to rapidly determine the spatial resolution characteristics of remotely sensed aerial and satellite imagery. Most current methods for assessing spatial resolution characteristics of imagery rely on pre-deployed engineered targets and are performed only at selected times within preselected scenes. The SRVT addresses these insufficiencies by finding uniform, high-contrast edges from urban scenes and then using these edges to determine standard estimators of spatial resolution, such as the MTF and the RER. The SRVT was developed using the MATLAB programming language and environment. This automated software algorithm assesses every image in an acquired data set, using edges found within each image, and in many cases eliminating the need for dedicated edge targets. The SRVT automatically identifies high-contrast, uniform edges and calculates the MTF and RER of each image, and when possible, within sections of an image, so that the variation of spatial resolution characteristics across the image can be analyzed. The automated algorithm is capable of quickly verifying the spatial resolution quality of all images within a data
International Nuclear Information System (INIS)
Havemann, Frank; Heinz, Michael; Struck, Alexander; Gläser, Jochen
2011-01-01
We propose a new local, deterministic and parameter-free algorithm that detects fuzzy and crisp overlapping communities in a weighted network and simultaneously reveals their hierarchy. Using a local fitness function, the algorithm greedily expands natural communities of seeds until the whole graph is covered. The hierarchy of communities is obtained analytically by calculating resolution levels at which communities grow rather than numerically by testing different resolution levels. This analytic procedure is not only more exact than its numerical alternatives such as LFM and GCE but also much faster. Critical resolution levels can be identified by searching for intervals in which large changes of the resolution do not lead to growth of communities. We tested our algorithm on benchmark graphs and on a network of 492 papers in information science. Combined with a specific post-processing, the algorithm gives much more precise results on LFR benchmarks with high overlap compared to other algorithms and performs very similarly to GCE
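A sketch of the greedy local expansion the abstract builds on, using the LFM-style local fitness f(C) = k_in/(k_in + k_out)^α (the paper's analytic computation of resolution levels and of the community hierarchy is not reproduced):

```python
def expand_community(adj, seed, alpha=1.0):
    """Greedily grow a natural community around a seed by adding any
    frontier node that increases the local fitness."""
    def fitness(C):
        k_in = sum(1 for v in C for u in adj[v] if u in C)      # each edge twice
        k_out = sum(1 for v in C for u in adj[v] if u not in C)
        return k_in / (k_in + k_out) ** alpha if (k_in + k_out) else 0.0

    C = {seed}
    improved = True
    while improved:
        improved = False
        frontier = {u for v in C for u in adj[v]} - C
        for u in sorted(frontier):
            if fitness(C | {u}) > fitness(C):
                C.add(u)
                improved = True
    return C

# Two triangles joined by one edge: expansion from node 0 stays in its triangle.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(expand_community(adj, 0))   # {0, 1, 2}
```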
Directory of Open Access Journals (Sweden)
Hans Schonemann
1996-12-01
Some algorithms for singularity theory and algebraic geometry. The use of Gröbner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Gröbner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Gröbner bases can be found in the textbook [CLO], an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Gröbner basis algorithm. For a complete description of SINGULAR see [Si].
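As a small taste of the workflow (using SymPy rather than SINGULAR, and plain Gröbner bases rather than the more general standard bases and monomial orderings the paper treats):

```python
# Requires: pip install sympy
from sympy import groebner, symbols

x, y = symbols("x y")
# Intersection of a circle and a parabola, given as an ideal.
G = groebner([x**2 + y**2 - 1, y - x**2], x, y, order="lex")
print(G)
# The lex-order basis eliminates x, leaving a univariate polynomial in y that
# can be solved and back-substituted -- the elimination pattern that
# standard-basis computations carry out in far greater generality.
```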
A New Modified Firefly Algorithm
Directory of Open Access Journals (Sweden)
Medha Gupta
2016-07-01
Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such new swarm-based metaheuristic algorithm, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been successfully used for solving various optimization problems. In this work, we propose a new modified version of the Firefly algorithm (MoFA), and its performance is then compared with the standard firefly algorithm along with various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.
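The modification introduced by MoFA is not detailed in the abstract; for reference, one sweep of the standard firefly update it starts from looks like this (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
sphere = lambda x: float(np.sum(x ** 2))        # toy objective (minimise)

def firefly_sweep(X, f, beta0=1.0, gamma=1.0, alpha=0.05):
    """Standard firefly update: every firefly moves toward each brighter
    (lower-f) one, with attractiveness decaying as exp(-gamma r^2)."""
    intensity = np.array([f(x) for x in X])
    for i in range(len(X)):
        for j in range(len(X)):
            if intensity[j] < intensity[i]:
                beta = beta0 * np.exp(-gamma * np.sum((X[i] - X[j]) ** 2))
                X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(X.shape[1]) - 0.5)
                intensity[i] = f(X[i])
    return X

X = rng.uniform(-4, 4, size=(15, 2))
for _ in range(100):
    X = firefly_sweep(X, sphere)
print(min(sphere(x) for x in X))   # small; the noise term sets the floor
```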
A Fast DOA Estimation Algorithm Based on Polarization MUSIC
Directory of Open Access Journals (Sweden)
R. Guo
2015-04-01
A fast DOA estimation algorithm developed from MUSIC, which also benefits from the processing of the signals' polarization information, is presented. Besides performance enhancement in precision and resolution, the proposed algorithm can be applied to various forms of polarization-sensitive arrays, without specific requirements on the array's pattern. By exploiting the continuity of the spatial spectrum, a huge amount of the computation incurred in calculating the 4-D spatial spectrum is averted. Performance and computational complexity analyses of the proposed algorithm are discussed and simulation results are presented. Compared with conventional MUSIC, the proposed algorithm has a considerable advantage in precision and resolution, with a low computational complexity comparable to that of conventional 2-D MUSIC.
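The polarimetric extension cannot be reconstructed from the abstract; the classical MUSIC pseudospectrum it builds on, for a plain uniform linear array, is sketched below:

```python
import numpy as np

rng = np.random.default_rng(3)

M = 8                                           # half-wavelength ULA elements
a = lambda th: np.exp(1j * np.pi * np.arange(M) * np.sin(th))   # steering vector

# Two uncorrelated sources at -10 and 25 degrees, 400 snapshots, light noise.
angles = np.deg2rad([-10.0, 25.0])
A = np.column_stack([a(th) for th in angles])
S = rng.standard_normal((2, 400)) + 1j * rng.standard_normal((2, 400))
X = A @ S + 0.1 * (rng.standard_normal((M, 400)) + 1j * rng.standard_normal((M, 400)))
R = X @ X.conj().T / 400                        # sample covariance

# MUSIC: noise subspace = eigenvectors of the M - n_sources smallest eigenvalues.
_, V = np.linalg.eigh(R)
En = V[:, :M - 2]
grid = np.deg2rad(np.linspace(-90, 90, 721))
P = np.array([1.0 / np.linalg.norm(En.conj().T @ a(th)) ** 2 for th in grid])

peaks = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
best = sorted(peaks, key=lambda i: -P[i])[:2]
print(sorted(np.rad2deg(grid[i]) for i in best))   # approximately [-10, 25]
```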
International Nuclear Information System (INIS)
Dinev, D.
1996-01-01
Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.)
Automated conflict resolution issues
Wike, Jeffrey S.
1991-01-01
A discussion is presented of how conflicts for Space Network resources should be resolved in the ATDRSS era. The following topics are presented: a description of how resource conflicts are currently resolved; a description of issues associated with automated conflict resolution; present conflict resolution strategies; and topics for further discussion.
Institute of Scientific and Technical Information of China (English)
SHIH Timothy K; CHANG Rong-chi
2005-01-01
Image or video resources are often received in poor condition, mostly with noise or defects that make them hard to read. We propose an effective algorithm based on digital image inpainting. The mechanism can be used to restore images or video frames with a very high noise or defect ratio (e.g., 90%). The algorithm is based on the concept of image subdivision and estimation of color variations. Noise inside blocks of different sizes is inpainted with different levels of surrounding information. The results show that an almost unrecognizable image can be recovered with visually good results. The algorithm can be further extended to process motion pictures with a high percentage of noise.
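The block-subdivision scheme itself is the paper's contribution; as a point of comparison, off-the-shelf inpainting of a defect mask is a one-liner in OpenCV (the file names and the white-pixel damage convention below are assumptions chosen for the example):

import cv2

img = cv2.imread('damaged_frame.png')            # hypothetical input image
# Mask of defective pixels (nonzero = to be inpainted); here we assume, purely
# for illustration, that pure-white pixels mark the damage.
mask = cv2.inRange(img, (255, 255, 255), (255, 255, 255))

restored = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite('restored_frame.png', restored)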
Structure-Based Algorithms for Microvessel Classification
Smith, Amy F.
2015-02-01
© 2014 The Authors. Microcirculation published by John Wiley & Sons Ltd. Objective: Recent developments in high-resolution imaging techniques have enabled digital reconstruction of three-dimensional sections of microvascular networks down to the capillary scale. To better interpret these large data sets, our goal is to distinguish branching trees of arterioles and venules from capillaries. Methods: Two novel algorithms are presented for classifying vessels in microvascular anatomical data sets without requiring flow information. The algorithms are compared with a classification based on observed flow directions (considered the gold standard), and with an existing resistance-based method that relies only on structural data. Results: The first algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules. The second algorithm, developed for networks with multiple inlets and outlets, correctly identifies more arterioles and venules, but is more sensitive to parameter changes. Conclusions: The algorithms presented here can be used to classify microvessels in large microvascular data sets lacking flow information. This provides a basis for analyzing the distinct geometrical properties and modelling the functional behavior of arterioles, capillaries, and venules.
Level-0 trigger algorithms for the ALICE PHOS detector
Wang, D; Wang, Y P; Huang, G M; Kral, J; Yin, Z B; Zhou, D C; Zhang, F; Ullaland, K; Muller, H; Liu, L J
2011-01-01
The PHOS level-0 trigger provides a minimum bias trigger for p-p collisions and information for a level-1 trigger in both p-p and Pb-Pb collisions. There are two level-0 trigger generating algorithms under consideration: the Direct Comparison algorithm and the Weighted Sum algorithm. In order to study the trigger algorithms via simulation, a simplified equivalent model is extracted from the trigger electronics to derive the waveform function of the Analog-or signal as input to the trigger algorithms. Simulations show that the Weighted Sum algorithm can achieve higher trigger efficiency and provide more precise single-channel energy information than the Direct Comparison algorithm. An energy resolution of 9.75 MeV can be achieved with the Weighted Sum algorithm at a sampling rate of 40 Msps (mega samples per second) at 1 GeV. The timing performance at a sampling rate of 40 Msps with the Weighted Sum algorithm is better than that at a sampling rate of 20 Msps with either algorithm. The level-0 trigger can be delivered...
Sequential unconstrained minimization algorithms for constrained optimization
International Nuclear Information System (INIS)
Byrne, Charles
2008-01-01
The problem of minimizing a function $f(x): \mathbb{R}^J \to \mathbb{R}$, subject to constraints on the vector variable $x$, occurs frequently in inverse problems. Even without constraints, finding a minimizer of $f(x)$ may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the $k$th step we minimize the function $G_k(x) = f(x) + g_k(x)$ to obtain $x^k$. The auxiliary functions $g_k(x): D \subset \mathbb{R}^J \to \mathbb{R}_+$ are nonnegative on the set $D$, each $x^k$ is assumed to lie within $D$, and the objective is to minimize the continuous function $f: \mathbb{R}^J \to \mathbb{R}$ over $x$ in the set $C = \bar{D}$, the closure of $D$. We assume that such minimizers exist, and denote one such by $\hat{x}$. We assume that the functions $g_k(x)$ satisfy the inequalities $0 \le g_k(x) \le G_{k-1}(x) - G_{k-1}(x^{k-1})$ for $k = 2, 3, \ldots$. Using this assumption, we show that the sequence $\{f(x^k)\}$ is decreasing and converges to $f(\hat{x})$. If the restriction of $f(x)$ to $D$ has bounded level sets, which happens if $\hat{x}$ is unique and $f(x)$ is closed, proper and convex, then the sequence $\{x^k\}$ is bounded, and $f(x^*) = f(\hat{x})$ for any cluster point $x^*$. Therefore, if $\hat{x}$ is unique, $x^* = \hat{x}$ and $\{x^k\} \to \hat{x}$. When $\hat{x}$ is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton-Raphson method. The proof techniques used for SUMMA can be extended to obtain related results
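Among the special cases the abstract lists, the quadratic penalty method is the easiest to sketch: minimize $G_k(x) = f(x) + c_k\,p(x)$ with an increasing penalty weight $c_k$, where $p(x)$ vanishes exactly on the feasible set. A minimal illustration (the objective, constraint, and schedule for $c_k$ are assumptions chosen for the example):

import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2)**2 + (x[1] + 1)**2        # objective to minimize
p = lambda x: max(0.0, x[0] + x[1])**2             # squared-hinge penalty, zero iff x0 + x1 <= 0

x = np.zeros(2)
for c in [1, 10, 100, 1e3, 1e4]:                   # increasing penalty weights c_k
    # One unconstrained subproblem G_k(x) = f(x) + c_k * p(x), warm-started at x^{k-1}.
    x = minimize(lambda z: f(z) + c * p(z), x, method='BFGS').x
print(x)   # tends to the constrained minimizer (1.5, -1.5) on the line x0 + x1 = 0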
Algorithms for parallel computers
International Nuclear Information System (INIS)
Churchhouse, R.F.
1985-01-01
Until relatively recently, almost all algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode; (ii) how close to optimal such adapted algorithms can be and, where relevant, what the convergence criteria are; (iii) how we can design new algorithms specifically for parallel systems; (iv) for multi-processor systems, how we can handle the software aspects of the interprocessor communications. Aspects of these questions, illustrated by examples, are considered in these lectures. (orig.)
Fluid structure coupling algorithm
International Nuclear Information System (INIS)
McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.
1980-01-01
A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm, and incorporates the treatment of complex free surfaces. The fluid-structure and coupling algorithms have been verified by calculating problems with known solutions from the literature and by comparison with air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed have been extended to three dimensions and implemented in the computer code PELE-3D.
Hockney, Roger
1987-01-01
Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations are shown in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, which was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems that are likely to be encountered, the prediction being read directly from the diagram without complex calculation.
Diagnostic Algorithm Benchmarking
Poll, Scott
2011-01-01
A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.
Inclusive Flavour Tagging Algorithm
International Nuclear Information System (INIS)
Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex
2016-01-01
Identifying the flavour of neutral B mesons at production is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capabilities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tagging the flavour of B mesons in any proton-proton experiment. (paper)
Unsupervised learning algorithms
Aydin, Kemal
2016-01-01
This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how, with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation has resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...
Vector Network Coding Algorithms
Ebrahimi, Javad; Fragouli, Christina
2010-01-01
We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role to coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...
Optimization algorithms and applications
Arora, Rajesh Kumar
2015-01-01
Choose the Correct Solution Method for Your Optimization Problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, the Broyden-Fletcher-Goldfarb-Shanno algorithm, the Powell method, penalty functions, the augmented Lagrange multiplier method, sequential quadratic programming, the method of feasible direc
From Genetics to Genetic Algorithms
Indian Academy of Sciences (India)
Genetic algorithms (GAs) are computational optimisation schemes with an ... The algorithms solve optimisation problems ..... Genetic Algorithms in Search, Optimisation and Machine Learning, Addison-Wesley Publishing Company, Inc. 1989.
Algorithmic Principles of Mathematical Programming
Faigle, Ulrich; Kern, Walter; Still, Georg
2002-01-01
Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear
Directory of Open Access Journals (Sweden)
Wang Zi Min
2016-01-01
Full Text Available With the development of social services and the further improvement of living standards, there is an urgent need for positioning technology that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in everyday life and production, such as logistics tracking, car alarms and security. Using RFID technology for localization is a new direction being pursued by various research institutions and scholars. RFID positioning systems offer stability, small errors and low cost, so their location algorithms are the focus of this study. This article analyzes, layer by layer, RFID positioning methods and algorithms. First, several common basic RFID positioning methods are introduced; secondly, higher-accuracy network-based location methods are discussed; finally, the LANDMARC algorithm is described. This shows that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, their deficiencies are pointed out, requirements for follow-up study are put forward, and a vision of better future RFID positioning technology is given.
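LANDMARC locates a target tag by comparing its received signal strength at several readers with that of fixed reference tags at known positions, then averaging the k nearest reference positions with weights inversely proportional to the squared signal-space distance. A minimal sketch (the reader/tag layout and the synthetic RSSI model are illustrative assumptions):

import numpy as np

def landmarc(target_rssi, ref_rssi, ref_pos, k=4):
    """target_rssi: (n_readers,), ref_rssi: (n_refs, n_readers), ref_pos: (n_refs, 2)."""
    E = np.linalg.norm(ref_rssi - target_rssi, axis=1)      # signal-space distances
    nearest = np.argsort(E)[:k]                             # k nearest reference tags
    w = 1.0 / (E[nearest]**2 + 1e-12)                       # inverse-square weights
    w /= w.sum()
    return w @ ref_pos[nearest]                             # weighted position estimate

# Toy example: 16 reference tags on a 4x4 grid, 3 readers with a log-distance RSSI model.
ref_pos = np.array([[i, j] for i in range(4) for j in range(4)], float)
readers = np.array([[0, 0], [3, 0], [1.5, 3]], float)
rssi = lambda p: -40 - 20 * np.log10(np.linalg.norm(readers - p, axis=1) + 0.1)
ref_rssi = np.array([rssi(p) for p in ref_pos])
print(landmarc(rssi(np.array([1.2, 2.3])), ref_rssi, ref_pos))  # estimate near (1.2, 2.3)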
Directory of Open Access Journals (Sweden)
Surafel Luleseged Tilahun
2012-01-01
Full Text Available The firefly algorithm is one of the new metaheuristic algorithms for optimization problems. The algorithm is inspired by the flashing behavior of fireflies. In the algorithm, randomly generated solutions are considered fireflies, and brightness is assigned depending on their performance on the objective function. One of the rules used to construct the algorithm is that a firefly will be attracted to a brighter firefly, and if there is no brighter firefly, it will move randomly. In this paper we modify this random movement of the brightest firefly by generating random directions in order to determine the direction in which the brightness increases most. If no such direction is generated, it remains in its current position. Furthermore, the assignment of attractiveness is modified in such a way that the effect of the objective function is magnified. Simulation results show that the modified firefly algorithm performs better than the standard one in finding the best solution with smaller CPU time.
Optimisation of centroiding algorithms for photon event counting imaging
International Nuclear Information System (INIS)
Suhling, K.; Airey, R.W.; Morgan, B.L.
1999-01-01
Approaches to photon event counting imaging, in which the output events of an image intensifier are located using a centroiding technique, have long been plagued by fixed pattern noise, in which a grid of dimensions similar to those of the CCD pixels is superimposed on the image. This is caused by a mismatch between the photon event shape and the centroiding algorithm. We have used hyperbolic cosine, Gaussian, Lorentzian and parabolic algorithms, as well as 3-, 5-, and 7-point centre-of-gravity algorithms, and hybrids thereof, to assess means of minimising this fixed pattern noise. We show that the fixed pattern noise generated by the widely used centre-of-gravity centroiding is due to intrinsic features of the algorithm. Our results confirm that the recently proposed use of Gaussian centroiding does indeed show a significant reduction of fixed pattern noise compared to centre-of-gravity centroiding (Michel et al., Mon. Not. R. Astron. Soc. 292 (1997) 611-620). However, the disadvantage of a Gaussian algorithm is a centroiding failure for small pulses, caused by a division by zero, which leads to a loss of detective quantum efficiency (DQE) and to small amounts of residual fixed pattern noise. Using both real data from an image intensifier system employing a progressive scan camera, framegrabber and PC, and synthetic data from Monte Carlo simulations, we find that hybrid centroiding algorithms can reduce the fixed pattern noise without loss of resolution or loss of DQE. Imaging a test pattern to assess the features of the different algorithms shows that a hybrid of the Gaussian and 3-point centre-of-gravity centroiding algorithms results in an optimum combination of low fixed pattern noise (lower than a simple Gaussian), high DQE, and high resolution. The Lorentzian algorithm gives the worst results in terms of high fixed pattern noise and low resolution, and the Gaussian and hyperbolic cosine algorithms have the lowest DQEs.
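For concreteness, on a 3-sample window around the brightest pixel the two families behave as follows: the centre-of-gravity estimate is linear in the intensities, while the Gaussian estimate works on their logarithms (and fails when a neighbouring sample is zero, the DQE loss noted above). A one-dimensional sketch of both estimators (these are the standard 3-point formulas, not the authors' hybrids):

import numpy as np

def cog3(im1, i0, ip1):
    """3-point centre-of-gravity offset (in pixels) from the central sample."""
    return (ip1 - im1) / (im1 + i0 + ip1)

def gauss3(im1, i0, ip1):
    """3-point Gaussian centroid; requires strictly positive samples."""
    lm1, l0, lp1 = np.log(im1), np.log(i0), np.log(ip1)
    return 0.5 * (lm1 - lp1) / (lm1 - 2.0 * l0 + lp1)

# A Gaussian pulse whose true peak sits 0.3 px to the right of the central sample:
xs = np.array([-1.0, 0.0, 1.0])
pulse = np.exp(-((xs - 0.3)**2) / (2 * 0.6**2))
print(cog3(*pulse), gauss3(*pulse))   # the Gaussian estimator recovers 0.3 exactly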
DEFF Research Database (Denmark)
N. Gordon, Jeffery; Ringe, Georg
2015-01-01
Bank resolution is a key pillar of the European Banking Union. This column argues that the current structure of large EU banks is not conducive to an effective and unbiased resolution procedure. The authors would require systemically important banks to reorganise into a 'holding company' structure, where the parent company holds unsecured term debt sufficient to cover losses at its operating financial subsidiaries. This would facilitate a 'single point of entry' resolution procedure, minimising the risk of creditor runs and destructive ring-fencing by national regulators.
Improved multivariate polynomial factoring algorithm
International Nuclear Information System (INIS)
Wang, P.S.
1978-01-01
A new algorithm for factoring multivariate polynomials over the integers, based on an algorithm by Wang and Rothschild, is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It has an algorithm for correctly predetermining leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timing are included
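The task the algorithm addresses, factoring a multivariate polynomial over the integers, is easy to demonstrate in a modern computer algebra system (this uses Python's sympy purely as an illustration of the problem, not the EEZ algorithm itself):

from sympy import expand, factor
from sympy.abc import x, y, z

# Expand a product of irreducible factors, then recover them by factoring.
p = expand((x + 2*y - 1) * (x*y + z**2) * (x - y + 3))
print(factor(p))   # recovers the three irreducible factors over the integers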
High-Resolution PET Detector. Final report
International Nuclear Information System (INIS)
Karp, Joel
2014-01-01
The objective of this project was to develop an understanding of the limits of performance for a high resolution PET detector using an approach based on continuous scintillation crystals rather than pixelated crystals. The overall goal was to design a high-resolution detector, which requires both high spatial resolution and high sensitivity for 511 keV gammas. Continuous scintillation detectors (Anger cameras) have been used extensively for both single-photon and PET scanners; however, these instruments were based on NaI(Tl) scintillators using relatively large, individual photomultipliers. In this project we investigated the potential of this type of detector technology to achieve higher spatial resolution through the use of improved scintillator materials and photosensors, and modification of the detector surface to optimize the light response function. We achieved an average spatial resolution of 3 mm for a 25-mm-thick continuous LYSO detector using a maximum-likelihood positioning algorithm and shallow slots cut into the entrance surface.
Resolution effects on the morphology of calcifications in digital mammograms
Energy Technology Data Exchange (ETDEWEB)
Kallergi, Maria; He, Li; Gavrielides, Marios; Heine, John; Clarke, Laurence P [Department of Radiology, College of Medicine, and H. Lee Moffitt Cancer Center and Research Institute at the University of South Florida, 12901 Bruce B. Downs Blvd., Box 17, Tampa, FL 33612 (United States)
1999-12-31
The development of computer-assisted diagnosis (CAD) techniques and direct digital mammography systems has generated significant interest in the effect of image resolution on the detection and classification (benign vs malignant) of mammographic abnormalities. CAD in particular seems to depend heavily on image resolution, either due to the inherent algorithm design and optimization, which is almost always resolution-dependent, or due to the differences in image content at the various resolutions. This twofold dependence makes it even more difficult to answer the question of what minimum resolution is required for successful detection and/or classification of a specific mammographic abnormality, such as calcifications. One may begin by evaluating the losses in the mammograms as the films are digitized with different pixel sizes and depths. In this paper we attempted to measure these losses for the case of calcifications at four different spatial resolutions, through a simulation model and a classification scheme based only on morphological features. The results showed that a 60 µm pixel size and 12 bits per pixel should at least be used if the morphology and distribution of the calcifications are essential components of the CAD algorithm design. These conclusions were tested with the use of a wavelet-based algorithm for the segmentation of simulated mammographic calcifications at various resolutions. The evaluation of the segmentation through shape analysis and classification supported the initial conclusion. (authors) 14 refs., 1 tab.
Resolution Enhancement Method Used for Force Sensing Resistor Array
Directory of Open Access Journals (Sweden)
Karen Flores De Jesus
2015-01-01
Full Text Available Tactile sensors are among the major devices that enable robotic systems to interact with the surrounding environment. This research aims to propose a mathematical model describing the behavior of a tactile sensor, based on experimental and statistical analyses, and moreover to develop a versatile algorithm that can be applied to different tactile sensor arrays to enhance their limited resolution. With the proposed algorithm, the resolution can be increased up to twenty times if multiple measurements are available. To verify that the proposed algorithm can be used for tactile sensor arrays employed in robotic systems, a 16×10 force sensing resistor (FSR) array is adopted. The acquired two-dimensional measurements were processed by a resolution enhancement method (REM), which can be used to improve the resolution of a single image or of multiple measurements. As a result, the resolution of the sensor is increased and it can be used as synthetic skin to identify accurate shapes of objects and applied forces.
A fast autofocus algorithm for synthetic aperture radar processing
DEFF Research Database (Denmark)
Dall, Jørgen
1992-01-01
High-resolution synthetic aperture radar (SAR) imaging requires the motion of the radar platform to be known very accurately. Otherwise, phase errors are induced in the processing of the raw SAR data, and bad focusing results. In particular, a constant error in the measured along-track velocity o...... of magnitude lower than that of other algorithms providing comparable accuracies is presented. The algorithm has been tested on data from the Danish Airborne SAR, and the performance is compared with that of the traditional map drift algorithm...
Super-resolution of facial images in forensics scenarios
DEFF Research Database (Denmark)
Satiro, Joao; Nasrollahi, Kamal; Correia, Paulo
2015-01-01
-resolution (SR) algorithms might be used. But, the problem with these algorithms is that they mostly require motion estimation between LR and low-quality images which is not always practical. To deal with this, we first simply interpolate the LR input images and then perform motion estimation. The estimated...... motion parameters are then used in a non-local mean-based SR algorithm to produce a higher quality image. This image is further fused with the interpolated version of the reference image via an alpha-blending approach. The experimental results on benchmark datasets and locally collected videos from...
A hybrid genetic algorithm for the distributed permutation flowshop scheduling problem
Directory of Open Access Journals (Sweden)
Jian Gao
2011-08-01
Full Text Available The Distributed Permutation Flowshop Scheduling Problem (DPFSP) is a recently proposed scheduling problem that generalizes the classical permutation flowshop scheduling problem. The DPFSP is NP-hard in general, and studies on algorithms for solving it are at an early stage. In this paper, we propose a GA-based algorithm, denoted GA_LS, for solving this problem with the objective of minimizing the maximum completion time. In the proposed GA_LS, crossover and mutation operators are designed to suit the representation of DPFSP solutions, where a set of partial job sequences is employed. Furthermore, GA_LS utilizes an efficient local search method to explore neighboring solutions. The local search method uses three proposed rules that move jobs within a factory or between two factories. Intensive experiments on the benchmark instances, extended from the Taillard instances, are carried out. The results indicate that the proposed hybrid genetic algorithm obtains better solutions than all existing algorithms for the DPFSP: it obtains a better relative percentage deviation, and the differences in the results are statistically significant. Best-known solutions for most instances are also updated by our algorithm. Moreover, we show the efficiency of GA_LS by comparing it with similar genetic algorithms using the existing local search methods.
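Crossover operators for permutation-encoded schedules must produce valid permutations; the classic order crossover (OX) is a common choice. A minimal sketch of OX for a single job sequence (a generic operator shown for orientation, not the paper's DPFSP-specific design):

import random

def order_crossover(p1, p2, rng=random):
    """OX: copy a random slice from p1, fill the remaining slots in p2's order."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = p1[a:b + 1]                  # inherited slice from parent 1
    taken = set(child[a:b + 1])
    fill = [job for job in p2 if job not in taken]
    for i in list(range(0, a)) + list(range(b + 1, n)):
        child[i] = fill.pop(0)                    # remaining jobs in parent 2's order
    return child

random.seed(42)
print(order_crossover([0, 1, 2, 3, 4, 5], [5, 3, 1, 0, 4, 2]))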
High Resolution Elevation Contours
Minnesota Department of Natural Resources — This dataset contains contours generated from high-resolution data sources such as LiDAR. Generally speaking, the data has a contour interval of 2 feet or less.
Motion tolerant iterative reconstruction algorithm for cone-beam helical CT imaging
Energy Technology Data Exchange (ETDEWEB)
Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu [Hitachi Medical Corporation, Chiba-ken (Japan). CT System Div.
2011-07-01
We have developed a new advanced iterative reconstruction algorithm for cone-beam helical CT. The features of this algorithm are: (a) it uses the separable paraboloidal surrogate (SPS) technique as the foundation for reconstruction, to reduce noise and cone-beam artifacts; (b) it uses a view weight in the back-projection process to reduce motion artifacts. To confirm the improvement of our proposed algorithm over existing algorithms, such as the Feldkamp-Davis-Kress (FDK) and SPS algorithms, we compared motion artifact reduction, image noise reduction (standard deviation of CT number), and cone-beam artifact reduction on simulated and clinical data sets. Our results demonstrate that the proposed algorithm dramatically reduces motion artifacts compared with the SPS algorithm, and decreases image noise compared with the FDK algorithm. In addition, the proposed algorithm potentially improves the time resolution of iterative reconstruction. (orig.)
Ultra high resolution tomography
Energy Technology Data Exchange (ETDEWEB)
Haddad, W.S.
1994-11-15
Recent work and results on ultra-high-resolution three-dimensional imaging with soft X-rays will be presented. This work is aimed at determining the microscopic three-dimensional structure of biological and material specimens. Three-dimensional reconstructed images of a microscopic test object will be presented; the reconstruction has a resolution on the order of 1000 Å in all three dimensions. Preliminary work with biological samples will also be shown, and the experimental and numerical methods used will be discussed.
High resolution positron tomography
International Nuclear Information System (INIS)
Brownell, G.L.; Burnham, C.A.
1982-01-01
The limits of spatial resolution in practical positron tomography are examined. The four factors that limit spatial resolution are: positron range; small-angle deviation; detector dimensions and properties; and statistics. Of these factors, positron range may be considered the fundamental physical limitation, since it is independent of instrument properties. The other factors are, to a greater or lesser extent, dependent on the design of the tomograph.
Scalable Resolution Display Walls
Leigh, Jason; Johnson, Andrew; Renambot, Luc; Peterka, Tom; Jeong, Byungil; Sandin, Daniel J.; Talandis, Jonas; Jagodic, Ratko; Nam, Sungwon; Hur, Hyejung; Sun, Yiwen
2013-01-01
This article describes the progress since 2000 on research and development in 2-D and 3-D scalable-resolution display walls that are built by tiling individual lower-resolution flat panel displays. The article describes approaches and trends in display hardware construction, middleware architecture, and user-interaction design, and highlights examples of use cases and the benefits the technology has brought to the respective disciplines.
An efficient community detection algorithm using greedy surprise maximization
International Nuclear Information System (INIS)
Jiang, Yawen; Jia, Caiyan; Yu, Jian
2014-01-01
Community detection is an important and crucial problem in complex network analysis. Although classical modularity-optimization approaches are widely used for identifying communities, the modularity function (Q) suffers from a resolution limit. Recently, the surprise function (S) was experimentally shown to perform better than the Q function. However, until now there has been no algorithm available that searches directly for maximal surprise values. In this paper, considering the superiority of the S function over the Q function, we propose an efficient community detection algorithm called AGSO (algorithm based on greedy surprise optimization) and its improved version FAGSO (fast-AGSO), which are based on greedy surprise optimization and do not suffer from the resolution limit. In addition, (F)AGSO does not need the number of communities K to be specified in advance. Tests on experimental networks show that (F)AGSO is able to detect optimal partitions in both simple and more complex networks. Moreover, algorithms based on surprise maximization perform better than algorithms based on modularity maximization, including Blondel-Guillaume-Lambiotte-Lefebvre (BGLL), Clauset-Newman-Moore (CNM) and other state-of-the-art algorithms such as Infomap, the order statistics local optimization method (OSLOM) and the label propagation algorithm (LPA). (paper)
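Greedy agglomerative optimization of a quality function is the same template AGSO follows, with surprise in place of modularity. The modularity-based greedy approach (CNM) that the paper benchmarks against is available in networkx; surprise optimization itself is not, so this only illustrates the baseline:

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.karate_club_graph()                       # classic benchmark network
comms = greedy_modularity_communities(G)         # CNM-style greedy merging
print([sorted(c) for c in comms])
print('Q =', modularity(G, comms))               # value of the quality function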
FPGA Online Tracking Algorithm for the PANDA Straw Tube Tracker
Liang, Yutie; Ye, Hua; Galuska, Martin J.; Gessler, Thomas; Kuhn, Wolfgang; Lange, Jens Soren; Wagner, Milan N.; Liu, Zhen'an; Zhao, Jingzhou
2017-06-01
A novel FPGA-based online tracking algorithm for helix track reconstruction in a solenoidal field, developed for the PANDA spectrometer, is described. Employing the Straw Tube Tracker detector with 4636 straw tubes, the algorithm includes a complex track finder and a track fitter. Implemented in VHDL, the algorithm is tested on a Xilinx Virtex-4 FX60 FPGA chip with different types of events, at different event rates. A processing time of 7 µs per event for an average of 6 charged tracks is obtained. The momentum resolution is about 3% (4%) for pt (pz) at 1 GeV/c. Compared to the algorithm running on a CPU chip (single-core Intel Xeon E5520 at 2.26 GHz), an improvement of 3 orders of magnitude in processing time is obtained. The algorithm can handle severe overlapping of events, which is typical for interaction rates above 10 MHz.
Resolution 1540 (2004) overview
International Nuclear Information System (INIS)
Kasprzyk, N.
2013-01-01
This series of slides presents Resolution 1540, its features and its implementation status. Resolution 1540 is a response to the risk that non-State actors may acquire, develop, or traffic in weapons of mass destruction and their means of delivery. It was adopted on 28 April 2004 by the U.N. Security Council with the unanimous support of its members. Resolution 1540 deals with the three kinds of weapons of mass destruction (nuclear, chemical and biological weapons) as well as 'related materials'. The resolution implies three sets of obligations: first, not to support non-State actors seeking weapons of mass destruction; second, to adopt national laws prohibiting non-State actors from dealing with weapons of mass destruction; and third, to enforce domestic controls to prevent the proliferation of nuclear, chemical or biological weapons and their means of delivery. Four working groups operated by the 1540 Committee have been established: implementation (coordinator: Germany); assistance (coordinator: France); international cooperation (interim coordinator: South Africa); and transparency and media outreach (coordinator: USA). While implementation of the resolution has continued to improve since 2004, much work remains to be done and the gravity of the threat remains considerable. (A.C.)
A Parallel Butterfly Algorithm
Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing
2014-01-01
The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions: using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1,024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively.
Directory of Open Access Journals (Sweden)
Hanns Holger Rutz
2016-11-01
Full Text Available Although the concept of algorithms was established long ago, its current topicality indicates a shift in the discourse. Classical definitions based on logic seem inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity, or the space of algorithmic agency. This is the space or the medium – following Luhmann's form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled 'extimate' writing process, human initiative and algorithmic speculation can no longer be clearly divided. An observation of the defining aspects of such a medium is attempted by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium I call reconfiguration, and it is indicated by this trajectory.
Institute of Scientific and Technical Information of China (English)
WANG ShunJin; ZHANG Hua
2007-01-01
Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with the Runge-Kutta algorithm and the symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
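For a linear system x' = Ax the idea is direct to sketch: the exact solution is x(t+h) = exp(hA) x(t), and truncating the Taylor series of the exponential at order N gives the Nth-order update. A minimal illustration (the harmonic-oscillator matrix, step size, and order are assumptions chosen for the example):

import numpy as np

def taylor_step(A, x, h, N):
    """One Nth-order Taylor update x <- (I + hA + ... + (hA)^N / N!) x
    for the linear system x' = A x."""
    acc, term = x.copy(), x.copy()
    for j in range(1, N + 1):
        term = (h / j) * (A @ term)    # builds (hA)^j / j! applied to x recursively
        acc += term
    return acc

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # harmonic oscillator: the norm is conserved exactly
x = np.array([1.0, 0.0])
for _ in range(1000):
    x = taylor_step(A, x, h=0.1, N=8)
print(x, np.hypot(*x))   # norm stays ≈ 1, i.e. little algorithm-induced dissipation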
Detection of algorithmic trading
Bogoev, Dimitar; Karam, Arzé
2017-10-01
We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
Handbook of Memetic Algorithms
Cotta, Carlos; Moscato, Pablo
2012-01-01
Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems. The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes. 'Handbook of Memetic Algorithms' organizes, in a structured way, all the most important results in the field of MAs since their earliest definition until now. A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, and uncertainties, is analysed separately and, for each problem, memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...
Algorithms in invariant theory
Sturmfels, Bernd
2008-01-01
J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.
CERN. Geneva; PUNZI, Giovanni
2015-01-01
Charged particle reconstruction is one of the most demanding computational tasks in HEP, and it is becoming increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving the long-term goal of making track reconstruction happen transparently as part of the detector readout ('detector-embedded tracking'). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of the processing of visual images by the brain as it happens in nature (the 'RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies when this algorithm is implemented in specialized processors based on current state-of-the-art, high-speed/high-bandwidth digital devices.
Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing
2015-08-01
Among the various wavefront control algorithms used in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes through a pre-measured relational matrix between the deformable mirror actuators and the Hartmann wavefront sensor, with excellent real-time characteristics and stability. However, as the numbers of wavefront sensor sub-apertures and deformable mirror actuators in adaptive optics systems increase, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control performance of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltages of each actuator are obtained through iteration, which yields great advantages in computation and storage. For an AO system with thousands of actuators, the computational complexity is about O(n^2) to O(n^3) for the direct gradient wavefront control algorithm, and about O(n) to O(n^(3/2)) for the iterative wavefront control algorithm, where n is the number of actuators of the AO system. The more sub-apertures and deformable mirror actuators there are, the more significant the advantage the iterative wavefront control algorithm exhibits. Project supported by the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2), the National Natural Science Foundation of China (Grant No. 11173008), and the Sichuan Provincial Outstanding Youth Academic Technology Leaders Program, China (Grant No. 2012JQ0012).
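The contrast between the two approaches can be sketched in a few lines: the direct method multiplies the slope vector by a pre-computed reconstructor (pseudo-inverse) matrix, while an iterative method solves the normal equations of the slope-to-voltage problem with, for example, conjugate gradients, never forming the pseudo-inverse. A toy illustration (a random influence matrix stands in for a measured one):

import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n_slopes, n_act = 600, 400
A = rng.standard_normal((n_slopes, n_act))     # stand-in for the measured interaction matrix
s = rng.standard_normal(n_slopes)              # measured wavefront slopes

# Direct gradient approach: voltages = pseudo-inverse(A) @ slopes.
v_direct = np.linalg.pinv(A) @ s

# Iterative approach: solve the normal equations A^T A v = A^T s by conjugate gradients.
v_iter, info = cg(A.T @ A, A.T @ s)
print(info, np.linalg.norm(v_iter - v_direct) / np.linalg.norm(v_direct))  # small residual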
A novel structure-aware sparse learning algorithm for brain imaging genetics.
Du, Lei; Jingwen, Yan; Kim, Sungeun; Risacher, Shannon L; Huang, Heng; Inlow, Mark; Moore, Jason H; Saykin, Andrew J; Shen, Li
2014-01-01
Brain imaging genetics is an emergent research field where the association between genetic variations such as single nucleotide polymorphisms (SNPs) and neuroimaging quantitative traits (QTs) is evaluated. Sparse canonical correlation analysis (SCCA) is a bi-multivariate analysis method that has the potential to reveal complex multi-SNP-multi-QT associations. Most existing SCCA algorithms are designed using the soft threshold strategy, which assumes that the features in the data are independent from each other. This independence assumption usually does not hold in imaging genetic data, and thus inevitably limits the capability of yielding optimal solutions. We propose a novel structure-aware SCCA (denoted as S2CCA) algorithm to not only eliminate the independence assumption for the input data, but also incorporate group-like structure in the model. Empirical comparison with a widely used SCCA implementation, on both simulated and real imaging genetic data, demonstrated that S2CCA could yield improved prediction performance and biologically meaningful findings.
Named Entity Linking Algorithm
Directory of Open Access Journals (Sweden)
M. F. Panteleev
2017-01-01
Full Text Available In natural language text processing, Named Entity Linking (NEL) is the task of identifying an entity found in the text and linking it with an entity in a knowledge base (for example, DBpedia). Currently, there is a diversity of approaches to this problem, but two main classes can be identified: graph-based approaches and machine learning-based ones. An algorithm combining the graph and machine learning approaches is proposed, in accordance with the stated assumptions about the interrelations of named entities in a sentence and in general. In the case of graph-based approaches, it is necessary to identify an optimal set of related entities according to some metric that characterizes the distance between these entities in a graph built on some knowledge base. Due to limitations in processing power, solving this task directly is impossible, so a modification is proposed. Based on machine learning algorithms alone, an independent solution cannot be built due to the small volumes of training datasets relevant to the NEL task; however, their use can contribute to improving the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed in order to obtain a measure of the compatibility of attributes of various entities encountered in one context. The efficiency of the proposed algorithm was tested experimentally. A test dataset was independently generated, and on its basis the proposed algorithm was compared with the open-source product DBpedia Spotlight, which solves the NEL problem. The mock-up based on the proposed algorithm was slower than DBpedia Spotlight but showed higher accuracy, which indicates the prospects for work in this direction. The main directions of future development are proposed in order to increase the accuracy and performance of the system.
Fokkinga, M.M.
1992-01-01
An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as
A cluster algorithm for graphs
S. van Dongen
2000-01-01
A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight)
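The MCL process alternates two operations on a column-stochastic matrix: expansion (matrix powering, which spreads flow) and inflation (elementwise powering plus renormalization, which strengthens strong currents). A minimal numpy sketch of the iteration (the adjacency matrix, self-loops, and parameter choices are the usual defaults, assumed here):

import numpy as np

def mcl(adj, expansion=2, inflation=2.0, iters=50):
    """Markov Cluster process on an undirected adjacency matrix."""
    M = adj + np.eye(len(adj))                      # add self-loops
    M = M / M.sum(axis=0)                           # make columns stochastic
    for _ in range(iters):
        M = np.linalg.matrix_power(M, expansion)    # expansion
        M = M ** inflation                          # inflation ...
        M = M / M.sum(axis=0)                       # ... and renormalization
    # Rows that retain mass are cluster "attractors"; the nodes they attract form a cluster.
    return [np.nonzero(row > 1e-6)[0].tolist() for row in M if row.max() > 1e-6]

adj = np.array([[0,1,1,0,0,0],
                [1,0,1,0,0,0],
                [1,1,0,1,0,0],
                [0,0,1,0,1,1],
                [0,0,0,1,0,1],
                [0,0,0,1,1,0]], float)
print(mcl(adj))   # the two triangles emerge as two clusters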
Algorithms for Reinforcement Learning
Szepesvari, Csaba
2010-01-01
Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'
Animation of planning algorithms
Sun, Fan
2014-01-01
Planning is the process of creating a sequence of steps/actions that will satisfy the goal of a problem. The partial order planning (POP) algorithm is one Artificial Intelligence approach to problem planning. Through the G52PAS module, I found that it is difficult for students to understand this planning algorithm by just reading its pseudo code and doing written exercises. Students cannot see clearly how each actual step works and might miss some steps because of their confusion. ...
Secondary Vertex Finder Algorithm
Heer, Sebastian; The ATLAS collaboration
2017-01-01
If a jet originates from a b-quark, a b-hadron is formed during the fragmentation process. In its dominant decay modes, the b-hadron decays into a c-hadron via the electroweak interaction. Both b- and c-hadrons have lifetimes long enough, to travel a few millimetres before decaying. Thus displaced vertices from b- and subsequent c-hadron decays provide a strong signature for a b-jet. Reconstructing these secondary vertices (SV) and their properties is the aim of this algorithm. The performance of this algorithm is studied with tt̄ events, requiring at least one lepton, simulated at 13 TeV.
Parallel Algorithms and Patterns
Energy Technology Data Exchange (ETDEWEB)
Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of such problems include sorting, searching, optimization, and matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are reductions, prefix scans, and ghost cell updates. We only touch on parallel patterns in this presentation; the topic really deserves its own detailed discussion, which Gabe Rockefeller would like to develop.
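Of the patterns named above, the prefix scan is the easiest to sketch: the Hillis-Steele formulation computes all prefix sums in log2(n) data-parallel passes, each combining elements at a doubling stride. A vectorized numpy rendering of those passes (an illustration, not taken from the presentation):

import numpy as np

def inclusive_scan(a):
    """Hillis-Steele inclusive prefix sum: log2(n) vectorized passes."""
    out = a.astype(float).copy()
    stride = 1
    while stride < len(out):
        shifted = np.concatenate([np.zeros(stride), out[:-stride]])  # value stride slots back
        out = out + shifted          # each pass doubles the span already summed
        stride *= 2
    return out

a = np.arange(1, 9)
print(inclusive_scan(a))             # matches np.cumsum(a)
print(np.cumsum(a))                  # reference result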
Randomized Filtering Algorithms
DEFF Research Database (Denmark)
Katriel, Irit; Van Hentenryck, Pascal
2008-01-01
of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed... in the expected sense. The second scheme is a Las Vegas algorithm using filtering triggers: its effectiveness is the same as enforcing arc consistency after every domain event, while in the expected case it is faster by a factor of m/n, where n and m are, respectively, the number of nodes and edges
Resolution enhancement of tri-stereo remote sensing images by super resolution methods
Tuna, Caglayan; Akoguz, Alper; Unal, Gozde; Sertel, Elif
2016-10-01
Super resolution (SR) refers to the generation of a high-resolution (HR) image from a decimated, blurred, low-resolution (LR) image set, which can be either a single frame or a multi-frame collection of several images acquired from slightly different views of the same observation area. In this study, we propose a novel application of tri-stereo remote sensing (RS) satellite images to the super resolution problem. Since the tri-stereo RS images of the same observation area are acquired from three different viewing angles along the flight path of the satellite, these RS images are well suited to an SR application. We first estimate the registration between the chosen reference LR image and the other LR images to calculate the sub-pixel shifts among the LR images. Then, the warping, blurring and downsampling operators are created as sparse matrices to avoid the high memory and computational requirements that would otherwise make the RS-SR solution impractical. Finally, the overall system matrix, constructed from the obtained operator matrices, is used to estimate the HR image in one step in each iteration of the SR algorithm. Both Laplacian and total variation regularizers are incorporated separately into our algorithm, and the results demonstrate improved quantitative performance against the standard interpolation method as well as improved qualitative results according to expert evaluations.
Energy Technology Data Exchange (ETDEWEB)
Morhac, M. E-mail: fyzimiro@savba.sk, fyzimiro@flnr.jinr.ru; Matousek, V. E-mail: matousek@savba.sk; Kliman, J.; Krupa, L.L.; Jandel, M
2003-04-21
Efficient algorithms to analyze multiparameter γ-ray spectra are presented. They allow one to search for peaks, to separate peaks from background, to improve the resolution, and to fit 1-, 2-, and 3-parameter γ-ray spectra.
Information content of ozone retrieval algorithms
Rodgers, C.; Bhartia, P. K.; Chu, W. P.; Curran, R.; Deluisi, J.; Gille, J. C.; Hudson, R.; Mateer, C.; Rusch, D.; Thomas, R. J.
1989-01-01
The algorithms used for production processing by the major suppliers of ozone data are characterized, to show quantitatively: how the retrieved profile is related to the actual profile (this characterizes the altitude range and vertical resolution of the data); the nature of systematic errors in the retrieved profiles, including their vertical structure and relation to uncertain instrumental parameters; how trends in the real ozone are reflected in trends in the retrieved ozone profile; and how trends in other quantities (both instrumental and atmospheric) might appear as trends in the ozone profile. No serious deficiencies were found in the algorithms used in generating the major available ozone data sets. As the measurements are all indirect in some way, and the retrieved profiles have different characteristics, data from different instruments are not directly comparable.
An Ordering Linear Unification Algorithm
Institute of Scientific and Technical Information of China (English)
胡运发
1989-01-01
In this paper, we present an ordering linear unification algorithm (OLU). A new idea on substitution of the binding terms is introduced to the algorithm, which is able to overcome some drawbacks of other algorithms, e.g., the MM algorithm [1] and the RG1 and RG2 algorithms [2]. In particular, if we use directed cyclic graphs, the algorithm need not check the binding order; the OLU algorithm can then also be applied to infinite tree data structures, and a higher efficiency can be expected. The paper focuses upon the discussion of the OLU algorithm and a partial order structure with respect to the unification algorithm. The algorithm has been implemented in the GKD-PROLOG/VAX 780 interpreting system. Experimental results have shown that the algorithm is simple and efficient.
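As a baseline for comparison, classical Robinson-style unification with an explicit substitution and an occurs check can be written compactly; the OLU ordering refinements are not reproduced here. A minimal sketch (terms encoded as nested tuples, variables as strings starting with '?'):

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    while is_var(t) and t in subst:      # follow chains of variable bindings
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t)

def unify(a, b, subst=None):
    """Return a most general unifier as a dict, or None on failure."""
    subst = {} if subst is None else subst
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return None if occurs(a, b, subst) else {**subst, a: b}
    if is_var(b):
        return unify(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# f(X, g(Y)) unified with f(a, g(X)) yields {'?X': 'a', '?Y': 'a'}.
print(unify(('f', '?X', ('g', '?Y')), ('f', 'a', ('g', '?X'))))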
A multiresolution remeshed Vortex-In-Cell algorithm using patches
DEFF Research Database (Denmark)
Rasmussen, Johannes Tophøj; Cottet, Georges-Henri; Walther, Jens Honore
2011-01-01
We present a novel multiresolution Vortex-In-Cell algorithm using patches of varying resolution. The Poisson equation relating the fluid vorticity and velocity is solved using Fast Fourier Transforms subject to free space boundary conditions. Solid boundaries are implemented using the semi...
Deep Learning based Super-Resolution for Improved Action Recognition
DEFF Research Database (Denmark)
Nasrollahi, Kamal; Guerrero, Sergio Escalera; Rasti, Pejman
2015-01-01
with results of a state-of-the-art deep learning-based super-resolution algorithm, through an alpha-blending approach. The experimental results obtained on a down-sampled version of a large subset of the Hollywood2 benchmark database show the importance of the proposed system in increasing the recognition rate...
Effects of pose and image resolution on automatic face recognition
Mahmood, Zahid; Ali, Tauseef; Khan, Samee U.
The popularity of face recognition systems has increased due to their use in widespread applications. Driven by the enormous number of potential application domains, several algorithms have been proposed for face recognition. Face pose and image resolution are two important factors that
New Optimization Algorithms in Physics
Hartmann, Alexander K
2004-01-01
Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.
Multiangle Implementation of Atmospheric Correction (MAIAC): 2. Aerosol Algorithm
Lyapustin, A.; Wang, Y.; Laszlo, I.; Kahn, R.; Korkin, S.; Remer, L.; Levy, R.; Reid, J. S.
2011-01-01
The aerosol component of a new multiangle implementation of atmospheric correction (MAIAC) algorithm is presented. MAIAC is a generic algorithm developed for the Moderate Resolution Imaging Spectroradiometer (MODIS), which performs aerosol retrievals and atmospheric correction over both dark vegetated surfaces and bright deserts based on a time series analysis and image-based processing. The MAIAC look-up tables explicitly include surface bidirectional reflectance. The aerosol algorithm derives the spectral regression coefficient (SRC) relating surface bidirectional reflectance in the blue (0.47 micron) and shortwave infrared (2.1 micron) bands; this quantity is prescribed in the MODIS operational Dark Target algorithm based on a parameterized formula. The MAIAC aerosol products include aerosol optical thickness and a fine-mode fraction at a resolution of 1 km. This high resolution, required in many applications such as air quality, brings new information about aerosol sources and, potentially, their strength. AERONET validation shows that the MAIAC and MOD04 algorithms have similar accuracy over dark and vegetated surfaces and that MAIAC generally improves accuracy over brighter surfaces due to the SRC retrieval and explicit bidirectional reflectance factor characterization, as demonstrated for several U.S. West Coast AERONET sites. Due to its generic nature and developed angular correction, MAIAC performs aerosol retrievals over bright deserts, as demonstrated for the Solar Village Aerosol Robotic Network (AERONET) site in Saudi Arabia.
Algorithms of image processing in nuclear medicine
International Nuclear Information System (INIS)
Oliveira, V.A.
1990-01-01
The problem of image restoration from noisy measurements as encountered in Nuclear Medicine is considered. A new approach for treating the measurements wherein they are represented by a spatial noncausal interaction model prior to maximum entropy restoration is given. This model describes the statistical dependence among the image values and their neighbourhood. The particular application of the algorithms presented here relates to gamma ray imaging systems, and is aimed at improving the resolution-noise suppression product. Results for actual gamma camera data are presented and compared with more conventional techniques. (author)
A propositional CONEstrip algorithm
E. Quaeghebeur (Erik); A. Laurent; O. Strauss; B. Bouchon-Meunier; R.R. Yager (Ronald)
2014-01-01
We present a variant of the CONEstrip algorithm for checking whether the origin lies in a finitely generated convex cone that can be open, closed, or neither. This variant is designed to deal efficiently with problems where the rays defining the cone are specified as linear combinations
Modular Regularization Algorithms
DEFF Research Database (Denmark)
Jacobsen, Michael
2004-01-01
The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into indepen...
Indian Academy of Sciences (India)
Shortest path problems: road networks on cities, where we want to navigate between cities. ..... The rest of the talk: computing connectivities between all pairs of vertices; a good algorithm with respect to both space and time to compute the exact solution. ...
The Copenhagen Triage Algorithm
DEFF Research Database (Denmark)
Hasselbalch, Rasmus Bo; Plesner, Louis Lind; Pries-Heje, Mia
2016-01-01
is non-inferior to an existing triage model in a prospective randomized trial. METHODS: The Copenhagen Triage Algorithm (CTA) study is a prospective two-center, cluster-randomized, cross-over, non-inferiority trial comparing CTA to the Danish Emergency Process Triage (DEPT). We include patients ≥16 years...
de Casteljau's Algorithm Revisited
DEFF Research Database (Denmark)
Gravesen, Jens
1998-01-01
It is demonstrated how all the basic properties of Bezier curves can be derived swiftly and efficiently without any reference to the Bernstein polynomials and essentially with only geometric arguments. This is achieved by viewing one step in de Casteljau's algorithm as an operator (the de Casteljau...
Algorithms in ambient intelligence
Aarts, E.H.L.; Korst, J.H.M.; Verhaegh, W.F.J.; Weber, W.; Rabaey, J.M.; Aarts, E.
2005-01-01
We briefly review the concept of ambient intelligence and discuss its relation with the domain of intelligent algorithms. By means of four examples of ambient intelligent systems, we argue that new computing methods and quantification measures are needed to bridge the gap between the class of
General Algorithm (High level)
Indian Academy of Sciences (India)
General Algorithm (High level). Iteratively: use the Tightness Property to remove points of P1, ..., Pi; use random sampling to get a random sample (of enough points) from the next largest cluster, Pi+1; use the Random Sampling Procedure to approximate ci+1 using the ...
Comprehensive eye evaluation algorithm
Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.
2016-03-01
In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.
Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko
2013-01-01
In biomolecular systems (especially all-atom models) with many degrees of freedom, such as proteins and nucleic acids, there exists an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in these local energy minima. Enhanced conformational sampling techniques are thus in great demand. A simulation in a generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely the multicanonical algorithm, simulated tempering, and the replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
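Of the three methods reviewed, the replica-exchange method is perhaps the simplest to sketch: several replicas run Metropolis sampling at different temperatures and periodically attempt to swap configurations using the standard exchange acceptance rule. The toy one-dimensional double-well potential, temperature ladder, and step size below stand in for a biomolecular force field and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
U = lambda x: (x**2 - 1.0) ** 2            # toy double-well potential energy
temps = np.array([0.1, 0.3, 1.0, 3.0])     # temperature ladder (illustrative)
betas = 1.0 / temps
x = rng.normal(size=temps.size)            # one walker per replica

for sweep in range(5000):
    # ordinary Metropolis moves within each replica
    for i in range(temps.size):
        prop = x[i] + rng.normal(scale=0.5)
        d_u = U(prop) - U(x[i])
        if rng.random() < np.exp(min(0.0, -betas[i] * d_u)):
            x[i] = prop
    # attempt exchanges between neighboring temperatures
    for i in range(temps.size - 1):
        log_p = (betas[i] - betas[i + 1]) * (U(x[i]) - U(x[i + 1]))
        if rng.random() < np.exp(min(0.0, log_p)):
            x[i], x[i + 1] = x[i + 1], x[i]

# Samples at the lowest temperature now cross between the two wells; histogram
# reweighting (as described above) would turn such runs into canonical
# averages as functions of temperature.
```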
DEFF Research Database (Denmark)
This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all...
Optimal Quadratic Programming Algorithms
Dostal, Zdenek
2009-01-01
Quadratic programming (QP) is one technique that allows for the optimization of a quadratic function in several variables in the presence of linear constraints. This title presents various algorithms for solving large QP problems. It is suitable as an introductory text on quadratic programming for graduate students and researchers
Super-resolution for scanning light stimulation systems
Energy Technology Data Exchange (ETDEWEB)
Bitzer, L. A.; Neumann, K.; Benson, N., E-mail: niels.benson@uni-due.de; Schmechel, R. [Faculty of Engineering, NST and CENIDE, University of Duisburg-Essen, Bismarckstr. 81, 47057 Duisburg (Germany)
2016-09-15
Super-resolution (SR) is a technique used in digital image processing to overcome the resolution limitation of imaging systems. In this process, a single high resolution image is reconstructed from multiple low resolution images. SR is commonly used for CCD and CMOS (Complementary Metal-Oxide-Semiconductor) sensor images, as well as for medical applications, e.g., magnetic resonance imaging. Here, we demonstrate that super-resolution can be applied to scanning light stimulation (LS) systems, which are commonly used to obtain space-resolved electro-optical parameters of a sample. For our purposes, the Projection Onto Convex Sets (POCS) algorithm was chosen and modified to suit the needs of LS systems. To demonstrate the SR adaptation, an Optical Beam Induced Current (OBIC) LS system was used. The POCS algorithm was optimized by means of OBIC short circuit current measurements on a multicrystalline solar cell, resulting in a mean square error reduction of up to 61% and improved image quality.
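For background, a generic POCS super-resolution loop of the kind the paper adapts can be sketched briefly: each iteration projects the current HR estimate onto the constraint set of one LR observation by spreading the residual back over the HR pixels that produced it. Integer shifts and plain block averaging are simplifying assumptions here; the LS-system-specific modifications of the paper are not reproduced.

```python
# Minimal POCS-style SR sketch; acquisition model is block averaging of a
# circularly shifted HR image (an illustrative assumption).
import numpy as np

def pocs_sr(lr_images, shifts, factor, n_iter=25):
    n_lr = lr_images[0].shape[0]
    x = np.kron(lr_images[0], np.ones((factor, factor)))  # initial HR guess
    for _ in range(n_iter):
        for im, (dy, dx) in zip(lr_images, shifts):
            xs = np.roll(np.roll(x, -dy, axis=0), -dx, axis=1)  # align to frame
            # simulate the LR acquisition by block averaging
            sim = xs.reshape(n_lr, factor, n_lr, factor).mean(axis=(1, 3))
            resid = im - sim
            # projection: adding the residual uniformly to each HR block makes
            # the simulated LR pixel match the observation with minimal change
            xs += np.kron(resid, np.ones((factor, factor)))
            x = np.roll(np.roll(xs, dy, axis=0), dx, axis=1)
    return x
```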
High resolution integral holography using Fourier ptychographic approach.
Li, Zhaohui; Zhang, Jianqi; Wang, Xiaorui; Liu, Delian
2014-12-29
An innovative approach is proposed for calculating high resolution computer-generated integral holograms by using the Fourier Ptychographic (FP) algorithm. The approach initializes a high resolution complex hologram with a random guess, and then stitches together low resolution multi-view images, synthesized from the elemental images captured by integral imaging (II), to recover the high resolution hologram through an iterative retrieval with FP constraints. This paper begins with an analysis of the principle of hologram synthesis from multi-projections, followed by an accurate determination of the constraints required in Fourier ptychographic integral-holography (FPIH). Next, the procedure of the approach is described in detail. Finally, optical reconstructions are performed and the results are demonstrated. Theoretical analysis and experiments show that our proposed approach can reconstruct 3D scenes with high resolution.
Temporal super resolution using variational methods
DEFF Research Database (Denmark)
Keller, Sune Høgild; Lauze, Francois Bernard; Nielsen, Mads
2010-01-01
Temporal super resolution (TSR) is the ability to convert video from one frame rate to another and is as such a key functionality in modern video processing systems. A higher frame rate than what is recorded is desired for high frame rate displays, for super slow-motion, and for video/film format...... observed when watching video on large and bright displays where the motion of high contrast edges often seem jerky and unnatural. A novel motion compensated (MC) TSR algorithm using variational methods for both optical flow calculation and the actual new frame interpolation is presented. The flow...
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Adabi, Saba; Nasiriavanaki, Mohammadreza
2018-01-01
Photoacoustic imaging (PAI) is an emerging medical imaging modality capable of providing the high spatial resolution of ultrasound (US) imaging and the high contrast of optical imaging. Delay-and-Sum (DAS) is the most common beamforming algorithm in PAI. However, using the DAS beamformer leads to low resolution images and considerable contribution of off-axis signals. A new paradigm namely Delay-Multiply-and-Sum (DMAS), which was originally used as a reconstruction algorithm in confocal microwave imaging...
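To make the DAS/DMAS contrast concrete, here is a hedged single-pixel sketch: DAS sums the delayed channel samples, while DMAS combines them pairwise after a signed square root. The linear array geometry, one-way photoacoustic delay model, and sampling parameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def beamform_point(rf, fs, elem_x, px, pz, c=1540.0):
    """rf: (n_elements, n_samples) channel data; returns (DAS, DMAS) outputs."""
    dist = np.sqrt((elem_x - px) ** 2 + pz ** 2)        # element-to-point distance
    idx = np.round(dist / c * fs).astype(int)           # one-way PA delays, samples
    idx = np.minimum(idx, rf.shape[1] - 1)              # guard the record length
    s = rf[np.arange(rf.shape[0]), idx]                 # delayed (aligned) samples
    das = s.sum()
    # DMAS: signed square root keeps units, then sum of pairwise products
    sr = np.sign(s) * np.sqrt(np.abs(s))
    dmas = 0.0
    for i in range(len(sr) - 1):
        dmas += np.sum(sr[i] * sr[i + 1:])
    return das, dmas
```

Scanning this over a pixel grid yields the DAS and DMAS images; the pairwise products are what give DMAS its lower sidelobes, at the cost of O(N^2) combinations per pixel.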
Three-Dimensional Photoacoustic Tomography using Delay Multiply and Sum Beamforming Algorithm
Paridar, Roya; Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi
2018-01-01
Photoacoustic imaging (PAI) is a promising medical imaging technique that provides the high contrast of optical imaging and the resolution of ultrasound (US) imaging. Among all the methods, three-dimensional (3D) PAI provides high resolution and accuracy. One of the most common algorithms for 3D PA image reconstruction is delay-and-sum (DAS). However, the quality of the reconstructed image obtained from this algorithm is not satisfying, having a high level of sidelobes and a wide mainlob...
Havemann, Frank; Heinz, Michael; Struck, Alexander; Gläser, Jochen
2010-01-01
We propose a new local, deterministic and parameter-free algorithm that detects fuzzy and crisp overlapping communities in a weighted network and simultaneously reveals their hierarchy. Using a local fitness function, the algorithm greedily expands natural communities of seeds until the whole graph is covered. The hierarchy of communities is obtained analytically by calculating resolution levels at which communities grow rather than numerically by testing different resolution levels. This ana...
High resolution solar observations
International Nuclear Information System (INIS)
Title, A.
1985-01-01
Currently there is a world-wide effort to develop optical technology required for large diffraction limited telescopes that must operate with high optical fluxes. These developments can be used to significantly improve high resolution solar telescopes both on the ground and in space. When looking at the problem of high resolution observations it is essential to keep in mind that a diffraction limited telescope is an interferometer. Even a 30 cm aperture telescope, which is small for high resolution observations, is a big interferometer. Meter class and above diffraction limited telescopes can be expected to be very unforgiving of inattention to details. Unfortunately, even when an earth based telescope has perfect optics there are still problems with the quality of its optical path. The optical path includes not only the interior of the telescope, but also the immediate interface between the telescope and the atmosphere, and finally the atmosphere itself
Directory of Open Access Journals (Sweden)
Adina FOLTIŞ
2012-01-01
Full Text Available The resolution, the termination, and the reduction of labour conscription are regulated by articles 1549-1554 of the new Civil Code, which represents the common law in this matter. We find that the new regulation does not conclusively clarify whether liability is a necessary condition for invoking resolution: under the previous regulation, this condition was inferred from the fact that, in the absence of liability, the question of non-execution shifts to the domain of fortuitous impossibility of execution, a situation in which the resolution of the contract is no longer at issue, but rather the risk it implies.
Benchmarking monthly homogenization algorithms
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
A maximum-likelihood reconstruction algorithm for tomographic gamma-ray nondestructive assay
International Nuclear Information System (INIS)
Prettyman, T.H.; Estep, R.J.; Cole, R.A.; Sheppard, G.A.
1994-01-01
A new tomographic reconstruction algorithm for nondestructive assay with high resolution gamma-ray spectroscopy (HRGS) is presented. The reconstruction problem is formulated using a maximum-likelihood approach in which the statistical structure of both the gross and continuum measurements used to determine the full-energy response in HRGS is precisely modeled. An accelerated expectation-maximization algorithm is used to determine the optimal solution. The algorithm is applied to safeguards and environmental assays of large samples (for example, 55-gal. drums) in which high continuum levels caused by Compton scattering are routinely encountered. Details of the implementation of the algorithm and a comparative study of the algorithm's performance are presented
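The multiplicative expectation-maximization update underlying such maximum-likelihood reconstructions can be sketched generically. This toy version assumes a dense random system matrix and a constant continuum/background term; it illustrates the Poisson EM step, not the accelerated HRGS-specific implementation of the paper.

```python
import numpy as np

def mlem(A, y, b, n_iter=100):
    """A: (n_meas, n_vox) response, y: measured counts, b: background estimate."""
    x = np.ones(A.shape[1])                  # flat nonnegative starting image
    sens = A.sum(axis=0)                     # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x + b                     # expected counts under current x
        x *= (A.T @ (y / proj)) / sens       # multiplicative EM update
    return x

# smoke test on synthetic Poisson data
rng = np.random.default_rng(1)
A = rng.random((60, 20))
x_true = rng.random(20) * 5.0
y = rng.poisson(A @ x_true + 1.0)
print(np.round(mlem(A, y, b=1.0)[:5], 2), np.round(x_true[:5], 2))
```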
Iterative Nonlinear Tikhonov Algorithm with Constraints for Electromagnetic Tomography
Xu, Feng; Deshpande, Manohar
2012-01-01
Low frequency electromagnetic tomography such as electrical capacitance tomography (ECT) has been proposed for monitoring and mass-gauging of gas-liquid two-phase systems under microgravity conditions in NASA's future long-term space missions. Due to the ill-posed inverse problem of ECT, images reconstructed using conventional linear algorithms often suffer from limitations such as low resolution and blurred edges. Hence, new efficient high resolution nonlinear imaging algorithms are needed for accurate two-phase imaging. The proposed Iterative Nonlinear Tikhonov Regularized Algorithm with Constraints (INTAC) is based on an efficient finite element method (FEM) forward model of the quasi-static electromagnetic problem. It iteratively minimizes the discrepancy between FEM-simulated and actually measured capacitances by adjusting the reconstructed image using the Tikhonov regularized method. More importantly, in each iteration it enforces the known permittivities of the two phases on the unknown pixels which exceed the reasonable range of permittivity. This strategy not only stabilizes the convergence process, but also produces sharper images. Simulations show that a resolution improvement of over 2 times can be achieved by INTAC with respect to conventional approaches. Strategies to further improve spatial imaging resolution are suggested, as well as techniques to accelerate the nonlinear forward model and thus increase the temporal resolution.
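A minimal sketch of the kind of iteration INTAC describes follows: a Tikhonov-regularized Gauss-Newton step, followed by enforcement of the known two-phase permittivity bounds. The forward model and Jacobian here are placeholder callables standing in for the paper's FEM solver, and all names and values are illustrative.

```python
import numpy as np

def intac_like(forward, jacobian, c_meas, n_pix, eps_lo, eps_hi,
               lam=1e-2, n_iter=20):
    """forward(eps) -> simulated capacitances; jacobian(eps) -> sensitivity matrix."""
    eps = np.full(n_pix, 0.5 * (eps_lo + eps_hi))      # start between the phases
    for _ in range(n_iter):
        J = jacobian(eps)                              # linearized sensitivity
        r = c_meas - forward(eps)                      # capacitance residual
        # Tikhonov-regularized update: (J^T J + lam I) d = J^T r
        d = np.linalg.solve(J.T @ J + lam * np.eye(n_pix), J.T @ r)
        eps = eps + d
        # enforce the known permittivities of the two phases on
        # out-of-range pixels, as the constraint step above describes
        eps = np.clip(eps, eps_lo, eps_hi)
    return eps
```

The clipping step is what the abstract credits with both stabilizing convergence and sharpening edges: pixels that wander outside the physically possible range are snapped back to the nearest phase value.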
MRI isotropic resolution reconstruction from two orthogonal scans
Tamez-Pena, Jose G.; Totterman, Saara; Parker, Kevin J.
2001-07-01
An algorithm for the reconstruction of iso-resolution volumetric MR data sets from two standard orthogonal MR scans having anisotropic resolution has been developed. The reconstruction algorithm starts by registering a pair of orthogonal volumetric MR data sets. The registration is done by maximizing the correlation between the gradient magnitudes using a simple translation-rotation model in a multi-resolution approach. The algorithm then assumes that the individual voxels in the MR data are an average of the magnetic resonance properties of an elongated imaging volume, and the process is modeled as the projection of MR properties onto a single sensor. This model allows the derivation of a set of linear equations that can be used to recover the MR properties of every single voxel in the iso-resolution volume given only two orthogonal MR scans. Projection onto convex sets (POCS) was used to solve the set of linear equations. Experimental results show the advantage of having iso-resolution reconstructions for the visualization and analysis of small and thin muscular structures.
Ultra high resolution soft x-ray tomography
International Nuclear Information System (INIS)
Haddad, W.S.; Trebes, J.E.; Goodman, D.M.; Lee, H.R.; McNulty, I.; Zalensky, A.O.
1995-01-01
Ultra high resolution three dimensional images of a microscopic test object were made with soft x-rays using a scanning transmission x-ray microscope. The test object consisted of two different patterns of gold bars on silicon nitride windows that were separated by ∼5 μm. A series of nine 2-D images of the object were recorded at angles between -50 to +55 degrees with respect to the beam axis. The projections were then combined tomographically to form a 3-D image by means of an algebraic reconstruction technique (ART) algorithm. A transverse resolution of ∼1,000 angstrom was observed. Artifacts in the reconstruction limited the overall depth resolution to ∼6,000 angstrom, however some features were clearly reconstructed with a depth resolution of ∼1,000 angstrom. A specially modified ART algorithm and a constrained conjugate gradient (CCG) code were also developed as improvements over the standard ART algorithm. Both of these methods made significant improvements in the overall depth resolution, bringing it down to ∼1,200 angstrom overall. Preliminary projection data sets were also recorded with both dry and re-hydrated human sperm cells over a similar angular range
Python algorithms mastering basic algorithms in the Python language
Hetland, Magnus Lie
2014-01-01
Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc
Underwater video enhancement using multi-camera super-resolution
Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.
2017-12-01
Image spatial resolution is critical in several fields such as medicine, communications, satellite imaging, and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a Multi-Camera environment that permits enhancement of the quality of underwater video sequences without significantly increasing computation. In order to compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results have shown that the proposed method enhances the objective quality of several underwater sequences, avoiding the appearance of undesirable artifacts, with respect to basic fusion Super-Resolution algorithms.
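For reference, the two metrics used above are easy to state in code. The SSIM shown here is a simplified single-window (global) variant of the usual locally windowed index.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

def ssim_global(x, y, peak=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM; the standard index averages this over local windows."""
    c1, c2 = (k1 * peak) ** 2, (k2 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
```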
Bayesian Peptide Peak Detection for High Resolution TOF Mass Spectrometry.
Zhang, Jianqiu; Zhou, Xiaobo; Wang, Honghui; Suffredini, Anthony; Zhang, Lin; Huang, Yufei; Wong, Stephen
2010-11-01
In this paper, we address the issue of peptide ion peak detection for high resolution time-of-flight (TOF) mass spectrometry (MS) data. A novel Bayesian peptide ion peak detection method is proposed for TOF data with a resolution of 10 000-15 000 full width at half maximum (FWHM). MS spectra exhibit distinct characteristics at this resolution, which are captured in a novel parametric model. Based on the proposed parametric model, a Bayesian peak detection algorithm based on Markov chain Monte Carlo (MCMC) sampling is developed. The proposed algorithm is tested on both simulated and real datasets. The results show a significant improvement in detection performance over a commonly employed method. The results also agree with experts' visual inspection. Moreover, better detection consistency is achieved across MS datasets from patients with identical pathological conditions.
Image-reconstruction algorithms for positron-emission tomography systems
International Nuclear Information System (INIS)
Cheng, S.N.C.
1982-01-01
The positional uncertainty in the time-of-flight measurement of a positron-emission tomography system is modelled as a Gaussian-distributed random variable, and the image is assumed to be piecewise constant on a rectilinear lattice. A reconstruction algorithm using maximum-likelihood estimation is derived for the situation in which time-of-flight data are sorted as the most-likely-position array. The algorithm is formulated as a linear system described by a nonseparable, block-banded, Toeplitz matrix, and a sine-transform technique is used to implement this algorithm efficiently. The reconstruction algorithms for both the most-likely-position array and the confidence-weighted array are described by similar equations, hence similar linear systems can be used to describe the reconstruction algorithm for a discrete, confidence-weighted array when the matrix and the entries in the data array are properly identified. It is found that the mean-square error depends on the ratio of the full width at half maximum of the time-of-flight measurement to the size of a pixel. When other parameters are fixed, the larger the pixel size, the smaller the mean-square error. In the study of resolution, parameters that affect the impulse response of time-of-flight reconstruction algorithms are identified. It is found that the larger the pixel size, the larger the standard deviation of the impulse response. This shows that a small mean-square error and fine resolution are two contradictory requirements.
A new cut-based algorithm for the multi-state flow network reliability problem
International Nuclear Information System (INIS)
Yeh, Wei-Chang; Bae, Changseok; Huang, Chia-Ling
2015-01-01
Many real-world systems can be modeled as multi-state network systems in which reliability can be derived in terms of the lower bound points of level d, called d-minimal cuts (d-MCs). This study proposes a new method to find and verify obtained d-MCs, with simple and useful properties, for the multi-state flow network reliability problem. The proposed algorithm runs in O(mσp) time, which represents a significant improvement over the previous O(mp²σ) time bound based on max-flow/min-cut, where p, σ and m denote the number of MCs, d-MC candidates and edges, respectively. The proposed algorithm also overcomes the weakness of some existing methods, which fail to remove duplicate d-MCs in special cases. A step-by-step example is given to demonstrate how the proposed algorithm locates and verifies all d-MC candidates. As evidence of the utility of the proposed approach, we present extensive computational results on 20 benchmark networks in another example. The computational results compare favorably with a previously developed algorithm in the literature. - Highlights: • A new method is proposed to find all d-MCs for multi-state flow networks. • The proposed method can prevent the generation of d-MC duplicates. • The proposed method is simpler and more efficient than the best-known algorithms
Directory of Open Access Journals (Sweden)
Behnam Barzegar
2012-01-01
Full Text Available A scheduled production system helps avoid stock accumulation, reduce losses, decrease or even eliminate idle machine time, and make better use of machines, so that customer orders are answered on time and requested materials are supplied when needed. In flexible job-shop scheduling production systems, time and costs can be reduced by transferring and delivering operations on existing machines; this is an NP-hard problem. The scheduling objective is to minimize the maximal completion time of all the operations, denoted by the makespan. Different methods and algorithms have been presented for solving this problem, and a well-scheduled production system has a significant influence on improving effectiveness and attaining organizational goals. In this paper, a new algorithm is proposed for flexible job-shop scheduling problem systems (FJSSP-GSPN) based on the gravitational search algorithm (GSA). In the proposed method, the flexible job-shop scheduling problem is modeled by a colored Petri net and the CPN tool, and a scheduled job is then programmed by the GSA algorithm. The experimental results show that the proposed method has reasonable performance in comparison with other algorithms.
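As background, the gravitational search algorithm itself can be sketched compactly: candidate solutions attract each other with forces proportional to fitness-derived masses under a decaying gravitational constant. The sketch below minimizes a generic continuous test function; it is not the paper's Petri-net scheduling encoding, and all constants are illustrative.

```python
import numpy as np

def gsa(f, dim, n_agents=30, n_iter=200, lb=-5.0, ub=5.0, g0=100.0, alpha=20.0):
    rng = np.random.default_rng(0)
    x = rng.uniform(lb, ub, (n_agents, dim))
    v = np.zeros_like(x)
    for t in range(n_iter):
        fit = np.array([f(xi) for xi in x])
        best, worst = fit.min(), fit.max()
        m = (fit - worst) / (best - worst + 1e-12)   # best -> 1, worst -> 0
        m /= m.sum() + 1e-12                         # normalized masses
        g = g0 * np.exp(-alpha * t / n_iter)         # decaying gravity constant
        acc = np.zeros_like(x)
        for i in range(n_agents):
            for j in range(n_agents):
                if i == j:
                    continue
                dist = np.linalg.norm(x[j] - x[i]) + 1e-12
                acc[i] += rng.random() * g * m[j] * (x[j] - x[i]) / dist
        v = rng.random(x.shape) * v + acc            # stochastic inertia + pull
        x = np.clip(x + v, lb, ub)
    fit = np.array([f(xi) for xi in x])
    return x[fit.argmin()], fit.min()

# smoke test: minimize the sphere function
print(gsa(lambda z: float(np.sum(z**2)), dim=4))
```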
Improved Passive Microwave Algorithms for North America and Eurasia
Foster, James; Chang, Alfred; Hall, Dorothy
1997-01-01
Microwave algorithms simplify complex physical processes in order to estimate geophysical parameters such as snow cover and snow depth. The microwave radiances received at the satellite sensor and expressed as brightness temperatures are a composite of contributions from the Earth's surface, the Earth's atmosphere, and space. Owing to the coarse resolution inherent to passive microwave sensors, each pixel value represents a mixture of contributions from different surface types, including deep snow, shallow snow, forests, and open areas. Algorithms are generated in order to resolve these mixtures. The accuracy of the retrieved information is affected by uncertainties in the assumptions used in the radiative transfer equation (Steffen et al., 1992). One such uncertainty in the Chang et al. (1987) snow algorithm is that the snow grain radius is taken to be 0.3 mm for all layers of the snowpack and for all physiographic regions. However, this is not usually the case. The influence of larger grain sizes appears to be of more importance for deeper snowpacks in the interior of Eurasia. Based on this consideration and the effects of forests, a revised SMMR snow algorithm produces more realistic snow mass values. The purpose of this study is to present results of the revised algorithm (referred to for the remainder of this paper as the GSFC 94 snow algorithm), which incorporates differences in both fractional forest cover and snow grain size. Results from the GSFC 94 algorithm will be compared to the original Chang et al. (1987) algorithm and to climatological snow depth data as well.
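For context, the Chang et al. (1987) retrieval is essentially a brightness-temperature difference scaled by a constant; a hedged sketch follows. The 1.59 cm/K coefficient is the value commonly cited for the 18/37 GHz horizontally polarized SMMR channels, and the forest-fraction correction shown is a hypothetical illustration of the kind of adjustment a forest-aware revision makes, not the actual GSFC 94 form.

```python
def snow_depth_cm(tb_18h, tb_37h, coeff=1.59):
    """Chang-style retrieval: depth scales with the 18-37 GHz H-pol difference."""
    return max(coeff * (tb_18h - tb_37h), 0.0)

def snow_depth_forest_cm(tb_18h, tb_37h, forest_frac, coeff=1.59):
    """Hypothetical forest correction: scale up where forest masks the snow signal."""
    return snow_depth_cm(tb_18h, tb_37h, coeff) / max(1.0 - forest_frac, 0.1)

print(snow_depth_cm(240.0, 220.0))   # 31.8 cm for a 20 K channel difference
```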
Wavelet-LMS algorithm-based echo cancellers
Seetharaman, Lalith K.; Rao, Sathyanarayana S.
2002-12-01
This paper presents echo cancellers based on the Wavelet-LMS algorithm. The performance of the Least Mean Square (LMS) algorithm in the wavelet transform domain is observed, and its application to echo cancellation is analyzed. The Widrow-Hoff LMS algorithm is the most widely used algorithm for adaptive filters that function as echo cancellers. Present-day communication signals are largely non-stationary in nature, and errors crop up when the LMS algorithm is used for echo cancellers handling such signals. The analysis of non-stationary signals often involves a compromise in how well transitions or discontinuities can be located. The multi-scale, or multi-resolution, view of signal analysis, which is the essence of the wavelet transform, makes wavelets popular in non-stationary signal analysis. In this paper, we present a Wavelet-LMS algorithm wherein the wavelet coefficients of a signal are modified adaptively using the LMS algorithm and then reconstructed to give an echo-free signal. The echo canceller based on this algorithm is found to have better convergence and a comparatively lower mean-square error (MSE).
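A minimal sketch of transform-domain LMS of the kind described: each block is mapped to one-level Haar coefficients, a per-coefficient weight is adapted by LMS, and the error signal is the echo-reduced output. The block size, step size, and the diagonal (one weight per coefficient) echo model are simplifying assumptions, not the paper's exact structure.

```python
import numpy as np

def haar(block):
    """One-level orthonormal Haar analysis of an even-length block."""
    a = (block[0::2] + block[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (block[0::2] - block[1::2]) / np.sqrt(2.0)   # detail coefficients
    return np.concatenate([a, d])

def wavelet_lms(x, d, block=16, mu=0.05):
    """Adapt per-coefficient gains from far-end x toward microphone signal d."""
    w = np.zeros(block)                       # weights in the wavelet domain
    err = np.zeros(len(x) // block * block)   # echo-reduced output
    for k in range(len(err) // block):
        sl = slice(k * block, (k + 1) * block)
        xt = haar(x[sl])                      # transform-domain input
        dt = haar(d[sl])
        e = dt - w * xt                       # per-coefficient error
        w += mu * e * xt                      # LMS update on the coefficients
        err[sl] = e
    return w, err
```

The point of the transform is that the coefficients are closer to uncorrelated than the raw samples, which is what lets LMS converge faster on non-stationary inputs.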
The optimal algorithm for Multi-source RS image fusion.
Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan
2016-01-01
In order to solve the issue that available fusion methods cannot self-adaptively adjust their fusion rules according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of genetic algorithms with the advantages of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid conversion as the observation operator. It then designs the objective function as a weighted sum of evaluation indices and optimizes this function with GSDA so as to obtain an RS image of higher resolution. As discussed above, the bullet points of the text are summarized as follows.•The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.•This article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules.•This text puts forward the model operator and the observation operator as the fusion scheme of RS images based on GSDA. The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
Directory of Open Access Journals (Sweden)
Dazhi Jiang
2015-01-01
Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is “are there any algorithms that can design evolutionary algorithms automatically?” A more complete formulation of the question is “can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?” In this paper, a novel evolutionary algorithm based on automatic design of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space, as most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems were conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that algorithms designed automatically by computers can compete with algorithms designed by human beings.
EUV lithography for 30nm half pitch and beyond: exploring resolution, sensitivity, and LWR tradeoffs
Putna, E. Steve; Younkin, Todd R.; Chandhok, Manish; Frasure, Kent
2009-03-01
The International Technology Roadmap for Semiconductors (ITRS) denotes Extreme Ultraviolet (EUV) lithography as a leading technology option for realizing the 32nm half-pitch node and beyond. Readiness of EUV materials is currently one high-risk area according to assessments made at the 2008 EUVL Symposium. The main development issue regarding EUV resist has been how to simultaneously achieve high sensitivity, high resolution, and low line width roughness (LWR). This paper describes the strategy and current status of EUV resist development at Intel Corporation. Data are presented utilizing Intel's Micro-Exposure Tool (MET) to examine the feasibility of establishing a resist process that simultaneously exhibits <=30nm half-pitch (HP) L/S resolution at <=10mJ/cm2 with <=4nm LWR.
EUV lithography for 22nm half pitch and beyond: exploring resolution, LWR, and sensitivity tradeoffs
Putna, E. Steve; Younkin, Todd R.; Leeson, Michael; Caudillo, Roman; Bacuita, Terence; Shah, Uday; Chandhok, Manish
2011-04-01
The International Technology Roadmap for Semiconductors (ITRS) denotes Extreme Ultraviolet (EUV) lithography as a leading technology option for realizing the 22nm half pitch node and beyond. According to recent assessments made at the 2010 EUVL Symposium, the readiness of EUV materials remains one of the top risk items for EUV adoption. The main development issue regarding EUV resists has been how to simultaneously achieve high resolution, high sensitivity, and low line width roughness (LWR). This paper describes our strategy, the current status of EUV materials, and the integrated post-development LWR reduction efforts made at Intel Corporation. Data collected utilizing Intel's Micro-Exposure Tool (MET) are presented in order to examine the feasibility of establishing a resist process that simultaneously exhibits <=22nm half-pitch (HP) L/S resolution at <=11.3mJ/cm2 with <=3nm LWR.
Energy Technology Data Exchange (ETDEWEB)
Wen, Xianfei; Yang, Haori
2015-06-01
A major challenge in utilizing spectroscopy techniques for nuclear safeguards is to perform high-resolution measurements at an ultra-high throughput rate. Traditionally, piled-up pulses are rejected to ensure good energy resolution. To improve the throughput rate, high-pass filters are normally implemented to shorten pulses. However, this reduces the signal-to-noise ratio and causes degradation in energy resolution. In this work, a pulse pile-up recovery algorithm based on template matching was proved to be an effective approach to achieve high-throughput gamma-ray spectroscopy. First, a discussion of the algorithm was given in detail. Second, the algorithm was successfully utilized to process simulated piled-up pulses from a scintillator detector. Third, the algorithm was implemented to analyze high rate data from a NaI detector, a silicon drift detector, and a HPGe detector. The promising results demonstrated the capability of this algorithm to achieve a high throughput rate without significant sacrifice in energy resolution. The performance of the template-matching algorithm was also compared with traditional shaping methods. - Highlights: • A detailed discussion on the template-matching algorithm was given. • The algorithm was tested on data from a NaI and a Si detector. • The algorithm was successfully implemented on high rate data from a HPGe detector. • The performance of the algorithm was compared with traditional shaping methods. • The advantage of the algorithm in active interrogation was discussed.
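The template-matching idea can be sketched as iterative detect-and-subtract: cross-correlate the residual waveform with a unit-amplitude pulse template, fit the template amplitude at the best match, subtract, and repeat. The exponential pulse shape and thresholds below are toy choices, not the detector models used in the paper.

```python
import numpy as np

def make_template(n=200, tau_r=5.0, tau_d=30.0):
    """Toy pulse: difference of exponentials, normalized to unit peak."""
    t = np.arange(n)
    p = np.exp(-t / tau_d) - np.exp(-t / tau_r)
    return p / p.max()

def recover(waveform, template, max_pulses, thresh=0.0):
    """Greedy template stripping; returns (time, amplitude) pairs and residual."""
    resid = waveform.astype(float).copy()
    hits = []
    for _ in range(max_pulses):
        corr = np.correlate(resid, template, mode='valid')
        t0 = int(corr.argmax())                      # best-matching start time
        seg = resid[t0:t0 + len(template)]
        amp = seg @ template / (template @ template)  # least-squares amplitude
        if amp <= thresh:
            break
        hits.append((t0, amp))                       # amplitude ~ deposited energy
        resid[t0:t0 + len(template)] -= amp * template
    return hits, resid
```

Because each fitted pulse is removed before the next search, two pulses that overlap in time can both be recovered instead of being rejected as pile-up.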
Reactive Collision Avoidance Algorithm
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
High resolution drift chambers
International Nuclear Information System (INIS)
Va'vra, J.
1985-07-01
High precision drift chambers capable of achieving resolutions of 50 μm or better are discussed. In particular, we compare so-called cool and hot gases, various charge collection geometries, and several timing techniques, and we also discuss some systematic problems. We also present what we would consider an "ultimate" design of the vertex chamber. 50 refs., 36 figs., 6 tabs
David E. Nagel; John M. Buffington; Sharon L. Parkes; Seth Wenger; Jaime R. Goode
2014-01-01
Valley confinement is an important landscape characteristic linked to aquatic habitat, riparian diversity, and geomorphic processes. This report describes a GIS program called the Valley Confinement Algorithm (VCA), which identifies unconfined valleys in montane landscapes. The algorithm uses nationally available digital elevation models (DEMs) at 10-30 m resolution to...
Algorithm of hadron energy reconstruction for combined calorimeters in the DELPHI detector
International Nuclear Information System (INIS)
Gotra, Yu.N.; Tsyganov, E.N.; Zimin, N.I.; Zinchenko, A.I.
1989-01-01
The algorithm for hadron energy reconstruction from the responses of the electromagnetic and hadron calorimeters is described. The investigations were carried out using the full-scale prototype of the hadron calorimeter cylindrical-part modules. The proposed algorithm improves the energy resolution by 5-7% while preserving the linearity of the reconstructed hadron energy. 5 refs.; 4 figs.; 1 tab
Bijlsma, S.; Boelens, H. F. M.; Hoefsloot, H. C. J.; Smilde, A. K.
2000-01-01
A traditional curve fitting (TCF) algorithm is compared with a classical curve resolution (CCR) approach for estimating reaction rate constants from spectral data obtained in time of a chemical reaction. In the TCF algorithm, reaction rate constants an estimated from the absorbance versus time data
A Denotational Account of Untyped Normalization by Evaluation
DEFF Research Database (Denmark)
Filinski, Andrzej; Rohde, Henning Korsholm
2004-01-01
We show that the standard normalization-by-evaluation construction for the simply-typed λβη-calculus has a natural counterpart for the untyped λβ-calculus, with the central type-indexed logical relation replaced by a “recursively defined” invariant relation, in the style of Pitts. In fact......, the construction can be seen as generalizing a computational adequacy argument for an untyped, call-by-name language to normalization instead of evaluation. In the untyped setting, not all terms have normal forms, so the normalization function is necessarily partial. We establish its correctness in the senses...
Denotational, Causal, and Operational Determinism in Event Structures
Rensink, Arend; Kirchner, H.
1996-01-01
Determinism of labelled transition systems and trees is a concept of theoretical and practical importance. We study its generalisation to event structures. It turns out that the result depends on what characterising property of tree determinism one sets out to generalise. We present three distinct
Denotational World-indexed Logical Relations and Friends
DEFF Research Database (Denmark)
Thamsborg, Jacob Junker
and solve the fundamental type-worlds circularity by metric-space theory. This approach scales to state-of-the-art step-indexed techniques and permits unrestricted relational reasoning by the use of so-called Bohr relations. Along the way, we develop auxiliary theory; most notably a generalized version...... of separation logic with assertion variables. In particular, we give criteria for when standard, unary separation logic proofs lift to the binary setting. Phrased differently, given a module-dependent client and a standard separation logic proof of its correctness, we ponder the default question of representation...
Partitional clustering algorithms
2015-01-01
This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...
Treatment Algorithm for Ameloblastoma
Directory of Open Access Journals (Sweden)
Madhumati Singh
2014-01-01
Full Text Available Ameloblastoma is the second most common benign odontogenic tumour (Shafer et al. 2006), constituting 1-3% of all cysts and tumours of the jaw, with locally aggressive behaviour, a high recurrence rate, and malignant potential (Chaine et al. 2009). Various treatment algorithms for ameloblastoma have been reported; however, a universally accepted approach remains unsettled and controversial (Chaine et al. 2009). The treatment algorithm to be chosen depends on size (Escande et al. 2009; Sampson and Pogrel 1999), anatomical location (Feinberg and Steinberg 1996), histologic variant (Philipsen and Reichart 1998), and anatomical involvement (Jackson et al. 1996). In this paper, various such treatment modalities, which include enucleation and peripheral osteotomy, partial maxillectomy, segmental resection and reconstruction with a fibula graft, and radical resection and reconstruction with a rib graft, and their recurrence rates are reviewed with a study of five cases.
An Algorithmic Diversity Diet?
DEFF Research Database (Denmark)
Sørensen, Jannick Kirk; Schmidt, Jan-Hinrik
2016-01-01
With the growing influence of personalized algorithmic recommender systems on the exposure of media content to users, the relevance of discussing the diversity of recommendations increases, particularly as far as public service media (PSM) is concerned. An imagined implementation of a diversity...... diet system however triggers not only the classic discussion of the reach – distinctiveness balance for PSM, but also shows that ‘diversity’ is understood very differently in algorithmic recommender system communities than it is editorially and politically in the context of PSM. The design...... of a diversity diet system generates questions not just about editorial power, personal freedom and techno-paternalism, but also about the embedded politics of recommender systems as well as the human skills affiliated with PSM editorial work and the nature of PSM content....
Aydemir, Bahar
2017-01-01
The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider (LHC) at CERN is composed of a large number of distributed hardware and software components. The TDAQ system consists of about 3000 computers and more than 25000 applications which, in a coordinated manner, provide the data-taking functionality of the overall system. A number of online services are required to configure, monitor and control the ATLAS data taking. In particular, the configuration service is used to provide the configuration of the above components. The configuration of the ATLAS data acquisition system is stored in an XML-based object database named OKS, with a Data Access Library (DAL) allowing its information to be accessed by C++, Java and Python clients in a distributed environment. Some information has quite a complicated structure, so its extraction requires writing special algorithms. The algorithms are available in C++ and have been partially reimplemented in Java. The goal of the projec...
Kramer, Oliver
2017-01-01
This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalisms and thus opens the subject to a broader audience in comparison to manuscripts overloaded by notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.
Fast weighted centroid algorithm for single particle localization near the information limit.
Fish, Jeremie; Scrimgeour, Jan
2015-07-10
A simple weighting scheme that enhances the localization precision of center of mass calculations for radially symmetric intensity distributions is presented. The algorithm effectively removes the biasing that is common in such center of mass calculations. Localization precision compares favorably with other localization algorithms used in super-resolution microscopy and particle tracking, while significantly reducing the processing time and memory usage. We expect that the algorithm presented will be of significant utility when fast computationally lightweight particle localization or tracking is desired.
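A hedged sketch of an iteratively re-centered weighted centroid of the kind described: an initial center-of-mass estimate is refined by re-weighting pixels with a Gaussian mask centered on the current estimate, which suppresses the biasing from pixels far off-center. The Gaussian width, refinement count, and background handling are illustrative choices, not necessarily the exact weighting scheme of the paper.

```python
import numpy as np

def weighted_centroid(roi, sigma=2.0, n_refine=3):
    """Sub-pixel (y, x) estimate for a single bright spot in a small ROI."""
    yy, xx = np.indices(roi.shape, dtype=float)
    img = roi - roi.min()                            # crude background removal
    cy = (img * yy).sum() / img.sum()                # plain center of mass
    cx = (img * xx).sum() / img.sum()
    for _ in range(n_refine):                        # re-center the weight mask
        w = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * sigma**2))
        wi = img * w
        cy = (wi * yy).sum() / wi.sum()
        cx = (wi * xx).sum() / wi.sum()
    return cy, cx
```

The whole computation is a handful of array operations per spot, which is consistent with the speed and memory claims made above.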
Windowed time-reversal music technique for super-resolution ultrasound imaging
Huang, Lianjie; Labyed, Yassin
2018-05-01
Systems and methods for super-resolution ultrasound imaging using a windowed and generalized TR-MUSIC algorithm that divides the imaging region into overlapping sub-regions and applies the TR-MUSIC algorithm to the windowed backscattered ultrasound signals corresponding to each sub-region. The algorithm is also structured to account for the ultrasound attenuation in the medium and the finite-size effects of ultrasound transducer elements.
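The TR-MUSIC pseudospectrum for a single window can be sketched as follows, assuming a homogeneous medium, point-like targets, and idealized point transducers; the attenuation and finite-element-size corrections that this patent adds are omitted, and all parameter names are illustrative.

```python
import numpy as np

def trmusic_map(K, elem_pos, grid, k, n_sig):
    """K: (N, N) multistatic response matrix; elem_pos: (N, 2) element positions;
    grid: (M, 2) image points; k: wavenumber; n_sig: assumed signal-space rank."""
    U, s, Vh = np.linalg.svd(K)
    noise = U[:, n_sig:]                     # noise subspace (small singular values)
    img = np.empty(len(grid))
    for m, r in enumerate(grid):
        d = np.linalg.norm(elem_pos - r, axis=1)
        g = np.exp(1j * k * d) / d           # free-space steering (Green's) vector
        g /= np.linalg.norm(g)
        # pseudospectrum: large where g is orthogonal to the noise subspace
        img[m] = 1.0 / (np.linalg.norm(noise.conj().T @ g) ** 2 + 1e-15)
    return img                               # peaks at the target locations
```

Windowing, as the abstract describes, amounts to running this on sub-regions with their own backscatter data so that each noise subspace stays well conditioned.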
Boosting foundations and algorithms
Schapire, Robert E
2012-01-01
Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.
Stochastic split determinant algorithms
International Nuclear Information System (INIS)
Horvatha, Ivan
2000-01-01
I propose a large class of stochastic Markov processes associated with probability distributions analogous to that of lattice gauge theory with dynamical fermions. The construction incorporates the idea of an approximate spectral split of the determinant through a local loop action, and the idea of treating the infrared part of the split through explicit diagonalizations. I suggest that exact algorithms of practical relevance might be based on Markov processes so constructed.
Quantum gate decomposition algorithms.
Energy Technology Data Exchange (ETDEWEB)
Slepoy, Alexander
2006-07-01
Quantum computing algorithms can be conveniently expressed in the format of quantum logic circuits. Such circuits consist of sequentially coupled operations, termed 'quantum gates', acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general 'quantum gates' operating on n qubits, composed of a sequence of generic elementary 'gates'.
KAM Tori Construction Algorithms
Wiesel, W.
In this paper we evaluate and compare two algorithms for the calculation of KAM tori in Hamiltonian systems. The direct fitting of a torus Fourier series to a numerically integrated trajectory is the first method, while an accelerated finite Fourier transform is the second method. The finite Fourier transform, with Hanning window functions, is by far superior in both computational loading and numerical accuracy. Some thoughts on applications of KAM tori are offered.
Irregular Applications: Architectures & Algorithms
Energy Technology Data Exchange (ETDEWEB)
Feo, John T.; Villa, Oreste; Tumeo, Antonino; Secchi, Simone
2012-02-06
Irregular applications are characterized by irregular data structures, control and communication patterns. Novel irregular high performance applications which deal with large data sets have recently appeared. Unfortunately, current high performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, domain specialists and computer scientists that consider both the architecture and the software stack may be able to provide solutions to the challenges of modern irregular applications.
Luo, Shouhua; Shen, Tao; Sun, Yi; Li, Jing; Li, Guang; Tang, Xiangyang
2018-04-01
In high resolution (microscopic) CT applications, the scan field of view should cover the entire specimen or sample to allow complete data acquisition and image reconstruction. However, truncation may occur in the projection data, resulting in artifacts in the reconstructed images. In this study, we propose a low resolution image constrained reconstruction algorithm (LRICR) for interior tomography in microscopic CT at high resolution. In general, multi-resolution acquisition based methods can be employed to solve the data truncation problem if the projection data acquired at low resolution are utilized to fill up the truncated projection data acquired at high resolution. However, most existing methods place quite strict restrictions on the data acquisition geometry, which greatly limits their utility in practice. In the proposed LRICR algorithm, full and partial data acquisitions (scans) at low and high resolutions, respectively, are carried out. Using the image reconstructed from the sparse projection data acquired at low resolution as the prior, a microscopic image at high resolution is reconstructed from the truncated projection data acquired at high resolution. Two synthesized digital phantoms, a raw bamboo culm and a specimen of mouse femur were utilized to evaluate and verify performance of the proposed LRICR algorithm. Compared with the conventional TV minimization based algorithm and the multi-resolution scout-reconstruction algorithm, the proposed LRICR algorithm shows significant improvement in reducing the artifacts caused by data truncation, providing a practical solution for high quality and reliable interior tomography in microscopic CT applications.
International Nuclear Information System (INIS)
Zhang, Zili; Gao, Chao; Liu, Yuxin; Qian, Tao
2014-01-01
Ant colony optimization (ACO) algorithms often fall into local optimal solutions and have lower search efficiency when solving the travelling salesman problem (TSP). To address these shortcomings, this paper proposes a universal optimization strategy for updating the pheromone matrix in ACO algorithms. The new strategy takes advantage of the unique feature of critical paths reserved in the process of evolving adaptive networks of the Physarum-inspired mathematical model (PMM). The optimized algorithms, denoted PMACO algorithms, enhance the amount of pheromone on the critical paths and promote the exploitation of the optimal solution. Experimental results on synthetic and real networks show that the PMACO algorithms are more efficient and robust than traditional ACO algorithms and are adaptable to solving the TSP with single or multiple objectives. Meanwhile, we further analyse the influence of parameters on the performance of the PMACO algorithms. Based on these analyses, the best values of these parameters are worked out for the TSP.
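A minimal sketch of the kind of pheromone update the PMACO strategy describes: a standard evaporation-and-deposit rule where edges flagged as critical by the Physarum model receive an amplified deposit. The `critical_edges` input, the deposit amounts and the `boost` parameter are illustrative assumptions; the PMM itself is not reproduced.

```python
import numpy as np

def update_pheromone(tau, tours, lengths, critical_edges,
                     rho=0.5, q=1.0, boost=2.0):
    """One ACO pheromone update with extra deposit on critical edges.

    tau            -- (n, n) pheromone matrix
    tours          -- list of tours, each a list of city indices
    lengths        -- tour lengths corresponding to `tours`
    critical_edges -- set of (i, j) pairs kept by the Physarum model
                      (hypothetical input)
    """
    tau *= (1.0 - rho)                      # evaporation
    for tour, L in zip(tours, lengths):
        for a, b in zip(tour, tour[1:] + tour[:1]):
            deposit = q / L
            if (a, b) in critical_edges or (b, a) in critical_edges:
                deposit *= boost            # amplify critical paths
            tau[a, b] += deposit
            tau[b, a] += deposit
    return tau
```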
NEUTRON ALGORITHM VERIFICATION TESTING
International Nuclear Information System (INIS)
COWGILL, M.; MOSBY, W.; ARGONNE NATIONAL LABORATORY-WEST
2000-01-01
Active well coincidence counter assays have been performed on uranium metal highly enriched in 235U. The data obtained in the present program, together with highly enriched uranium (HEU) metal data obtained in other programs, have been analyzed using two approaches, the standard approach and an alternative approach developed at BNL. Analysis of the data with the standard approach revealed that the form of the relationship between the measured reals and the 235U mass varied, being sometimes linear and sometimes a second-order polynomial. In contrast, application of the BNL algorithm, which takes into consideration the totals, consistently yielded linear relationships between the totals-corrected reals and the 235U mass. The constants in these linear relationships varied with geometric configuration and level of enrichment. This indicates that, when the BNL algorithm is used, calibration curves can be established with fewer data points and with more certainty than if a standard algorithm is used. However, this potential advantage has only been established for assays of HEU metal. In addition, the method is sensitive to the stability of natural background in the measurement facility.
Research on compressive sensing reconstruction algorithm based on total variation model
Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin
2017-12-01
Compressed sensing breaks through the Nyquist sampling theorem and provides a strong theoretical basis for carrying out compressive sampling of image signals. In imaging procedures that use compressed sensing theory, not only can the storage space be reduced, but the demand on detector resolution can also be greatly reduced. Using the sparsity of the image signal and solving a mathematical model of inverse reconstruction, super-resolution imaging is realized. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. A reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images, and better edge information can be obtained. To verify the performance of the algorithm, we simulate and analyze the reconstruction results of the TV-based algorithm under different coding modes, verifying the stability of the algorithm. This paper also compares and analyzes typical reconstruction algorithms under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian function term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical algorithms, the TV-based reconstruction algorithm has great advantages: at low measurement rates it can quickly and accurately recover the target image.
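The paper solves the TV model with an augmented Lagrangian and the alternating direction method; to keep the illustration short, the sketch below minimizes the same kind of objective with a smoothed-TV gradient descent instead. The step size, smoothing parameter and iteration count are illustrative assumptions.

```python
import numpy as np

def tv_reconstruct(A, y, shape, lam=0.05, eps=1e-3, step=1e-3, iters=500):
    """Recover an image from compressive measurements y = A @ x.

    Minimizes ||A x - y||^2 + lam * TV_eps(x) by gradient descent on a
    smoothed (differentiable) total variation; a compact stand-in for the
    augmented-Lagrangian / alternating-direction solver of the paper.
    """
    n = shape[0] * shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        img = x.reshape(shape)
        dx = np.diff(img, axis=1, append=img[:, -1:])   # forward differences
        dy = np.diff(img, axis=0, append=img[-1:, :])
        mag = np.sqrt(dx**2 + dy**2 + eps)
        px, py = dx / mag, dy / mag
        div = np.zeros(shape)                            # discrete divergence
        div[:, 1:] += px[:, 1:] - px[:, :-1]; div[:, 0] += px[:, 0]
        div[1:, :] += py[1:, :] - py[:-1, :]; div[0, :] += py[0, :]
        grad = 2 * A.T @ (A @ x - y) - lam * div.ravel()
        x -= step * grad
    return x.reshape(shape)
```

Here `A` could be, for example, a random Gaussian measurement matrix of size m x n with m much smaller than n, matching the low measurement rates discussed above.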
Convex hull ranking algorithm for multi-objective evolutionary algorithms
Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.
2012-01-01
Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity
In Vivo EPR Resolution Enhancement Using Techniques Known from Quantum Computing Spin Technology.
Rahimi, Robabeh; Halpern, Howard J; Takui, Takeji
2017-01-01
A crucial issue with in vivo biological/medical EPR is its low signal-to-noise ratio, giving rise to low spectroscopic resolution. We propose quantum hyperpolarization techniques based on 'Heat Bath Algorithmic Cooling', allowing possible approaches for improving resolution in magnetic resonance spectroscopy and imaging.
Multi-example feature-constrained back-projection method for image super-resolution
Institute of Scientific and Technical Information of China (English)
Junlei Zhang; Dianguang Gai; Xin Zhang; Xuemei Li
2017-01-01
Example-based super-resolution algorithms, which predict unknown high-resolution image information using a relationship model learnt from known high- and low-resolution image pairs, have attracted considerable interest in the field of image processing. In this paper, we propose a multi-example feature-constrained back-projection method for image super-resolution. Firstly, we take advantage of a feature-constrained polynomial interpolation method to enlarge the low-resolution image. Next, we consider low-frequency images of different resolutions to provide an example pair. Then, we use adaptive kNN search to find similar patches in the low-resolution image for every image patch in the high-resolution low-frequency image, and a regression model between similar patches is learnt. The learnt model is applied to the low-resolution high-frequency image to produce high-resolution high-frequency information. An iterative back-projection algorithm is used as the final step to determine the final high-resolution image. Experimental results demonstrate that our method improves the visual quality of the high-resolution image.
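The final step named above, iterative back-projection, is standard enough to sketch. The Gaussian blur kernel, scale factor, step size and iteration count below are illustrative assumptions; the feature-constrained interpolation and kNN regression stages of the paper are not shown, so plain spline zoom stands in for the initial estimate.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def iterative_back_projection(lr, scale=2, sigma=1.0, iters=20, step=1.0):
    """Refine an upsampled image so it stays consistent with the LR input."""
    hr = zoom(lr, scale, order=3)          # initial HR estimate
    for _ in range(iters):
        # simulate the LR image implied by the current HR estimate
        simulated = zoom(gaussian_filter(hr, sigma), 1.0 / scale, order=3)
        err = lr - simulated
        # back-project the residual into HR space
        hr += step * gaussian_filter(zoom(err, scale, order=3), sigma)
    return hr
```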
A Modularity Degree Based Heuristic Community Detection Algorithm
Directory of Open Access Journals (Sweden)
Dongming Chen
2014-01-01
A community in a complex network can be seen as a subgroup of nodes that are densely connected. Discovery of community structures is a basic problem of research and can be used in various areas, such as biology, computer science, and sociology. Existing community detection methods usually try to expand or collapse node partitions in order to optimize a given quality function. These optimization-based methods share the same drawback of inefficiency. Here we propose a heuristic algorithm (the MDBH algorithm) based on network structure which employs modularity degree as a measure function. Experiments on both synthetic benchmarks and real-world networks show that our algorithm gives competitive accuracy compared with previous modularity optimization methods, even though it has lower computational complexity. Furthermore, due to the use of modularity degree, our algorithm naturally improves the resolution limit in community detection.
Directory of Open Access Journals (Sweden)
Mengxue Liu
2018-05-01
Satellite data for studying surface dynamics in heterogeneous landscapes are often missing due to frequent cloud contamination, low temporal resolution, and technological difficulties in developing satellites. A modified spatiotemporal fusion algorithm for predicting the reflectance of paddy rice is presented in this paper. The algorithm uses phenological information extracted from a moderate-resolution imaging spectroradiometer enhanced vegetation index time series to improve the enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM). The algorithm is tested with satellite data on Yueyang City, China. The main contribution of the modified algorithm is the selection of similar neighborhood pixels by using phenological information to improve accuracy. Results show that the modified algorithm performs better than ESTARFM in visual inspection and quantitative metrics, especially for paddy rice. This modified algorithm provides not only new ideas for the improvement of spatiotemporal data fusion methods, but also technical support for the generation of remote sensing data with high spatial and temporal resolution.
Low-resolution simulations of vesicle suspensions in 2D
Kabacaoğlu, Gökberk; Quaife, Bryan; Biros, George
2018-03-01
Vesicle suspensions appear in many biological and industrial applications. These suspensions are characterized by rich and complex dynamics of vesicles due to their interaction with the bulk fluid, and their large deformations and nonlinear elastic properties. Many existing state-of-the-art numerical schemes can resolve such complex vesicle flows. However, even when using provably optimal algorithms, these simulations can be computationally expensive, especially for suspensions with a large number of vesicles. These high computational costs can limit the use of simulations for parameter exploration, optimization, or uncertainty quantification. One way to reduce the cost is to use low-resolution discretizations in space and time. However, it is well-known that simply reducing the resolution results in vesicle collisions, numerical instabilities, and often in erroneous results. In this paper, we investigate the effect of a number of algorithmic empirical fixes (which are commonly used by many groups) in an attempt to make low-resolution simulations more stable and more predictive. Based on our empirical studies for a number of flow configurations, we propose a scheme that attempts to integrate these fixes in a systematic way. This low-resolution scheme is an extension of our previous work [51,53]. Our low-resolution correction algorithms (LRCA) include anti-aliasing and membrane reparametrization for avoiding spurious oscillations in vesicles' membranes, adaptive time stepping and a repulsion force for handling vesicle collisions and, correction of vesicles' area and arc-length for maintaining physical vesicle shapes. We perform a systematic error analysis by comparing the low-resolution simulations of dilute and dense suspensions with their high-fidelity, fully resolved, counterparts. We observe that the LRCA enables both efficient and statistically accurate low-resolution simulations of vesicle suspensions, while it can be 10× to 100× faster.
Foundations of genetic algorithms 1991
1991-01-01
Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems. This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition
THE APPROACHING TRAIN DETECTION ALGORITHM
S. V. Bibikov
2015-01-01
The paper deals with a detection algorithm for rail vibroacoustic waves caused by an approaching train against a background of increased noise. The urgency of developing a train detection algorithm in view of increased rail noise, when railway lines are close to roads or road intersections, is justified. The algorithm is based on a method for detecting weak signals in a noisy environment. The ultimate expression for the information statistic is adjusted. We present the results of algorithm research and t...
Combinatorial optimization algorithms and complexity
Papadimitriou, Christos H
1998-01-01
This clearly written, mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms, local search heuristics for NP-complete problems, more. All chapters are supplemented by thought-provoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering.
High resolution data acquisition
Thornton, Glenn W.; Fuller, Kenneth R.
1993-01-01
A high resolution event interval timing system measures short time intervals such as occur in high energy physics or laser ranging. Timing is provided from a clock (38) pulse train (37) and analog circuitry (44) for generating a triangular wave (46) synchronously with the pulse train (37). The triangular wave (46) has an amplitude and slope functionally related to the time elapsed during each clock pulse in the train. A converter (18, 32) forms a first digital value of the amplitude and slope of the triangle wave at the start of the event interval and a second digital value of the amplitude and slope of the triangle wave at the end of the event interval. A counter (26) counts the clock pulse train (37) during the interval to form a gross event interval time. A computer (52) then combines the gross event interval time and the first and second digital values to output a high resolution value for the event interval.
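A hedged sketch of the arithmetic this patent describes: a coarse count of whole clock periods is combined with fractional periods recovered from the triangle wave's amplitude and slope at the start and end of the interval. The function and parameter names, and the normalization by the triangle's peak amplitude, are illustrative assumptions.

```python
def event_interval(gross_count, clock_period,
                   start_amp, start_slope, end_amp, end_slope, peak):
    """Combine a coarse clock count with analog triangle-wave interpolation.

    The triangle wave rises then falls once per clock period; the amplitude
    plus the sign of the slope locate an event within the period.
    `peak` is the triangle's maximum amplitude (assumed known).
    """
    def fraction(amp, slope):
        # first half-period while rising, second half while falling
        phase = amp / (2.0 * peak)
        return phase if slope > 0 else 1.0 - phase
    frac = fraction(end_amp, end_slope) - fraction(start_amp, start_slope)
    return (gross_count + frac) * clock_period
```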
High resolution photoelectron spectroscopy
International Nuclear Information System (INIS)
Arko, A.J.
1988-01-01
Photoelectron Spectroscopy (PES) covers a very broad range of measurements, disciplines, and interests. As the next generation light source, the FEL will result in improvements over the undulator that are larger than the undulator improvements over bending magnets. The combination of high flux and high inherent resolution will result in several orders of magnitude gain in signal to noise over measurements using synchrotron-based undulators. The latter still require monochromators. Their resolution is invariably strongly energy-dependent, so that in the regions of interest for many experiments (hν > 100 eV) they will not have a resolving power much over 1000. In order to study some of the interesting phenomena in actinides (heavy fermions, e.g.) one would need resolving powers of 10^4 to 10^5. These values are only reachable with the FEL.
Particle detector spatial resolution
International Nuclear Information System (INIS)
Perez-Mendez, V.
1992-01-01
Method and apparatus for producing separated columns of scintillation layer material, for use in detection of X-rays and high energy charged particles with improved spatial resolution, is disclosed. A pattern of ridges or projections is formed on one surface of a substrate layer or in a thin polyimide layer, and the scintillation layer is grown at controlled temperature and growth rate on the ridge-containing material. The scintillation material preferentially forms cylinders or columns, separated by gaps conforming to the pattern of ridges, and these columns direct most of the light produced in the scintillation layer along individual columns for subsequent detection in a photodiode layer. The gaps may be filled with a light-absorbing material to further enhance the spatial resolution of the particle detector.
Resolution-recovery-embedded image reconstruction for a high-resolution animal SPECT system.
Zeraatkar, Navid; Sajedi, Salar; Farahani, Mohammad Hossein; Arabi, Hossein; Sarkar, Saeed; Ghafarian, Pardis; Rahmim, Arman; Ay, Mohammad Reza
2014-11-01
The small-animal High-Resolution SPECT (HiReSPECT) is a dedicated dual-head gamma camera recently designed and developed in our laboratory for imaging of murine models. Each detector is composed of an array of 1.2 × 1.2 mm² (pitch) pixelated CsI(Na) crystals. Two position-sensitive photomultiplier tubes (H8500) are coupled to each head's crystal. In this paper, we report on a resolution-recovery-embedded image reconstruction code applicable to the system and present the experimental results achieved using different phantoms and mouse scans. Collimator-detector response functions (CDRFs) were measured via a pixel-driven method using capillary sources at finite distances from the head within the field of view (FOV). CDRFs were then fitted by independent Gaussian functions. Thereafter, linear interpolation was applied to the standard deviation (σ) values of the fitted Gaussians, yielding a continuous map of the CDRF at varying distances from the head. A rotation-based maximum-likelihood expectation maximization (MLEM) method was used for reconstruction. A fast rotation algorithm was developed to rotate the image matrix to the desired angle by means of pre-generated rotation maps. The experiments demonstrated improved resolution utilizing our resolution-recovery-embedded image reconstruction. While the full-width at half-maximum (FWHM) radial and tangential resolution measurements of the system were over 2 mm in nearly all positions within the FOV without resolution recovery, reaching around 2.5 mm in some locations, they fell below 1.8 mm everywhere within the FOV using the resolution-recovery algorithm. The noise performance of the system was also acceptable; the standard deviation of the average counts per voxel in the reconstructed images was 6.6% and 8.3% without and with resolution recovery, respectively.
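The CDRF model described here, independent Gaussian fits whose widths are linearly interpolated with distance, reduces to a few lines. The measured distances and σ values below are placeholder numbers, not the paper's calibration data.

```python
import numpy as np

# hypothetical calibration: sigma (mm) of Gaussian CDRF fits measured
# with capillary sources at a few source-to-head distances (mm)
distances = np.array([10.0, 30.0, 50.0, 70.0])
sigmas    = np.array([0.55, 0.80, 1.10, 1.45])

def cdrf_sigma(d):
    """Continuous CDRF width at distance d by linear interpolation."""
    return np.interp(d, distances, sigmas)

def cdrf_kernel(d, half_width=5, pitch=1.2):
    """1D Gaussian detector-response kernel at distance d (pitch in mm)."""
    x = np.arange(-half_width, half_width + 1) * pitch
    g = np.exp(-0.5 * (x / cdrf_sigma(d))**2)
    return g / g.sum()
```

A kernel of this kind would be applied to each projection row during the forward and back projections of the MLEM iterations.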
Essential algorithms a practical approach to computer algorithms
Stephens, Rod
2013-01-01
A friendly and accessible introduction to the most useful algorithms. Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures
Czech Academy of Sciences Publication Activity Database
Bonacina, I.; Galesi, N.; Thapen, Neil
2016-01-01
Roč. 45, č. 5 (2016), s. 1894-1909 ISSN 0097-5397 R&D Projects: GA ČR GBP202/12/G061 EU Projects: European Commission(XE) 339691 - FEALORA Institutional support: RVO:67985840 Keywords : total space * resolution random CNFs * proof complexity Subject RIV: BA - General Mathematics Impact factor: 1.433, year: 2016 http://epubs.siam.org/doi/10.1137/15M1023269
High resolution (transformers.
Garcia-Souto, Jose A; Lamela-Rivera, Horacio
2006-10-16
A novel fiber-optic interferometric sensor is presented for vibration measurement and analysis. In this approach, it is applied to the vibrations of electrical structures within power transformers. A main feature of the sensor is that an unambiguous optical phase measurement is performed using direct detection of the interferometer output, without external modulation, for a more compact and stable implementation. High resolution of the interferometric measurement is obtained with this technique (transformers are also highlighted.
ALTERNATIVE DISPUTE RESOLUTION
Directory of Open Access Journals (Sweden)
Mihaela Irina IONESCU
2016-05-01
Alternative dispute resolution (ADR) includes dispute resolution processes and techniques that act as a means for disagreeing parties to come to an agreement short of litigation. It is a collective term for the ways that parties can settle disputes, with (or without) the help of a third party. Despite historic resistance to ADR by many parties and their advocates, ADR has gained widespread acceptance among both the general public and the legal profession in recent years. In fact, some courts now require some parties to resort to ADR of some type before permitting the parties' cases to be tried. The rising popularity of ADR can be explained by the increasing caseload of traditional courts, the perception that ADR imposes fewer costs than litigation, a preference for confidentiality, and the desire of some parties to have greater control over the selection of the individual or individuals who will decide their dispute. Directive 2013/11/EU of the European Parliament and of the Council on alternative dispute resolution for consumer disputes, amending Regulation (EC) No 2006/2004 and Directive 2009/22/EC (hereinafter "Directive 2013/11/EU"), aims to ensure a high level of consumer protection and the proper functioning of the internal market by ensuring that complaints against traders can be submitted by consumers on a voluntary basis to alternative dispute resolution entities which are independent, impartial, transparent, effective, simple, quick and fair. Directive 2013/11/EU establishes harmonized quality requirements for entities applying an alternative dispute resolution procedure (hereinafter "ADR entity") to provide the same protection and the same rights for consumers in all Member States. Besides this, the present study tries to present broadly how all this is transposed into Romanian legislation.
Isotope specific resolution recovery image reconstruction in high resolution PET imaging
Energy Technology Data Exchange (ETDEWEB)
Kotasidis, Fotis A.; Angelis, Georgios I.; Anton-Rodriguez, Jose; Matthews, Julian C.; Reader, Andrew J.; Zaidi, Habib
2014-05-15
Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. Methods: In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. Results: The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. Conclusions: Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution recovery image reconstruction.
Efficient GPS Position Determination Algorithms
National Research Council Canada - National Science Library
Nguyen, Thao Q
2007-01-01
... differential GPS algorithm for a network of users. The stand-alone user GPS algorithm is a direct, closed-form, and efficient new position determination algorithm that exploits the closed-form solution of the GPS trilateration equations and works...
Algorithmic approach to diagram techniques
International Nuclear Information System (INIS)
Ponticopoulos, L.
1980-10-01
An algorithmic approach to diagram techniques of elementary particles is proposed. The definition and axiomatics of the theory of algorithms are presented, followed by the list of instructions of an algorithm formalizing the construction of graphs and the assignment of mathematical objects to them. (T.A.)
Selfish Gene Algorithm Vs Genetic Algorithm: A Review
Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed
2016-11-01
Evolutionary algorithms (EAs) are among the algorithms inspired by nature. Within little more than a decade, hundreds of papers have reported successful applications of EAs. This paper reviews the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms, inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas by the biologist Richard Dawkins in 1989. Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.
An algorithm to track laboratory zebrafish shoals.
Feijó, Gregory de Oliveira; Sangalli, Vicenzo Abichequer; da Silva, Isaac Newton Lima; Pinho, Márcio Sarroglia
2018-05-01
In this paper, a semi-automatic multi-object tracking method to track a group of unmarked zebrafish is proposed. This method can handle partial occlusion cases, maintaining the correct identity of each individual. For every object, we extracted a set of geometric features to be used in the two main stages of the algorithm. The first stage selects the best candidate, based both on the blobs identified in the image and on the estimate generated by a Kalman Filter instance. In the second stage, if the same candidate blob is selected by two or more instances, a blob-partitioning algorithm takes place in order to split this blob and re-establish the instances' identities. If the algorithm cannot determine the identity of a blob, a manual intervention is required. This procedure was compared against a manually labeled ground truth on four video sequences with different numbers of fish and spatial resolutions. The performance of the proposed method was then compared against two well-known zebrafish tracking methods found in the literature: one that treats occlusion scenarios and one that only tracks fish that are not in occlusion. Based on the data set used, the proposed method outperforms the first method in correctly separating fish in occlusion, increasing its efficiency in at least 8.15% of the cases. As for the second, the proposed method's overall performance outperformed it on some of the tested videos, especially those with lower image quality, because the second method requires high-spatial-resolution images, which is not a requirement for the proposed method. Yet, the proposed method was able to separate fish involved in occlusion and correctly assign their identities in up to 87.85% of the cases, without accounting for user intervention.
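A minimal sketch of the first stage's machinery: one constant-velocity Kalman Filter instance per fish predicts a position, and the nearest detected blob is selected as the best candidate. The matrices, noise levels and gating distance are illustrative assumptions; the geometric features and the blob-partitioning stage of the actual method are not shown.

```python
import numpy as np

class Track:
    """Constant-velocity Kalman filter for one fish (state: x, y, vx, vy)."""
    def __init__(self, x, y, q=1e-2, r=1.0):
        self.s = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = 1.0
        self.H = np.eye(2, 4)                     # observe position only
        self.Q = q * np.eye(4)
        self.R = r * np.eye(2)

    def predict(self):
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]

    def update(self, z):
        y = z - self.H @ self.s
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.s = self.s + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def assign_blobs(tracks, blobs, gate=30.0):
    """Greedy nearest-blob assignment; shared picks would trigger splitting."""
    for t in tracks:
        pred = t.predict()
        d = [np.linalg.norm(pred - np.asarray(b)) for b in blobs]
        if d and min(d) < gate:
            t.update(np.asarray(blobs[int(np.argmin(d))]))
```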
Quantitative validation of a new coregistration algorithm
International Nuclear Information System (INIS)
Pickar, R.D.; Esser, P.D.; Pozniakoff, T.A.; Van Heertum, R.L.; Stoddart, H.A. Jr.
1995-01-01
A new coregistration software package, Neuro900 Image Coregistration software, has been developed specifically for nuclear medicine. With this algorithm, the correlation coefficient is maximized between volumes generated from sets of transaxial slices. No localization markers or segmented surfaces are needed. The coregistration program was evaluated for translational and rotational registration accuracy. A Tc-99m HM-PAO split-dose study (0.53 mCi low dose, L, and 1.01 mCi high dose, H) was simulated with a Hoffman Brain Phantom with five fiducial markers. Translation error was determined by a shift in image centroid, and rotation error was determined by a simplified two-axis approach. Changes in registration accuracy were measured with respect to: (1) slice spacing, using the four different combinations LL, LH, HL, HH, (2) translational and rotational misalignment before coregistration, and (3) changes in the step size of the iterative parameters. In all cases the algorithm converged with only small differences in translation offset and rotation angles. At 6 mm slice spacing, translational errors ranged from 0.9 to 2.8 mm (system resolution at 100 mm, 6.8 mm). The converged parameters showed little sensitivity to count density. In addition, the correlation coefficient increased with decreasing iterative step size, as expected. From these experiments, the authors found that this algorithm, based on maximization of the correlation coefficient between studies, was an accurate way to coregister SPECT brain images.
Honing process optimization algorithms
Kadyrov, Ramil R.; Charikov, Pavel N.; Pryanichnikova, Valeria V.
2018-03-01
This article considers the relevance of honing processes for creating high-quality mechanical engineering products. The features of the honing process are revealed and such important concepts as the task for optimization of honing operations, the optimal structure of the honing working cycles, stepped and stepless honing cycles, simulation of processing and its purpose are emphasized. It is noted that the reliability of the mathematical model determines the quality parameters of the honing process control. An algorithm for continuous control of the honing process is proposed. The process model reliably describes the machining of a workpiece in a sufficiently wide area and can be used to operate the CNC machine CC743.
Opposite Degree Algorithm and Its Applications
Directory of Open Access Journals (Sweden)
Xiao-Guang Yue
2015-12-01
The Opposite Degree (OD) algorithm is an intelligent algorithm proposed by Yue Xiaoguang et al. The opposite degree algorithm is mainly based on the concept of opposite degree, combined with ideas from the design of neural networks, genetic algorithms and clustering analysis algorithms. The OD algorithm is divided into two sub-algorithms, namely: the opposite degree - numerical computation (OD-NC) algorithm and the opposite degree - classification computation (OD-CC) algorithm.
Performance of a high resolution cavity beam position monitor system
Walston, Sean; Boogert, Stewart; Chung, Carl; Fitsos, Pete; Frisch, Joe; Gronberg, Jeff; Hayano, Hitoshi; Honda, Yosuke; Kolomensky, Yury; Lyapin, Alexey; Malton, Stephen; May, Justin; McCormick, Douglas; Meller, Robert; Miller, David; Orimoto, Toyoko; Ross, Marc; Slater, Mark; Smith, Steve; Smith, Tonee; Terunuma, Nobuhiro; Thomson, Mark; Urakawa, Junji; Vogel, Vladimir; Ward, David; White, Glen
2007-07-01
It has been estimated that an RF cavity Beam Position Monitor (BPM) could provide a position measurement resolution of less than 1 nm. We have developed a high resolution cavity BPM and associated electronics. A triplet comprised of these BPMs was installed in the extraction line of the Accelerator Test Facility (ATF) at the High Energy Accelerator Research Organization (KEK) for testing with its ultra-low emittance beam. The three BPMs were each rigidly mounted inside an alignment frame on six variable-length struts which could be used to move the BPMs in position and angle. We have developed novel methods for extracting the position and tilt information from the BPM signals including a robust calibration algorithm which is immune to beam jitter. To date, we have demonstrated a position resolution of 15.6 nm and a tilt resolution of 2.1 μrad over a dynamic range of approximately ±20 μm.
Fast algorithm for Morphological Filters
International Nuclear Information System (INIS)
Lou Shan; Jiang Xiangqian; Scott, Paul J
2011-01-01
In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time consuming, especially for areal data, and not generally adopted in real practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent to the morphological opening/closing in theory. The algorithm depends on Delaunay triangulation with time complexity O(n log n). In comparison to the naive algorithms it generates the opening and closing envelope without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well both for morphological profile and areal filters. Examples are presented to demonstrate the validity and superiority on efficiency of this algorithm over the naive algorithm.
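For contrast with the alpha-shape approach, here is the naive envelope computation the paper improves on: a morphological closing of a 1D profile with a non-flat (ball) structuring function, i.e. a dilation followed by an erosion. The radius, sampling and edge padding are illustrative assumptions (edge padding stands in for the reflective padding mentioned above).

```python
import numpy as np

def ball(radius, spacing=1.0):
    """Non-flat structuring function: upper half of a disc of given radius."""
    n = int(radius / spacing)
    t = np.arange(-n, n + 1) * spacing
    return np.sqrt(radius**2 - t**2)

def closing_envelope(profile, radius, spacing=1.0):
    """Naive O(n*m) morphological closing of a 1D surface profile."""
    b = ball(radius, spacing)
    m = len(b) // 2
    p = np.pad(profile, m, mode='edge')            # simple edge padding
    dil = np.array([np.max(p[i:i + len(b)] + b)    # dilation with the ball
                    for i in range(len(profile))])
    d = np.pad(dil, m, mode='edge')
    ero = np.array([np.min(d[i:i + len(b)] - b)    # erosion with the ball
                    for i in range(len(profile))])
    return ero
```

Each output point scans a full window of width 2m+1, which is exactly the cost the Delaunay-based O(n log n) method avoids.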
Recognition algorithms in knot theory
International Nuclear Information System (INIS)
Dynnikov, I A
2003-01-01
In this paper the problem of constructing algorithms for comparing knots and links is discussed. A survey of existing approaches and basic results in this area is given. In particular, diverse combinatorial methods for representing links are discussed, the Haken algorithm for recognizing a trivial knot (the unknot) and a scheme for constructing a general algorithm (using Haken's ideas) for comparing links are presented, an approach based on representing links by closed braids is described, the known algorithms for solving the word problem and the conjugacy problem for braid groups are described, and the complexity of the algorithms under consideration is discussed. A new method of combinatorial description of knots is given together with a new algorithm (based on this description) for recognizing the unknot by using a procedure for monotone simplification. In the conclusion of the paper several problems are formulated whose solution could help to advance towards the 'algorithmization' of knot theory
International Nuclear Information System (INIS)
Bunyamin, Muhammad Afif; Yap, Keem Siah; Aziz, Nur Liyana Afiqah Abdul; Tiong, Sheih Kiong; Wong, Shen Yuong; Kamal, Md Fauzan
2013-01-01
This paper presents a new approach for gas emission estimation in a power generation plant using a hybrid Genetic Algorithm (GA) and Linear Regression (LR) (denoted GA-LR). LR is an approach that models the relationship between an output dependent variable, y, and one or more explanatory variables or inputs, denoted x. It is able to estimate unknown model parameters from input data. On the other hand, GA searches for the optimal solution until specific termination criteria are met, and provides good solutions, as compared to a single optimal solution, for complex problems. Thus, GA is widely used for feature selection. By combining LR and GA (GA-LR), this new technique is able to select the most important input features as well as give more accurate predictions by minimizing the prediction errors. The new technique produces more consistent gas emission estimates, which may help in reducing pollution to the environment. In this paper, the study's interest is focused on nitrogen oxides (NOx) prediction. The results of the experiment are encouraging.
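A minimal sketch of the GA-LR idea under stated assumptions: binary chromosomes encode feature subsets, and the fitness of a chromosome is the least-squares error of a linear regression restricted to the selected features. Population size, rates and the selection scheme are illustrative, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def lr_error(X, y, mask):
    """Fitness: residual error of least-squares LR on the selected features."""
    if not mask.any():
        return np.inf
    A = np.c_[np.ones(len(X)), X[:, mask]]       # intercept + chosen inputs
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.mean((A @ coef - y) ** 2)

def ga_select(X, y, pop=30, gens=50, p_mut=0.05):
    n = X.shape[1]
    P = rng.random((pop, n)) < 0.5               # random initial masks
    for _ in range(gens):
        fit = np.array([lr_error(X, y, m) for m in P])
        P = P[np.argsort(fit)]                   # elitist sort, best first
        children = []
        for _ in range(pop // 2):
            a, b = P[rng.integers(0, pop // 2, 2)]   # parents from best half
            cut = rng.integers(1, n)
            child = np.r_[a[:cut], b[cut:]]          # one-point crossover
            child ^= rng.random(n) < p_mut           # bit-flip mutation
            children.append(child)
        P[pop // 2:] = children
    return P[0]                                  # best feature mask found
```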
Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm
Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad
2018-01-01
Security is a very important issue in data transmission, and there are many methods to make files more secure. One of those methods is cryptography. Cryptography is a method of securing a file by writing hidden code to cover the original file, so that people not involved in the cryptography cannot decrypt the hidden code to read the original file. Many methods are used in cryptography; one of them is the hybrid cryptosystem. A hybrid cryptosystem is a method that uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric algorithm's key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm is used as the asymmetric algorithm. The system is tested by encrypting and decrypting the file using the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The result of this research is that, using the TEA algorithm to encrypt the file, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table in the form of hexadecimal numbers, and the ciphertext size increases by sixteen bytes as the plaintext length increases by eight characters.
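TEA itself is compact enough to sketch from its published description: a 64-bit block cipher with a 128-bit key, 32 rounds and the constant 0x9E3779B9. This is a plain-Python illustration of the symmetric half only (the LUC key exchange is not shown), not the paper's implementation.

```python
MASK = 0xFFFFFFFF
DELTA = 0x9E3779B9

def _mix(v, s, k0, k1):
    # one TEA mixing term, all arithmetic mod 2**32
    return ((((v << 4) & MASK) + k0) & MASK) ^ ((v + s) & MASK) \
           ^ (((v >> 5) + k1) & MASK)

def tea_encrypt(block, key):
    """Encrypt one 64-bit block (two 32-bit words) with a 128-bit key."""
    v0, v1 = block
    s = 0
    for _ in range(32):
        s = (s + DELTA) & MASK
        v0 = (v0 + _mix(v1, s, key[0], key[1])) & MASK
        v1 = (v1 + _mix(v0, s, key[2], key[3])) & MASK
    return v0, v1

def tea_decrypt(block, key):
    v0, v1 = block
    s = (DELTA * 32) & MASK          # 0xC6EF3720
    for _ in range(32):
        v1 = (v1 - _mix(v0, s, key[2], key[3])) & MASK
        v0 = (v0 - _mix(v1, s, key[0], key[1])) & MASK
        s = (s - DELTA) & MASK
    return v0, v1

# round-trip check
key = [0x0123, 0x4567, 0x89AB, 0xCDEF]
ct = tea_encrypt((0xDEADBEEF, 0xCAFEBABE), key)
assert tea_decrypt(ct, key) == (0xDEADBEEF, 0xCAFEBABE)
```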
Rabideau, Gregg R.; Chien, Steve A.
2010-01-01
AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as imaging targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, with the "best" goals selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by an embedded system where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
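A hedged sketch of strict-priority selection over shared resources: goals are considered best-first, and a goal is activated only if every resource it needs still has capacity, so a lower-priority goal can never displace a higher-priority one. The goal and resource structures are illustrative assumptions, not the VML extension's data model.

```python
def select_goals(goals, capacity):
    """Pick an executable subset of oversubscribed goals by strict priority.

    goals    -- list of dicts: {'name': str, 'priority': int,
                                'needs': {resource: amount}}
    capacity -- dict: available amount of each shared resource
    """
    selected = []
    free = dict(capacity)
    for g in sorted(goals, key=lambda g: g['priority'], reverse=True):
        if all(free.get(r, 0) >= amt for r, amt in g['needs'].items()):
            for r, amt in g['needs'].items():
                free[r] -= amt
            selected.append(g['name'])
    return selected

# e.g. two imaging goals and a downlink competing for power and recorder space
goals = [
    {'name': 'image_A',  'priority': 3, 'needs': {'power': 5, 'recorder': 2}},
    {'name': 'image_B',  'priority': 2, 'needs': {'power': 5, 'recorder': 2}},
    {'name': 'downlink', 'priority': 1, 'needs': {'power': 3}},
]
print(select_goals(goals, {'power': 8, 'recorder': 3}))  # ['image_A', 'downlink']
```

Because selection is a single pass over a priority-sorted list, re-running it after a goal is added or removed is cheap, which is what makes the just-in-time postponement described above practical.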
Algorithmic Relative Complexity
Directory of Open Access Journals (Sweden)
Daniele Cerra
2011-04-01
Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov's framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence) found in Shannon's framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
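In the same spirit as the paper's compression-based approximation, one can estimate how hard x is to describe in terms of y with a standard compressor; the exact formula below is an illustrative assumption, not necessarily the estimator derived in the paper.

```python
import zlib

def C(s: bytes) -> int:
    """Approximate Kolmogorov complexity by compressed length."""
    return len(zlib.compress(s, 9))

def divergence(x: bytes, y: bytes) -> float:
    """Normalized cost of describing x in terms of y; smaller = more related."""
    return (C(y + x) - C(y)) / C(x)

a = b"the quick brown fox jumps over the lazy dog " * 20
b = b"the quick brown fox jumps over the lazy cat " * 20
c = bytes(range(256)) * 4
print(divergence(a, b) < divergence(a, c))  # True: similar strings diverge less
```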
Fatigue evaluation algorithms: Review
Energy Technology Data Exchange (ETDEWEB)
Passipoularidis, V.A.; Broendsted, P.
2009-11-15
A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply-by-ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics, based on the failure criterion of Puck, to model the degradation caused by failure events at ply level. Residual strength is incorporated as the fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio as well as against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of wind turbine rotor blade construction. Two versions of the algorithm, one using single-step and the other using incremental application of each load cycle (in case of ply failure), are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects. In general, FADAS performs well in predicting life under both spectral and block loading fatigue.
Directory of Open Access Journals (Sweden)
Walton Surrey M
2005-03-01
Background: Cost utility analysis (CUA) using SF-36/SF-12 data has been facilitated by the development of several preference-based algorithms. The purpose of this study was to illustrate how decision-making could be affected by the choice of preference-based algorithms for the SF-36 and SF-12, and to provide some guidance on selecting an appropriate algorithm. Methods: Two sets of data were used: (1) a clinical trial of adult asthma patients; and (2) a longitudinal study of post-stroke patients. Incremental costs were assumed to be $2000 per year over standard treatment, and QALY gains realized over a 1-year period. Ten published algorithms were identified, denoted by first author: Brazier (SF-36), Brazier (SF-12), Shmueli, Fryback, Lundberg, Nichol, Franks (3 algorithms), and Lawrence. Incremental cost-utility ratios (ICURs) for each algorithm, stated in dollars per quality-adjusted life year ($/QALY), were ranked and compared between datasets. Results: In the asthma patients, estimated ICURs ranged from Lawrence's SF-12 algorithm at $30,769/QALY (95% CI: 26,316 to 36,697) to Brazier's SF-36 algorithm at $63,492/QALY (95% CI: 48,780 to 83,333). ICURs for the stroke cohort varied slightly more dramatically. The MEPS-based algorithm by Franks et al. provided the lowest ICUR at $27,972/QALY (95% CI: 20,942 to 41,667). The Fryback and Shmueli algorithms provided ICURs that were greater than $50,000/QALY and did not have confidence intervals that overlapped with most of the other algorithms. The ICUR-based ranking of algorithms was strongly correlated between the asthma and stroke datasets (r = 0.60). Conclusion: SF-36/SF-12 preference-based algorithms produced a wide range of ICURs that could potentially lead to different reimbursement decisions. Brazier's SF-36 and SF-12 algorithms have a strong methodological and theoretical basis and tended to generate relatively higher ICUR estimates, considerations that support a preference for these algorithms over the
Directory of Open Access Journals (Sweden)
Arazi Idrus
2017-12-01
In this paper, we present our work-in-progress on a proposed framework for automated negotiation in the construction domain. The proposed framework enables software agents to conduct negotiations and autonomously make value-based decisions. The framework consists of three main components: a solution generator algorithm, a negotiation algorithm, and a conflict resolution algorithm. This paper extends the discussion of the solution generator algorithm, which enables software agents to generate solutions and rank them from the 1st to the nth solution for the negotiation stage of the operation. The solution generator algorithm consists of three steps: review solutions, rank solutions, and form ranked solutions. For validation purposes, we present a scenario that utilizes the proposed algorithm to rank solutions. The validation shows that the algorithm is promising; however, it also highlights the conflict between different parties that needs further negotiation action.
TSaT-MUSIC: a novel algorithm for rapid and accurate ultrasonic 3D localization
Mizutani, Kyohei; Ito, Toshio; Sugimoto, Masanori; Hashizume, Hiromichi
2011-12-01
We describe a fast and accurate indoor localization technique using the multiple signal classification (MUSIC) algorithm. The MUSIC algorithm is known as a high-resolution method for estimating directions of arrival (DOAs) or propagation delays. A critical problem in using the MUSIC algorithm for localization is its computational complexity. Therefore, we devised a novel algorithm called Time Space additional Temporal-MUSIC, which can rapidly and simultaneously identify DOAs and delays of multicarrier ultrasonic waves from transmitters. Computer simulations have proved that the computation time of the proposed algorithm is almost constant in spite of increasing numbers of incoming waves and is faster than that of existing methods based on the MUSIC algorithm. The robustness of the proposed algorithm is discussed through simulations. Experiments in real environments showed that the standard deviation of position estimations in 3D space is less than 10 mm, which is satisfactory for indoor localization.
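The classical MUSIC pseudospectrum at the heart of this family of methods is easy to sketch; the uniform linear array, half-wavelength spacing and scan grid are illustrative assumptions, and the paper's TSaT variant, which jointly handles delays, is not reproduced here.

```python
import numpy as np

def music_spectrum(X, n_sources, angles, d=0.5):
    """Classical MUSIC pseudospectrum for a uniform linear array.

    X         -- (n_sensors, n_snapshots) complex snapshot matrix
    n_sources -- assumed number of incoming waves
    angles    -- scan grid in radians
    d         -- element spacing in wavelengths (assumed half-wavelength)
    """
    R = X @ X.conj().T / X.shape[1]           # sample covariance
    _, V = np.linalg.eigh(R)                  # eigenvectors, ascending order
    En = V[:, :X.shape[0] - n_sources]        # noise subspace
    k = np.arange(X.shape[0])
    P = np.empty(len(angles))
    for i, th in enumerate(angles):
        a = np.exp(-2j * np.pi * d * k * np.sin(th))   # steering vector
        P[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return P                                   # peaks mark the DOAs
```

Scanning this spectrum over a dense grid is exactly where the computational cost arises, which is what the TSaT variant is designed to reduce.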
A Non-static Data Layout Enhancing Parallelism and Vectorization in Sparse Grid Algorithms
Buse, Gerrit
2012-06-01
The name sparse grids denotes a highly space-efficient, grid-based numerical technique to approximate high-dimensional functions. Although employed in a broad spectrum of applications from different fields, there have only been few attempts to use it in real time visualization (e.g. [1]), due to complex data structures and long algorithm runtime. In this work we present a novel approach inspired by principles of I/O-efficient algorithms. Locally applied coefficient permutations lead to improved cache performance and facilitate the use of vector registers for our sparse grid benchmark problem, hierarchization. Based on the compact data structure proposed for regular sparse grids in [2], we developed a new algorithm that outperforms existing implementations on modern multi-core systems by a factor of 37 for a grid size of 127 million points. For larger problems the speedup is even increasing, and with execution times below 1 s, sparse grids are well-suited for visualization applications. Furthermore, we point out how a broad class of sparse grid algorithms can benefit from our approach.
Resolution enhancement in medical ultrasound imaging.
Ploquin, Marie; Basarab, Adrian; Kouamé, Denis
2015-01-01
Image resolution enhancement is a problem of considerable interest in all medical imaging modalities. Unlike general-purpose imaging or video processing, for a very long time, medical image resolution enhancement has been based on optimization of the imaging devices. Although some recent works purport to deal with image postprocessing, much remains to be done regarding medical image enhancement via postprocessing, especially in ultrasound imaging. We address a resolution improvement issue in the case of medical ultrasound imaging. We propose to investigate this problem using multidimensional autoregressive (AR) models. Noting that the estimation of the envelope of an ultrasound radio frequency (RF) signal is very similar to classical Fourier-based power spectrum estimation, we theoretically show that a domain change and a multidimensional AR model can be used to achieve super-resolution in ultrasound imaging, provided the order is estimated correctly. Here, this is done by means of a technique that simultaneously estimates the order and the parameters of a multidimensional model using a relevant regression matrix factorization. In doing so, the proposed method specifically fits ultrasound imaging and provides an estimated envelope. Moreover, an expression is derived that links the theoretical image resolution to both the image acquisition features (such as the point spread function) and a postprocessing feature (the AR model order). The overall contribution of this work is threefold. First, it allows for automatic resolution improvement. Through a simple model and without any specific manual algorithmic parameter tuning, as is used in common methods, the proposed technique simply and exclusively uses the ultrasound RF signal as input and provides the improved B-mode image as output. Second, it allows for a priori prediction of the improvement in resolution via knowledge of the parametric model order before actual processing. Finally, to achieve the
Optimal reservoir operation policies using novel nested algorithms
Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri
2015-04-01
Historically, the two most widely practiced methods for optimal reservoir operation have been dynamic programming (DP) and stochastic dynamic programming (SDP). These two methods suffer from the so-called "dual curse", which prevents them from being used in reasonably complex water systems. The first is the "curse of dimensionality", which denotes an exponential growth of the computational complexity with the state-decision space dimension. The second is the "curse of modelling", which requires an explicit model of each component of the water system to anticipate the effect of each system transition. We address the problem of optimal reservoir operation concerning multiple objectives that are related to (1) reservoir releases to satisfy several downstream users competing for water with dynamically varying demands, (2) deviations from the target minimum and maximum reservoir water levels, and (3) hydropower production, which is a combination of the reservoir water level and the reservoir releases. Addressing such a problem with classical methods (DP and SDP) requires a reasonably high level of discretization of the reservoir storage volume, which, in combination with the releases discretization required for meeting the demands of downstream users, leads to computationally expensive formulations and causes the curse of dimensionality. We present a novel approach, named "nested", that is implemented in DP, SDP and reinforcement learning (RL); correspondingly, three new algorithms are developed, named nested DP (nDP), nested SDP (nSDP) and nested RL (nRL). The nested algorithms are composed of two algorithms: (1) DP, SDP or RL and (2) a nested optimization algorithm. Depending on the way we formulate the objective function related to deficits in the allocation problem in the nested optimization, two methods are implemented: (1) Simplex for linear allocation problems, and (2) the quadratic Knapsack method in the case of nonlinear problems. The novel idea is to include the nested
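The nesting idea can be illustrated with a toy sketch: an outer backward DP recursion over a discretized storage grid, with the allocation among users solved exactly inside each transition instead of being discretized as well. All quantities (inflow, demands, weights) and the greedy inner solver (optimal only for this linear toy case) are illustrative assumptions, not the authors' implementation.

    # Outer backward DP over a discretized storage grid; the inner allocation
    # is solved exactly per transition (greedy fill, optimal for a linear toy).
    import numpy as np

    def nested_allocation(release, demands, weights):
        # demands, weights: 1-D numpy arrays, one entry per downstream user
        alloc = np.zeros(len(demands))
        for i in np.argsort(weights)[::-1]:          # serve high-weight users first
            alloc[i] = max(0.0, min(demands[i], release - alloc.sum()))
        return float(weights @ (demands - alloc))    # weighted deficit

    def nested_dp(storages, inflow, demands, weights, n_steps):
        cost = np.zeros(len(storages))               # terminal cost-to-go
        for _ in range(n_steps):                     # backward recursion
            cost = np.array([
                min(nested_allocation(s + inflow - s_next, demands, weights) + cost[k]
                    for k, s_next in enumerate(storages) if s + inflow - s_next >= 0)
                for s in storages
            ])
        return cost                                  # cost-to-go per storage state

    # e.g. nested_dp(np.linspace(0, 10, 11), 3.0,
    #                np.array([2.0, 1.5]), np.array([1.0, 0.5]), 12)

The point of the construction is visible in the loop structure: only the storage state is enumerated, while releases are split among users by the inner solver, so the release dimension never enters the DP state-decision grid.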
DEFF Research Database (Denmark)
Gordon, Jeffrey N.; Ringe, Georg
2015-01-01
This chapter argues that the work of the European Banking Union remains incomplete in one important respect, the structural re-organization of large European financial firms that would make "resolution" of a systemically important financial firm a credible alternative to bail-out or some other sort of taxpayer assistance. A holding company structure in which the public parent holds unsecured term debt sufficient to cover losses at an operating financial subsidiary would facilitate a "Single Point of Entry" resolution procedure that would minimize knock-on effects from the failure of a systemically...
High resolution backscattering instruments
International Nuclear Information System (INIS)
Coldea, R.
2001-01-01
The principle of operation of indirect-geometry time-of-flight spectrometers is presented, including IRIS at the ISIS spallation neutron source. The key features that make these types of spectrometers ideally suited for low-energy spectroscopy are high energy resolution over a wide dynamic range and simultaneous measurement over a large momentum-transfer range, provided by the wide angular detector coverage. These features are exemplified by a discussion of single-crystal experiments on the spin dynamics in the two-dimensional frustrated quantum magnet Cs2CuCl4. (R.P.)
Failure Diameter Resolution Study
Energy Technology Data Exchange (ETDEWEB)
Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-12-19
Previously, the SURFplus reactive burn model was calibrated for the TATB-based explosive PBX 9502. The calibration was based on fitting Pop plot data, the failure diameter and the limiting detonation speed, and curvature effect data for small curvature. The model failure diameter is determined using 2-D simulations of an unconfined rate stick to find the minimum diameter for which a detonation wave propagates. Here we examine the effect of mesh resolution on an unconfined rate stick with a diameter (10 mm) slightly greater than the measured failure diameter (8 to 9 mm).
Conflict management and resolution.
Harolds, Jay; Wood, Beverly P
2006-03-01
When people work collaboratively, conflict will always arise. Understanding the nature and source of conflict and its progression and stages, resolution, and outcome is a vital aspect of leadership. Causes of conflict include the miscomprehension of communication, emotional issues, personal history, and values. When the difference is understood and the resultant behavior properly addressed, most conflict can be settled in a way that provides needed change in an organization and interrelationships. There are serious consequences of avoiding or mismanaging disagreements. Informed leaders can effectively prevent destructive conflicts.
A fast algorithm for computer aided collimation gamma camera (CACAO)
Jeanguillaume, C.; Begot, S.; Quartuccio, M.; Douiri, A.; Franck, D.; Pihet, P.; Ballongue, P.
2000-08-01
The computer aided collimation gamma camera is aimed at breaking the resolution-sensitivity trade-off of the conventional parallel-hole collimator. It uses larger and longer holes, with an added linear movement during the acquisition sequence. A dedicated algorithm including shift and sum, deconvolution, parabolic filtering and rotation is described. Examples of reconstruction are given. This work shows that a simple and fast algorithm, based on a diagonally dominant approximation of the problem, can be derived. It gives a practical solution to the CACAO reconstruction problem.
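As a rough illustration of the first step named above, the sketch below shifts each detector readout back by its known collimator displacement and sums them; the 1-D array model and integer shifts are assumptions, and the deconvolution, parabolic filtering and rotation steps of the full algorithm are omitted.

    # Shift-and-sum over readouts acquired at successive collimator offsets.
    import numpy as np

    def shift_and_sum(projections, shifts):
        # projections: list of 1-D detector readouts, one per collimator position
        # shifts: known linear displacement (in detector bins) of each readout
        acc = np.zeros_like(projections[0], dtype=float)
        for p, s in zip(projections, shifts):
            acc += np.roll(p, -s)          # undo the known linear displacement
        return acc / len(projections)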
Improved algorithm for surface display from volumetric data
International Nuclear Information System (INIS)
Lobregt, S.; Schaars, H.W.G.K.; OpdeBeek, J.C.A.; Zonneveld, F.W.
1988-01-01
A high-resolution surface display is produced from three-dimensional datasets (computed tomography or magnetic resonance imaging). Unlike other voxel-based methods, this algorithm does not show a cuberille surface structure, because the surface orientation is calculated from the original gray values. The applied surface shading is a function of the local orientation and position of the surface and of a virtual light source, giving a realistic impression of the surface of bone and soft tissue. The projection and shading are table-driven, combining variable viewpoint and illumination conditions with speed. Other options are cut-plane gray-level display and surface transparency. Combined with volume scanning, this algorithm offers powerful application possibilities.
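A minimal sketch of the gray-value gradient shading idea: the local surface normal is estimated from central differences of the original gray values, which is what avoids the blocky cuberille appearance of binary-surface methods. The Lambertian shading model and light direction are illustrative assumptions; the table-driven projection machinery is not shown.

    # Normal-from-gradient shading of one voxel of a 3-D gray-value volume.
    import numpy as np

    def shade_voxel(volume, x, y, z, light=np.array([0.0, 0.0, 1.0])):
        g = np.array([
            volume[x + 1, y, z] - volume[x - 1, y, z],
            volume[x, y + 1, z] - volume[x, y - 1, z],
            volume[x, y, z + 1] - volume[x, y, z - 1],
        ], dtype=float)                              # central differences
        n = g / (np.linalg.norm(g) + 1e-12)          # normal from gray-level gradient
        return max(0.0, float(n @ light))            # diffuse (Lambertian) intensity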
Won, Rachel
2018-05-01
In the quest for nanoscopy with super-resolution, the consensus of the imaging community is that super-resolution is not always needed and that scientists should choose an imaging technique based on their specific application.
Mojica, Edson; Pertuz, Said; Arguello, Henry
2017-12-01
One of the main challenges in Computed Tomography (CT) is obtaining accurate reconstructions of the imaged object while keeping a low radiation dose in the acquisition process. In order to solve this problem, several researchers have proposed the use of compressed sensing for reducing the amount of measurements required to perform CT. This paper tackles the problem of designing high-resolution coded apertures for compressed sensing computed tomography. In contrast to previous approaches, we aim at designing apertures to be used with low-resolution detectors in order to achieve super-resolution. The proposed method iteratively improves random coded apertures using a gradient descent algorithm subject to constraints in the coherence and homogeneity of the compressive sensing matrix induced by the coded aperture. Experiments with different test sets show consistent results for different transmittances, number of shots and super-resolution factors.
SINGLE FRAME SUPER RESOLUTION OF NONCOOPERATIVE IRIS IMAGES
Directory of Open Access Journals (Sweden)
Anand Deshpande
2016-11-01
Image super-resolution, a process to enhance image resolution, has important applications in biometrics, satellite imaging, high-definition television, medical imaging, etc. Long-range iris identification systems often suffer from low resolution and poor focus of the captured iris images, which degrade iris recognition performance. This paper proposes an enhanced iterated back projection (EIBP) method to super-resolve iris polar images captured at long range. The performance of the proposed method is tested and analyzed on the CASIA long-range iris database by comparing peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) with state-of-the-art super-resolution (SR) algorithms. It is further analyzed by increasing the up-sampling factor. The performance analysis shows that the proposed method is superior to state-of-the-art algorithms, improving PSNR by about 0.1-1.5 dB. The results demonstrate that the proposed method is well suited to super-resolving iris polar images captured at a long distance.
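For context, here is a minimal sketch of classical iterated back-projection (IBP), the baseline that the proposed EIBP enhances: the high-resolution estimate is repeatedly corrected by back-projecting the residual between the observed low-resolution image and a simulated one. The Gaussian blur and decimation model below is an assumption, not the paper's exact imaging operators.

    # Classical IBP: refine an upsampled guess by back-projecting LR residuals.
    import numpy as np
    from scipy import ndimage

    def ibp(lr, scale=2, n_iter=20, sigma=1.0):
        hr = ndimage.zoom(lr.astype(float), scale, order=1)       # initial guess
        for _ in range(n_iter):
            simulated = ndimage.gaussian_filter(hr, sigma)[::scale, ::scale]
            error = lr - simulated                                # LR-domain residual
            hr += ndimage.zoom(error, scale, order=1)             # back-project error
        return hr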
Automating the conflict resolution process
Wike, Jeffrey S.
1991-01-01
The purpose is to initiate a discussion of how the conflict resolution process at the Network Control Center can be made more efficient. Described here are the way resource conflicts are currently resolved and the impacts of automating conflict resolution in the ATDRSS era. A variety of conflict resolution strategies are presented.
Directory of Open Access Journals (Sweden)
Michael Woelfle
2011-09-01
BACKGROUND: Praziquantel remains the drug of choice for the worldwide treatment and control of schistosomiasis. The drug is synthesized and administered as a racemate. Use of the pure active enantiomer would be desirable since the inactive enantiomer is associated with side effects and is responsible for the extremely bitter taste of the pill. METHODOLOGY/PRINCIPAL FINDINGS: We have identified two resolution approaches toward the production of praziquantel as a single enantiomer. One approach starts with commercially available praziquantel and involves a hydrolysis to an intermediate amine, which is resolved with a derivative of tartaric acid. This method was discovered through an open collaboration on the internet. The second method, identified by a contract research organisation, employs a different intermediate that may be resolved with tartaric acid itself. CONCLUSIONS/SIGNIFICANCE: Both resolution procedures identified show promise for the large-scale, economically viable production of praziquantel as a single enantiomer for a low price. Additionally, they may be employed by laboratories for the production of smaller amounts of enantiopure drug for research purposes that should be useful in, for example, elucidation of the drug's mechanism of action.
High resolution hadron calorimetry
International Nuclear Information System (INIS)
Wigmans, R.
1987-01-01
The components that contribute to the signal of a hadron calorimeter and the factors that affect its performance are discussed, concentrating on two aspects: energy resolution and signal linearity. Both are decisively dependent on the relative response to the electromagnetic and the non-electromagnetic shower components, the e/h signal ratio, which should be equal to 1.0 for optimal performance. The factors that determine the value of this ratio are examined. The calorimeter performance is crucially determined by its response to the abundant soft neutrons in the shower. The presence of a considerable fraction of hydrogen atoms in the active medium is essential for achieving the best possible results. First, this allows one to tune e/h to the desired value by choosing the appropriate sampling fraction. Second, the efficient neutron detection via recoil protons in the readout medium itself considerably reduces the effect of fluctuations in binding-energy losses at the nuclear level, which dominate the intrinsic energy resolution. Signal equalization, or compensation (e/h = 1.0), does not seem to be a property unique to 238U, but can also be achieved with lead and probably even iron absorbers. 21 refs.; 19 figs.
Optimal Fungal Space Searching Algorithms.
Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V
2016-10-01
Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not increase significantly with the size of the maze. These findings suggest that a systematic effort to harvest the natural space-searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.
A cloud masking algorithm for EARLINET lidar systems
Binietoglou, Ioannis; Baars, Holger; D'Amico, Giuseppe; Nicolae, Doina
2015-04-01
Cloud masking is an important first step in any aerosol lidar processing chain, as most data processing algorithms can only be applied to cloud-free observations. Up to now, the selection of a cloud-free time interval for data processing has typically been performed manually, and this is one of the outstanding problems for automatic processing of lidar data in networks such as EARLINET. In this contribution we present initial developments of a cloud masking algorithm that permits the selection of appropriate time intervals for lidar data processing based on uncalibrated lidar signals. The algorithm is based on a signal normalization procedure using the range of observed values of the lidar returns, designed to work with different lidar systems with minimal user input. This normalization procedure can be applied to measurement periods of only a few hours, even if no suitable cloud-free interval exists, and thus can be used even when only a short period of lidar measurements is available. Clouds are detected based on a combination of criteria, including the magnitude of the normalized lidar signal and time-space edge detection performed using the Sobel operator. In this way the algorithm avoids misclassifying strong aerosol layers as clouds. Cloud detection is performed using the highest available temporal and vertical resolution of the lidar signals, allowing the effective detection of low-level clouds (e.g. cumulus humilis). Special attention is given to suppressing false cloud detections due to signal noise, which can affect the algorithm's performance, especially during daytime. In this contribution we present the details of the algorithm, the effect of lidar characteristics (space-time resolution, available wavelengths, signal-to-noise ratio) on detection performance, and the current strengths and limitations of the algorithm, using lidar scenes from different lidar systems at different locations across Europe.
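A compact sketch of the two detection criteria described above, combining a normalized-signal magnitude threshold with Sobel edge detection over the time-height plane; the percentile normalization and both thresholds are placeholders rather than EARLINET-calibrated values.

    # Toy cloud mask: normalized-signal magnitude AND time-space Sobel edges.
    import numpy as np
    from scipy import ndimage

    def cloud_mask(signal, mag_thresh=0.8, edge_thresh=0.5):
        # signal: 2-D array (time x range) of uncalibrated lidar returns
        lo, hi = np.percentile(signal, [1, 99])
        norm = np.clip((signal - lo) / (hi - lo + 1e-12), 0.0, 1.0)
        edges = np.hypot(ndimage.sobel(norm, axis=0), ndimage.sobel(norm, axis=1))
        return (norm > mag_thresh) & (edges > edge_thresh)  # True where cloud suspected

Requiring both a strong signal and a sharp time-space edge is what lets a scheme of this kind pass over spatially extended, smoothly varying aerosol layers.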
The semianalytical cloud retrieval algorithm for SCIAMACHY I. The validation
Directory of Open Access Journals (Sweden)
A. A. Kokhanovsky
2006-01-01
A recently developed cloud retrieval algorithm for the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY) is briefly presented and validated using independent and well-tested cloud retrieval techniques based on the look-up-table approach for MODerate resolution Imaging Spectroradiometer (MODIS) data. The results of cloud top height retrievals using measurements in the oxygen A-band by an airborne crossed Czerny-Turner spectrograph and the Global Ozone Monitoring Experiment (GOME) instrument are compared with those obtained from airborne dual photography and retrievals using data from the Along Track Scanning Radiometer (ATSR-2), respectively.
Solutions on high-resolution multiple configuration system sensors
Liu, Hua; Ding, Quanxin; Guo, Chunjie; Zhou, Liwei
2014-11-01
To achieve improved resolution in the modern imaging domain, we attempt to model a continuous-zoom, multiple-configuration system with core optics, based on a novel principle of energy transfer and high-accuracy localization, by which the system resolution can be improved to the nanometre level. A comparative study of traditional versus modern methods demonstrates that the dialectical relationship and balance among the merit function, the optimization algorithms and the model parameterization are important. System evaluation criteria such as MTF, REA and RMS support our arguments qualitatively.
High spatial resolution CT image reconstruction using parallel computing
International Nuclear Information System (INIS)
Yin Yin; Liu Li; Sun Gongxing
2003-01-01
Using a PC cluster system with 16 dual-CPU nodes, we accelerate FBP and OR-OSEM reconstruction of high spatial resolution images (2048 x 2048). Based on the number of projections, we rewrite the reconstruction algorithms in parallel form and dispatch the tasks to each CPU. With parallel computing, the speedup factor is roughly equal to the number of CPUs, up to about 25 times when 25 CPUs are used. This technique is very suitable for real-time high spatial resolution CT image reconstruction. (authors)
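The parallelization pattern described above can be sketched as follows: the projection angles are split across workers, each worker back-projects its subset, and the partial images are summed, since back-projection is linear in the projections. The unfiltered back-projection kernel and multiprocessing layout below are illustrative assumptions (the ramp filtering of FBP is omitted), not the authors' PC-cluster code.

    # Projection-level parallel back-projection; run under "if __name__ == '__main__':".
    import numpy as np
    from multiprocessing import Pool

    def backproject_subset(args):
        rows, angles, size = args                      # rows: (n, size) sinogram slice
        xs = np.arange(size) - size / 2.0
        X, Y = np.meshgrid(xs, xs)
        image = np.zeros((size, size))
        for row, theta in zip(rows, angles):
            t = X * np.cos(theta) + Y * np.sin(theta)  # detector coordinate per pixel
            idx = np.clip((t + size / 2.0).astype(int), 0, size - 1)
            image += row[idx]                          # smear the projection back
        return image

    def parallel_backprojection(sinogram, angles, size, n_workers=16):
        chunks = [(sinogram[i::n_workers], angles[i::n_workers], size)
                  for i in range(n_workers)]
        with Pool(n_workers) as pool:
            partials = pool.map(backproject_subset, chunks)
        return np.sum(partials, axis=0)                # back-projection is linear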
STAR Algorithm Integration Team - Facilitating operational algorithm development
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Applications and Research (STAR) provides technical support for the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning the atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. The AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.
Algorithm aversion: people erroneously avoid algorithms after seeing them err.
Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade
2015-02-01
Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.
The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.
Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P
1999-10-01
In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.
Xu, Xiaodong; Zhou, Xiaolin
2016-02-01
This study investigated how topic shift and topic continuation influence pronoun interpretation in Chinese. ERPs recorded on pronouns in topic structure showed stronger and earlier late positive responses (P600) for the topic-shift than for the topic-continuation conditions. However, in nontopic structure where the subject (denoting only subjecthood), rather than the topic (denoting both topichood and subjecthood), acted as the antecedent of the pronoun, almost indistinguishable P600 responses were obtained on the pronoun regardless of whether it was referring to the subject (i.e., subject continuation) or the object (i.e., subject shift). Moreover, stronger and earlier P600 responses were elicited by pronouns in the topic-shift than in the subject-shift conditions, although there was no difference between the topic-continuation and the subject-continuation conditions. These findings suggest that topic shift results in greater difficulty in the resolution stage of referential processing, although the bonding process is not sensitive to the manipulation of topic status, and that topic has a privileged cognitive status relative to other nontopic entities (e.g., subject) in real-time language processing. © 2015 Society for Psychophysiological Research.
Algorithmic Reflections on Choreography
Directory of Open Access Journals (Sweden)
Pablo Ventura
2016-11-01
In 1996, Pablo Ventura turned his attention to the choreography software Life Forms to find out whether the then-revolutionary new tool could lead to new possibilities of expression in contemporary dance. Over the next two decades, he devised choreographic techniques and custom software to create dance works that highlight the operational logic of computers, accompanied by computer-generated dance and media elements. This article provides a firsthand account of how Ventura's engagement with algorithmic concepts guided and transformed his choreographic practice. The text describes the methods that were developed to create computer-aided dance choreographies. Furthermore, the text illustrates how choreography techniques can be applied to correlate formal and aesthetic aspects of movement, music, and video. Finally, the text emphasizes how Ventura's interest in the wider conceptual context has led him to explore with choreographic means fundamental issues concerning the characteristics of humans and machines and their increasingly profound interdependencies.
Frequency-domain imaging algorithm for ultrasonic testing by application of matrix phased arrays
Directory of Open Access Journals (Sweden)
Dolmatov Dmitry
2017-01-01
Constantly increasing demand for high-performance materials and systems in the aerospace industry requires advanced methods of nondestructive testing. One of the most promising methods is ultrasonic imaging using matrix phased arrays, a technique that produces three-dimensional ultrasonic images with high lateral resolution. Further progress in matrix phased array ultrasonic testing is determined by the development of fast imaging algorithms. In this article, an imaging algorithm based on frequency-domain calculations is proposed. This approach is computationally efficient in comparison with time-domain algorithms. The performance of the proposed algorithm was tested via computer simulations for a planar specimen with flat-bottom holes.
Deconvolution of 2D coincident Doppler broadening spectroscopy using the Richardson-Lucy algorithm
International Nuclear Information System (INIS)
Zhang, J.D.; Zhou, T.J.; Cheung, C.K.; Beling, C.D.; Fung, S.; Ng, M.K.
2006-01-01
Coincident Doppler Broadening Spectroscopy (CDBS) measurements are popular in positron solid-state studies of materials. By utilizing the instrumental resolution function obtained from a gamma line close in energy to the 511 keV annihilation line, it is possible to significantly enhance the quality of CDBS spectra using deconvolution algorithms. In this paper, we compare two algorithms, namely the regularized Non-Negative Least Squares (NNLS) method and the Richardson-Lucy (RL) algorithm. The latter, which is based on the method of maximum likelihood, is found to give superior results to the regularized least-squares algorithm, with significantly less computer processing time.
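As a concrete reference, here is a minimal 1-D Richardson-Lucy sketch of the kind of iteration compared above, assuming the instrumental resolution function is available as a measured PSF from a nearby gamma line; the iteration count and flat initialization are illustrative choices, not values from the paper.

    # 1-D Richardson-Lucy deconvolution (maximum-likelihood for Poisson noise).
    import numpy as np

    def richardson_lucy(measured, psf, n_iter=200):
        psf = psf / psf.sum()                          # normalized resolution function
        psf_flipped = psf[::-1]
        estimate = np.full(measured.shape, float(measured.mean()))
        for _ in range(n_iter):
            blurred = np.convolve(estimate, psf, mode="same")
            ratio = measured / (blurred + 1e-12)       # data / model ratio
            estimate *= np.convolve(ratio, psf_flipped, mode="same")
        return estimate

The multiplicative update preserves non-negativity automatically, which is one reason RL is attractive for counting spectra compared with an explicitly constrained least-squares solver.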
Time reversal and phase coherent music techniques for super-resolution ultrasound imaging
Huang, Lianjie; Labyed, Yassin
2018-05-01
Systems and methods for super-resolution ultrasound imaging using a windowed and generalized TR-MUSIC algorithm that divides the imaging region into overlapping sub-regions and applies the TR-MUSIC algorithm to the windowed backscattered ultrasound signals corresponding to each sub-region. The algorithm is also structured to account for the ultrasound attenuation in the medium and the finite-size effects of ultrasound transducer elements. A modified TR-MUSIC imaging algorithm is used to account for ultrasound scattering from both density and compressibility contrasts. The phase response of ultrasound transducer elements is accounted for in a PC-MUSIC system.
Two-energy twin image removal in atomic-resolution x-ray holography
International Nuclear Information System (INIS)
Nishino, Y.; Ishikawa, T.; Hayashi, K.; Takahashi, Y.; Matsubara, E.
2002-01-01
We propose a two-energy twin image removal algorithm for atomic-resolution x-ray holography. The validity of the algorithm is demonstrated in a theoretical simulation and in an internal-detector x-ray holography experiment using a ZnSe single crystal. Compared to the widely used multiple-energy algorithm, it allows efficient measurement of holograms, and is especially important when the available x-ray energies are fixed. It enables twin-image-free holography using characteristic x rays from laboratory generators and x-ray pulses from free-electron lasers.
Sensitivity and Resolution Improvement in RGBW Color Filter Array Sensor
Directory of Open Access Journals (Sweden)
Seunghoon Jee
2018-05-01
Recently, several red-green-blue-white (RGBW) color filter arrays (CFAs), which include highly sensitive W pixels, have been proposed. However, RGBW CFA patterns suffer from spatial resolution degradation, since the sensor has more color components than the Bayer CFA pattern. RGBW CFA demosaicing methods reconstruct resolution using the correlation between white (W) pixels and pixels of other colors, but this does not improve the red-green-blue (RGB) channel sensitivity to the W channel level. In this paper, we therefore propose a demosaiced-image post-processing method to improve RGBW CFA sensitivity and resolution. The proposed method decomposes texture components containing image noise and resolution information. The RGB channel sensitivity and resolution are improved by updating the texture components of the RGB channels with that of the W channel. For this process, a cross multilateral filter (CMF) is proposed. It separates the smooth component from the texture component using color difference information and distinguishes color components through that information. Moreover, it decomposes texture components, luminance noise, color noise, and color aliasing artifacts from the demosaiced images. Finally, by updating the texture of the RGB channels with the W channel texture components, the proposed algorithm improves sensitivity and resolution. Results show that the proposed method is effective, maintaining W pixel resolution characteristics and improving sensitivity, with the signal-to-noise ratio increasing by approximately 4.5 dB.
Directory of Open Access Journals (Sweden)
Lingyun Li
2013-01-01
We provide a new gossip algorithm to investigate the problem of opinion consensus with time-varying influence factors and a weakly connected graph among multiple agents. Moreover, we discuss not only the effect of the time-varying factors and the randomized topological structure, but also the spread of misinformation and communication constraints described by probabilistic quantized communication in the social network. Under the underlying weakly connected graph, we first show that all opinion states converge to a stochastic consensus almost surely; that is, our algorithm indeed achieves consensus with probability one. Furthermore, our results show that the mean of all the opinion states converges to the average of the initial states when the time-varying influence factors satisfy certain conditions. Finally, we give a result on the mean-square error between the dynamic opinion states and the benchmark without quantized communication.
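A toy sketch of the kind of update studied above: at each step a random edge of the graph is activated, the two agents exchange probabilistically quantized opinions, and each mixes with a time-varying influence factor. The weight schedule, quantizer step, and edge-list graph encoding are illustrative assumptions, not the paper's model.

    # Pairwise gossip with probabilistic quantization and decaying influence.
    import numpy as np

    rng = np.random.default_rng(0)

    def quantize(x, step=0.01):
        return step * np.floor(x / step + rng.random())   # unbiased randomized quantizer

    def gossip(opinions, edges, n_rounds=1000):
        x = np.asarray(opinions, dtype=float)
        for t in range(n_rounds):
            i, j = edges[rng.integers(len(edges))]        # activate a random link
            w = 0.5 / (1 + 0.01 * t)                      # time-varying influence factor
            qi, qj = quantize(x[i]), quantize(x[j])       # constrained communication
            x[i] += w * (qj - x[i])
            x[j] += w * (qi - x[j])
        return x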
Multisensor data fusion algorithm development
Energy Technology Data Exchange (ETDEWEB)
Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.
1995-12-01
This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
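The wavelet fusion rule can be sketched along these lines: decompose both registered images, average the approximation coefficients, and keep the larger-magnitude coefficient in each detail subband. The PyWavelets calls, wavelet choice, and max-abs selection rule are assumptions standing in for the report's unstated implementation details.

    # DWT-based image fusion with a max-abs detail rule (uses PyWavelets).
    import numpy as np
    import pywt

    def wavelet_fuse(img_a, img_b, wavelet="db2", level=3):
        # img_a, img_b: co-registered 2-D arrays of the same shape
        ca = pywt.wavedec2(img_a, wavelet, level=level)
        cb = pywt.wavedec2(img_b, wavelet, level=level)
        fused = [(ca[0] + cb[0]) / 2.0]                    # average approximations
        for da, db in zip(ca[1:], cb[1:]):                 # per-level detail subbands
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                               for a, b in zip(da, db)))   # keep stronger detail
        return pywt.waverec2(fused, wavelet)

Selecting detail coefficients by magnitude rather than blending intensities is one plausible reading of why a wavelet scheme preserves spectral/spatial information better than intensity modulation or IHS substitution.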
Mao-Gilles Stabilization Algorithm
Directory of Open Access Journals (Sweden)
Jérôme Gilles
2013-07-01
Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.
One improved LSB steganography algorithm
Song, Bing; Zhang, Zhi-hong
2013-03-01
Information hidden in digital images using the LSB algorithm is easily detected, with high accuracy, by chi-square and RS steganalysis. We start by changing the choice of embedding locations and the embedding method: combining a sub-affine transformation with matrix coding, we improve the LSB algorithm and propose a new LSB algorithm. Experimental results show that the improved algorithm can effectively resist chi-square and RS steganalysis.
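For contrast, here is the naive baseline the authors improve on, not their proposed algorithm: plain sequential LSB replacement, whose statistical footprint is exactly what chi-square and RS steganalysis detect. The uint8 array handling is an illustrative assumption.

    # Naive sequential LSB replacement (the detectable baseline, for contrast).
    import numpy as np

    def lsb_embed(pixels, bits):
        flat = pixels.astype(np.uint8).flatten()
        b = np.asarray(bits, dtype=np.uint8)
        flat[: b.size] = (flat[: b.size] & 0xFE) | b   # overwrite least significant bit
        return flat.reshape(pixels.shape)

    def lsb_extract(pixels, n_bits):
        return pixels.flatten()[:n_bits] & 1           # read the LSB plane back

Because replacement pairs pixel values 2k and 2k+1, the LSB plane acquires the pair-of-values statistics that the chi-square attack tests for; scattering and recoding the payload, as the paper proposes, is aimed at breaking exactly that signature.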
Unsupervised Classification Using Immune Algorithm
Al-Muallim, M. T.; El-Kouatly, R.
2012-01-01
An unsupervised classification algorithm based on the clonal selection principle, named Unsupervised Clonal Selection Classification (UCSC), is proposed in this paper. The proposed algorithm is data-driven and self-adaptive; it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well-known K-means algorithm on several artificial and real-life data sets. The experiments show that the proposed U...
Graph Algorithm Animation with Grrr
Rodgers, Peter; Vidal, Natalia
2000-01-01
We discuss the geometric positioning, highlighting of visited nodes, and user-defined highlighting that form the algorithm animation facilities in the Grrr graph rewriting programming language. The main purpose of the animation was initially the debugging and profiling of Grrr code, but recently it has been extended for the purpose of teaching algorithms to undergraduate students. The animation is restricted to graph-based algorithms such as graph drawing, list manipulation or more traditional gra...
Algorithms over partially ordered sets
DEFF Research Database (Denmark)
Baer, Robert M.; Østerby, Ole
1969-01-01
in partially ordered sets, answer the combinatorial question of how many maximal chains might exist in a partially ordered set with n elements, and we give an algorithm for enumerating all maximal chains. We give (in § 3) algorithms which decide whether a partially ordered set is a (lower or upper) semi-lattice, and whether a lattice has distributive, modular, and Boolean properties. Finally (in § 4) we give Algol realizations of the various algorithms.
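A minimal sketch of enumerating all maximal chains, assuming the poset is given by its cover relation (Hasse diagram): a depth-first walk extends each chain from a minimal element until no element covers the last one. The dict encoding is an assumption; the paper's Algol realization is of course different.

    # Enumerate all maximal chains of a finite poset from its cover relation.
    def maximal_chains(covers, minimals):
        # covers[x] = list of elements covering x; minimals = minimal elements
        chains = []
        def extend(chain):
            succ = covers.get(chain[-1], [])
            if not succ:
                chains.append(list(chain))       # reached a maximal element
            for y in succ:
                extend(chain + [y])
        for m in minimals:
            extend([m])
        return chains

    # e.g. the poset a < b, a < c, b < d, c < d has two maximal chains:
    print(maximal_chains({"a": ["b", "c"], "b": ["d"], "c": ["d"]}, ["a"]))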
Genetic Algorithms: A New Method for Neutron Beam Spectral Characterization
International Nuclear Information System (INIS)
David W. Freeman
2000-01-01
A revolutionary new concept for solving the neutron spectrum unfolding problem using genetic algorithms (GAs) has recently been introduced. GAs are part of a new field of evolutionary solution techniques that mimic living systems with computer-simulated chromosome solutions that mate, mutate, and evolve to create improved solutions. The original motivation for the research was to improve the spectral characterization of neutron beams associated with boron neutron capture therapy (BNCT). The GA unfolding technique has been successfully applied to problems with moderate energy resolution (up to 47 energy groups). Initial research indicates that the GA unfolding technique may well be superior to popular unfolding methods in common use. Research now under way at Kansas State University is focused on optimizing the unfolding algorithm and expanding its energy resolution to unfold detailed beam spectra based on multiple foil measurements. Indications are that the final code will significantly outperform current, state-of-the-art codes in use by the scientific community.
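A toy sketch of the GA unfolding concept: chromosomes are candidate group fluxes, fitness penalizes disagreement between predicted and measured foil activations (A = R phi), and the population evolves by selection, one-point crossover, and Gaussian mutation. The response matrix, operators, and parameters are illustrative assumptions, not the Kansas State code.

    # Toy GA spectrum unfolding: chromosomes are non-negative group fluxes.
    import numpy as np

    rng = np.random.default_rng(1)

    def unfold_ga(R, measured, n_pop=100, n_gen=500, sigma=0.05):
        # R: (n_foils, n_groups) response matrix; measured: (n_foils,) activations
        n_groups = R.shape[1]
        pop = rng.random((n_pop, n_groups))                # random initial fluxes
        for _ in range(n_gen):
            fitness = -np.sum((pop @ R.T - measured) ** 2, axis=1)
            parents = pop[np.argsort(fitness)[::-1][: n_pop // 2]]   # selection
            cuts = rng.integers(1, n_groups, size=n_pop // 2)
            kids = np.array([np.concatenate((parents[i][:c],
                                             parents[(i + 1) % len(parents)][c:]))
                             for i, c in enumerate(cuts)])           # one-point crossover
            kids = np.clip(kids + sigma * rng.standard_normal(kids.shape), 0.0, None)
            pop = np.vstack([parents, kids])               # next generation
        fitness = -np.sum((pop @ R.T - measured) ** 2, axis=1)
        return pop[np.argmax(fitness)]                     # best candidate spectrum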