Regular network model for the sea ice-albedo feedback in the Arctic.
Müller-Stoffels, Marc; Wackerbauer, Renate
2011-03-01
The Arctic Ocean and sea ice form a feedback system that plays an important role in the global climate. The complexity of highly parameterized global circulation (climate) models makes it very difficult to assess feedback processes in climate without the concurrent use of simple models where the physics is understood. We introduce a two-dimensional energy-based regular network model to investigate feedback processes in an Arctic ice-ocean layer. The model includes the nonlinear aspect of the ice-water phase transition, a nonlinear diffusive energy transport within a heterogeneous ice-ocean lattice, and spatiotemporal atmospheric and oceanic forcing at the surfaces. First results for a horizontally homogeneous ice-ocean layer show bistability and related hysteresis between perennial ice and perennial open water for varying atmospheric heat influx. Seasonal ice cover exists as a transient phenomenon. We also find that ocean heat fluxes are more efficient than atmospheric heat fluxes at melting Arctic sea ice.
Dang, H.; Stayman, J. W.; Xu, J.; Sisniega, A.; Zbijewski, W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H.
2016-03-01
Intracranial hemorrhage (ICH) is associated with pathologies such as hemorrhagic stroke and traumatic brain injury. Multi-detector CT is the current front-line imaging modality for detecting ICH (fresh blood contrast 40-80 HU, down to 1 mm). Flat-panel detector (FPD) cone-beam CT (CBCT) offers a potential alternative with a smaller scanner footprint, greater portability, and lower cost, potentially well suited to deployment at the point of care outside standard diagnostic radiology and emergency room settings. Previous studies have suggested reliable detection of ICH down to 3 mm in CBCT using high-fidelity artifact correction and penalized weighted least-squares (PWLS) image reconstruction with a post-artifact-correction noise model. However, ICH reconstructed by traditional image regularization exhibits nonuniform spatial resolution and noise due to interaction between the statistical weights and regularization, which potentially degrades the detectability of ICH. In this work, we propose three regularization methods designed to overcome these challenges. The first two compute spatially varying certainty for uniform spatial resolution and noise, respectively. The third computes spatially varying regularization strength to achieve uniform "detectability," combining both spatial resolution and noise in a manner analogous to a delta-function detection task. Experiments were conducted on a CBCT test-bench, and image quality was evaluated for simulated ICH in different regions of an anthropomorphic head. The first two methods improved the uniformity in spatial resolution and noise compared to traditional regularization. The third exhibited the highest uniformity in detectability among all methods and best overall image quality. The proposed regularization provides a valuable means to achieve uniform image quality in CBCT of ICH and is being incorporated in a CBCT prototype for ICH imaging.
Motion-aware temporal regularization for improved 4D cone-beam computed tomography
Mory, Cyril; Janssens, Guillaume; Rit, Simon
2016-09-01
Four-dimensional cone-beam computed tomography (4D-CBCT) of the free-breathing thorax is a valuable tool in image-guided radiation therapy of the thorax and the upper abdomen. It allows the determination of the position of a tumor throughout the breathing cycle, while only its mean position can be extracted from three-dimensional CBCT. The classical approaches are not fully satisfactory: respiration-correlated methods allow one to accurately locate high-contrast structures in any frame, but contain strong streak artifacts unless the acquisition is significantly slowed down. Motion-compensated methods can yield streak-free, but static, reconstructions. This work proposes a 4D-CBCT method that can be seen as a trade-off between respiration-correlated and motion-compensated reconstruction. It builds upon the existing reconstruction using spatial and temporal regularization (ROOSTER) and is called motion-aware ROOSTER (MA-ROOSTER). It performs temporal regularization along curved trajectories, following the motion estimated on a prior 4D CT scan. MA-ROOSTER does not involve motion-compensated forward and back projections: the input motion is used only during temporal regularization. MA-ROOSTER is compared to ROOSTER, motion-compensated Feldkamp-Davis-Kress (MC-FDK), and two respiration-correlated methods, on CBCT acquisitions of one physical phantom and two patients. It yields streak-free reconstructions, visually similar to MC-FDK, and robust information on tumor location throughout the breathing cycle. MA-ROOSTER also allows a variation of the lung tissue density during the breathing cycle, similar to that of planning CT, which is required for quantitative post-processing.
International Nuclear Information System (INIS)
Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A.; Yang, Deshan; Tan, Jun
2016-01-01
Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated
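As context for the acceleration described above, the baseline method is the generic FISTA proximal-gradient scheme of the optimization literature. The sketch below shows plain FISTA applied to a toy l1-regularized least-squares problem; it is not the OS-SART-accelerated variant proposed in the paper, and the toy problem, names and parameter values are illustrative only:

```python
import numpy as np

def fista(grad_f, prox_g, x0, step, n_iter=500):
    """Minimize f(x) + g(x), f smooth, g nonsmooth (e.g. an l1 or TV regularizer).

    grad_f : gradient of the smooth data-fidelity term
    prox_g : proximal operator of the regularizer, called as prox_g(v, step)
    step   : step size, at most 1/L with L the Lipschitz constant of grad_f
    """
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        x_new = prox_g(y - step * grad_f(y), step)       # forward-backward step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)    # momentum extrapolation
        x, t = x_new, t_new
    return x

# toy l1-regularized least squares: recover a sparse vector from noiseless data
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = A @ np.array([1.0, -1.0] + [0.0] * 8)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2                            # Lipschitz constant of A^T(Ax-b)
grad = lambda x: A.T @ (A @ x - b)
soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - lam * s, 0.0)
x_hat = fista(grad, soft, np.zeros(10), 1.0 / L)
```

The recovered `x_hat` is close to the sparse ground truth, with only the small shrinkage bias that the l1 penalty introduces.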
International Nuclear Information System (INIS)
Stsepankou, D; Arns, A; Hesser, J; Ng, S K; Zygmanski, P
2012-01-01
The objective of this paper is to evaluate an iterative maximum likelihood (ML) cone-beam computed tomography (CBCT) reconstruction with total variation (TV) regularization with respect to the robustness of the algorithm against data inconsistencies. Three different and (for clinical application) typical classes of errors are considered for simulated phantom and measured projection data: quantum noise, defect detector pixels and projection matrix errors. To quantify those errors we apply error measures such as mean square error, signal-to-noise ratio, contrast-to-noise ratio and streak indicator. These measures are derived from linear signal theory and generalized and applied for nonlinear signal reconstruction. For the quality check, we focus on resolution and CT-number linearity based on a Catphan phantom. All comparisons are made against the clinical standard, the filtered backprojection algorithm (FBP). In our results, we confirm and substantially extend previous results on iterative reconstruction such as massive undersampling of the number of projections. Projection matrix errors of up to 1° in projection angle remain within the tolerance level. Single defect pixels produce ring artifacts for each method; however, defect pixel compensation allows up to 40% defect pixels while still passing the standard clinical quality check. Further, the iterative algorithm is extraordinarily robust in the low photon regime (down to 0.05 mAs) when compared to FBP, allowing for extremely low-dose image acquisitions, a substantial issue when considering daily CBCT imaging for position correction in radiotherapy. We conclude that the ML method studied herein is robust under clinical quality assurance conditions. Consequently, low-dose regime imaging, especially for daily patient localization in radiation therapy, is possible without change of the current hardware of the imaging system. (paper)
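The error measures named in the abstract (MSE, SNR, CNR) have standard region-of-interest definitions; the paper generalizes them for nonlinear reconstruction, but a minimal sketch of the conventional forms may be useful orientation. The synthetic regions below are illustrative, not from the study:

```python
import numpy as np

def mse(img, ref):
    """Mean square error between a reconstruction and a reference image."""
    return float(np.mean((np.asarray(img) - np.asarray(ref)) ** 2))

def snr(roi):
    """Signal-to-noise ratio of a nominally uniform region of interest."""
    roi = np.asarray(roi)
    return float(roi.mean() / roi.std())

def cnr(roi, background):
    """Contrast-to-noise ratio between a feature ROI and a background ROI."""
    roi, background = np.asarray(roi), np.asarray(background)
    return float(abs(roi.mean() - background.mean()) / background.std())

# synthetic sanity check: a feature 5 units above a unit-noise background
rng = np.random.default_rng(1)
background = rng.normal(0.0, 1.0, 10_000)
feature = rng.normal(5.0, 1.0, 10_000)
```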
Development of regularized expectation maximization algorithms for fan-beam SPECT data
International Nuclear Information System (INIS)
Kim, Soo Mee; Lee, Jae Sung; Lee, Dong Soo; Lee, Soo Jin; Kim, Kyeong Min
2005-01-01
SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For the reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods without transforming the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances were compared. The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered subsets EM) and MAP-EM OSL (maximum a posteriori EM using the one-step late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into the parallel data using various interpolation methods, such as the nearest neighbor, bilinear and bicubic interpolations, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp/Logan phantoms were reconstructed using the above algorithms. The reconstructed images were compared in terms of a percent error metric. For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best result in both percent error and stability. Bilinear interpolation was the most effective method for rebinning from the fan-beam to parallel geometry when the accuracy and computation load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Direct fan-beam reconstruction algorithms were implemented, which provided significantly improved reconstructions
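The EM algorithm compared above is, in its unregularized form, the familiar multiplicative MLEM update. The sketch below uses a tiny dense matrix standing in for the ray-traced fan-beam projector; the 3-ray, 2-voxel system is purely illustrative:

```python
import numpy as np

def mlem(A, y, n_iter=1000):
    """Unregularized maximum-likelihood EM (MLEM) for emission tomography:
    x <- x / (A^T 1) * A^T (y / (A x)), applied elementwise."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)  # measured / estimated projections
        x = x * (A.T @ ratio) / sens          # multiplicative update keeps x >= 0
    return x

# toy 3-ray, 2-voxel system with noiseless, consistent projection data
A = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.7, 0.3]])
x_true = np.array([2.0, 3.0])
x_rec = mlem(A, A @ x_true)
```

With noiseless consistent data the iterates approach the true activity; the MAP-EM OSL variants in the paper add a prior term to this update.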
Han, Hao; Gao, Hao; Xing, Lei
2017-08-01
Excessive radiation exposure is still a major concern in 4D cone-beam computed tomography (4D-CBCT) due to its prolonged scanning duration. Radiation dose can be effectively reduced by either under-sampling the x-ray projections or reducing the x-ray flux. However, 4D-CBCT reconstruction under such low-dose protocols is prone to image artifacts and noise. In this work, we propose a novel joint regularization-based iterative reconstruction method for low-dose 4D-CBCT. To tackle the under-sampling problem, we employ spatiotemporal tensor framelet (STF) regularization to take advantage of the spatiotemporal coherence of the patient anatomy in 4D images. To simultaneously suppress the image noise caused by photon starvation, we also incorporate spatiotemporal nonlocal total variation (SNTV) regularization to make use of the nonlocal self-recursiveness of anatomical structures in the spatial and temporal domains. Under the joint STF-SNTV regularization, the proposed iterative reconstruction approach is evaluated first using two digital phantoms and then using physical experiment data in the low-dose context of both under-sampled and noisy projections. Compared with existing approaches via either STF or SNTV regularization alone, the presented hybrid approach achieves improved image quality, and is particularly effective for the reconstruction of low-dose 4D-CBCT data that are not only sparse but noisy.
Li, Xiaomei; Luo, Lan; Cai, Ying; Yang, Wenjiao; Lin, Lisha; Li, Zi; Gao, Na; Purcell, Steven W; Wu, Mingyi; Zhao, Jinhua
2017-10-25
Edible sea cucumbers are widely used as a health food and medicine. A fucosylated glycosaminoglycan (FG) was purified from the high-value sea cucumber Stichopus herrmanni. Its physicochemical properties and structure were analyzed and characterized by chemical and instrumental methods. Chemical analysis indicated that this FG with a molecular weight of ∼64 kDa is composed of N-acetyl-d-galactosamine, d-glucuronic acid (GlcA), and l-fucose. Structural analysis clarified that the FG contains the chondroitin sulfate E-like backbone, with mostly 2,4-di-O-sulfated (85%) and some 3,4-di-O-sulfated (10%) and 4-O-sulfated (5%) fucose side chains that link to the C3 position of GlcA. This FG is structurally highly regular and homogeneous, differing from the FGs of other sea cucumbers, for its sulfation patterns are simpler. Biological activity assays indicated that it is a strong anticoagulant, inhibiting thrombin and intrinsic factor Xase. Our results expand the knowledge on structural types of FG and illustrate its biological activity as a functional food material.
Leaving School — learning at SEA: Regular high school education alongside polar research
Gatti, Susanne
2010-05-01
Against the background of unsatisfactory results from the international OECD study PISA (Program for International Student Assessment), Germany is facing a period of intense school reforms. Looking back at a tradition of school culture with too few changes during the last century, quick and radical renewal of the school system is rather unlikely. Furthermore, students are increasingly turning away from natural sciences [1]. The AWI aims at providing impulses for major changes in the schooling system and is offering solid science education not only for university students but also for a larger audience. All efforts towards this goal are interconnected within the project SEA (Science & Education @ the AWI). With the school term of 2002/03 the Alfred-Wegener-Institute for Polar and Marine Research started HIGHSEA (High school of SEA). The program is the most important component of SEA. Each year 22 high school students (grade 10 or 11) are admitted to HIGHSEA, spending their last three years of school not at school but at the institute. Four subjects (biology as a major, chemistry, math and English as accessory subjects) are combined and taught fully integrated. Students leave their school for two days each week to study, work and explore all necessary topics at the AWI. All of the curricular necessities of the four subjects have been rearranged in their temporal sequencing, thus enabling a conceptual formulation of four major questions to be dealt with in the course of the three-year program [2]. Students are taught by teachers of the cooperating schools as well as by scientists of the AWI. Close links and intense cooperation between both groups are the basis of fundamental changes in teaching and learning climate. We are organizing expeditions for every group of HIGHSEA-students (e.g. to the Arctic or to mid-Atlantic seamounts). For each student expedition we devise a "real" research question. Usually a single working group at the AWI has a special interest in the
An attempt to define critical wave and wind scenarios leading to capsize in beam sea
DEFF Research Database (Denmark)
Jensen, Jørgen Juncher; Choi, Ju-hyuck; Kristensen, Hans Otto Holmegaard
2016-01-01
for current new buildings with large superstructures. Thus it seems reasonable to investigate the possibility of capsizing in beam sea under the joint action of waves and wind using direct time domain simulations. This has already been done in several studies. Here it is combined with the First Order...
International Nuclear Information System (INIS)
Shieh, Chun-Chien; Kipritidis, John; O'Brien, Ricky T; Cooper, Benjamin J; Keall, Paul J; Kuncic, Zdenka
2015-01-01
Total-variation (TV) minimization reconstructions can significantly reduce noise and streaks in thoracic four-dimensional cone-beam computed tomography (4D CBCT) images compared to the Feldkamp–Davis–Kress (FDK) algorithm currently used in practice. TV minimization reconstructions are, however, prone to over-smoothing anatomical details and are also computationally inefficient. The aim of this study is to demonstrate a proof of concept that these disadvantages can be overcome by incorporating the general knowledge of the thoracic anatomy via anatomy segmentation into the reconstruction. The proposed method, referred to as the anatomical-adaptive image regularization (AAIR) method, utilizes the adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS) framework, but introduces an additional anatomy segmentation step in every iteration. The anatomy segmentation information is implemented in the reconstruction using a heuristic approach to adaptively suppress over-smoothing at anatomical structures of interest. The performance of AAIR depends on parameters describing the weighting of the anatomy segmentation prior and segmentation threshold values. A sensitivity study revealed that the reconstruction outcome is not sensitive to these parameters as long as they are chosen within a suitable range. AAIR was validated using a digital phantom and a patient scan and was compared to FDK, ASD-POCS and the prior image constrained compressed sensing (PICCS) method. For the phantom case, AAIR reconstruction was quantitatively shown to be the most accurate as indicated by the mean absolute difference and the structural similarity index. For the patient case, AAIR resulted in the highest signal-to-noise ratio (i.e. the lowest level of noise and streaking) and the highest contrast-to-noise ratios for the tumor and the bony anatomy (i.e. the best visibility of anatomical details). Overall, AAIR was much less prone to over-smoothing anatomical details compared to ASD-POCS and
Bonding capacity of the GFRP-S on strengthened RC beams after sea water immersion
Sultan, Mufti Amir; Djamaluddin, Rudy
2017-11-01
Concrete structures located in extreme environments such as coastal areas suffer reduced strength or even structural damage. Chloride contained in sea water is responsible for strength reduction and structural failure, so maintenance and repair of concrete structures are urgently needed. One popular strengthening method under investigation is the use of Glass Fibre Reinforced Polymer (GFRP), which offers advantages such as corrosion resistance. This experimental study investigates the bonding capacity behavior of reinforced concrete beams strengthened with GFRP-S and immersed in sea water for one, three, six and twelve months. The test specimens consist of 12 reinforced concrete beams with dimensions (150x200x3000) mm, reinforced with GFRP-S in the bending region: a beam without immersion (B0) and beams immersed for one month (B1), three months (B3), six months (B6) and twelve months (B12). Specimens were cured for 28 days before application of the GFRP sheet; B1, B3, B6 and B12 were then immersed in a sea water pool for 1, 3, 6 and 12 months, respectively. Each specimen was tested under static load until failure, with strain gauges mounted on the concrete surface and on the GFRP to record strain values during testing. The results show a decrease in bonding capacity of the specimens immersed for one, three, six and twelve months, relative to the specimen without immersion, of 8.85%, 8.89%, 9.33% and 11.04%, respectively.
Korchemkina, E. N.; Latushkin, A. A.; Lee, M. E.
2017-11-01
The methods of determination of the concentration of and scattering by suspended particles in seawater are compared. The methods considered include gravimetric measurements of the mass concentration of suspended matter, empirical and analytical calculations based on measurements of the light beam attenuation coefficient (BAC) in 4 spectral bands, and calculation of backscattering by particles using satellite measurements in the visible spectral range. The data were obtained in two cruises of the R/V "Professor Vodyanitsky" in the deep-water part of the Black Sea in July and October 2016. The spatial distribution of scattering by marine particles according to satellite data is in good agreement with the contact measurements.
Development of an optical beam system for deep sea data acquisition
International Nuclear Information System (INIS)
Shibata, Yozo
1994-01-01
Remotely Operated Vehicles (ROVs) are an ideal means of acquiring data from instruments located on the seabed. Electrical, acoustic or optical signals can be used to communicate with the data acquisition system. While optical signals have high capacity, the power of an optical beam decreases rapidly with distance in sea water; however, the ROV's ability to approach the instruments eliminates this problem. To investigate the feasibility of an optical beam system for underwater data acquisition, the author has developed and manufactured a prototype data acquisition instrument that the ROV can control. Based on the communication test results, he concludes that such a system is a practical means of short-range underwater data acquisition.
Temporal and frequency characteristics of a narrow light beam in sea water.
Luchinin, Alexander G; Kirillin, Mikhail Yu
2016-09-20
The structure of a light field in sea water excited by a unidirectional point-sized pulsed source is studied by Monte Carlo technique. The pulse shape registered at the distances up to 120 m from the source on the beam axis and in its axial region is calculated with a time resolution of 1 ps. It is shown that with the increase of the distance from the source the pulse splits into two parts formed by components of various scattering orders. Frequency and phase responses of the beam are calculated by means of the fast Fourier transform. It is also shown that for higher frequencies, the attenuation of harmonic components of the field is larger. In the range of parameters corresponding to pulse splitting on the beam axis, the attenuation of harmonic components in particular spectral ranges exceeds the attenuation predicted by Bouguer law. In this case, the transverse distribution of the amplitudes of these harmonics is minimal on the beam axis.
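For reference, the Bouguer (Beer-Lambert) law against which the harmonic attenuation is compared gives simple exponential decay of the unscattered beam component; a one-function sketch follows, where the attenuation coefficient value is an assumed, illustrative magnitude rather than one from the study:

```python
import numpy as np

def bouguer_transmission(c, z):
    """Bouguer (Beer-Lambert) law: fraction of unscattered beam power
    remaining after path length z [m] for beam attenuation coefficient c [1/m]."""
    return np.exp(-c * z)

# decay over the 120 m range studied, for an assumed coefficient of 0.1 1/m
c = 0.1
z = np.linspace(0.0, 120.0, 5)
t = bouguer_transmission(c, z)
```

Multiple scattering adds a slower-decaying diffuse component on top of this exponential, which is why measured harmonic attenuation can depart from the Bouguer prediction.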
Directory of Open Access Journals (Sweden)
Mingyi Wu
2015-04-01
Sulfated fucans are complex polysaccharides that exhibit various biological activities. Herein, we purified two fucans from the sea cucumbers Holothuria edulis and Ludwigothurea grisea. Their structures were verified by means of HPGPC, FT-IR, GC–MS and NMR. As a result, a novel structural motif for this type of polymer is reported. The fucans have a unique structure composed of a central core of regular (1→2)- and (1→3)-linked tetrasaccharide repeating units. Approximately 50% of the units from L. grisea (100% for the H. edulis fucan) contain oligosaccharide side chains formed by nonsulfated fucose units linked to the O-4 position of the central core. Anticoagulant activity assays indicate that the sea cucumber fucans strongly inhibit human blood clotting through the intrinsic pathway of the coagulation cascade. Moreover, the mechanism of anticoagulant action of the fucans is selective inhibition of thrombin activity by heparin cofactor II. The distinctive tetrasaccharide repeating units contribute to the anticoagulant action. Additionally, unlike the fucans from marine algae, the sea cucumber fucans do not induce platelet aggregation despite their high molecular weights and abundant sulfation. Overall, our results may be helpful in understanding the structure-function relationships of well-defined polysaccharides from invertebrates as new types of safer anticoagulants.
Gatti, S.
2006-12-01
Against the background of unsatisfactory results from the international OECD study PISA (Program for International Student Assessment), Germany is facing a period of intense school reforms. Looking back at a tradition of school culture with too few changes during the last century, quick and radical renewal of the school system is rather unlikely. Furthermore, students are increasingly turning away from natural sciences. The AWI aims at providing impulses for major changes in the schooling system and is offering solid science education not only for university students but also for a much younger audience. All efforts towards this goal are interconnected within the project SEA (Science & Education @ the AWI). Five years ago the AWI started HIGHSEA (High school of SEA). Each year 22 high school students (grade 11) are admitted to HIGHSEA, spending their last three years of school not at school but at the institute. Four subjects (biology as a major, chemistry, math and English as accessory subjects) are combined and taught fully integrated. Students leave their schools for two days each week to study, work and explore all necessary topics at the AWI. All of the curricular requirements of the four subjects are being met. After rearrangement of their temporal sequencing, a conceptual formulation of four major questions around AWI topics became possible. Students are taught by teachers of the cooperating schools as well as by scientists of the AWI. Close links and intense cooperation between all three groups are the basis of fundamental changes in teaching and learning climate. For each group of students we organize a short research expedition: in August 2005 we worked in the high Arctic, in January and February 2006 we performed measurements at two eastern Atlantic seamounts. Even if the amount of data coming from these expeditions is comparatively small, they still contribute to ongoing research projects of the oceanographic department. The first two groups of students finished
Modeling and analysis of Off-beam lidar returns from thick clouds, snow, and sea ice
International Nuclear Information System (INIS)
Varnai, T.; Cahalan, R. F.
2009-01-01
A group of recently developed lidar (light detection and ranging) systems can detect signals returning from several wide fields of view, allowing them to observe the way laser pulses spread in thick media. This new capability has enabled accurate measurements of cloud geometrical thickness and promises improved measurements of internal cloud structure as well as snow and sea ice thickness. This paper presents a brief overview of radiation transport simulation techniques and data analysis methods that were developed for multi-view lidar applications and for considering multiple scattering effects in single-view lidar data. In discussing methods for simulating the three-dimensional spread of lidar pulses, we present initial results from Phase 3 of the Intercomparison of 3-D Radiation Codes (I3RC) project. The results reveal some differences in the capabilities of participating models, while good agreement among several models provides consensus results suitable for testing future models. Detailed numerical results are available at the I3RC web site at http://i3rc.gsfc.nasa.gov. In considering data analysis methods, we focus on the Thickness from Off-beam Returns (THOR) lidar. THOR proved successful in measuring the geometrical thickness of optically thick clouds; here we focus on its potential for retrieving the vertical profile of the scattering coefficient in clouds and for measuring snow thickness. Initial observations suggest considerable promise but also reveal some limitations, for example that the maximum retrievable snow thickness drops from about 0.5 m in pristine areas to about 0.15 m in polluted regions. (authors)
Riegels, Niels; Kromann, Mikkel; Karup Pedersen, Jesper; Lindgaard-Jørgensen, Palle; Sokolov, Vadim; Sorokin, Anatoly
2013-04-01
The water resources of the Aral Sea basin are under increasing pressure, particularly from the conflict over whether hydropower or irrigation water use should take priority. The purpose of the BEAM model is to explore the impact of changes to water allocation and investments in water management infrastructure on the overall welfare of the Aral Sea basin. The BEAM model estimates welfare changes associated with changes to how water is allocated between the five countries in the basin (Kazakhstan, Kyrgyz Republic, Tajikistan, Turkmenistan and Uzbekistan; water use in Afghanistan is assumed to be fixed). Water is allocated according to economic optimization criteria; in other words, the BEAM model allocates water across time and space so that the economic welfare associated with water use is maximized. The model is programmed in GAMS. The model addresses the Aral Sea Basin as a whole - that is, the rivers Syr Darya, Amu Darya, Kashkadarya, and Zarafshan, as well as the Aral Sea. The model representation includes water resources, including 14 river sections, 6 terminal lakes, 28 reservoirs and 19 catchment runoff nodes, as well as land resources (i.e., irrigated croplands). The model covers 5 sectors: agriculture (crops: wheat, cotton, alfalfa, rice, fruit, vegetables and others), hydropower, nature, households and industry. The focus of the model is on welfare impacts associated with changes to water use in the agriculture and hydropower sectors. The model aims at addressing the following issues of relevance for economic management of water resources: • Physical efficiency (estimating how investments in irrigation efficiency affect economic welfare). • Economic efficiency (estimating how changes in how water is allocated affect welfare). • Equity (who will gain from changes in allocation of water from one sector to another and who will lose?). Stakeholders in the region have been involved in the development of the model, and about 10 national experts, including
DEFF Research Database (Denmark)
Hansen, Lars Kai; Rasmussen, Carl Edward; Svarer, C.
1994-01-01
Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work the authors provide a tool based on asymptotic sampling theory, for iterative estimation of weight decay parameters. The basic idea is to do a gradient desce...
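The weight-decay idea the abstract builds on can be sketched in a few lines: gradient descent on squared error plus an L2 penalty shrinks the weights toward zero. The toy data, learning rate and decay strength below are our own illustrative choices, not the authors' asymptotic estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = np.array([1.0, -2.0, 0.0, 0.5, 0.0])
y = X @ true_w + 0.1 * rng.normal(size=100)

def fit(alpha, lr=0.01, steps=2000):
    """Gradient descent on mean squared error with weight decay alpha."""
    w = np.zeros(5)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y) + alpha * w  # data term + decay term
        w -= lr * grad
    return w

w_plain = fit(alpha=0.0)
w_decay = fit(alpha=5.0)
# Weight decay shrinks the solution norm relative to the unregularized fit.
print(np.linalg.norm(w_decay) < np.linalg.norm(w_plain))  # True
```

The decay term `alpha * w` is exactly the gradient of `alpha/2 * ||w||^2`, so this is equivalent to ridge regression solved iteratively.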
Selection of regularization parameter for l1-regularized damage detection
Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing
2018-06-01
The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
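The discrepancy-principle strategy from the abstract can be sketched numerically: scan the regularization parameter and prefer values for which the residual variance matches the known noise variance. The ISTA solver, toy sensitivity matrix and damage pattern below are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_elem = 60, 20
A = rng.normal(size=(n_meas, n_elem))          # sensitivity matrix (assumed)
x_true = np.zeros(n_elem)
x_true[[3, 11]] = [0.8, -0.5]                  # sparse "damage" pattern
sigma = 0.05                                   # known measurement-noise std
b = A @ x_true + sigma * rng.normal(size=n_meas)

def ista(lam, steps=3000):
    """min_x 0.5*||Ax-b||^2 + lam*||x||_1 via iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(n_elem)
    for _ in range(steps):
        g = x - A.T @ (A @ x - b) / L          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

# Discrepancy principle: pick the lambda whose residual variance is closest
# to the known noise variance sigma^2.
lams = np.logspace(-3, 1, 30)
sols = [ista(lam) for lam in lams]
x_hat = sols[int(np.argmin([abs(np.var(b - A @ x) - sigma**2) for x in sols]))]
print(sorted(np.argsort(np.abs(x_hat))[-2:].tolist()))   # damaged elements
```

In practice a whole interval of lambdas passes the discrepancy test, which matches the paper's observation that a range, rather than a single value, can be determined.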
Sinninghe Damsté, J.S.; Grice, K.; Schouten, S.; Nissenbaum, A.; Charrach, J.
1998-01-01
A series of Miocene/Pliocene halite deposits (with extremely low organic carbon contents) from the Sdom Formation (Dead Sea Basin, Israel) have been studied. Distributions and contents of biomarkers have been determined using GC MS and irm-GCMS analyses, respectively. The hydrocarbon fractions
Electrochemical chloride extraction of a beam polluted by chlorides after 40 years in the sea
BOUTEILLER, Véronique; LAPLAUD, André; MALOULA, Aurélie; MORELLE, René Stéphane; DUCHESNE, Béatrice; MORIN, Mathieu
2006-01-01
A beam element, naturally polluted by chlorides after 40 years of marine tidal exposure, has been treated by electrochemical chloride extraction. The chloride profiles, before and after treatment, show that free chlorides are extracted with an efficiency of 70% close to the steel, 50% in the intermediate cover and only 5% at the concrete surface. From the electrochemical characterizations (before, after, and 1, 2 and 17 months after treatment), the steel potential values can, somehow, indicat...
MacDonald, Ken C.; Fox, Paul J.; Miller, Steve; Carbotte, Suzanne; Edwards, Margo H.; Eisen, Mark; Fornari, Daniel J.; Perram, Laura; Pockalny, Rob; Scheirer, Dan; Tighe, Stacey; Weiland, Charles; Wilson, Doug
1992-12-01
SeaMARC II and Sea Beam bathymetric data are combined to create a chart of the East Pacific Rise (EPR) from 8°N to 18°N reaching at least 1 Ma onto the rise flanks in most places. Based on these data as well as SeaMARC II side scan sonar mosaics we offer the following observations and conclusions. The EPR is segmented by ridge axis discontinuities such that the average segment lengths in the area are 360 km for first-order segments, 140 km for second-order segments, 52 km for third-order segments, and 13 km for fourth-order segments. All three first-order discontinuities are transform faults. Where the rise axis is a bathymetric high, second-order discontinuities are overlapping spreading centers (OSCs), usually with a distinctive 3:1 overlap to offset ratio. The off-axis discordant zones created by the OSCs are V-shaped in plan view indicating along-axis migration at rates of 40-100 mm yr-1. The discordant zones consist of discrete abandoned ridge tips and overlap basins within a broad wake of anomalously deep bathymetry and high crustal magnetization. The discordant zones indicate that OSCs have commenced at different times and have migrated in different directions. This rules out any linkage between OSCs and a hot spot reference frame. The spacing of abandoned ridges indicates a recurrence interval for ridge abandonment of 20,000-200,000 yrs for OSCs with an average interval of approximately 100,000 yrs. Where the rise axis is a bathymetric low, the only second-order discontinuity mapped is a right-stepping jog in the axial rift valley. The discordant zone consists of a V-shaped wake of elongated deeps and interlocking ridges, similar to the wakes of second-order discontinuities on slow-spreading ridges. At the second-order segment level, long segments tend to lengthen at the expense of neighboring shorter segments. This can be understood if segments can be approximated by cracks, because the propagation force at a crack tip is directly proportional to crack
UNFOLDED REGULAR AND SEMI-REGULAR POLYHEDRA
Directory of Open Access Journals (Sweden)
IONIŢĂ Elena
2015-06-01
This paper presents the unfolding of regular and semi-regular polyhedra. Regular polyhedra are convex polyhedra whose faces are regular and equal polygons with the same number of sides, and whose polyhedral angles are also regular and equal. Semi-regular polyhedra are convex polyhedra with regular polygonal faces of several types and equal solid angles of the same type. A net of a polyhedron is a collection of edges in the plane which are the unfolded edges of the solid. Platonic and Archimedean polyhedra are modeled and unfolded using the 3ds Max program. This paper is intended as an example of descriptive geometry applications.
Kuznetsov, A. N.; Fedorov, Yu A.; Yaroslavtsev, V. M.
2018-01-01
The study of the vertical distribution of pollutants in seabed sediments is of high interest, as sediments preserve a record of past pollution levels. In the present paper, the results of a layer-by-layer study of Cs-137, Am-241 and Pb-210 specific activities, as well as concentrations of petroleum components, lead and mercury, in 48 sediment cores of the Sea of Azov, the Don River and the Kuban River are examined. In most sediment cores, two peaks of Cs-137 and Am-241 are detected. The upper one was formed by the Chernobyl accident in 1986, and the other is related to the global nuclear fallout of the 1960s. The specific activity of naturally occurring atmospheric Pb-210 decreases exponentially with depth in the sediment core. However, it is influenced by fluvial run-off, coastal erosion, and Radium-226 and Radon-222 decay. The data on the radionuclide distribution in the seabed sediments are used to date them. According to the results of dating, most of the petroleum components, lead and mercury are concentrated in the upper sediment layer formed in the last 50 to 70 years, i.e. in the period of the greatest anthropogenic pressure.
Coordinate-invariant regularization
International Nuclear Information System (INIS)
Halpern, M.B.
1987-01-01
A general phase-space framework for coordinate-invariant regularization is given. The development is geometric, with all regularization contained in regularized DeWitt Superstructures on field deformations. Parallel development of invariant coordinate-space regularization is obtained by regularized functional integration of the momenta. As representative examples of the general formulation, the regularized general non-linear sigma model and regularized quantum gravity are discussed. copyright 1987 Academic Press, Inc
X-ray absorption microtomography (microCT) and small beam diffraction mapping of sea urchin teeth.
Stock, S R; Barss, J; Dahl, T; Veis, A; Almer, J D
2002-07-01
Two noninvasive X-ray techniques, laboratory X-ray absorption microtomography (microCT) and X-ray diffraction mapping, were used to study teeth of the sea urchin Lytechinus variegatus. MicroCT revealed low attenuation regions near the tooth's stone part and along the carinar process-central prism boundary; this latter observation appears to be novel. The expected variation of Mg fraction x in the mineral phase (calcite, Ca(1-x)Mg(x)CO(3)) cannot account for all of the linear attenuation coefficient decrease in the two zones: this suggested that soft tissue is localized there. Transmission diffraction mapping (synchrotron X-radiation, 80.8 keV, 0.1 x 0.1 mm(2) beam area, 0.1 mm translation grid, image plate area detector) simultaneously probed variations in 3-D and showed that the crystal elements of the "T"-shaped tooth were very highly aligned. Diffraction patterns from the keel (adaxial web) and from the abaxial flange (containing primary plates and the stone part) differed markedly. The flange contained two populations of identically oriented crystal elements with lattice parameters corresponding to x=0.13 and x=0.32. The keel produced one set of diffraction spots corresponding to the lower x. The compositions were more or less equivalent to those determined by others for camarodont teeth, and the high Mg phase is expected to be disks of secondary mineral epitaxially related to the underlying primary mineral element. Lattice parameter gradients were not noted in the keel or flange. Taken together, the microCT and diffraction results indicated that there was a band of relatively high protein content, of up to approximately 0.25 volume fraction, in the central part of the flange, paralleling its adaxial and abaxial faces. X-ray microCT and microdiffraction data used in conjunction with protein distribution data will be crucial for understanding the properties of various biocomposites and their mechanical functions.
Rijnsdorp, A.D.; Piet, G.J.; Poos, J.J.
2001-01-01
The spawning stock of North Sea cod is at a historic low level and immediate management measures are needed to improve this situation. As a first step, the European Commission in 2001 closed a large area in the North Sea between February 15 and April 30 to all cod related fishing fleets in order to
Moore, R. K.; Fung, A. K.; Dome, G. J.; Birrer, I. J.
1978-01-01
The wind direction properties of radar backscatter from the sea were empirically modelled using a cosine Fourier series through the 4th harmonic in wind direction (referenced to upwind). A comparison with 1975 JONSWAP (Joint North Sea Wave Project) scatterometer data, at incidence angles of 40° and 65°, indicates that the contributions of the third and fourth harmonics are negligible. Another important result is that the Fourier coefficients through the second harmonic are related to wind speed by a power-law expression. A technique is also proposed to estimate the wind speed and direction over the ocean from two orthogonal scattering measurements. A comparison between two different types of sea scatter theories, one type represented by the work of Wright and the other by that of Chan and Fung, was made with recent scatterometer measurements. It demonstrates that a complete scattering model must include some provisions for the anisotropic characteristics of the sea scatter, and use a sea spectrum which depends upon wind speed.
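The empirical model described above, a cosine Fourier series in wind direction truncated after the second harmonic, amounts to a small linear least-squares fit. The coefficient values and noise level below are invented for illustration:

```python
import numpy as np

phi = np.deg2rad(np.arange(0, 360, 10))          # wind directions, 10° steps
A0, A1, A2 = -20.0, 2.0, 3.0                     # "true" coefficients (illustrative)
sigma0 = A0 + A1 * np.cos(phi) + A2 * np.cos(2 * phi)
sigma0 += 0.1 * np.random.default_rng(2).normal(size=phi.size)  # measurement noise

# Least-squares fit of the truncated cosine series (harmonics 0, 1, 2).
G = np.column_stack([np.ones_like(phi), np.cos(phi), np.cos(2 * phi)])
coef, *_ = np.linalg.lstsq(G, sigma0, rcond=None)
print(np.round(coef, 1))   # close to the true coefficients
```

In the paper each fitted coefficient is additionally modelled as a power law in wind speed; here we only show the directional fit at one fixed wind speed.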
van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime
2016-01-01
This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN'[Brouwer, A.E., Cohen, A.M., Neumaier,
Nijholt, Antinus
1980-01-01
Culik II and Cogen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this paper we consider an analogous extension of the LL(k) grammars called the LL-regular grammars. The relation of this class of grammars to other classes of grammars will be shown. Any LL-regular
Regularities of intermediate adsorption complex relaxation
International Nuclear Information System (INIS)
Manukova, L.A.
1982-01-01
Experimental data characterizing the regularities of intermediate adsorption complex relaxation in the polycrystalline Mo-N2 system at 77 K are given. The molecular beam method has been used in the investigation. Analytical expressions are obtained for the regularities of change, during relaxation, of the full and specific rates of transition from the intermediate state into the 'non-reversible' state, of desorption into the gas phase, and of accumulation of particles in the intermediate state.
Regular Expression Pocket Reference
Stubblebine, Tony
2007-01-01
This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp
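A small taste of the syntax such references cover, using Python's re module (one of the APIs the book lists): named groups, quantifiers and alternation. The log format here is invented for illustration:

```python
import re

log_line = "2007-01-15 ERROR disk full"
pattern = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2})\s+(?P<level>ERROR|WARN|INFO)\s+(?P<msg>.+)"
)
m = pattern.match(log_line)
print(m.group("date"))    # 2007-01-15
print(m.group("level"))   # ERROR
print(m.group("msg"))     # disk full
```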
Regularization by External Variables
DEFF Research Database (Denmark)
Bossolini, Elena; Edwards, R.; Glendinning, P. A.
2016-01-01
Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well-known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization...
Goyvaerts, Jan
2009-01-01
This cookbook provides more than 100 recipes to help you crunch data and manipulate text with regular expressions. Every programmer can find uses for regular expressions, but their power doesn't come worry-free. Even seasoned users often suffer from poor performance, false positives, false negatives, or perplexing bugs. Regular Expressions Cookbook offers step-by-step instructions for some of the most common tasks involving this tool, with recipes for C#, Java, JavaScript, Perl, PHP, Python, Ruby, and VB.NET. With this book, you will: Understand the basics of regular expressions through a
Regularities of Multifractal Measures
Indian Academy of Sciences (India)
First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in R^d. This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately, and recombine them without affecting density properties. Next, we ...
Stochastic analytic regularization
International Nuclear Information System (INIS)
Alfaro, J.
1984-07-01
Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)
Sparse structure regularized ranking
Wang, Jim Jing-Yan; Sun, Yijun; Gao, Xin
2014-01-01
Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse
Regular expression containment
DEFF Research Database (Denmark)
Henglein, Fritz; Nielsen, Lasse
2011-01-01
We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene-star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: containment proofs do not only certify a language-theoretic containment, but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression. We...
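The paper's axiomatization is proof-theoretic; as a purely illustrative contrast, containment L(r1) ⊆ L(r2) over a finite alphabet can be tested empirically on all strings up to a bounded length. This is evidence for containment, not a proof, and the helper below is our own sketch:

```python
import re
from itertools import product

def contained_up_to(r1, r2, alphabet="ab", max_len=6):
    """Check L(r1) ⊆ L(r2) on every string over `alphabet` up to `max_len`."""
    p1, p2 = re.compile(r1), re.compile(r2)
    for n in range(max_len + 1):
        for chars in product(alphabet, repeat=n):
            s = "".join(chars)
            if p1.fullmatch(s) and not p2.fullmatch(s):
                return False   # found a counterexample string
    return True

print(contained_up_to(r"a*", r"(a|b)*"))    # True
print(contained_up_to(r"(ab)*", r"a*b*"))   # False: "abab" is a witness
```

A real decision procedure would convert both expressions to automata and check language inclusion; the brute-force test is only meant to make the containment relation concrete.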
Supersymmetric dimensional regularization
International Nuclear Information System (INIS)
Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.
1980-01-01
There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) First do all algebra exactly as in D = 4; (2) Then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent. Such extra rules needed for superconformal anomalies are discussed. Problems associated with renormalizability and higher order loops are also discussed
Regularized maximum correntropy machine
Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin
2015-01-01
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
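The core idea, maximizing a Gaussian-kernel correntropy between predictions and labels so that mislabeled samples are automatically downweighted, plus an L2 penalty on the parameters, can be sketched for a linear predictor. The toy data, kernel width and step size are our assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
X = np.vstack([rng.normal(( 1.0,  1.0), 0.5, size=(n, 2)),
               rng.normal((-1.0, -1.0), 0.5, size=(n, 2))])
y_clean = np.concatenate([np.ones(n), -np.ones(n)])
y = y_clean.copy()
y[rng.choice(2 * n, size=20, replace=False)] *= -1   # 10% label noise

sigma2, lam, lr = 1.0, 0.01, 0.2
w = np.zeros(2)
for _ in range(1000):
    e = y - X @ w
    k = np.exp(-e**2 / (2 * sigma2))        # correntropy kernel: outliers get k ~ 0
    grad = (k * e / sigma2) @ X / len(y) - 2 * lam * w
    w += lr * grad                          # ascend the regularized correntropy

acc = float(np.mean(np.sign(X @ w) == y_clean))
print(acc)   # high accuracy against the clean labels despite noisy training labels
```

Because each sample's gradient contribution is scaled by its kernel value `k`, badly mislabeled points contribute almost nothing, which is the robustness property the abstract describes.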
International Nuclear Information System (INIS)
Bhardwaj, Y.K.; Virendra Kumar; Sabharwal, S.
2001-01-01
Graft copolymerisation of acrylonitrile (AN) onto porous polypropylene (PP) by the post-irradiation technique using electron beam (EB) irradiation was studied with regard to various parameters of importance such as grafting time, dose rate and horizontal stacking. Grafting of AN causes hardening as well as yellow colouration of the white flexible PP substrate sheets. Grafting reaches a saturation limit of ∼130% after 2-3 hours, and no significant increase in grafting extent is seen after this period. Higher grafting levels are achieved at low dose rates. Keeping the ratio of the volume of grafting solution to the weight of PP substrate constant, simultaneous grafting of 10-12 PP sheets can be carried out by horizontal stacking without much variation in the grafting limits between the stacked sheets. (author)
International Nuclear Information System (INIS)
Stock, S.R.; Barss, J.; Dahl, T.; Veis, A.; Almer, J.D.; De Carlo, F.
2003-01-01
In sea urchin teeth, the keel plays an important structural role, and this paper reports results of microstructural characterization of the keel of Lytechinus variegatus using two noninvasive synchrotron x-ray techniques: x-ray absorption microtomography (microCT) and x-ray diffraction mapping. MicroCT with 14 keV x-rays mapped the spatial distribution of mineral at the 1.3 microm level in a millimeter-sized fragment of a mature portion of the keel. Two rows of low absorption channels (i.e., primary channels) slightly less than 10 microm in diameter were found running linearly from the flange to the base of the keel and parallel to its sides. The primary channels paralleled the oral edge of the keel, and the microCT slices revealed a planar secondary channel leading from each primary channel to the side of the keel. The primary and secondary channels were more or less coplanar and may correspond to the soft tissue between plates of the carinar process. Transmission x-ray diffraction with 80.8 keV x-rays and a 0.1 mm beam mapped the distribution of calcite crystal orientations and the composition Ca(1-x)Mg(x)CO(3) of the calcite. Unlike the variable Mg concentration and highly curved prisms found in the keel of Paracentrotus lividus, a constant Mg content (x = 0.13) and relatively little prism curvature was found in the keel of Lytechinus variegatus.
International Nuclear Information System (INIS)
Herr, W; Pieloni, T
2014-01-01
One of the most severe limitations in high-intensity particle colliders is the beam-beam interaction, i.e. the perturbation of the beams as they cross the opposing beams. This introduction to beam-beam effects concentrates on a description of the phenomena that are present in modern colliding beam facilities
Manifold Regularized Reinforcement Learning.
Li, Hongliang; Liu, Derong; Wang, Ding
2018-04-01
This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.
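The geometric idea behind manifold regularization, penalizing functions that vary quickly between neighboring samples on a graph over the state space, can be shown in isolation: smooth noisy values by solving (I + λL)f = y, where L is the graph Laplacian. This illustrates the regularizer only, not the paper's full reinforcement-learning scheme; the 1-D state space and parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
s = np.sort(rng.uniform(0, 2 * np.pi, 80))        # irregularly sampled "states"
y = np.sin(s) + 0.3 * rng.normal(size=s.size)     # noisy value observations

# k-nearest-neighbor adjacency (k=2) over the sampled states.
n = s.size
W = np.zeros((n, n))
for i in range(n):
    d = np.abs(s - s[i])
    d[i] = np.inf
    for j in np.argsort(d)[:2]:
        W[i, j] = W[j, i] = 1.0
L = np.diag(W.sum(axis=1)) - W                    # graph Laplacian

f = np.linalg.solve(np.eye(n) + 5.0 * L, y)       # manifold-regularized estimate
err_raw = np.mean((y - np.sin(s)) ** 2)
err_reg = np.mean((f - np.sin(s)) ** 2)
print(err_reg < err_raw)                          # smoothing reduces the error
```

The solve penalizes the quadratic form f^T L f = Σ w_ij (f_i - f_j)^2, so the estimate is forced to vary smoothly along the graph, mirroring how the learned features adapt to the geometry of the state space.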
Diverse Regular Employees and Non-regular Employment (Japanese)
MORISHIMA Motohiro
2011-01-01
Currently there are high expectations for the introduction of policies related to diverse regular employees. These policies are a response to the problem of disparities between regular and non-regular employees (part-time, temporary, contract and other non-regular employees) and will make it more likely that workers can balance work and their private lives while companies benefit from the advantages of regular employment. In this paper, I look at two issues that underlie this discussion. The ...
Sparse structure regularized ranking
Wang, Jim Jing-Yan
2014-04-17
Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and the combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.
'Regular' and 'emergency' repair
International Nuclear Information System (INIS)
Luchnik, N.V.
1975-01-01
Experiments on the combined action of radiation and a DNA inhibitor using Crepis roots and on split-dose irradiation of human lymphocytes lead to the conclusion that there are two types of repair. The 'regular' repair takes place twice in each mitotic cycle and ensures the maintenance of genetic stability. The 'emergency' repair is induced at all stages of the mitotic cycle by high levels of injury. (author)
Regularization of divergent integrals
Felder, Giovanni; Kazhdan, David
2016-01-01
We study the Hadamard finite part of divergent integrals of differential forms with singularities on submanifolds. We give formulae for the dependence of the finite part on the choice of regularization and express them in terms of a suitable local residue map. The cases where the submanifold is a complex hypersurface in a complex manifold and where it is a boundary component of a manifold with boundary, arising in string perturbation theory, are treated in more detail.
Regularizing portfolio optimization
International Nuclear Information System (INIS)
Still, Susanne; Kondor, Imre
2010-01-01
The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
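The L2-regularized minimum-variance weights described above have a closed form, w ∝ (C + λI)^{-1} 1 normalized to sum to one, and the stabilizing "diversification pressure" is easy to demonstrate on a deliberately noisy covariance estimate. The data sizes and λ below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n_assets, n_obs = 30, 40
R = rng.normal(size=(n_obs, n_assets))          # i.i.d. returns; true covariance = I
C = np.cov(R, rowvar=False)                     # noisy sample covariance estimate
ones = np.ones(n_assets)

def min_var(lam):
    """Minimum-variance weights with an L2 penalty of strength lam."""
    w = np.linalg.solve(C + lam * np.eye(n_assets), ones)
    return w / w.sum()                          # budget constraint: weights sum to 1

w_plain = min_var(0.0)
w_reg = min_var(1.0)
# Regularization pulls the weights toward the diversified 1/N portfolio,
# which is the true optimum here since the true covariance is the identity.
uniform = ones / n_assets
print(np.linalg.norm(w_reg - uniform) < np.linalg.norm(w_plain - uniform))  # True
```

With only 40 observations for 30 assets, the unregularized weights overreact to estimation noise in C; the ridge term damps exactly the poorly estimated directions.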
Regular Single Valued Neutrosophic Hypergraphs
Directory of Open Access Journals (Sweden)
Muhammad Aslam Malik
2016-12-01
In this paper, we define the regular and totally regular single valued neutrosophic hypergraphs, and discuss the order and size along with properties of regular and totally regular single valued neutrosophic hypergraphs. We also extend work on completeness of single valued neutrosophic hypergraphs.
The geometry of continuum regularization
International Nuclear Information System (INIS)
Halpern, M.B.
1987-03-01
This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations
Currow, David; Watts, Gareth John; Johnson, Miriam; McDonald, Christine F; Miners, John O; Somogyi, Andrew A; Denehy, Linda; McCaffrey, Nicola; Eckert, Danny J; McCloud, Philip; Louw, Sandra; Lam, Lawrence; Greene, Aine; Fazekas, Belinda; Clark, Katherine C; Fong, Kwun; Agar, Meera R; Joshi, Rohit; Kilbreath, Sharon; Ferreira, Diana; Ekström, Magnus
2017-07-17
Chronic breathlessness is highly prevalent and distressing to patients and families. No medication is registered for its symptomatic reduction. The strongest evidence is for regular, low-dose, extended-release (ER) oral morphine. A recent large phase III study suggests the subgroup most likely to benefit have chronic obstructive pulmonary disease (COPD) and modified Medical Research Council breathlessness scores of 3 or 4. This protocol is for an adequately powered, parallel-arm, placebo-controlled, multisite, factorial, block-randomised study evaluating regular ER morphine for chronic breathlessness in people with COPD. The primary question is what effect regular ER morphine has on worst breathlessness, measured daily on a 0-10 numerical rating scale. Uniquely, the coprimary outcome will use a Fitbit to measure habitual physical activity. Secondary questions include safety and whether upward titration after initial benefit delivers greater net symptom reduction. Substudies include longitudinal driving simulation, sleep, caregiver, health economic and pharmacogenetic studies. Seventeen centres will recruit 171 participants from respiratory and palliative care. The study has five phases, including three randomisation phases to increasing doses of ER morphine. All participants will receive placebo or active laxatives as appropriate. Appropriate statistical analysis of primary and secondary outcomes will be used. Ethics approval has been obtained. Results of the study will be submitted for publication in peer-reviewed journals, findings presented at relevant conferences and potentially used to inform registration of ER morphine for chronic breathlessness. NCT02720822; Pre-results.
Energy Technology Data Exchange (ETDEWEB)
Wolff, U. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Gewaesserphysik]
2000-07-01
The derivation and implementation of an algorithm for edge detection in images and for the detection of the Lipschitz regularity in edge points are described. The method is based on the use of the wavelet transform for edge detection at different resolutions. The Lipschitz regularity is a measure that characterizes the edges. The description of the derivation is first performed in one dimension. The approach of Mallat is formulated consistently and proved. Subsequently, the two-dimensional case is addressed, for which the derivation, as well as the description of the algorithm, is analogous. The algorithm is applied to detect edges in nautical radar images using images collected at the island of Sylt. The edges discernible in the images and the Lipschitz values provide information about the position and nature of spatial variations in the depth of the seafloor. By comparing images from different periods of measurement, temporal changes in the bottom structures can be localized at different resolutions and interpreted. The method is suited to the monitoring of coastal areas. It is an inexpensive way to observe long-term changes in the seafloor character. Thus, the results of this technique may be used by the authorities responsible for coastal protection to decide whether measures should be taken or not. (orig.)
Annotation of Regular Polysemy
DEFF Research Database (Denmark)
Martinez Alonso, Hector
Regular polysemy has received a lot of attention from the theory of lexical semantics and from computational linguistics. However, there is no consensus on how to represent the sense of underspecified examples at the token level, namely when annotating or disambiguating senses of metonymic words...... and metonymic. We have conducted an analysis in English, Danish and Spanish. Later on, we have tried to replicate the human judgments by means of unsupervised and semi-supervised sense prediction. The automatic sense-prediction systems have been unable to find empiric evidence for the underspecified sense, even...
Regularity of Minimal Surfaces
Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht
2010-01-01
"Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t
Regularities of radiation heredity
International Nuclear Information System (INIS)
Skakov, M.K.; Melikhov, V.D.
2001-01-01
Regularities of radiation heredity in metals and alloys are analyzed, and it is concluded that irradiation produces thermodynamically irreversible changes in the structure of materials. Possible mechanisms are proposed by which radiation effects are transmitted through high-temperature transformations in the materials. The phenomenon of radiation heredity may be put to practical use to control the structure of liquid metal and, correspondingly, of the ingot, via preliminary radiation treatment of the charge. Concentration microheterogeneities in the defect structure of the material, induced by preliminary irradiation, constitute the genetic factor of radiation heredity.
Effective field theory dimensional regularization
International Nuclear Information System (INIS)
Lehmann, Dirk; Prezeau, Gary
2002-01-01
A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed
2010-12-07
... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. ACTION: Regular meeting. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held...
Ensemble manifold regularization.
Geng, Bo; Tao, Dacheng; Xu, Chao; Yang, Linjun; Hua, Xian-Sheng
2012-06-01
We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function to obtain optimal hyperparameters. Usually, cross validation is applied, but it does not necessarily scale up. Other problems derive from the suboptimality incurred by discrete grid search and the overfitting. Therefore, we develop an ensemble manifold regularization (EMR) framework to approximate the intrinsic manifold by combining several initial guesses. Algorithmically, we designed EMR carefully so it 1) learns both the composite manifold and the semi-supervised learner jointly, 2) is fully automatic for learning the intrinsic manifold hyperparameters implicitly, 3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and 4) is scalable for a large number of candidate manifold hyperparameters, from both time and space perspectives. Furthermore, we prove the convergence property of EMR to the deterministic matrix at rate root-n. Extensive experiments over both synthetic and real data sets demonstrate the effectiveness of the proposed framework.
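The core EMR idea, approximating the unknown manifold by a convex combination of candidate graph Laplacians while learning the classifier, can be sketched in a few lines of NumPy. This is a minimal illustration under our own assumptions, not the authors' implementation: the candidate graphs, the softmax-style weight update, and all parameter values below are choices made for the sketch.

```python
import numpy as np

def knn_laplacian(X, k):
    """Unnormalized graph Laplacian of a symmetrized k-NN graph."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros_like(d2)
    for i in range(len(X)):
        nbrs = np.argsort(d2[i])[1:k + 1]           # skip self
        W[i, nbrs] = np.exp(-d2[i, nbrs])           # Gaussian edge weights
    W = np.maximum(W, W.T)                           # symmetrize
    return np.diag(W.sum(1)) - W

# Two tight clusters; one labeled point per cluster.
X = np.array([[0, 0], [0.1, 0], [0, 0.1],
              [5, 5], [5.1, 5], [5, 5.1]], float)
y = np.array([1.0, 0, 0, -1.0, 0, 0])                # 0 = unlabeled
J = np.diag((y != 0).astype(float))                  # labeled-point indicator

laplacians = [knn_laplacian(X, k) for k in (2, 4)]   # candidate manifolds
mu = np.full(len(laplacians), 1.0 / len(laplacians)) # initial guesses weighted equally

for _ in range(5):                                   # alternate f- and mu-updates
    L = sum(m * Lj for m, Lj in zip(mu, laplacians)) # composite Laplacian
    f = np.linalg.solve(J + 0.1 * L, y)              # manifold-regularized fit
    s = np.array([f @ Lj @ f for Lj in laplacians])  # smoothness on each graph
    mu = np.exp(-s)                                  # favor smoother graphs
    mu /= mu.sum()                                   # (simplified weight update)

predicted = np.sign(f)   # labels propagated to the unlabeled points
```

The weight update here is a heuristic stand-in for EMR's constrained optimization of the combination coefficients; the structure (joint learning of composite manifold and semi-supervised fit) is the point being illustrated.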
International Nuclear Information System (INIS)
1988-01-01
The beam diagnostic components for both the transfer and the high-energy beamlines perform well except for some of the scanners whose noise pick-up has become a problem, especially at low beam intensities. This noise pick-up is primarily due to deterioration of the bearings in the scanner. At some locations in the high-energy beamlines, scanners were replaced by harps as the scanners proved to be practically useless for the low-intensity beams required in the experimental areas. The slits in the low-energy beamline, which are not water-cooled, have to be repaired at regular intervals because of vacuum leaks. Overheating causes the ceramic feedthroughs to deteriorate resulting in the vacuum leaks. Water-cooled slits have been ordered to replace the existing slits which will later be used in the beamlines associated with the second injector cyclotron SPC2. The current-measurement system will be slightly modified and should then be much more reliable. 3 figs
Adaptive Regularization of Neural Classifiers
DEFF Research Database (Denmark)
Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai
1997-01-01
We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore, we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method.
2010-09-02
... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm...
Online co-regularized algorithms
Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.
2012-01-01
We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks.
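The sequential co-regularization idea can be shown in a minimal sketch under assumptions of our own (linear predictors, squared loss, a synthetic two-view dataset): each view keeps its own weight vector, labeled points drive a supervised gradient step in both views, and unlabeled points drive a co-regularization step that penalizes disagreement between the views' predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-view data: both views see the signal s plus private noise.
s = rng.uniform(-1, 1, 200)
X1 = np.c_[s, rng.normal(0, 0.1, 200)]    # view 1 features
X2 = np.c_[s, rng.normal(0, 0.1, 200)]    # view 2 features
y = 2.0 * s                                # regression target
labeled = np.arange(10)                    # only 10 labeled points
unlabeled = np.arange(10, 200)

w1 = np.zeros(2)
w2 = np.zeros(2)
lr, mu = 0.05, 0.1                         # step size, co-regularization weight

for _ in range(300):
    for i in labeled:                      # supervised step in both views
        w1 -= lr * (w1 @ X1[i] - y[i]) * X1[i]
        w2 -= lr * (w2 @ X2[i] - y[i]) * X2[i]
    for i in unlabeled:                    # co-regularization step
        d = w1 @ X1[i] - w2 @ X2[i]        # disagreement on unlabeled point
        w1 -= lr * mu * d * X1[i]
        w2 += lr * mu * d * X2[i]

pred = 0.5 * (X1 @ w1 + X2 @ w2)           # averaged co-regularized predictor
mse = np.mean((pred - y) ** 2)
```

With only ten labels, the averaged predictor recovers the shared signal; the unlabeled disagreement steps keep the two views consistent.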
1991-01-15
[Fragmentary record: Graduate School of Oceanography, University of Rhode Island, Narragansett, R.I. 02882; A. Shor and C. Nishimura, Hawaii Institute of Geophysics, University of Hawaii. Surviving abstract fragments concern opening across the Clipperton transform with an absence of intra-transform spreading, opening across the Siqueiros transform with sustained intra-transform spreading, and future work combining this survey with three 1987 SeaMARC II surveys of the Clipperton transform near 9 deg N.]
Regularization destriping of remote sensing imagery
Basnayake, Ranil; Bollt, Erik; Tufillaro, Nicholas; Sun, Jie; Gierach, Michelle
2017-07-01
We illustrate the utility of variational destriping for ocean color images from both multispectral and hyperspectral sensors. In particular, we examine data from a filter spectrometer, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (NPP) satellite, and an airborne grating spectrometer, the Jet Propulsion Laboratory's (JPL) hyperspectral Portable Remote Imaging Spectrometer (PRISM) sensor. We solve the destriping problem using a variational regularization method, assigning weights spatially to preserve the other features of the image during the destriping process. The target functional penalizes the neighborhood of stripes (strictly, directionally uniform features) while promoting data fidelity, and the functional is minimized by solving the Euler-Lagrange equations with an explicit finite-difference scheme. We show the accuracy of our method on a benchmark data set that represents the sea surface temperature off the coast of Oregon, USA. Technical details, such as how to impose continuity across data gaps using inpainting, are also described.
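The variational recipe, penalizing gradients in the across-stripe direction while keeping data fidelity and minimizing by explicit finite differences, can be illustrated on a toy image with vertical stripes. This sketch omits the paper's spatial weighting and inpainting, and all parameters are assumptions of the sketch.

```python
import numpy as np

def destripe(f, lam=1.0, tau=0.2, iters=500):
    """Minimize 0.5*||u - f||^2 + 0.5*lam*||D_x u||^2 by explicit
    gradient descent, where D_x differences across columns (the
    across-stripe direction)."""
    u = f.copy()
    for _ in range(iters):
        d = np.diff(u, axis=1)             # horizontal differences
        g = np.zeros_like(u)               # g = D_x^T D_x u
        g[:, :-1] -= d
        g[:, 1:] += d
        u -= tau * ((u - f) + lam * g)     # explicit descent step
    return u

rng = np.random.default_rng(0)
H, W = 40, 60
base = np.tile(np.linspace(0.0, 1.0, H)[:, None], (1, W))  # smooth scene
stripes = rng.normal(0.0, 0.3, W)                           # column offsets
f = base + stripes[None, :]                                 # striped image

u = destripe(f)
err_in = np.linalg.norm(f - base)
err_out = np.linalg.norm(u - base)
```

Because the clean scene here varies only along rows, the across-column penalty attenuates the stripe offsets while leaving the scene essentially untouched; a scene with genuine horizontal structure is what motivates the paper's spatially varying weights.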
Continuum-regularized quantum gravity
International Nuclear Information System (INIS)
Chan Huesum; Halpern, M.B.
1987-01-01
The recent continuum regularization of d-dimensional Euclidean gravity is generalized to arbitrary power-law measure and studied in some detail as a representative example of coordinate-invariant regularization. The weak-coupling expansion of the theory illustrates a generic geometrization of regularized Schwinger-Dyson rules, generalizing previous rules in flat space and flat superspace. The rules are applied in a non-trivial explicit check of Einstein invariance at one loop: the cosmological counterterm is computed and its contribution is included in a verification that the graviton mass is zero. (orig.)
New regular black hole solutions
International Nuclear Information System (INIS)
Lemos, Jose P. S.; Zanchin, Vilson T.
2011-01-01
In the present work we consider general relativity coupled to Maxwell's electromagnetism and charged matter. Under the assumption of spherical symmetry, there is a particular class of solutions that correspond to regular charged black holes whose interior region is de Sitter, whose exterior region is Reissner-Nordstroem, and with a charged thin layer in between the two. The main physical and geometrical properties of such charged regular black holes are analyzed.
Regular variation on measure chains
Czech Academy of Sciences Publication Activity Database
Řehák, Pavel; Vitovec, J.
2010-01-01
Roč. 72, č. 1 (2010), s. 439-448 ISSN 0362-546X R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : regularly varying function * regularly varying sequence * measure chain * time scale * embedding theorem * representation theorem * second order dynamic equation * asymptotic properties Subject RIV: BA - General Mathematics Impact factor: 1.279, year: 2010 http://www.sciencedirect.com/science/article/pii/S0362546X09008475
On geodesics in low regularity
Sämann, Clemens; Steinbauer, Roland
2018-02-01
We consider geodesics in both Riemannian and Lorentzian manifolds with metrics of low regularity. We discuss existence of extremal curves for continuous metrics and present several old and new examples that highlight their subtle interrelation with solutions of the geodesic equations. Then we turn to the initial value problem for geodesics for locally Lipschitz continuous metrics and generalize recent results on existence, regularity and uniqueness of solutions in the sense of Filippov.
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
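The effect of conditioning-targeted regularization is easy to see in a simplified variant: eigen-decompose the sample covariance and raise the small eigenvalues so the condition number cannot exceed a chosen bound kappa. The fixed clipping level used below is an assumption of this sketch; the paper derives the optimal truncation level from the likelihood itself.

```python
import numpy as np

def cond_reg_cov(X, kappa):
    """Covariance estimate whose condition number is at most kappa.

    Simplified scheme: clip eigenvalues from below at lambda_max/kappa.
    (The paper instead chooses the truncation level by maximum likelihood.)
    """
    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    floor = vals.max() / kappa
    # Reassemble with clipped spectrum: sum of vals_j * v_j v_j^T
    return (vecs * np.clip(vals, floor, None)) @ vecs.T

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 20))        # "large p, small n": n=10 < p=20
S = np.cov(X, rowvar=False)           # sample covariance: singular here
Sigma = cond_reg_cov(X, kappa=10.0)
cond = np.linalg.cond(Sigma)
```

The raw sample covariance is rank-deficient (rank at most n-1 = 9 in a 20-dimensional space), while the regularized estimate is symmetric, positive definite, and well-conditioned by construction.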
International Nuclear Information System (INIS)
Teng, L.C.
1980-01-01
In colliding beam storage rings the beam collision regions are generally so short that the beam-beam interaction can be considered as a series of evenly spaced non-linear kicks superimposed on otherwise stable linear oscillations. Most of the numerical studies on computers were carried out in just this manner. But for some reason this model has not been extensively employed in analytical studies. This is perhaps because all analytical work has so far been done by mathematicians pursuing general transcendental features of non-linear mechanics for whom this specific model of the specific system of colliding beams is too parochial and too repugnantly physical. Be that as it may, this model is of direct interest to accelerator physicists and is amenable to (1) further simplification, (2) physical approximation, and (3) solution by analogy to known phenomena
Geometric continuum regularization of quantum field theory
International Nuclear Information System (INIS)
Halpern, M.B.
1989-01-01
An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs
Metric regularity and subdifferential calculus
International Nuclear Information System (INIS)
Ioffe, A D
2000-01-01
The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces
Manifold Regularized Correlation Object Tracking.
Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling
2018-05-01
In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions. Thus, the final classifier in our method is trained with positive, negative, and unlabeled base samples, which is a semisupervised learning framework. A block optimization strategy is further introduced to learn a manifold regularization-based correlation filter for efficient online tracking. Experiments on two public tracking data sets demonstrate the superior performance of our tracker compared with the state-of-the-art tracking approaches.
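The block-circulant structure mentioned above is what makes correlation-filter trackers fast: ridge regression over all cyclic shifts of a base sample diagonalizes in the Fourier domain. A minimal 1-D, single-sample sketch of that trick (synthetic data; the regularization weight and response shape are assumptions, and the manifold term is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 64, 1e-3

x = rng.normal(size=n)                       # base sample (all shifts implied)
# Desired response: a Gaussian peak at position n//2 = 32.
y = np.exp(-0.5 * ((np.arange(n) - n // 2) ** 2) / 4.0)

# Ridge regression over the circulant matrix of cyclic shifts of x,
# solved independently per frequency:
#   w_hat = conj(x_hat) * y_hat / (|x_hat|^2 + lam)
X = np.fft.fft(x)
Y = np.fft.fft(y)
W = np.conj(X) * Y / (np.abs(X) ** 2 + lam)

# Regression response of the learned filter over all shifts of x:
response = np.real(np.fft.ifft(X * W))
peak = int(np.argmax(response))
```

The learned filter reproduces the desired response almost exactly (attenuated per frequency by |x_hat|^2/(|x_hat|^2 + lam)), so the peak lands where the target was placed; this O(n log n) solve is the engine behind the tracker's online learning.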
Dimensional regularization in configuration space
International Nuclear Information System (INIS)
Bollini, C.G.; Giambiagi, J.J.
1995-09-01
Dimensional regularization is introduced in configuration space by Fourier transforming in D dimensions the perturbative momentum-space Green functions. For this transformation, Bochner's theorem is used; no extra parameters, such as those of Feynman or Bogoliubov-Shirkov, are needed for convolutions. The regularized causal functions in x-space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant functions of ν. Several examples are discussed. (author). 9 refs
Regular algebra and finite machines
Conway, John Horton
2012-01-01
World-famous mathematician John H. Conway based this classic text on a 1966 course he taught at Cambridge University. Geared toward graduate students of mathematics, it will also prove a valuable guide to researchers and professional mathematicians. His topics cover Moore's theory of experiments, Kleene's theory of regular events and expressions, Kleene algebras, the differential calculus of events, factors and the factor matrix, and the theory of operators. Additional subjects include event classes and operator classes, some regulator algebras, context-free languages, commutative regular alg
Matrix regularization of 4-manifolds
Trzetrzelewski, M.
2012-01-01
We consider products of two 2-manifolds such as S^2 x S^2, embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)xSU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 x N^2 matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S...
Regularization of Nonmonotone Variational Inequalities
International Nuclear Information System (INIS)
Konnov, Igor V.; Ali, M.S.S.; Mazurkevich, E.O.
2006-01-01
In this paper we extend the Tikhonov-Browder regularization scheme from monotone to rather a general class of nonmonotone multivalued variational inequalities. We show that their convergence conditions hold for some classes of perfectly and nonperfectly competitive economic equilibrium problems
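The Tikhonov scheme itself is short to state in code: solve a sequence of regularized problems VI(F + eps*I, K) with eps decreasing to zero, each approximately by projected fixed-point iterations. The sketch below uses a simple affine (in fact monotone) F and a box constraint purely for reproducibility; the paper's contribution is that convergence conditions for this scheme extend to classes of nonmonotone multivalued problems, which this toy example does not exercise.

```python
import numpy as np

def F(x):
    """An affine map; its VI solution over the box is where F(x) = 0."""
    A = np.array([[1.0, 2.0], [-0.5, 1.0]])
    b = np.array([-3.0, -1.0])
    return A @ x + b

lo, hi = 0.0, 2.0                         # feasible set K = [0, 2]^2
proj = lambda z: np.clip(z, lo, hi)       # projection onto K

x = np.zeros(2)
gamma = 0.2                               # projection step size
for eps in (1.0, 0.1, 0.01, 0.001):       # Tikhonov parameter eps -> 0
    for _ in range(2000):                 # solve VI(F + eps*I, K) approximately
        x = proj(x - gamma * (F(x) + eps * x))

# Natural residual of the ORIGINAL VI: zero exactly at a solution.
residual = np.linalg.norm(x - proj(x - F(x)))
```

Each inner loop solves the regularized (and hence better-behaved) problem; as eps shrinks, the iterates track x_eps toward a solution of the original variational inequality, here the interior point where F vanishes.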
Lattice regularized chiral perturbation theory
International Nuclear Information System (INIS)
Borasoy, Bugra; Lewis, Randy; Ouimet, Pierre-Philippe A.
2004-01-01
Chiral perturbation theory can be defined and regularized on a spacetime lattice. A few motivations are discussed here, and an explicit lattice Lagrangian is reviewed. A particular aspect of the connection between lattice chiral perturbation theory and lattice QCD is explored through a study of the Wess-Zumino-Witten term
2011-01-20
... Meeting SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held at the offices of the Farm... meeting of the Board will be open to the
Forcing absoluteness and regularity properties
Ikegami, D.
2010-01-01
For a large natural class of forcing notions, we prove general equivalence theorems between forcing absoluteness statements, regularity properties, and transcendence properties over L and the core model K. We use our results to answer open questions from set theory of the reals.
Globals of Completely Regular Monoids
Institute of Scientific and Technical Information of China (English)
Wu Qian-qian; Gan Ai-ping; Du Xian-kun
2015-01-01
An element of a semigroup S is called irreducible if it cannot be expressed as a product of two elements in S both distinct from itself. In this paper we show that the class C of all completely regular monoids with irreducible identity elements satisfies the strong isomorphism property and so it is globally determined.
Fluid queues and regular variation
Boxma, O.J.
1996-01-01
This paper considers a fluid queueing system, fed by N independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index ζ. We show that its fat tail gives rise to an even
Empirical laws, regularity and necessity
Koningsveld, H.
1973-01-01
In this book I have tried to develop an analysis of the concept of an empirical law, an analysis that differs in many ways from the alternative analyses found in contemporary literature dealing with the subject. I am referring especially to two well-known views, viz. the regularity view and the necessity view.
Interval matrices: Regularity generates singularity
Czech Academy of Sciences Publication Activity Database
Rohn, Jiří; Shary, S.P.
2018-01-01
Roč. 540, 1 March (2018), s. 149-159 ISSN 0024-3795 Institutional support: RVO:67985807 Keywords : interval matrix * regularity * singularity * P-matrix * absolute value equation * diagonally singularizable matrix Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016
Regularization in Matrix Relevance Learning
Schneider, Petra; Bunte, Kerstin; Stiekema, Han; Hammer, Barbara; Villmann, Thomas; Biehl, Michael
In this paper, we present a regularization technique to extend recently proposed matrix learning schemes in learning vector quantization (LVQ). These learning algorithms extend the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, metric learning can
Regular and conformal regular cores for static and rotating solutions
Energy Technology Data Exchange (ETDEWEB)
Azreg-Aïnou, Mustapha
2014-03-07
Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.
Energy functions for regularization algorithms
Delingette, H.; Hebert, M.; Ikeuchi, K.
1991-01-01
Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must verify certain properties, such as invariance under Euclidean transformations or invariance under parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of the curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.
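A concrete instance of such a smoothness energy is the discrete bending energy, the integral of squared curvature over a sampled closed curve. On a circle of radius R it should evaluate to 2*pi/R with constant curvature 1/R, which is exactly the kind of behavior the invariance requirements protect. A short numerical check (the finite-difference scheme and sample count are our own choices):

```python
import numpy as np

def bending_energy(x, y):
    """Discrete integral of squared curvature over a closed sampled curve,
    using periodic central differences on a uniform parameterization."""
    h = 2 * np.pi / len(x)                              # parameter spacing
    dx = (np.roll(x, -1) - np.roll(x, 1)) / (2 * h)     # first derivatives
    dy = (np.roll(y, -1) - np.roll(y, 1)) / (2 * h)
    ddx = (np.roll(x, -1) - 2 * x + np.roll(x, 1)) / h ** 2   # second
    ddy = (np.roll(y, -1) - 2 * y + np.roll(y, 1)) / h ** 2   # derivatives
    speed = np.hypot(dx, dy)
    kappa = np.abs(dx * ddy - dy * ddx) / speed ** 3    # curvature
    return np.sum(kappa ** 2 * speed * h), kappa        # energy, curvatures

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
R = 2.0
energy, kappa = bending_energy(R * np.cos(t), R * np.sin(t))
# For the circle of radius 2: kappa ~ 0.5 everywhere, energy ~ 2*pi/R = pi.
```

The division by speed**3 is what makes the measure invariant under reparameterization, echoing the invariance conditions discussed in the abstract.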
Physical model of dimensional regularization
Energy Technology Data Exchange (ETDEWEB)
Schonfeld, Jonathan F.
2016-12-15
We explicitly construct fractals of dimension 4-ε on which dimensional regularization approximates scalar-field-only quantum-field theory amplitudes. The construction does not require fractals to be Lorentz-invariant in any sense, and we argue that there probably is no Lorentz-invariant fractal of dimension greater than 2. We derive dimensional regularization's power-law screening first for fractals obtained by removing voids from 3-dimensional Euclidean space. The derivation applies techniques from elementary dielectric theory. Surprisingly, fractal geometry by itself does not guarantee the appropriate power-law behavior; boundary conditions at fractal voids also play an important role. We then extend the derivation to 4-dimensional Minkowski space. We comment on generalization to non-scalar fields, and speculate about implications for quantum gravity. (orig.)
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
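The quantity being maximized is the mutual information between two discrete variables, and the plug-in (histogram/entropy) estimate the abstract alludes to is only a few lines; the training loop that pushes this term uphill alongside the classification loss is omitted here, so this is a sketch of the measure, not of the optimization.

```python
import numpy as np

def mutual_information(a, b):
    """Plug-in estimate of I(A;B) in nats from paired discrete samples."""
    a_vals, a_idx = np.unique(a, return_inverse=True)
    b_vals, b_idx = np.unique(b, return_inverse=True)
    joint = np.zeros((len(a_vals), len(b_vals)))
    np.add.at(joint, (a_idx, b_idx), 1.0)        # joint counts
    joint /= joint.sum()                          # joint distribution p(a,b)
    pa = joint.sum(1, keepdims=True)              # marginal p(a)
    pb = joint.sum(0, keepdims=True)              # marginal p(b)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (pa @ pb)[nz])))

labels = np.array([0, 1] * 50)                    # balanced binary labels
perfect = labels.copy()                           # always-correct responses
constant = np.zeros(100, dtype=int)               # uninformative responses

mi_perfect = mutual_information(labels, perfect)    # = H(y) = log 2
mi_constant = mutual_information(labels, constant)  # = 0
```

A perfect classifier's responses carry all the label entropy (log 2 nats for balanced binary labels), while a constant classifier carries none, which is precisely the gap the regularizer rewards.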
Maximum mutual information regularized classification
Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin
2014-01-01
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descend method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
Regularized strings with extrinsic curvature
International Nuclear Information System (INIS)
Ambjoern, J.; Durhuus, B.
1987-07-01
We analyze models of discretized string theories, where the path integral over world sheet variables is regularized by summing over triangulated surfaces. The inclusion of curvature in the action is a necessity for the scaling of the string tension. We discuss the physical properties of models with extrinsic curvature terms in the action and show that the string tension vanishes at the critical point where the bare extrinsic curvature coupling tends to infinity. Similar results are derived for models with intrinsic curvature. (orig.)
Circuit complexity of regular languages
Czech Academy of Sciences Publication Activity Database
Koucký, Michal
2009-01-01
Roč. 45, č. 4 (2009), s. 865-879 ISSN 1432-4350 R&D Projects: GA ČR GP201/07/P276; GA MŠk(CZ) 1M0545 Institutional research plan: CEZ:AV0Z10190503 Keywords : regular languages * circuit complexity * upper and lower bounds Subject RIV: BA - General Mathematics Impact factor: 0.726, year: 2009
General inverse problems for regular variation
DEFF Research Database (Denmark)
Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan
2014-01-01
Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...
International Nuclear Information System (INIS)
Chao, A.W.
1992-01-01
There are two physical pictures that describe the beam-beam interaction in a storage ring collider: The weak-strong and the strong-strong pictures. Both pictures play a role in determining the beam-beam behavior. This review addresses only the strong-strong picture. The corresponding beam dynamical effects are referred to as the coherent beam-beam effects. Some basic knowledge of the weak-strong picture is assumed. To be specific, two beams of opposite charges are considered. (orig.)
Regularized Statistical Analysis of Anatomy
DEFF Research Database (Denmark)
Sjöstrand, Karl
2007-01-01
This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer’s disease and shape changes of the corpus...... and mind. Statistics represents a quintessential part of such investigations as they are preluded by a clinical hypothesis that must be verified based on observed data. The massive amounts of image data produced in each examination pose an important and interesting statistical challenge...... efficient algorithms which make the analysis of large data sets feasible, and gives examples of applications....
Regularization methods in Banach spaces
Schuster, Thomas; Hofmann, Bernd; Kazimierski, Kamil S
2012-01-01
Regularization methods aimed at finding stable approximate solutions are a necessary tool to tackle inverse and ill-posed problems. Usually the mathematical model of an inverse problem consists of an operator equation of the first kind, and often the associated forward operator acts between Hilbert spaces. However, for numerous problems the reasons for using a Hilbert space setting seem to be based on convention rather than on an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, sparsity constraints using general Lp-norms or the B
Academic Training Lecture - Regular Programme
PH Department
2011-01-01
Regular Lecture Programme 9 May 2011 ACT Lectures on Detectors - Inner Tracking Detectors by Pippa Wells (CERN) 10 May 2011 ACT Lectures on Detectors - Calorimeters (2/5) by Philippe Bloch (CERN) 11 May 2011 ACT Lectures on Detectors - Muon systems (3/5) by Kerstin Hoepfner (RWTH Aachen) 12 May 2011 ACT Lectures on Detectors - Particle Identification and Forward Detectors by Peter Krizan (University of Ljubljana and J. Stefan Institute, Ljubljana, Slovenia) 13 May 2011 ACT Lectures on Detectors - Trigger and Data Acquisition (5/5) by Dr. Brian Petersen (CERN) from 11:00 to 12:00 at CERN ( Bldg. 222-R-001 - Filtration Plant )
RES: Regularized Stochastic BFGS Algorithm
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients both for the determination of descent directions and for the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
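The core mechanics can be sketched as follows. This is a schematic, not the exact RES update: identity-matrix regularization keeps the curvature estimate well conditioned, and stochastic gradients evaluated at two consecutive iterates on the same sample form a consistent secant pair. The constant names `delta` and `gamma` are illustrative.

```python
import numpy as np

def res_like_step(w, Hinv, stoch_grad, sample, lr=0.1, delta=0.1, gamma=0.1):
    """One step of a simplified regularized stochastic BFGS.
    Hinv approximates the inverse Hessian; delta and gamma are small
    regularization constants that keep curvature estimates well behaved."""
    g = stoch_grad(w, sample)
    d = -(Hinv + gamma * np.eye(w.size)) @ g       # regularized descent direction
    w_new = w + lr * d
    s = w_new - w
    # Same sample for both gradients gives a consistent secant pair;
    # subtracting delta*s is the regularized gradient variation.
    yv = stoch_grad(w_new, sample) - g - delta * s
    sy = s @ yv
    if sy > 1e-3 * (s @ s):                        # curvature check before updating
        I = np.eye(w.size)
        rho = 1.0 / sy
        Hinv = (I - rho * np.outer(s, yv)) @ Hinv @ (I - rho * np.outer(yv, s)) \
               + rho * np.outer(s, s)
    return w_new, Hinv
```

When the curvature check fails, the step degenerates to preconditioned stochastic gradient descent, which is the intended safe fallback.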
Regularized Label Relaxation Linear Regression.
Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu
2018-04-01
Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs the class compactness graph based on manifold learning and uses it as the regularization item to avoid the problem of overfitting. The class compactness graph is used to ensure that the samples sharing the same labels can be kept close after they are transformed. Two different algorithms, which are based on -norm and -norm loss functions, respectively, are devised. These two algorithms have compact closed-form solutions in each iteration, so that they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of classification accuracy and running time.
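A minimal sketch of the label-relaxation idea is below. It omits the class compactness graph term and uses plain ridge (Frobenius) regularization, so it is an illustration of the slack-target mechanism only; function and variable names are assumptions, not the paper's code.

```python
import numpy as np

def relaxed_lr(X, y, lam=0.1, iters=30):
    """Alternating optimization: a ridge step for the transform W, then a
    closed-form update of the nonnegative relaxation matrix M
    (compactness-graph regularizer omitted in this sketch)."""
    n, d = X.shape
    classes = np.unique(y)
    Y = np.eye(classes.size)[np.searchsorted(classes, y)]  # strict binary labels
    B = np.where(Y > 0, 1.0, -1.0)   # dragging directions: correct class up, others down
    T = Y.copy()                     # relaxed (slack) target matrix
    for _ in range(iters):
        W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ T)
        M = np.maximum(B * (X @ W - Y), 0.0)  # elementwise-optimal slack, M >= 0
        T = Y + B * M
    return W, classes

def predict(W, classes, X):
    return classes[np.argmax(X @ W, axis=1)]
```

Each slack entry is the elementwise minimizer of the residual, so both alternating steps are closed-form, mirroring the per-iteration closed-form solutions claimed above.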
From inactive to regular jogger
DEFF Research Database (Denmark)
Lund-Cramer, Pernille; Brinkmann Løite, Vibeke; Bredahl, Thomas Viskum Gjelstrup
Title From inactive to regular jogger - a qualitative study of achieved behavioral change among recreational joggers Authors Pernille Lund-Cramer & Vibeke Brinkmann Løite Purpose Despite extensive knowledge of barriers to physical activity, most interventions promoting physical activity have proven...... study was conducted using individual semi-structured interviews on how a successful long-term behavior change had been achieved. Ten informants were purposely selected from participants in the DANO-RUN research project (7 men, 3 women, average age 41.5). Interviews were performed on the basis of the Theory of Planned Behavior (TPB) and the Transtheoretical Model (TTM). Coding and analysis of interviews were performed using NVivo 10 software. Results TPB: During the behavior change process, the intention to jog shifted from a focus on weight loss and improved fitness to both physical health, psychological......
Tessellating the Sphere with Regular Polygons
Soto-Johnson, Hortensia; Bechthold, Dawn
2004-01-01
Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares and pentagons.
On the equivalence of different regularization methods
International Nuclear Information System (INIS)
Brzezowski, S.
1985-01-01
The R̂-operation preceded by the regularization procedure is discussed. Some arguments are given, according to which the results may depend on the method of regularization, introduced in order to avoid divergences in perturbation calculations. 10 refs. (author)
The uniqueness of the regularization procedure
International Nuclear Information System (INIS)
Brzezowski, S.
1981-01-01
On the grounds of the BPHZ procedure, the criteria of correct regularization in perturbation calculations of QFT are given, together with the prescription for dividing the regularized formulas into the finite and infinite parts. (author)
Application of Turchin's method of statistical regularization
Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey
2018-04-01
During analysis of experimental data, one usually needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization based on the Bayesian approach to the regularization strategy.
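In spirit (though not the authors' implementation), Turchin's statistical regularization places a Gaussian smoothness prior on the unknown signal and takes the posterior mean, which for Gaussian noise reduces to a single linear solve. The second-difference smoothness operator and the hyperparameter handling below are illustrative assumptions.

```python
import numpy as np

def second_diff(n):
    """Discrete second-derivative operator; D.T @ D is the prior's
    precision up to the scale alpha (a typical, assumed choice)."""
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D

def posterior_mean(K, f, sigma, alpha):
    """Posterior mean of phi for f = K @ phi + noise(sigma), under a
    Gaussian smoothness prior with precision alpha * D.T @ D.
    Algebraically this is a Tikhonov-type regularized solve."""
    D = second_diff(K.shape[1])
    A = K.T @ K / sigma**2 + alpha * (D.T @ D)
    return np.linalg.solve(A, K.T @ f / sigma**2)
```

For an ill-conditioned apparatus matrix, the prior term suppresses the noise-dominated modes that a naive inversion would amplify.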
Regular extensions of some classes of grammars
Nijholt, Antinus
Culik and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this report we consider the analogous extension of the LL(k) grammars, called the LL-regular grammars. The relations of this class of grammars to other classes of grammars are shown. Every LL-regular
Energy Technology Data Exchange (ETDEWEB)
Ben-Zvi, I.; Kuczewski, A.; Altinbas, Z.; Beavis, D.; Belomestnykh; Dai, J. et al
2012-07-01
The Collider-Accelerator Department at Brookhaven National Laboratory is building a high-brightness 500 mA capable Energy Recovery Linac (ERL) as one of its main R&D thrusts towards eRHIC, the polarized electron-hadron collider proposed as an upgrade of the operating RHIC facility. The ERL is in final assembly stages, with injection commissioning starting in October 2012. The objective of this ERL is to serve as a platform for R&D into high current ERLs, in particular issues of halo generation and control, Higher-Order Mode (HOM) issues, coherent emissions from the beam, and high-brightness, high-power beam generation and preservation. The R&D ERL features a superconducting laser-photocathode RF gun with a high quantum efficiency photocathode served by a load-lock cathode delivery system, a highly damped 5-cell accelerating cavity, a highly flexible single-pass loop and a comprehensive system of beam instrumentation. In this ICFA Beam Dynamics Newsletter article we will describe the ERL in a degree of detail that is not usually found in regular publications. We will discuss the various systems of the ERL, following the electrons from the photocathode to the beam dump, cover the control system, machine protection, etc., and summarize with the status of the ERL systems.
Boussard, Daniel
1987-01-01
We begin by giving a description of the radio-frequency generator-cavity-beam coupled system in terms of basic quantities. Taking beam loading and cavity detuning into account, expressions for the cavity impedance as seen by the generator and as seen by the beam are derived. Subsequently methods of beam-loading compensation by cavity detuning, radio-frequency feedback and feedforward are described. Examples of digital radio-frequency phase and amplitude control for the special case of superco...
International Nuclear Information System (INIS)
Pendelbury, J.M.; Smith, K.F.
1987-01-01
Studies with directed collision-free beams of particles continue to play an important role in the development of modern physics and chemistry. The deflections suffered by such beams as they pass through electric and magnetic fields or laser radiation provide some of the most direct information about the individual constituents of the beam; the scattering observed when two beams intersect yields important data about the intermolecular forces responsible for the scattering. (author)
Class of regular bouncing cosmologies
Vasilić, Milovan
2017-06-01
In this paper, I construct a class of everywhere regular geometric sigma models that possess bouncing solutions. Precisely, I show that every bouncing metric can be made a solution of such a model. My previous attempt to do so by employing one scalar field has failed due to the appearance of harmful singularities near the bounce. In this work, I use four scalar fields to construct a class of geometric sigma models which are free of singularities. The models within the class are parametrized by their background geometries. I prove that, whatever background is chosen, the dynamics of its small perturbations is classically stable on the whole time axis. Contrary to what one expects from the structure of the initial Lagrangian, the physics of background fluctuations is found to carry two tensor, two vector, and two scalar degrees of freedom. The graviton mass, which naturally appears in these models, is shown to be several orders of magnitude smaller than its experimental bound. I provide three simple examples to demonstrate how this is done in practice. In particular, I show that graviton mass can be made arbitrarily small.
On beam quality and flatness of radiotherapy megavoltage photon beams
International Nuclear Information System (INIS)
Hossain, Murshed; Rhoades, Jeffrey
2016-01-01
The ratio of percentage depth dose (PDD) at two depths, the PDD at a depth of 10 cm (PDD10), and beam flatness are monitored regularly for radiotherapy beams for quality assurance. The purpose of this study is to understand how changes in one of these parameters affect the others: is it possible to monitor only the beam flatness and not PDD? The investigation has two components. First, naturally occurring, i.e., unintended, changes in the PDD ratio and in-plane flatness for the 6 and 10 MV photon beams of one particular Siemens Artiste linac are monitored over a period of about 4 years. Second, deliberate changes in the beam parameters are induced by changing the bending magnet current (BMI). Relationships between the various beam parameters are characterized for both unintended and deliberate changes. Long-term unintentional changes in the PDD ratio show no systematic trend. The in-plane flatness for the 6 and 10 MV beams shows a slow increase of 0.43% and 0.75%, respectively, over about 4 years, while the changes in the PDD ratio show no such trend. Changes in BMI of over 10% are required to induce changes in the beam quality indices at the 2% level. The PDD ratio for the 10 MV beam is found to be less sensitive, while the depth of maximum dose, dmax, is more sensitive to changes in BMI compared with the 6 MV beam. Tolerances are more stringent for PDD10 than for the PDD ratio for the 10 MV beam. The PDD ratio, PDD10, and flatness must be monitored independently. Furthermore, the off-axis ratio alone cannot be used to monitor flatness. The effect of beam quality change on the absolute dose is clinically insignificant.
National Research Council Canada - National Science Library
1997-01-01
.... In preparation for these changes, the Navy is exploring new command and control relationships, and the Marine Corps established Sea Dragon to experiment with emerging technologies, operational...
International Nuclear Information System (INIS)
Bogaty, J.; Clifft, B.E.; Zinkann, G.P.; Pardo, R.C.
1995-01-01
The ECR-PII injector beam line is operated at a fixed ion velocity. The platform high voltage is chosen so that all ions have a velocity of 0.0085c at the PII entrance. If a previous tune configuration for the linac is to be used, the beam arrival time must be matched to the previous tune as well. A nondestructive beam-phase pickup detector was developed and installed at the entrance to the PII linac. This device provides continuous phase and beam current information and allows quick optimization of the beam injected into PII. Bunches traverse a short tubular electrode thereby inducing displacement currents. These currents are brought outside the vacuum interface where a lumped inductance resonates electrode capacitance at one of the bunching harmonic frequencies. This configuration yields a basic sensitivity of a few hundred millivolts signal per microampere of beam current. Beam-induced radiofrequency signals are summed against an offset frequency generated by our master oscillator. The resulting kilohertz difference frequency conveys beam intensity and bunch phase information which is sent to separate processing channels. One channel utilizes a phase locked loop which stabilizes phase readings if beam is unstable. The other channel uses a linear full wave active rectifier circuit which converts kilohertz sine wave signal amplitude to a D.C. voltage representing beam current. A prototype set of electronics is now in use with the detector and we began to use the system in operation to set the arrival beam phase. A permanent version of the electronics system for the phase detector is now under construction. Additional nondestructive beam intensity and phase monitors at the "Booster" and "ATLAS" linac sections are planned as well as on some of the high-energy beam lines. Such a monitor will be particularly useful for FMA experiments where the primary beam hits one of the electric deflector plates.
Adaptive regularization of noisy linear inverse problems
DEFF Research Database (Denmark)
Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue
2006-01-01
In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value in the posterior and prior distribution. We present three examples: two simulations, and an application in fMRI neuroimaging.
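The stated relation can be checked numerically in a conjugate-Gaussian special case (an assumed illustration, not the paper's fMRI setup): with prior precision `alpha`, regularization function r(θ) = θ², and noisy observations of each θ, we search for the hyper-parameter at which the posterior expectation of r matches the prior expectation 1/alpha, and it lands near the true generating precision.

```python
import numpy as np

# Conjugate-Gaussian illustration: theta_i ~ N(0, 1/alpha),
# y_i = theta_i + noise with noise ~ N(0, 1/beta), r(theta) = theta^2,
# so E_prior[r] = 1/alpha.
rng = np.random.default_rng(5)
alpha_true, beta, n = 2.0, 4.0, 5000
theta = rng.normal(scale=alpha_true ** -0.5, size=n)
y = theta + rng.normal(scale=beta ** -0.5, size=n)

def gap(alpha):
    """E_posterior[theta^2] (averaged over the data) minus E_prior[theta^2]."""
    post_var = 1.0 / (alpha + beta)
    post_mean = y * beta / (alpha + beta)
    return np.mean(post_var + post_mean ** 2) - 1.0 / alpha

# Bisection for the hyper-parameter where both expectations match;
# gap is negative for too-small alpha and positive for too-large alpha.
lo, hi = 0.1, 20.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if gap(mid) > 0:
        hi = mid
    else:
        lo = mid
alpha_hat = 0.5 * (lo + hi)   # close to alpha_true for large n
```

With many i.i.d. parameters, the matching condition recovers the generating prior precision up to sampling noise.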
Higher derivative regularization and chiral anomaly
International Nuclear Information System (INIS)
Nagahama, Yoshinori.
1985-02-01
A higher derivative regularization which automatically leads to the consistent chiral anomaly is analyzed in detail. It explicitly breaks all the local gauge symmetry but preserves global chiral symmetry and leads to the chirally symmetric consistent anomaly. This regularization thus clarifies the physics content contained in the consistent anomaly. We also briefly comment on the application of this higher derivative regularization to massless QED. (author)
Regularity effect in prospective memory during aging
Directory of Open Access Journals (Sweden)
Geoffrey Blondelle
2016-10-01
Background: Regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impacts with regard to aging remain unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examined the role of several cognitive functions, including certain dimensions of executive functions (planning, inhibition, shifting), and binding, short-term memory, and retrospective episodic memory, to identify those involved in PM, according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performance was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones, with no age effect. It appeared that recalling regular activities only involved planning for both intermediate and older adults, while recalling irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities seem to play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM-paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical
Regularity effect in prospective memory during aging
Blondelle, Geoffrey; Hainselin, Mathieu; Gounden, Yannick; Heurley, Laurent; Voisin, Hélène; Megalakaki, Olga; Bressous, Estelle; Quaglino, Véronique
2016-01-01
Background: Regularity effect can affect performance in prospective memory (PM), but little is known on the cognitive processes linked to this effect. Moreover, its impacts with regard to aging remain unknown. To our knowledge, this study is the first to examine regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults.Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 1...
Regularization and error assignment to unfolded distributions
Zech, Gunter
2011-01-01
The commonly used approach to present unfolded data only in graphical form with the diagonal error depending on the regularization strength is unsatisfactory. It does not permit the adjustment of parameters of theories, the exclusion of theories that are admitted by the observed data, and does not allow the combination of data from different experiments. We propose fixing the regularization strength by a p-value criterion, indicating the experimental uncertainties independent of the regularization, and publishing the unfolded data in addition without regularization. These considerations are illustrated with three different unfolding and smoothing approaches applied to a toy example.
Iterative Regularization with Minimum-Residual Methods
DEFF Research Database (Denmark)
Jensen, Toke Koldborg; Hansen, Per Christian
2007-01-01
We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.
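The regularizing behavior of minimum-residual iterations can be illustrated with a tiny Arnoldi-based solver (a GMRES-style sketch; MINRES is the short-recurrence counterpart for symmetric matrices). The iteration count plays the role of the regularization parameter, and the residual norm is non-increasing over the nested Krylov subspaces. This is an illustrative sketch, not code from the paper.

```python
import numpy as np

def minres_like(A, b, k):
    """Minimize ||b - A x|| over the Krylov subspace
    span{b, A b, ..., A^(k-1) b} built by Arnoldi (a GMRES-style sketch;
    MINRES is the short-recurrence analogue for symmetric A)."""
    n = b.size
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(k):
        v = A @ Q[:, j]
        for i in range(j + 1):            # orthogonalize against previous basis vectors
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] > 1e-14:           # skip normalization on (near-)breakdown
            Q[:, j + 1] = v / H[j + 1, j]
    e1 = np.zeros(k + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)  # small projected least-squares problem
    return Q[:, :k] @ y
```

On a smoothing (ill-posed) test problem, early iterates already capture the smooth components of the solution, which is exactly the regularizing effect discussed above.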
Iterative regularization with minimum-residual methods
DEFF Research Database (Denmark)
Jensen, Toke Koldborg; Hansen, Per Christian
2006-01-01
We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.
Gamp, Alexander
2013-01-01
We begin by giving a description of the radio-frequency generator-cavity-beam coupled system in terms of basic quantities. Taking beam loading and cavity detuning into account, expressions for the cavity impedance as seen by the generator and as seen by the beam are derived. Subsequently methods of beam-loading compensation by cavity detuning, radio-frequency feedback and feedforward are described. Examples of digital radio-frequency phase and amplitude control for the special case of superconducting cavities are also given. Finally, a dedicated phase loop for damping synchrotron oscillations is discussed.
International Nuclear Information System (INIS)
Gamp, Alexander
2013-01-01
We begin by giving a description of the radio-frequency generator-cavity-beam coupled system in terms of basic quantities. Taking beam loading and cavity detuning into account, expressions for the cavity impedance as seen by the generator and as seen by the beam are derived. Subsequently methods of beam-loading compensation by cavity detuning, radio-frequency feedback and feedforward are described. Examples of digital radio-frequency phase and amplitude control for the special case of superconducting cavities are also given. Finally, a dedicated phase loop for damping synchrotron oscillations is discussed. (author)
THREE-BEAM INSTABILITY IN THE LHC*
Burov, A
2013-01-01
In the LHC, a transverse instability is regularly observed at 4 TeV right after the beta-squeeze, when the beams are separated by about ten of their transverse rms sizes [1-3], and only one of the two beams is seen to oscillate. So far only a single hypothesis is consistent with all the observations and basic concepts: that of a third beam, an electron cloud, generated by the two proton beams in the high-beta areas of the interaction regions. The instability results from the combined action of the cloud's nonlinear focusing and the impedance.
von Hillebrandt-Andrade, C.; Crespo Jones, H.
2012-12-01
requirements and factors have been considered for the sustainability of the stations. The sea level stations have to potentially sustain very aggressive conditions of not only tsunamis, but on a more regular basis, hurricanes. Given the requirement that the data be available in near real time, for tsunami and other coastal hazard application, robust communication systems are also essential. For the local operator, the ability to be able to visualize the data is critical and tools like the IOC Sea level Monitoring Facility and the Tide Tool program are very useful. It has also been emphasized the need for these stations to serve multiple purposes. For climate and other research applications the data need to be archived, QC'd and analyzed. Increasing the user base for the sea level data has also been seen as an important goal to gain the local buy in; local weather and meteorological offices are considered as key stakeholders but for whom applications still need to be developed. The CARIBE EWS continues to look forward to working with other IOC partners including the Global Sea Level Observing System (GLOSS) and Sub-Commission for the Caribbean and Adjacent Regions (IOCARIBE)/GOOS, as well as with local, national and global sea level station operators and agencies for the development of a sustainable sea level network.
A regularized stationary mean-field game
Yang, Xianjin
2016-01-01
In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.
A regularized stationary mean-field game
Yang, Xianjin
2016-04-19
In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.
On infinite regular and chiral maps
Arredondo, John A.; Valdez, Camilo Ramírez y Ferrán
2015-01-01
We prove that infinite regular and chiral maps take place on surfaces with at most one end. Moreover, we prove that an infinite regular or chiral map on an orientable surface with genus can only be realized on the Loch Ness monster, that is, the topological surface of infinite genus with one end.
From recreational to regular drug use
DEFF Research Database (Denmark)
Järvinen, Margaretha; Ravn, Signe
2011-01-01
This article analyses the process of going from recreational use to regular and problematic use of illegal drugs. We present a model containing six career contingencies relevant for young people’s progress from recreational to regular drug use: the closing of social networks, changes in forms...
Automating InDesign with Regular Expressions
Kahrel, Peter
2006-01-01
If you need to make automated changes to InDesign documents beyond what basic search and replace can handle, you need regular expressions, and a bit of scripting to make them work. This Short Cut explains both how to write regular expressions, so you can find and replace the right things, and how to use them in InDesign specifically.
Regularization modeling for large-eddy simulation
Geurts, Bernardus J.; Holm, D.D.
2003-01-01
A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of
2010-07-01
... employee under subsection (a) or in excess of the employee's normal working hours or regular working hours... Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL POLICY OR... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...
CSIR Research Space (South Africa)
Ngcobo, S
2011-11-01
The transformation of a Gaussian beam (GB) into a symmetrical higher-order TEMp0 Laguerre-Gaussian beam (LGB), the intensity distribution of which is further rectified and transformed into a Gaussian intensity distribution in the plane of a converging...
Macdonald, Kenneth C.
Forty-foot, storm-swept seas, Spitzbergen polar bears roaming vast expanses of Arctic ice, furtive exchanges of forbidden manuscripts in Cold War Moscow, the New York City fashion scene, diving in mini-subs to sea-floor hot springs, life with the astronauts, romance and heartbreak, and invading the last bastions of male exclusivity: all are present in this fast-moving, non-fiction account of one woman's fascinating adventures in the world of marine geology and oceanography.
An iterative method for Tikhonov regularization with a general linear regularization operator
Hochstenbach, M.E.; Reichel, L.
2010-01-01
Tikhonov regularization is one of the most popular approaches to solve discrete ill-posed problems with error-contaminated data. A regularization operator and a suitable value of a regularization parameter have to be chosen. This paper describes an iterative method, based on Golub-Kahan
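The paper's method is iterative and Krylov-based; as a small-scale reference point, the general-form Tikhonov problem min ||Ax - b||² + λ²||Lx||² can be solved directly by stacking it into one least-squares system. The first-derivative choice of L below is a common illustrative assumption, not the paper's operator.

```python
import numpy as np

def first_diff(n):
    """First-derivative operator, a common general-form choice for L."""
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    return D

def tikhonov(A, b, L, lam):
    """Solve min ||A x - b||^2 + lam^2 ||L x||^2 by stacking into one
    least-squares problem (fine for small dense problems; the paper's
    iterative method targets the large-scale case)."""
    K = np.vstack([A, lam * L])
    rhs = np.concatenate([b, np.zeros(L.shape[0])])
    x, *_ = np.linalg.lstsq(K, rhs, rcond=None)
    return x
```

The operator L shapes which solution features are penalized: with a derivative operator, oscillatory noise modes are damped while smooth components pass through.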
International Nuclear Information System (INIS)
Chao, A.W.; Keil, E.
1979-06-01
The stability of the coherent beam-beam effect between rigid bunches is studied analytically and numerically for a linear force by evaluating eigenvalues. For a realistic force, the stability is investigated by following the bunches for many revolutions. 4 refs., 13 figs., 2 tabs
Multiple graph regularized protein domain ranking
Wang, Jim Jing-Yan
2012-11-19
Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.
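A minimal single-graph version of the ranking scheme described above fits in a few lines: with graph Laplacian L = D - W and query indicator y, the scores minimizing ||f - y||² + α fᵀLf are f = (I + αL)⁻¹y. The tiny similarity graph below is invented for illustration and is not MultiG-Rank itself, which additionally learns weights over multiple graphs.

```python
import numpy as np

# Toy graph-regularized ranking: propagate a query label over a
# similarity graph so that well-connected items rank higher.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # pairwise similarities
L = np.diag(W.sum(axis=1)) - W              # graph Laplacian
y = np.array([1.0, 0.0, 0.0, 0.0])          # query is item 0
alpha = 0.5
f = np.linalg.solve(np.eye(4) + alpha * L, y)
print(np.argsort(-f))   # items ranked by propagated similarity to the query
```

Item 3, which connects to the query only through item 2, receives the lowest score, showing how the Laplacian term smooths scores along the graph's manifold structure.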
Hierarchical regular small-world networks
International Nuclear Information System (INIS)
Boettcher, Stefan; Goncalves, Bruno; Guclu, Hasan
2008-01-01
Two new networks are introduced that resemble small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. Both networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d_w = 1.306..., but possesses only a trivial fixed point T_c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ∼2√(log₂N²), exhibits 'ballistic' diffusion (d_w = 1), and a non-trivial ferromagnetic transition, T_c > 0. This suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)
Coupling regularizes individual units in noisy populations
International Nuclear Information System (INIS)
Ly Cheng; Ermentrout, G. Bard
2010-01-01
The regularity of a noisy system can be modulated in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula, assuming weak noise and coupling, for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher-dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.
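The first, well-known effect mentioned above (coupling lowers the variability of the collective activity) can be illustrated with a quick Euler-Maruyama simulation of two diffusively coupled O-U units; the parameters are arbitrary demo values, not ones from the paper.

```python
import numpy as np

# Two diffusively coupled Ornstein-Uhlenbeck units:
#   dx_i = (-x_i + k (x_j - x_i)) dt + sigma dW_i
# The network mean is a more regular (lower-variance) signal than
# either individual unit.
rng = np.random.default_rng(1)
dt, n, k, sigma = 0.01, 100_000, 1.0, 1.0
x = np.zeros(2)
xs = np.empty((n, 2))
for step in range(n):
    drift = -x + k * (x[::-1] - x)       # leak plus diffusive coupling
    x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
    xs[step] = x
print(xs.mean(axis=1).var() < xs[:, 0].var())   # collective activity is more regular
```

For this symmetric case the stationary variances can also be computed in closed form (mean: σ²/4; individual: (σ²/4)(1 + 1/(1 + 2k))), so the simulated gap is expected, not a sampling artifact.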
International Nuclear Information System (INIS)
Opanasenko, A.V.; Romanyuk, L.I.
1992-01-01
Beam-plasma interaction at the entrance of a symmetrically open plasma system with an electron beam injected through it is investigated. Ignition of the beam-plasma discharge on waves of the upper-hybrid dispersion branch of a magnetoactive plasma is found in the plasma penetrating into the vacuum counter to the beam. It is shown that the beam-plasma discharge is localized in the inhomogeneous penetrating plasma, in the zone where only these waves exist. Regularities of the beam-plasma discharge ignition and manifestation are described. It is found that the electron beam crossing the discharge zone undergoes strong energy relaxation. It is also shown that ignition of the beam-plasma discharge can be controlled by changing the potential of the electron beam collector. (author)
Synthetic methods for beam to beam power balancing capability of large laser facilities
International Nuclear Information System (INIS)
Chen Guangyu; Zhang Xiaomin; Zhao Runchang; Zheng Wanguo; Yang Xiaoyu; You Yong; Wang Chengcheng; Shao Yunfei
2011-01-01
To account for the output power balancing capability of large laser facilities, a synthetic method based on beam-to-beam root-mean-square is presented. Firstly, a conversion process from the facilities' original beam-power data to regular data is given; the regular data approximately follow a normal distribution, and a corresponding simple root-mean-square method for beam-to-beam power balancing capability is given. Secondly, based on the theory of total control charts and cause-selecting control charts, control charts with root-mean-square are established that show the short-term variation of the power balancing capability of the facilities. The mean rate of failure occurrence is also defined and used to describe the long-term trend of the global balancing capabilities of the facilities. Finally, the advantages of the intuitive and efficient diagnosis provided by the synthetic methods are illustrated through analysis of experimental data. (authors)
Diagrammatic methods in phase-space regularization
International Nuclear Information System (INIS)
Bern, Z.; Halpern, M.B.; California Univ., Berkeley
1987-11-01
Using the scalar prototype and gauge theory as the simplest possible examples, diagrammatic methods are developed for the recently proposed phase-space form of continuum regularization. A number of one-loop and all-order applications are given, including general diagrammatic discussions of the no-growth theorem and the uniqueness of the phase-space stochastic calculus. The approach also generates an alternate derivation of the equivalence of the large-β phase-space regularization to the more conventional coordinate-space regularization. (orig.)
J-regular rings with injectivities
Shen, Liang
2010-01-01
A ring R is called a J-regular ring if R/J(R) is von Neumann regular, where J(R) is the Jacobson radical of R. It is proved that if R is J-regular, then (i) R is right n-injective if and only if every homomorphism from an n-generated small right ideal of R to R_R can be extended to one from R_R to R_R; (ii) R is right FP-injective if and only if R is right (J, R)-FP-injective. Some known results are improved.
SeaWiFS Third Anniversary Global Biosphere
2002-01-01
September 18, 2000 is the third anniversary of the start of regular SeaWiFS operations observing this remarkable planet called Earth. This SeaWiFS image of the Global Biosphere depicts the ocean's long-term average phytoplankton chlorophyll concentration acquired between September 1997 and August 2000, combined with the SeaWiFS-derived Normalized Difference Vegetation Index (NDVI) over land during July 2000.
International Nuclear Information System (INIS)
Sessler, A.M.; Hopkins, D.B.
1986-06-01
The Two-Beam Accelerator (TBA) consists of a long high-gradient accelerator structure (HGS) adjacent to an equal-length Free Electron Laser (FEL). In the FEL, a beam propagates through a long series of undulators. At regular intervals, waveguides couple microwave power out of the FEL into the HGS. To replenish energy given up by the FEL beam to the microwave field, induction accelerator units are placed periodically along the length of the FEL. In this manner it is expected to achieve gradients of more than 250 MV/m and thus have a serious option for a 1 TeV x 1 TeV linear collider. The state of present theoretical understanding of the TBA is presented with particular emphasis upon operation of the ''steady-state'' FEL, phase and amplitude control of the rf wave, and suppression of sideband instabilities. Experimental work has focused upon the development of a suitable HGS and the testing of this structure using the Electron Laser Facility (ELF). A first test at ELF with a seven-cell 2π/3-mode structure is described; without preconditioning and with a not-very-good vacuum, it nevertheless yielded an average accelerating gradient of 180 MV/m at 35 GHz.
Sea level trends in South East Asian Seas (SEAS)
Strassburg, M. W.; Hamlington, B. D.; Leben, R. R.; Manurung, P.; Lumban Gaol, J.; Nababan, B.; Vignudelli, S.; Kim, K.-Y.
2014-10-01
Southeast Asian Seas (SEAS) span the largest archipelago in the global ocean and provide a complex oceanic pathway connecting the Pacific and Indian Oceans. The SEAS regional sea level trends are some of the highest observed in the modern satellite altimeter record that now spans almost two decades. Initial comparisons of global sea level reconstructions find that 17-year sea level trends over the past 60 years exhibit good agreement, in areas and at times of strong signal-to-noise ratio, with decadal variability forced by low-frequency variations in Pacific trade winds. The SEAS region exhibits sea level trends that vary dramatically over the studied time period. This historical variation suggests that the strong regional sea level trends observed during the modern satellite altimeter record will abate as trade winds fluctuate on decadal and longer time scales. Furthermore, after removing the contribution of the Pacific Decadal Oscillation (PDO) to sea level trends in the past twenty years, the rate of sea level rise is greatly reduced in the SEAS region. As a result of the influence of the PDO, the SEAS regional sea level trends during the 2010s and 2020s are likely to be less than the global mean sea level (GMSL) trend if the observed oscillations in wind forcing and sea level persist. Nevertheless, long-term sea level trends in the SEAS will continue to be affected by GMSL rise occurring now and in the future.
Generalized regular genus for manifolds with boundary
Directory of Open Access Journals (Sweden)
Paola Cristofori
2003-05-01
Full Text Available We introduce a generalization of the regular genus, a combinatorial invariant of PL manifolds ([10]), which is proved to be strictly related, in dimension three, to generalized Heegaard splittings defined in [12].
Geometric regularizations and dual conifold transitions
International Nuclear Information System (INIS)
Landsteiner, Karl; Lazaroiu, Calin I.
2003-01-01
We consider a geometric regularization for the class of conifold transitions relating D-brane systems on noncompact Calabi-Yau spaces to certain flux backgrounds. This regularization respects the SL(2,Z) invariance of the flux superpotential, and allows for computation of the relevant periods through the method of Picard-Fuchs equations. The regularized geometry is a noncompact Calabi-Yau which can be viewed as a monodromic fibration, with the nontrivial monodromy being induced by the regulator. It reduces to the original, non-monodromic background when the regulator is removed. Using this regularization, we discuss the simple case of the local conifold, and show how the relevant field-theoretic information can be extracted in this approach. (author)
Studies on regularities of metal ion sorption from seawater by clinoptilolytes of different origin
International Nuclear Information System (INIS)
Khamizov, R.Kh.; Butenko, T.Yu.; Bronov, L.V.; Skovyra, V.V.; Novikova, V.A.; AN SSSR, Vladivostok
1988-01-01
The regularities of metal ion sorption from sea water by different clinoptilolite (CP) samples are studied with the purpose of choosing the most promising sorbents for extracting strontium and rubidium. It is shown that internal diffusion is the rate-determining stage of sorption. The dependence of the effective coefficients of internal diffusion on the exchange level is determined. The distribution coefficients and the coefficients of single metal ion separation are determined, and the sorption selectivity series are established. All the CPs studied can be used for initial Rb concentration from sea water, while to extract strontium it is advisable to use zeolites from the Dzegvi and Tedzami deposits.
Fast and compact regular expression matching
DEFF Research Database (Denmark)
Bille, Philip; Farach-Colton, Martin
2008-01-01
We study 4 problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how to improve the space and/or remove a dependency on the alphabet size for each problem, using either an improved tabulation technique of an existing algorithm or by combining known algorithms in a new way.
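For context, one of the four problems, string edit distance, has a classic O(nm) dynamic program; the word-RAM techniques in the paper accelerate this kind of table-filling by packing cells into machine words. A plain (unpacked) baseline sketch:

```python
# Classic row-by-row dynamic program for string edit distance
# (Levenshtein distance), using O(n) space.
def edit_distance(a: str, b: str) -> int:
    m, n = len(a), len(b)
    prev = list(range(n + 1))                # distance from empty prefix
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitution
        prev = cur
    return prev[n]

print(edit_distance("regular", "regulator"))  # -> 2
```

Each table cell differs from its neighbors by at most one, which is the structural property that makes the bit-parallel and tabulation speedups referenced above possible.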
Regular-fat dairy and human health
DEFF Research Database (Denmark)
Astrup, Arne; Bradley, Beth H Rice; Brenna, J Thomas
2016-01-01
In recent history, some dietary recommendations have treated dairy fat as an unnecessary source of calories and saturated fat in the human diet. These assumptions, however, have recently been brought into question by current research on regular-fat dairy products and human health. In an effort to ..., cheese and yogurt, can be important components of an overall healthy dietary pattern. Systematic examination of the effects of dietary patterns that include regular-fat milk, cheese and yogurt on human health is warranted.
Deterministic automata for extended regular expressions
Directory of Open Access Journals (Sweden)
Syzdykov Mirzakhmet
2017-12-01
Full Text Available In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions, such as intersection, subtraction and complement. A method of “overriding” the source NFA (an NFA not defined with subset-construction rules) is used. Past work described only the algorithm for the AND-operator (the intersection of regular languages); in this paper the construction for the MINUS-operator (and complement) is shown.
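The standard way to realize the AND-operator on automata is the product construction: run both DFAs in lockstep and accept when both accept. The sketch below is that textbook construction (not necessarily the paper's “overriding” method), over a hypothetical two-letter alphabet:

```python
# Product construction for DFA intersection: states are pairs,
# accepting iff both component states accept.
def intersect(d1, d2):
    # each DFA: (start_state, accepting_set, transitions {(state, ch): state})
    s1, f1, t1 = d1
    s2, f2, t2 = d2
    start = (s1, s2)
    trans, accept = {}, set()
    stack, seen = [start], {start}
    while stack:
        p, q = stack.pop()
        if p in f1 and q in f2:
            accept.add((p, q))
        for ch in "ab":                      # demo alphabet {a, b}
            nxt = (t1[(p, ch)], t2[(q, ch)])
            trans[((p, q), ch)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return start, accept, trans

def accepts(dfa, word):
    state, accept, trans = dfa
    for ch in word:
        state = trans[(state, ch)]
    return state in accept

# DFA 1: even number of a's; DFA 2: ends with b
even_a = (0, {0}, {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1})
ends_b = (0, {1}, {(0, "a"): 0, (0, "b"): 1, (1, "a"): 0, (1, "b"): 1})
both = intersect(even_a, ends_b)
print(accepts(both, "aab"))   # True: two a's, ends with b
print(accepts(both, "ab"))    # False: odd number of a's
```

Complement is even simpler on a complete DFA (flip the accepting set), and subtraction follows as intersection with a complement.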
Online Manifold Regularization by Dual Ascending Procedure
Sun, Boliang; Li, Guohui; Jia, Li; Zhang, Hui
2013-01-01
We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is a key to transferring manifold regularization from offline to online in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches.
International Nuclear Information System (INIS)
1988-01-01
Considerable experience has now been gained with the various beam transport lines, and a number of minor changes have been made to improve the ease of operation. These include: replacement of certain little-used slits by profile monitors (harps or scanners); relocation of steering magnets, closer to diagnostic harps or profile scanners; installation of a scanner inside the isocentric neutron therapy system; and conversion of a 2-doublet quadrupole telescope (on the neutron therapy beamline) to a 2-triplet telescope. The beam-swinger project has been delayed by very late delivery of the magnet iron to the manufacturer, but is now progressing smoothly. The K=600 spectrometer magnets have now been delivered and are being assembled for field mapping. The x,y-table with its associated mapping equipment is complete, together with the driver software. One of the experimental areas has been dedicated to the production of collimated neutron beams and has been equipped with a bending magnet and beam dump, together with steel collimators fixed at 4 degrees intervals from 0 degrees to 16 degrees. Changes to the target cooling and shielding system for isotope production have led to a request for much smaller beam spot sizes on target, and preparations have been made for rearrangement of the isotope beamline to permit installation of quadrupole triplets on the three beamlines after the switching magnet. A practical system of quadrupoles for matching beam properties to the spectrometer has been designed. 6 figs
Beam-beam interaction and Pacman effects in the SSC with random nonlinear multipoles
International Nuclear Information System (INIS)
Goderre, G.P.; Ohnuma, S.
1988-01-01
In order to find the combined effects of beam-beam interaction (head-on and long-range) and random nonlinear multipoles in dipole magnets, transverse tunes and smears have been calculated as a function of oscillation amplitudes. Two types of particles, ''regular'' and ''Pacman,'' have been investigated using a modified version of the tracking code TEAPOT. Regular particles experience beam-beam interactions in all four interaction regions (IRs), both head-on and long-range, while Pacman particles interact with bunches of the other beam in one medium-beta and one low-beta IR only. The model for the beam-beam interaction is of weak-strong type, and the strong beam is assumed to have a round Gaussian charge distribution. Furthermore, it is assumed that the vertical closed-orbit deviation arising from the finite crossing angle of 70 μrad is perfectly compensated for regular particles. The same compensation applied to Pacman particles creates a closed-orbit distortion. Linear tunes are adjusted for regular particles to the design values, but there are no nonlinear corrections except for chromaticity-correcting sextupoles in two families. Results obtained in this study do not show any reduction of dynamic or linear aperture for Pacman particles, but some doubts exist regarding the validity of defining the linear aperture from the smear alone. Preliminary results are given for regular particles when (Δp/p) is modulated by the synchrotron oscillation. For these, fifty oscillations corresponding to 26,350 revolutions have been tracked. A very slow increase in the horizontal amplitude, approximately 4 × 10⁻⁴ per oscillation (relative), is a possibility, but this should be confirmed by tracking a larger number of revolutions. 11 refs., 18 figs., 2 tabs
Perovich, D.; Gerland, S.; Hendricks, S.; Meier, Walter N.; Nicolaus, M.; Richter-Menge, J.; Tschudi, M.
2013-01-01
During 2013, Arctic sea ice extent remained well below normal, but the September 2013 minimum extent was substantially higher than the record-breaking minimum in 2012. Nonetheless, the minimum was still much lower than normal, and the long-term trend in Arctic September extent is -13.7% per decade relative to the 1981-2010 average. The less extreme conditions this year compared to 2012 were due to cooler temperatures and wind patterns that favored retention of ice through the summer. Sea ice thickness and volume remained near record-low levels, though indications are of slightly thicker ice compared to the record low of 2012.
Improvements in GRACE Gravity Fields Using Regularization
Save, H.; Bettadpur, S.; Tapley, B. D.
2008-12-01
The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as the "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residual (such residuals are a frequent consequence of signal suppression from regularization). Up to degree 14, the signal in the regularized solutions shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude, small-spatial-extent events - such as the Great Sumatra-Andaman Earthquake of 2004 - are visible in the global solutions without using the special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in small river basins, like the Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or
Digital Repository Service at National Institute of Oceanography (India)
Church, J.A.; Clark, P.U.; Cazenave, A.; Gregory, J.M.; Jevrejeva, S.; Levermann, A.; Merrifield, M.A.; Milne, G.A.; Nerem, R.S.; Nunn, P.D.; Payne, A.J.; Pfeffer, W.T.; Stammer, D.; Unnikrishnan, A.S.
This chapter considers changes in global mean sea level, regional sea level, sea level extremes, and waves. Confidence in projections of global mean sea level rise has increased since the Fourth Assessment Report (AR4) because of the improved...
International Nuclear Information System (INIS)
Uesaka, Mitsuru
2003-01-01
Present state and future prospects are described for quantum beams in medical use. Efforts toward compactness of linacs for advanced cancer therapy have brought about machines like Accuray's CyberKnife and TOMOTHERAPY (TomoTherapy Inc.), where an acceleration frequency in the X-band (9-11 GHz) is used. For cervical vein angiography with the X-band linac, a compact hard X-ray source is being developed, based on (inverse) Compton scattering through laser-electron collision. More intense beams and lasers are necessary at present. A compact machine generating particle beams of 10 MeV-1 GeV (a laser-plasma accelerator) for cancer therapy is also being developed, using the recent compression technique (chirped-pulse amplification) to generate lasers of >10 TW. The University of Tokyo is studying electron beams with energies of GeV order, laser-based synchrotron X-rays, and imaging with short-pulse ion beams. Development of advanced compact accelerators is being attempted globally. In Japan, a virtual laboratory led by the National Institute of Radiological Sciences (NIRS), a working group of universities and research facilities under the Ministry of Education, Culture, Sports, Science and Technology, started in 2001 for the practical manufacturing of the above-mentioned machines for cancer therapy and angiography. Virtual Factory (Inc.), a business venture, is to be established in the future. (N.I.)
The Problems of Novice Classroom Teachers having Regular and Alternative Certificates
Taneri, Pervin Oya; Ok, Ahmet
2014-01-01
The purposes of this study are to understand the problems of classroom teachers in their first three years of teaching, and to scrutinize whether these problems differ according to whether teachers hold regular or alternative certification. The sample of this study was 275 classroom teachers in public elementary schools in the districts of Ordu, Samsun, and Sinop in the Black Sea region. The data gathered through the questionnaire were subjected to descriptive and inferential statistical analysis. Res...
Regular Expression Matching and Operational Semantics
Directory of Open Access Journals (Sweden)
Asiri Rathnayake
2011-08-01
Full Text Available Many programming languages and tools, ranging from grep to the Java String library, contain regular expression matchers. Rather than first translating a regular expression into a deterministic finite automaton, such implementations typically match the regular expression on the fly. Thus they can be seen as virtual machines interpreting the regular expression much as if it were a program with some non-deterministic constructs such as the Kleene star. We formalize this implementation technique for regular expression matching using operational semantics. Specifically, we derive a series of abstract machines, moving from the abstract definition of matching to increasingly realistic machines. First a continuation is added to the operational semantics to describe what remains to be matched after the current expression. Next, we represent the expression as a data structure using pointers, which enables redundant searches to be eliminated via testing for pointer equality. From there, we arrive both at Thompson's lockstep construction and a machine that performs some operations in parallel, suitable for implementation on a large number of cores, such as a GPU. We formalize the parallel machine using process algebra and report some preliminary experiments with an implementation on a graphics processor using CUDA.
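One concrete way to "match on the fly", in the spirit of the abstract machines above, is Brzozowski's derivative method: repeatedly differentiate the expression by each input character and test the result for nullability. This sketch illustrates that alternative route, not the paper's continuation-based machines:

```python
# Tiny regular-expression matcher via Brzozowski derivatives.
# AST nodes: ("nul",) empty set, ("eps",) empty string, ("chr", c),
# ("cat", r, s), ("alt", r, s), ("star", r).
def nullable(r):
    tag = r[0]
    if tag in ("eps", "star"):
        return True
    if tag in ("nul", "chr"):
        return False
    if tag == "cat":
        return nullable(r[1]) and nullable(r[2])
    return nullable(r[1]) or nullable(r[2])       # alt

def deriv(r, c):
    tag = r[0]
    if tag in ("nul", "eps"):
        return ("nul",)
    if tag == "chr":
        return ("eps",) if r[1] == c else ("nul",)
    if tag == "alt":
        return ("alt", deriv(r[1], c), deriv(r[2], c))
    if tag == "cat":
        first = ("cat", deriv(r[1], c), r[2])
        return ("alt", first, deriv(r[2], c)) if nullable(r[1]) else first
    return ("cat", deriv(r[1], c), r)             # star: d(r) . r*

def matches(r, s):
    for c in s:
        r = deriv(r, c)                           # step the "machine" by one char
    return nullable(r)

ab_star = ("star", ("cat", ("chr", "a"), ("chr", "b")))   # (ab)*
print(matches(ab_star, "abab"), matches(ab_star, "aba"))  # True False
```

Like the machines derived in the paper, this interprets the expression directly, character by character, instead of compiling it to a DFA up front; memoizing the derivatives would in fact build that DFA lazily.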
Regularities, Natural Patterns and Laws of Nature
Directory of Open Access Journals (Sweden)
Stathis Psillos
2014-02-01
Full Text Available The goal of this paper is to sketch an empiricist metaphysics of laws of nature. The key idea is that there are regularities without regularity-enforcers. Differently put, there are natural laws without law-makers of a distinct metaphysical kind. This sketch will rely on the concept of a natural pattern and more significantly on the existence of a network of natural patterns in nature. The relation between a regularity and a pattern will be analysed in terms of mereology. Here is the road map. In section 2, I will briefly discuss the relation between empiricism and metaphysics, aiming to show that an empiricist metaphysics is possible. In section 3, I will offer arguments against stronger metaphysical views of laws. Then, in section 4 I will motivate nomic objectivism. In section 5, I will address the question ‘what is a regularity?’ and will develop a novel answer to it, based on the notion of a natural pattern. In section 6, I will raise the question: ‘what is a law of nature?’, the answer to which will be: a law of nature is a regularity that is characterised by the unity of a natural pattern.
Importance of beam-beam tune spread to collective beam-beam instability in hadron colliders
International Nuclear Information System (INIS)
Jin Lihui; Shi Jicong
2004-01-01
In hadron colliders, electron-beam compensation of the beam-beam tune spread has been explored as a way to reduce beam-beam effects. In this paper, effects of the tune-spread compensation on beam-beam instabilities were studied with a self-consistent beam-beam simulation in model lattices of the Tevatron and the Large Hadron Collider. It was found that the reduction of the tune spread with the electron-beam compensation could induce a coherent beam-beam instability. The merit of the compensation with different degrees of tune-spread reduction was evaluated based on beam-size growth. When the two beams have the same betatron tune, the compensation could do more harm than good when only beam-beam effects are considered. If the tune split between the two beams is large enough, the compensation with a small reduction of the tune spread could benefit the beams, as Landau damping suppresses the coherent beam-beam instability. The result indicates that nonlinear (nonintegrable) beam-beam effects could dominate beam dynamics, and that reducing the beam-beam tune spread by introducing additional beam-beam interactions and reducing Landau damping may not improve the stability of the beams.
International Nuclear Information System (INIS)
Abell, D; Adelmann, A; Amundson, J; Dragt, A; Mottershead, C; Neri, F; Pogorelov, I; Qiang, J; Ryne, R; Shalf, J; Siegerist, C; Spentzouris, P; Stern, E; Venturini, M; Walstrom, P
2006-01-01
We describe some of the accomplishments of the Beam Dynamics portion of the SciDAC Accelerator Science and Technology project. During the course of the project, our beam dynamics software has evolved from the era of different codes for each physical effect, to the era of hybrid codes combining state-of-the-art implementations for multiple physical effects, to the beginning of the era of true multi-physics frameworks. We describe some of the infrastructure that has been developed over the course of the project and advanced features of the most recent developments, the interplay between beam studies and simulations, and applications to current machines at Fermilab. Finally, we discuss current and future plans for simulations of the International Linear Collider.
Academic Training Lecture Regular Programme: Particle Therapy
2012-01-01
Particle Therapy using Proton and Ion Beams - From Basic Principles to Daily Operations and Future Concepts by Andreas Peter (Head of Accelerator Operations, Heidelberg Ion Beam Therapy Centre (HIT), Germany) Part I: Tuesday, September 11, 2012 from 11:00 to 12:00 (Europe/Zurich) at CERN ( 222-R-001 - Filtration Plant ) • An introduction about the historical developments of accelerators and their use for medical applications: tumour treatment from X-rays to particle therapy • Description of the underlying physics and biology of particle therapy; implications on the requirements for the needed beam parameters (energy, intensity, focus, beam structure) • Accelerator technology used for particle therapy so far: cyclotrons and synchrotrons • Particle therapy facilities worldwide: an overview and some examples in detail: PSI/Switzerland, Loma Linda/USA, HIMAC/Japan, HIT/Heidelberg, CNAO/Italy Part II: Wednesday, September 12, 2012 from 11:00 to 12:00 (Europe/Zurich) at CER...
Fractional Regularization Term for Variational Image Registration
Directory of Open Access Journals (Sweden)
Rafael Verdú-Monedero
2009-01-01
Full Text Available Image registration is a widely used task of image analysis with applications in many fields. Its classical formulation and current improvements are given in the spatial domain. In this paper a regularization term based on fractional order derivatives is formulated. This term is defined and implemented in the frequency domain by translating the energy functional into the frequency domain and obtaining the Euler-Lagrange equations which minimize it. The new regularization term leads to a simple formulation and design, being applicable to higher dimensions by using the corresponding multidimensional Fourier transform. The proposed regularization term allows for a real gradual transition from a diffusion registration to a curvature registration, which is better suited to some applications and is not possible in the spatial domain. Results with actual 3D images show the validity of this approach.
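The frequency-domain view can be illustrated with a one-dimensional sketch: a fractional-order penalty is obtained by weighting the signal spectrum with |ω| raised to a fractional power. The function below is a hypothetical 1-D toy, not the paper's registration functional:

```python
import numpy as np

def fractional_reg_energy(u, alpha):
    """Fractional-order regularization energy of a 1-D signal, evaluated in
    the frequency domain as sum_omega |omega|^(2*alpha) |U(omega)|^2 / N.
    alpha = 1 mimics a diffusion (first-derivative) penalty, alpha = 2 a
    curvature (second-derivative) penalty; fractional alpha interpolates."""
    U = np.fft.fft(u)
    omega = 2.0 * np.pi * np.fft.fftfreq(len(u))
    return float(np.sum(np.abs(omega) ** (2 * alpha) * np.abs(U) ** 2)) / len(u)
```

For a pure sinusoid at angular frequency ω₁, the energies at two orders differ exactly by a factor ω₁^(2Δα), which is the gradual diffusion-to-curvature transition the abstract refers to.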
International Nuclear Information System (INIS)
Obregon, Octavio; Quevedo, Hernando; Ryan, Michael P.
2004-01-01
We construct a family of time and angular dependent, regular S-brane solutions which corresponds to a simple analytical continuation of the Zipoy-Voorhees 4-dimensional vacuum spacetime. The solutions are asymptotically flat and turn out to be free of singularities without requiring a twist in space. They can be considered as the simplest non-singular generalization of the singular S0-brane solution. We analyze the properties of a representative of this family of solutions and show that it resembles to some extent the asymptotic properties of the regular Kerr S-brane. The R-symmetry corresponds, however, to the general Lorentzian symmetry. Several generalizations of this regular solution are derived which include a charged S-brane and an additional dilatonic field. (author)
Online Manifold Regularization by Dual Ascending Procedure
Directory of Open Access Journals (Sweden)
Boliang Sun
2013-01-01
Full Text Available We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of the hinge function is the key to transferring manifold regularization from the offline to the online setting in this paper. Our algorithms are derived by gradient ascent on the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches. An important conclusion is that our online MR algorithms can handle the settings where the target hypothesis is not fixed but drifts with the sequence of examples. We also recap and draw connections to earlier works. This paper paves the way to the design and analysis of online manifold regularization algorithms.
Sea level trends in Southeast Asian seas
Strassburg, M. W.; Hamlington, B. D.; Leben, R. R.; Manurung, P.; Lumban Gaol, J.; Nababan, B.; Vignudelli, S.; Kim, K.-Y.
2015-05-01
Southeast Asian seas span the largest archipelago in the global ocean and provide a complex oceanic pathway connecting the Pacific and Indian oceans. The Southeast Asian sea regional sea level trends are some of the highest observed in the modern satellite altimeter record that now spans almost 2 decades. Initial comparisons of global sea level reconstructions find that 17-year sea level trends over the past 60 years exhibit good agreement with decadal variability associated with the Pacific Decadal Oscillation and related fluctuations of trade winds in the region. The Southeast Asian sea region exhibits sea level trends that vary dramatically over the studied time period. This historical variation suggests that the strong regional sea level trends observed during the modern satellite altimeter record will abate as trade winds fluctuate on decadal and longer timescales. Furthermore, after removing the contribution of the Pacific Decadal Oscillation (PDO) to sea level trends in the past 20 years, the rate of sea level rise is greatly reduced in the Southeast Asian sea region. As a result of the influence of the PDO, the Southeast Asian sea regional sea level trends during the 2010s and 2020s are likely to be less than the global mean sea level (GMSL) trend if the observed oscillations in wind forcing and sea level persist. Nevertheless, long-term sea level trends in the Southeast Asian seas will continue to be affected by GMSL rise occurring now and in the future.
Regular transport dynamics produce chaotic travel times.
Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F; Toledo, Benjamín; Valdivia, Juan Alejandro
2014-06-01
In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.
Regularity of difference equations on Banach spaces
Agarwal, Ravi P; Lizama, Carlos
2014-01-01
This work introduces readers to the topic of maximal regularity for difference equations. The authors systematically present the method of maximal regularity, outlining basic linear difference equations along with relevant results. They address recent advances in the field, as well as basic semigroup and cosine operator theories in the discrete setting. The authors also identify some open problems that readers may wish to take up for further research. This book is intended for graduate students and researchers in the area of difference equations, particularly those with advanced knowledge of and interest in functional analysis.
PET regularization by envelope guided conjugate gradients
International Nuclear Information System (INIS)
Kaufman, L.; Neumaier, A.
1996-01-01
The authors propose a new way to iteratively solve large-scale ill-posed problems, and in particular the image reconstruction problem in positron emission tomography, by exploiting the relation between Tikhonov regularization and multiobjective optimization to iteratively obtain approximations to the Tikhonov L-curve and its corner. Monitoring the change of the approximate L-curves allows us to adjust the regularization parameter adaptively during a preconditioned conjugate gradient iteration, so that the desired solution can be reconstructed with a small number of iterations.
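The Tikhonov L-curve underlying this method can be traced on a small ill-posed problem. Below is a minimal sketch, assuming a Hilbert-matrix test problem and a plain normal-equations solver (not the authors' preconditioned conjugate gradient iteration):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov solution of min ||Ax - b||^2 + lam^2 ||x||^2
    via the normal equations (a toy; the paper iterates with PCG)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

def l_curve(A, b, lams):
    """Trace the L-curve: residual norm vs. solution norm over a lambda grid."""
    res, sol = [], []
    for lam in lams:
        x = tikhonov(A, b, lam)
        res.append(np.linalg.norm(A @ x - b))
        sol.append(np.linalg.norm(x))
    return np.array(res), np.array(sol)

# Mildly ill-posed toy problem: an 8x8 Hilbert matrix with perturbed data.
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
b = A @ np.ones(n) + 1e-4 * np.sin(np.arange(n))
res, sol = l_curve(A, b, np.logspace(-6, 0, 25))
```

Plotted on log-log axes, the two norms trace the characteristic "L"; its corner marks the lambda balancing data fit against regularization, which the paper tracks adaptively during the iteration.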
Matrix regularization of embedded 4-manifolds
International Nuclear Information System (INIS)
Trzetrzelewski, Maciej
2012-01-01
We consider products of two 2-manifolds such as S²×S², embedded in Euclidean space, and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)⊗SU(N), i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N²×N² matrix representations of the 4-algebra (and, as a byproduct, of the 3-algebra, which makes the regularization of S³ also possible).
A Magnetic Transport Middle Eastern Positron Beam
International Nuclear Information System (INIS)
Al-Qaradawi, I.Y.; Britton, D.T.; Rajaraman, R.; Abdulmalik, D.
2008-01-01
A magnetically guided slow positron beam is being constructed at Qatar University and is currently being optimised for regular operation. This is the first positron beam in the Middle East, as well as being the first Arabic positron beam. Novel features in the design include a purely magnetic in-line deflector, working in the solenoid guiding field, to eliminate un-moderated positrons and block the direct line of sight to the source. The impact of this all-magnetic transport on the Larmor radius and resultant beam characteristics is studied by SIMION simulations for both ideal and real-life magnetic field variations. These results are discussed in light of the coupled effect arising from electrostatic beam extraction.
First beam splashes at the LHC
CERN Bulletin
2015-01-01
After a two-year shutdown, the first beams of Run 2 circulated in the LHC last Sunday. On Tuesday, the LHC operators performed dedicated runs to allow some of the experiments to record their first signals coming from particles splashed out when the circulating beams hit the collimators. Powerful reconstruction software then transforms the electronic signals into colourful images. “Splash” events are used by the experiments to test their numerous subdetectors and to synchronise them with the LHC clock. These events are recorded when the path of particles travelling in the LHC vacuum pipe is intentionally obstructed using collimators – one-metre-long graphite or tungsten jaws that are also used to catch particles that wander too far from the beam centre and to protect the accelerator against unavoidable regular and irregular beam losses. The particles sprayed out of the collision between the beam and the collimators are mostly muons. ATLAS and CMS&...
On a correspondence between regular and non-regular operator monotone functions
DEFF Research Database (Denmark)
Gibilisco, P.; Hansen, Frank; Isola, T.
2009-01-01
We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information....
Synchronisation phenomenon in three blades rotor driven by regular or chaotic oscillations
Directory of Open Access Journals (Sweden)
Szmit Zofia
2018-01-01
Full Text Available The goal of the paper is to analyse the influence of different types of excitation on the synchronisation phenomenon in the case of a rotating system composed of a rigid hub and three flexible composite beams. In the model it is assumed that two blades, due to structural differences, are de-tuned. Numerical calculations are divided into two parts: first the rotating system is excited by a torque given by a regular harmonic function, then in the second part the torque is produced by a chaotic Duffing oscillator. The synchronisation phenomenon between the beams is analysed for both regular and chaotic motions. Partial differential equations of motion are solved numerically, and resonance curves, time series and Poincaré maps are presented for selected excitation torques.
Galvis-Sánchez, Andrea C.; Lopes, João Almeida; Delgadillo, Ivone; Rangel, António O. S. S.
2013-01-01
The geographical indication (GI) status links a product with the territory and with the biodiversity involved. In addition, the specific knowledge and cultural practices of a human group that permit transforming a resource into a useful good are protected under a GI designation. Traditional sea salt is a hand-harvested product originating exclusively from salt marshes in specific geographical regions. Once salt is harvested, no washing, artificial drying or addition of anti-caking agents are all...
Regularity and irreversibility of weekly travel behavior
Kitamura, R.; van der Hoorn, A.I.J.M.
1987-01-01
Dynamic characteristics of travel behavior are analyzed in this paper using weekly travel diaries from two waves of panel surveys conducted six months apart. An analysis of activity engagement indicates the presence of significant regularity in weekly activity participation between the two waves.
Regular and context-free nominal traces
DEFF Research Database (Denmark)
Degano, Pierpaolo; Ferrari, Gian-Luigi; Mezzetti, Gianluca
2017-01-01
Two kinds of automata are presented, for recognising new classes of regular and context-free nominal languages. We compare their expressive power with analogous proposals in the literature, showing that they express novel classes of languages. Although many properties of classical languages hold ...
Faster 2-regular information-set decoding
Bernstein, D.J.; Lange, T.; Peters, C.P.; Schwabe, P.; Chee, Y.M.
2011-01-01
Fix positive integers B and w. Let C be a linear code over F₂ of length Bw. The 2-regular-decoding problem is to find a nonzero codeword consisting of w length-B blocks, each of which has Hamming weight 0 or 2. This problem appears in attacks on the FSB (fast syndrome-based) hash function and
Complexity in union-free regular languages
Czech Academy of Sciences Publication Activity Database
Jirásková, G.; Masopust, Tomáš
2011-01-01
Roč. 22, č. 7 (2011), s. 1639-1653 ISSN 0129-0541 Institutional research plan: CEZ:AV0Z10190503 Keywords : Union-free regular language * one-cycle-free-path automaton * descriptional complexity Subject RIV: BA - General Mathematics Impact factor: 0.379, year: 2011 http://www.worldscinet.com/ijfcs/22/2207/S0129054111008933.html
Regular Gleason Measures and Generalized Effect Algebras
Dvurečenskij, Anatolij; Janda, Jiří
2015-12-01
We study measures, finitely additive measures, regular measures, and σ-additive measures that can attain even infinite values on the quantum logic of a Hilbert space. We show when particular classes of non-negative measures can be studied in the frame of generalized effect algebras.
Regularization of finite temperature string theories
International Nuclear Information System (INIS)
Leblanc, Y.; Knecht, M.; Wallet, J.C.
1990-01-01
The tachyonic divergences occurring in the free energy of various string theories at finite temperature are eliminated through the use of regularization schemes and analytic continuations. For closed strings, we obtain finite expressions which, however, develop an imaginary part above the Hagedorn temperature, whereas open string theories are still plagued with dilatonic divergences. (orig.)
A Sim(2) invariant dimensional regularization
Directory of Open Access Journals (Sweden)
J. Alfaro
2017-09-01
Full Text Available We introduce a Sim(2) invariant dimensional regularization of loop integrals. Then we can compute the one-loop quantum corrections to the photon self-energy, electron self-energy and vertex in the Electrodynamics sector of the Very Special Relativity Standard Model (VSRSM).
Continuum regularized Yang-Mills theory
International Nuclear Information System (INIS)
Sadun, L.A.
1987-01-01
Using the machinery of stochastic quantization, Z. Bern, M. B. Halpern, C. Taubes and I recently proposed a continuum regularization technique for quantum field theory. This regularization may be implemented by applying a regulator to either the (d + 1)-dimensional Parisi-Wu Langevin equation or, equivalently, to the d-dimensional second order Schwinger-Dyson (SD) equations. This technique is non-perturbative, respects all gauge and Lorentz symmetries, and is consistent with a ghost-free gauge fixing (Zwanziger's). This thesis is a detailed study of this regulator, and of regularized Yang-Mills theory, using both perturbative and non-perturbative techniques. The perturbative analysis comes first. The mechanism of stochastic quantization is reviewed, and a perturbative expansion based on second-order SD equations is developed. A diagrammatic method (SD diagrams) for evaluating terms of this expansion is developed. We apply the continuum regulator to a scalar field theory. Using SD diagrams, we show that all Green functions can be rendered finite to all orders in perturbation theory. Even non-renormalizable theories can be regularized. The continuum regulator is then applied to Yang-Mills theory, in conjunction with Zwanziger's gauge fixing. A perturbative expansion of the regulator is incorporated into the diagrammatic method. It is hoped that the techniques discussed in this thesis will contribute to the construction of a renormalized Yang-Mills theory in 3 and 4 dimensions.
Gravitational lensing by a regular black hole
International Nuclear Information System (INIS)
Eiroa, Ernesto F; Sendra, Carlos M
2011-01-01
In this paper, we study a regular Bardeen black hole as a gravitational lens. We find the strong deflection limit for the deflection angle, from which we obtain the positions and magnifications of the relativistic images. As an example, we apply the results to the particular case of the supermassive black hole at the center of our galaxy.
Analytic stochastic regularization and gauge invariance
International Nuclear Information System (INIS)
Abdalla, E.; Gomes, M.; Lima-Santos, A.
1986-05-01
A proof that analytic stochastic regularization breaks gauge invariance is presented. This is done by an explicit one-loop calculation of the vacuum polarization tensor in scalar electrodynamics, which turns out not to be transversal. The counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization are also analysed. (Author)
Annotation of regular polysemy and underspecification
DEFF Research Database (Denmark)
Martínez Alonso, Héctor; Pedersen, Bolette Sandford; Bel, Núria
2013-01-01
We present the result of an annotation task on regular polysemy for a series of semantic classes or dot types in English, Danish and Spanish. This article describes the annotation process, the results in terms of inter-encoder agreement, and the sense distributions obtained with two methods...
Stabilization, pole placement, and regular implementability
Belur, MN; Trentelman, HL
In this paper, we study control by interconnection of linear differential systems. We give necessary and sufficient conditions for regular implementability of a given linear differential system. We formulate the problems of stabilization and pole placement as problems of finding a suitable,
12 CFR 725.3 - Regular membership.
2010-01-01
... UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit....5(b) of this part, and forwarding with its completed application funds equal to one-half of this... 1, 1979, is not required to forward these funds to the Facility until October 1, 1979. (3...
Supervised scale-regularized linear convolutionary filters
DEFF Research Database (Denmark)
Loog, Marco; Lauze, Francois Bernard
2017-01-01
also be solved relatively efficiently. All in all, the idea is to properly control the scale of a trained filter, which we solve by introducing a specific regularization term into the overall objective function. We demonstrate, on an artificial filter learning problem, the capabilities of our basic...
On regular riesz operators | Raubenheimer | Quaestiones ...
African Journals Online (AJOL)
The r-asymptotically quasi finite rank operators on Banach lattices are examples of regular Riesz operators. We characterise Riesz elements in a subalgebra of a Banach algebra in terms of Riesz elements in the Banach algebra. This enables us to characterise r-asymptotically quasi finite rank operators in terms of adjoint ...
Regularized Discriminant Analysis: A Large Dimensional Study
Yang, Xiaoke
2018-04-28
In this thesis, we focus on studying the performance of general regularized discriminant analysis (RDA) classifiers. The data used for analysis is assumed to follow a Gaussian mixture model with different means and covariances. RDA offers a rich class of regularization options, covering as special cases the regularized linear discriminant analysis (RLDA) and the regularized quadratic discriminant analysis (RQDA) classifiers. We analyze RDA under the double asymptotic regime where the data dimension and the training size both increase in a proportional way. This double asymptotic regime allows for the application of fundamental results from random matrix theory. Under the double asymptotic regime and some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that only depends on the data statistical parameters and dimensions. This result not only reveals some mathematical relations between the misclassification error and the class statistics, but can also be leveraged to select the optimal parameters that minimize the classification error, thus yielding the optimal classifier. Validation results on synthetic data show a good accuracy of our theoretical findings. We also construct a general consistent estimator to approximate the true classification error when the underlying statistics are unknown. We benchmark the performance of our proposed consistent estimator against the classical estimator on synthetic data. The observations demonstrate that the general estimator outperforms others in terms of mean squared error (MSE).
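A minimal RDA-style classifier can be sketched by shrinking a pooled covariance toward the identity (essentially the RLDA special case); the parameter gamma below is an illustrative stand-in for the regularization options analyzed in the thesis:

```python
import numpy as np

def rda_fit(X, y, gamma):
    """Fit an RDA-style classifier (RLDA special case): per-class means with
    a pooled covariance shrunk toward the identity,
    Sigma_reg = (1 - gamma) * Sigma_pooled + gamma * I, gamma in (0, 1]."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    # Pooled within-class covariance.
    S = sum((np.sum(y == c) - 1) * np.cov(X[y == c], rowvar=False)
            for c in classes) / (len(y) - len(classes))
    S_reg = (1 - gamma) * S + gamma * np.eye(X.shape[1])
    return classes, means, np.linalg.inv(S_reg)

def rda_predict(model, X):
    """Assign each row to the class with smallest Mahalanobis distance."""
    classes, means, P = model
    d = np.stack([np.einsum('ij,jk,ik->i', X - means[c], P, X - means[c])
                  for c in classes])
    return classes[np.argmin(d, axis=0)]
```

In the double-asymptotic setting the thesis studies, gamma would be tuned against the deterministic error limit rather than by cross-validation; here it is simply a free shrinkage knob.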
Bit-coded regular expression parsing
DEFF Research Database (Denmark)
Nielsen, Lasse; Henglein, Fritz
2011-01-01
the DFA-based parsing algorithm due to Dubé and Feeley to emit the bits of the bit representation without explicitly materializing the parse tree itself. We furthermore show that Frisch and Cardelli's greedy regular expression parsing algorithm can be straightforwardly modified to produce bit codings...
Tetravalent one-regular graphs of order 4p²
DEFF Research Database (Denmark)
Feng, Yan-Quan; Kutnar, Klavdija; Marusic, Dragan
2014-01-01
A graph is one-regular if its automorphism group acts regularly on the set of its arcs. In this paper tetravalent one-regular graphs of order 4p², where p is a prime, are classified.
Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin
2017-08-01
Traffic-induced moving force identification (MFI) is a typical inverse problem in the field of bridge structural health monitoring. Many regularization-based methods have been proposed for MFI. However, the MFI accuracy obtained from the existing methods is low when the moving forces enter into and exit a bridge deck, due to the low sensitivity of structural responses to the forces at these zones. To overcome this shortcoming, a novel moving average Tikhonov regularization method is proposed for MFI by combining it with moving average concepts. Firstly, the bridge-vehicle interaction moving force is assumed to be a discrete finite signal with stable average value (DFS-SAV). Secondly, the reasonable signal feature of the DFS-SAV is quantified and introduced to improve the penalty function (‖x‖₂²) defined in the classical Tikhonov regularization. Then, a feasible two-step strategy is proposed for selecting the regularization parameter and the balance coefficient defined in the improved penalty function. Finally, both numerical simulations on a simply-supported beam and laboratory experiments on a hollow tube beam are performed to assess the accuracy and feasibility of the proposed method. The illustrated results show that the moving forces can be accurately identified with strong robustness. Some related issues, such as the selection of the moving window length, the effect of different penalty functions, and the effect of different car speeds, are discussed as well.
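The moving-average flavor of such a penalty can be sketched as follows: instead of penalizing ‖x‖² directly, one penalizes the deviation of the identified force history from its own moving average. The operator and solver below are a hedged toy (the paper's DFS-SAV feature and two-step parameter selection are not reproduced):

```python
import numpy as np

def moving_average_matrix(n, w):
    """Moving-average operator M (window w, truncated at the boundaries)."""
    M = np.zeros((n, n))
    for i in range(n):
        lo, hi = max(0, i - w // 2), min(n, i + w // 2 + 1)
        M[i, lo:hi] = 1.0 / (hi - lo)
    return M

def ma_tikhonov(A, b, lam, w):
    """Tikhonov-type solution with penalty ||(I - M) x||^2: deviations of
    the identified signal from its own moving average are penalized,
    rather than the signal magnitude itself."""
    n = A.shape[1]
    L = np.eye(n) - moving_average_matrix(n, w)
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
```

Because the penalty vanishes on locally constant signals, a force history with a stable average is not biased toward zero the way the classical ‖x‖² penalty biases it, which is the motivation for the modified penalty.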
Save, H.; Bettadpur, S. V.
2013-12-01
It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual striping while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process that uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
Extreme values, regular variation and point processes
Resnick, Sidney I
1987-01-01
Extreme Values, Regular Variation and Point Processes is a readable and efficient account of the fundamental mathematical and stochastic process techniques needed to study the behavior of extreme values of phenomena based on independent and identically distributed random variables and vectors. It presents a coherent treatment of the distributional and sample path fundamental properties of extremes and records. It emphasizes the core primacy of three topics necessary for understanding extremes: the analytical theory of regularly varying functions; the probabilistic theory of point processes and random measures; and the link to asymptotic distribution approximations provided by the theory of weak convergence of probability measures in metric spaces. The book is self-contained and requires an introductory measure-theoretic course in probability as a prerequisite. Almost all sections have an extensive list of exercises which extend developments in the text, offer alternate approaches, test mastery and provide for enj...
Stream Processing Using Grammars and Regular Expressions
DEFF Research Database (Denmark)
Rasmussen, Ulrik Terp
disambiguation. The first algorithm operates in two passes in a semi-streaming fashion, using a constant amount of working memory and an auxiliary tape storage which is written in the first pass and consumed by the second. The second algorithm is a single-pass and optimally streaming algorithm which outputs...... as much of the parse tree as is semantically possible based on the input prefix read so far, and resorts to buffering as many symbols as is required to resolve the next choice. Optimality is obtained by performing a PSPACE-complete pre-analysis on the regular expression. In the second part we present...... Kleenex, a language for expressing high-performance streaming string processing programs as regular grammars with embedded semantic actions, and its compilation to streaming string transducers with worst-case linear-time performance. Its underlying theory is based on transducer decomposition into oracle...
Describing chaotic attractors: Regular and perpetual points
Dudkowski, Dawid; Prasad, Awadhesh; Kapitaniak, Tomasz
2018-03-01
We study the concepts of regular and perpetual points for describing the behavior of chaotic attractors in dynamical systems. The idea of these points, which have been recently introduced to theoretical investigations, is thoroughly discussed and extended into new types of models. We analyze the correlation between regular and perpetual points, as well as their relation with phase space, showing the potential usefulness of both types of points in the qualitative description of co-existing states. The ability of perpetual points in finding attractors is indicated, along with its potential cause. The location of chaotic trajectories and sets of considered points is investigated, and a study of the stability of systems is shown. A statistical analysis of observing the desired states is performed. We focus on various types of dynamical systems, i.e., chaotic flows with self-excited and hidden attractors, forced mechanical models, and semiconductor superlattices, exhibiting the universality of appearance of the observed patterns and relations.
Chaos regularization of quantum tunneling rates
International Nuclear Information System (INIS)
Pecora, Louis M.; Wu Dongho; Lee, Hoshik; Antonsen, Thomas; Lee, Ming-Jer; Ott, Edward
2011-01-01
Quantum tunneling rates through a barrier separating two-dimensional, symmetric, double-well potentials are shown to depend on the classical dynamics of the billiard trajectories in each well and, hence, on the shape of the wells. For shapes that lead to regular (integrable) classical dynamics, the tunneling rates fluctuate greatly with the eigenenergies of the states, sometimes by over two orders of magnitude. In contrast, shapes that lead to completely chaotic trajectories yield tunneling rates whose fluctuations are greatly reduced, a phenomenon we call regularization of tunneling rates. We show that a random-plane-wave theory of tunneling accounts for the mean tunneling rates and the small fluctuation variances for the chaotic systems.
Least square regularized regression in sum space.
Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu
2013-04-01
This paper proposes a least square regularized regression algorithm in sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency component of the target function with large and small scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of basic RKHSs. For sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we tradeoff the sample error and regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
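The sum-space construction described in this abstract has a simple computational core: the reproducing kernel of a sum of RKHSs is the sum of the individual kernels, so regularized least squares in the sum space reduces to kernel ridge regression with a summed Gram matrix. The sketch below illustrates this with two Gaussian kernels of different widths on a synthetic "nonflat" target; the kernel widths, regularization weight and data are illustrative assumptions, not the paper's experimental settings.

```python
import numpy as np

def gaussian_gram(X, Y, sigma):
    # Gram matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_sum_space(X, y, sigmas=(0.05, 1.0), lam=1e-3):
    # Regularized least squares in the sum space: the kernel of a sum of
    # RKHSs is the sum of the kernels, so we add the Gram matrices before
    # solving the usual kernel ridge system.
    K = sum(gaussian_gram(X, X, s) for s in sigmas)
    return np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)

def predict(X_train, alpha, X_test, sigmas=(0.05, 1.0)):
    K = sum(gaussian_gram(X_test, X_train, s) for s in sigmas)
    return K @ alpha

# "Nonflat" target: a smooth trend plus a high-frequency wiggle, so the
# large and small kernel scales each capture one component.
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-1, 1, (80, 1)), axis=0)
y = np.sin(2 * X[:, 0]) + 0.3 * np.sin(20 * X[:, 0])
alpha = fit_sum_space(X, y)
resid = np.abs(predict(X, alpha, X) - y).mean()
print(resid)
```

A single-scale Gaussian kernel must compromise between the two components; summing a wide and a narrow kernel lets the solver allocate each component to the appropriate scale, which is the intuition behind the paper's improved learning rate.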
Contour Propagation With Riemannian Elasticity Regularization
DEFF Research Database (Denmark)
Bjerre, Troels; Hansen, Mads Fogtmann; Sapru, W.
2011-01-01
Purpose/Objective(s): Adaptive techniques allow for correction of spatial changes during the time course of the fractionated radiotherapy. Spatial changes include tumor shrinkage and weight loss, causing tissue deformation and residual positional errors even after translational and rotational image...... the planning CT onto the rescans and correcting to reflect actual anatomical changes. For deformable registration, a free-form, multi-level, B-spline deformation model with Riemannian elasticity, penalizing non-rigid local deformations and volumetric changes, was used. Regularization parameters were defined...... on the original delineation and tissue deformation in the time course between scans form a better starting point than rigid propagation. There was no significant difference between locally and globally defined regularization. The method used in the present study suggests that deformed contours need to be reviewed...
Thin accretion disk around regular black hole
Directory of Open Access Journals (Sweden)
QIU Tianqi
2014-08-01
Full Text Available Penrose's cosmic censorship conjecture says that naked singularities do not exist in nature. It therefore seems reasonable to further conjecture that no singularity at all exists in nature. In this paper, a regular black hole without a singularity is studied in detail, in particular its thin accretion disk, energy flux, radiation temperature and accretion efficiency. It is found that the interaction of the regular black hole is stronger than that of the Schwarzschild black hole. Furthermore, the thin accretion disk loses energy more efficiently as the mass of the black hole decreases. These particular properties may be used to distinguish between black holes.
Convex nonnegative matrix factorization with manifold regularization.
Hu, Wenjun; Choi, Kup-Sze; Wang, Peiliang; Jiang, Yunliang; Wang, Shitong
2015-03-01
Nonnegative Matrix Factorization (NMF) has been extensively applied in many areas, including computer vision, pattern recognition, text mining, and signal processing. However, nonnegative entries are usually required for the data matrix in NMF, which limits its application. Besides, while the basis and encoding vectors obtained by NMF can represent the original data in low dimension, the representations do not always reflect the intrinsic geometric structure embedded in the data. Motivated by manifold learning and Convex NMF (CNMF), we propose a novel matrix factorization method called Graph Regularized and Convex Nonnegative Matrix Factorization (GCNMF) by introducing a graph regularized term into CNMF. The proposed matrix factorization technique not only inherits the intrinsic low-dimensional manifold structure, but also allows the processing of mixed-sign data matrix. Clustering experiments on nonnegative and mixed-sign real-world data sets are conducted to demonstrate the effectiveness of the proposed method. Copyright © 2014 Elsevier Ltd. All rights reserved.
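A minimal illustration of the graph-regularized idea: the sketch below implements plain graph-regularized NMF with multiplicative updates (in the style of Cai et al.'s GNMF) on a nonnegative matrix, as a simpler stand-in for the convex, mixed-sign GCNMF variant the abstract proposes; the parameter values and toy data are assumptions.

```python
import numpy as np

def gnmf(X, k, W, lam=0.1, iters=300, seed=0):
    # Multiplicative updates for graph-regularized NMF on a nonnegative
    # matrix X (features x samples); W is a sample-affinity graph whose
    # Laplacian penalty pulls the encodings of linked samples together.
    rng = np.random.default_rng(seed)
    n, m = X.shape
    U = rng.random((n, k)) + 0.1          # basis
    V = rng.random((m, k)) + 0.1          # encodings (one row per sample)
    D = np.diag(W.sum(axis=1))
    for _ in range(iters):
        U *= (X @ V) / (U @ (V.T @ V) + 1e-12)
        V *= (X.T @ U + lam * W @ V) / (V @ (U.T @ U) + lam * D @ V + 1e-12)
    return U, V

# Two blocks of near-identical columns; the graph links samples within
# each block, consistent with the data's cluster structure.
rng = np.random.default_rng(1)
X = np.hstack([np.outer([1, 1, 0, 0], np.ones(10)),
               np.outer([0, 0, 1, 1], np.ones(10))]) + 0.05 * rng.random((4, 20))
W = np.kron(np.eye(2), np.ones((10, 10))) - np.eye(20)
U, V = gnmf(X, 2, W)
err = np.linalg.norm(X - U @ V.T) / np.linalg.norm(X)
print(err)
```

The convex variant in the paper additionally writes the basis as a nonnegative combination of the data columns, which is what lifts the nonnegativity restriction on X itself.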
Collecting marine litter during regular fish surveys
Sluis, van der M.T.; Hal, van R.
2014-01-01
This report presents the results of the marine litter monitoring on the IBTS survey of 2014 and the BTS survey of 2013. Since 2013, marine litter has been collected during the International Bottom Trawl Survey (IBTS) and the Dutch Beam Trawl Survey (BTS) following a protocol developed by ICES. The composition
DEFF Research Database (Denmark)
Tamulevičius, S.; Jurkevičiute, A.; Armakavičius, N.
2017-01-01
In this paper we describe fabrication and characterization methods of two-dimensional periodic microstructures in photoresist with a pitch of 1.2 μm and lattice constants of 1.2-4.8 μm, formed using a two-beam multiple-exposure holographic lithography technique. The regular structures were recorded empl...
A short proof of increased parabolic regularity
Directory of Open Access Journals (Sweden)
Stephen Pankavich
2015-08-01
Full Text Available We present a short proof of the increased regularity obtained by solutions to uniformly parabolic partial differential equations. Though this setting is fairly introductory, our new method of proof, which uses a priori estimates and an inductive method, can be extended to prove analogous results for problems with time-dependent coefficients, advection-diffusion or reaction diffusion equations, and nonlinear PDEs even when other tools, such as semigroup methods or the use of explicit fundamental solutions, are unavailable.
Regular black hole in three dimensions
Myung, Yun Soo; Yoon, Myungseok
2008-01-01
We find a new black hole in three dimensional anti-de Sitter space by introducing an anisotropic perfect fluid inspired by the noncommutative black hole. This is a regular black hole with two horizons. We compare thermodynamics of this black hole with that of non-rotating BTZ black hole. The first-law of thermodynamics is not compatible with the Bekenstein-Hawking entropy.
Sparse regularization for force identification using dictionaries
Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng
2016-04-01
The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of the force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparse convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine dictionary can also successfully reconstruct harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both impact and harmonic forces in these cases.
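The core of such sparse force identification is an l1-regularized least-squares problem. The sketch below solves it with plain ISTA (iterative shrinkage-thresholding) rather than the SpaRSA solver used in the paper, on a toy Dirac-dictionary setup with a random stand-in for the transfer matrix; all sizes and parameter values are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(H, y, lam=0.1, iters=500):
    # Iterative shrinkage-thresholding for
    #   min_x 0.5 * ||H x - y||^2 + lam * ||x||_1,
    # a basic stand-in for the SpaRSA solver used in the paper.
    L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(H.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - H.T @ (H @ x - y) / L, lam / L)
    return x

# Sparse impact-like force: two nonzero samples in a Dirac (identity)
# dictionary, observed through a random "transfer matrix" H with noise.
rng = np.random.default_rng(1)
H = rng.standard_normal((60, 100))
f_true = np.zeros(100)
f_true[[20, 70]] = [1.5, -2.0]
y = H @ f_true + 0.01 * rng.standard_normal(60)
f_hat = ista(H, y)
support = np.flatnonzero(np.abs(f_hat) > 0.5)
print(support)
```

The l1 penalty drives most coefficients exactly to zero, so the number of active basis functions falls out of the optimization itself, which is the adaptivity the abstract contrasts with fixed-size l2 expansions.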
Analytic stochastic regularization and gauge theories
International Nuclear Information System (INIS)
Abdalla, E.; Gomes, M.; Lima-Santos, A.
1987-04-01
We prove that analytic stochastic regularization breaks gauge invariance. This is done by an explicit one-loop calculation of the two-, three- and four-point vertex functions of the gluon field in scalar chromodynamics, which turn out not to be gauge invariant. We analyse the counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization. (author) [pt
Preconditioners for regularized saddle point matrices
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe
2011-01-01
Roč. 19, č. 2 (2011), s. 91-112 ISSN 1570-2820 Institutional research plan: CEZ:AV0Z30860518 Keywords: saddle point matrices * preconditioning * regularization * eigenvalue clustering Subject RIV: BA - General Mathematics Impact factor: 0.533, year: 2011 http://www.degruyter.com/view/j/jnma.2011.19.issue-2/jnum.2011.005/jnum.2011.005.xml
Analytic stochastic regularization: gauge and supersymmetry theories
International Nuclear Information System (INIS)
Abdalla, M.C.B.
1988-01-01
Analytic stochastic regularization for gauge and supersymmetric theories is considered. Gauge invariance in spinor and scalar QCD is verified to break down by an explicit one-loop computation of the two-, three- and four-point vertex functions of the gluon field. As a result, non-gauge-invariant counterterms must be added. However, in the supersymmetric multiplets there is a cancellation rendering the counterterms gauge invariant. The calculation is performed at one-loop order. (author) [pt
Regularized forecasting of chaotic dynamical systems
International Nuclear Information System (INIS)
Bollt, Erik M.
2017-01-01
While local models of dynamical systems have been highly successful at turning extensive observations of even a chaotic dynamical system into useful forecasts, they suffer from a typical problem. With the k-nearest neighbors (kNN) method, local observations recur because of recurrences in the chaotic system, and this allows local models to be built by regression to low-dimensional polynomial approximations of the underlying system, estimating a Taylor series. This has been a popular approach, particularly for scalar data observations represented by time-delay embedding methods. However, such local models generally allow spatial discontinuities of forecasts when considered globally, meaning jumps in predictions, because the collected near neighbors vary from point to point. The source of these discontinuities is that the set of near neighbors varies discontinuously with the position of the sample point, and so therefore does the model built from them. It is possible to utilize local information inferred from near neighbors as usual while at the same time imposing a degree of regularity on a global scale. We present here a new global perspective extending the general local modeling concept. We then show how this perspective allows us to impose prior presumed regularity on the model through Tikhonov regularity theory, since this classic perspective on optimization in ill-posed problems naturally balances fitting an objective against some prior assumed form of the result, such as continuity or derivative regularity. This all reduces to matrix manipulations, which we demonstrate on a simple data set, with the implication that the method may find much broader application.
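The tension the abstract describes can be seen in a few lines: a local model built from k nearest neighbors, stabilized by a Tikhonov (ridge) penalty. This sketch is not the paper's global construction, only a minimal local-model baseline on logistic-map data, with assumed parameter values.

```python
import numpy as np

def ridge_forecast(train_x, train_y, query, k=10, lam=1e-2):
    # Local affine model from the k nearest neighbors of the query, with a
    # Tikhonov (ridge) penalty that keeps the fitted map stable as the
    # neighbor set changes from query to query.
    d = np.linalg.norm(train_x - query, axis=1)
    idx = np.argsort(d)[:k]
    A = np.c_[train_x[idx], np.ones(k)]        # affine features [x, 1]
    coef = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]),
                           A.T @ train_y[idx])
    return np.r_[query, 1.0] @ coef

# Logistic-map data: forecast x_{t+1} from x_t.
xs = [0.4]
for _ in range(500):
    xs.append(3.9 * xs[-1] * (1 - xs[-1]))
xs = np.array(xs)
X, y = xs[:-1, None], xs[1:]
pred = ridge_forecast(X[:400], y[:400], X[450])
err = abs(pred - y[450])
print(err)
```

Because the neighbor set `idx` changes discontinuously as the query moves, the fitted coefficients (and hence the forecast) jump between nearby queries; the paper's contribution is to impose Tikhonov-type regularity globally rather than per-query as done here.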
Minimal length uncertainty relation and ultraviolet regularization
Kempf, Achim; Mangano, Gianpiero
1997-06-01
Studies in string theory and quantum gravity suggest the existence of a finite lower limit Δx0 to the possible resolution of distances, at the latest on the scale of the Planck length of 10-35 m. Within the framework of the Euclidean path integral we explicitly show ultraviolet regularization in field theory through this short distance structure. Both rotation and translation invariance can be preserved. An example is studied in detail.
Regularity and chaos in cavity QED
International Nuclear Information System (INIS)
Bastarrachea-Magnani, Miguel Angel; López-del-Carpio, Baldemar; Chávez-Carlos, Jorge; Lerma-Hernández, Sergio; Hirsch, Jorge G
2017-01-01
The interaction of a quantized electromagnetic field in a cavity with a set of two-level atoms inside it can be described with algebraic Hamiltonians of increasing complexity, from the Rabi to the Dicke models. Their algebraic character allows, through the use of coherent states, a semiclassical description in phase space, where the non-integrable Dicke model has regions associated with regular and chaotic motion. The appearance of classical chaos can be quantified calculating the largest Lyapunov exponent over the whole available phase space for a given energy. In the quantum regime, employing efficient diagonalization techniques, we are able to perform a detailed quantitative study of the regular and chaotic regions, where the quantum participation ratio (PR) of coherent states on the eigenenergy basis plays a role equivalent to the Lyapunov exponent. It is noted that, in the thermodynamic limit, dividing the participation ratio by the number of atoms leads to a positive value in chaotic regions, while it tends to zero in the regular ones. (paper)
Solution path for manifold regularized semisupervised classification.
Wang, Gang; Wang, Fei; Chen, Tao; Yeung, Dit-Yan; Lochovsky, Frederick H
2012-04-01
Traditional learning algorithms use only labeled data for training. However, labeled examples are often difficult or time consuming to obtain since they require substantial human labeling efforts. On the other hand, unlabeled data are often relatively easy to collect. Semisupervised learning addresses this problem by using large quantities of unlabeled data with labeled data to build better learning algorithms. In this paper, we use the manifold regularization approach to formulate the semisupervised learning problem where a regularization framework which balances a tradeoff between loss and penalty is established. We investigate different implementations of the loss function and identify the methods which have the least computational expense. The regularization hyperparameter, which determines the balance between loss and penalty, is crucial to model selection. Accordingly, we derive an algorithm that can fit the entire path of solutions for every value of the hyperparameter. Its computational complexity after preprocessing is quadratic only in the number of labeled examples rather than the total number of labeled and unlabeled examples.
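Laplacian-regularized least squares (LapRLS) is a standard concrete instance of the manifold-regularization framework discussed here. The sketch below is a minimal dense implementation on a two-cluster toy problem with one label per cluster; the kernel width, penalty weights and data are illustrative assumptions, not the paper's choices.

```python
import numpy as np

def lap_rls(X, y_labeled, n_labeled, gamma_a=1e-2, gamma_i=1e-1, sigma=1.0):
    # Laplacian-regularized least squares: squared loss on the labeled
    # points plus a graph-Laplacian smoothness penalty over labeled AND
    # unlabeled points. (Normalization constants are folded into the gammas.)
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))       # kernel over all points
    W = K                                      # reuse the affinity as graph weights
    Lap = np.diag(W.sum(axis=1)) - W
    J = np.zeros((n, n))
    J[:n_labeled, :n_labeled] = np.eye(n_labeled)
    y = np.zeros(n)
    y[:n_labeled] = y_labeled
    A = J @ K + gamma_a * n_labeled * np.eye(n) + gamma_i * Lap @ K
    alpha = np.linalg.solve(A, y)
    return K @ alpha                           # decision values at every point

# Two well-separated clusters, one labeled point each; the unlabeled
# points are classified by the sign of the decision value.
rng = np.random.default_rng(2)
a = rng.normal([-2.0, 0.0], 0.3, (20, 2))
b = rng.normal([2.0, 0.0], 0.3, (20, 2))
X = np.vstack([a[:1], b[:1], a[1:], b[1:]])    # labeled points first
scores = lap_rls(X, np.array([1.0, -1.0]), 2)
print((scores[2:21] > 0).all(), (scores[21:] < 0).all())
```

The hyperparameters `gamma_a` and `gamma_i` set the loss/penalty balance the abstract refers to; the paper's solution-path algorithm traces the solution over all values of such a hyperparameter instead of refitting at each one.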
Regularizations: different recipes for identical situations
International Nuclear Information System (INIS)
Gambin, E.; Lobo, C.O.; Battistel, O.A.
2004-03-01
We present a discussion in which the choice of the regularization procedure and the routing of the internal lines momenta are put at the same level of arbitrariness in the analysis of Ward identities involving simple and well-known problems in QFT. These are the complex self-interacting scalar field and two simple models where the SVV and AVV processes are pertinent. We show that, in all these problems, the conditions for the preservation of symmetry relations are put in terms of the same combination of divergent Feynman integrals, which are evaluated in the context of a very general calculational strategy concerning the manipulations and calculations involving divergences. Within the adopted strategy, all the arbitrariness intrinsic to the problem is still maintained in the final results and, consequently, a perfect map can be obtained with the corresponding results of the traditional regularization techniques. We show that, when we require a universal interpretation for the arbitrariness involved, in order to get consistency with all stated physical constraints, a strong condition is imposed on regularizations which automatically eliminates the ambiguities associated with the routing of the internal lines momenta of loops. The conclusion is clean and sound: the association between ambiguities and unavoidable symmetry violations in Ward identities cannot be maintained if a unique recipe is required for identical situations in the evaluation of divergent physical amplitudes. (author)
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of the ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed only to a stationary point, problem-specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
Sparsity regularization for parameter identification problems
International Nuclear Information System (INIS)
Jin, Bangti; Maass, Peter
2012-01-01
The investigation of regularization schemes with sparsity promoting penalty terms has been one of the dominant topics in the field of inverse problems over the last years, and Tikhonov functionals with ℓ p -penalty terms for 1 ⩽ p ⩽ 2 have been studied extensively. The first investigations focused on regularization properties of the minimizers of such functionals with linear operators and on iteration schemes for approximating the minimizers. These results were quickly transferred to nonlinear operator equations, including nonsmooth operators and more general function space settings. The latest results on regularization properties additionally assume a sparse representation of the true solution as well as generalized source conditions, which yield some surprising and optimal convergence rates. The regularization theory with ℓ p sparsity constraints is relatively complete in this setting; see the first part of this review. In contrast, the development of efficient numerical schemes for approximating minimizers of Tikhonov functionals with sparsity constraints for nonlinear operators is still ongoing. The basic iterated soft shrinkage approach has been extended in several directions and semi-smooth Newton methods are becoming applicable in this field. In particular, the extension to more general non-convex, non-differentiable functionals by variational principles leads to a variety of generalized iteration schemes. We focus on such iteration schemes in the second part of this review. A major part of this survey is devoted to applying sparsity constrained regularization techniques to parameter identification problems for partial differential equations, which we regard as the prototypical setting for nonlinear inverse problems. Parameter identification problems exhibit different levels of complexity and we aim at characterizing a hierarchy of such problems. The operator defining these inverse problems is the parameter-to-state mapping. We first summarize some
Beam divergence scaling in neutral beam injectors
International Nuclear Information System (INIS)
Holmes, A.J.T.
1976-01-01
One of the main considerations in the design of neutral beam injectors is to minimize the divergence of the primary ion beam and hence maximize the beam transport and minimize the input of thermal gas. Experimental measurements of the divergence of a cylindrical ion beam are presented, and these measurements are used to analyze the major components of ion beam divergence, namely: space charge expansion, gas-ion scattering, emittance and optical aberrations. The implication of these divergence components in the design of a neutral beam injector system is discussed and a method of maximizing the beam current is described for a given area of source plasma
Dynamical chaos and beam-beam models
International Nuclear Information System (INIS)
Izrailev, F.M.
1990-01-01
Some aspects of the nonlinear dynamics of beam-beam interaction for simple one-dimensional and two-dimensional models of round and flat beams are discussed. The main attention is paid to the stochasticity threshold due to the overlapping of nonlinear resonances. The peculiarities of a round beam are investigated in view of using the round beams in storage rings to get high luminosity. 16 refs.; 7 figs
Beam-beam interaction and pacman effects in the SSC with momentum oscillation
International Nuclear Information System (INIS)
Mahale, N.K.; Ohnuma, S.
1989-01-01
In order to find the combined effects of beam-beam interaction (head-on and long-range) and random nonlinear multipoles in dipole magnets, the transverse oscillations of ''regular'' as well as ''pacman'' particles are traced for 256 synchrotron oscillation periods (corresponding to 135K revolutions) in the proposed SSC. Results obtained in this study do not show any obvious reduction of dynamic or linear apertures for pacman particles when compared with regular particles for (Δp/p) = 0. There are some indications of possible sudden or gradual increases in the oscillation amplitude, for pacman as well as regular particles, when the amplitude of momentum oscillation is as large as 3σ. 4 refs., 7 figs
Learning Sparse Visual Representations with Leaky Capped Norm Regularizers
Wangni, Jianqiao; Lin, Dahua
2017-01-01
Sparsity inducing regularization is an important part for learning over-complete visual representations. Despite the popularity of $\ell_1$ regularization, in this paper, we investigate the usage of non-convex regularizations in this problem. Our contribution consists of three parts. First, we propose the leaky capped norm regularization (LCNR), which allows model weights below a certain threshold to be regularized more strongly as opposed to those above, therefore imposes strong sparsity and...
Temporal regularity of the environment drives time perception
van Rijn, H; Rhodes, D; Di Luca, M
2016-01-01
It’s reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stim...
Directory of Open Access Journals (Sweden)
Dustin Kai Yan Lau
2014-03-01
Full Text Available Background Unlike alphabetic languages, Chinese uses a logographic script. However, in many characters the phonetic radical has the same pronunciation as the character as a whole. These are considered regular characters and can be read through a lexical non-semantic route (Weekes & Chen, 1999). Pseudocharacters are another way to study this non-semantic route. A pseudocharacter is the combination of existing semantic and phonetic radicals in their legal positions, resulting in a non-existing character (Ho, Chan, Chung, Lee, & Tsang, 2007). Pseudocharacters can be pronounced by direct derivation from the sound of the phonetic radical. Conversely, if the pronunciation of a character does not follow that of the phonetic radical, it is considered irregular and can only be correctly read through the lexical-semantic route. The aim of the current investigation was to examine reading aloud in normal adults. We hypothesized that the regularity effect, previously described for alphabetical scripts and acquired dyslexic patients of Chinese (Weekes & Chen, 1999; Wu, Liu, Sun, Chromik, & Zhang, 2014), would also be present in normal adult Chinese readers. Method Participants. Thirty (50% female) native Hong Kong Cantonese speakers with a mean age of 19.6 years and a mean education of 12.9 years. Stimuli. Sixty regular-, 60 irregular-, and 60 pseudo-characters (with at least 75% name agreement) in Chinese were matched by initial phoneme, number of strokes and family size. Additionally, regular- and irregular-characters were matched by frequency (low) and consistency. Procedure. Each participant was asked to read aloud the stimuli presented on a laptop using the DMDX software. The order of stimuli presentation was randomized. Data analysis. ANOVAs were carried out by participants and items with RTs and errors as dependent variables and type of stimuli (regular-, irregular- and pseudo-character) as repeated measures (F1 or between subject
International Nuclear Information System (INIS)
Hermansson, B.R.
1989-01-01
The main part of this thesis consists of 15 published papers, in which the numerical Beam Propagation Method (BPM) is investigated, verified and used in a number of applications. In the introduction, a derivation of the nonlinear Schroedinger equation is presented to connect the soliton papers with Maxwell's equations including a nonlinear polarization. This thesis focuses on the wide use of the BPM for numerical simulations of light and particle beams propagating through different types of structures such as waveguides, fibers, tapers, Y-junctions, laser arrays and crystalline solids. We verify the BPM in the problems listed above against other numerical methods, for example the finite-element method, perturbation methods and Runge-Kutta integration. Further, the BPM is shown to be a simple and effective way to numerically set up the Green's function in matrix form for periodic structures. The Green's function matrix can then be diagonalized with matrix methods, yielding the eigensolutions of the structure. The BPM's inherent transverse periodicity can be removed, if desired, by including, for example, an absorptive refractive index at the computational window edges. The interaction of two first-order soliton pulses is strongly dependent on the phase relationship between the individual solitons. When optical phase shift keying is used in coherent one-carrier wavelength communication, the fiber attenuation will suppress or delay the nonlinear instability. (orig.)
2015-01-01
Stable beams: two simple words that carry so much meaning at CERN. When LHC page one switched from "squeeze" to "stable beams" at 10.40 a.m. on Wednesday, 3 June, it triggered scenes of jubilation in control rooms around the CERN sites, as the LHC experiments started to record physics data for the first time in 27 months. This is what CERN is here for, and it’s great to be back in business after such a long period of preparation for the next stage in the LHC adventure. I’ve said it before, but I’ll say it again. This was a great achievement, and testimony to the hard and dedicated work of so many people in the global CERN community. I could start to list the teams that have contributed, but that would be a mistake. Instead, I’d simply like to say that an achievement as impressive as running the LHC – a machine of superlatives in every respect – takes the combined effort and enthusiasm of everyone ...
Convergence and fluctuations of Regularized Tyler estimators
Kammoun, Abla; Couillet, Romain; Pascal, Frederic; Alouini, Mohamed-Slim
2015-01-01
This article studies the behavior of regularized Tyler estimators (RTEs) of scatter matrices. The key advantages of these estimators are twofold. First, they guarantee by construction a good conditioning of the estimate and second, being a derivative of robust Tyler estimators, they inherit their robustness properties, notably their resilience to the presence of outliers. Nevertheless, one major problem posed by the use of RTEs in practice is the question of setting the regularization parameter p. While a high value of p is likely to push all the eigenvalues away from zero, it comes at the cost of a larger bias with respect to the population covariance matrix. A deep understanding of the statistics of RTEs is essential to come up with appropriate choices for the regularization parameter. This is not an easy task and might be out of reach, unless one considers asymptotic regimes wherein the number of observations n and/or their size N increase together. First asymptotic results have recently been obtained under the assumption that N and n are large and commensurable. Interestingly, no results concerning the regime of n going to infinity with N fixed exist, even though the investigation of this assumption has usually predated the analysis of the most difficult N and n large case. This motivates our work. In particular, we prove in the present paper that the RTEs converge to a deterministic matrix when n → ∞ with N fixed, which is expressed as a function of the theoretical covariance matrix. We also derive the fluctuations of the RTEs around this deterministic matrix and establish that these fluctuations converge in distribution to a multivariate Gaussian distribution with zero mean and a covariance depending on the population covariance and the parameter.
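The RTE itself is computed by a simple fixed-point iteration. The sketch below implements one common form, shrinking the Tyler fixed point toward the identity with a trace normalization; the exact normalization conventions vary across papers, and the parameter values and heavy-tailed toy data are assumptions.

```python
import numpy as np

def regularized_tyler(X, rho, iters=50):
    # Fixed-point iteration for a regularized Tyler scatter estimator:
    #   Sigma <- (1 - rho) * (N/n) * sum_i x_i x_i^T / (x_i^T Sigma^{-1} x_i)
    #            + rho * I,
    # followed by a trace normalization.
    n, N = X.shape                              # n samples of dimension N
    Sigma = np.eye(N)
    for _ in range(iters):
        Sinv = np.linalg.inv(Sigma)
        w = np.einsum('ij,jk,ik->i', X, Sinv, X)   # x_i^T Sigma^{-1} x_i
        S = (N / n) * (X.T * (1.0 / w)) @ X
        Sigma = (1.0 - rho) * S + rho * np.eye(N)
        Sigma *= N / np.trace(Sigma)
    return Sigma

# Heavy-tailed (Student-t-like) samples with an elliptical shape:
# Tyler-type estimators recover the shape matrix despite the outliers.
rng = np.random.default_rng(3)
N, n = 5, 2000
C = np.diag([4.0, 1.0, 1.0, 1.0, 1.0])
Z = rng.standard_normal((n, N)) @ np.linalg.cholesky(C).T
tau = rng.chisquare(3, n) / 3.0                 # random per-sample scaling
X = Z / np.sqrt(tau)[:, None]
Sigma = regularized_tyler(X, rho=0.1)
print(Sigma[0, 0] / Sigma[1, 1])
```

The weights 1/w downweight samples that are large relative to the current scatter estimate, which is the source of the outlier resilience; the `rho * I` term guarantees invertibility even when n is small relative to N.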
The use of regularization in inferential measurements
International Nuclear Information System (INIS)
Hines, J. Wesley; Gribok, Andrei V.; Attieh, Ibrahim; Uhrig, Robert E.
1999-01-01
Inferential sensing is the prediction of a plant variable through the use of correlated plant variables. A correct prediction of the variable can be used to monitor sensors for drift or other failures, making periodic instrument calibrations unnecessary. This move from periodic to condition-based maintenance can reduce costs and increase the reliability of the instrument. Having accurate, reliable measurements is important for signals that may impact safety or profitability. This paper investigates how collinearity adversely affects inferential sensing by making the results inconsistent and unrepeatable, and presents regularization as a potential solution. (author)
Regularization ambiguities in loop quantum gravity
International Nuclear Information System (INIS)
Perez, Alejandro
2006-01-01
One of the main achievements of loop quantum gravity is the consistent quantization of the analog of the Wheeler-DeWitt equation, which is free of ultraviolet divergences. However, ambiguities associated with the intermediate regularization procedure lead to an apparently infinite set of possible theories. The absence of a UV problem--the existence of well-behaved regularizations of the constraints--is intimately linked with the ambiguities arising in the quantum theory. Among these is the ambiguity associated with the SU(2) unitary representation used in the diffeomorphism-covariant 'point-splitting' regularization of the nonlinear functionals of the connection. This ambiguity is labeled by a half-integer m and is referred to here as the m ambiguity. The aim of this paper is to investigate its important implications. We first study 2+1 gravity (and more generally BF theory) quantized in the canonical formulation of loop quantum gravity. Only when the regularization of the quantum constraints is performed in terms of the fundamental representation of the gauge group does one obtain the usual topological quantum field theory as a result. In all other cases unphysical local degrees of freedom arise at the level of the regulated theory and conspire against the existence of the continuum limit. This shows that there is a clear-cut choice in the quantization of the constraints in 2+1 loop quantum gravity. We then analyze the effects of the ambiguity in 3+1 gravity, exhibiting the existence of spurious solutions for higher-representation quantizations of the Hamiltonian constraint. Although the analysis is not complete in 3+1 dimensions--due to the difficulties associated with the definition of the physical inner product--it provides evidence supporting the definition of the quantum dynamics of loop quantum gravity in terms of the fundamental representation of the gauge group as the only consistent possibility. If the gauge group is SO(3) we find
Effort variation regularization in sound field reproduction
DEFF Research Database (Denmark)
Stefanakis, Nick; Jacobsen, Finn; Sarris, Ioannis
2010-01-01
In this paper, active control is used in order to reproduce a given sound field in an extended spatial region. A method is proposed which minimizes the reproduction error at a number of control positions with the reproduction sources holding a certain relation within their complex strengths......), and adaptive wave field synthesis (AWFS), both under free-field conditions and in reverberant rooms. It is shown that effort variation regularization overcomes the problems associated with small spaces and with a low ratio of direct to reverberant energy, improving thus the reproduction accuracy...
New regularities in mass spectra of hadrons
International Nuclear Information System (INIS)
Kajdalov, A.B.
1989-01-01
The properties of bosonic and baryonic Regge trajectories for hadrons composed of light quarks are considered. Experimental data agree with the existence of daughter trajectories consistent with string models. It is pointed out that the parity doubling for baryonic trajectories, observed experimentally, is not understood in the existing quark models. The mass spectrum of bosons and baryons indicates an approximate supersymmetry in the mass region M > 1 GeV. These regularities indicate a high degree of symmetry in the dynamics in the confinement region. 8 refs.; 5 figs
Total-variation regularization with bound constraints
International Nuclear Information System (INIS)
Chartrand, Rick; Wohlberg, Brendt
2009-01-01
We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.
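The paper's splitting approach decouples the TV step from the constraint-enforcement step. As a far simpler stand-in that illustrates the bound-constrained TV objective itself, here is a toy projected-gradient solver on a smoothed TV term (not the authors' algorithm; function name, smoothing, and parameters are all illustrative):

```python
import numpy as np

def tv_denoise_bounded(y, lam=0.2, lo=0.0, hi=1.0, n_iter=200, eps=1e-3, step=0.2):
    """Toy bound-constrained TV denoising: minimize
    0.5*||x - y||^2 + lam * TV_eps(x)  subject to  lo <= x <= hi,
    by gradient descent on the smoothed TV term plus projection onto bounds."""
    x = y.copy()
    for _ in range(n_iter):
        dx = np.diff(x, axis=0, append=x[-1:, :])
        dy = np.diff(x, axis=1, append=x[:, -1:])
        mag = np.sqrt(dx**2 + dy**2 + eps)
        # divergence of the normalized gradient field (negative TV gradient)
        px, py = dx / mag, dy / mag
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        grad = (x - y) - lam * div
        x = np.clip(x - step * grad, lo, hi)  # gradient step, then project
    return x
```

In the actual splitting scheme the TV subproblem and the bound projection are solved as separate, exact subproblems, which is what lets existing TV solvers be reused with minimal alteration.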
Bayesian regularization of diffusion tensor images
DEFF Research Database (Denmark)
Frandsen, Jesper; Hobolth, Asger; Østergaard, Leif
2007-01-01
Diffusion tensor imaging (DTI) is a powerful tool in the study of the course of nerve fibre bundles in the human brain. Using DTI, the local fibre orientation in each image voxel can be described by a diffusion tensor which is constructed from local measurements of diffusion coefficients along...... several directions. The measured diffusion coefficients and thereby the diffusion tensors are subject to noise, leading to possibly flawed representations of the three dimensional fibre bundles. In this paper we develop a Bayesian procedure for regularizing the diffusion tensor field, fully utilizing...
Indefinite metric and regularization of electrodynamics
International Nuclear Information System (INIS)
Gaudin, M.
1984-06-01
The invariant regularization of Pauli and Villars in quantum electrodynamics can be considered as deriving from a local and causal Lagrangian theory for spin-1/2 bosons, by introducing an indefinite metric and a condition on the allowed states similar to the Lorentz condition. The consequence is the asymptotic freedom of the photon propagator. We present a calculation of the effective charge to fourth order in the coupling as a function of the auxiliary masses, the theory avoiding all mass divergences to this order.
Strategies for regular segmented reductions on GPU
DEFF Research Database (Denmark)
Larsen, Rasmus Wriedt; Henriksen, Troels
2017-01-01
We present and evaluate an implementation technique for regular segmented reductions on GPUs. Existing techniques tend to be either consistent in performance but relatively inefficient in absolute terms, or optimised for specific workloads and thereby exhibiting bad performance for certain input...... is in the context of the Futhark compiler, the implementation technique is applicable to any library or language that has a need for segmented reductions. We evaluate the technique on four microbenchmarks, two of which we also compare to implementations in the CUB library for GPU programming, as well as on two...
Dose domain regularization of MLC leaf patterns for highly complex IMRT plans
Energy Technology Data Exchange (ETDEWEB)
Nguyen, Dan; Yu, Victoria Y.; Ruan, Dan; Cao, Minsong; Low, Daniel A.; Sheng, Ke, E-mail: ksheng@mednet.ucla.edu [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California 90095 (United States); O’Connor, Daniel [Department of Mathematics, University of California Los Angeles, Los Angeles, California 90095 (United States)
2015-04-15
Purpose: The advent of automated beam orientation and fluence optimization enables more complex intensity modulated radiation therapy (IMRT) planning using an increasing number of fields to exploit the expanded solution space. This has created a challenge in converting complex fluences to robust multileaf collimator (MLC) segments for delivery. A novel method to regularize the fluence map and simplify MLC segments is introduced to maximize delivery efficiency, accuracy, and plan quality. Methods: In this work, we implemented a novel approach to regularize optimized fluences in the dose domain. The treatment planning problem was formulated in an optimization framework to minimize the segmentation-induced dose distribution degradation subject to a total variation regularization to encourage piecewise smoothness in fluence maps. The optimization problem was solved using a first-order primal-dual algorithm known as the Chambolle-Pock algorithm. Plans for 2 GBM, 2 head and neck, and 2 lung patients were created using 20 automatically selected and optimized noncoplanar beams. The fluence was first regularized using Chambolle-Pock and then stratified into equal steps, and the MLC segments were calculated using a previously described level reducing method. Isolated apertures with sizes smaller than preset thresholds of 1–3 bixels, which are square units of an IMRT fluence map from MLC discretization, were removed from the MLC segments. Performance of the dose domain regularized (DDR) fluences was compared to direct stratification and direct MLC segmentation (DMS) of the fluences using level reduction without dose domain fluence regularization. Results: For all six cases, the DDR method increased the average planning target volume dose homogeneity (D95/D5) from 0.814 to 0.878 while maintaining equivalent dose to organs at risk (OARs). Regularized fluences were more robust to MLC sequencing, particularly to the stratification and small aperture removal. The maximum and
Dukhovniy, Viktor; Stulina, Galina; Eshchanov, Odylbek
2013-04-01
organized in 2005-2009 six expeditions for complex remote sensing and ground investigations of the former Aral Sea bottom, complemented in 2010-2011 by two expeditions with GFZ. As a result, landscape, soil, and environment mapping was carried out, with determination of ecologically unstable zones and an assessment of the total change in land conditions compared with pre-independence times. Moreover, a methodology for monitoring water, environmental, and hydrogeological indicators over the whole delta area was elaborated, tested, and combined with remote sensing data on the Amudarya delta for 2009-2012. This permits SIC ICWC to organize systematic permanent (decadal) monitoring and recording of the size, volume, and water level of the Aral Sea. Since the beginning of regular observations of the Aral Sea level, two periods can be distinguished: 1. Conditionally natural period (1911-1960), characterized by a relatively stable hydrological regime, with fluctuations in the level around 53 m and a range of inter-annual fluctuations of no more than 1 m, when the sea received annually about half of the run-off of the Syrdarya and Amudarya Rivers, i.e., 50-60 km3/yr. 2. Intensive anthropogenic impact period: since the 1960s, a vast extension of irrigated land was carried out in Central Asia that resulted in intensive diversion of river run-off. Since then, the sea level has been falling steadily, causing a dramatic reduction in the water surface area, a decrease in water volume and depth, great changes in shoreline configuration, and an expansion of the desert areas adjacent to the Aral Sea. From 1960 to 1985, when the sea was still an integral water body, only a slight lowering of the sea level took place until the 1970s, with the mean level dropping by about 1 m. The desiccation process accelerated visibly from the mid 1970s. In 1975-1980, the level decreased by 0.65 m a year on average.
Moreover, the level dropped greatly when the run-off of the Amudarya did not reach the Aral Sea
Energy Technology Data Exchange (ETDEWEB)
Gelbart, W.; Johnson, R. R.; Abeysekera, B. [ASD Inc. Garden Bay, BC (Canada); Best Theratronics Ltd Ottawa Ontario (Canada); PharmaSpect Ltd., Burnaby BC (Canada)
2012-12-19
An inexpensive beam profile monitor is based on the well-proven rotating wire method. The monitor can display beam position and shape in real time for particle beams of most energies and beam currents up to 200 µA. Beam shape, position, cross-section, and other parameters are displayed on a computer screen.
International Nuclear Information System (INIS)
Schiffer, J.P.
1989-01-01
Ions in a storage ring are confined to a mean orbit by focusing elements. To a first approximation these may be described by a constant harmonic restoring force: F = -Kr. If the particles in the frame moving along with the beam have small random thermal energies, then they will occupy a cylindrical volume around the mean orbit and the focusing force will be balanced by that from the mutual repulsion of the particles. Inside the cylinder only residual two-particle interactions will play a significant role and some form of ordering might be expected to take place. The results of some of the first MD calculations showed a surprising result: not only were the particles arranged in the form of a tube, but they formed well-defined layers: concentric shells, with the particles in each shell arranged in the hexagonal lattice that is characteristic of two-dimensional Coulomb systems. This paper discusses this condensed layer structure.
Emotion regulation deficits in regular marijuana users.
Zimmermann, Kaeli; Walz, Christina; Derckx, Raissa T; Kendrick, Keith M; Weber, Bernd; Dore, Bruce; Ochsner, Kevin N; Hurlemann, René; Becker, Benjamin
2017-08-01
Effective regulation of negative affective states has been associated with mental health. Impaired regulation of negative affect represents a risk factor for dysfunctional coping mechanisms such as drug use and thus could contribute to the initiation and development of problematic substance use. This study investigated behavioral and neural indices of emotion regulation in regular marijuana users (n = 23) and demographically matched nonusing controls (n = 20) by means of an fMRI cognitive emotion regulation (reappraisal) paradigm. Relative to nonusing controls, marijuana users demonstrated increased neural activity in a bilateral frontal network comprising precentral, middle cingulate, and supplementary motor regions during reappraisal of negative affect (P marijuana users relative to controls. Together, the present findings could reflect an unsuccessful attempt of compensatory recruitment of additional neural resources in the context of disrupted amygdala-prefrontal interaction during volitional emotion regulation in marijuana users. As such, impaired volitional regulation of negative affect might represent a consequence of, or risk factor for, regular marijuana use. Hum Brain Mapp 38:4270-4279, 2017. © 2017 Wiley Periodicals, Inc.
Efficient multidimensional regularization for Volterra series estimation
Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan
2018-05-01
This paper presents an efficient nonparametric time domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need for long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time invariant systems. To avoid excessive memory needs in the case of long measurements or a large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on the novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios, varying from a simple Finite Impulse Response (FIR) model to a 3rd degree Volterra series with and without transient removal, are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with the white-box (physical) models.
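The regularization the authors extend to Volterra series originates in kernel-based impulse response estimation for linear systems. A minimal sketch of that linear building block, using a TC ("tuned/correlated") kernel as the prior, is below; the function name and default hyperparameters are illustrative, not from the paper:

```python
import numpy as np

def fir_estimate_tc(u, y, n_taps=30, c=1.0, lam=0.9, sigma2=0.1):
    """Regularized FIR estimate with a TC kernel prior:
    theta = argmin ||y - Phi theta||^2 + sigma2 * theta^T P^{-1} theta,
    where P[i,j] = c * lam**max(i,j) encodes a smooth, exponentially
    decaying impulse response."""
    N = len(y)
    # Toeplitz-style regressor matrix of past inputs
    Phi = np.zeros((N, n_taps))
    for k in range(n_taps):
        Phi[k:, k] = u[:N - k]
    idx = np.arange(n_taps)
    P = c * lam ** np.maximum.outer(idx, idx)  # TC kernel
    theta = np.linalg.solve(Phi.T @ Phi + sigma2 * np.linalg.inv(P), Phi.T @ y)
    return theta
```

In the Volterra setting the same idea is applied per kernel dimension, which is where the multidimensional regularization and the memory issues addressed by the gradient-based variant come in.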
Supporting Regularized Logistic Regression Privately and Efficiently
Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei
2016-01-01
As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738
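The statistical model being protected here is ordinary L2-regularized logistic regression. A minimal non-private sketch of that model is shown below (the cryptographic protocol itself is the paper's contribution and is not reproduced; function name and hyperparameters are illustrative):

```python
import numpy as np

def logistic_l2(X, y, lam=1.0, lr=0.1, n_iter=500):
    """Plain (non-private) L2-regularized logistic regression fitted by
    gradient descent: minimize mean log-loss + (lam/2)*||w||^2."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        grad = X.T @ (p - y) / n + lam * w  # log-loss gradient + L2 term
        w -= lr * grad
    return w
```

In the multi-institution scenario of the paper, each site holds a share of `X` and `y`, and the gradient aggregation step is what the cryptographic machinery protects.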
Multiple graph regularized nonnegative matrix factorization
Wang, Jim Jing-Yan
2013-10-01
Non-negative matrix factorization (NMF) has been widely used as a component-based data representation method. To overcome the disadvantage of NMF in failing to consider the manifold structure of a data set, graph-regularized NMF (GrNMF) was proposed by Cai et al., constructing an affinity graph and searching for a matrix factorization that respects the graph structure. Selecting a graph model and its corresponding parameters is critical for this strategy. This process is usually carried out by cross-validation or discrete grid search, which are time consuming and prone to overfitting. In this paper, we propose a GrNMF variant, called MultiGrNMF, in which the intrinsic manifold is approximated by a linear combination of several graphs with different models and parameters, inspired by ensemble manifold regularization. The factorization matrices and the linear combination coefficients of the graphs are determined simultaneously within a unified objective function. They are alternately optimized in an iterative algorithm, resulting in a novel data representation algorithm. Extensive experiments on a protein subcellular localization task and an Alzheimer's disease diagnosis task demonstrate the effectiveness of the proposed algorithm. © 2013 Elsevier Ltd. All rights reserved.
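The single-graph GrNMF baseline that MultiGrNMF extends can be sketched with the standard multiplicative updates for the objective ||X - UVᵀ||² + λ·tr(VᵀLV); this is a simplified illustration (MultiGrNMF additionally learns combination weights over several graphs, which is omitted here):

```python
import numpy as np

def grnmf(X, W, k=2, lam=0.1, n_iter=200, eps=1e-9):
    """Graph-regularized NMF (single graph) via multiplicative updates.

    X : (n_features, n_samples) nonnegative data matrix.
    W : (n_samples, n_samples) symmetric affinity graph over samples.
    Returns factors U (n_features, k) and V (n_samples, k)."""
    rng = np.random.default_rng(0)
    n, m = X.shape
    U = rng.random((n, k))
    V = rng.random((m, k))
    D = np.diag(W.sum(axis=1))  # degree matrix; graph Laplacian L = D - W
    for _ in range(n_iter):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * W @ V) / (V @ (U.T @ U) + lam * D @ V + eps)
    return U, V
```

The `lam * W @ V` / `lam * D @ V` terms are what pull the representations of graph-adjacent samples together; setting `lam=0` recovers plain NMF.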
Accelerating Large Data Analysis By Exploiting Regularities
Moran, Patrick J.; Ellsworth, David
2003-01-01
We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical to Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model where we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time-series of meshes with a transformed mesh object where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests, on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.
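The simplest regularity described above, an axis-aligned rectilinear mesh hiding inside a general curvilinear representation, can be detected with a check like the following (a hypothetical helper for the 2-D case, not code from the paper):

```python
import numpy as np

def is_axis_aligned_rectilinear(X, Y):
    """Detect whether a 2-D curvilinear mesh (X[i,j], Y[i,j]) is actually
    axis-aligned rectilinear: x depends only on the i index and y only on
    the j index, so the mesh can be replaced by two 1-D coordinate arrays."""
    x_varies_only_with_i = np.allclose(X, X[:, :1])  # columns identical
    y_varies_only_with_j = np.allclose(Y, Y[:1, :])  # rows identical
    return x_varies_only_with_i and y_varies_only_with_j
```

When the check succeeds, the full (ni, nj) coordinate arrays can be replaced by `X[:, 0]` and `Y[0, :]`, which is the kind of substitution the data model performs transparently for the visualization algorithms.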
Multiview Hessian regularization for image annotation.
Liu, Weifeng; Tao, Dacheng
2013-07-01
The rapid development of computer hardware and Internet technology makes large-scale data-dependent models computationally tractable, and opens a bright avenue for annotating images through innovative machine learning algorithms. Semisupervised learning (SSL) has therefore received intensive attention in recent years and has been successfully deployed in image annotation. One representative work in SSL is Laplacian regularization (LR), which smoothes the conditional distribution for classification along the manifold encoded in the graph Laplacian. However, it is observed that LR biases the classification function toward a constant function, which possibly results in poor generalization. In addition, LR is developed to handle uniformly distributed data (or single-view data), although instances or objects, such as images and videos, are usually represented by multiview features, such as color, shape, and texture. In this paper, we present multiview Hessian regularization (mHR) to address these two problems in LR-based image annotation. In particular, mHR optimally combines multiple Hessian regularizers, each of which is obtained from a particular view of the instances, and steers the classification function so that it varies linearly along the data manifold. We apply mHR to kernel least squares and support vector machines as two examples for image annotation. Extensive experiments on the PASCAL VOC'07 dataset validate the effectiveness of mHR by comparing it with baseline algorithms, including LR and HR.
EIT image reconstruction with four dimensional regularization.
Dai, Tao; Soleimani, Manuchehr; Adler, Andy
2008-09-01
Electrical impedance tomography (EIT) reconstructs internal impedance images of the body from electrical measurements on the body surface. The temporal resolution of EIT data can be very high, although the spatial resolution of the images is relatively low. Most EIT reconstruction algorithms calculate images from data frames independently, although the data are actually highly correlated, especially in high-speed EIT systems. This paper proposes 4-D EIT image reconstruction for functional EIT. The new approach is developed to directly use prior models of the temporal correlations among images and the 3-D spatial correlations among image elements. A fast algorithm is also developed to reconstruct the regularized images. Image reconstruction is posed in terms of an augmented image and measurement vector which are concatenated from a specific number of previous and future frames. The reconstruction is then based on an augmented regularization matrix which reflects the a priori constraints on the temporal and 3-D spatial correlations of image elements. A temporal factor reflecting the relative strength of the image correlation is objectively calculated from measurement data. Results show that image reconstruction models which account for inter-element correlations, in both space and time, show improved resolution and noise performance in comparison to simpler image models.
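The augmented formulation can be illustrated with a toy linearized reconstruction that stacks d frames and adds a temporal first-difference penalty, a simplification of the paper's augmented regularization matrix (all names and parameters are illustrative):

```python
import numpy as np

def reconstruct_4d(J, V, lam=0.1, mu=1.0):
    """Toy joint reconstruction of d frames:
    minimize ||A x - v||^2 + lam^2 ||x||^2 + mu^2 ||D_t x||^2,
    where A is block-diagonal in J and D_t penalizes frame-to-frame change.

    J : (m, n) linearized forward model; V : (d, m) stacked measurement frames.
    Returns the (d, n) stack of reconstructed images."""
    m, n = J.shape
    d = V.shape[0]
    A = np.kron(np.eye(d), J)          # block-diagonal forward model
    Dt = np.diff(np.eye(d), axis=0)    # (d-1, d) temporal difference operator
    Rt = np.kron(Dt, np.eye(n))        # applies the difference per pixel
    H = A.T @ A + lam**2 * np.eye(d * n) + mu**2 * Rt.T @ Rt
    x = np.linalg.solve(H, A.T @ V.ravel())
    return x.reshape(d, n)
```

Increasing `mu` trades temporal resolution for noise suppression, which is the trade-off the paper's data-driven temporal factor sets objectively.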
Accelerator beam application in science and industry
International Nuclear Information System (INIS)
Hagiwara, M.
1996-01-01
Various accelerator beams are being used widely in science and industry, and the area of their application is rapidly expanding. This paper focuses on recent efforts made in the field of radiation chemistry, especially in materials development using electron and ion beams. Concerning the applications of electron beams, the synthesis of SiC fibers, the improvement of the radiation resistance of polytetrafluoroethylene (PTFE), and the preparation of an adsorbent for uranium recovery from sea water are described. In the synthesis of SiC, electron beams were used effectively to cross-link precursor fibers to prevent their deformation upon heating during their pyrolysis to SiC fibers. The improvement of the radiation resistance of PTFE was achieved by crosslinking. As to the preparation of the adsorbent for uranium recovery, chelating resins containing amidoxime groups were shown to work as good adsorbents of uranium from sea water. The Takasaki Radiation Chemistry Research Establishment of JAERI completed the accelerator facility named TIARA for R and D of ion beam applications three years ago. Some results were presented on studies of radiation effects on solar cells and LSIs for space use and on the synthesis of functional materials. The radiation resistance of solar cells was tested with both electron and proton beams using a beam scanning technique for irradiation over a wide area, and the ultra-fast transient current induced by a heavy-ion microbeam was measured in studies of the mechanisms of single-event upset (SEU) in LSIs. In the synthesis of organic functional materials, a temperature-responsive particle track membrane was developed. Techniques for RBS and NRA using heavy ion beams were established for analyzing the structures of multi-layered materials. A single-crystalline thin film of diamond was successfully formed on an Si substrate by deposition of mass-separated C-12 ions at 100 eV. (author)
Accretion onto some well-known regular black holes
International Nuclear Information System (INIS)
Jawad, Abdul; Shahzad, M.U.
2016-01-01
In this work, we discuss accretion onto static, spherically symmetric regular black holes for specific choices of the equation-of-state parameter. The underlying regular black holes are charged regular black holes based on the Fermi-Dirac distribution, the logistic distribution, and nonlinear electrodynamics, respectively, as well as the Kehagias-Sfetsos asymptotically flat regular black hole. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of the radial velocity, the energy density, and the rate of change of the mass for each of the regular black holes. (orig.)
Beam-Beam Interaction Studies at LHC
Schaumann, Michaela; Alemany Fernandez, R
2011-01-01
The beam-beam force is one of the most important limiting factors in the performance of a collider, mainly in the delivered luminosity. Therefore, it is essential to measure its effects in the LHC. Moreover, an adequate understanding of the LHC beam-beam interaction is of crucial importance in the design phases of the LHC luminosity upgrade. Due to the complexity of this topic, the work presented in this thesis concentrates on the beam-beam tune shift and orbit effects. The linear coherent beam-beam parameter at the LHC has been determined with head-on collisions with a small number of bunches at injection energy (450 GeV). For high bunch intensities the beam-beam force is strong enough to expect orbit effects if the two beams do not collide head-on but with a crossing angle or with a given offset. As a consequence the closed orbit changes. The closed orbit of an unperturbed machine with respect to a machine where the beam-beam force becomes more and more important has been studied and the results are as well ...
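For reference, the linear beam-beam parameter for round Gaussian beams colliding head-on is conventionally written as

\[
\xi = \frac{N_b \, r_0 \, \beta^*}{4\pi \, \gamma \, \sigma^{*2}},
\]

where \(N_b\) is the bunch population, \(r_0\) the classical radius of the beam particle, \(\beta^*\) the beta function at the interaction point, \(\gamma\) the relativistic Lorentz factor, and \(\sigma^*\) the transverse rms beam size at the collision point. This is the standard textbook form; the thesis may use a slightly different normalization or a flat-beam variant.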
A note on a degenerate elliptic equation with applications for lakes and seas
Directory of Open Access Journals (Sweden)
Didier Bresch
2004-03-01
Full Text Available. In this paper, we give an intermediate regularity result on a degenerate elliptic equation with a weight blowing up on the boundary. This kind of equation is encountered when modelling phenomena linked to seas or lakes. We give some examples where such regularity is useful.
Laplacian embedded regression for scalable manifold regularization.
Chen, Lin; Tsang, Ivor W; Xu, Dong
2012-06-01
Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small scale problems due to the high computational cost of the matrix inversion operation involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression by introducing an intermediate decision variable into the manifold regularization framework. By using ε-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. Also, we derive Laplacian embedded RLS (LapERLS) corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are two-fold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately, and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient than for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable into a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large scale SSL problems. Extensive experiments on both toy and real
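The transformed-kernel idea above can be illustrated with a minimal manifold-regularized least squares sketch (not the authors' LapESVR/LapERLS implementation; the RBF kernel, the k-nearest-neighbour graph, and all parameter values are illustrative assumptions):

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    sq = np.sum(X**2, axis=1)
    d2 = np.clip(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0.0, None)
    return np.exp(-gamma * d2)

def graph_laplacian(X, k=3):
    # Unnormalized Laplacian of a symmetrized k-nearest-neighbour graph.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    W = np.zeros_like(d2)
    for i in range(len(X)):
        W[i, np.argsort(d2[i])[1:k + 1]] = 1.0
    W = np.maximum(W, W.T)
    return np.diag(W.sum(axis=1)) - W

def lap_rls_fit(X, y_labeled, labeled_idx, lam=1e-2, mu=1e-2, gamma=1.0):
    # LapRLS-style solve: squared loss on the labeled points, an RKHS
    # norm penalty, and a graph-Laplacian smoothness penalty. Writing
    # f = K a gives the linear system (J K + lam I + mu L K) a = Y.
    n = len(X)
    K, L = rbf_kernel(X, gamma), graph_laplacian(X)
    J = np.zeros((n, n))
    J[labeled_idx, labeled_idx] = 1.0
    Y = np.zeros(n)
    Y[labeled_idx] = y_labeled
    alpha = np.linalg.solve(J @ K + lam * np.eye(n) + mu * L @ K, Y)
    return K @ alpha  # predictions on all points, labeled and unlabeled

# Two well-separated clusters with one label each: the unlabeled points
# should inherit the label of their cluster through the graph term.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(5, 0.3, (10, 2))])
preds = lap_rls_fit(X, np.array([1.0, -1.0]), [0, 19])
```

The graph Laplacian term is what propagates the two labels to the unlabeled points; with `mu = 0` the fit would say nothing about points far from the labeled ones.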
Beam halo in high-intensity beams
International Nuclear Information System (INIS)
Wangler, T.P.
1993-01-01
In space-charge dominated beams the nonlinear space-charge forces produce a filamentation pattern, which in projection to the 2-D phase spaces results in a 2-component beam consisting of an inner core and a diffuse outer halo. The beam-halo is of concern for a next generation of cw, high-power proton linacs that could be applied to intense neutron generators for nuclear materials processing. The author describes what has been learned about beam halo and the evolution of space-charge dominated beams using numerical simulations of initial laminar beams in uniform linear focusing channels. Initial results are presented from a study of beam entropy for an intense space-charge dominated beam
Beam-beam issues in asymmetric colliders
International Nuclear Information System (INIS)
Furman, M.A.
1992-07-01
We discuss generic beam-beam issues for proposed asymmetric e+e- colliders. We illustrate the issues by choosing, as examples, the proposals by Cornell University (CESR-B), KEK, and SLAC/LBL/LLNL (PEP-II)
Diversity and community structure of epibenthic invertebrates and fish in the North Sea
DEFF Research Database (Denmark)
Callaway, R.; Alsväg, J.; de Boois, I.
2002-01-01
The structure of North Sea benthic invertebrate and fish communities is an important indicator of anthropogenic and environmental impacts. Although North Sea fish stocks are monitored regularly, benthic fauna are not. Here, we report the results of a survey carried out in 2000, in which five...
Constrained least squares regularization in PET
International Nuclear Information System (INIS)
Choudhury, K.R.; O'Sullivan, F.O.
1996-01-01
Standard reconstruction methods used in tomography produce images with undesirable negative artifacts in background and in areas of high local contrast. While sophisticated statistical reconstruction methods can be devised to correct for these artifacts, their computational implementation is excessive for routine operational use. This work describes a technique for rapid computation of approximate constrained least squares regularization estimates. The unique feature of the approach is that it involves no iterative projection or backprojection steps. This contrasts with the familiar computationally intensive algorithms based on algebraic reconstruction (ART) or expectation-maximization (EM) methods. Experimentation with the new approach for deconvolution and mixture analysis shows that the root mean square error quality of estimators based on the proposed algorithm matches and usually dominates that of more elaborate maximum likelihood estimators, at a fraction of the computational effort
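As a loose illustration of constrained least squares regularization (a generic sketch, not the authors' non-iterative PET algorithm): a ridge-penalized, nonnegativity-constrained deconvolution can be posed as a single NNLS problem by stacking the penalty into the design matrix. The toy blur kernel and parameter values are assumptions:

```python
import numpy as np
from scipy.optimize import nnls

# Toy 1-D deconvolution: a spike train blurred by a Gaussian kernel is
# recovered by ridge-regularized, nonnegativity-constrained least squares.
# The penalized problem  min ||Ax - b||^2 + lam*||x||^2  s.t.  x >= 0
# becomes a plain NNLS problem on the row-augmented system below.
n = 60
t = np.arange(n)
kernel = np.exp(-0.5 * ((t - n // 2) / 2.0) ** 2)
A = np.array([np.roll(kernel, i - n // 2) for i in range(n)]).T  # columns = shifted kernels
x_true = np.zeros(n)
x_true[[15, 40]] = [1.0, 2.0]
b = A @ x_true + 0.01 * np.random.default_rng(0).normal(size=n)

lam = 1e-2
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
b_aug = np.concatenate([b, np.zeros(n)])
x_hat, _ = nnls(A_aug, b_aug)  # nonnegative, regularized estimate
```

The nonnegativity constraint is what suppresses the negative background artifacts the abstract mentions; an unconstrained least squares solution of the same system would ring below zero around the spikes.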
Regularities of radiorace formation in yeasts
International Nuclear Information System (INIS)
Korogodin, V.I.; Bliznik, K.M.; Kapul'tsevich, Yu.G.; Petin, V.G.; Akademiya Meditsinskikh Nauk SSSR, Obninsk. Nauchno-Issledovatel'skij Inst. Meditsinskoj Radiologii)
1977-01-01
Two strains of diploid yeast, Saccharomyces ellipsoides Megri 139-B, isolated under natural conditions, and Saccharomyces cerevisiae 5a x 3Bα, heterozygous for genes ade 1 and ade 2, were exposed to γ-quanta of Co-60. The content of saltant cells forming colonies with changed morphology, of nonviable cells, of respiration-mutant cells, and of recombinant cells for genes ade 1 and ade 2 was determined. A regularity was revealed in the distribution of these four cell types among the colonies: the higher the content of cells of one type, the higher that of cells carrying other hereditary changes
The Regularity of Optimal Irrigation Patterns
Morel, Jean-Michel; Santambrogio, Filippo
2010-02-01
A branched structure is observable in draining and irrigation systems, in electric power supply systems, and in natural objects like blood vessels, river basins, or trees. Recent approaches to these networks derive their branched structure from an energy functional whose essential feature is to favor wide routes. Given a flow s in a river, a road, a tube or a wire, the transportation cost per unit length is supposed in these models to be proportional to s^α with 0 < α < 1. We consider the case where the irrigated measure is the Lebesgue density on a smooth open set and the irrigating measure is a single source. In that case we prove that all branches of optimal irrigation trees satisfy an elliptic equation and that their curvature is a bounded measure. In consequence all branching points in the network have a tangent cone made of a finite number of segments, and all other points have a tangent. An explicit counterexample disproves these regularity properties for non-Lebesgue irrigated measures.
Singular tachyon kinks from regular profiles
International Nuclear Information System (INIS)
Copeland, E.J.; Saffin, P.M.; Steer, D.A.
2003-01-01
We demonstrate how Sen's singular kink solution of the Born-Infeld tachyon action can be constructed by taking the appropriate limit of initially regular profiles. It is shown that the order in which different limits are taken plays an important role in determining whether or not such a solution is obtained for a wide class of potentials. Indeed, by introducing a small parameter into the action, we are able to circumvent the results of a recent paper which derived two conditions on the asymptotic tachyon potential such that the singular kink could be recovered in the large amplitude limit of periodic solutions. We show that this is explained by the non-commuting nature of two limits, and that Sen's solution is recovered if the order of the limits is chosen appropriately
Two-pass greedy regular expression parsing
DEFF Research Database (Denmark)
Grathwohl, Niels Bjørn Bugge; Henglein, Fritz; Nielsen, Lasse
2013-01-01
We present new algorithms for producing greedy parses for regular expressions (REs) in a semi-streaming fashion. Our lean-log algorithm executes in time O(mn) for REs of size m and input strings of size n and outputs a compact bit-coded parse tree representation. It improves on previous algorithms...... by: operating in only 2 passes; using only O(m) words of random-access memory (independent of n); requiring only kn bits of sequentially written and read log storage, where k ... and not requiring it to be stored at all. Previous RE parsing algorithms do not scale linearly with input size, or require substantially more log storage and employ 3 passes where the first consists of reversing the input, or do not or are not known to produce a greedy parse. The performance of our unoptimized C...
Discriminative Elastic-Net Regularized Linear Regression.
Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen
2017-03-01
In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to their weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminative representations for final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets well demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB codes of our methods are available at http://www.yongxu.org/lunwen.html.
Regularization of Instantaneous Frequency Attribute Computations
Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.
2014-12-01
We compare two different methods of computation of a temporally local frequency: (1) a stabilized instantaneous frequency using the theory of the analytic signal, and (2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979) as modified by Fomel (2007) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes, and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications for this work is the discrimination between blast events and earthquakes. References: Fomel, Sergey, "Local seismic attributes," Geophysics 72.3 (2007): A29-A33. Cohen, Leon, Time Frequency Analysis: Theory and Applications, Prentice Hall (1995). Farquharson, Colin G., and Douglas W. Oldenburg, "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems," Geophysical Journal International 156.3 (2004): 411-425. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff, "Complex seismic trace analysis," Geophysics 44.6 (1979): 1041-1063.
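A minimal sketch of method (1), the stabilized instantaneous frequency via the analytic signal, in the Taner/Fomel spirit; the stabilization constant `eps_frac` and the use of `scipy.signal.hilbert` are illustrative choices, not the authors' exact regularization:

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(x, fs, eps_frac=1e-3):
    # Analytic signal z = x + i*H{x}; the instantaneous frequency is the
    # time derivative of its phase. Writing it as
    #   f(t) = (x*y' - y*x') / (2*pi*(x^2 + y^2)),   y = H{x},
    # avoids phase unwrapping. The denominator is stabilized with a small
    # fraction of its mean, a simple stand-in for the regularized division
    # discussed in the abstract.
    z = hilbert(x)
    xr, yi = z.real, z.imag
    dxr = np.gradient(xr) * fs   # derivatives w.r.t. time
    dyi = np.gradient(yi) * fs
    num = xr * dyi - yi * dxr
    den = xr**2 + yi**2
    return num / (2 * np.pi * (den + eps_frac * den.mean()))

# A 50 Hz tone sampled at 1 kHz should give about 50 Hz in the interior
# (edges are distorted by the Hilbert transform and the finite difference).
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
f_inst = instantaneous_frequency(np.sin(2 * np.pi * 50 * t), fs)
```

Without the `eps_frac` term the division blows up wherever the signal envelope `den` passes near zero, which is exactly the situation the regularization in the abstract is meant to control.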
Regularized Regression and Density Estimation based on Optimal Transport
Burger, M.; Franek, M.; Schonlieb, C.-B.
2012-01-01
for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations
Incremental projection approach of regularization for inverse problems
Energy Technology Data Exchange (ETDEWEB)
Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)
2016-10-15
This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in the place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method compared with regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.
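The core idea, replacing the regularization term by a projection of each iterate onto a convex set of admissible solutions, can be sketched with projected gradient descent on a toy least squares problem (the box constraint stands in for the paper's subspace of regularized candidates; all values are illustrative):

```python
import numpy as np

# Projected gradient descent for  min ||Ax - b||^2  over a convex set C:
# each gradient iterate is projected onto C instead of adding a penalty
# term to the objective.
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10))
x_true = np.clip(rng.normal(0.5, 0.3, size=10), 0.0, 1.0)  # lies in C
b = A @ x_true

def project(x):
    # Euclidean projection onto the box C = [0, 1]^n.
    return np.clip(x, 0.0, 1.0)

x = np.zeros(10)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # safe step for the smooth term
for _ in range(1000):
    x = project(x - step * A.T @ (A @ x - b))
```

Every iterate stays admissible by construction, so no regularization weight has to be tuned against the data term, which is the trade-off the abstract's comparison is about.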
Dimensional regularization and analytical continuation at finite temperature
International Nuclear Information System (INIS)
Chen Xiangjun; Liu Lianshou
1998-01-01
The relationship between dimensional regularization and analytical continuation of infrared divergent integrals at finite temperature is discussed and a method of regularization of infrared divergent integrals and infrared divergent sums is given
Bounded Perturbation Regularization for Linear Least Squares Estimation
Ballal, Tarig; Suliman, Mohamed Abdalla Elhag; Al-Naffouri, Tareq Y.
2017-01-01
This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded
Regular Generalized Star Star closed sets in Bitopological Spaces
K. Kannan; D. Narasimhan; K. Chandrasekhara Rao; R. Ravikumar
2011-01-01
The aim of this paper is to introduce the concepts of τ1τ2-regular generalized star star closed sets and τ1τ2-regular generalized star star open sets, and to study their basic properties in bitopological spaces.
Quality control of the beam geometry in a volumetric modulated arc therapy unit
International Nuclear Information System (INIS)
Clemente Gutierrez, F.; Ramirez Ros, J. C.; Cabello Murillo, E.; Casa de Julian, M. A. de la
2011-01-01
We report here the results of the regular (monthly) quality control of beam geometry (flatness and symmetry) for a 6 MV Elekta Synergy unit with VMAT, belonging to the Radiation Oncology Service of the Central Defence Hospital Gomez Ulla.
Exclusion of children with intellectual disabilities from regular ...
African Journals Online (AJOL)
The study investigated why teachers exclude children with intellectual disability from regular classrooms in Nigeria. Participants were 169 regular teachers randomly selected from Oyo and Ogun states. A questionnaire was used to collect data. Results revealed that 57.4% of regular teachers could not cope with children with ID ...
39 CFR 6.1 - Regular meetings, annual meeting.
2010-07-01
... 39 Postal Service 1 2010-07-01 2010-07-01 false Regular meetings, annual meeting. 6.1 Section 6.1 Postal Service UNITED STATES POSTAL SERVICE THE BOARD OF GOVERNORS OF THE U.S. POSTAL SERVICE MEETINGS (ARTICLE VI) § 6.1 Regular meetings, annual meeting. The Board shall meet regularly on a schedule...
Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis
Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.
2007-01-01
Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…
5 CFR 551.421 - Regular working hours.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Regular working hours. 551.421 Section... Activities § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... distinction based on whether the activity is performed by an employee during regular working hours or outside...
20 CFR 226.35 - Deductions from regular annuity rate.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Deductions from regular annuity rate. 226.35... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced...
20 CFR 226.34 - Divorced spouse regular annuity rate.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Divorced spouse regular annuity rate. 226.34... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.34 Divorced spouse regular annuity rate. The regular annuity rate of a divorced spouse is equal to...
20 CFR 226.14 - Employee regular annuity rate.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Employee regular annuity rate. 226.14 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing an Employee Annuity § 226.14 Employee regular annuity rate. The regular annuity rate payable to the employee is the total of the employee tier I...
Remote Sensing the Phytoplankton Seasonal Succession of the Red Sea
Raitsos, Dionysios E.; Pradhan, Yaswant; Brewin, Robert J. W.; Stenchikov, Georgiy L.; Hoteit, Ibrahim
2013-01-01
, and thus could provide an important source of nutrients to the open waters. Remotely-sensed synoptic observations highlight that Chl-a does not increase regularly from north to south as previously thought. The Northern part of the Central Red Sea province
Observations of sea turtles nesting on Misali Island, Pemba | Pharoah ...
African Journals Online (AJOL)
A nest-recording programme has collected data over five years from turtles nesting on Misali Island, off the West coast of Pemba, Tanzania. Five species of sea turtle are known to occur in Zanzibar waters, two of these species nested regularly on the island, with green turtle nests outnumbering hawksbill turtle nests by a ...
Irregular Reproductive Cycles in the Tongaland Loggerhead Sea ...
African Journals Online (AJOL)
The concept of sea turtles exhibiting regular reproductive cycles is widely accepted. In Tongaland, Natal, after 12 years of research, 2 122 female loggerhead turtles have been tagged and the recovery rate of tagged animals back on the nesting beaches has reached 50%. From a sample of these recoveries it is clear that ...
A symplectic coherent beam-beam model
International Nuclear Information System (INIS)
Furman, M.A.
1989-05-01
We consider a simple one-dimensional model to study the effects of the beam-beam force on the coherent dynamics of colliding beams. The key ingredient is a linearized beam-beam kick. We study only the quadrupole modes, with the dynamical variables being the 2nd-order moments of the canonical variables q, p. Our model is self-consistent in the sense that no higher order moments are generated by the linearized beam-beam kicks, and that the only source of violation of symplecticity is the radiation. We discuss the round beam case only, in which vertical and horizontal quantities are assumed to be equal (though they may be different in the two beams). Depending on the values of the tune and beam intensity, we observe steady states in which otherwise identical bunches have sizes that are equal, or unequal, or periodic, or behave chaotically from turn to turn. Possible implications of luminosity saturation with increasing beam intensity are discussed. Finally, we present some preliminary applications to an asymmetric collider. 8 refs., 8 figs
Yamazaki, Yasunori; Doser, Michael; Pérez, Patrice
2018-03-01
Why does our universe consist purely of matter, even though the same amount of antimatter and matter should have been produced at the moment of the Big Bang 13.8 billion years ago? One of the most potentially fruitful approaches to address the mystery is to study the properties of antihydrogen and antiprotons. Because they are both stable, we can in principle make measurement precision as high as we need to see differences between these antimatter systems and their matter counterparts, i.e. hydrogen and protons. This is the goal of cold antihydrogen research. To study a fundamental symmetry, charge, parity, and time reversal (CPT) symmetry, which should lead to identical spectra in hydrogen and antihydrogen, as well as the weak equivalence principle (WEP), cold antihydrogen research seeks any discrepancies between matter and antimatter, which might also offer clues to the missing antimatter mystery. Precision tests of CPT have already been carried out in other systems, but antihydrogen spectroscopy offers the hope of reaching even higher sensitivity to violations of CPT. Meanwhile, utilizing the Earth and antihydrogen atoms as an experimental system, the WEP predicts a gravitational interaction between matter and antimatter that is identical to that between any two matter objects. The WEP has been tested to very high precision for a range of material compositions, but no such precision test using antimatter has yet been carried out, offering hope of a telltale inconsistency between matter and antimatter. In this Discovery book, we invite you to visit the frontiers of cold antimatter research, focusing on new technologies to form beams of antihydrogen atoms and antihydrogen ions, and new ways of interrogating the properties of antimatter.
Beam Techniques - Beam Control and Manipulation
International Nuclear Information System (INIS)
Minty, Michiko G
2003-01-01
We describe commonly used strategies for the control of charged particle beams and the manipulation of their properties. Emphasis is placed on relativistic beams in linear accelerators and storage rings. After a brief review of linear optics, we discuss basic and advanced beam control techniques, such as transverse and longitudinal lattice diagnostics, matching, orbit correction and steering, beam-based alignment, and linac emittance preservation. A variety of methods for the manipulation of particle beam properties are also presented, for instance, bunch length and energy compression, bunch rotation, changes to the damping partition number, and beam collimation. The different procedures are illustrated by examples from various accelerators. Special topics include injection and extraction methods, beam cooling, spin transport and polarization
Beam Techniques - Beam Control and Manipulation
Energy Technology Data Exchange (ETDEWEB)
Minty, Michiko G
2003-04-24
We describe commonly used strategies for the control of charged particle beams and the manipulation of their properties. Emphasis is placed on relativistic beams in linear accelerators and storage rings. After a brief review of linear optics, we discuss basic and advanced beam control techniques, such as transverse and longitudinal lattice diagnostics, matching, orbit correction and steering, beam-based alignment, and linac emittance preservation. A variety of methods for the manipulation of particle beam properties are also presented, for instance, bunch length and energy compression, bunch rotation, changes to the damping partition number, and beam collimation. The different procedures are illustrated by examples from various accelerators. Special topics include injection and extraction methods, beam cooling, spin transport and polarization.
Literature in Focus Beta Beams: Neutrino Beams
2009-01-01
By Mats Lindroos (CERN) and Mauro Mezzetto (INFN Padova, Italy) Imperial Press, 2009 The beta-beam concept for the generation of electron neutrino beams was first proposed by Piero Zucchelli in 2002. The idea created quite a stir, challenging the notion that intense neutrino beams could only be produced from the decay of pions or muons in classical neutrino beam facilities or in future neutrino factories. The concept initially struggled to make an impact but the hard work by many machine physicists, phenomenologists and theoreticians over the last five years has won the beta-beam a well-earned position as one of the frontrunners for a possible future world laboratory for high intensity neutrino oscillation physics. This is the first complete monograph on the beta-beam concept. The book describes both technical aspects and experimental aspects of the beta-beam, providing students and scientists with an insight into the possibilities o...
Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin; Chen, Ze-Peng; Luo, Wen-Feng
2018-01-01
Moving force identification (MFI) is an important inverse problem in the field of bridge structural health monitoring (SHM). Reasonable signal structures of moving forces are rarely considered in the existing MFI methods. Interaction forces are complex because they contain both slowly-varying harmonic and impact signals due to bridge vibration and bumps on a bridge deck, respectively. Therefore, the interaction forces are usually hard to be expressed completely and sparsely by using a single basis function set. Based on the redundant concatenated dictionary and weighted l1-norm regularization method, a hybrid method is proposed for MFI in this study. The redundant dictionary consists of both trigonometric functions and rectangular functions used for matching the harmonic and impact signal features of unknown moving forces. The weighted l1-norm regularization method is introduced for formulation of MFI equation, so that the signal features of moving forces can be accurately extracted. The fast iterative shrinkage-thresholding algorithm (FISTA) is used for solving the MFI problem. The optimal regularization parameter is appropriately chosen by the Bayesian information criterion (BIC) method. In order to assess the accuracy and the feasibility of the proposed method, a simply-supported beam bridge subjected to a moving force is taken as an example for numerical simulations. Finally, a series of experimental studies on MFI of a steel beam are performed in laboratory. Both numerical and experimental results show that the proposed method can accurately identify the moving forces with a strong robustness, and it has a better performance than the Tikhonov regularization method. Some related issues are discussed as well.
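A compact sketch of the weighted l1-norm / FISTA machinery named in the abstract (generic sparse recovery, not the authors' concatenated trigonometric/rectangular moving-force dictionary; the toy problem and parameter values are assumptions):

```python
import numpy as np

def fista_weighted_l1(A, b, weights, lam=0.1, n_iter=500):
    # FISTA for  min_x 0.5*||Ax - b||^2 + lam * sum_i weights[i]*|x[i]|.
    # The proximal step is an entry-wise soft threshold whose level
    # scales with the per-entry weight; t carries the Nesterov momentum.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        g = z - A.T @ (A @ z - b) / L      # gradient step on the smooth part
        thr = lam * weights / L
        x_new = np.sign(g) * np.maximum(np.abs(g) - thr, 0.0)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Toy sparse recovery: 3 nonzeros in 80 unknowns from 40 noiseless
# measurements; uniform weights reduce to the ordinary l1 penalty.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 80)) / np.sqrt(40)
x_true = np.zeros(80)
x_true[[5, 30, 66]] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = fista_weighted_l1(A, b, np.ones(80), lam=0.02)
```

Nonuniform weights let different dictionary blocks (e.g. harmonic vs. impact atoms in the abstract's setting) be penalized at different strengths within the same solver.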
Nonlinear rolling of a biased ship in a regular beam wave under external and parametric excitations
Energy Technology Data Exchange (ETDEWEB)
El-Bassiouny, A.F. [Mathematics Dept., Benha Univ., Benha (Egypt)
2007-10-15
We consider a nonlinear oscillator simultaneously excited by external and parametric functions. The oscillator has a bias parameter that breaks the symmetry of the motion. The example that we use to illustrate the problem is the rolling oscillation of a biased ship in longitudinal waves, but many mechanical systems display similar features. The analysis took into consideration linear, quadratic, cubic, quintic, and seventh-order terms in the polynomial expansion of the relative roll angle. The damping moment consists of the linear term associated with radiation and viscous damping and a cubic term due to frictional resistance and eddies behind bilge keels and hard bilge corners. Two methods (the averaging and the multiple time scales) are used to investigate the first-order approximate analytical solution. The modulation equations of the amplitudes and phases are obtained. These equations are used to obtain the stationary state. The stability of the proposed solution is determined by applying Liapunov's first method. Effects of different parameters on the system behaviour are investigated numerically. Results are presented graphically and discussed. The results obtained by the two methods are in excellent agreement. (orig.)
Accreting fluids onto regular black holes via Hamiltonian approach
Energy Technology Data Exchange (ETDEWEB)
Jawad, Abdul [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); University of Central Punjab, CAMS, UCP Business School, Lahore (Pakistan)
2017-08-15
We investigate the accretion of test fluids onto regular black holes such as Kehagias-Sfetsos black holes and regular black holes with Dagum distribution function. We analyze the accretion process when different test fluids are falling onto these regular black holes. The accreting fluid is being classified through the equation of state according to the features of regular black holes. The behavior of fluid flow and the existence of sonic points is being checked for these regular black holes. It is noted that the three-velocity depends on critical points and the equation of state parameter on phase space. (orig.)
On the regularized fermionic projector of the vacuum
Finster, Felix
2008-03-01
We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.
On the regularized fermionic projector of the vacuum
International Nuclear Information System (INIS)
Finster, Felix
2008-01-01
We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed
MRI reconstruction with joint global regularization and transform learning.
Tanc, A Korhan; Eksioglu, Ender M
2016-10-01
Sparsity based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for the Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms to the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach has improved MRI reconstruction performance, when compared to the algorithms which use either of the patchwise transform learning or global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.
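The mixed objective described above can be sketched schematically. The operators and weights below (`A`, `W`, a 1-D total-variation-like global term) are illustrative placeholders, not the paper's actual cost function:

```python
import numpy as np

def mixed_cost(x, A, y, W, patches, lam_global, lam_patch):
    """Illustrative mixed objective: data fidelity + global + patchwise terms.
    A: measurement operator (matrix), W: learned sparsifying transform,
    patches: list of index arrays extracting patches from the image x."""
    data = np.sum(np.abs(A @ x - y) ** 2)
    global_term = np.sum(np.abs(np.diff(x)))          # e.g. total variation (1-D sketch)
    patch_term = sum(np.sum(np.abs(W @ x[idx])) for idx in patches)
    return data + lam_global * global_term + lam_patch * patch_term
```

In the actual reconstruction both the transform `W` and the image are updated while minimizing a cost of this general shape; the sketch only shows how the patchwise and global penalties coexist in one objective.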
Digital Repository Service at National Institute of Oceanography (India)
Chakraborty, B.; Sudhakar, T.
In this paper an interface to acquire 59-beam echo peak amplitudes of the Hydrosweep Multibeam system is established. The echo peak amplitude values collected at varying seabed provinces of the Arabian Sea are presented. The study reveals...
AFSC/RACE/EcoFOCI: 2010 Eastern Bering Sea Juvenile Survey - 1MF10
National Oceanic and Atmospheric Administration, Department of Commerce — Data collected on this cruise included the following: We conducted a juvenile fish and benthic fish prey survey in the eastern Bering Sea (61 3-meter beam trawls,...
Factors governing the deep ventilation of the Red Sea
Papadopoulos, Vassilis P.
2015-11-19
A variety of data based on hydrographic measurements, satellite observations, reanalysis databases, and meteorological observations are used to explore the interannual variability and the factors governing deep water formation in the northern Red Sea. Historical and recent hydrographic data consistently indicate that the ventilation of the near-bottom layer in the Red Sea is a robust feature of the thermohaline circulation. Dense water capable of reaching the bottom layers of the Red Sea is regularly produced, mostly inside the Gulfs of Aqaba and Suez. Occasionally, during colder than usual winters, deep water formation may also take place over coastal areas at the northernmost end of the open Red Sea just outside the Gulfs of Aqaba and Suez. However, both the origin and the amount of deep water exhibit considerable interannual variability, depending not only on atmospheric forcing but also on the water circulation over the northern Red Sea. Analysis of several recent winters shows that the strength of the cyclonic gyre prevailing in the northernmost part of the basin can effectively influence the sea surface temperature (SST) and intensify or moderate the winter surface cooling. Upwelling associated with periods of persistent gyre circulation lowers the SST over the northernmost part of the Red Sea and can produce colder than normal winter SST even without extreme heat loss from the sea surface. In addition, the occasional persistence of the cyclonic gyre feeds the surface layers of the northern Red Sea with nutrients, considerably increasing the phytoplankton biomass.
Nudging the Arctic Ocean to quantify Arctic sea ice feedbacks
Dekker, Evelien; Severijns, Camiel; Bintanja, Richard
2017-04-01
It is well established that the Arctic is warming two to three times faster than the rest of the planet. One of the great uncertainties in climate research is to what extent sea ice feedbacks amplify this (seasonally varying) Arctic warming. Earlier studies have analyzed existing climate model output using correlations and energy budget considerations in order to quantify sea ice feedbacks through indirect methods. From these analyses it is regularly inferred that sea ice likely plays an important role, but details remain obscure. Here we take a different and more direct approach: we keep the sea ice constant in a sensitivity simulation, using a state-of-the-art climate model (EC-Earth) and applying a technique that has never been attempted before. This experimental technique involves nudging the temperature and salinity of the ocean surface (and possibly some layers below, to maintain the vertical structure and mixing) to a predefined prescribed state. When strongly nudged to existing (seasonally varying) sea surface temperatures, ocean salinity and temperature, we force the sea ice to remain in the regions/seasons where it is located in the prescribed state, despite the changing climate. Once we obtain 'fixed' sea ice, we will run a future scenario, for instance 2 x CO2, with and without prescribed sea ice, with the difference between these runs providing a measure of the extent to which sea ice contributes to Arctic warming, including the seasonal and geographical imprint of the effects.
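At its core, the nudging technique described here is a relaxation of the model state toward a prescribed target. A minimal sketch, with made-up fields and timescale (the real scheme acts on ocean temperature and salinity inside a full GCM):

```python
import numpy as np

def nudge(field, target, dt, tau):
    """Relax a model field toward a prescribed target state.
    tau is the nudging timescale; a small tau means strong nudging."""
    return field + (dt / tau) * (target - field)

# Strong nudging keeps the SST pinned to the prescribed seasonal state,
# so sea ice stays where the prescribed state puts it.
sst = np.array([2.0, -1.5, 0.5])
target = np.array([-1.8, -1.8, -1.8])   # e.g. freezing-point SST where ice is prescribed
for _ in range(1000):
    sst = nudge(sst, target, dt=1.0, tau=10.0)
```

Each step shrinks the departure from the target by the factor (1 - dt/tau), so the surface state converges to the prescribed one regardless of the model's own tendency terms.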
Mechanically reinforced glass beams
DEFF Research Database (Denmark)
Nielsen, Jens Henrik; Olesen, John Forbes
2007-01-01
A laminated float glass beam is constructed and tested in four-point bending. The beam consists of 4 layers of glass laminated together with a slack steel band glued onto the bottom face of the beam. The glass parts of the tested beams are 1700 mm long and 100 mm high, and the total width of one...
Telecommunication using muon beams
International Nuclear Information System (INIS)
Arnold, R.C.
1976-01-01
Telecommunication is effected by generating a beam of mu mesons or muons, varying a property of the beam at a modulating rate to generate a modulated beam of muons, and detecting the information in the modulated beam at a remote location
Manifold Regularized Experimental Design for Active Learning.
Zhang, Lining; Shum, Hubert P H; Shao, Ling
2016-12-02
Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples for alleviating the labor of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches utilize the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small-sized. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel method of active learning called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation for the selected samples to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.
Regularization of the Coulomb scattering problem
International Nuclear Information System (INIS)
Baryshevskii, V.G.; Feranchuk, I.D.; Kats, P.B.
2004-01-01
The exact solution of the Schroedinger equation for the Coulomb potential is used within the scope of both stationary and time-dependent scattering theories in order to find the parameters which determine the regularization of the Rutherford cross section when the scattering angle tends to zero but the distance r from the center remains finite. The angular distribution of the particles scattered in the Coulomb field is studied at a rather large but finite distance r from the center. It is shown that the standard asymptotic representation of the wave functions is inapplicable in the case when small scattering angles are considered. The unitary property of the scattering matrix is analyzed and the 'optical' theorem for this case is discussed. The total and transport cross sections for particle scattering by the Coulomb center prove to be finite and are calculated in analytical form. It is shown that the effects under consideration can be important for the observed characteristics of the transport processes in semiconductors which are determined by the electron and hole scattering by the field of charged impurity centers
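The small-angle divergence being regularized can be made explicit. In standard notation (not reproduced in the abstract), the Rutherford cross section is

```latex
\frac{d\sigma}{d\Omega}
  = \left(\frac{Z_1 Z_2 e^2}{4E}\right)^{2} \frac{1}{\sin^{4}(\theta/2)} ,
```

which diverges as \(\theta \to 0\), so the total and transport cross sections of the idealized problem are infinite. Keeping the distance \(r\) from the center finite cuts off the smallest scattering angles, which is why the regularized total and transport cross sections discussed above come out finite.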
Color correction optimization with hue regularization
Zhang, Heng; Liu, Huaping; Quan, Shuxue
2011-01-01
Previous work has suggested that observers are capable of judging the quality of an image without any knowledge of the original scene. When no reference is available, observers can extract the apparent objects in an image and compare them with the typical colors of similar objects recalled from their memories. Some generally agreed upon research results indicate that although perfect colorimetric rendering is not conspicuous and color errors can be well tolerated, the appropriate rendition of certain memory colors such as skin, grass, and sky is an important factor in the overall perceived image quality. These colors are appreciated in a fairly consistent manner and are memorized with slightly different hues and higher color saturation. The aim of color correction for a digital color pipeline is to transform the image data from a device-dependent color space to a target color space, usually through a color correction matrix which in its most basic form is optimized through linear regressions between the two sets of data in two color spaces in the sense of minimized Euclidean color error. Unfortunately, this method can result in objectionable distortions if the color errors bias certain colors undesirably. In this paper, we propose a color correction optimization method with preferred color reproduction in mind through hue regularization and present some experimental results.
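The baseline step, a 3x3 color correction matrix fitted by linear regression, can be sketched as follows. The optional up-weighting of memory colors is a crude stand-in for the paper's hue-regularization term, and all names are illustrative:

```python
import numpy as np

def fit_ccm(src, dst, mem_src=None, mem_dst=None, lam=0.0):
    """Fit a 3x3 color correction matrix M by least squares so that
    M @ rgb maps device colors toward target colors.
    src, dst: Nx3 device and target colors.
    mem_src/mem_dst: optional memory colors (skin, grass, sky) whose
    residuals are up-weighted by lam -- an illustrative stand-in for
    the hue-regularization term, not the paper's actual penalty."""
    X, Y = src, dst
    if mem_src is not None:
        X = np.vstack([src, np.sqrt(lam) * mem_src])
        Y = np.vstack([dst, np.sqrt(lam) * mem_dst])
    M, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return M.T   # corrected = M @ rgb
```

With `lam = 0` this is the plain Euclidean-error regression the abstract describes; increasing `lam` trades overall colorimetric accuracy for fidelity at the chosen memory colors.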
Wave dynamics of regular and chaotic rays
International Nuclear Information System (INIS)
McDonald, S.W.
1983-09-01
In order to investigate general relationships between waves and rays in chaotic systems, I study the eigenfunctions and spectrum of a simple model, the two-dimensional Helmholtz equation in a stadium boundary, for which the rays are ergodic. Statistical measurements are performed so that the apparent randomness of the stadium modes can be quantitatively contrasted with the familiar regularities observed for the modes in a circular boundary (with integrable rays). The local spatial autocorrelation of the eigenfunctions is constructed in order to indirectly test theoretical predictions for the nature of the Wigner distribution corresponding to chaotic waves. A portion of the large-eigenvalue spectrum is computed and reported in an appendix; the probability distribution of successive level spacings is analyzed and compared with theoretical predictions. The two principal conclusions are: 1) waves associated with chaotic rays may exhibit randomly situated localized regions of high intensity; 2) the Wigner function for these waves may depart significantly from being uniformly distributed over the surface of constant frequency in the ray phase space
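The level-spacing comparison mentioned above contrasts two standard reference distributions: the Poisson law expected for integrable (regular) rays and the GOE Wigner surmise associated with chaotic rays. A small sketch of both densities:

```python
import numpy as np

def wigner_surmise(s):
    """GOE Wigner surmise for nearest-neighbour level spacings,
    the reference curve for chaotic rays (level repulsion at s = 0)."""
    return (np.pi / 2.0) * s * np.exp(-np.pi * s**2 / 4.0)

def poisson(s):
    """Poisson spacing law, the reference curve for integrable rays
    (no level repulsion)."""
    return np.exp(-s)

# Both are normalized probability densities with unit mean spacing,
# which is what makes the comparison to unfolded spectra meaningful.
s = np.linspace(0.0, 10.0, 100001)
pw, pp = wigner_surmise(s), poisson(s)
```

Plotting a histogram of unfolded spacings from the stadium spectrum against these two curves is the standard way to quantify the "chaotic" character claimed in the abstract.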
Regularities and irregularities in order flow data
Theissen, Martin; Krause, Sebastian M.; Guhr, Thomas
2017-11-01
We identify and analyze statistical regularities and irregularities in the recent order flow of different NASDAQ stocks, focusing on the positions where orders are placed in the order book. This includes limit orders being placed outside of the spread, inside the spread and (effective) market orders. Based on the pairwise comparison of the order flow of different stocks, we perform a clustering of stocks into groups with similar behavior. This is useful to assess systemic aspects of stock price dynamics. We find that limit order placement inside the spread is strongly determined by the dynamics of the spread size. Most orders, however, arrive outside of the spread. While for some stocks order placement on or next to the quotes is dominating, deeper price levels are more important for other stocks. As market orders are usually adjusted to the quote volume, the impact of market orders depends on the order book structure, which we find to be quite diverse among the analyzed stocks as a result of the way limit order placement takes place.
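A pairwise comparison of order-placement behavior can be sketched by treating each stock's placement depths as a histogram and measuring a divergence between histograms; the Jensen-Shannon divergence used below is one reasonable choice, not necessarily the paper's:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two order-placement histograms
    (counts over price levels relative to the quotes)."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy histograms: mass at/near the quotes vs. mass at deeper levels.
stock_a = np.array([60, 25, 10, 5], float)
stock_b = np.array([55, 30, 10, 5], float)
stock_c = np.array([10, 15, 30, 45], float)
d_ab = js_divergence(stock_a, stock_b)
d_ac = js_divergence(stock_a, stock_c)
```

Feeding the resulting distance matrix to any standard clustering routine groups stocks with similar placement behavior, in the spirit of the clustering described in the abstract.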
Library search with regular reflectance IR spectra
International Nuclear Information System (INIS)
Staat, H.; Korte, E.H.; Lampen, P.
1989-01-01
Characterisation in situ of coatings and other surface layers is generally favourable, and a prerequisite for precious items such as art objects. In infrared spectroscopy only reflection techniques are applicable here. However, for attenuated total reflection (ATR) it is difficult to obtain the necessary optical contact of the crystal with the sample when the latter is not perfectly plane or flexible. The measurement of diffuse reflectance demands a scattering sample, and usually the reflectance is very poor. Therefore in most cases one is left with regular reflectance. Such spectra consist of dispersion-like features instead of bands, impeding their interpretation in the way the analyst is used to. Furthermore, for computer search in common spectral libraries compiled from transmittance or absorbance spectra, a transformation of the reflectance spectra is needed. The correct conversion is based on the Kramers-Kronig transformation. This somewhat time-consuming procedure can be speeded up by using appropriate approximations. A coarser conversion may be obtained from the first derivative of the reflectance spectrum, which resembles the second derivative of a transmittance spectrum. The resulting distorted spectra can still be used successfully for the search in peak table libraries. Experiences with both transformations are presented. (author)
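The coarse conversion mentioned above, taking the first derivative of the reflectance spectrum, is a one-liner. A sketch on a synthetic dispersion-shaped feature (the line shape is illustrative):

```python
import numpy as np

def derivative_spectrum(reflectance, wavenumber):
    """Coarse conversion for library search: the first derivative of a
    regular-reflectance spectrum resembles the second derivative of a
    transmittance spectrum, which is adequate for peak-table matching."""
    return np.gradient(reflectance, wavenumber)

# A regular-reflectance band looks dispersion-like; its derivative has an
# extremum at the band centre, recovering the peak position for the search.
x = np.linspace(-5.0, 5.0, 1001)
disp = -x / (x**2 + 1)          # dispersion-like feature centred at x = 0
d = derivative_spectrum(disp, x)
```

The derivative spectrum is distorted relative to a true absorbance spectrum, but, as the abstract notes, peak positions survive, which is all a peak-table library search needs.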
Regularities of praseodymium oxide dissolution in acids
International Nuclear Information System (INIS)
Savin, V.D.; Elyutin, A.V.; Mikhajlova, N.P.; Eremenko, Z.V.; Opolchenova, N.L.
1989-01-01
The regularities of Pr₂O₃, Pr₂O₅ and Pr(OH)₃ interaction with inorganic acids are studied. The pH of the solution and the oxidation-reduction potential recorded at 20±1 deg C are the working parameters of the studies. It is found that the amount of each oxide dissolved increases in the series of acids nitric, hydrochloric and sulfuric; for hydrochloric and sulfuric acid it also increases in the series of oxides Pr₂O₃, Pr₂O₅ and Pr(OH)₃. It is noted that Pr₂O₅ has a high positive oxidation-reduction potential over the whole dissolution range. A low positive redox potential during dissolution belongs to Pr(OH)₃, and in the case of Pr₂O₃ dissolution the redox potential is negative. Schemes of the dissolution processes, which do not agree with classical assumptions, are presented
Regular expressions compiler and some applications
International Nuclear Information System (INIS)
Saldana A, H.
1978-01-01
We deal with the high-level programming language of a Regular Expressions Compiler (REC). The first chapter is an introduction in which the history of the REC development and the problems related to its numerous applications are described. The syntactic and semantic rules as well as the language features are discussed just after the introduction. Concerning the applications, as examples an adaptation is given in order to solve numerical problems and another for data manipulation. The last chapter is an exposition of ideas and techniques about the compiler construction. Examples of the adaptation to numerical problems show the applications to education, vector analysis, quantum mechanics, physics, mathematics and other sciences. The rudiments of an operating system for a minicomputer are the examples of the adaptation to symbolic data manipulation. REC is a programming language that could be applied to solve problems in almost any human activity. Handling of computer graphics, control equipment, research on languages, microprocessors and general research are some of the fields in which this programming language can be applied and developed. (author)
Sparsity-regularized HMAX for visual recognition.
Directory of Open Access Journals (Sweden)
Xiaolin Hu
Full Text Available About ten years ago, HMAX was proposed as a simple and biologically feasible model for object recognition, based on how the visual cortex processes information. However, the model does not encompass sparse firing, which is a hallmark of neurons at all stages of the visual pathway. The current paper presents an improved model, called sparse HMAX, which integrates sparse firing. This model is able to learn higher-level features of objects on unlabeled training images. Unlike most other deep learning models that explicitly address global structure of images in every layer, sparse HMAX addresses local to global structure gradually along the hierarchy by applying patch-based learning to the output of the previous layer. As a consequence, the learning method can be standard sparse coding (SSC) or independent component analysis (ICA), two techniques deeply rooted in neuroscience. What makes SSC and ICA applicable at higher levels is the introduction of linear higher-order statistical regularities by max pooling. After training, high-level units display sparse, invariant selectivity for particular individuals or for image categories like those observed in human inferior temporal cortex (ITC) and medial temporal lobe (MTL). Finally, on an image classification benchmark, sparse HMAX outperforms the original HMAX by a large margin, suggesting its great potential for computer vision.
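The sparse coding step applied to each layer's patches can be sketched with a minimal ISTA solver for the standard lasso objective; this is generic sparse coding, not the paper's specific training pipeline:

```python
import numpy as np

def sparse_code(x, D, lam=0.1, n_iter=200):
    """Minimal ISTA solver for the sparse coding problem
    min_a 0.5*||x - D a||^2 + lam*||a||_1,
    where x is a (vectorized) patch and D a dictionary of features."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)              # gradient of the smooth part
        a = a - g / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a
```

In a sparse-HMAX-like hierarchy, codes like `a` would be max-pooled over local neighbourhoods before the next layer's patch-based learning, which is the step the abstract credits with introducing higher-order statistical regularities.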
Quantum implications of a scale invariant regularization
Ghilencea, D. M.
2018-04-01
We study scale invariance at the quantum level in a perturbative approach. For a scale-invariant classical theory, the scalar potential is computed at the three-loop level while keeping this symmetry manifest. Spontaneous scale symmetry breaking is transmitted at the quantum level to the visible sector (of ϕ) by the associated Goldstone mode (the dilaton σ), which enables a scale-invariant regularization and whose vacuum expectation value ⟨σ⟩ generates the subtraction scale (μ). While the hidden (σ) and visible (ϕ) sectors are classically decoupled in d = 4 due to an enhanced Poincaré symmetry, they interact through (a series of) evanescent couplings ∝ ε, dictated by the scale invariance of the action in d = 4 − 2ε. At the quantum level, these couplings generate new corrections to the potential, as scale-invariant nonpolynomial effective operators ϕ^(2n+4)/σ^(2n). These are comparable in size to "standard" loop corrections and are important for values of ϕ close to ⟨σ⟩. For n = 1, 2, the beta functions of their coefficients are computed at three loops. In the IR limit, dilaton fluctuations decouple, the effective operators are suppressed by large ⟨σ⟩, and the effective potential becomes that of a renormalizable theory with explicit scale symmetry breaking by the DR scheme (of μ = constant).
Regularities development of entrepreneurial structures in regions
Directory of Open Access Journals (Sweden)
Julia Semenovna Pinkovetskaya
2012-12-01
Full Text Available We consider regularities and tendencies for three types of entrepreneurial structures: small enterprises, medium enterprises and individual entrepreneurs. The aim of the research was to confirm the possibility of describing indicators of aggregate entrepreneurial structures with normal-law distribution functions. The author's proposed methodological approach is presented, together with the constructed density distribution functions for the main indicators of various objects: the Russian Federation, regions, as well as aggregates of entrepreneurial structures specialized in certain forms of economic activity. All the developed functions, as shown by logical and statistical analysis, are of high quality and approximate the original data well. In general, the proposed methodological approach is versatile and can be used in further studies of aggregates of entrepreneurial structures. The results can be applied in solving a wide range of problems justifying the need for personnel and financial resources at the federal, regional and municipal levels, as well as in forming plans and forecasts for the development of entrepreneurship and the improvement of this sector of the economy.
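Fitting a normal density to an aggregate indicator reduces to estimating its mean and standard deviation. A minimal sketch (the indicator values are hypothetical):

```python
import numpy as np

def fit_normal_density(values):
    """Fit a normal (Gaussian) density to an indicator of aggregated
    entrepreneurial structures, e.g. small enterprises per region.
    Returns the estimated mean, standard deviation, and density function."""
    values = np.asarray(values, float)
    mu = np.mean(values)
    sigma = np.std(values)          # population estimate (ddof=0)

    def density(x):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    return mu, sigma, density
```

Goodness of fit to the original regional data would then be checked with a standard statistical test, in line with the "logical and statistical analysis" the abstract mentions.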
International Nuclear Information System (INIS)
Uythoven, J.; Schmidt, R.
2007-01-01
Due to the large amount of energy stored in magnets and beams, safe operation of the LHC is essential. The commissioning of the LHC machine protection system will be an integral part of the general LHC commissioning program. A brief overview of the LHC Machine Protection System will be given, identifying the main components: the Beam Interlock System, the Beam Dumping System, the Collimation System, the Beam Loss Monitoring System and the Quench Protection System. An outline is given of the commissioning strategy of these systems during the different commissioning phases of the LHC: without beam, injection and the different phases with stored beam depending on beam intensity and energy. (author)
International Nuclear Information System (INIS)
Gallegos, F.R.
1996-01-01
The Radiation Security System (RSS) at the Los Alamos Neutron Science Center (LANSCE) provides personnel protection from prompt radiation due to accelerated beam. Active instrumentation, such as the Beam Current Limiter, is a component of the RSS. The current limiter is designed to limit the average current in a beam line below a specific level, thus minimizing the maximum current available for a beam spill accident. The beam current limiter is a self-contained, electrically isolated toroidal beam transformer which continuously monitors beam current. It is designed as fail-safe instrumentation. The design philosophy, hardware design, operation, and limitations of the device are described
Beam Loss Monitoring for LHC Machine Protection
Holzer, Eva Barbara; Dehning, Bernd; Effnger, Ewald; Emery, Jonathan; Grishin, Viatcheslav; Hajdu, Csaba; Jackson, Stephen; Kurfuerst, Christoph; Marsili, Aurelien; Misiowiec, Marek; Nagel, Markus; Busto, Eduardo Nebot Del; Nordt, Annika; Roderick, Chris; Sapinski, Mariusz; Zamantzas, Christos
The energy stored in the nominal LHC beams is two times 362 MJ, 100 times the energy of the Tevatron. As little as 1 mJ/cm3 deposited energy quenches a magnet at 7 TeV and 1 J/cm3 causes magnet damage. The beam dumps are the only places to safely dispose of this beam. One of the key systems for machine protection is the beam loss monitoring (BLM) system. About 3600 ionization chambers are installed at likely or critical loss locations around the LHC ring. The losses are integrated in 12 time intervals ranging from 40 μs to 84 s and compared to threshold values defined in 32 energy ranges. A beam abort is requested when potentially dangerous losses are detected or when any of the numerous internal system validation tests fails. In addition, loss data are used for machine set-up and operational verifications. The collimation system for example uses the loss data for set-up and regular performance verification. Commissioning and operational experience of the BLM are presented: The machine protection functionality of the BLM system has been fully reliable; the LHC availability has not been compromised by false beam aborts.
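The core BLM decision, integrating the loss signal over several running windows and comparing each integral to its threshold, can be sketched as follows. The window lengths and thresholds below are made up; the real system uses 12 windows from 40 μs to 84 s with thresholds defined in 32 beam-energy ranges:

```python
import numpy as np

def check_losses(samples, windows, thresholds):
    """Illustrative BLM logic: integrate the ionization-chamber signal over
    several running windows (lengths in samples) and request a beam abort
    if any running integral exceeds its threshold."""
    for w, thr in zip(windows, thresholds):
        kernel = np.ones(w)
        # running sum over the last w samples at every time step
        sums = np.convolve(samples, kernel, mode="valid")
        if np.any(sums > thr):
            return True   # dump request
    return False
```

The multiple window lengths are what let one set of monitors catch both fast, intense losses and slow, steady ones, and the per-energy thresholds reflect that far less deposited energy quenches a magnet at 7 TeV than at injection.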
Patrice Loïez
2002-01-01
In these images workers are digging the tunnels that will be used to dump the counter-circulating beams. Travelling just a fraction under the speed of light, the beams at the LHC will each carry the energy of an aircraft carrier travelling at 12 knots. In order to dispose of these beams safely, a beam dump is used to extract the beam and diffuse it before it collides with a radiation shielded graphite target.
International Nuclear Information System (INIS)
Strehl, P.
1994-04-01
This report is an introduction to ion beam diagnosis. After a short description of the most important ion beam parameters, measurements of the beam current by means of Faraday cups, calorimetry, and beam current transformers, and measurements of the beam profile by means of viewing screens, profile grids, scanning devices, and residual gas ionization monitors are described. Finally, measurements in the transverse and longitudinal phase space are considered. (HSI)
National Oceanic and Atmospheric Administration, Department of Commerce — California sea lions pup and breed at four of the nine Channel Islands in southern California. Since 1981, SWFSC MMTD has been conducting a diet study of sea lions...
TRANSIENT LUNAR PHENOMENA: REGULARITY AND REALITY
International Nuclear Information System (INIS)
Crotts, Arlin P. S.
2009-01-01
Transient lunar phenomena (TLPs) have been reported for centuries, but their nature is largely unsettled, and even their existence as a coherent phenomenon is controversial. Nonetheless, TLP data show regularities in the observations; a key question is whether this structure is imposed by processes tied to the lunar surface, or by terrestrial atmospheric or human observer effects. I interrogate an extensive catalog of TLPs to gauge how human factors determine the distribution of TLP reports. The sample is grouped according to variables which should produce differing results if determining factors involve humans, and not reflecting phenomena tied to the lunar surface. Features dependent on human factors can then be excluded. Regardless of how the sample is split, the results are similar: ∼50% of reports originate from near Aristarchus, ∼16% from Plato, ∼6% from recent, major impacts (Copernicus, Kepler, Tycho, and Aristarchus), plus several at Grimaldi. Mare Crisium produces a robust signal in some cases (however, Crisium is too large for a 'feature' as defined). TLP count consistency for these features indicates that ∼80% of these may be real. Some commonly reported sites disappear from the robust averages, including Alphonsus, Ross D, and Gassendi. These reports begin almost exclusively after 1955, when TLPs became widely known and many more (and inexperienced) observers searched for TLPs. In a companion paper, we compare the spatial distribution of robust TLP sites to transient outgassing (seen by Apollo and Lunar Prospector instruments). To a high confidence, robust TLP sites and those of lunar outgassing correlate strongly, further arguing for the reality of TLPs.
Elementary Particle Spectroscopy in Regular Solid Rewrite
International Nuclear Information System (INIS)
Trell, Erik
2008-01-01
The Nilpotent Universal Computer Rewrite System (NUCRS) has operationalized the radical ontological dilemma of Nothing at All versus Anything at All down to the ground recursive syntax and principal mathematical realisation of this categorical dichotomy as such and so governing all its sui generis modalities, leading to fulfilment of their individual terms and compass when the respective choice sequence operations are brought to closure. Focussing on the general grammar, NUCRS by pure logic and its algebraic notations hence bootstraps Quantum Mechanics, aware that it ''is the likely keystone of a fundamental computational foundation'' also for e.g. physics, molecular biology and neuroscience. The present work deals with classical geometry where morphology is the modality, and ventures that the ancient regular solids are its specific rewrite system, in effect extensively anticipating the detailed elementary particle spectroscopy, and further on to essential structures at large both over the inorganic and organic realms. The geodetic antipode to Nothing is extension, with natural eigenvector the endless straight line which when deployed according to the NUCRS as well as Plotelemeian topographic prescriptions forms a real three-dimensional eigenspace with cubical eigenelements where observed quark-skewed quantum-chromodynamical particle events self-generate as an Aristotelean phase transition between the straight and round extremes of absolute endlessness under the symmetry- and gauge-preserving, canonical coset decomposition SO(3)xO(5) of Lie algebra SU(3). The cubical eigen-space and eigen-elements are the parental state and frame, and the other solids are a range of transition matrix elements and portions adapting to the spherical root vector symmetries and so reproducibly reproducing the elementary particle spectroscopy, including a modular, truncated octahedron nano-composition of the Electron which piecemeal enter into molecular structures or compressed to each
Beam diagnostics for low energy beams
Directory of Open Access Journals (Sweden)
J. Harasimowicz
2012-12-01
Full Text Available Low-energy ion and antimatter beams are very attractive for a number of fundamental studies. The diagnostics of such beams, however, is a challenge due to low currents, down to only a few thousand particles per second, and a significant fraction of energy loss in matter at keV beam energies. A modular set of particle detectors has been developed to suit the particular beam diagnostic needs of the ultralow-energy storage ring (USR) at the future facility for low-energy antiproton and ion research, accommodating very low beam intensities at energies down to 20 keV. The detectors include beam-profile monitors based on scintillating screens and secondary electron emission, sensitive Faraday cups for absolute intensity measurements, and capacitive pickups for beam position monitoring. In this paper, the design of all detectors is presented in detail and results from beam measurements are shown. The resolution limits of all detectors are described and options for further improvement summarized. Whilst initially developed for the USR, the instrumentation described in this paper is also well suited for use in other low-intensity, low-energy accelerators, storage rings, and beam lines.
Studies of the beam-beam interaction for the LHC
International Nuclear Information System (INIS)
Krishnagopal, S.; Furman, M.A.; Turner, W.C.
1999-01-01
The authors have used the beam-beam simulation code CBI to study the beam-beam interaction for the LHC. We find that for nominal LHC parameters, and assuming only one bunch per beam, there are no collective (coherent) beam-beam instabilities. We have investigated the effect of sweeping one of the beams around the other (a procedure that could be used as a diagnostic for head-on beam-beam collisions). We find that this does not cause any problems at the nominal current, though at higher currents there can be beam blow-up and collective beam motion as a consequence of quadrupole collective effects.
Regularization of plurisubharmonic functions with a net of good points
Li, Long
2017-01-01
The purpose of this article is to present a new regularization technique for quasi-plurisubharmonic functions on a compact Kaehler manifold. The idea is to regularize the function on local coordinate balls first, and then glue the pieces together. Therefore, all the higher-order terms in the complex Hessian of this regularization vanish at the center of each coordinate ball, and the centers eventually build a delta-net of the manifold.
Energy Technology Data Exchange (ETDEWEB)
Saint, A; Laird, J S; Bardos, R A; Legge, G J.F. [Melbourne Univ., Parkville, VIC (Australia). School of Physics; Nishijima, T; Sekiguchi, H [Electrotechnical Laboratory, Tsukuba (Japan).
1994-12-31
Since the development of Scanning Transmission Ion Microscopy (STIM) imaging in 1983, many low-current beam techniques have been developed for the scanning (ion) microprobe. These include STIM tomography, Ion Beam Induced Current, Ion Beam Micromachining and Microlithography, and Ionoluminescence. Most of these techniques utilise beam currents of 10^-15 A down to single ions controlled by beam switching techniques. This paper will discuss some of the low beam current techniques mentioned above and indicate some of their recent applications at MARC. A new STIM technique will be introduced that can be used to obtain Z-contrast with STIM resolution. 4 refs., 3 figs.
International Nuclear Information System (INIS)
Keller, Kai Johannes
2010-04-01
The present work contains a consistent formulation of the methods of dimensional regularization (DimReg) and minimal subtraction (MS) in Minkowski position space. The methods are implemented into the framework of perturbative Algebraic Quantum Field Theory (pAQFT). The developed methods are used to solve the Epstein-Glaser recursion for the construction of time-ordered products in all orders of causal perturbation theory. A solution is given in terms of a forest formula in the sense of Zimmermann. A relation to the alternative approach to renormalization theory using Hopf algebras is established. (orig.)
Formation of Barents Sea Branch Water in the north-eastern Barents Sea
Directory of Open Access Journals (Sweden)
Vidar S. Lien
2013-09-01
Full Text Available The Barents Sea throughflow accounts for approximately half of the Atlantic Water advection to the Arctic Ocean, while the other half flows through Fram Strait. Within the Barents Sea, the Atlantic Water undergoes considerable modifications before entering the Arctic Ocean through the St. Anna Trough. While the inflow area in the south-western Barents Sea is regularly monitored, oceanographic data from the outflow area to the north-east are very scarce. Here, we use conductivity, temperature and depth data from August/September 2008 to describe in detail the water masses present in the downstream area of the Barents Sea, their spatial distribution and transformations. Both Cold Deep Water, formed locally through winter convection and ice-freezing processes, and Atlantic Water, modified mainly through atmospheric cooling, contribute directly to the Barents Sea Branch Water. As a consequence, it consists of a dense core characterized by a temperature and salinity maximum associated with the Atlantic Water, in addition to the colder, less saline and less dense core commonly referred to as the Barents Sea Branch Water core. The denser core likely constitutes a substantial part of the total flow, and it is more saline and considerably denser than the Fram Strait branch as observed within the St. Anna Trough. Despite the recent warming of the Barents Sea, the Barents Sea Branch Water is denser than observed in the 1990s, and the bottom water observed in the St. Anna Trough matches the potential density at 2000 m depth in the Arctic Ocean.
Higher order total variation regularization for EIT reconstruction.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut
2018-01-08
Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on electrical boundary measurements. This is an ill-posed inverse problem, and its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher-order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: reconstructed conductivity changes located on selected vertical lines. For each reconstructed image, as well as the ground truth image, conductivity changes along the selected left and right vertical lines are plotted. In these plots, GT stands for ground truth, TV for the total variation method, and TGV for the total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also shown.
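The staircase behaviour mentioned above can be illustrated with a toy calculation: first-order total variation assigns the same penalty to a smooth ramp and to a step between the same endpoints, so TV regularization has no incentive to prefer the smooth profile, whereas a second-order penalty (the ingredient TGV adds) distinguishes them. A minimal sketch in Python (illustrative only, not the paper's FEM implementation):

```python
def tv(x):
    # first-order total variation: sum of |x[i+1] - x[i]|
    return sum(abs(b - a) for a, b in zip(x, x[1:]))

def tv2(x):
    # second-order penalty: sum of absolute second differences
    return sum(abs(x[i+1] - 2*x[i] + x[i-1]) for i in range(1, len(x) - 1))

ramp = [0, 1, 2, 3, 4]   # smooth monotone profile
step = [0, 0, 2, 4, 4]   # staircase profile with the same endpoints

print(tv(ramp), tv(step))    # 4 4 -> first-order TV cannot tell them apart
print(tv2(ramp), tv2(step))  # 0 4 -> a second-order term penalizes the staircase
```

Because the first-order penalty is identical for both profiles, noise in the data can freely push a TV-regularized solution toward the staircase; the second-order term breaks the tie in favor of the ramp.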
International Nuclear Information System (INIS)
Schwartz, M.L.
1979-01-01
Study of Cenozoic Era sea levels shows a continual lowering of sea level through the Tertiary Period. This overall drop in sea level accompanied the Pleistocene Epoch glacio-eustatic fluctuations. The considerable change of Pleistocene Epoch sea level is most directly attributable to the glacio-eustatic factor, with a time span of 10^5 years and an amplitude, or range, of approximately 200 m. The lowering of sea level since the end of the Cretaceous Period is attributed to subsidence and mid-ocean ridges. The maximum rate of sea level change is 4 cm/y. At present, mean sea level is rising at about 3 to 4 mm/y. Glacio-eustacy and tectono-eustacy are the parameters for predicting sea level changes in the next 1 my. Glacio-eustatic sea level changes may be projected on the basis of the Milankovitch Theory. Predictions about tectono-eustatic sea level changes, however, involve predictions about future tectonic activity and are therefore somewhat difficult to make. Coastal erosion and sedimentation are affected by changes in sea level. Erosion rates for soft sediments may be as much as 50 m/y. The maximum sediment accumulation rate is 20 m/100 y.
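The rates quoted above are mutually consistent, as a quick unit check shows (illustrative arithmetic only): a glacio-eustatic range of roughly 200 m over a 10^5-year fluctuation corresponds to a mean rate of 2 mm/y, well below the quoted maximum of 4 cm/y:

```python
amplitude_m = 200.0   # glacio-eustatic range (m)
period_y = 1e5        # time span of one fluctuation (years)

mean_rate_mm_per_y = amplitude_m * 1000.0 / period_y
max_rate_mm_per_y = 4.0 * 10.0   # 4 cm/y expressed in mm/y

print(mean_rate_mm_per_y)  # 2.0
assert mean_rate_mm_per_y < max_rate_mm_per_y
```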
Measuring the sea quark polarization
International Nuclear Information System (INIS)
Makdisi, Y.
1993-01-01
Spin is a fundamental degree of freedom, and measuring the spin structure functions of the nucleon should be a basic endeavor for hadron physics. Polarization experiments have been the domain of fixed target experiments. Over the years, large transverse asymmetries have been observed where the prevailing QCD theories predicted little or no asymmetry, and conversely, the latest deep inelastic scattering experiments of polarized leptons from polarized targets point to the possibility that little of the nucleon spin is carried by the valence quarks. The possibility of colliding high-luminosity polarized proton beams in the Brookhaven Relativistic Heavy Ion Collider (RHIC) provides a great opportunity to extend these studies and systematically probe the spin-dependent parton distributions, especially in those reactions that are inaccessible to current experiments. This presentation focuses on the measurement of sea quark, and possibly strange quark, polarization utilizing the approved RHIC detectors.
Beam Dynamics and Beam Losses - Circular Machines
Kain, V
2016-01-01
A basic introduction to transverse and longitudinal beam dynamics as well as the most relevant beam loss mechanisms in circular machines will be presented in this lecture. This lecture is intended for physicists and engineers with little or no knowledge of this subject.
Electron beam instabilities in gyrotron beam tunnels
International Nuclear Information System (INIS)
Pedrozzi, M.; Alberti, S.; Hogge, J.P.; Tran, M.Q.; Tran, T.M.
1997-10-01
Electron beam instabilities occurring in a gyrotron electron beam can induce an energy spread which may significantly deteriorate the gyrotron efficiency. Three types of instabilities are considered to explain the large discrepancy found between the theoretical and experimental efficiency in the case of quasi-optical gyrotrons (QOGs): the electron cyclotron maser instability, the Bernstein instability and the Langmuir instability. The low magnetic field gradient in the drift tubes of QOGs allows the electron cyclotron maser instability to develop at very low electron beam currents. Experimental measurements show that, with a proper choice of absorbing structures in the beam tunnel, this instability can be suppressed. At high beam currents, the electrostatic Bernstein instability can induce a significant energy spread at the entrance of the interaction region. The induced energy spread scales approximately linearly with the electron beam density, and for QOGs the beam density is significantly higher than that of an equivalent cylindrical-cavity gyrotron. (author)
Beam forming system modernization at the MMF linac proton injector
Derbilov, V I; Nikulin, E S; Frolov, O T
2001-01-01
Improvements to the insulation of the beam forming system (BFS) of the MMF linac proton injector ion source are reported. The mean beam current, and accordingly the heating of the BFS electrodes, increased when the MMF linac began to operate regularly in long beam sessions at a 50 Hz pulse repetition rate. As a result, the BFS electrode high-voltage insulation, previously made of two solid-cylinder insulators rigidly glued in series, lost its mechanical and electrical durability. Replacing the large-diameter (160 mm) cylindrical insulator with four small-diameter (20 mm) tubular rods has improved vacuum conditions in the beam-forming region and allowed failure-free operation at beam currents up to 250 mA, with extraction and focusing voltages up to 25 and 40 kV, respectively. Moreover, the construction allows axial movement of the electrodes, and the insulators are free from mechanical stress caused by transverse thermal expansion of the electrodes.
Detecting regularities in soccer dynamics: A T-pattern approach
Directory of Open Access Journals (Sweden)
Valentino Zurloni
2014-01-01
Full Text Available Game dynamics in professional soccer matches is a complex phenomenon that has not been satisfactorily captured by the traditional quantitative approaches applied to team sports. The aim of this study is to detect these dynamics through temporal pattern analysis; specifically, we seek to reveal the hidden but stable structures underlying the interactive situations that determine attacking actions in soccer. The methodological approach is based on an observational design, supported by digital recordings and computerized analysis. Data were analyzed with the software Theme 6 beta, which detects the temporal and sequential structure of data series, revealing patterns that occur repeatedly, regularly or irregularly, within an observation period. Theme detected many temporal patterns (T-patterns) in the soccer matches analyzed, with notable differences between won and lost matches: the number of distinct T-patterns detected was higher for lost matches and lower for won ones, whereas the number of coded events was similar. Theme and T-patterns extend research possibilities beyond frequency-based performance analysis, making this methodology effective for research and a procedural support for sports analysis. Our results indicate that further research is needed on possible connections between the detection of these temporal structures and human observation of soccer performance. This approach would support both team members and coaches, allowing a better understanding of game dynamics and providing information not available from traditional methods.
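The core idea behind T-pattern detection, finding event pairs that recur at a near-constant time lag more often than chance would allow, can be sketched in a few lines. This is a deliberately naive toy, not the algorithm implemented in Theme; the function name, labels, and thresholds are invented for illustration:

```python
from collections import defaultdict

def naive_t_patterns(events, max_lag=5, min_count=3):
    """events: list of (time, label) tuples. Return the set of
    (a, b, lag) triples where label b follows label a at the same
    lag at least min_count times."""
    lag_counts = defaultdict(int)
    for t1, a in events:
        for t2, b in events:
            if 0 < t2 - t1 <= max_lag:
                lag_counts[(a, b, t2 - t1)] += 1
    return {k for k, c in lag_counts.items() if c >= min_count}

# Synthetic stream: a 'pass' is always followed by a 'shot' 2 ticks later.
stream = []
for t in (0, 10, 20, 30):
    stream.append((t, "pass"))
    stream.append((t + 2, "shot"))

patterns = naive_t_patterns(stream)
print(("pass", "shot", 2) in patterns)  # True
```

Theme's actual algorithm additionally tests each candidate interval against a null model of random event placement and builds longer patterns hierarchically from detected pairs.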
Successful Beam-Beam Tuneshift Compensation
Energy Technology Data Exchange (ETDEWEB)
Bishofberger, Kip Aaron [Univ. of California, Los Angeles, CA (United States)
2005-01-01
The performance of synchrotron colliders has been limited by the beam-beam limit, a maximum tuneshift that colliding bunches can sustain. Due to bunch-to-bunch tune variation and intra-bunch tune spread, larger tuneshifts produce severe emittance growth. Breaking through this constraint has been viewed as impossible for several decades. This dissertation introduces the physics of ultra-relativistic synchrotrons and low-energy electron beams, with emphasis placed on the limits of the Tevatron and the needs of a tuneshift-compensation device. A detailed analysis of the Tevatron Electron Lens (TEL) is given, comparing theoretical models to experimental data whenever possible. Finally, results of Tevatron operations with inclusion of the TEL are presented and analyzed. It is shown that the TEL provides a way to shatter the previously inescapable beam-beam limit.
BEAMS3D Neutral Beam Injection Model
Energy Technology Data Exchange (ETDEWEB)
Lazerson, Samuel
2014-04-14
With the advent of applied 3D fields in tokamaks and modern high-performance stellarators, a need has arisen to address non-axisymmetric effects on neutral beam heating and fueling. We report on the development of a fully 3D neutral beam injection (NBI) model, BEAMS3D, which addresses this need by coupling 3D equilibria to a guiding center code capable of modeling neutral and charged particle trajectories across the separatrix and into the plasma core. Ionization, neutralization, charge-exchange, viscous velocity reduction, and pitch angle scattering are modeled with the ADAS atomic physics database [1]. Benchmark calculations are presented to validate the collisionless particle orbits, neutral beam injection model, frictional drag, and pitch angle scattering effects. A calculation of neutral beam heating in the NCSX device is performed, highlighting the capability of the code to handle 3D magnetic fields.
Searching for Dark Photons with the SeaQuest Spectrometer
Uemura, Sho; SeaQuest Collaboration
2017-09-01
The existence of a dark sector, containing families of particles that do not couple directly to the Standard Model, is motivated as a possible model for dark matter. A ``dark photon'' - a massive vector boson that couples weakly to electric charge - is a common component of dark sector models. The SeaQuest spectrometer at Fermilab is designed to detect dimuon pairs produced by the interaction of a 120 GeV proton beam with a rotating set of thin fixed targets. An iron-filled magnet downstream of the target, 5 meters in length, serves as a beam dump. The SeaQuest spectrometer is sensitive to dark photons that are mostly produced in the beam dump and decay to dimuons, and a SeaQuest search for dark sector particles was approved as Fermilab experiment E1067. As part of E1067, a displaced-vertex trigger was built, installed and commissioned this year. This trigger uses two planes of extruded scintillators to identify dimuons originating far downstream of the target, and is sensitive to dark photons that travel deep inside the beam dump before decaying to dimuons. This trigger will be used to take data parasitically with the primary SeaQuest physics program. In this talk I will present the displaced-vertex trigger and its performance, and projected sensitivity from future running.
Novel multi-beam radiometers for accurate ocean surveillance
DEFF Research Database (Denmark)
Cappellin, C.; Pontoppidan, K.; Nielsen, P. H.
2014-01-01
Novel antenna architectures for real aperture multi-beam radiometers providing high resolution and high sensitivity for accurate sea surface temperature (SST) and ocean vector wind (OVW) measurements are investigated. On the basis of the radiometer requirements set for future SST/OVW missions...
Plum, M.A.
2016-01-01
Beam loss is a critical issue in high-intensity accelerators, and much effort is expended during both the design and operation phases to minimize the loss and keep it at manageable levels. As new accelerators become ever more powerful, beam loss becomes even more critical. Linacs for H- ion beams, such as the one at the Oak Ridge Spallation Neutron Source, have many more loss mechanisms compared to H+ (proton) linacs, such as the one being designed for the European Spallation Source. Interesting H- beam loss mechanisms include residual gas stripping, H+ capture and acceleration, field stripping, black-body radiation, and the recently discovered intra-beam stripping mechanism. Beam halo formation, and ion source or RF turn-on/off transients, are examples of beam loss mechanisms common to both H+ and H- accelerators. Machine protection systems play an important role in limiting the beam loss.
Charged corpuscular beam detector
Energy Technology Data Exchange (ETDEWEB)
Hikawa, H; Nishikawa, Y
1970-09-29
The present invention relates to a charged particle beam detector which prevents transient phenomena from disturbing the path and focusing of a charged particle beam travelling along a mounted axle. The present invention provides a charged particle beam detector capable of reducing its reaction to changes in energy of the charged particle beam even if the relative angle between the mounted axle and the scanner is unstable. The detector is characterized by mounting electrically conductive metal pieces of high melting point onto the face of a stepped, heat-resistant electrically insulating material such that the pieces partially overlap each other and individually provide electric signals, whereby the detector is no longer affected by the beam. The thickness of each metal piece is selected so that no eddy current is induced in it by an incident beam; thus the incident beam is not affected. The detector is capable of detecting a misaligned beam since the metal pieces partially overlap each other.
Regular Breakfast and Blood Lead Levels among Preschool Children
Directory of Open Access Journals (Sweden)
Needleman Herbert
2011-04-01
Full Text Available Abstract Background Previous studies have shown that fasting increases lead absorption in the gastrointestinal tract of adults. Regular meals/snacks are recommended as a nutritional intervention for lead poisoning in children, but epidemiological evidence of links between fasting and blood lead levels (B-Pb) is rare. The purpose of this study was to examine the association between eating a regular breakfast and B-Pb among children using data from the China Jintan Child Cohort Study. Methods Parents completed a questionnaire regarding children's breakfast-eating habit (regular or not), demographics, and food frequency. Whole blood samples were collected from 1,344 children for the measurements of B-Pb and micronutrients (iron, copper, zinc, calcium, and magnesium). B-Pb and other measures were compared between children with and without regular breakfast. Linear regression modeling was used to evaluate the association between regular breakfast and log-transformed B-Pb. The association between regular breakfast and risk of lead poisoning (B-Pb ≥ 10 μg/dL) was examined using logistic regression modeling. Results Median B-Pb among children who ate breakfast regularly and those who did not were 6.1 μg/dL and 7.2 μg/dL, respectively. Eating breakfast was also associated with greater zinc blood levels. Adjusting for other relevant factors, the linear regression model revealed that eating breakfast regularly was significantly associated with lower B-Pb (beta = -0.10 units of log-transformed B-Pb compared with children who did not eat breakfast regularly, p = 0.02). Conclusion The present study provides some initial human data supporting the notion that eating a regular breakfast might reduce B-Pb in young children. To our knowledge, this is the first human study exploring the association between breakfast frequency and B-Pb in young children.
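Because the outcome was log-transformed, the reported coefficient translates into a multiplicative effect: a beta of -0.10 on log B-Pb corresponds to roughly a 10% lower geometric-mean blood lead level for regular breakfast eaters. A quick check (assuming the natural logarithm, which the abstract does not state):

```python
import math

beta = -0.10              # coefficient on log-transformed B-Pb
ratio = math.exp(beta)    # multiplicative change in the geometric mean
print(round(ratio, 3))    # 0.905 -> about 9.5% lower B-Pb
```

This is consistent in direction with the unadjusted medians (6.1 vs. 7.2 μg/dL), although the adjusted effect is smaller than the raw median difference.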
Salish Sea Genetics - Salish Sea genetic inventory
National Oceanic and Atmospheric Administration, Department of Commerce — The Salish Sea comprises most of the Puget Sound water area. Marine species are generally assemblages of discrete populations occupying various ecological niches....
Advanced electron beam techniques
International Nuclear Information System (INIS)
Hirotsu, Yoshihiko; Yoshida, Yoichi
2007-01-01
A hundred years after the discovery of the electron, there are now many applications of electron beams in science and technology. In this report, we review two important applications of electron beams: electron microscopy and pulsed electron beams. Advanced electron microscopy techniques for investigating atomic and electronic structures, and pulsed electron beams for investigating time-resolved structural change, are described. (author)
Energy Technology Data Exchange (ETDEWEB)
Ekdahl, Carl August Jr. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-10-14
Beam dynamics issues are assessed for a new linear induction electron accelerator being designed for flash radiography of large explosively driven hydrodynamic experiments. Special attention is paid to equilibrium beam transport, possible emittance growth, and beam stability. It is concluded that a radiographic-quality beam can be produced if engineering standards and construction details are equivalent to those of the present radiography accelerators at Los Alamos.
International Nuclear Information System (INIS)
Dolder, K.T.
1976-01-01
Many natural phenomena can only be properly understood if one has a detailed knowledge of interactions involving atoms, molecules, ions, electrons or photons. In the laboratory these processes are often studied by preparing beams of two types of particle and observing the reactions which occur when the beams intersect. Some of the more interesting of these crossed beam experiments and their results are discussed. Proposals to extend colliding beam techniques to high energy particle physics are also outlined. (author)
An Electromagnetic Beam Converter
DEFF Research Database (Denmark)
2009-01-01
The present invention relates to an electromagnetic beam converter and a method for conversion of an input beam of electromagnetic radiation having a bell shaped intensity profile a(x,y) into an output beam having a prescribed target intensity profile l(x',y') based on a further development...
International Nuclear Information System (INIS)
Mosher, D.; Cooperstein, G.
1993-01-01
This report contains papers on the following topics: ion beams; electron beams, bremsstrahlung, and diagnostics; radiating Z-pinches; microwaves; electron lasers; advanced accelerators; beam and pulsed power applications; and pulsed power. These papers have been indexed separately elsewhere.
International Nuclear Information System (INIS)
Berger, H.; Herr, H.; Linnecar, T.; Millich, A.; Milss, F.; Rubbia, C.; Taylor, C.S.; Meer, S. van der; Zotter, B.
1980-01-01
The group concerned itself with the analysis of cooling systems whose purpose is to maintain the quality of the high energy beams in the SPS in spite of gas scattering, RF noise, magnet ripple and beam-beam interactions. Three types of systems were discussed. The status of these activities is discussed below. (orig.)
Chimeric mitochondrial peptides from contiguous regular and swinger RNA.
Seligmann, Hervé
2016-01-01
Previous mass spectrometry analyses described human mitochondrial peptides entirely translated from swinger RNAs, RNAs where polymerization systematically exchanged nucleotides. Exchanges follow one of 23 bijective transformation rules, nine symmetric exchanges (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric exchanges (X → Y → Z → X, e.g. A → C → G → A), multiplying DNA's protein-coding potential by 24. Abrupt switches from regular to swinger polymerization produce chimeric RNAs. Here, human mitochondrial proteomic analyses assuming abrupt switches between regular and swinger transcription detect chimeric peptides, encoded by part regular, part swinger RNA. Contiguous regular- and swinger-encoded residues within single peptides are stronger evidence for translation of swinger RNA than the previously detected, entirely swinger-encoded peptides: regular parts are positive controls matched with contiguous swinger parts, increasing confidence in the results. Chimeric peptides are 200 × rarer than swinger peptides (3/100,000 versus 6/1000). Among 186 peptides with more than 8 residues in each of the regular and swinger parts, the regular parts of eleven chimeric peptides correspond to six of the thirteen recognized mitochondrial protein-coding genes. Chimeric peptides matching partly regular proteins are rarer and less expressed than chimeric peptides matching non-coding sequences, suggesting targeted degradation of misfolded proteins. The present results strengthen the hypothesis that the short mitogenome encodes far more proteins than hitherto assumed. Entirely swinger-encoded proteins could exist.
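The count of 23 bijective transformation rules follows from elementary combinatorics: the four nucleotides admit 4! = 24 permutations, of which one is the identity; the nine symmetric exchanges are the non-identity involutions (six single swaps plus three double swaps), and the remaining fourteen are the 3- and 4-cycles. A short enumeration confirming this (illustrative, not code from the paper):

```python
from itertools import permutations

bases = "ACGT"
identity = tuple(bases)

# all non-identity bijections of the four nucleotides
perms = [p for p in permutations(bases) if p != identity]

def is_involution(p):
    # symmetric exchange (X <-> Y): applying it twice restores every base
    mapping = dict(zip(bases, p))
    return all(mapping[mapping[x]] == x for x in bases)

symmetric = [p for p in perms if is_involution(p)]
asymmetric = [p for p in perms if not is_involution(p)]

print(len(perms), len(symmetric), len(asymmetric))  # 23 9 14
```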
Turán type inequalities for regular Coulomb wave functions
Baricz, Árpád
2015-01-01
Turán, Mitrinović-Adamović and Wilker type inequalities are deduced for regular Coulomb wave functions. The proofs are based on a Mittag-Leffler expansion for the regular Coulomb wave function, which may be of independent interest. Moreover, some complete monotonicity results concerning the Coulomb zeta functions and some interlacing properties of the zeros of Coulomb wave functions are given.
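For context, a Turán-type inequality for a family of special functions has the generic shape below, shown here schematically for the regular Coulomb wave function $F_L(\eta,\rho)$ of order $L$; the precise parameter ranges and constants for the Coulomb case are what the paper establishes:

```latex
F_L(\eta,\rho)^2 - F_{L-1}(\eta,\rho)\, F_{L+1}(\eta,\rho) > 0
```

Inequalities of this determinant form encode log-concavity of the sequence $L \mapsto F_L(\eta,\rho)$ and are the template that the Mitrinović-Adamović and Wilker variants refine.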
Regularization and Complexity Control in Feed-forward Networks
Bishop, C. M.
1995-01-01
In this paper we consider four alternative approaches to complexity control in feed-forward networks based respectively on architecture selection, regularization, early stopping, and training with noise. We show that there are close similarities between these approaches and we argue that, for most practical applications, the technique of regularization should be the method of choice.
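The effect of the regularization approach discussed above can be seen in the simplest possible setting: for one-dimensional least squares, a weight-decay (ridge) penalty λ shrinks the fitted weight smoothly toward zero, trading bias for lower variance. A minimal sketch (generic ridge regression, not the paper's neural-network experiments):

```python
def ridge_weight(xs, ys, lam):
    # closed-form 1-D ridge solution: w = sum(x*y) / (sum(x^2) + lambda)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]   # noiseless data from y = 2x

for lam in (0.0, 1.0, 10.0):
    print(lam, ridge_weight(xs, ys, lam))
# the fitted weight decreases monotonically from 2.0 as lambda grows
```

Architecture selection, early stopping, and training with noise constrain effective model complexity in analogous ways, which is the similarity the paper makes precise.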
Optimal Embeddings of Distance Regular Graphs into Euclidean Spaces
F. Vallentin (Frank)
2008-01-01
In this paper we give a lower bound for the least distortion embedding of a distance regular graph into Euclidean space. We use the lower bound for finding the least distortion for Hamming graphs, Johnson graphs, and all strongly regular graphs. Our technique involves semidefinite programming.
Degree-regular triangulations of torus and Klein bottle
Indian Academy of Sciences (India)
A triangulation of a connected closed surface is called degree-regular if each of its vertices has the same degree. In [5], Datta and Nilakantan have classified all the degree-regular triangulations of closed surfaces on at most 11 vertices.
Adaptive Regularization of Neural Networks Using Conjugate Gradient
DEFF Research Database (Denmark)
Goutte, Cyril; Larsen, Jan
1998-01-01
Andersen et al. (1997) and Larsen et al. (1996, 1997) suggested a regularization scheme which iteratively adapts regularization parameters by minimizing validation error using simple gradient descent. In this contribution we present an improved algorithm based on the conjugate gradient technique. Numerical experiments with feedforward neural networks successfully demonstrate improved generalization ability and lower computational cost.
Strictly-regular number system and data structures
DEFF Research Database (Denmark)
Elmasry, Amr Ahmed Abd Elmoneim; Jensen, Claus; Katajainen, Jyrki
2010-01-01
We introduce a new number system that we call the strictly-regular system, which efficiently supports the operations: digit-increment, digit-decrement, cut, concatenate, and add. Compared to other number systems, the strictly-regular system has distinguishable properties. It is superior to the re...
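To make the flavor of such number systems concrete, here is a toy redundant binary counter with digits {0, 1, 2}, in which an increment touches the lowest digit and then fixes at most one carry; this is the basic trick that regular-style systems refine with stronger invariants. It is an illustrative sketch only, not the strictly-regular system defined in the paper:

```python
def increment(digits):
    """digits[i] is the coefficient of 2**i; digits may hold 0, 1 or 2."""
    digits[0] += 1
    # fix the lowest digit that is >= 2: d -> d - 2, carry 1 upward;
    # this step is value-neutral since -2*2**i + 2**(i+1) == 0
    for i, d in enumerate(digits):
        if d >= 2:
            digits[i] -= 2
            if i + 1 == len(digits):
                digits.append(0)
            digits[i + 1] += 1
            break
    return digits

def value(digits):
    return sum(d * (1 << i) for i, d in enumerate(digits))

digits = [0]
for n in range(1, 300):
    increment(digits)
    assert value(digits) == n   # value stays correct after every increment
print(value(digits))  # 299
```

Because each increment does a constant amount of work regardless of the counter's value, there is no worst-case cascade of carries as in ordinary binary; the strictly-regular system extends this idea to support decrements, cuts, and concatenation as well.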
Inclusion Professional Development Model and Regular Middle School Educators
Royster, Otelia; Reglin, Gary L.; Losike-Sedimo, Nonofo
2014-01-01
The purpose of this study was to determine the impact of a professional development model on regular education middle school teachers' knowledge of best practices for teaching inclusive classes and attitudes toward teaching these classes. There were 19 regular education teachers who taught the core subjects. Findings for Research Question 1…
The equivalence problem for LL- and LR-regular grammars
Nijholt, Antinus; Gecsec, F.
It will be shown that the equivalence problem for LL-regular grammars is decidable. Apart from extending the known result for LL(k) grammar equivalence to LL-regular grammar equivalence, we obtain an alternative proof of the decidability of LL(k) equivalence. The equivalence problem for LL-regular
The Effects of Regular Exercise on the Physical Fitness Levels
Kirandi, Ozlem
2016-01-01
The purpose of the present research is to investigate the effects of regular exercise on physical fitness levels among sedentary individuals. A total of 65 sedentary male individuals between the ages of 19-45, who had never exercised regularly in their lives, participated in the present research. Of these participants, 35 wanted to be…
Regular perturbations in a vector space with indefinite metric
International Nuclear Information System (INIS)
Chiang, C.C.
1975-08-01
The Klein space is discussed in connection with practical applications. Some lemmas are presented which are to be used for the discussion of regular self-adjoint operators. The criteria for the regularity of perturbed operators are given. (U.S.)
Pairing renormalization and regularization within the local density approximation
International Nuclear Information System (INIS)
Borycki, P.J.; Dobaczewski, J.; Nazarewicz, W.; Stoitsov, M.V.
2006-01-01
We discuss methods used in mean-field theories to treat pairing correlations within the local density approximation. Pairing renormalization and regularization procedures are compared in spherical and deformed nuclei. Both prescriptions give fairly similar results, although the theoretical motivation, simplicity, and stability of the regularization procedure make it a method of choice for future applications
Cognitive Aspects of Regularity Exhibit When Neighborhood Disappears
Chen, Sau-Chin; Hu, Jon-Fan
2015-01-01
Although regularity refers to the compatibility between the pronunciation of a character and the sound of its phonetic component, it has been suggested to be part of consistency, which is defined by neighborhood characteristics. Two experiments demonstrate how the regularity effect is amplified or reduced by neighborhood characteristics and reveal the…
Regularity conditions of the field on a toroidal magnetic surface
International Nuclear Information System (INIS)
Bouligand, M.
1985-06-01
We show that a field vector B which is derived from an analytic canonical potential on an ordinary toroidal surface is regular on this surface when the potential satisfies an elliptic equation (owing to the conservative field) subject to certain conditions of regularity of its coefficients.
47 CFR 76.614 - Cable television system regular monitoring.
2010-10-01
...-137 and 225-400 MHz shall provide for a program of regular monitoring for signal leakage by... in these bands of 20 uV/m or greater at a distance of 3 meters. During regular monitoring, any leakage source which produces a field strength of 20 uV/m or greater at a distance of 3 meters in the...
Analysis of regularized Navier-Stokes equations, 2
Ou, Yuh-Roung; Sritharan, S. S.
1989-01-01
A practically important regularization of the Navier-Stokes equations was analyzed. As a continuation of the previous work, the structure of the attractors characterizing the solutions was studied. Local as well as global invariant manifolds were found. Regularity properties of these manifolds are analyzed.
20 CFR 226.33 - Spouse regular annuity rate.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Spouse regular annuity rate. 226.33 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.33 Spouse regular annuity rate. The final tier I and tier II rates, from §§ 226.30 and 226.32, are...
Beam-beam interaction working group summary
International Nuclear Information System (INIS)
Siemann, R.H.
1995-01-01
The limit in hadron colliders is understood phenomenologically. The beam-beam interaction produces nonlinear resonances and makes the transverse tunes amplitude dependent. Tune spreads result from the latter, and as long as these tune spreads do not overlap low order resonances, the lifetime and performance are acceptable. Experience is that tenth and sometimes twelfth order resonances must be avoided, and the hadron collider limit corresponds roughly to the space available between resonances of that and lower order when operating near the coupling resonance. The beam-beam interaction in e⁺e⁻ colliders is not understood well. This affects the performance of existing colliders and could lead to surprises in new ones. For example, a substantial amount of operator tuning is usually required to reach the performance limit given above, and this tuning has to be repeated after each major shutdown. The usual interpretation is that colliding beam performance is sensitive to small lattice errors, and these are being reduced during tuning. It is natural to ask what these errors are, how a lattice can be characterized to minimize tuning time, and what aspects of a lattice should receive particular attention when a new collider is being designed. The answers to this type of question are not known, and developing ideas for calculations, simulations and experiments that could illuminate the details of the beam-beam interaction was the primary working group activity.
Consistent Partial Least Squares Path Modeling via Regularization.
Jung, Sunho; Park, JaeHong
2018-01-01
Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause a loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.
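The core of the remedy above is a ridge penalty. A minimal sketch of why ridge regularization counters multicollinearity (entirely hypothetical data, not the PLSc algorithm itself): with two nearly collinear predictors, ordinary least squares leaves the individual coefficients ill-determined, while a small ridge penalty stabilizes them.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical data: two nearly collinear predictors (not from the study)
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)   # almost a copy of x1
X = np.column_stack([x1, x2])
y = x1 + x2 + rng.normal(size=n)      # true coefficients: 1 and 1

def ridge(X, y, lam):
    """Closed-form ridge estimate: (X'X + lam*I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

b_ols = ridge(X, y, 0.0)    # lam = 0 reduces to ordinary least squares
b_ridge = ridge(X, y, 1.0)  # the ridge penalty shrinks and stabilizes
```

The ridge solution has a strictly smaller norm than the OLS solution, while the identifiable quantity (here the sum of the two coefficients) is barely shrunk.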
Optimal behaviour can violate the principle of regularity.
Trimmer, Pete C
2013-07-22
Understanding decisions is a fundamental aim of behavioural ecology, psychology and economics. The regularity axiom of utility theory holds that a preference between options should be maintained when other options are made available. Empirical studies have shown that animals violate regularity, but this has not been understood from a theoretical perspective; such decisions have therefore been labelled as irrational. Here, I use models of state-dependent behaviour to demonstrate that choices can violate regularity even when behavioural strategies are optimal. I also show that the range of conditions over which regularity should be violated can be larger when options do not always persist into the future. Consequently, utility theory--based on axioms including transitivity, regularity and the independence of irrelevant alternatives--is undermined, because even alternatives that are never chosen by an animal (in its current state) can be relevant to a decision.
Regularized Regression and Density Estimation based on Optimal Transport
Burger, M.
2012-03-11
The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).
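For intuition on a Wasserstein data fidelity, the one-dimensional case has a standard closed form: the optimal transport plan between two equal-size empirical samples matches sorted values. The sketch below shows only this building block, not the paper's variational regularization scheme.

```python
import numpy as np

def wasserstein_1d(a, b):
    """W1 distance between two equal-size 1-D empirical samples.
    In 1-D the optimal transport plan pairs the order statistics,
    so the distance is the mean absolute difference of sorted values."""
    a = np.sort(np.asarray(a, dtype=float))
    b = np.sort(np.asarray(b, dtype=float))
    return float(np.mean(np.abs(a - b)))

# Translating a sample by c moves it a Wasserstein distance of exactly c
samples = np.random.default_rng(0).normal(size=100)
```

This metric compares distributions by how far mass must move, which is why it behaves well as a fidelity term between discrete samples and smooth densities.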
Laplacian manifold regularization method for fluorescence molecular tomography
He, Xuelei; Wang, Xiaodong; Yi, Huangjian; Chen, Yanrong; Zhang, Xu; Yu, Jingjing; He, Xiaowei
2017-04-01
Sparse regularization methods have been widely used in fluorescence molecular tomography (FMT) for stable three-dimensional reconstruction. Generally, ℓ1-regularization-based methods allow for utilizing the sparsity nature of the target distribution. However, in addition to sparsity, the spatial structure information should be exploited as well. A joint ℓ1 and Laplacian manifold regularization model is proposed to improve the reconstruction performance, and two algorithms (with and without the Barzilai-Borwein strategy) are presented to solve the regularization model. Numerical studies and an in vivo experiment demonstrate that the proposed gradient projection-resolved Laplacian manifold regularization method for the joint model performed better than the comparative ℓ1-minimization algorithm in both spatial aggregation and location accuracy.
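A joint ℓ1-plus-Laplacian penalty of the kind described can be minimized by proximal gradient descent: gradient steps on the smooth least-squares and Laplacian terms, soft-thresholding for the ℓ1 term. The sketch below uses a toy 1-D system standing in for the FMT forward model (all names and parameters are illustrative, and the Barzilai-Borwein step-size strategy is omitted).

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear system standing in for the FMT forward model (hypothetical)
n, m = 40, 25
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[10:14] = 1.0                 # sparse, spatially clustered target
b = A @ x_true

# Graph Laplacian of a 1-D chain: penalizes spatially rough solutions
Lap = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def ista_l1_laplacian(A, b, Lap, alpha, beta, steps=3000):
    """Proximal gradient for min 0.5||Ax-b||^2 + beta/2 x'Lx + alpha||x||_1."""
    x = np.zeros(A.shape[1])
    # Step size from the Lipschitz constant of the smooth part
    t = 1.0 / (np.linalg.norm(A, 2) ** 2 + beta * np.linalg.norm(Lap, 2))
    for _ in range(steps):
        g = A.T @ (A @ x - b) + beta * (Lap @ x)    # smooth gradient
        x = x - t * g
        x = np.sign(x) * np.maximum(np.abs(x) - t * alpha, 0.0)  # prox of l1
    return x

x_hat = ista_l1_laplacian(A, b, Lap, alpha=0.01, beta=0.1)
```

The Laplacian term favors the clustered support, which is the structural prior the joint model adds on top of plain sparsity.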
Ice and AIS: ship speed data and sea ice forecasts in the Baltic Sea
Directory of Open Access Journals (Sweden)
U. Löptien
2014-12-01
Full Text Available The Baltic Sea is a seasonally ice-covered marginal sea located in a densely populated area in northern Europe. Severe sea ice conditions have the potential to hinder the intense ship traffic considerably. Thus, sea ice fore- and nowcasts are regularly provided by the national weather services. Typically, the forecast comprises several ice properties that are distributed as prognostic variables, but their actual usefulness is difficult to measure, and the ship captains must determine their relative importance and relevance for optimal ship speed and safety ad hoc. The present study provides a more objective approach by comparing the ship speeds, obtained by the automatic identification system (AIS, with the respective forecasted ice conditions. We find that, despite an unavoidable random component, this information is useful to constrain and rate fore- and nowcasts. More precisely, 62–67% of ship speed variations can be explained by the forecasted ice properties when fitting a mixed-effect model. This statistical fit is based on a test region in the Bothnian Sea during the severe winter 2011 and employs 15 to 25 min averages of ship speed.
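The "share of ship speed variation explained" figure can be illustrated with a plain least-squares fit on synthetic data. The numbers below are entirely hypothetical, and the study itself fitted a mixed-effect model to AIS observations; this sketch only shows how an explained-variance fraction is computed from forecasted ice properties.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
# Hypothetical forecasted ice properties and resulting ship speed (knots)
thickness = rng.uniform(0.0, 0.8, n)        # ice thickness, m
concentration = rng.uniform(0.0, 1.0, n)    # ice concentration, fraction
speed = 14 - 6 * thickness - 4 * concentration + rng.normal(0, 1.5, n)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), thickness, concentration])
beta, *_ = np.linalg.lstsq(X, speed, rcond=None)
resid = speed - X @ beta
r2 = 1 - resid.var() / speed.var()   # share of speed variance explained
```

Both ice properties slow the ship in this toy setup, and `r2` plays the role of the 62-67% figure quoted in the abstract.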
Cazenave, A. A.
2017-12-01
During recent decades, the Arctic region has warmed at a rate about twice that of the rest of the globe. Sea ice melting is increasing and the Greenland ice sheet is losing mass at an accelerated rate. Arctic warming, the decrease in the sea ice cover and fresh water input to the Arctic Ocean may eventually impact Arctic sea level. In this presentation, we review our current knowledge of contemporary Arctic sea level changes. Until the beginning of the 1990s, Arctic sea level variations were essentially deduced from tide gauges located along the Russian and Norwegian coastlines. Since then, high-inclination satellite altimetry missions have allowed measuring sea level over a large portion of the Arctic Ocean (up to 80 degrees north). Measuring sea level in the Arctic by satellite altimetry is challenging because the presence of sea ice cover limits the full capacity of this technique. However, adapted processing of raw altimetric measurements significantly increases the number of valid data, hence the data coverage, from which regional sea level variations can be extracted. Over the altimetry era, positive trend patterns are observed over the Beaufort Gyre and along the east coast of Greenland, while negative trends are reported along the Siberian shelf. On average, over the Arctic region covered by satellite altimetry, the rate of sea level rise since 1992 is slightly less than the global mean sea level rate (of about 3 mm per year). On the other hand, the interannual variability is quite significant. Space gravimetry data from the GRACE mission and ocean reanalyses provide information on the mass and steric contributions to sea level, hence on the sea level budget. Budget studies show that regional sea level trends over the Beaufort Gyre and along the eastern coast of Greenland are essentially due to salinity changes. However, in terms of the regional average, the net steric component contributes little to the observed sea level trend. The sea level budget in the Arctic…
The future for the Global Sea Level Observing System (GLOSS) Sea Level Data Rescue
Bradshaw, Elizabeth; Matthews, Andrew; Rickards, Lesley; Aarup, Thorkild
2016-04-01
Historical sea level data are rare and unrepeatable measurements with a number of applications in climate studies (sea level rise), oceanography (ocean currents, tides, surges), geodesy (national datum), geophysics and geology (coastal land movements) and other disciplines. However, long-term time series are concentrated in the northern hemisphere and there are no records at the Permanent Service for Mean Sea Level (PSMSL) global data bank longer than 100 years in the Arctic, Africa, South America or Antarctica. Data archaeology activities will help fill in the gaps in the global dataset and improve global sea level reconstruction. The Global Sea Level Observing System (GLOSS) is an international programme conducted under the auspices of the WMO-IOC Joint Technical Commission for Oceanography and Marine Meteorology. It was set up in 1985 to collect long-term tide gauge observations and to develop systems and standards "for ocean monitoring and flood warning purposes". At the GLOSS-GE-XIV Meeting in 2015, GLOSS agreed on a number of action items to be developed in the next two years. These were: 1. To explore mareogram digitisation applications, including NUNIEAU (more information available at: http://www.mediterranee.cerema.fr/logiciel-de-numerisation-des-enregistrements-r57.html) and other recent developments in scanning/digitisation software, such as IEDRO's Weather Wizards program, to see if they could be used via a browser. 2. To publicise sea level data archaeology and rescue by: • maintaining and regularly updating the Sea Level Data Archaeology page on the GLOSS website • strengthening links to the GLOSS data centres and data rescue organisations e.g. linking to IEDRO, ACRE, RDA • restarting the sea level data rescue blog with monthly posts. 3. Investigate sources of funding for data archaeology and rescue projects. 4. Propose "Guidelines" for rescuing sea level data. These action items will aid the discovery, scanning, digitising and quality control
Measurement of the sea surface wind speed and direction by an airborne microwave radar altimeter
Energy Technology Data Exchange (ETDEWEB)
Nekrassov, A. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik
2001-07-01
A pilot needs operational information about the wind over sea, as well as the wave height, to ensure the safety of a hydroplane landing on water. Near-surface wind speed and direction can be obtained with an airborne microwave scatterometer, a radar designed to measure the scattering characteristics of a surface. Mostly, narrow-beam antennas are used for such wind measurements. Unfortunately, a narrow-beam microwave antenna is of considerable size, which hampers its placement on an aircraft. In this connection, the possibility of applying a conventional airborne radar altimeter as a scatterometer with a nadir-looking wide-beam antenna, in conjunction with Doppler filtering, for recovering the wind vector over sea is discussed, and measuring algorithms for sea surface wind speed and direction are proposed. The results obtained can be used for the creation of an airborne radar system for operational measurement of sea roughness characteristics and for the safe landing of a hydroplane on water. (orig.)
Arctic Sea Level Reconstruction
DEFF Research Database (Denmark)
Svendsen, Peter Limkilde
Reconstruction of historical Arctic sea level is very difficult due to the limited coverage and quality of tide gauge and altimetry data in the area. This thesis addresses many of these issues, and discusses strategies to help achieve a stable and plausible reconstruction of Arctic sea level from 1950 to today. The primary record of historical sea level, on the order of several decades to a few centuries, is tide gauges. Tide gauge records from around the world are collected in the Permanent Service for Mean Sea Level (PSMSL) database, and include data along the Arctic coasts. A reasonable amount of data is available along the Norwegian and Russian coasts since 1950, and most published research on Arctic sea level extends cautiously from these areas. Very little tide gauge data is available elsewhere in the Arctic, and records of a length of several decades, as generally recommended for sea…
Sedimentation rate in Ariake Sea
International Nuclear Information System (INIS)
Momoshima, Noriyuki; Nishio, Souma; Honza, Eiichi
2004-01-01
The Ariake Sea is a shallow and almost enclosed sea located in western Kyushu, Japan, with an area of about 1,700 km² and a maximum depth of about 30 m in the northern area. The innermost part of the bay is very shallow, and during low tide large mudflats appear and extend up to several km. The tidal range is the highest in Japan, with a maximum of about 6 m. The area is one of Japan's most important fishery regions, with over 40% of the total seaweed production in Japan. In 2001, due to environmental conditions, the seaweed population decreased substantially, with a production drop of about 50%. This was caused by an early winter outbreak of red tide that affected the seaweed quality. One proposed cause for this decline is the land reclamation project in the western part of the Ariake Sea, Isahaya Bay. This project started in April 1997, when more than 3,000 ha of the bay were closed off by a 7 km long seawall. Contaminated water is regularly discharged from the reservoir inside the dike, which has resulted in changes in water flows and perhaps a decrease in tidal range. In 2002, the gates in the dike were opened for two months for a survey campaign, and the seaweed harvest in the winter of 2002-2003 was quite good. However, the problem may be linked to entirely different causes, e.g. an increase in industrial pollution discharge, chemicals used in the disinfection methods for washing seaweed, or a change in water pH after the volcanic eruptions of the Unzen mountain in 1992 and 1993. The purpose of this research is to elucidate the present condition and past history of the Ariake Sea by radiometric methods; the information obtained will help resolve the environmental status of the Ariake Sea and indicate ways to save it. Sea sediment cores were taken on board in 2003 at several points covering the Ariake Sea. Two cores taken in the inner area of the sea were sectioned at 2 cm intervals and subjected to gamma spectrometry to determine sedimentation…
International Nuclear Information System (INIS)
Dracos, Marcos
2011-01-01
Neutrino Super Beams use conventional techniques to significantly increase the neutrino beam intensity compared to present neutrino facilities. An essential part of these facilities is an intense proton driver producing a beam power higher than 1 MW. The protons hit a target able to accept the high proton beam intensity. The produced charged particles are focused by a system of magnetic horns towards the experiment detectors. The main challenge of these projects is to cope with the high beam intensity for many years. New high-power neutrino facilities could be built at CERN, profiting from the eventual construction of a high-power proton driver. The European FP7 Design Study EUROν, among other neutrino beams, studies this Super Beam possibility. This paper will give the latest developments in this direction.
Stoller, D; Muterspaugh, M W; Pollock, R E
1999-01-01
A beam profile monitor based on the deflection of a probe electron beam by the electric field of a stored, electron-cooled proton beam is described and first results are presented. Electrons were transported parallel to the proton beam by a uniform longitudinal magnetic field. The probe beam may be slowly scanned across the stored beam to determine its intensity, position, and size. Alternatively, it may be scanned rapidly over a narrow range within the interior of the stored beam for continuous observation of the changing central density during cooling. Examples of a two dimensional charge density profile obtained from a raster scan and of a cooling alignment study illustrate the scope of measurements made possible by this device.
DEFF Research Database (Denmark)
Stoeglehner, G.; Brown, A.L.; Kørnøv, Lone
2009-01-01
As the field of strategic environmental assessment (SEA) has matured, the focus has moved from the development of legislation, guidelines and methodologies towards improving the effectiveness of SEA. Measuring and of course achieving effectiveness is both complex and challenging. This paper… and the relationship of the SEA to the planning activity itself. This paper focuses on the influence that planners have in these implementation processes, postulating the hypothesis that these are key players in achieving effectiveness in SEA. Based upon implementation theory and empirical experience, the paper…
Study of salinity in aqueous medium using X-Ray beam with MCNP-X code
Energy Technology Data Exchange (ETDEWEB)
Barbosa, Caroline M.; Braz, Delson [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Salgado, César M., E-mail: cbarbosa@nuclear.ufrj.br, E-mail: delson@nuclear.ufrj.br, E-mail: otero@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)
2017-07-01
In offshore production, it is possible that the produced water presents geochemical characteristics corresponding to a mixture of formation water (connate water) and sea water (injection water), and the physical-chemical behavior of the injected water allows a considerable variation in salinity, altering the water/oil ratio during transportation and/or extraction. Injection water is generally used to raise the reservoir pressure, increasing the percentage of extracted oil. This water has a significant amount of salts, which generates some difficulties, such as measuring volume fractions in multiphase systems. One way to check the effects of salinity is to regularly measure the amount of salt present in the water. This work therefore presents a methodology to measure the concentration and types of salts using nuclear techniques through the MCNP-X computational code. The measurement geometry uses an X-ray beam (40-100 keV) and a NaI(Tl) scintillation detector positioned diametrically opposite the source. The studied samples were NaCl, KCl and MgCl₂ salts in aqueous solution. The results demonstrate the possibility of differentiating the formation and injection waters due to differences in salt concentration. (author)
Baltic Earth - Earth System Science for the Baltic Sea Region
Meier, Markus; Rutgersson, Anna; Lehmann, Andreas; Reckermann, Marcus
2014-05-01
… for the Baltic Sea 1960-2100 • Outreach and Communication • Education. The issue of anthropogenic changes and impacts on the Earth system of the Baltic Sea region is recognized as a major topic, and shall receive special attention. The intention of the "Outreach and Communication" and "Education" groups will be to initiate and design potential outreach activities and to provide an arena for scientific exchange and discussion around the Baltic Sea, to communicate findings and exchange views within the Baltic Earth research community internally and to other researchers and society, both professionals and non-professionals. A regular international Baltic Earth Summer School shall be established from 2015. There will be a strong continuity related to BALTEX in infrastructure (secretariat, conferences, publications) and the network (people and institutions).
Three regularities of recognition memory: the role of bias.
Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok
2015-12-01
A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.
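The Mirror Effect under likelihood-ratio decisions can be reproduced in a few lines. In a minimal equal-variance Gaussian model (a textbook illustration, not the authors' experiments), deciding "old" whenever the log-likelihood ratio is positive makes a stronger study condition raise the hit rate and lower the false-alarm rate simultaneously.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

def rates(d_prime):
    """Hit and false-alarm rates when deciding 'old' iff the
    log-likelihood ratio of N(d',1) vs N(0,1) exceeds 0.
    For equal variances this boundary is simply strength > d'/2."""
    old = rng.normal(d_prime, 1.0, N)    # studied items
    new = rng.normal(0.0, 1.0, N)        # unstudied items
    crit = d_prime / 2.0
    return (old > crit).mean(), (new > crit).mean()

hit_weak, fa_weak = rates(1.0)       # weak-memory condition
hit_strong, fa_strong = rates(2.0)   # strong-memory condition
# Mirror Effect: the strong condition yields more hits AND fewer false alarms
```

Because the criterion itself depends on d', strengthening memory moves hits and false alarms in opposite directions, which is the mirror pattern the survey found to be ubiquitous.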
Learning regularization parameters for general-form Tikhonov
International Nuclear Information System (INIS)
Chung, Julianne; Español, Malena I
2017-01-01
Computing regularization parameters for general-form Tikhonov regularization can be an expensive and difficult task, especially if multiple parameters or many solutions need to be computed in real time. In this work, we assume training data is available and describe an efficient learning approach for computing regularization parameters that can be used for a large set of problems. We consider an empirical Bayes risk minimization framework for finding regularization parameters that minimize average errors for the training data. We first extend methods from Chung et al (2011 SIAM J. Sci. Comput. 33 3132–52) to the general-form Tikhonov problem. Then we develop a learning approach for multi-parameter Tikhonov problems, for the case where all involved matrices are simultaneously diagonalizable. For problems where this is not the case, we describe an approach to compute near-optimal regularization parameters by using operator approximations for the original problem. Finally, we propose a new class of regularizing filters, where solutions correspond to multi-parameter Tikhonov solutions, that requires less data than previously proposed optimal error filters, avoids the generalized SVD, and allows flexibility and novelty in the choice of regularization matrices. Numerical results for 1D and 2D examples using different norms on the errors show the effectiveness of our methods. (paper)
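The learning idea, picking the regularization parameter that minimizes average reconstruction error over training pairs, can be sketched with a grid search on a toy smoothing operator. The operator, training signals, noise level, and grid below are all illustrative; the paper's framework generalizes this to multiple parameters, general regularization matrices, and filter design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Mildly ill-conditioned forward operator (hypothetical stand-in)
n = 50
A = np.array([[np.exp(-0.1 * abs(i - j)) for j in range(n)] for i in range(n)])
L = np.eye(n)                          # general-form regularizer (identity here)

def tikhonov(A, b, L, lam):
    """Solve min ||Ax - b||^2 + lam^2 ||Lx||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam**2 * L.T @ L, A.T @ b)

# "Training data": known solutions paired with noisy measurements
x_true = [np.sin(np.linspace(0, 3, n) * (k + 1)) for k in range(5)]
train = [(x, A @ x + 0.01 * rng.normal(size=n)) for x in x_true]

# Learn lambda by minimizing average training error (grid-search stand-in
# for the paper's empirical Bayes risk minimization)
grid = np.logspace(-4, 1, 30)
errs = [np.mean([np.linalg.norm(tikhonov(A, b, L, lam) - x) for x, b in train])
        for lam in grid]
lam_opt = grid[int(np.argmin(errs))]
```

Once learned, `lam_opt` can be reused on new measurements from the same problem class without re-solving a parameter-selection problem at run time, which is the practical point of the learning approach.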
Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.
Sun, Shiliang; Xie, Xijiong
2016-09-01
Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while large numbers of unlabeled examples are available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results on semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.
Remote sensing the phytoplankton seasonal succession of the Red Sea.
Raitsos, Dionysios E; Pradhan, Yaswant; Brewin, Robert J W; Stenchikov, Georgiy; Hoteit, Ibrahim
2013-01-01
The Red Sea holds one of the most diverse marine ecosystems, primarily due to coral reefs. However, knowledge on large-scale phytoplankton dynamics is limited. Analysis of a 10-year high resolution Chlorophyll-a (Chl-a) dataset, along with remotely-sensed sea surface temperature and wind, provided a detailed description of the spatiotemporal seasonal succession of phytoplankton biomass in the Red Sea. Based on MODIS (Moderate-resolution Imaging Spectroradiometer) data, four distinct Red Sea provinces and seasons are suggested, covering the major patterns of surface phytoplankton production. The Red Sea Chl-a depicts a distinct seasonality with maximum concentrations seen during the winter time (attributed to vertical mixing in the north and wind-induced horizontal intrusion of nutrient-rich water in the south), and minimum concentrations during the summer (associated with strong seasonal stratification). The initiation of the seasonal succession occurs in autumn and lasts until early spring. However, weekly Chl-a seasonal succession data revealed that during the month of June, consistent anti-cyclonic eddies transfer nutrients and/or Chl-a to the open waters of the central Red Sea. This phenomenon occurs during the stratified nutrient depleted season, and thus could provide an important source of nutrients to the open waters. Remotely-sensed synoptic observations highlight that Chl-a does not increase regularly from north to south as previously thought. The Northern part of the Central Red Sea province appears to be the most oligotrophic area (opposed to southern and northern domains). This is likely due to the absence of strong mixing, which is apparent at the northern end of the Red Sea, and low nutrient intrusion in comparison with the southern end. Although the Red Sea is considered an oligotrophic sea, sporadic blooms occur that reach mesotrophic levels. The water temperature and the prevailing winds control the nutrient concentrations within the euphotic zone
Forcing of a bottom-mounted circular cylinder by steep regular water waves at finite depth
DEFF Research Database (Denmark)
Paulsen, Bo Terp; Bredmose, Henrik; Bingham, Harry B.
2014-01-01
Forcing by steep regular water waves on a vertical circular cylinder at finite depth was investigated numerically by solving the two-phase incompressible Navier–Stokes equations. Consistently with potential flow theory, boundary layer effects were neglected at the sea bed and at the cylinder ... of secondary load cycles. Special attention was paid to this secondary load cycle and the flow features that cause it. By visual observation and a simplified analytical model it was shown that the secondary load cycle was caused by the strong nonlinear motion of the free surface, which drives a return flow at the back of the cylinder following the passage of the wave crest. The numerical computations were further analysed in the frequency domain. For a representative example, the secondary load cycle was found to be associated with frequencies above the fifth- and sixth-harmonic force component. For the third...
Guide for External Beam Radiotherapy. Procedures 2007
International Nuclear Information System (INIS)
Ardiet, Jean-Michel; Bourhis, Jean; Eschwege, Francois; Gerard, Jean-Pierre; Martin, Philippe; Mazeron, Jean-Jacques; Barillot, Isabelle; Bey, Pierre; Cosset, Jean-Marc; Thomas, Olivier; Bolla, Michel; Bourguignon, Michel; Godet, Jean-Luc; Krembel, David; Valero, Marc; Bara, Christine; Beauvais-March, Helene; Derreumaux, Sylvie; Vidal, Jean-Pierre; Drouard, Jean; Sarrazin, Thierry; Lindecker-Cournil, Valerie; Robin, Sun Hee Lee; Thevenet, Nicolas; Depenweiller, Christian; Le Tallec, Philippe; Ortholan, Cecile; Aimone, Nicole; Baldeschi, Carine; Cantelli, Andree; Estivalet, Stephane; Le Prince, Cyrille; QUERO, Laurent; Costa, Andre; Gerard, Jean-Pierre; Ardiet, Jean-Michel; Bensadoun, Rene-Jean; Bourhis, Jean; Calais, Gilles; Lartigau, Eric; Ginot, Aurelie; Girard, Nicolas; Mornex, Francoise; Bolla, Michel; Chauvet, Bruno; Maingon, Philippe; Martin, Etienne; Azria, David; Gerard, Jean-Pierre; Grehange, Gilles; Hennequin, Christophe; Peiffert, Didier; Toledano, Alain; Belkacemi, Yazid; Courdi, Adel; Belliere, Aurelie; Peignaux, Karine; Mahe, Marc; Bondiau, Pierre-Yves; Kantor, Guy; Lepechoux, Cecile; Carrie, Christian; Claude, Line
2007-01-01
In order to optimize quality and security in the delivery of radiation treatment, the French SFRO (Societe francaise de radiotherapie oncologique) is publishing a Guide for Radiotherapy. The guide was produced according to the HAS (Haute Autorite de sante) methodology of 'structured expert consensus'. The document consists of two parts: a general description of external beam radiation therapy, and chapters describing the technical procedures for the main tumors to be irradiated (24 procedures). For each procedure, special attention is given to dose constraints in the organs at risk. This guide will be regularly updated.
Closedness type regularity conditions in convex optimization and beyond
Directory of Open Access Journals (Sweden)
Sorin-Mihai Grad
2016-09-01
Full Text Available The closedness type regularity conditions have proven during the last decade to be viable alternatives to their more restrictive interiority type counterparts, in both convex optimization and different areas where it was successfully applied. In this review article we de- and reconstruct some closedness type regularity conditions formulated by means of epigraphs and subdifferentials, respectively, for general optimization problems in order to stress that they arise naturally when dealing with such problems. The results are then specialized for constrained and unconstrained convex optimization problems. We also hint towards other classes of optimization problems where closedness type regularity conditions were successfully employed and discuss other possible applications of them.
Capped Lp approximations for the composite L0 regularization problem
Li, Qia; Zhang, Na
2017-01-01
The composite L0 function serves as a sparse regularizer in many applications. The algorithmic difficulty caused by the composite L0 regularization (the L0 norm composed with a linear mapping) is usually bypassed through approximating the L0 norm. We consider in this paper capped Lp approximations with $p>0$ for the composite L0 regularization problem. For each $p>0$, the capped Lp function converges to the L0 norm pointwise as the approximation parameter tends to infinity. We point out tha...
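The pointwise convergence claimed above is easy to illustrate numerically. The sketch below uses one common capped-Lp form, min(β·|t|^p, 1); the abstract does not give the paper's exact parameterization, so the formula and names here are assumptions.

```python
import numpy as np

def capped_lp(t, p, beta):
    # Capped Lp surrogate min(beta * |t|^p, 1); one common form,
    # not necessarily the paper's exact parameterization.
    return np.minimum(beta * np.abs(t) ** p, 1.0)

def l0(t):
    # The "L0 norm" applied elementwise: 1 where t != 0, else 0.
    return (t != 0).astype(float)

t = np.array([0.0, 1e-3, 0.5, 2.0])
for beta in (1.0, 10.0, 1e4, 1e8):
    print(beta, capped_lp(t, p=0.5, beta=beta))
print(l0(t))
# As beta grows, capped_lp(t) approaches l0(t) pointwise: 0 at t = 0, 1 elsewhere.
```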
Generalization Performance of Regularized Ranking With Multiscale Kernels.
Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin
2016-05-01
The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish an upper bound on the generalization error in terms of the complexity of the hypothesis spaces; it shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
Likelihood ratio decisions in memory: three implied regularities.
Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T
2009-06-01
We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
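For the equal-variance Gaussian case, the likelihood-ratio decision axis and the mirror effect it implies can be checked in a few lines. This is a sketch of the standard signal detection setup, not the authors' code; the d' values are arbitrary.

```python
from math import erf, sqrt

def Phi(x):
    # Standard normal CDF.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def hit_fa_rates(d):
    # Equal-variance Gaussian signal detection: old items ~ N(d, 1),
    # new items ~ N(0, 1). With an unbiased likelihood-ratio criterion
    # (lambda >= 1), the decision boundary is x = d/2.
    c = d / 2.0
    hits = 1.0 - Phi(c - d)   # P(respond "old" | old item)
    fas = 1.0 - Phi(c)        # P(respond "old" | new item)
    return hits, fas

for d in (1.0, 2.0):
    h, f = hit_fa_rates(d)
    print(f"d'={d}: hits={h:.3f}, false alarms={f:.3f}")
# Increasing memory strength raises hits AND lowers false alarms
# simultaneously -- the mirror effect implied by a likelihood-ratio axis.
```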
Fluctuations of quantum fields via zeta function regularization
International Nuclear Information System (INIS)
Cognola, Guido; Zerbini, Sergio; Elizalde, Emilio
2002-01-01
Explicit expressions for the expectation values and the variances of some observables, which are bilinear quantities in the quantum fields on a D-dimensional manifold, are derived making use of zeta function regularization. It is found that the variance, related to the second functional variation of the effective action, requires a further regularization and that the relative regularized variance turns out to be 2/N, where N is the number of the fields, thus being independent of the dimension D. Some illustrating examples are worked through. The issue of the stress tensor is also briefly addressed
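For readers unfamiliar with the prescription, the standard zeta-function definitions behind such one-loop expressions are (a textbook summary, not the paper's specific D-dimensional computation):

```latex
\zeta_A(s) \;=\; \sum_n \lambda_n^{-s}
\quad\text{(analytically continued to a neighbourhood of } s = 0\text{)},
\qquad
\log\det A \;:=\; -\,\zeta_A'(0),
\qquad
\Gamma^{(1)} \;=\; \tfrac{1}{2}\,\log\det A \;=\; -\tfrac{1}{2}\,\zeta_A'(0),
```

where the $\lambda_n$ are the eigenvalues of the elliptic fluctuation operator $A$ on the manifold; the analytic continuation renders the formally divergent eigenvalue sum finite.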
Low-Rank Matrix Factorization With Adaptive Graph Regularizer.
Lu, Gui-Fu; Wang, Yong; Zou, Jian
2016-05-01
In this paper, we present a novel low-rank matrix factorization algorithm with adaptive graph regularizer (LMFAGR). We extend the recently proposed low-rank matrix with manifold regularization (MMF) method with an adaptive regularizer. Different from MMF, which constructs an affinity graph in advance, LMFAGR can simultaneously seek graph weight matrix and low-dimensional representations of data. That is, graph construction and low-rank matrix factorization are incorporated into a unified framework, which results in an automatically updated graph rather than a predefined one. The experimental results on some data sets demonstrate that the proposed algorithm outperforms the state-of-the-art low-rank matrix factorization methods.
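The generic objective behind such methods is min ||X − UV||² + λ·tr(V L Vᵀ). The sketch below does plain gradient descent with a fixed Laplacian L, i.e., the MMF-style baseline described above; LMFAGR's distinguishing feature, updating the graph jointly with the factors, is not reproduced here. All parameter values are illustrative.

```python
import numpy as np

def graph_reg_mf(X, Lap, k=2, lam=0.1, lr=0.01, iters=2000, seed=0):
    # Gradient descent on f(U, V) = ||X - U V||_F^2 + lam * tr(V Lap V^T),
    # with Lap a fixed, symmetric graph Laplacian over the columns of X.
    rng = np.random.default_rng(seed)
    d, n = X.shape
    U = 0.1 * rng.standard_normal((d, k))
    V = 0.1 * rng.standard_normal((k, n))
    for _ in range(iters):
        R = X - U @ V                          # reconstruction residual
        U += lr * 2.0 * (R @ V.T)              # descend on ||R||^2 in U
        V += lr * (2.0 * (U.T @ R) - 2.0 * lam * (V @ Lap))
    return U, V

# Toy data: an exactly rank-2 matrix, smoothed along a 6-node chain graph.
n = 6
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # chain adjacency
Lap = np.diag(A.sum(axis=1)) - A                              # graph Laplacian
X = np.arange(24.0).reshape(4, n) / 10.0
U, V = graph_reg_mf(X, Lap)
print(np.linalg.norm(X - U @ V))  # reconstruction error after fitting
```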
Regularization theory for ill-posed problems selected topics
Lu, Shuai
2013-01-01
This monograph is a valuable contribution to the highly topical and extremely productive field of regularisation methods for inverse and ill-posed problems. The author is an internationally outstanding and accepted mathematician in this field. In his book he offers a well-balanced mixture of basic and innovative aspects. He demonstrates new, differentiated viewpoints and important examples for applications. The book demonstrates the current developments in the field of regularization theory, such as multiparameter regularization and regularization in learning theory. The book is written for graduate students and PhDs.
Bolt beam propagation analysis
Shokair, I. R.
BOLT (Beam on Laser Technology) is a rocket experiment to demonstrate electron beam propagation on a laser-ionized plasma channel across the geomagnetic field in the ion-focused regime (IFR). The beam parameters for BOLT are: beam current I_b = 100 A, beam energy of 1-1.5 MeV (gamma = 3-4), and a Gaussian beam and channel of radii r_b = r_c = 1.5 cm. The N+1 ionization scheme is used to ionize atomic oxygen in the upper atmosphere. This scheme utilizes 130 nm light plus three IR lasers to excite and then ionize atomic oxygen. The limiting factor for the channel strength is the energy of the 130 nm laser, which is assumed to be 1.6 mJ for BOLT. At a fixed laser energy and altitude (fixing the density of atomic oxygen), the range can be varied by adjusting the laser tuning, resulting in a neutralization fraction axial profile of the form f(z) = f_0 e^(-z/R), where R is the range. In this paper we consider the propagation of the BOLT beam and calculate the range of the electron beam, taking into account the fact that the erosion rates (magnetic and inductive) vary with beam length as the beam and channel dynamically respond to sausage and hose instabilities.
National Oceanic and Atmospheric Administration, Department of Commerce — Surface temperatures and salinities were collected in the Barents Sea, Sea of Japan, North Atlantic Ocean, Philippine Sea, Red Sea, and South China Sea (Nan Hai)...
DEFF Research Database (Denmark)
Lyhne, Ivar
Dilemmas in SEA Application: The DK Energy Sector. Ivar Lyhne - lyhne@plan.aau.dk. Based on three years of collaborative research, this paper outlines dilemmas in the application of SEA in the strategic development of the Danish energy sector. The dilemmas are based on concrete examples from practice...
On the theory of drainage area for regular and non-regular points
Bonetti, S.; Bragg, A. D.; Porporato, A.
2018-03-01
The drainage area is an important, non-local property of a landscape, which controls surface and subsurface hydrological fluxes. Its role in numerous ecohydrological and geomorphological applications has given rise to several numerical methods for its computation. However, its theoretical analysis has lagged behind. Only recently, an analytical definition for the specific catchment area was proposed (Gallant & Hutchinson. 2011 Water Resour. Res. 47, W05535. (doi:10.1029/2009WR008540)), with the derivation of a differential equation whose validity is limited to regular points of the watershed. Here, we show that such a differential equation can be derived from a continuity equation (Chen et al. 2014 Geomorphology 219, 68-86. (doi:10.1016/j.geomorph.2014.04.037)) and extend the theory to critical and singular points both by applying Gauss's theorem and by means of a dynamical systems approach to define basins of attraction of local surface minima. Simple analytical examples as well as applications to more complex topographic surfaces are examined. The theoretical description of topographic features and properties, such as the drainage area, channel lines and watershed divides, can be broadly adopted to develop and test the numerical algorithms currently used in digital terrain analysis for the computation of the drainage area, as well as for the theoretical analysis of landscape evolution and stability.
Automatic Constraint Detection for 2D Layout Regularization.
Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter
2016-08-01
In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.
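A minimal caricature of the pipeline described above, detecting near-equal left edges and then enforcing the alignment constraint, fits in a few lines. The greedy grouping heuristic and tolerance below are stand-ins for the paper's automatic detection, which is considerably more general (it also handles size and distance constraints within a quadratic program):

```python
import numpy as np

def detect_groups(vals, tol=1.0):
    # Greedy 1D clustering: consecutive sorted values within `tol`
    # are hypothesized to share an alignment constraint.
    order = sorted(range(len(vals)), key=lambda i: vals[i])
    groups, cur = [], [order[0]]
    for i in order[1:]:
        if vals[i] - vals[cur[-1]] <= tol:
            cur.append(i)
        else:
            groups.append(tuple(cur))
            cur = [i]
    groups.append(tuple(cur))
    return groups

def snap_to_group_means(vals, groups):
    # Minimizer of sum_i (x_i - vals_i)^2 subject to equality within each
    # group: every member moves to the group mean.
    x = np.asarray(vals, dtype=float).copy()
    for g in groups:
        x[list(g)] = x[list(g)].mean()
    return x

# Facade-like example: two pairs of elements whose left edges should align.
lefts = [10.2, 9.8, 50.5, 49.5]
groups = detect_groups(lefts)
print(groups)                              # [(1, 0), (3, 2)]
print(snap_to_group_means(lefts, groups))  # [10. 10. 50. 50.]
```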
Regularized multivariate regression models with skew-t error distributions
Chen, Lianfu; Pourahmadi, Mohsen; Maadooliat, Mehdi
2014-01-01
We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both
A Regularized Algorithm for the Proximal Split Feasibility Problem
Directory of Open Access Journals (Sweden)
Zhangsong Yao
2014-01-01
Full Text Available The proximal split feasibility problem has been studied. A regularized method is presented for solving it, and a strong convergence theorem is given.
Anaemia in Patients with Diabetes Mellitus attending regular ...
African Journals Online (AJOL)
Anaemia in Patients with Diabetes Mellitus attending regular Diabetic ... Nigerian Journal of Health and Biomedical Sciences ... some patients may omit important food items in their daily diet for fear of increasing their blood sugar level.
Automatic Constraint Detection for 2D Layout Regularization
Jiang, Haiyong
2015-09-18
In this paper, we address the problem of constraint detection for layout regularization. As the layout we consider a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important for digitizing plans or images, such as floor plans and facade images, and for the improvement of user-created content, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm to automatically detect constraints. We evaluate the proposed framework on a variety of input layouts from different applications, which demonstrates that our method has superior performance compared to the state of the art.
Body composition, disordered eating and menstrual regularity in a ...
African Journals Online (AJOL)
Body composition, disordered eating and menstrual regularity in a group of South African ... e between body composition and disordered eating in irregular vs normal menstruating athletes. ... measured by air displacement plethysmography.
A new approach to nonlinear constrained Tikhonov regularization
Ito, Kazufumi; Jin, Bangti
2011-01-01
operator. The approach is exploited to derive convergence rate results for a priori as well as a posteriori choice rules, e.g., discrepancy principle and balancing principle, for selecting the regularization parameter. The idea is further illustrated on a
Supporting primary school teachers in differentiating in the regular classroom
Eysink, Tessa H.S.; Hulsbeek, Manon; Gijlers, Hannie
Many primary school teachers experience difficulties in effectively differentiating in the regular classroom. This study investigated the effect of the STIP-approach on teachers' differentiation activities and self-efficacy, and children's learning outcomes and instructional value. Teachers using
Lavrentiev regularization method for nonlinear ill-posed problems
International Nuclear Information System (INIS)
Kinh, Nguyen Van
2002-10-01
In this paper we shall be concerned with the Lavrentiev regularization method to reconstruct solutions x_0 of nonlinear ill-posed problems F(x) = y_0, where instead of y_0 noisy data y_δ ∈ X with ||y_δ - y_0|| ≤ δ are given and F: X → X is an accretive nonlinear operator from a real reflexive Banach space X into itself. In this regularization method, solutions x_α^δ are obtained by solving the singularly perturbed nonlinear operator equation F(x) + α(x - x*) = y_δ with some initial guess x*. Assuming certain conditions concerning the operator F and the smoothness of the element x* - x_0, we derive stability estimates which show that the accuracy of the regularized solutions is order optimal provided that the regularization parameter α has been chosen properly. (author)
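In one dimension the scheme is easy to demonstrate: accretivity reduces to monotonicity, so F(x) + α(x − x*) = y_δ has a unique root that bisection can find. The operator F(x) = x³ and the numbers below are illustrative, not taken from the paper:

```python
def lavrentiev_solve(F, y_delta, alpha, x_star, lo=-10.0, hi=10.0, tol=1e-12):
    # Solve F(x) + alpha*(x - x_star) = y_delta by bisection; valid when F
    # is monotone increasing (the 1D analogue of an accretive operator),
    # so the left-hand side is strictly increasing in x.
    g = lambda x: F(x) + alpha * (x - x_star) - y_delta
    assert g(lo) < 0.0 < g(hi), "root must be bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

F = lambda x: x ** 3   # monotone nonlinear "operator"; exact solution x_0 = 1
y_delta = 1.01         # noisy data for y_0 = 1 (delta = 0.01)
for alpha in (1.0, 0.1, 0.01):
    print(alpha, lavrentiev_solve(F, y_delta, alpha, x_star=0.0))
# Large alpha pulls the solution toward the guess x*; small alpha fits
# the (noisy) data more closely.
```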
Regularized plane-wave least-squares Kirchhoff migration
Wang, Xin; Dai, Wei; Schuster, Gerard T.
2013-01-01
A Kirchhoff least-squares migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images. A regularization term is included that accounts for mispositioning of reflectors due to errors in the velocity
Automatic Constraint Detection for 2D Layout Regularization
Jiang, Haiyong; Nan, Liangliang; Yan, Dongming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter
2015-01-01
plans or images, such as floor plans and facade images, and for the improvement of user created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing
DEFF Research Database (Denmark)
Gao, Jingjing; Kørnøv, Lone; Christensen, Per
Abstract: Indicators are widely used in SEA to measure, communicate and monitor impacts from a proposed policy, plan or programme, and can improve the effectiveness of the SEA by simplifying the complexity of both assessment and presentation. Indicators can be seen as part of the implementation process, helping to understand, communicate and integrate important environmental issues in planning and decision-making. On the other hand, use of indicators can also limit SEA effectiveness, if the ones chosen are biased or limited, if the aggregation gives incorrect interpretation, and if the information requirement for different target groups is not addressed.
Karplus, Alan K.
1996-01-01
The objective of this exercise is to provide a phenomenological 'hands-on' experience that shows how geometry can affect the load carrying capacity of a material used in construction, how different materials have different failure characteristics, and how construction affects the performance of a composite material. This will be accomplished by building beams of a single material and composite beams of a mixture of materials (popsicle sticks, fiberboard sheets, and tongue depressors); testing these layered beams to determine how and where they fail; and based on the failure analysis, designing a layered beam that will fail in a predicted manner. The students will learn the effects of lamination, adhesion, and geometry in layered beam construction on beam strength and failure location.
DEFF Research Database (Denmark)
Nielsen, Peter Carøe; Hansen, Hans Nørgaard; Olsen, Flemming Ove
2007-01-01
The quantitative and qualitative description of laser beam characteristics is important for process implementation and optimisation. In particular, a need for quantitative characterisation of beam diameter was identified when using fibre lasers for micro manufacturing. Here the beam diameter limits the obtainable features in direct laser machining as well as heat affected zones in welding processes. This paper describes the development of a measuring unit capable of analysing beam shape and diameter of lasers to be used in manufacturing processes. The analyser is based on the principle of a rotating mechanical wire being swept through the laser beam at varying Z-heights. The reflected signal is analysed and the resulting beam profile determined. The development comprised the design of a flexible fixture capable of providing both rotation and Z-axis movement, control software including data capture...
Salvant, B; Arduini, G; Assmann, R; Baglin, V; Barnes, M J; Bartmann, W; Baudrenghien, P; Berrig, O; Bracco, C; Bravin, E; Bregliozzi, G; Bruce, R; Bertarelli, A; Carra, F; Cattenoz, G; Caspers, F; Claudet, S; Day, H; Garlasche, M; Gentini, L; Goddard, B; Grudiev, A; Henrist, B; Jones, R; Kononenko, O; Lanza, G; Lari, L; Mastoridis, T; Mertens, V; Métral, E; Mounet, N; Muller, J E; Nosych, A A; Nougaret, J L; Persichelli, S; Piguiet, A M; Redaelli, S; Roncarolo, F; Rumolo, G; Salvachua, B; Sapinski, M; Schmidt, R; Shaposhnikova, E; Tavian, L; Timmins, M; Uythoven, J; Vidal, A; Wenninger, J; Wollmann, D; Zerlauth, M
2012-01-01
After the 2011 run, actions were put in place during the 2011/2012 winter stop to limit beam induced radio frequency (RF) heating of LHC components. However, some components could not be changed during this short stop and continued to represent a limitation throughout 2012. In addition, the stored beam intensity increased in 2012 and the temperature of certain components became critical. In this contribution, the beam induced heating limitations for 2012 and the expected beam induced heating limitations for the restart after the Long Shutdown 1 (LS1) will be compiled. The expected consequences of running with 25 ns or 50 ns bunch spacing will be detailed, as well as the consequences of running with shorter bunch length. Finally, actions on hardware or beam parameters to monitor and mitigate the impact of beam induced heating to LHC operation after LS1 will be discussed.
The SeaQuest Spectrometer at Fermilab
Energy Technology Data Exchange (ETDEWEB)
Aidala, C.A.; et al.
2017-06-29
The SeaQuest spectrometer at Fermilab was designed to detect oppositely-charged pairs of muons (dimuons) produced by interactions between a 120 GeV proton beam and liquid hydrogen, liquid deuterium and solid nuclear targets. The primary physics program uses the Drell-Yan process to probe antiquark distributions in the target nucleon. The spectrometer consists of a target system, two dipole magnets and four detector stations. The upstream magnet is a closed-aperture solid iron magnet which also serves as the beam dump, while the second magnet is an open aperture magnet. Each of the detector stations consists of scintillator hodoscopes and a high-resolution tracking device. The FPGA-based trigger compares the hodoscope signals to a set of pre-programmed roads to determine if the event contains oppositely-signed, high-mass muon pairs.
Regularization method for solving the inverse scattering problem
International Nuclear Information System (INIS)
Denisov, A.M.; Krylov, A.S.
1985-01-01
The inverse scattering problem for the radial Schroedinger equation, which consists in determining the potential from the scattering phase, is considered. The problem of potential restoration from a phase specified with fixed error in a finite range is solved by the regularization method based on minimization of Tikhonov's smoothing functional. The regularization method is used to solve the problem of neutron-proton potential restoration from the scattering phases. The determined potentials are given in the table
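The Tikhonov idea is easiest to see in the discretized linear case, where minimizing ||Ax − b||² + α||x||² has the closed form (AᵀA + αI)⁻¹Aᵀb. This is only a linear-algebra caricature of the smoothing-functional minimization used for the nonlinear phase-to-potential problem; the operator and parameters below are made up for illustration.

```python
import numpy as np

def tikhonov(A, b, alpha):
    # Minimizer of ||A x - b||^2 + alpha * ||x||^2.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Ill-conditioned toy forward operator plus slightly noisy data.
rng = np.random.default_rng(1)
n = 12
A = np.vander(np.linspace(0.0, 1.0, n), n, increasing=True)
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-8 * rng.standard_normal(n)

for alpha in (1e-2, 1e-6, 1e-10):
    x = tikhonov(A, b, alpha)
    print(alpha, np.linalg.norm(x - x_true))
# The regularization parameter trades data fit against noise amplification;
# choosing it (e.g. by the discrepancy principle) is the crux of the method.
```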
Viscous Regularization of the Euler Equations and Entropy Principles
Guermond, Jean-Luc
2014-03-11
This paper investigates a general class of viscous regularizations of the compressible Euler equations. A unique regularization is identified that is compatible with all the generalized entropies, à la [Harten et al., SIAM J. Numer. Anal., 35 (1998), pp. 2117-2127], and satisfies the minimum entropy principle. A connection with a recently proposed phenomenological model by [H. Brenner, Phys. A, 370 (2006), pp. 190-224] is made. © 2014 Society for Industrial and Applied Mathematics.
Dimensional versus lattice regularization within Luescher's Yang Mills theory
International Nuclear Information System (INIS)
Diekmann, B.; Langer, M.; Schuette, D.
1993-01-01
It is pointed out that the coefficients of Luescher's effective model space Hamiltonian, which is based upon dimensional regularization techniques, can be reproduced by applying folded diagram perturbation theory to the Kogut-Susskind Hamiltonian and by performing a lattice continuum limit (keeping the volume fixed). Alternative cutoff regularizations of the Hamiltonian are in general inconsistent, the critical point being the correct prediction for Luescher's tadpole coefficient, which is formally quadratically divergent and has to become a well-defined (negative) number. (orig.)
Left regular bands of groups of left quotients
International Nuclear Information System (INIS)
El-Qallali, A.
1988-10-01
A semigroup S which has a left regular band of groups as a semigroup of left quotients is shown to be the semigroup which is a left regular band of right reversible cancellative semigroups. An alternative characterization is provided by using spinned products. These results are applied to the case where S is a superabundant whose set of idempotents forms a left normal band. (author). 13 refs
Human visual system automatically encodes sequential regularities of discrete events.
Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki
2010-06-01
For our adaptive behavior in a dynamically changing environment, an essential task of the brain is to automatically encode sequential regularities inherent in the environment into a memory representation. Recent studies in neuroscience have suggested that sequential regularities embedded in discrete sensory events are automatically encoded into a memory representation at the level of the sensory system. This notion is largely supported by evidence from investigations using auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) correlate of an automatic memory-mismatch process in the auditory sensory system. However, it is still largely unclear whether or not this notion can be generalized to other sensory modalities. The purpose of the present study was to investigate the contribution of the visual sensory system to the automatic encoding of sequential regularities using visual mismatch negativity (visual MMN), an ERP correlate of an automatic memory-mismatch process in the visual sensory system. To this end, we conducted a sequential analysis of visual MMN in an oddball sequence consisting of infrequent deviant and frequent standard stimuli, and tested whether the underlying memory representation of visual MMN generation contains only a sensory memory trace of standard stimuli (trace-mismatch hypothesis) or whether it also contains sequential regularities extracted from the repetitive standard sequence (regularity-violation hypothesis). The results showed that visual MMN was elicited by first deviant (deviant stimuli following at least one standard stimulus), second deviant (deviant stimuli immediately following first deviant), and first standard (standard stimuli immediately following first deviant), but not by second standard (standard stimuli immediately following first standard). These results are consistent with the regularity-violation hypothesis, suggesting that the visual sensory system automatically encodes sequential
Estimation of the global regularity of a multifractional Brownian motion
DEFF Research Database (Denmark)
Lebovits, Joachim; Podolskij, Mark
This paper presents a new estimator of the global regularity index of a multifractional Brownian motion. Our estimation method is based upon a ratio statistic, which compares the realized global quadratic variation of a multifractional Brownian motion at two different frequencies. We show that a logarithmic transformation of this statistic converges in probability to the minimum of the Hurst functional parameter, which is, under weak assumptions, identical to the global regularity index of the path.
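A toy version of the two-frequency ratio idea, applied to ordinary Brownian motion (regularity index H = 0.5), is sketched below. The normalization uses the fractional-Brownian scaling E[QV at spacing 2h] / E[QV at spacing h] ≈ 2^(2H−1); the paper's statistic for the multifractional case differs in its details.

```python
import numpy as np

def regularity_ratio_estimate(path):
    # Realized quadratic variation at two frequencies; for fBm with
    # Hurst index H the expected coarse/fine ratio is 2**(2H - 1),
    # so H_hat = (log2(ratio) + 1) / 2.
    fine = np.diff(path)          # increments at spacing h
    coarse = np.diff(path[::2])   # increments at spacing 2h
    ratio = np.sum(coarse ** 2) / np.sum(fine ** 2)
    return 0.5 * (np.log2(ratio) + 1.0)

rng = np.random.default_rng(0)
n = 2 ** 16
bm = np.cumsum(rng.standard_normal(n)) / np.sqrt(n)   # Brownian path, H = 0.5
print(regularity_ratio_estimate(bm))                  # close to 0.5
```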
Regularization of the quantum field theory of charges and monopoles
International Nuclear Information System (INIS)
Panagiotakopoulos, C.
1981-09-01
A gauge invariant regularization procedure for quantum field theories of electric and magnetic charges based on Zwanziger's local formulation is proposed. The bare regularized full Green's functions of gauge invariant operators are shown to be Lorentz invariant. This would have as a consequence the Lorentz invariance of the finite Green's functions that might result after any reasonable subtraction if such a subtraction can be found. (author)
Borderline personality disorder and regularly drinking alcohol before sex.
Thompson, Ronald G; Eaton, Nicholas R; Hu, Mei-Chen; Hasin, Deborah S
2017-07-01
Drinking alcohol before sex increases the likelihood of engaging in unprotected intercourse, having multiple sexual partners and becoming infected with sexually transmitted infections. Borderline personality disorder (BPD), a complex psychiatric disorder characterised by pervasive instability in emotional regulation, self-image, interpersonal relationships and impulse control, is associated with substance use disorders and sexual risk behaviours. However, no study has examined the relationship between BPD and drinking alcohol before sex in the USA. This study examined the association between BPD and regularly drinking before sex in a nationally representative adult sample. Participants were 17 491 sexually active drinkers from Wave 2 of the National Epidemiologic Survey on Alcohol and Related Conditions. Logistic regression models estimated effects of BPD diagnosis, specific borderline diagnostic criteria and BPD criterion count on the likelihood of regularly (mostly or always) drinking alcohol before sex, adjusted for controls. Borderline personality disorder diagnosis doubled the odds of regularly drinking before sex [adjusted odds ratio (AOR) = 2.26; confidence interval (CI) = 1.63, 3.14]. Of nine diagnostic criteria, impulsivity in areas that are self-damaging remained a significant predictor of regularly drinking before sex (AOR = 1.82; CI = 1.42, 2.35). The odds of regularly drinking before sex increased by 20% for each endorsed criterion (AOR = 1.20; CI = 1.14, 1.27). DISCUSSION AND CONCLUSIONS: This is the first study to examine the relationship between BPD and regularly drinking alcohol before sex in the USA. Substance misuse treatment should assess regularly drinking before sex, particularly among patients with BPD, and BPD treatment should assess risk at the intersection of impulsivity, sexual behaviour and substance use. [Thompson Jr RG, Eaton NR, Hu M-C, Hasin DS Borderline personality disorder and regularly drinking alcohol
The Impact of Computerization on Regular Employment (Japanese)
SUNADA Mitsuru; HIGUCHI Yoshio; ABE Masahiro
2004-01-01
This paper uses micro data from the Basic Survey of Japanese Business Structure and Activity to analyze the effects of companies' introduction of information and telecommunications technology on employment structures, especially regular versus non-regular employment. Firstly, examination of trends in the ratio of part-time workers recorded in the Basic Survey shows that part-time worker ratios in manufacturing firms are rising slightly, but that companies with a high proportion of part-timers...
Analytic regularization of the Yukawa model at finite temperature
International Nuclear Information System (INIS)
Malbouisson, A.P.C.; Svaiter, N.F.; Svaiter, B.F.
1996-07-01
The one-loop fermionic contribution to the scalar effective potential in the temperature-dependent Yukawa model is analysed. In order to regularize the model, a mixture of dimensional and analytic regularization procedures is used. A general expression for the fermionic contribution in arbitrary spacetime dimension is found, and in D = 3 this contribution is finite. (author). 19 refs
International Nuclear Information System (INIS)
Zhong Jian; Huang Si-Xun; Fei Jian-Fang; Du Hua-Dong; Zhang Liang
2011-01-01
According to the conclusions of the simulation experiments in paper I, the Tikhonov regularization method is applied to cyclone wind retrieval with a rain-effect-considering geophysical model function (called GMF+Rain). The GMF+Rain model, based on the NASA scatterometer-2 (NSCAT2) GMF, is presented to compensate for the effects of rain on cyclone wind retrieval. With the multiple solution scheme (MSS), the noise of the wind retrieval is effectively suppressed, but the influence of the background increases; this causes a large wind direction error in ambiguity removal when the background error is large. However, this can be mitigated by the new ambiguity removal method of Tikhonov regularization, as proved in the simulation experiments. A case study of an extratropical cyclone of hurricane strength observed with SeaWinds at 25-km resolution shows that, for the GMF+Rain model, the retrieved wind speed in areas with rain is in better agreement with that derived from the best track analysis, but the wind direction obtained with two-dimensional variational (2DVAR) ambiguity removal is incorrect. The new Tikhonov regularization method effectively improves the performance of wind direction ambiguity removal through the choice of appropriate regularization parameters, and the retrieved wind speed is almost the same as that obtained from 2DVAR.
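Tikhonov regularization as used here replaces an ill-posed inversion with a penalized least-squares problem. A generic sketch of the standard Tikhonov normal-equations solve (not the GMF+Rain retrieval itself, just the technique the abstract names):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized least squares: argmin ||Ax - b||^2 + lam*||x||^2,
    solved via the normal equations (A^T A + lam*I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Nearly singular toy problem: the small penalty stabilizes the solve
# while keeping the data misfit small.
A = np.array([[1.0, 1.0], [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
x = tikhonov(A, b, lam=1e-3)
```

The regularization parameter `lam` trades data fit against solution norm; choosing it well is exactly the ambiguity-removal tuning issue the abstract discusses.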
International Nuclear Information System (INIS)
Schmidt, T.W.
1979-01-01
A position measuring detector was fabricated for the Heavy Ion Beam Probe. The 11 cm by 50 cm detector combined 15 detector wires in one direction with 63 copper bars (0.635 cm by 10 cm) that measure along an orthogonal axis by means of a current divider circuit. High transmission tungsten meshes provide entrance windows and suppress secondary electrons. The detector dimensions were chosen to resolve the beam position to within one beam diameter
International Nuclear Information System (INIS)
Lipkin, H.J.
1976-01-01
Hyperon beams can provide new interesting information about hadron structure and their strong, electromagnetic and weak interactions. The dependence of hadron interactions on strangeness and baryon number is not understood, and data from hyperon beams can provide new clues to paradoxes which arise in the interpretation of data from conventional beams. Examples of interesting data are total and differential cross sections, magnetic moments and values of G_A/G_V for weak semileptonic decays. (author)
Kalvas, T.
2013-12-16
This chapter gives an introduction to low-energy beam transport systems, and discusses the typically used magnetostatic elements (solenoid, dipoles and quadrupoles) and electrostatic elements (einzel lens, dipoles and quadrupoles). The ion beam emittance, beam space-charge effects and the physics of ion source extraction are introduced. Typical computer codes for analysing and designing ion optical systems are mentioned, and the trajectory tracking method most often used for extraction simulations is described in more detail.
International Nuclear Information System (INIS)
Turner, N.L.
1982-01-01
A particle beam accelerator is described which has several electrodes that are selectively short circuited together synchronously with changes in the magnitude of a DC voltage applied to the accelerator. By this method a substantially constant voltage gradient is maintained along the length of the unshortened electrodes despite variations in the energy applied to the beam by the accelerator. The invention has particular application to accelerating ion beams that are implanted into semiconductor wafers. (U.K.)
International Nuclear Information System (INIS)
Fink, J.H.
1979-01-01
A neutral beam generated by passing accelerated ions through a walled cell containing a low energy neutral gas, such that charge exchange partially neutralizes the high energy beam, is monitored by detecting the current flowing through the cell wall produced by low energy ions which drift to the wall after the charge exchange. By segmenting the wall into radial and longitudinal segments various beam conditions are identified. (U.K.)
Chilled beam application guidebook
Butler, David; Gräslund, Jonas; Hogeling, Jaap; Lund Kristiansen, Erik; Reinikanen, Mika; Svensson, Gunnar
2007-01-01
Chilled beam systems are primarily used for cooling and ventilation in spaces, which appreciate good indoor environmental quality and individual space control. Active chilled beams are connected to the ventilation ductwork, high temperature cold water, and when desired, low temperature hot water system. Primary air supply induces room air to be recirculated through the heat exchanger of the chilled beam. In order to cool or heat the room either cold or warm water is cycled through the heat exchanger.
International Nuclear Information System (INIS)
Post, R.F.; Vann, C.S.
1996-10-01
Back-reflections from a target, lenses, etc. can gain energy passing backwards through a laser just like the main beam gains energy passing forwards. Unless something blocks these back-reflections early in their path, they can seriously damage the laser. A Mechanical Beam Isolator is a device that blocks back-reflections early, relatively inexpensively, and without introducing aberrations to the laser beam
Transverse Feedback for Electron-Cooled DC-Beam at COSY
International Nuclear Information System (INIS)
Kamerdzhiev, V.; Dietrich, J.
2004-01-01
At the cooler synchrotron COSY, high beam quality is achieved by means of beam cooling. In the case of intense electron-cooled beams, fast particle losses due to transverse coherent beam oscillations are regularly observed. To damp the instabilities, a transverse feedback system was installed and successfully commissioned. Commissioning of the feedback system resulted in a significant increase of the e-cooled beam intensity, both for single injection and when cooling and stacking of repeated injections are applied. External experiments profit from the small diameter beams and the reduced halo. A transverse damping system utilizing a pick-up, signal processing electronics, power amplifiers, and a stripline deflector is introduced. Beam current and Schottky spectra measurements with the vertical feedback system turned on and off are presented
The relationship between lifestyle regularity and subjective sleep quality
Monk, Timothy H.; Reynolds, Charles F 3rd; Buysse, Daniel J.; DeGrazia, Jean M.; Kupfer, David J.
2003-01-01
In previous work we have developed a diary instrument, the Social Rhythm Metric (SRM), which allows the assessment of lifestyle regularity, and a questionnaire instrument, the Pittsburgh Sleep Quality Index (PSQI), which allows the assessment of subjective sleep quality. The aim of the present study was to explore the relationship between lifestyle regularity and subjective sleep quality. Lifestyle regularity was assessed by both standard (SRM-17) and shortened (SRM-5) metrics; subjective sleep quality was assessed by the PSQI. We hypothesized that high lifestyle regularity would be conducive to better sleep. Both instruments were given to a sample of 100 healthy subjects who were studied as part of a variety of different experiments spanning a 9-yr time frame. Ages ranged from 19 to 49 yr (mean age: 31.2 yr, s.d.: 7.8 yr); there were 48 women and 52 men. SRM scores were derived from a two-week diary. The hypothesis was confirmed: there was a significant negative correlation (rho = -0.4); subjects with higher levels of lifestyle regularity reported fewer sleep problems. This relationship was also supported by a categorical analysis, in which the proportion of "poor sleepers" was doubled in the "irregular types" group as compared with the "non-irregular types" group. Thus, there appears to be an association between lifestyle regularity and good sleep, though the direction of causality remains to be tested.
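The reported rho is a Spearman rank correlation, i.e. the Pearson correlation of the ranked scores. A small self-contained sketch with hypothetical SRM/PSQI values (higher PSQI means worse sleep, so a negative rho matches the finding):

```python
import math

def rank(xs):
    """Average 1-based ranks, with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Hypothetical scores: higher lifestyle regularity (SRM) paired with
# lower (better) PSQI gives a negative rho, as in the study.
srm = [1, 2, 3, 4, 5]
psqi = [9, 7, 8, 4, 3]
rho = spearman_rho(srm, psqi)
```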
Regularity criteria for incompressible magnetohydrodynamics equations in three dimensions
International Nuclear Information System (INIS)
Lin, Hongxia; Du, Lili
2013-01-01
In this paper, we give some new global regularity criteria for three-dimensional incompressible magnetohydrodynamics (MHD) equations. More precisely, we provide some sufficient conditions in terms of the derivatives of the velocity or pressure, for the global regularity of strong solutions to 3D incompressible MHD equations in the whole space, as well as for periodic boundary conditions. Moreover, the regularity criterion involving three of the nine components of the velocity gradient tensor is also obtained. The main results generalize the recent work by Cao and Wu (2010 Two regularity criteria for the 3D MHD equations J. Diff. Eqns 248 2263–74) and the analysis in part is based on the works by Cao C and Titi E (2008 Regularity criteria for the three-dimensional Navier–Stokes equations Indiana Univ. Math. J. 57 2643–61; 2011 Global regularity criterion for the 3D Navier–Stokes equations involving one entry of the velocity gradient tensor Arch. Rational Mech. Anal. 202 919–32) for 3D incompressible Navier–Stokes equations. (paper)
Geostatistical regularization operators for geophysical inverse problems on irregular meshes
Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. O. A.
2018-05-01
Irregular meshes make it possible to include complicated subsurface structures in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are defined using only the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and allows information about geological structures to be incorporated. We propose an approach to calculating geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D synthetic surface electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results than the anisotropic smoothness constraints.
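The core construction, building a regularization operator from a covariance matrix by eigendecomposition, can be sketched generically. The exponential correlation model and the 1-D cell centers below are illustrative assumptions, not the authors' specific setup:

```python
import numpy as np

def geostat_operator(centers, corr_len):
    """Build a regularization operator C^{-1/2} from an exponential
    covariance model evaluated at (possibly irregular) cell centers,
    via eigendecomposition of the covariance matrix C."""
    d = np.abs(centers[:, None] - centers[None, :])  # pairwise distances (1-D sketch)
    C = np.exp(-d / corr_len)                        # exponential correlation model
    w, V = np.linalg.eigh(C)                         # C is symmetric positive definite
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T       # C^{-1/2}

# Irregularly spaced cell centers, as on an unstructured mesh.
centers = np.array([0.0, 0.7, 1.1, 2.5, 4.0])
W = geostat_operator(centers, corr_len=1.5)
```

Penalizing `||W m||^2` in an inversion then favors models whose spatial statistics match the assumed correlation model, which is the role the smoothness operator plays here.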
Bounded Perturbation Regularization for Linear Least Squares Estimation
Ballal, Tarig
2017-10-18
This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded norm is allowed into the linear transformation matrix to improve the singular-value structure. Following this, the problem is formulated as a min-max optimization problem. Next, the min-max problem is converted to an equivalent minimization problem to estimate the unknown vector quantity. The solution of the minimization problem is shown to converge to that of the ℓ2 -regularized least squares problem, with the unknown regularizer related to the norm bound of the introduced perturbation through a nonlinear constraint. A procedure is proposed that combines the constraint equation with the mean squared error (MSE) criterion to develop an approximately optimal regularization parameter selection algorithm. Both direct and indirect applications of the proposed method are considered. Comparisons with different Tikhonov regularization parameter selection methods, as well as with other relevant methods, are carried out. Numerical results demonstrate that the proposed method provides significant improvement over state-of-the-art methods.
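A common baseline that regularization-parameter selection methods such as BPR are compared against is a simple sweep over the ℓ2 regularizer. The sketch below scores candidates against a known ground truth, which is pure illustration: the point of methods like BPR is to pick the parameter without such oracle knowledge:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))   # hypothetical linear model
x_true = rng.standard_normal(10)
b = A @ x_true + 0.5 * rng.standard_normal(50)  # noisy observations

def ridge(lam):
    """l2-regularized least-squares estimate for regularizer lam."""
    return np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ b)

# Oracle sweep: score each candidate regularizer by MSE against the
# (here known) truth; practical methods must estimate this tradeoff blind.
lams = [10.0 ** k for k in range(-4, 3)]
best = min(lams, key=lambda l: float(np.mean((ridge(l) - x_true) ** 2)))
```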
Near-Regular Structure Discovery Using Linear Programming
Huang, Qixing
2014-06-02
Near-regular structures are common in manmade and natural objects. Algorithmic detection of such regularity greatly facilitates our understanding of shape structures, leads to compact encoding of input geometries, and enables efficient generation and manipulation of complex patterns on both acquired and synthesized objects. Such regularity manifests itself both in the repetition of certain geometric elements, as well as in the structured arrangement of the elements. We cast the regularity detection problem as an optimization and efficiently solve it using linear programming techniques. Our optimization has a discrete aspect, that is, the connectivity relationships among the elements, as well as a continuous aspect, namely the locations of the elements of interest. Both these aspects are captured by our near-regular structure extraction framework, which alternates between discrete and continuous optimizations. We demonstrate the effectiveness of our framework on a variety of problems including near-regular structure extraction, structure-preserving pattern manipulation, and markerless correspondence detection. Robustness results with respect to geometric and topological noise are presented on synthesized, real-world, and also benchmark datasets. © 2014 ACM.
Instability of compensated beam-beam collisions
International Nuclear Information System (INIS)
Rosenzweig, J.B.; Autin, B.; Chen, Pisin.
1989-01-01
The beam-beam disruption phenomena in linear colliders are increasingly seen as a source of serious problems for these machines. A plasma compensation scheme, in which the motion of the plasma electrons in the presence of the colliding beams provides neutralizing charge and current densities, has been proposed and studied. A natural alternative to this scheme is to consider the overlapping of nearly identical high energy e+ and e- bunches, and the collision of two such pairs, in other words, the collision of two opposing relativistic positronium plasmas. It should be noticed that while the luminosity for all collisions is increased by a factor of four in this scheme, the event rate for e+e- collisions is only increased by a factor of two. The other factor of two corresponds to the addition of e+e+ and e-e- collisions at the interaction point. This beam compensation scheme, which has been examined through computer simulation by Balakin and Solyak in the Soviet Union, promises full neutralization of beam charges and currents. These numerical investigations have shown that plasma instabilities exist in this nominally neutral system. Although the implementation of this idea seems technically daunting, the potential benefits (beamstrahlung and disruption suppression, relaxation of final focus system constraints) are such that we should consider the physics of these collisions further. In the remainder of this paper, we theoretically analyze the issues of stability and bunch parameter tolerances in this scheme. 11 refs
Electromagnetic field of a circular beam of relativistic particles
International Nuclear Information System (INIS)
Vybiral, B.
1978-01-01
The generalized Coulomb law and the generalized Biot-Savart-Laplace law are derived for an element of a beam of charged relativistic particles in generally irregular motion. These laws are used to describe the electromagnetic field of a circular beam of regularly moving relativistic particles. It is shown that at points on the axis of the beam the intensity of the electric field is given by an expression precisely corresponding to the classical Coulomb law for charges at rest, and the induction of the magnetic field corresponds to the classical Biot-Savart-Laplace law for conduction currents. From the numerical solution it follows that at points off the axis the induction of the magnetic field rises with the velocity of the particles; for a velocity nearing that of light in vacuum it assumes a definite value (with the exception of points lying on the beam). (author)
International Nuclear Information System (INIS)
1975-01-01
A dual-beam cathode-ray tube having a pair of electron guns and associated deflection means disposed side-by-side on each side of a central axis is described. The electron guns are parallel and the deflection means includes beam centering plates and angled horizontal deflection plates to direct the electron beams toward the central axis, precluding the need for a large-diameter tube neck in which the entire gun structures are angled. Bowing control plates are disposed adjacent to the beam centering plates to minimize trace bowing, and an intergun shield is disposed between the horizontal deflection plates to control and correct display pattern geometry distortion
Beam Instrumentation and Diagnostics
Strehl, Peter
2006-01-01
This treatise covers all aspects of the design and the daily operations of a beam diagnostic system for a large particle accelerator. A very interdisciplinary field, it involves contributions from physicists, electrical and mechanical engineers and computer experts alike so as to satisfy the ever-increasing demands for beam parameter variability for a vast range of operation modi and particles. The author draws upon 40 years of research and work, most of them spent as the head of the beam diagnostics group at GSI. He has illustrated the more theoretical aspects with many real-life examples that will provide beam instrumentation designers with ideas and tools for their work.
International Nuclear Information System (INIS)
Schwartz, M.M.
1974-01-01
Electron-beam equipment is considered along with fixed and mobile electron-beam guns, questions of weld environment, medium and nonvacuum welding, weld-joint designs, tooling, the economics of electron-beam job shops, aspects of safety, quality assurance, and repair. The application of the process in the case of individual materials is discussed, giving attention to aluminum, beryllium, copper, niobium, magnesium, molybdenum, tantalum, titanium, metal alloys, superalloys, and various types of steel. Mechanical-property test results are examined along with the areas of application of electron-beam welding
International Nuclear Information System (INIS)
Anon.
1979-01-01
The structure of the beam injection program for the Doublet-3 device is discussed. The design considerations for the beam line and design parameters for the Doublet-3 ion source are given. Major components of the neutral beam injector system are discussed in detail. These include the neutralizer, magnetic shielding, reflecting magnets, vacuum system, calorimeter and beam dumps, and drift duct. The planned location of the two-injector system for Doublet-3 is illustrated and site preparation is considered. The status of beamline units 1 and 2 and the future program schedule are discussed
International Nuclear Information System (INIS)
MacQuigg, D.R.; Speck, D.R.
1976-01-01
Performance of laser fusion targets depends critically on the characteristics of the incident beam. The spatial distribution and temporal behavior of the light incident on the target varies significantly with power, with choice of beam spatial profile and with location of spatial filters. On each ARGUS shot we photograph planes in the incident beams which are equivalent to the target plane. Array cameras record the time integrated energy distributions and streak cameras record the temporal behavior. Computer reduction of the photographic data provides detailed spatial energy distributions, and instantaneous power on target vs. time. Target performance correlates with the observed beam characteristics
International Nuclear Information System (INIS)
Reiser, M.
1982-01-01
An intense relativistic electron beam cannot propagate in a metal drift tube when the current exceeds the space charge limit. Very high charge density and electric field gradients (10^2 to 10^3 MV/m) develop at the beam front and the electrons are reflected. When a neutral gas or a plasma is present, collective acceleration of positive ions occurs, and the resulting charge neutralization enables the beam to propagate. Experimental results, theoretical understanding, and schemes to achieve high ion energies by external control of the beam front velocity will be reviewed
International Nuclear Information System (INIS)
Tribouillard, C.
1997-01-01
In the design phase of GANIL, which started in 1977, one of the priorities of the project management was equipping the beamlines with a fast and efficient system for visualizing the beam position, thus making possible the adjustment of the beam transport line optics and facilitating beam control. The installation of some thirty detectors was foreseen in the initial design. The full set of installed detectors (around 190) demonstrates the advantages of these detectors for displaying all the beams extracted from GANIL: transfer and transport lines, the beam extracted from SISSI, the very high intensity beam, secondary ion beams from the production target of the LISE and SPEG spectrometers, and the different SPIRAL project lines. All of these detectors share standard characteristics: - a standard flange diameter (DN 160) with a standard booster for all the sensors; - identical analog electronics for all the detectors, with networking; - a unique display system. The new micro-channel plate non-interceptive detectors (beam profiles and ion packet lengths) make possible in-line control of the beam quality and accelerator stability. (author)
Sea disposal of radioactive wastes: The London Convention 1972
International Nuclear Information System (INIS)
Sjoeblom, K.L.; Linsley, G.
1994-01-01
For many years the oceans were used for the disposal of industrial wastes, including radioactive wastes. In the 1970s, the practice became subject to an international convention which had the aim of regularizing procedures and preventing activities which could lead to marine pollution. This article traces the history of radioactive waste disposal at sea from the time when it first came within the view of international organizations up to the present. 2 figs, 2 tabs
Simulation of Beam-Beam Background at CLIC
Sailer, Andre
2010-01-01
The dense beams used at CLIC to achieve a high luminosity will cause a large amount of background particles through beam-beam interactions. Generator level studies with GuineaPig and full detector simulation studies with an ILD based CLIC detector have been performed to evaluate the amount of beam-beam background hitting the vertex detector.
Beam feasibility study of a collimator with in-jaw beam position monitors
Wollmann, Daniel; Nosych, Andriy A.; Valentino, Gianluca; Aberle, Oliver; Aßmann, Ralph W.; Bertarelli, Alessandro; Boccard, Christian; Bruce, Roderik; Burkart, Florian; Calvo, Eva; Cauchi, Marija; Dallocchio, Alessandro; Deboy, Daniel; Gasior, Marek; Jones, Rhodri; Kain, Verena; Lari, Luisella; Redaelli, Stefano; Rossi, Adriana
2014-12-01
At present, the beam-based alignment of the LHC collimators is performed by touching the beam halo with both jaws of each collimator. This method requires dedicated fills at low intensities that are done infrequently, which makes the procedure time consuming. This limits the operational flexibility, in particular in the case of changes of optics and orbit configuration in the experimental regions. The performance of the LHC collimation system relies on the machine reproducibility and regular loss maps to validate the settings of the collimator jaws. To overcome these limitations and to allow continuous monitoring of the beam position at the collimators, a design with jaw-integrated Beam Position Monitors (BPMs) was proposed and successfully tested with a prototype (mock-up) collimator in the CERN SPS. Extensive beam experiments made it possible to determine the achievable accuracy of the jaw alignment for single and multi-turn operation. In this paper, the results of these experiments are discussed. The non-linear response of the BPMs is compared to the predictions from electromagnetic simulations. Finally, the measured alignment accuracy is compared to the one achieved with the present collimators in the LHC.
On the regularities of gamma-ray initiated emission of really-secondary electrons
International Nuclear Information System (INIS)
Grudskij, M.Ya.; Roldugin, N.N.; Smirnov, V.V.
1982-01-01
Emission regularities of the really-secondary electrons from metals are discussed on the basis of experimental data on electron emission characteristics under gamma radiation, obtained for a wide range of incident quantum energies (Eγ = 0.03 to 2 MeV) and atomic numbers of the target materials (Z = 13 to 79). A comparison with published experimental and calculated data is performed. It is shown that the yield into vacuum of the really-secondary electrons from the surface of a target bombarded with a normally incident collimated beam of gamma radiation, per unit energy absorbed in the yield zone of the really-secondary electrons, is determined only by the emissivity of the target material and can be calculated if the spatial-energy distributions and the number of secondary fast electrons emitted from the target are known
Observations of the beam-beam interaction
International Nuclear Information System (INIS)
Seeman, J.T.
1985-11-01
The observed complexity of the beam-beam interaction is the subject of this paper. The varied observations obtained from many storage rings happen to be sufficiently similar that a prescription can be formulated to describe the behavior of the luminosity as a function of beam current including the peak value. This prescription can be used to interpret various methods for improving the luminosity. Discussion of these improvement methods is accompanied with examples from actual practice. The consequences of reducing the vertical betatron function (one of the most used techniques) to near the value of the bunch length are reviewed. Finally, areas needing further experimental and calculational studies are pointed out as they are uncovered
Numerical Modelling of Extreme Natural Hazards in the Russian Seas
Arkhipkin, Victor; Dobrolyubov, Sergey; Korablina, Anastasia; Myslenkov, Stanislav; Surkova, Galina
2017-04-01
Storm surges and extreme waves are severe natural sea hazards. Due to the almost complete lack of natural observations of these phenomena in the Russian seas (Caspian, Black, Azov, Baltic, White, Barents, Okhotsk, Kara), especially of their formation, development and destruction, they have been studied using numerical simulation. To calculate the parameters of wind waves for the seas listed above, except the Barents Sea, the spectral model SWAN was applied. For the Barents and Kara seas we used the WAVEWATCH III model. Formation and development of storm surges were studied using the ADCIRC model. The input data for the models were bottom topography, wind, atmospheric pressure and ice cover. In the modeling of surges in the White and Barents seas, tidal level fluctuations were used; they were calculated from 16 harmonic constants obtained from the global tidal atlas FES2004. Wind, atmospheric pressure and ice cover were taken from the NCEP/NCAR reanalysis for the period from 1948 to 2010, and the NCEP/CFSR reanalysis for the period from 1979 to 2015. In the modeling we used both regular and unstructured grids. The wave climate of the Caspian, Black, Azov, Baltic and White seas was obtained, and the extreme wave height possible once in 100 years was calculated. The statistics of storm surges for the White, Barents and Azov seas were evaluated. The contribution of wind and atmospheric pressure to the formation of surges was estimated. A technique for the climatic forecast of the frequency of storm synoptic situations was developed and applied to each sea. The research was carried out with the financial support of the RFBR (grant 16-08-00829).
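The "once in 100 years" wave height mentioned above is a return-level estimate from extreme-value statistics. A sketch using a Gumbel fit by the method of moments on hypothetical annual maxima (the abstract does not state which extreme-value method the authors used):

```python
import math

def gumbel_return_level(annual_maxima, return_period):
    """Return level for a given return period from a Gumbel distribution
    fitted to annual maxima by the method of moments."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi   # Gumbel scale parameter
    mu = mean - 0.5772 * beta               # location (Euler-Mascheroni constant)
    p = 1.0 - 1.0 / return_period           # annual non-exceedance probability
    return mu - beta * math.log(-math.log(p))

# Hypothetical annual maximum wave heights (m), not data from the study.
h = [4.1, 5.3, 3.8, 6.0, 4.9, 5.5, 4.4, 6.3, 5.1, 4.7]
h100 = gumbel_return_level(h, 100)
```

In practice such fits use several decades of modeled or observed annual maxima, which is exactly what the multi-decade reanalysis-driven hindcasts above provide.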
Accelerators for E-beam and X-ray processing
Energy Technology Data Exchange (ETDEWEB)
Auslender, V.L. E-mail: auslen@inp.nsk.su; Bryazgin, A.A.; Faktorovich, B.L.; Gorbunov, V.A.; Kokin, E.N.; Korobeinikov, M.V.; Krainov, G.S.; Lukin, A.N.; Maximov, S.A.; Nekhaev, V.E.; Panfilov, A.D.; Radchenko, V.N.; Tkachenko, V.O.; Tuvik, A.A.; Voronin, L.A
2002-03-01
In recent years the demand for pasteurization and disinsection of various food products (meat, chicken, sea products, vegetables, fruits, etc.) has increased. The treatment of these products on an industrial scale requires powerful electron accelerators with an energy of 5-10 MeV and a beam power of at least 50 kW. The report describes the ILU accelerators with energies up to 10 MeV and beam power up to 150 kW. The different irradiation schemes in electron beam and X-ray modes for various products are described. The design of the X-ray converter and the 90 deg. beam bending system are also given.
Regular pattern formation on surface of aromatic polymers and its cytocompatibility
Energy Technology Data Exchange (ETDEWEB)
Michaljaničová, I. [Department of Solid State Engineering, University of Chemistry and Technology Prague, 166 28 Prague (Czech Republic); Slepička, P., E-mail: petr.slepicka@vscht.cz [Department of Solid State Engineering, University of Chemistry and Technology Prague, 166 28 Prague (Czech Republic); Rimpelová, S. [Department of Biochemistry and Microbiology, University of Chemistry and Technology Prague, Technicka 5, Prague 6, 166 28 (Czech Republic); Slepičková Kasálková, N.; Švorčík, V. [Department of Solid State Engineering, University of Chemistry and Technology Prague, 166 28 Prague (Czech Republic)
2016-05-01
Highlights: • The nanopatterning technique of PES, PEI and PEEK with a KrF laser is described. • Both nanodots and ripples on aromatic polymers were successfully constructed. • Dimensions of the nanostructures can be precisely controlled. • Surface parameters dependent on the angle of laser beam incidence were characterized. • U-2 OS cell adaptation and growth on the nanopatterned surface are described. - Abstract: In this work, we describe ripple and dot nanopatterning of three different aromatic polymer substrates by KrF excimer laser treatment. The conditions for regular structures were established by laser fluence and number of pulses. Subsequently, the influence of the angle of incidence of the laser beam was investigated. We chose polyethersulfone (PES), polyetherimide (PEI) and polyetheretherketone (PEEK) as substrates for modification since they are thermally, chemically and mechanically resistant aromatic polymers with high absorption coefficients at the excimer laser wavelength. Wettability was investigated by contact angle measurement, and the absorption edge was determined by UV-vis spectroscopy. The surface chemistry of the materials was analyzed using FTIR, and the changes caused by the modification were obtained as differential spectra by subtraction of the spectra of the non-modified material. Surface morphology was investigated by atomic force microscopy, and the roughness and surface area of the modified samples were studied. The scans showed the formation of regular periodic structures, ripples and dots, after treatment with fluences of 8 and 16 mJ cm⁻² and 6000 pulses. Further, initial in vitro cytocompatibility tests were performed using the U-2 OS cell line growing on PES samples subjected to scanning electron microscopy analysis. The mapping of structure formation contributes strongly to the development of new applications using nanostructured polymers, e.g. in tissue engineering or, in combination with metallization, in selected electronics and metamaterials
An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography
Energy Technology Data Exchange (ETDEWEB)
Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie [Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100124 (China); Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China) and School of Life Sciences and Technology, Xidian University, Xi' an 710071 (China)
2011-11-15
Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise, so regularization methods are commonly used to find a regularized solution. For the quality of the reconstructed bioluminescent source obtained by such methods, the choice of the regularization parameter is crucial, and to date its selection remains challenging. To address these problems, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation to model bioluminescent photon transport; the diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. The relationship between the unknown source distribution and the multiview, multispectral boundary measurements is then established from the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated with an ℓ₂ data-fidelity term and a general regularization term. To choose the regularization parameter for BLT, an efficient model function approach is proposed that does not require knowledge of the noise level; it requires only the computation of the residual and regularized solution norms. With this knowledge, the model function is constructed to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used
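The parameter-choice problem can be sketched numerically. The snippet below is a minimal illustration, not the authors' model-function algorithm: it solves a Tikhonov-regularized least-squares problem and selects the regularization parameter by generalized cross-validation (GCV), a standard rule that, like the model-function approach described above, needs no prior noise estimate. The function name and the test problem are illustrative assumptions.

```python
import numpy as np

def tikhonov_gcv(A, b, lambdas):
    """Tikhonov solution with the regularization parameter picked by GCV.

    Like the model-function rule in the abstract, GCV requires no prior
    noise estimate; it needs only residual and solution information.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b                       # data projected onto left singular vectors
    m = A.shape[0]
    best = None
    for lam in lambdas:
        f = s**2 / (s**2 + lam)          # Tikhonov filter factors
        # residual norm^2: filtered components plus the part of b outside range(A)
        resid2 = np.sum(((1 - f) * beta) ** 2) + (b @ b - beta @ beta)
        gcv = resid2 / (m - f.sum()) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, lam)
    _, lam = best
    f = s**2 / (s**2 + lam)
    x = Vt.T @ (f / s * beta)            # regularized solution for the chosen lambda
    return x, lam
```

In practice one scans a logarithmic grid of parameters; the GCV minimum tracks the noise level automatically, which is what makes such rules attractive when the noise in the bioluminescence data is unknown.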
The neural substrates of impaired finger tapping regularity after stroke.
Calautti, Cinzia; Jones, P Simon; Guincestre, Jean-Yves; Naccarato, Marcello; Sharma, Nikhil; Day, Diana J; Carpenter, T Adrian; Warburton, Elizabeth A; Baron, Jean-Claude
2010-03-01
Not only finger tapping speed, but also tapping regularity can be impaired after stroke, contributing to reduced dexterity. The neural substrates of impaired tapping regularity after stroke are unknown. Previous work suggests damage to the dorsal premotor cortex (PMd) and prefrontal cortex (PFCx) affects externally-cued hand movement. We tested the hypothesis that these two areas are involved in impaired post-stroke tapping regularity. In 19 right-handed patients (15 men/4 women; age 45-80 years; purely subcortical in 16) partially to fully recovered from hemiparetic stroke, tri-axial accelerometric quantitative assessment of tapping regularity and BOLD fMRI were obtained during fixed-rate auditory-cued index-thumb tapping, in a single session 10-230 days after stroke. A strong random-effect correlation between tapping regularity index and fMRI signal was found in contralesional PMd such that the worse the regularity the stronger the activation. A significant correlation in the opposite direction was also present within contralesional PFCx. Both correlations were maintained if maximal index tapping speed, degree of paresis and time since stroke were added as potential confounds. Thus, the contralesional PMd and PFCx appear to be involved in the impaired ability of stroke patients to fingertap in pace with external cues. The findings for PMd are consistent with repetitive TMS investigations in stroke suggesting a role for this area in affected-hand movement timing. The inverse relationship with tapping regularity observed for the PFCx and the PMd suggests these two anatomically-connected areas negatively co-operate. These findings have implications for understanding the disruption and reorganization of the motor systems after stroke. Copyright (c) 2009 Elsevier Inc. All rights reserved.
Rimbault, C.; Le Meur, G.; Blampuy, F.; Bambade, P.; Schulte, D.
2009-12-01
Depolarization is a new feature in the beam-beam simulation tool GUINEA-PIG++ (GP++). The results of this simulation are studied and compared with another beam-beam simulation tool, CAIN, considering different beam parameters for the International Linear Collider (ILC) with a centre-of-mass energy of 500 GeV.
Caspian sea: petroleum challenges
International Nuclear Information System (INIS)
2005-01-01
The Caspian sea is one of the world areas most promising in terms of investments and petroleum development. This study presents the petroleum challenges posed by this hydrocarbon reserve. The first part discusses the juridical status (sea or lake), the petroleum and gas reserves, the ecosystem and the present-day environment (fishing and caviar), the geostrategic situation, and the transport of gas and oil. It also provides a chronology from 1729 to 2005, a selection of Internet sites, books and reports on the subject, and identity sheets of the countries around the Caspian sea. (A.L.B.)
Energy Technology Data Exchange (ETDEWEB)
NONE
1971-07-01
Water covers a little more than two-thirds of the earth's surface. What is thrown into the sea from a ship may be washed up on a shore thousands of miles away; wastes discharged into the seas or into rivers flowing into them can affect marine life and possibly also the health of man. The study, prevention and control of pollution of the seas and oceans by radionuclides introduced as by-products of man's use of nuclear energy is thus of global interest. (author)
Simulation study of the beam-beam interaction at SPEAR
International Nuclear Information System (INIS)
Tennyson, J.
1980-01-01
A two-dimensional simulation study of the beam-beam interaction at SPEAR indicates that quantum fluctuations affecting the horizontal betatron oscillation play a critical role in the vertical beam blowup.
Nonlinear optical beam manipulation, beam combining, and atmospheric propagation
International Nuclear Information System (INIS)
Fischer, R.A.
1988-01-01
These proceedings collect papers on optics. Topics include: diffraction properties of laser speckle, coherent beam combination by plasma modes, nonlinear responses, deformable mirrors, imaging radiometers, electron beam propagation in inhomogeneous media, and the stability of laser beams in a structured environment.
Experimental study of the molecular beam destruction by beam-beam and beam-background scattering
International Nuclear Information System (INIS)
Bossel, U.; Dettleff, G.
1974-01-01
The extraction of flow properties related to the molecular motion normal to stream lines of an expanding gas jet from observed intensity profiles of supersonic beams is critically assessed. The perturbation of the profile curves by various effects is studied for a helium beam. Exponential laws appear to describe scattering effects to a satisfactory degree
Electron beam diagnostics study
International Nuclear Information System (INIS)
Garganne, P.
1989-08-01
This paper summarizes the results of a study on beam diagnostics using carbon wire scanners and optical transition radiation (OTR) monitors. The main consideration is the selection of materials, taking into account their thermal properties and their effect on the beam [fr]
International Nuclear Information System (INIS)
Stokes, R.H.; Crandall, K.R.; Stovall, J.E.; Swenson, D.A.
1979-01-01
A method has been developed to analyze the beam dynamics of the radiofrequency quadrupole accelerating structure. Calculations show that this structure can accept a dc beam at low velocity, bunch it with high capture efficiency, and accelerate it to a velocity suitable for injection into a drift tube linac
DEFF Research Database (Denmark)
Markovic, Milos; Madsen, Esben; Olesen, Søren Krarup
2012-01-01
BEAMING is a telepresence research project aiming at providing a multimodal interaction between two or more participants located at distant locations. One of the BEAMING applications allows a distant teacher to give a xylophone playing lecture to the students. Therefore, rendering of the xylophon...
International Nuclear Information System (INIS)
Corbett, J.
1996-01-01
The SPEAR storage ring began routine synchrotron radiation operation with a dedicated injector in 1990. Since then, a program to improve beam stability has steadily progressed. This paper, based on a seminar given at a workshop on storage ring optimization (1995 SRI conference) reviews the beam stability program for SPEAR. copyright 1996 American Institute of Physics
CERN PhotoLab
1973-01-01
Inner structure of an ionization beam scanner, a rather intricate piece of apparatus which permits one to measure the density distribution of the proton beam passing through it. On the outside of the tank wall there is the coil for the longitudinal magnetic field, on the inside, one can see the arrangement of electrodes creating a highly homogeneous transverse electric field.
Lembessis, Vasileios E.
2017-07-01
We study the generation of atom vortex beams in the case where a Bose-Einstein condensate, released from a trap and moving in free space, is diffracted from a properly tailored light mask with a spiral transverse profile. We show how such a diffraction scheme could lead to the production of an atomic Ferris wheel beam.
International Nuclear Information System (INIS)
Dicello, J.F.
1975-01-01
Negative pion beams are probably the most esoteric and most complicated type of radiation which has been suggested for use in clinical radiotherapy. Because of the limited availability of pion beams in the past, even to nuclear physicists, there exist relatively fewer basic data for this modality. Pion dosimetry is discussed
International Nuclear Information System (INIS)
Gabbay, M.
1972-01-01
The bead characteristics and the possible mechanisms of the electron beam penetration are presented. The different welding techniques are exposed and the main parts of an electron beam welding equipment are described. Some applications to nuclear, spatial and other industries are cited [fr
MODULATED PLASMA ELECTRON BEAMS
Energy Technology Data Exchange (ETDEWEB)
Stauffer, L. H.
1963-08-15
Techniques have been developed for producing electron beams of two amperes or more, from a plasma within a hollow cathode. Electron beam energies of 20 kilovolts are readily obtained and power densities of the order of 10,000 kilowatts per square inch can be obtained with the aid of auxiliary electromagnetic focusing. An inert gas atmosphere of a few microns pressure is used to initiate and maintain the beam. Beam intensity increases with both gas pressure and cathode potential but may be controlled by varying the potential of an internal electrode. Under constant pressure and cathode potential the beam intensity may be varied over a wide range by adjusting the potential of the internal control electrode. The effects of cathode design on the volt-ampere characteristics of the beam and the design of control electrodes are described. Also, performance data in both helium and argon are given. A tentative theory of the origin of electrons and of beam formation is proposed. Applications to vacuum metallurgy and to electron beam welding are described and illustrated. (auth)
Electron beam simulation applicators
International Nuclear Information System (INIS)
Purdy, J.A.
1983-01-01
A system for simulating electron beam treatment portals using low-temperature melting point alloy is described. Special frames having the same physical dimensions as the electron beam applicators used on the Varian Clinac 20 linear accelerator were designed and constructed
2008-01-01
The Ion Beam Propulsion Study was a joint high-level study between the Applied Physics Laboratory operated by NASA and ASRC Aerospace at Kennedy Space Center, Florida, and Berkeley Scientific, Berkeley, California. The results were promising and suggested that work should continue if future funding becomes available. The application of ion thrusters for spacecraft propulsion is limited to quite modest ion sources with similarly modest ion beam parameters because of the mass penalty associated with the ion source and its power supply system. Also, ion source technology has not been able to provide very high-power ion beams. Small ion beam propulsion systems have been used with considerable success. Ion propulsion systems in practical use employ an onboard ion source to form an energetic ion beam, typically of Xe⁺ ions, as the propellant. Such systems were used for steering and correction of telecommunication satellites and as the main thruster for the Deep Space 1 demonstration mission. In recent years, "giant" ion sources were developed for the worldwide controlled-fusion research effort, with beam parameters many orders of magnitude greater than the tiny ones of conventional space thruster applications. The advent of such huge ion beam sources and the need for advanced propulsion systems for exploration of the solar system suggest a fresh look at ion beam propulsion, now with the giant fusion sources in mind.
International Nuclear Information System (INIS)
Nadji, A.
1989-07-01
The VIVITRON is a new 35 MV particle accelerator which presents a great number of innovations. One of the major problems is the beam transport in this 50 m long electrostatic machine for ions with masses between 1 and 200. Our work consisted of the study of various experimental and theoretical aspects of beam transport in Tandem accelerators, from the ion source to the analysing magnet. Calculations of the beam optics were performed with a Strasbourg version of the computer code Transport. They allowed us to optimize the beam transport parameters of the VIVITRON elements. Special attention has been focused on the design of the charge state selector to be installed in the terminal of the new machine. Beam transmission measurements were carried out in the Strasbourg MP 10 Tandem accelerator for ion beams with masses between 1 and 127 and for terminal voltages from 9 to 15 MV. Partial and total transmissions were obtained, and the beam losses were explained in terms of the vacuum pressure and/or the optics of the beam accelerator system. The results have been extrapolated to the VIVITRON, for which the best working conditions are now clearly defined [fr
International Nuclear Information System (INIS)
Younger, F.C.
1986-08-01
A design and fabrication effort for a beam director is documented. The conceptual design provides for the beam to pass first through a bending and focusing system (or ''achromat''), through a second achromat, through an air-to-vacuum interface (the ''beam window''), and finally through the vernier steering system. Following an initial concept study for a beam director, a prototype permanent-magnet 30° beam-bending achromat and a prototype vernier steering magnet were designed and built. In volume II, copies are included of the funding instruments, requests for quotations, purchase orders, a complete set of as-built drawings, magnetic measurement reports, the concept design report, and the final report on the design and fabrication project
CSIR Research Space (South Africa)
Roux, FS
2009-01-01
Full Text Available. Extracted from presentation slides, "Gaussian beams with vortex dipoles" (CSIR National Laser Centre). Recoverable content: the Gaussian beam is written in normalised coordinates as g(u, v, t) = exp(−(u² + v²)/(1 − it)), with u = x/ω₀, v = y/ω₀ and t = z/ρ, where ω₀ is the 1/e² beam waist radius and ρ = πω₀²/λ is the Rayleigh range. The Fresnel propagation integral over the Gaussian beam is evaluated once and for all; propagation is then computed by differentiation instead of integration, via the replacements u → du = ((t − t₀)/i2) ∂/∂u′ and v → dv = ((t − t₀)/i2) ∂/∂v′.
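The normalised Gaussian beam notation quoted in this record can be evaluated directly. A minimal Python sketch follows; the function name and parameters are illustrative, and only the beam profile itself (not the vortex-dipole propagation scheme) is implemented:

```python
import numpy as np

def gaussian_beam(x, y, z, w0, wavelength):
    """Gaussian beam in normalised coordinates,
    g(u, v, t) = exp(-(u^2 + v^2)/(1 - i t)),
    with u = x/w0, v = y/w0, t = z/rho."""
    rho = np.pi * w0**2 / wavelength    # Rayleigh range
    u, v, t = x / w0, y / w0, z / rho   # normalised coordinates
    return np.exp(-(u**2 + v**2) / (1 - 1j * t))
```

At the waist (z = 0) the amplitude falls to 1/e at a transverse distance of one waist radius, i.e. |g(w0, 0, 0)| = e⁻¹, matching the 1/e² intensity convention for ω₀.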
International Nuclear Information System (INIS)
McKinney, C.R.
1980-01-01
An ion beam analyzer is specified, having an ion source for generating ions of a sample to be analyzed, means for extracting the sample ions, means for focusing the sample ions into a beam, separation means positioned along the ion beam for selectively deflecting species of ions, and means for detecting the selected species of ions. According to the specification, the analyzer further comprises (a) means for disabling at least a portion of the separation means, such that the ion beam from the source remains undeflected; (b) means located along the path of the undeflected ion beam for sensing the sample ions; and (c) enabling means responsive to the sensing means for automatically re-enabling the separation means when the sample ions reach a predetermined intensity level. (author)
International Nuclear Information System (INIS)
Lee, Y.T.
1976-01-01
Research activities with crossed molecular beams at Lawrence Berkeley Laboratory during 1976 are described. Topics covered include: scattering of Ar* and Kr* with Xe; metastable rare gas interactions, He* + H₂; an atomic and molecular halogen beam source; a crossed molecular beam study of the Cl + Br₂ → BrCl + Br reaction; O(³P) reaction dynamics; development of the high pressure plasma beam source; energy randomization in the Cl + C₂H₃Br → Br + C₂H₃Cl reaction; high resolution photoionization studies of NO and ICl; photoionization of (H₂O)ₙ and (NH₃)₂; photoionization mass spectroscopy of NH₃⁺ and O₃⁺; photofragmentation of bromine; and construction of a chemiluminescence-laser fluorescence crossed molecular beam machine
Reducing errors in the GRACE gravity solutions using regularization
Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.
2012-09-01
The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems with Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformations in a parallel computing environment, projecting the large estimation problem onto one about two orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order, and certain resonant and near-resonant harmonic coefficients have higher errors than the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that constrains the geopotential coefficients as a function of degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4
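The projection idea can be sketched on a toy problem. The snippet below is a minimal illustration under simplifying assumptions (dense NumPy arrays, a problem far smaller than GRACE, no reorthogonalization or parallelism): k steps of Golub-Kahan (Lanczos) bidiagonalization reduce the Tikhonov problem to a small bidiagonal one, on which L-curve points (residual norm versus solution norm) are cheap to evaluate for many candidate parameters.

```python
import numpy as np

def gkb_lcurve(A, b, k, lambdas):
    """Approximate L-curve points via Golub-Kahan (Lanczos) bidiagonalization.

    Projects min ||Ax - b||^2 + lam*||x||^2 onto a k-dimensional Krylov
    subspace, then evaluates (residual norm, solution norm) for each lam
    on the small (k+1) x k bidiagonal problem.
    """
    m, n = A.shape
    beta1 = np.linalg.norm(b)
    u = b / beta1                        # starting Lanczos vector
    v_old = np.zeros(n)
    B = np.zeros((k + 1, k))             # lower-bidiagonal projected matrix
    beta = 0.0
    for j in range(k):
        r = A.T @ u - beta * v_old       # bidiagonalization recurrence
        alpha = np.linalg.norm(r)
        v = r / alpha
        p = A @ v - alpha * u
        beta = np.linalg.norm(p)
        u = p / beta
        B[j, j], B[j + 1, j] = alpha, beta
        v_old = v
    e1 = np.zeros(k + 1)
    e1[0] = beta1                        # projected right-hand side
    pts = []
    for lam in lambdas:
        # augmented small least-squares solve: O(k) unknowns, independent of m, n
        M = np.vstack([B, np.sqrt(lam) * np.eye(k)])
        rhs = np.concatenate([e1, np.zeros(k)])
        y, *_ = np.linalg.lstsq(M, rhs, rcond=None)
        pts.append((np.linalg.norm(B @ y - e1), np.linalg.norm(y)))
    return np.array(pts)                 # columns: residual norm, solution norm
```

The regularization parameter would then be read off near the corner of the curve traced by these points; the key saving, as in the study above, is that each candidate parameter costs only a small k-dimensional solve rather than a full-size one.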
Survey of beam-beam limitations
International Nuclear Information System (INIS)
Courant, E.; Cornacchia, M.; Donald, M.M.R.; Evans, L.R.; Tazzari, S.; Wilson, E.J.N.
1979-01-01
The effect of the beam-beam interaction is known to limit the luminosity of electron-positron storage rings and will, no doubt, limit the proton-antiproton collision scheme for the SPS. While theorists are struggling to explain this phenomenon, it is more instructive to list their failures than their rather limited successes, in the hope that experiments may emerge which will direct their endeavors. The search for a description of a nonlinear system as it approaches the limit in which ordered motion breaks down is the nub of the problem. It has engaged many fine mathematical intellects for decades and will no doubt continue to do so long after ISABELLE, the p-p̄ collider and LEP are past achievements. Empirical scaling laws are emerging which relate electron machines to each other, but their extrapolation to proton machines remains a very speculative exercise. Experimental data on proton limits are confined to one machine, the ISR, which does not normally suffer the beam-beam effect and where it must be artificially induced or simulated. This machine is also very different in important ways from the p-p̄ collider. The gloomy picture which has emerged recently is that the fixed limits which were conventionally assumed for proton and electron machines can only be said to be valid for the machines which engendered them - the best guess that could be made at the time. They are very difficult to extrapolate to other sets of parameters