Algorithmic algebraic geometry and flux vacua
International Nuclear Information System (INIS)
Gray, James; He, Yang-Hui; Lukas, Andre
2006-01-01
We develop a new and efficient method to systematically analyse four-dimensional effective supergravities which descend from flux compactifications. The issue of finding vacua of such systems, both supersymmetric and non-supersymmetric, is mapped into a problem in computational algebraic geometry. Using recent developments in computer algebra, the problem can then be rapidly dealt with in a completely algorithmic fashion. Two main results are (1) a procedure for calculating constraints which the flux parameters must satisfy in these models if any given type of vacuum is to exist; (2) a stepwise process for finding all of the isolated vacua of such systems and their physical properties. We illustrate our discussion with several concrete examples, some of which have eluded conventional methods so far.
An Algorithm for Induction Motor Stator Flux Estimation
Directory of Open Access Journals (Sweden)
STOJIC, D. M.
2012-08-01
Full Text Available A new method for induction motor stator flux estimation, intended for sensorless IM drive applications, is presented in this paper. The proposed algorithm solves the problems associated with pure integration, which is commonly used for stator flux estimation. An observer-based structure, built on the stationary-state behaviour of the stator flux vector, is proposed in order to eliminate the undesired DC offset component present in integrator-based stator flux estimates. A set of simulation runs shows that the proposed algorithm yields DC-offset-free stator flux estimates for both low and high stator frequency induction motor operation.
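As a rough illustration of the drift problem this abstract addresses (not the paper's observer, whose structure is not reproduced here), the sketch below contrasts pure integration of the back-EMF with a first-order low-pass substitute that bleeds off DC offset. The 50 Hz signal, offset value, and cutoff frequency are all made-up numbers.

```python
import numpy as np

def flux_pure_integrator(emf, dt):
    # Pure integration of back-EMF: any DC offset accumulates without bound.
    return np.cumsum(emf) * dt

def flux_lpf_integrator(emf, dt, wc=5.0):
    # First-order low-pass substitute 1/(s + wc) for the integrator 1/s:
    # behaves like an integrator well above wc but bleeds off DC offset.
    flux = np.zeros_like(emf)
    for k in range(1, len(emf)):
        flux[k] = flux[k - 1] + dt * (emf[k] - wc * flux[k - 1])
    return flux

# Synthetic back-EMF at 50 Hz with a small DC measurement offset.
dt = 1e-4
t = np.arange(0, 2.0, dt)
emf = 100 * np.cos(2 * np.pi * 50 * t) + 0.5   # 0.5 V DC offset

drifting = flux_pure_integrator(emf, dt)   # grows linearly with time
bounded = flux_lpf_integrator(emf, dt)     # offset contribution stays bounded
```

The trade-off, as the abstract notes, is that a plain low-pass substitute degrades accuracy at low stator frequency, which is what motivates the observer-based structure.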
Fully multidimensional flux-corrected transport algorithms for fluids
International Nuclear Information System (INIS)
Zalesak, S.T.
1979-01-01
The theory of flux-corrected transport (FCT) developed by Boris and Book is placed in a simple, generalized format, and a new algorithm for implementing the critical flux-limiting stage in multidimensions without resort to time splitting is presented. The new flux-limiting algorithm allows the use of FCT techniques in multidimensional fluid problems for which time splitting would produce unacceptable numerical results, such as those involving incompressible or nearly incompressible flow fields. The 'clipping' problem associated with the original one-dimensional flux limiter is also eliminated or alleviated. Test results and applications to a two-dimensional fluid plasma problem are presented.
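The generic FCT structure behind this abstract (low-order monotone fluxes plus a limited antidiffusive correction) can be sketched in one dimension. This is only the 1D skeleton with a Zalesak-style limiter on illustrative values; the paper's actual contribution is the fully multidimensional limiter, which is not attempted here.

```python
import numpy as np

def fct_step(u, a, dt, dx):
    """One 1D FCT step for u_t + a*u_x = 0 (a > 0), periodic boundaries."""
    lam = dt / dx
    f_low = a * u                             # upwind (monotone) flux at i+1/2
    f_high = 0.5 * a * (u + np.roll(u, -1))   # central (high-order) flux
    adiff = f_high - f_low                    # antidiffusive flux A_{i+1/2}
    # Transported-diffused solution from the low-order fluxes alone.
    utd = u - lam * (f_low - np.roll(f_low, 1))
    # Local bounds from neighbours of both u and utd.
    stack = [np.roll(w, s) for w in (u, utd) for s in (-1, 0, 1)]
    umax, umin = np.maximum.reduce(stack), np.minimum.reduce(stack)
    a_prev = np.roll(adiff, 1)                # A_{i-1/2}
    p_in = np.maximum(a_prev, 0) - np.minimum(adiff, 0)    # total inflow
    p_out = np.maximum(adiff, 0) - np.minimum(a_prev, 0)   # total outflow
    eps = 1e-30
    r_in = np.minimum(1.0, (umax - utd) / (lam * p_in + eps))
    r_out = np.minimum(1.0, (utd - umin) / (lam * p_out + eps))
    # Limiter coefficient at i+1/2 depends on the sign of the correction.
    c = np.where(adiff >= 0,
                 np.minimum(np.roll(r_in, -1), r_out),
                 np.minimum(r_in, np.roll(r_out, -1)))
    corr = c * adiff
    return utd - lam * (corr - np.roll(corr, 1))

x = np.linspace(0, 1, 100, endpoint=False)
u = np.where((x > 0.2) & (x < 0.4), 1.0, 0.0)   # square wave
u0_sum = u.sum()
for _ in range(80):
    u = fct_step(u, a=1.0, dt=0.005, dx=0.01)   # CFL = 0.5
```

After 80 steps the profile stays within its initial bounds (no new extrema) and the scheme remains conservative, which is exactly the property the limiter is designed to enforce.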
Flux-corrected transport principles, algorithms, and applications
Kuzmin, Dmitri; Turek, Stefan
2005-01-01
Addressing students and researchers as well as CFD practitioners, this book describes the state of the art in the development of high-resolution schemes based on the Flux-Corrected Transport (FCT) paradigm. Intended for readers who have a solid background in Computational Fluid Dynamics, the book begins with historical notes by J.P. Boris and D.L. Book. Review articles that follow describe recent advances in the design of FCT algorithms as well as various algorithmic aspects. The topics addressed in the book and its main highlights include: the derivation and analysis of classical FCT schemes with special emphasis on the underlying physical and mathematical constraints; flux limiting for hyperbolic systems; generalization of FCT to implicit time-stepping and finite element discretizations on unstructured meshes and its role as a subgrid scale model for Monotonically Integrated Large Eddy Simulation (MILES) of turbulent flows. The proposed enhancements of the FCT methodology also comprise the prelimiting and '...
Flux estimation algorithms for electric drives: a comparative study
Koteich, Mohamad
2016-01-01
International audience; This paper reviews stator flux estimation algorithms applied to alternating current motor drives. The so-called voltage model estimation, which consists of integrating the back-electromotive force signal, is addressed. In practice, however, pure integration is prone to drift problems due to noise, measurement error, stator resistance uncertainty and unknown initial conditions. This limitation becomes more restrictive at low-speed operation. Several soluti...
Flux mapping algorithm (FMA) for 700 MWe PHWR
International Nuclear Information System (INIS)
Sonavani, Manoj; Ingle, V.J.; Singhvi, P.K.; Raj, Manish; Fernando, M.P.S.; Kumar, A.N.
2012-01-01
For a large reactor like the 700 MWe PHWR, effective spatial control is essential and is provided by the RRS. For spatial control purposes the reactor core is divided into 14 power zones, each with a corresponding light water zonal compartment. The 14 ZCCs are located in two radial planes, each containing 7 ZCCs. For each zone, power measurement is carried out using an inconel (3 pitch long) self-powered neutron detector (SPND) at an appropriate location close to the respective ZCC. The zone power obtained from the healthy zone control detector (ZCD) readings of a particular zone may not correspond to its actual power, because the detectors in each zone measure only average fluxes while the zone extends over a large core region. Therefore, accurate estimation of zone power calibration factors is required to estimate the zone powers and to provide effective spatial power control that avoids xenon-induced spatial power oscillations in large PHWRs such as the 700 and 540 MWe reactors. This accurate calculation of zone power is carried out by the FMS, which uses λ modes in its algorithm. The flux at any point inside the reactor can be represented as a linear combination of these modes; the coefficients used in the expansion are called combining coefficients. If the readings of the detectors are known, the combining coefficients can be estimated by simple matrix operations. Once these combining coefficients are known, the flux at any point inside the reactor can be found. (author)
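The flux-mapping step described above (expanding the flux in λ-modes and recovering the combining coefficients from detector readings) reduces to a small linear least-squares problem. The sketch below uses made-up mode shapes and 14 hypothetical detector sites, not actual PHWR data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: flux(r) ~ sum_k c_k * phi_k(r), with 3 lambda-modes
# sampled at 14 detector locations (one per zone).
n_det, n_modes = 14, 3
M = rng.random((n_det, n_modes))        # phi_k evaluated at detector sites
c_true = np.array([1.0, 0.2, -0.1])     # combining coefficients
readings = M @ c_true                   # noiseless detector readings

# Combining coefficients recovered from the readings by least squares,
# i.e. the "simple matrix operations" the abstract refers to.
c_est, *_ = np.linalg.lstsq(M, readings, rcond=None)

# With c known, flux anywhere in the core is the same linear combination
# of the mode shapes evaluated at that point.
```

With more detectors than modes the system is overdetermined, so noisy readings are averaged in a least-squares sense rather than fit exactly.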
An Improved Seeding Algorithm of Magnetic Flux Lines Based on Data in 3D Space
Directory of Open Access Journals (Sweden)
Jia Zhong
2015-05-01
Full Text Available This paper proposes an approach to increase the accuracy and efficiency of seeding algorithms for magnetic flux lines in magnetic field visualization. To obtain accurate and reliable visualization results, the density of the magnetic flux lines should reflect the magnetic induction intensity, and the seed points determine the density of the magnetic flux lines. However, the traditional seeding algorithm, a statistical algorithm based on data, produces errors when computing magnetic flux through subdivision of the plane. To achieve higher accuracy, more subdivisions must be made, which reduces efficiency. This paper analyzes the errors made when the traditional seeding algorithm is used and gives an improved algorithm. It then validates the accuracy and efficiency of the improved algorithm by comparing the results of the two algorithms with results from the equivalent magnetic flux algorithm.
Flux-corrected transport principles, algorithms, and applications
Löhner, Rainald; Turek, Stefan
2012-01-01
Many modern high-resolution schemes for Computational Fluid Dynamics trace their origins to the Flux-Corrected Transport (FCT) paradigm. FCT maintains monotonicity using a nonoscillatory low-order scheme to determine the bounds for a constrained high-order approximation. This book begins with historical notes by J.P. Boris and D.L. Book who invented FCT in the early 1970s. The chapters that follow describe the design of fully multidimensional FCT algorithms for structured and unstructured grids, limiting for systems of conservation laws, and the use of FCT as an implicit subgrid scale model. The second edition presents 200 pages of additional material. The main highlights of the three new chapters include: FCT-constrained interpolation for Arbitrary Lagrangian-Eulerian methods, an optimization-based approach to flux correction, and FCT simulations of high-speed flows on overset grids. Addressing students and researchers, as well as CFD practitioners, the book is focused on computational aspects and contains m...
An implicit flux-split algorithm to calculate hypersonic flowfields in chemical equilibrium
Palmer, Grant
1987-01-01
An implicit, finite-difference, shock-capturing algorithm that calculates inviscid, hypersonic flows in chemical equilibrium is presented. The flux vectors and flux Jacobians are differenced using a first-order, flux-split technique. The equilibrium composition of the gas is determined by minimizing the Gibbs free energy at every node point. The code is validated by comparing results for an axisymmetric hemisphere against previously published results. The algorithm is also applied to more practical configurations. Its accuracy, stability, and versatility are promising.
Inviscid flux-splitting algorithms for real gases with non-equilibrium chemistry
Shuen, Jian-Shun; Liou, Meng-Sing; Van Leer, Bram
1990-01-01
Formulations of inviscid flux splitting algorithms for chemical nonequilibrium gases are presented. A chemical system for air dissociation and recombination is described. Numerical results for one-dimensional shock tube and nozzle flows of air in chemical nonequilibrium are examined.
Flux-split algorithms for flows with non-equilibrium chemistry and vibrational relaxation
Grossman, B.; Cinnella, P.
1990-01-01
Numerical computation methods are considered for gas flows with nonequilibrium chemistry and thermodynamics, with attention to an equilibrium model, a general nonequilibrium model, and a simplified model based on vibrational relaxation. Flux-splitting procedures are developed for the fully coupled inviscid equations encompassing fluid dynamics and both chemical and internal energy-relaxation processes. A fully coupled and implicit large-block structure is presented which embodies novel forms of flux-vector split and flux-difference split algorithms valid for nonequilibrium flow; illustrative high-temperature shock tube and nozzle flow examples are given.
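The flux-vector splitting these entries build on can be made concrete for a perfect gas; the papers' contribution is the extension to equilibrium and nonequilibrium chemistry, which is not attempted here. Below is a minimal Steger-Warming-type split of the 1D Euler flux, whose defining property is that the two one-sided fluxes sum to the physical flux, F = F+ + F-.

```python
import numpy as np

GAMMA = 1.4  # perfect-gas ratio of specific heats (illustrative)

def steger_warming(rho, u, p, sign):
    """One-sided Steger-Warming flux F+ (sign=+1) or F- (sign=-1)."""
    a = np.sqrt(GAMMA * p / rho)                  # speed of sound
    lams = np.array([u, u + a, u - a])            # characteristic speeds
    l1, l2, l3 = 0.5 * (lams + sign * np.abs(lams))  # split eigenvalues
    g = GAMMA
    coef = rho / (2 * g)
    f0 = coef * (2 * (g - 1) * l1 + l2 + l3)
    f1 = coef * (2 * (g - 1) * l1 * u + l2 * (u + a) + l3 * (u - a))
    f2 = coef * ((g - 1) * l1 * u**2 + 0.5 * l2 * (u + a)**2
                 + 0.5 * l3 * (u - a)**2
                 + (3 - g) / (2 * (g - 1)) * (l2 + l3) * a**2)
    return np.array([f0, f1, f2])

def euler_flux(rho, u, p):
    """Physical 1D Euler flux (mass, momentum, energy)."""
    E = p / (GAMMA - 1) + 0.5 * rho * u**2
    return np.array([rho * u, rho * u**2 + p, u * (E + p)])

# Consistency check: the split fluxes must sum to the physical flux.
rho, u, p = 1.0, 0.3, 1.0
f_sum = steger_warming(rho, u, p, +1) + steger_warming(rho, u, p, -1)
```

In an upwind scheme, the interface flux is then F+(left state) + F-(right state), which is what makes the splitting usable for implicit, shock-capturing formulations.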
Optimal Design of the Transverse Flux Machine Using a Fitted Genetic Algorithm with Real Parameters
DEFF Research Database (Denmark)
Argeseanu, Alin; Ritchie, Ewen; Leban, Krisztina Monika
2012-01-01
This paper applies a fitted genetic algorithm (GA) to the optimal design of the transverse flux machine (TFM). The main goal is to provide an easy-to-use tool for the optimal design of the TFM. The GA optimizes the analytic basic design of two TFM topologies: the C-core and the U-core. First...
Improved semianalytic algorithms for finding the flux from a cylindrical source
International Nuclear Information System (INIS)
Wallace, O.J.
1992-01-01
Hand-calculation methods involving semianalytic approximations of exact flux formulas continue to be useful in shielding calculations because they enable shield design personnel to make quick estimates of dose rates, check calculations made by more exact and time-consuming methods, and rapidly determine the scope of problems. They are also a valuable teaching tool. The most useful approximate flux formula is that for the flux at a lateral detector point from a cylindrical source with an intervening slab shield. Such an approximate formula is given by Rockwell. An improved formula for this case is given by Ono and Tsuro. Shure and Wallace also give this formula together with function tables and a detailed survey of its accuracy. The second section of this paper provides an algorithm for significantly improving the accuracy of the formula of Ono and Tsuro. The flux at a detector point outside the radial and axial extensions of a cylindrical source, again with an intervening slab shield, is another case of interest, but nowhere in the literature is this arrangement of source, shield, and detector point treated. In the third section of this paper, an algorithm for this case is given, based on superposition of sources and the algorithm of Section II. 6 refs., 1 fig., 1 tab
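The semianalytic formulas discussed above approximate what a brute-force point-kernel quadrature computes directly: the uncollided flux as an integral of exp(-mu*d)/(4*pi*r^2) over the source volume. A sketch of that reference calculation follows, with illustrative geometry and units; this is not the Ono-Tsuro formula or the paper's improved algorithm.

```python
import numpy as np

def cylinder_flux(S_v, R, H, mu, t, det, n=24):
    """Uncollided point-kernel flux at detector point `det` = (x, y, z) from
    a uniform cylindrical volume source (radius R, height H, axis along z),
    attenuated by a slab shield of thickness t and attenuation coefficient
    mu whose normal is the x axis. Midpoint quadrature in (r, phi, z)."""
    r = (np.arange(n) + 0.5) * R / n
    phi = (np.arange(n) + 0.5) * 2 * np.pi / n
    z = (np.arange(n) + 0.5) * H / n
    rg, pg, zg = np.meshgrid(r, phi, z, indexing="ij")
    xs, ys = rg * np.cos(pg), rg * np.sin(pg)
    dx, dy, dz = det[0] - xs, det[1] - ys, det[2] - zg
    d = np.sqrt(dx**2 + dy**2 + dz**2)
    slant = d / np.abs(dx)              # slab path slant factor along the ray
    kernel = np.exp(-mu * t * slant) / (4 * np.pi * d**2)
    dV = rg * (R / n) * (2 * np.pi / n) * (H / n)   # cylindrical volume element
    return S_v * np.sum(kernel * dV)

# Example: lateral detector just outside the shield (illustrative values).
phi_det = cylinder_flux(1.0, R=1.0, H=2.0, mu=0.2, t=5.0, det=(10.0, 0.0, 1.0))
```

Such a quadrature is far too slow for hand calculation, which is precisely why the semianalytic approximations the abstract surveys remain useful as quick estimates and teaching tools.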
Xue, Y.; Liu, S.; Hu, Y.; Yang, J.; Chen, Q.
2007-01-01
To improve the accuracy in prediction, Genetic Algorithm based Adaptive Neural Network Ensemble (GA-ANNE) is presented. Intersections are allowed between different training sets based on the fuzzy clustering analysis, which ensures the diversity as well as the accuracy of individual Neural Networks (NNs). Moreover, to improve the accuracy of the adaptive weights of individual NNs, GA is used to optimize the cluster centers. Empirical results in predicting carbon flux of Duke Forest reveal that GA-ANNE can predict the carbon flux more accurately than Radial Basis Function Neural Network (RBFNN), Bagging NN ensemble, and ANNE. © 2007 IEEE.
A depth-first search algorithm to compute elementary flux modes by linear programming.
Quek, Lake-Ee; Nielsen, Lars K
2014-07-30
The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is near impossible, and even moderately sized models are computationally demanding. Here, a depth-first search algorithm is presented that uses linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints.
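To make the EFM definition concrete: an EFM is a support-minimal nonnegative flux vector in the nullspace of the stoichiometric matrix S. The toy sketch below enumerates the EFMs of a four-reaction network by testing candidate supports with an SVD nullspace check. The paper's algorithm instead embeds an LP feasibility test inside a depth-first search so that it scales; this brute-force version does not, and is only meant to illustrate the feasibility question being asked at each node.

```python
from itertools import combinations
import numpy as np

# Toy network: metabolites A, B; irreversible reactions
# R1: -> A,  R2: A -> B,  R3: B -> ,  R4: A ->
S = np.array([[1, -1,  0, -1],    # A balance
              [0,  1, -1,  0]])   # B balance
n_rxn = S.shape[1]

def positive_nullspace_vector(cols):
    """Return a strictly positive flux over `cols` with S v = 0, or None."""
    sub = S[:, cols]
    _, s, vt = np.linalg.svd(sub)
    null_dim = sub.shape[1] - int(np.sum(s > 1e-10))
    if null_dim != 1:
        return None                 # infeasible or not uniquely determined
    v = vt[-1]                      # basis vector of the 1-D nullspace
    if np.all(v > 1e-10) or np.all(v < -1e-10):
        return np.abs(v)            # sign-consistent, hence a valid flux
    return None

efms = []
for k in range(1, n_rxn + 1):
    for cols in combinations(range(n_rxn), k):
        # Skip supersets of known EFMs: EFMs are support-minimal.
        if any(set(e).issubset(cols) for e in efms):
            continue
        if positive_nullspace_vector(list(cols)) is not None:
            efms.append(cols)
```

On this network the two EFMs are the through-pathway {R1, R2, R3} and the bypass {R1, R4}; replacing the nullspace check with an LP feasibility test is what turns this enumeration into the scalable depth-first search the abstract describes.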
An improved flux-split algorithm applied to hypersonic flows in chemical equilibrium
Palmer, Grant
1988-01-01
An explicit, finite-difference, shock-capturing numerical algorithm is presented and applied to hypersonic flows assumed to be in thermochemical equilibrium. Real-gas chemistry is either loosely coupled to the gasdynamics by way of a Gibbs free energy minimization package or fully coupled using species mass conservation equations with finite-rate chemical reactions. A scheme is developed that maintains stability in the explicit, finite-rate formulation while allowing relatively high time steps. The codes use flux vector splitting to difference the inviscid fluxes and employ real-gas corrections to viscosity and thermal conductivity. Numerical results are compared against existing ballistic range and flight data. Flows about complex geometries are also computed.
Directory of Open Access Journals (Sweden)
Yeo Beom Yoon
2014-04-01
Full Text Available Windows are the primary aperture for introducing solar radiation into the interior space of a building. This study explores the use of EnergyPlus software for analyzing the illuminance level on the floor of a room as a function of distance from the window. For this experiment, a double clear glass window was used. Preliminary modelling in EnergyPlus showed results consistent with experimentally monitored real-time data. EnergyPlus has two main daylighting algorithms: the DElight method, employing a radiosity technique, and the Detailed method, employing a split-flux technique. Further analysis of illuminance using the DElight and Detailed methods showed a significant difference in the results. Finally, we compared the algorithms of the two analysis methods in EnergyPlus.
Suttles, John T.; Wielicki, Bruce A.; Vemury, Sastri
1992-01-01
The ERBE algorithm is applied to the Nimbus-7 earth radiation budget (ERB) scanner data for June 1979 to analyze the performance of an inversion method in deriving top-of-atmosphere albedos and longwave radiative fluxes. The performance is assessed by comparing ERBE algorithm results with appropriate results derived using the sorting-by-angular-bins (SAB) method, the ERB MATRIX algorithm, and the 'new-cloud ERB' (NCLE) algorithm. Comparisons are made for top-of-atmosphere albedos, longwave fluxes, viewing zenith-angle dependence of derived albedos and longwave fluxes, and cloud fractional coverage. Using the SAB method as a reference, the rms accuracy of monthly average ERBE-derived results are estimated to be 0.0165 (5.6 W/sq m) for albedos (shortwave fluxes) and 3.0 W/sq m for longwave fluxes. The ERBE-derived results were found to depend systematically on the viewing zenith angle, varying from near nadir to near the limb by about 10 percent for albedos and by 6-7 percent for longwave fluxes. Analyses indicated that the ERBE angular models are the most likely source of the systematic angular dependences. Comparison of the ERBE-derived cloud fractions, based on a maximum-likelihood estimation method, with results from the NCLE showed agreement within about 10 percent.
Hoffmann, Mathias; Schulz-Hanke, Maximilian; Garcia Alba, Joana; Jurisch, Nicole; Hagemann, Ulrike; Sachs, Torsten; Sommer, Michael; Augustin, Jürgen
2016-04-01
Processes driving methane (CH4) emissions in wetland ecosystems are highly complex. In particular, the separation of CH4 emissions into ebullition- and diffusion-derived flux components, a prerequisite for mechanistic process understanding and identification of potential environmental drivers, is rather challenging. We present a simple calculation algorithm, based on an adaptive R-script, which separates open-water, closed-chamber CH4 flux measurements into diffusion- and ebullition-derived components. Hence, flux-component-specific dynamics are revealed and potential environmental drivers identified. Flux separation is based on a statistical approach, using ebullition-related sudden concentration changes obtained during high-resolution CH4 concentration measurements. By applying the lower and upper quartiles ± the interquartile range (IQR) as a variable threshold, diffusion-dominated periods of the flux measurement are filtered. Subsequently, flux calculation and separation are performed. The algorithm was verified in a laboratory experiment and tested under field conditions, using flux measurement data (July to September 2013) from a flooded, former fen grassland site. Erratic ebullition events contributed 46% of total CH4 emissions, which is comparable to values reported in the literature. Additionally, a shift in the diurnal trend of diffusive fluxes throughout the measurement period, driven by the water temperature gradient, was revealed.
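The quartile ± IQR thresholding described above can be sketched on a synthetic chamber record (made-up concentrations and bubble sizes, in Python rather than the authors' adaptive R-script):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic chamber record: slow diffusive rise plus two ebullition jumps.
n = 600                                                   # 10 min at 1 Hz
conc = 1.8 + 0.0005 * np.arange(n) + rng.normal(0, 0.002, n)  # ppm
conc[200:] += 0.15                                        # bubble event 1
conc[450:] += 0.10                                        # bubble event 2

dc = np.diff(conc)                    # per-step concentration changes
q1, q3 = np.percentile(dc, [25, 75])
iqr = q3 - q1
# Changes inside [Q1 - IQR, Q3 + IQR] are treated as diffusion-dominated;
# changes outside this variable threshold are attributed to ebullition.
diffusive = (dc >= q1 - iqr) & (dc <= q3 + iqr)

diffusive_flux = np.sum(dc[diffusive])
ebullition_flux = np.sum(dc[~diffusive])
```

The two components sum exactly to the total concentration change over the run, so the separation conserves the measured flux while isolating the erratic bubble events.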
Data Driven Estimation of Transpiration from Net Water Fluxes: the TEA Algorithm
Nelson, J. A.; Carvalhais, N.; Cuntz, M.; Delpierre, N.; Knauer, J.; Migliavacca, M.; Ogee, J.; Reichstein, M.; Jung, M.
2017-12-01
The eddy covariance method, while powerful, can only provide a net accounting of ecosystem fluxes. For water cycle components in particular, efforts to partition total evapotranspiration (ET) into the biotic component (transpiration, T) and the abiotic component (here evaporation, E) have seen limited success, with no one method emerging as a standard. Here we demonstrate a novel method that uses ecosystem water use efficiency (WUE) to predict transpiration in two steps: (1) a filtration step that isolates the signal of ET for periods where E is minimized and ET is likely dominated by the signal of T; and (2) a step which predicts the WUE using meteorological variables, as well as information derived from the carbon and energy fluxes. To assess the underlying assumptions, we tested the proposed method on three ecological models, allowing validation where the underlying carbon:water relationships, as well as the transpiration estimates, are known. The partitioning method shows high correlation (R² > 0.8) between T_model/ET and T_TEA/ET across timescales from half-hourly to annual, as well as capturing spatial variability across sites. Apart from predictive performance, we explore the sensitivities of the method to the underlying assumptions, such as the effects of residual evaporation in the training dataset. Furthermore, we show initial transpiration estimates from the algorithm at global scale, via the FLUXNET dataset.
DEFF Research Database (Denmark)
Ravn, Ib
FLUX denotes a flowing or streaming, i.e. dynamics. If life is understood as process and development rather than as things and mechanics, a different picture of the good life emerges than the one suggested by the familiar Western mechanicism. Dynamically understood, the good life involves the best possible... channeling of the flux or energy that streams through us and makes itself known in our daily activities. Should our thoughts, actions, work, social interactions and political life be organized according to tight, fixed rule sets, with no room for deviation? Or should they, on the contrary, proceed entirely unhindered by rules and constraints...
Directory of Open Access Journals (Sweden)
Guangjun Wang
2012-01-01
Full Text Available Background. Acupoints belonging to the 12 meridians, which share the same names, are symmetrically distributed on the body. It has been shown that acupoints have certain biological specificities that distinguish them from normal parts of the body. However, there is little evidence that acupoints of the same name, located bilaterally and symmetrically, have lateralized specificity. Thus, investigating lateralized specificity and the relationship between left-side and right-side acupuncture is of special importance. Methodology and Principal Findings. The mean blood flux (MBF) in both Hegu acupoints was measured by a Moor full-field laser perfusion imager. Using a system identification algorithm, the output distribution in different groups was acquired, based on different acupoint stimulation and standard signal input. It is demonstrated that after stimulation of the right Hegu acupoint by needle, the output value of MBF in the contralateral Hegu acupoint was strongly amplified, while after acupuncturing the left Hegu acupoint, the output value of MBF in either Hegu acupoint was amplified moderately. Conclusions and Significance. This paper indicates that the Hegu acupoint has lateralized specificity. After stimulating the ipsilateral Hegu acupoint, symmetry breaking is produced in contrast to contralateral Hegu acupoint stimulation.
Grossman, B.; Cinella, P.
1988-01-01
A finite-volume method for the numerical computation of flows with nonequilibrium thermodynamics and chemistry is presented. A thermodynamic model is described which simplifies the coupling between the chemistry and thermodynamics and also results in the retention of the homogeneity property of the Euler equations (including all the species continuity and vibrational energy conservation equations). Flux-splitting procedures are developed for the fully coupled equations involving fluid dynamics, chemical production and thermodynamic relaxation processes. New forms of flux-vector split and flux-difference split algorithms are embodied in a fully coupled, implicit, large-block structure, including all the species conservation and energy production equations. Several numerical examples are presented, including high-temperature shock tube and nozzle flows. The methodology is compared to other existing techniques, including spectral and central-differenced procedures, and favorable comparisons are shown regarding accuracy, shock-capturing and convergence rates.
International Nuclear Information System (INIS)
Munoz-Jaramillo, Andres; Martens, Petrus C. H.; Nandy, Dibyendu; Yeates, Anthony R.
2010-01-01
The emergence of tilted bipolar active regions (ARs) and the dispersal of their flux, mediated via processes such as diffusion, differential rotation, and meridional circulation, is believed to be responsible for the reversal of the Sun's polar field. This process (commonly known as the Babcock-Leighton mechanism) is usually modeled as a near-surface, spatially distributed α-effect in kinematic mean-field dynamo models. However, this formulation leads to a relationship between polar field strength and meridional flow speed which is opposite to that suggested by physical insight and predicted by surface flux-transport simulations. With this in mind, we present an improved double-ring algorithm for modeling the Babcock-Leighton mechanism based on AR eruption, within the framework of an axisymmetric dynamo model. Using surface flux-transport simulations, we first show that an axisymmetric formulation, which is usually invoked in kinematic dynamo models, can reasonably approximate the surface flux dynamics. Finally, we demonstrate that our treatment of the Babcock-Leighton mechanism through double-ring eruption leads to an inverse relationship between polar field strength and meridional flow speed as expected, reconciling the discrepancy between surface flux-transport simulations and kinematic dynamo models.
Yang, Dongxu; Zhang, Huifang; Liu, Yi; Chen, Baozhang; Cai, Zhaonan; Lü, Daren
2017-08-01
Monitoring atmospheric carbon dioxide (CO2) from space-borne state-of-the-art hyperspectral instruments can provide a high-precision global dataset to improve carbon flux estimation and reduce the uncertainty of climate projections. Here, we introduce a carbon flux inversion system for estimating carbon flux with satellite measurements, supported by "The Strategic Priority Research Program of the Chinese Academy of Sciences—Climate Change: Carbon Budget and Relevant Issues". The carbon flux inversion system is composed of two separate parts: the Institute of Atmospheric Physics Carbon Dioxide Retrieval Algorithm for Satellite Remote Sensing (IAPCAS), and CarbonTracker-China (CT-China), developed at the Chinese Academy of Sciences. The Greenhouse gases Observing SATellite (GOSAT) measurements are used in the carbon flux inversion experiment. To improve the quality of the IAPCAS-GOSAT retrieval, we have developed a post-screening and bias correction method, leaving 25%-30% of the data after quality control. Based on these data, the seasonal variation of XCO2 (column-averaged CO2 dry-air mole fraction) is studied, and a strong relation with vegetation cover and population is identified. Then, the IAPCAS-GOSAT XCO2 product is used in carbon flux estimation by CT-China. The net ecosystem CO2 exchange is -0.34 Pg C yr-1 (±0.08 Pg C yr-1), with a large error reduction of 84%, a significant improvement compared with in situ-only inversion.
International Nuclear Information System (INIS)
Shi Xueming; Wu Hongchun; Sun Shouhua; Liu Shuiqing
2003-01-01
An in-core fuel management optimization model based on the genetic algorithm has been established. An encode/decode technique based on assembly positions is presented according to the characteristics of HFETR. Different reproduction strategies have been studied. Expert knowledge and adaptive genetic algorithms are incorporated into the code to obtain optimized loading patterns that can be used in HFETR.
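A position-based encoding like the one this abstract mentions can be illustrated generically: a loading pattern is a permutation assigning assemblies to core positions, and a GA searches the permutation space. The sketch below uses a toy eight-position core and an invented flatness-style fitness, not the HFETR model or its reproduction strategies.

```python
import random

random.seed(1)

# Toy core: 8 positions with importance weights (center -> periphery) and
# 8 assemblies with hypothetical reactivity values.
weights = [4, 3, 3, 2, 2, 1, 1, 0]
reactivity = [5, 1, 4, 2, 8, 3, 7, 6]

def fitness(perm):
    # Toy objective: pairing high-reactivity assemblies with low-weight
    # (peripheral) positions stands in for flattening the power shape.
    return -sum(w * reactivity[a] for w, a in zip(weights, perm))

def mutate(perm):
    # Swap mutation keeps the encoding a valid permutation (each assembly
    # loaded exactly once), which is the point of position-based encoding.
    i, j = random.sample(range(len(perm)), 2)
    child = perm[:]
    child[i], child[j] = child[j], child[i]
    return child

pop = [random.sample(range(8), 8) for _ in range(30)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)           # best patterns first
    elites = pop[:10]
    pop = elites + [mutate(random.choice(elites)) for _ in range(20)]

best = max(pop, key=fitness)
```

Crossover, adaptive operator rates, and expert-knowledge seeding, which the abstract alludes to, would slot into the reproduction step in place of the mutation-only loop shown here.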
Directory of Open Access Journals (Sweden)
Xuanyu Wang
2017-12-01
Full Text Available Terrestrial latent heat flux (LE) is a key component of the global terrestrial water, energy, and carbon exchanges. Accurate estimation of LE from moderate resolution imaging spectroradiometer (MODIS) data remains a major challenge. In this study, we estimated the daily LE for different plant functional types (PFTs) across North America using three machine learning algorithms: artificial neural network (ANN), support vector machine (SVM), and multivariate adaptive regression spline (MARS), driven by MODIS and Modern Era Retrospective Analysis for Research and Applications (MERRA) meteorology data. These three predictive algorithms, which were trained and validated using observed LE over the period 2000–2007, all proved to be accurate. However, ANN outperformed the other two algorithms for the majority of the tested configurations for most PFTs and was the only method that reached 80% precision for LE estimation. We also applied the three machine learning algorithms to MODIS data and MERRA meteorology to map the average annual terrestrial LE of North America during 2002–2004 at a spatial resolution of 0.05°, which proved to be useful for estimating long-term LE over North America.
Feng, Fei; Li, Xianglan; Yao, Yunjun; Liang, Shunlin; Chen, Jiquan; Zhao, Xiang; Jia, Kun; Pintér, Krisztina; McCaughey, J Harry
2016-01-01
Accurate estimation of latent heat flux (LE) based on remote sensing data is critical in characterizing terrestrial ecosystems and modeling land surface processes. Many LE products were released during the past few decades, but their quality might not meet the requirements in terms of data consistency and estimation accuracy. Merging multiple algorithms could be an effective way to improve the quality of existing LE products. In this paper, we present a data integration method based on modified empirical orthogonal function (EOF) analysis to integrate the Moderate Resolution Imaging Spectroradiometer (MODIS) LE product (MOD16) and the Priestley-Taylor LE algorithm of Jet Propulsion Laboratory (PT-JPL) estimate. Twenty-two eddy covariance (EC) sites with LE observation were chosen to evaluate our algorithm, showing that the proposed EOF fusion method was capable of integrating the two satellite data sets with improved consistency and reduced uncertainties. Further efforts were needed to evaluate and improve the proposed algorithm at larger spatial scales and time periods, and over different land cover types.
Indian Academy of Sciences (India)
polynomial) division have been found in Vedic Mathematics, dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming...
A novel robust and efficient algorithm for charge particle tracking in high background flux
International Nuclear Information System (INIS)
Fanelli, C; Cisbani, E; Dotto, A Del
2015-01-01
The high luminosity that will be reached in the new generation of High Energy Particle and Nuclear physics experiments implies high background rates and large tracker occupancy, and therefore represents a new challenge for particle tracking algorithms. For instance, at Jefferson Laboratory (JLab) (VA, USA), one of the most demanding experiments in this respect, performed with a 12 GeV electron beam, is characterized by a luminosity up to 10^39 cm^-2 s^-1. To this end, Gaseous Electron Multiplier (GEM) based trackers are under development for a new spectrometer that will operate at these high rates in Hall A of JLab. Within this context, we developed a new tracking algorithm based on a multistep approach: (i) all hardware information, time and charge, is exploited to minimize the number of hits to associate; (ii) a dedicated Neural Network (NN) has been designed for a fast and efficient association of the hits measured by the GEM detector; (iii) the measurements of the associated hits are further improved in resolution through the application of a Kalman filter and Rauch-Tung-Striebel smoother. The algorithm is presented briefly along with a discussion of the promising first results. (paper)
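Step (iii) above, a Kalman filter followed by Rauch-Tung-Striebel smoothing, can be sketched for a straight-line track measured at equally spaced planes. The geometry, noise level, and two-component state are illustrative stand-ins, not the GEM tracker's actual system model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Straight track x(z) = x0 + slope*z sampled at equally spaced planes,
# measured with Gaussian position noise (illustrative values).
x0, slope = 1.0, 0.5
dz, n, sigma = 1.0, 10, 0.05
z = np.arange(n) * dz
meas = x0 + slope * z + rng.normal(0, sigma, n)

F = np.array([[1.0, dz], [0.0, 1.0]])   # state transition for [x, slope]
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 1e-9 * np.eye(2)                    # negligible process noise (straight track)
R = np.array([[sigma**2]])

# Forward Kalman filter, storing filtered and predicted quantities.
xs, Ps, xps, Pps = [], [], [], []
x, P = np.array([meas[0], 0.0]), np.eye(2)
for k in range(n):
    if k > 0:
        x, P = F @ x, F @ P @ F.T + Q            # predict
    xps.append(x)
    Pps.append(P)
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + (K @ (meas[k] - H @ x)).ravel()      # update with measurement k
    P = (np.eye(2) - K @ H) @ P
    xs.append(x)
    Ps.append(P)

# Rauch-Tung-Striebel backward smoother: refine each filtered estimate
# using all later measurements as well.
xs_s = [None] * n
xs_s[-1] = xs[-1]
P_s = Ps[-1]
for k in range(n - 2, -1, -1):
    C = Ps[k] @ F.T @ np.linalg.inv(Pps[k + 1])
    xs_s[k] = xs[k] + C @ (xs_s[k + 1] - xps[k + 1])
    P_s = Ps[k] + C @ (P_s - Pps[k + 1]) @ C.T
```

The smoothed state at the first plane then reflects every hit on the track, which is why the smoother improves the resolution of the associated hits compared with the forward filter alone.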
International Nuclear Information System (INIS)
Silva, C.F. da.
1979-09-01
A new formulation of the pseudocontinuous synthesis algorithm is applied to solve the static three-dimensional two-group diffusion equations. The new method avoids ambiguities regarding interface conditions, which are inherent to the differential formulation, by resorting to the finite difference version of the differential equations involved. A considerable number of input/output options, possible core configurations and control rod positionings are implemented, resulting in a very flexible as well as economical code to compute 3D fluxes, power density and reactivities of PWR reactors with partially inserted control rods. The performance of this new code is checked against the IAEA 3D benchmark problem, and results show that SINT3D yields comparable accuracy with much less computing time and memory than conventional 3D finite difference codes. (Author)
Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang
2014-09-04
Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of a long-range railway track or oil pipeline, a fast and effective defect profile estimation method based on a multi-power affine projection algorithm (MAPA) is proposed, in which the depth of a sampling point is related not only to the MFL signals before it, but also to the ones after it, and all of the sampling points related to one point appear as series or multi-power terms. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter and estimating a defect profile with the MAPA filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method exhibits high speed while keeping the estimated profiles close to the desired ones in a noisy environment, thereby meeting the demand of accurate online inspection.
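The affine projection family underlying the proposed MAPA can be sketched as follows. This is a generic, simplified APA-style update, not the paper's multi-power variant; the step size, the projection order P implied by the length of `X`, and the diagonal (NLMS-like) approximation of the projection matrix are all illustrative assumptions:

```python
# Sketch of a basic affine-projection-style adaptive filter update.
# w: weight list; X: list of the P most recent input vectors (each of
# length len(w)); d: the corresponding desired outputs.

def apa_update(w, X, d, mu=0.5, eps=1e-6):
    """One update over the P most recent regressors."""
    # a-priori errors for each of the P regressors
    e = [di - sum(wi * xi for wi, xi in zip(w, x)) for x, di in zip(X, d)]
    # normalized correction per regressor; a full APA would invert
    # (X X^T + eps I), here approximated by per-vector normalization
    for x, ei in zip(X, e):
        norm = sum(xi * xi for xi in x) + eps
        for i in range(len(w)):
            w[i] += mu * ei * x[i] / norm
    return w
```

Reusing several past regressors per update is what gives affine projection methods faster convergence than plain LMS on correlated inputs, at a modest extra cost per sample.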
Indian Academy of Sciences (India)
to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...
Indian Academy of Sciences (India)
ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...
Energy Technology Data Exchange (ETDEWEB)
Mahowald, Natalie [Cornell Univ., Ithaca, NY (United States)
2016-11-29
Soils in natural and managed ecosystems and wetlands are well known sources of methane, nitrous oxides, and reactive nitrogen gases, but the magnitudes of gas flux to the atmosphere are still poorly constrained. Thus, the reasons for the large increases in atmospheric concentrations of methane and nitrous oxide since the preindustrial time period are not well understood. The low atmospheric concentrations of methane and nitrous oxide, despite being more potent greenhouse gases than carbon dioxide, complicate empirical studies to provide explanations. In addition to climate concerns, the emissions of reactive nitrogen gases from soils are important to the changing nitrogen balance in the earth system, subject to human management, and may change substantially in the future. Thus improved modeling of the emission fluxes of these species from the land surface is important. Currently, there are emission modules for methane and some nitrogen species in the Community Earth System Model’s Community Land Model (CLM-ME/N); however, there are large uncertainties and problems in the simulations, resulting in coarse estimates. In this proposal, we seek to improve these emission modules by combining state-of-the-art process modules for emissions, available data, and new optimization methods. In earth science problems, we often have substantial data and knowledge of processes in disparate systems, and thus we need to combine data and a general process level understanding into a model for projections of future climate that are as accurate as possible. The best methodologies for optimization of parameters in earth system models are still being developed. In this proposal we will develop and apply surrogate algorithms that a) were especially developed for computationally expensive simulations like CLM-ME/N models; b) were (in the earlier surrogate optimization Stochastic RBF) demonstrated to perform very well on computationally expensive complex partial differential equations in
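Surrogate optimization of the kind referenced here (e.g. Stochastic RBF) replaces expensive model evaluations with a cheap interpolant that is refitted as new evaluations arrive. A minimal one-dimensional sketch with Gaussian RBFs and a small ridge term for numerical stability; all parameter choices are illustrative and not those of the CLM-ME/N work:

```python
import math
import random

def rbf_fit(xs, ys, gamma=1.0):
    """Fit Gaussian-RBF weights by naive Gaussian elimination (small n)."""
    n = len(xs)
    A = [[math.exp(-gamma * (xs[i] - xs[j]) ** 2) for j in range(n)]
         for i in range(n)]
    for i in range(n):
        A[i][i] += 1e-8                       # ridge for numerical stability
    b = ys[:]
    for c in range(n):                        # elimination w/ partial pivoting
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]; b[c], b[p] = b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        w[r] = (b[r] - sum(A[r][k] * w[k] for k in range(r + 1, n))) / A[r][r]
    return w

def surrogate_minimize(f, lo, hi, n_init=5, iters=10, seed=0):
    """Minimize an 'expensive' 1-D f by repeatedly refitting a surrogate."""
    rng = random.Random(seed)
    xs = [lo + (hi - lo) * i / (n_init - 1) for i in range(n_init)]
    ys = [f(x) for x in xs]
    for _ in range(iters):
        w = rbf_fit(xs, ys)
        cands = [lo + (hi - lo) * rng.random() for _ in range(200)]
        def s(x):                             # cheap surrogate prediction
            return sum(wi * math.exp(-(x - xi) ** 2) for wi, xi in zip(w, xs))
        x_new = min(cands, key=s)             # evaluate f only at the best
        xs.append(x_new); ys.append(f(x_new)) # candidate under the surrogate
    return min(zip(ys, xs))
```

The expensive function is called only n_init + iters times; everything else is done on the cheap surrogate, which is the point of such methods for simulators like CLM-ME/N.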
Indian Academy of Sciences (India)
algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).
Indian Academy of Sciences (India)
algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...
Shuen, Jian-Shun; Liou, Meng-Sing; Van Leer, Bram
1989-01-01
The extension of the known flux-vector and flux-difference splittings to real gases via rigorous mathematical procedures is demonstrated. Formulations of both equilibrium and finite-rate chemistry for real-gas flows are described, with emphasis on derivations of finite-rate chemistry. Split-flux formulas from other authors are examined. A second-order upwind-based TVD scheme is adopted to eliminate oscillations and to obtain a sharp representation of discontinuities.
Indian Academy of Sciences (India)
will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to B using C as auxiliary rod. • move _disk (A, C);. (No + l)th disk is moved from A to C directly ...
International Nuclear Information System (INIS)
Vasudevan, M.; Arumugam, R.; Paramasivam, S.
2006-01-01
Field oriented control (FOC) and direct torque control (DTC) are becoming the industrial standards for induction motor torque and flux control. This paper aims to give a contribution to a detailed comparison between these two control techniques, emphasizing their advantages and disadvantages. The performance of the two control schemes is evaluated in terms of torque and flux ripple and their transient response to step variations of the torque command. Moreover, a new torque and flux ripple minimization technique is also proposed to improve the performance of the DTC drive. The analysis is presented on the basis of experimental results.
Directory of Open Access Journals (Sweden)
B. Langford
2017-12-01
Full Text Available Biogenic emission algorithms predict that oak forests account for ∼ 70 % of the total European isoprene budget. Yet the isoprene emission potentials (IEPs) that underpin these model estimates are calculated from a very limited number of leaf-level observations and hence are highly uncertain. Increasingly, micrometeorological techniques such as eddy covariance are used to measure whole-canopy fluxes directly, from which isoprene emission potentials can be calculated. Here, we review five observational datasets of isoprene fluxes from a range of oak forests in the UK, Italy and France. We outline procedures to correct the measured net fluxes for losses from deposition and chemical flux divergence, which were found to be on the order of 5–8 and 4–5 %, respectively. The corrected observational data were used to derive isoprene emission potentials at each site in a two-step process. Firstly, six commonly used emission algorithms were inverted to back out time series of isoprene emission potential, and then an average isoprene emission potential was calculated for each site with an associated uncertainty. We used these data to assess how the derived emission potentials change depending upon the specific emission algorithm used and, importantly, on the particular approach adopted to derive an average site-specific emission potential. Our results show that isoprene emission potentials can vary by up to a factor of 4 depending on the specific algorithm used and whether or not it is used in a big-leaf or canopy environment (CE) model format. When using the same algorithm, the calculated average isoprene emission potential was found to vary by as much as 34 % depending on how the average was derived. Using a consistent approach with version 2.1 of the Model for Emissions of Gases and Aerosols from Nature (MEGAN), we derive new ecosystem-scale isoprene emission potentials for the five measurement sites: Alice Holt, UK (10 500 ± 2500
Langford, Ben; Cash, James; Acton, W. Joe F.; Valach, Amy C.; Hewitt, C. Nicholas; Fares, Silvano; Goded, Ignacio; Gruening, Carsten; House, Emily; Kalogridis, Athina-Cerise; Gros, Valérie; Schafers, Richard; Thomas, Rick; Broadmeadow, Mark; Nemitz, Eiko
2017-12-01
Biogenic emission algorithms predict that oak forests account for ˜ 70 % of the total European isoprene budget. Yet the isoprene emission potentials (IEPs) that underpin these model estimates are calculated from a very limited number of leaf-level observations and hence are highly uncertain. Increasingly, micrometeorological techniques such as eddy covariance are used to measure whole-canopy fluxes directly, from which isoprene emission potentials can be calculated. Here, we review five observational datasets of isoprene fluxes from a range of oak forests in the UK, Italy and France. We outline procedures to correct the measured net fluxes for losses from deposition and chemical flux divergence, which were found to be on the order of 5-8 and 4-5 %, respectively. The corrected observational data were used to derive isoprene emission potentials at each site in a two-step process. Firstly, six commonly used emission algorithms were inverted to back out time series of isoprene emission potential, and then an average isoprene emission potential was calculated for each site with an associated uncertainty. We used these data to assess how the derived emission potentials change depending upon the specific emission algorithm used and, importantly, on the particular approach adopted to derive an average site-specific emission potential. Our results show that isoprene emission potentials can vary by up to a factor of 4 depending on the specific algorithm used and whether or not it is used in a big-leaf or canopy environment (CE) model format. When using the same algorithm, the calculated average isoprene emission potential was found to vary by as much as 34 % depending on how the average was derived. Using a consistent approach with version 2.1 of the Model for Emissions of Gases and Aerosols from Nature (MEGAN), we derive new ecosystem-scale isoprene emission potentials for the five measurement sites: Alice Holt, UK (10 500 ± 2500 µg m-2 h-1); Bosco Fontana, Italy (1610
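The inversion step described above - backing an emission potential out of measured fluxes - can be illustrated with the light and temperature activity factor of Guenther et al. (1993), one of the simpler algorithms in the family the authors compare. The sample temperatures, PAR values, and the synthetic "true" emission potential in the test are illustrative, not data from these sites:

```python
import math

def g93_gamma(T, L):
    """Guenther et al. (1993) activity factor for temperature T (K)
    and PAR L (umol m^-2 s^-1), using the standard published constants."""
    R, Ts, Tm = 8.314, 303.0, 314.0
    ct1, ct2, alpha, cl1 = 95000.0, 230000.0, 0.0027, 1.066
    g_l = alpha * cl1 * L / math.sqrt(1.0 + alpha ** 2 * L ** 2)
    g_t = (math.exp(ct1 * (T - Ts) / (R * Ts * T))
           / (1.0 + math.exp(ct2 * (T - Tm) / (R * Ts * T))))
    return g_l * g_t

def invert_iep(fluxes, temps, pars):
    """Back an emission potential out of each flux sample, then average.
    Samples with negligible activity are skipped to avoid dividing by ~0."""
    gammas = [g93_gamma(t, l) for t, l in zip(temps, pars)]
    ieps = [f / g for f, g in zip(fluxes, gammas) if g > 1e-6]
    return sum(ieps) / len(ieps)
```

As the abstract notes, the averaging choice matters: averaging the per-sample IEPs (as here) and regressing flux against gamma generally give different answers on noisy data.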
Sedlar, F.; Turpin, E.; Kerkez, B.
2014-12-01
As megacities around the world continue to develop at breakneck speed, future development, investment, and social wellbeing are threatened by a number of environmental and social factors. Chief among these is frequent, persistent, and unpredictable urban flooding. Jakarta, Indonesia, with a population of 28 million, is a prime example of a city plagued by such flooding. Although Jakarta has ample hydraulic infrastructure already in place, with more being constructed, the increasing severity of the flooding it experiences stems not from a lack of hydraulic infrastructure but rather from a failure of the existing infrastructure. As was demonstrated during the most recent floods in Jakarta, the infrastructure failure is often the result of excessive amounts of trash in the flood canals. This trash clogs pumps and reduces the overall system capacity. Despite this critical weakness of flood control in Jakarta, no data exist on the overall amount of trash in the flood canals, much less on how it varies temporally and spatially. The recent availability of low-cost photography provides a means to obtain such data. Time-lapse photography post-processed with computer vision algorithms yields a low-cost, remote, and automatic solution for measuring trash fluxes. When combined with the measurement of key hydrological parameters, a thorough understanding of the relationship between trash fluxes and the hydrology of massive urban areas becomes possible. This work examines algorithm development, quantification of trash parameters, and hydrological measurements, followed by data assimilation into existing hydraulic and hydrological models of Jakarta. The insight afforded by such an approach allows for more efficient operation of hydraulic infrastructure, knowledge of when and where critical levels of trash originate, and the opportunity for community outreach - which is ultimately needed to reduce the trash in the flood canals of Jakarta and megacities around the world.
Directory of Open Access Journals (Sweden)
W. Su
2017-10-01
When both footprint size and cloud property (cloud fraction and optical depth) differences are considered, the uncertainties of monthly gridded NPP CERES SW flux can be up to 20 W m−2 in the Arctic regions, where cloud optical depth retrievals from VIIRS differ significantly from MODIS. The global monthly mean instantaneous SW flux from simulated NPP CERES has a high bias of 1.1 W m−2, and the RMS error increases to 5.2 W m−2. LW flux shows less sensitivity to cloud property differences than SW flux, with uncertainties of about 2 W m−2 in the monthly gridded LW flux, and the RMS errors of global monthly mean daytime and nighttime fluxes increase only slightly. These results highlight the importance of consistent cloud retrieval algorithms for maintaining the accuracy and stability of the CERES climate data record.
DEFF Research Database (Denmark)
Mahnke, Martina; Uprichard, Emma
2014-01-01
Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...
Critical flux determination by flux-stepping
DEFF Research Database (Denmark)
Beier, Søren; Jonsson, Gunnar Eigil
2010-01-01
In membrane filtration related scientific literature, step-by-step determined critical fluxes are often reported. Using a dynamic microfiltration device, it is shown that critical fluxes determined from two different flux-stepping methods are dependent upon operational parameters such as step length, step height, and flux start level. Filtering 8 kg/m³ yeast cell suspensions through a vibrating 0.45 × 10⁻⁶ m pore size microfiltration hollow fiber module, critical fluxes from 5.6 × 10⁻⁶ to 1.2 × 10⁻⁵ m/s have been measured using various step lengths from 300 to 1200 seconds. Thus, such values are more or less useless in themselves as critical flux predictors, and constant-flux verification experiments have to be conducted to check whether the determined critical fluxes can predict sustainable flux regimes. However, it is shown that using the step-by-step predicted critical fluxes as start...
De Götzen , Amalia; Mion , Luca; Tache , Olivier
2007-01-01
International audience; We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
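As a concrete illustration of the basic concepts mentioned (selection, crossover, mutation), here is a minimal bit-string genetic algorithm. The population size, mutation rate, truncation selection, and two-individual elitism are arbitrary illustrative choices, not a specific published scheme:

```python
import random

def genetic_max(fitness, n_bits=16, pop=30, gens=60, pmut=0.02, seed=1):
    """Maximize fitness over bit strings via selection/crossover/mutation."""
    rng = random.Random(seed)
    P = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(P, key=fitness, reverse=True)
        P = scored[:2]                                 # elitism: keep best two
        while len(P) < pop:
            a, b = rng.sample(scored[:pop // 2], 2)    # truncation selection
            cut = rng.randrange(1, n_bits)             # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ 1 if rng.random() < pmut else g for g in child]
            P.append(child)
    return max(P, key=fitness)
```

On the classic OneMax problem (fitness = number of 1-bits), this converges to an all-ones string within a few dozen generations, which makes it a convenient smoke test for the operators.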
OpenFLUX: efficient modelling software for 13C-based metabolic flux analysis
Directory of Open Access Journals (Sweden)
Nielsen Lars K
2009-05-01
Full Text Available Abstract Background The quantitative analysis of metabolic fluxes, i.e., in vivo activities of intracellular enzymes and pathways, provides key information on biological systems in systems biology and metabolic engineering. It is based on a comprehensive approach combining (i) tracer cultivation on 13C substrates, (ii) 13C labelling analysis by mass spectrometry and (iii) mathematical modelling for experimental design, data processing, flux calculation and statistics. Whereas the cultivation and the analytical parts are fairly advanced, a lack of appropriate modelling software solutions for all modelling aspects in flux studies is limiting the application of metabolic flux analysis. Results We have developed OpenFLUX as a user friendly, yet flexible software application for small and large scale 13C metabolic flux analysis. The application is based on the new Elementary Metabolite Unit (EMU) framework, significantly enhancing computation speed for flux calculation. From simple notation of metabolic reaction networks defined in a spreadsheet, the OpenFLUX parser automatically generates MATLAB-readable metabolite and isotopomer balances, thus strongly facilitating model creation. The model can be used to perform experimental design, parameter estimation and sensitivity analysis either using the built-in gradient-based search or Monte Carlo algorithms or in user-defined algorithms. Exemplified for a microbial flux study with 71 reactions, 8 free flux parameters and mass isotopomer distributions of 10 metabolites, OpenFLUX allowed us to automatically compile the EMU-based model from an Excel file containing metabolic reactions and carbon transfer mechanisms, showing its user-friendliness. It reliably reproduced the published data, and optimum flux distributions for the network under study were found quickly. Conclusion We have developed a fast, accurate application to perform steady-state 13C metabolic flux analysis. OpenFLUX will strongly facilitate and
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic
Validating modeled turbulent heat fluxes across large freshwater surfaces
Lofgren, B. M.; Fujisaki-Manome, A.; Gronewold, A.; Anderson, E. J.; Fitzpatrick, L.; Blanken, P.; Spence, C.; Lenters, J. D.; Xiao, C.; Charusambot, U.
2017-12-01
Turbulent fluxes of latent and sensible heat are important physical processes that influence the energy and water budgets of the Great Lakes. Validation and improvement of bulk flux algorithms to simulate these turbulent heat fluxes are critical for accurate prediction of hydrodynamics, water levels, weather, and climate over the region. Here we consider five heat flux algorithms from several model systems - the Finite-Volume Community Ocean Model (FVCOM), the Weather Research and Forecasting model, and the Large Lake Thermodynamics Model - which are used in research and operational environments and concentrate on different aspects of the Great Lakes' physical system, but interface at the lake surface. The heat flux algorithms were isolated from each model and driven by meteorological data from over-lake stations in the Great Lakes Evaporation Network. The simulation results were compared with eddy covariance flux measurements at the same stations. All models show the capacity to capture the seasonal cycle of the turbulent heat fluxes. Overall, the Coupled Ocean Atmosphere Response Experiment (COARE) algorithm in FVCOM has the best agreement with eddy covariance measurements. Simulations with the other four algorithms are overall improved by updating the parameterization of the roughness length scales of temperature and humidity. Agreement between modelled and observed fluxes varied notably with the geographical locations of the stations. For example, at the Long Point station in Lake Erie, the observed fluxes are likely influenced by the upwind land surface, while the simulations do not account for the land surface influence, and therefore the agreement there is generally worse.
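At their core, bulk flux algorithms parameterize the turbulent fluxes from mean meteorological quantities. A deliberately simplified constant-coefficient sketch follows; real algorithms such as COARE make the transfer coefficients depend on atmospheric stability and the roughness lengths discussed above, and all numeric defaults here are illustrative:

```python
def bulk_fluxes(u, t_sfc, t_air, q_sfc, q_air,
                ch=1.5e-3, ce=1.5e-3, rho=1.2, cp=1004.0, lv=2.5e6):
    """Bulk aerodynamic sensible (H) and latent (LE) heat fluxes in W m^-2.

    u: wind speed (m/s); t_*: surface/air temperature (K);
    q_*: surface/air specific humidity (kg/kg).
    ch, ce: constant transfer coefficients (illustrative; stability- and
    roughness-dependent in algorithms like COARE)."""
    H = rho * cp * ch * u * (t_sfc - t_air)    # sensible heat
    LE = rho * lv * ce * u * (q_sfc - q_air)   # latent heat (evaporation)
    return H, LE
```

Positive values denote upward (lake-to-atmosphere) fluxes, so a lake warmer and moister than the overlying air loses heat by both pathways.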
Splitting of inviscid fluxes for real gases
Liou, Meng-Sing; Van Leer, Bram; Shuen, Jian-Shun
1990-01-01
Flux-vector and flux-difference splittings for the inviscid terms of the compressible flow equations are derived under the assumption of a general equation of state for a real gas in equilibrium. No unnecessary assumptions or approximations for auxiliary quantities are introduced. The formulas derived include several particular cases known for ideal gases and readily apply to curvilinear coordinates. Applications of the formulas in a TVD algorithm to one-dimensional shock-tube and nozzle problems show their quality and robustness.
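For the ideal-gas special case mentioned, the classical Steger-Warming flux-vector splitting can be written compactly. By construction the split fluxes sum to the full Euler flux, and in supersonic flow one of them vanishes; the sketch below shows the textbook one-dimensional ideal-gas form, not the real-gas generalization of the paper:

```python
def steger_warming(rho, u, p, gamma=1.4, sign=+1):
    """Steger-Warming split flux F+/- for the 1-D Euler equations,
    ideal gas; sign=+1 gives F+, sign=-1 gives F-."""
    a = (gamma * p / rho) ** 0.5                    # speed of sound
    lam = [u, u + a, u - a]                         # eigenvalues
    lp = [0.5 * (l + sign * abs(l)) for l in lam]   # lambda± = (λ ± |λ|)/2
    c = rho / (2 * gamma)
    f1 = c * (2 * (gamma - 1) * lp[0] + lp[1] + lp[2])
    f2 = c * (2 * (gamma - 1) * lp[0] * u
              + lp[1] * (u + a) + lp[2] * (u - a))
    f3 = c * ((gamma - 1) * lp[0] * u * u
              + 0.5 * lp[1] * (u + a) ** 2
              + 0.5 * lp[2] * (u - a) ** 2
              + (3 - gamma) * (lp[1] + lp[2]) * a * a / (2 * (gamma - 1)))
    return f1, f2, f3
```

In an upwind scheme, F+ is evaluated from the left state and F- from the right state at each interface, which is what provides the directional dissipation used by the TVD algorithm cited above.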
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
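Two of the fundamental algorithms named above are short enough to state in full. These are the standard textbook versions; the book itself implements them in C++, and Python is used here only for brevity:

```python
def gcd(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

def sieve(n):
    """Sieve of Eratosthenes: all primes up to and including n."""
    is_p = [True] * (n + 1)
    is_p[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if is_p[i]:
            for j in range(i * i, n + 1, i):   # strike out multiples of i
                is_p[j] = False
    return [i for i, prime in enumerate(is_p) if prime]
```

Both illustrate the course's theme: a few lines of code, but a nontrivial running-time analysis (logarithmic for the Euclidean algorithm, O(n log log n) for the sieve).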
Tel, G.
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of
Scaling-up of CO2 fluxes to assess carbon sequestration in rangelands of Central Asia
Bruce K. Wylie; Tagir G. Gilmanov; Douglas A. Johnson; Nicanor Z. Saliendra; Larry L. Tieszen; Ruth Anne F. Doyle; Emilio A. Laca
2006-01-01
Flux towers provide temporal quantification of local carbon dynamics at specific sites. The number and distribution of flux towers, however, are generally inadequate to quantify carbon fluxes across a landscape or ecoregion. Thus, scaling up of flux tower measurements through use of algorithms developed from remote sensing and GIS data is needed for spatial...
Hildebrandt, A. F.; Elleman, D. D.; Whitmore, F. C. (Inventor)
1966-01-01
A magnetic flux pump is described for increasing the intensity of a magnetic field by transferring flux from one location to the magnetic field. The device includes a pair of communicating cavities formed in a block of superconducting material, and a piston for displacing the trapped magnetic flux into the secondary cavity producing a field having an intense flux density.
Radon flux measurement methodologies
International Nuclear Information System (INIS)
Nielson, K.K.; Rogers, V.C.
1984-01-01
Five methods for measuring radon fluxes are evaluated: the accumulator can, a small charcoal sampler, a large-area charcoal sampler, the "Big Louie" charcoal sampler, and the charcoal tent sampler. An experimental comparison of the five flux measurement techniques was also conducted. Excellent agreement was obtained between the measured radon fluxes and the fluxes predicted from radium and emanation measurements
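The accumulator-can method listed first infers the flux from the early-time concentration growth rate in a closed volume, J = (V/A)·dC/dt. A sketch of that calculation with the slope taken from a least-squares line; the validity window, leakage, and back-diffusion corrections of the real method are omitted, and the test values are synthetic:

```python
def accumulator_flux(times, concs, volume, area):
    """Radon flux density J = (V/A) * dC/dt (SI units: s, Bq/m^3, m^3, m^2),
    with dC/dt taken from a least-squares line through the early-time
    concentration samples, before leakage and back-diffusion matter."""
    n = len(times)
    tm = sum(times) / n
    cm = sum(concs) / n
    slope = (sum((t - tm) * (c - cm) for t, c in zip(times, concs))
             / sum((t - tm) ** 2 for t in times))     # dC/dt, Bq m^-3 s^-1
    return volume / area * slope                      # Bq m^-2 s^-1
```

Charcoal samplers, by contrast, integrate the flux over the exposure period and are read out by gamma counting, which is why the paper can cross-check the two families against radium and emanation measurements.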
Fast flux module detection using matroid theory.
Reimers, Arne C; Bruggeman, Frank J; Olivier, Brett G; Stougie, Leen
2015-05-01
Flux balance analysis (FBA) is one of the most often applied methods on genome-scale metabolic networks. Although FBA uniquely determines the optimal yield, the pathway that achieves this is usually not unique. The analysis of the optimal-yield flux space has been an open challenge. Flux variability analysis is only capturing some properties of the flux space, while elementary mode analysis is intractable due to the enormous number of elementary modes. However, it has been found by Kelk et al. (2012) that the space of optimal-yield fluxes decomposes into flux modules. These decompositions allow a much easier but still comprehensive analysis of the optimal-yield flux space. Using the mathematical definition of module introduced by Müller and Bockmayr (2013b), we discovered useful connections to matroid theory, through which efficient algorithms enable us to compute the decomposition into modules in a few seconds for genome-scale networks. Using that every module can be represented by one reaction that represents its function, in this article, we also present a method that uses this decomposition to visualize the interplay of modules. We expect the new method to replace flux variability analysis in the pipelines for metabolic networks.
Surface Flux Modeling for Air Quality Applications
Directory of Open Access Journals (Sweden)
Limei Ran
2011-08-01
Full Text Available For many gases and aerosols, dry deposition is an important sink of atmospheric mass. Dry deposition fluxes are also important sources of pollutants to terrestrial and aquatic ecosystems. The surface fluxes of some gases, such as ammonia, mercury, and certain volatile organic compounds, can be upward into the air as well as downward to the surface and therefore should be modeled as bi-directional fluxes. Model parameterizations of dry deposition in air quality models have been represented by simple electrical resistance analogs for almost 30 years. Uncertainties in surface flux modeling in global to mesoscale models are being slowly reduced as more field measurements provide constraints on parameterizations. However, at the same time, more chemical species are being added to surface flux models as air quality models are expanded to include more complex chemistry and are being applied to a wider array of environmental issues. Since surface flux measurements of many of these chemicals are still lacking, resistances are usually parameterized using simple scaling by water or lipid solubility and reactivity. Advances in recent years have included bi-directional flux algorithms that require a shift from pre-computation of deposition velocities to fully integrated surface flux calculations within air quality models. Improved modeling of the stomatal component of chemical surface fluxes has resulted from improved evapotranspiration modeling in land surface models and closer integration between meteorology and air quality models. Satellite-derived land use characterization and vegetation products and indices are improving model representation of spatial and temporal variations in surface flux processes. This review describes the current state of chemical dry deposition modeling, recent progress in bi-directional flux modeling, synergistic model development research with field measurements, and coupling with meteorological land surface models.
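The electrical resistance analog referred to above computes a deposition velocity from aerodynamic (Ra), quasi-laminar boundary layer (Rb), and surface (Rc) resistances in series. A minimal sketch; the bi-directional compensation-point extensions discussed in the review are not shown, and the resistance values in the test are illustrative:

```python
def deposition_velocity(ra, rb, rc):
    """Dry-deposition velocity from the series resistance analog:
    vd = 1 / (Ra + Rb + Rc), resistances in s m^-1, vd in m s^-1."""
    return 1.0 / (ra + rb + rc)

def dry_deposition_flux(conc, ra, rb, rc):
    """Deposition flux F = -vd * C; negative = loss to the surface.
    conc in e.g. ug m^-3 gives F in ug m^-2 s^-1."""
    return -deposition_velocity(ra, rb, rc) * conc
```

Because the resistances add in series, whichever term is largest controls the flux, which is why the surface resistance Rc (the hardest to measure) dominates the uncertainty for many species.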
Ruzmaikin, A.
1997-01-01
Observations show that newly emerging flux tends to appear on the Solar surface at sites where there is flux already. This results in clustering of solar activity. Standard dynamo theories do not predict this effect.
Adaptive discrete-ordinates algorithms and strategies
International Nuclear Information System (INIS)
Stone, J.C.; Adams, M.L.
2005-01-01
We present our latest algorithms and strategies for adaptively refined discrete-ordinates quadrature sets. In our basic strategy, which we apply here in two-dimensional Cartesian geometry, the spatial domain is divided into regions. Each region has its own quadrature set, which is adapted to the region's angular flux. Our algorithms add a 'test' direction to the quadrature set if the angular flux calculated at that direction differs by more than a user-specified tolerance from the angular flux interpolated from other directions. Different algorithms have different prescriptions for the method of interpolation and/or choice of test directions and/or prescriptions for quadrature weights. We discuss three different algorithms of different interpolation orders. We demonstrate through numerical results that each algorithm is capable of generating solutions with negligible angular discretization error. This includes elimination of ray effects. We demonstrate that all of our algorithms achieve a given level of error with far fewer unknowns than does a standard quadrature set applied to an entire problem. To address a potential issue with other algorithms, we present one algorithm that retains exact integration of high-order spherical-harmonics functions, no matter how much local refinement takes place. To address another potential issue, we demonstrate that all of our methods conserve partial currents across interfaces where quadrature sets change. We conclude that our approach is extremely promising for solving the long-standing problem of angular discretization error in multidimensional transport problems. (authors)
International Nuclear Information System (INIS)
Madhavi, V.; Phatak, P.R.; Bahadur, C.; Bayala, A.K.; Jakati, R.K.; Sathian, V.
2003-01-01
Full text: A compact size neutron flux monitor has been developed incorporating standard boards developed for smart radiation monitors. The sensitivity of the monitors is 0.4 cps/nv. It has been tested up to a flux of 2075 nv with standard neutron sources. It shows convincing results even in high flux areas, such as 6 m away from the accelerator at RMC (Parel), for 10⁶/10⁷ nv. These monitors have a local and remote display, an alarm function with potential-free contacts for centralized control, and additional provision of connectivity via RS485/Ethernet. This paper describes the construction, working and results of the above flux monitor
International Nuclear Information System (INIS)
Creutz, M.
1987-11-01
A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updating promise to reduce this growth to V^(4/3)
Hu, T C
2002-01-01
Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9
Directory of Open Access Journals (Sweden)
Anna Bourmistrova
2011-02-01
Full Text Available The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increasing forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
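The kinematic condition described - placing the vehicle's instantaneous center of rotation at a chosen point - reduces, in a bicycle-model approximation, to two arctangent relations, one per axle. This is a sketch of that geometry only, not the paper's full dynamic algorithm; the axle distances and target point in the test are illustrative:

```python
import math

def steer_angles_4ws(a, b, xc, R):
    """Kinematic front/rear steering angles (rad) placing the vehicle's
    instantaneous center of rotation at longitudinal station xc (m, from
    the CG, positive forward) and lateral distance R (m).

    a, b: distances of the front/rear axles from the CG (bicycle model).
    Each wheel's velocity must be perpendicular to its radius vector to
    the center, giving tan(delta) = (x_axle - xc) / R."""
    df = math.atan2(a - xc, R)       # front wheels
    dr = math.atan2(-b - xc, R)      # rear wheels (opposite sign: 4WS)
    return df, dr
```

Setting xc = -b (center level with the rear axle) recovers the TWS/Ackermann special case with zero rear steer, which is a convenient sanity check on the geometry.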
Energy Technology Data Exchange (ETDEWEB)
Stanev, Todor
2001-05-01
We discuss the primary cosmic ray flux from the point of view of particle interactions and production of atmospheric neutrinos. The overall normalization of the cosmic ray flux and its time variations and site dependence are major ingredients of the atmospheric neutrino predictions and the basis for the derivation of the neutrino oscillation parameters.
Flux cutting in superconductors
International Nuclear Information System (INIS)
Campbell, A M
2011-01-01
This paper describes experiments and theories of flux cutting in superconductors. The use of the flux line picture in free space is discussed. In superconductors cutting can either be by means of flux at an angle to other layers of flux, as in longitudinal current experiments, or due to shearing of the vortex lattice as in grain boundaries in YBCO. Experiments on longitudinal currents can be interpreted in terms of flux rings penetrating axial lines. More physical models of flux cutting are discussed but all predict much larger flux cutting forces than are observed. Also, cutting is occurring at angles between vortices of about one millidegree which is hard to explain. The double critical state model and its developments are discussed in relation to experiments on crossed and rotating fields. A new experiment suggested by Clem gives more direct information. It shows that an elliptical yield surface of the critical state works well, but none of the theoretical proposals for determining the direction of E are universally applicable. It appears that, as soon as any flux flow takes place, cutting also occurs. The conclusion is that new theories are required. (perspective)
Heat flux microsensor measurements
Terrell, J. P.; Hager, J. M.; Onishi, S.; Diller, T. E.
1992-01-01
A thin-film heat flux sensor has been fabricated on a stainless steel substrate. The thermocouple elements of the heat flux sensor were nickel and nichrome, and the resistance temperature sensor was platinum. The completed heat flux microsensor was calibrated at the AEDC radiation facility. The gage output was linear with heat flux, with no apparent temperature effect on sensitivity. The gage was used for heat flux measurements at the NASA Langley Vitiated Air Test Facility. Vitiated air was expanded to Mach 3.0 and hydrogen fuel was injected. Measurements were made on the wall of a diverging duct downstream of the injector during all stages of the hydrogen combustion tests. Because the wall and the gage were not actively cooled, the wall temperature reached over 1000 C (1900 F) during the most severe test.
Evaluation of NASA's Carbon Monitoring System (CMS) Flux Pilot: Terrestrial CO2 Fluxes
Fisher, J. B.; Polhamus, A.; Bowman, K. W.; Collatz, G. J.; Potter, C. S.; Lee, M.; Liu, J.; Jung, M.; Reichstein, M.
2011-12-01
NASA's Carbon Monitoring System (CMS) flux pilot project combines NASA's Earth System models in land, ocean and atmosphere to track surface CO2 fluxes. The system is constrained by atmospheric measurements of XCO2 from the Japanese GOSAT satellite, giving a "big picture" view of total CO2 in Earth's atmosphere. Combining two land models (CASA-Ames and CASA-GFED), two ocean models (ECCO2 and NOBM) and two atmospheric chemistry and inversion models (GEOS-5 and GEOS-Chem), the system brings together the stand-alone component models of the Earth System, all of which are run diagnostically constrained by a multitude of other remotely sensed data. Here, we evaluate the biospheric land surface CO2 fluxes (i.e., net ecosystem exchange, NEE) as estimated from the atmospheric flux inversion. We compare against the prior bottom-up estimates (e.g., the CASA models) as well. Our evaluation dataset is the independently derived global wall-to-wall MPI-BGC product, which uses a machine learning algorithm and model tree ensemble to "scale-up" a network of in situ CO2 flux measurements from 253 globally-distributed sites in the FLUXNET network. The measurements are based on the eddy covariance method, which uses observations of co-varying fluxes of CO2 (and water and energy) from instruments on towers extending above ecosystem canopies; the towers integrate fluxes over large spatial areas (~1 km2). We present global maps of CO2 fluxes and differences between products, summaries of fluxes by TRANSCOM region, country, latitude, and biome type, and assess the time series, including timing of minimum and maximum fluxes. This evaluation shows both where the CMS is performing well, and where improvements should be directed in further work.
Software applications for flux balance analysis.
Lakshmanan, Meiyappan; Koh, Geoffrey; Chung, Bevan K S; Lee, Dong-Yup
2014-01-01
Flux balance analysis (FBA) is a widely used computational method for characterizing and engineering intrinsic cellular metabolism. The increasing number of its successful applications and growing popularity are possibly attributable to the availability of specific software tools for FBA. Each tool has its unique features and limitations with respect to operational environment, user interface and supported analysis algorithms. Presented herein is an in-depth evaluation of currently available FBA applications, focusing mainly on usability, functionality, graphical representation and interoperability. Overall, most of the applications are able to perform the basic features of model creation and FBA simulation. The COBRA Toolbox, OptFlux and FASIMU are versatile enough to support advanced in silico algorithms for identifying environmental and genetic targets for strain design. SurreyFBA, WEbcoli, Acorn, FAME, GEMSiRV and MetaFluxNet are distinct tools that provide user-friendly interfaces for model handling. In terms of software architecture, FBA-SimVis and OptFlux have flexible environments, as they enable plug-in/add-on features to aid prospective functional extensions. Notably, an increasing trend towards the implementation of more tailored e-services, such as central model repositories and assistance for collaborative efforts, was observed among the web-based applications with the help of advanced web technologies. Furthermore, the most recent applications, such as Model SEED, FAME, MetaFlux and MicrobesFlux, have even included several routines to facilitate the reconstruction of genome-scale metabolic models. Finally, a brief discussion of the future directions of FBA applications is given for the benefit of potential tool developers.
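At its core, FBA reduces to a linear program: maximize a biomass flux subject to the steady-state mass balance S·v = 0 and flux bounds. The three-reaction toy network below is a hypothetical example, not drawn from any of the surveyed tools, and is solved here with SciPy rather than a dedicated FBA package:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake (v0): -> A, conversion (v1): A -> B, biomass (v2): B ->
# Rows of S are the metabolites A and B; columns are the three reactions.
S = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
bounds = [(0, 10), (0, 1000), (0, 1000)]  # uptake capped at 10 flux units

# linprog minimizes, so negate the biomass objective to maximize v2
# subject to the steady-state constraint S v = 0.
res = linprog(c=[0, 0, -1.0], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)  # optimal flux distribution; biomass flux hits the uptake cap
```

The same structure (stoichiometric equality constraints, capacity bounds, linear objective) underlies all of the tools surveyed above; they differ in model handling, interfaces, and the extra algorithms layered on top.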
DEFF Research Database (Denmark)
Markham, Annette
This paper takes an actor-network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor-network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.
Hildebrandt, A. F.; Elleman, D. D.; Whitmore, F. C. (Inventor)
1966-01-01
A method and means for altering the intensity of a magnetic field by transposing flux from one location to the location desired for the magnetic field are examined. The device described includes a pair of communicating cavities formed in a block of superconducting material, together with an insert dimensioned to fit into one of the cavities and to substantially fill it. Magnetic flux is first trapped in the cavities by establishing a magnetic field while the superconducting material is above the critical temperature at which it becomes superconducting. Thereafter, the temperature of the material is reduced below the critical value, and the exciting magnetic field may then be removed. By varying the ratio of the areas of the two cavities, it is possible to produce a field having much greater flux density in the second, smaller cavity, into which the flux is transposed.
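Because the trapped flux Φ = B·A is conserved while the superconductor excludes field changes, the field in the smaller cavity scales with the area ratio. A minimal sketch of that arithmetic (the numerical values are illustrative, not from the patent):

```python
def concentrated_field(B1, A1, A2):
    """Field in cavity 2 after transposing the flux trapped in cavity 1.
    Flux conservation: B1 * A1 = B2 * A2, so B2 = B1 * A1 / A2."""
    return B1 * A1 / A2

# Example: 0.1 T trapped over 10 cm^2 (10e-4 m^2), transposed into a
# 1 cm^2 cavity, yields a tenfold field enhancement.
print(concentrated_field(0.1, 10e-4, 1e-4))
```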
2004-01-01
Club night "Flux in Tallinn" of the international electronic art symposium ISEA2004, held at the club Bon Bon. Estonia was represented by Ropotator, Ars Intel Inc., Urmas Puhkan, Joel Tammik, and Taavi Tulev (pseud. Wochtzchee). Club night coordinator: Andres Lõo.
International Nuclear Information System (INIS)
Hoyer, E.; Chin, J.; Hassenzahl, W.V.
1993-05-01
Undulators for high-performance applications in synchrotron-radiation sources and periodic magnetic structures for free-electron lasers have stringent requirements on the curvature of the electron's average trajectory. Undulators using the permanent magnet hybrid configuration often have fields in their central region that produce a curved trajectory caused by local, ambient magnetic fields such as those of the earth. The 4.6 m long Advanced Light Source (ALS) undulators use flux shunts to reduce this effect. These flux shunts are magnetic linkages of very high permeability material connecting the two steel beams that support the magnetic structures. The shunts reduce the scalar potential difference between the supporting beams and carry substantial flux that would normally appear in the undulator gap. Magnetic design, mechanical configuration of the flux shunts and magnetic measurements of their effect on the ALS undulators are described
Casanova, Henri; Robert, Yves
2008-01-01
"…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad view…"
DEFF Research Database (Denmark)
Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy
2007-01-01
We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper or lower triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel…
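The conventional column-packed triangular layout that the rearrangement subroutines convert to and from can be sketched as follows. This NumPy illustration shows only the n(n + 1)/2 packed indexing; the block hybrid format itself and the Fortran subroutines are not reproduced here:

```python
import numpy as np

def pack_lower(A):
    """Pack the lower triangle of a symmetric n x n matrix, column by column,
    into a vector of n(n + 1)/2 variables (half the memory of full storage)."""
    n = A.shape[0]
    return np.concatenate([A[j:, j] for j in range(n)])

def packed_index(i, j, n):
    """Flat position of A[i, j] (i >= j) in the column-packed vector:
    skip the j earlier columns (n, n-1, ..., n-j+1 entries each),
    then offset by (i - j) within column j."""
    return j * n - j * (j - 1) // 2 + (i - j)
```

Cache-friendly variants such as the block hybrid format rearrange these same n(n + 1)/2 values into contiguous blocks so that Level-3 BLAS kernels can operate on them.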
International Nuclear Information System (INIS)
Oda, Naotaka.
1993-01-01
The device of the present invention greatly reduces the analog processing section, such as the analog filter and analog processing circuit. That is, the device of the present invention comprises (1) a neutron flux detection means for detecting neutron fluxes in the reactor, (2) a digital filter means for dividing signals corresponding to the detected neutron fluxes into predetermined frequency bands, and (3) a calculation processing means for applying calculation processing appropriate to each frequency band to the neutron flux detection signals divided by the digital filter means. With such a constitution, since the neutron detection signals are processed by the digital filter means, accuracy is improved and changing the filter characteristics is facilitated. Further, when a neutron flux level is obtained, calculation processing corresponding to the frequency band can be conducted without an analog processing circuit. Accordingly, maintenance and accuracy are improved by greatly decreasing the number of parts. Further, since problems inherent to analog circuits are eliminated, neutron fluxes are monitored with high reliability. (I.S.)
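The band-splitting role of the digital filter means can be sketched with standard IIR sections. The sampling rate, band edge, and filter order below are arbitrary illustrative choices, not values from the patent:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def split_bands(signal, fs, f_cut):
    """Divide a sampled neutron-flux signal into low and high frequency bands
    with 4th-order Butterworth sections (zero-phase filtering)."""
    sos_lo = butter(4, f_cut, btype="low", fs=fs, output="sos")
    sos_hi = butter(4, f_cut, btype="high", fs=fs, output="sos")
    return sosfiltfilt(sos_lo, signal), sosfiltfilt(sos_hi, signal)

# A slow 1 Hz level drift plus a 50 Hz fluctuation separate cleanly
# at a 10 Hz cut; each band can then get its own calculation processing.
fs = 1000.0
t = np.arange(0, 5, 1 / fs)
x = np.sin(2 * np.pi * 1 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
lo, hi = split_bands(x, fs, 10.0)
```

Swapping the filter characteristics is then a one-line change to the `butter` design call, which is exactly the flexibility the abstract attributes to the digital approach.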
Neutron flux monitoring device
International Nuclear Information System (INIS)
Shimazu, Yoichiro.
1995-01-01
In a neutron flux monitoring device, there are disposed a neutron flux measuring means for outputting signals in accordance with the intensity of neutron fluxes, a calculation means for calculating a self power density spectrum at a frequency band suitable to the object to be measured based on the output of the neutron flux measuring means, an alarm set value generation means for outputting an alarm set value as a comparative reference, and an alarm judging means for comparing the alarm set value with the output value of the calculation means to judge whether an alarm is required and to generate an alarm in accordance with the result of the judgment. Namely, the time series of neutron flux signals is Fourier-transformed over a predetermined period of time by the calculation means, and from the sum of the squares of the real and imaginary components at each frequency, a self power density spectrum in the frequency band suitable to the object to be measured is calculated. Then, when the set reference value is exceeded, an alarm is generated. This reliably prevents the generation of erroneous alarms due to neutron flux noise and accurately generates an alarm at the appropriate time. (N.H.)
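The transform-and-compare step maps directly onto an FFT: the self power density at each frequency is the sum of squares of the real and imaginary components, summed over the band of interest and compared with the alarm set value. A small sketch follows; the band edges, set value, and PSD normalization are illustrative assumptions, not values from the patent:

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Self power density spectrum from the squared real and imaginary FFT
    components, summed over the band [f_lo, f_hi]."""
    X = np.fft.rfft(x)
    psd = (X.real ** 2 + X.imag ** 2) / (fs * len(x))
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    return psd[(f >= f_lo) & (f <= f_hi)].sum()

def alarm(x, fs, f_lo, f_hi, set_value):
    """Generate an alarm when the in-band power exceeds the set value."""
    return band_power(x, fs, f_lo, f_hi) > set_value
```

Because broadband noise spreads its power over all frequencies while a genuine oscillation concentrates it in the monitored band, the band-limited comparison is what suppresses noise-driven false alarms.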
Automated reactivity anomaly surveillance in the Fast Flux Test Facility
International Nuclear Information System (INIS)
Knutson, B.J.; Harris, R.A.; Honeyman, D.J.; Shook, A.T.; Krohn, C.N.
1985-01-01
The automated technique for monitoring core reactivity during power operation used at the Fast Flux Test Facility (FFTF) is described. This technique relies on comparing predicted to measured rod positions to detect any anomalous (or unpredicted) core reactivity changes. It is implemented on the Plant Data System (PDS) computer and, thus, provides rapid indication of any abnormal core conditions. The prediction algorithms use thermal-hydraulic, control rod position and neutron flux sensor information to predict the core reactivity state
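The surveillance principle, flagging when measured rod positions drift from prediction by more than an allowance, can be sketched as below. The rod worth coefficient and surveillance limit are hypothetical values for illustration, not FFTF parameters:

```python
def reactivity_anomaly(predicted_mm, measured_mm, worth_pcm_per_mm, limit_pcm):
    """Convert rod-position deviations (measured minus predicted, in mm)
    into an equivalent reactivity anomaly and flag when the total exceeds
    the surveillance limit."""
    anomaly = sum(worth_pcm_per_mm * (m - p)
                  for p, m in zip(predicted_mm, measured_mm))
    return anomaly, abs(anomaly) > limit_pcm

# Three rods: one sits 4 mm deeper than the prediction says it should.
anom, flagged = reactivity_anomaly([100, 250, 400], [100, 250, 404], 2.0, 5.0)
```

In the actual system the prediction side is the interesting part: thermal-hydraulic and neutron flux sensor data feed the rod-position prediction that this comparison consumes.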
Linker, J. A.; Caplan, R. M.; Downs, C.; Riley, P.; Mikic, Z.; Lionello, R.; Henney, C. J.; Arge, C. N.; Liu, Y.; Derosa, M. L.; Yeates, A.; Owens, M. J.
2017-10-01
The heliospheric magnetic field is of pivotal importance in solar and space physics. The field is rooted in the Sun’s photosphere, where it has been observed for many years. Global maps of the solar magnetic field based on full-disk magnetograms are commonly used as boundary conditions for coronal and solar wind models. Two primary observational constraints on the models are (1) the open field regions in the model should approximately correspond to coronal holes (CHs) observed in emission and (2) the magnitude of the open magnetic flux in the model should match that inferred from in situ spacecraft measurements. In this study, we calculate both magnetohydrodynamic and potential field source surface solutions using 14 different magnetic maps produced from five different types of observatory magnetograms, for the time period surrounding 2010 July. We have found that for all of the model/map combinations, models that have CH areas close to observations underestimate the interplanetary magnetic flux, or, conversely, for models to match the interplanetary flux, the modeled open field regions are larger than CHs observed in EUV emission. In an alternative approach, we estimate the open magnetic flux entirely from solar observations by combining automatically detected CHs for Carrington rotation 2098 with observatory synoptic magnetic maps. This approach also underestimates the interplanetary magnetic flux. Our results imply that either typical observatory maps underestimate the Sun’s magnetic flux, or a significant portion of the open magnetic flux is not rooted in regions that are obviously dark in EUV and X-ray emission.
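The "open flux entirely from solar observations" step amounts to integrating |Br| over the coronal-hole pixels of a synoptic map with proper spherical area weights. A sketch under stated assumptions: the grid is uniform in latitude and longitude, the map and mask are synthetic, and only the solar radius in cm is a physical constant:

```python
import numpy as np

RSUN_CM = 6.96e10  # solar radius in cm

def open_flux(br_gauss, ch_mask, lat_edges_deg, lon_edges_deg):
    """Unsigned magnetic flux (Mx) through masked pixels of a lat-lon map.
    Exact cell area on a sphere: R^2 * dphi * (sin(lat_top) - sin(lat_bot)),
    which supplies the cos(latitude) weighting of equal-angle grids."""
    lat = np.deg2rad(np.asarray(lat_edges_deg))
    dphi = np.deg2rad(np.diff(lon_edges_deg))
    darea = RSUN_CM ** 2 * np.outer(np.diff(np.sin(lat)), dphi)
    return np.sum(np.abs(br_gauss) * ch_mask * darea)
```

Restricting the mask to automatically detected coronal holes, as done for Carrington rotation 2098 above, gives the observational open-flux estimate that is compared against in situ values.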
Meromorphic flux compactification
Energy Technology Data Exchange (ETDEWEB)
Damian, Cesar [Departamento de Ingeniería Mecánica, Universidad de Guanajuato,Carretera Salamanca-Valle de Santiago Km 3.5+1.8 Comunidad de Palo Blanco,Salamanca (Mexico); Loaiza-Brito, Oscar [Departamento de Física, Universidad de Guanajuato,Loma del Bosque No. 103 Col. Lomas del Campestre C.P 37150 León, Guanajuato (Mexico)
2017-04-26
We present exact solutions of four-dimensional Einstein's equations related to Minkowski vacuum constructed from Type IIB string theory with non-trivial fluxes. Following https://www.doi.org/10.1007/JHEP02(2015)187; https://www.doi.org/10.1007/JHEP02(2015)188 we study a non-trivial flux compactification on a fibered product of a four-dimensional torus and a two-dimensional sphere punctured by 5- and 7-branes. By considering only 3-form fluxes and the dilaton, as functions of the internal sphere coordinates, we show that these solutions correspond to a family of supersymmetric solutions constructed by the use of G-theory. Meromorphicity of functions constructed in terms of fluxes and warping factors guarantees that the flux and 5-brane contributions to the scalar curvature vanish while fulfilling stringent constraints such as tadpole cancellation and the Bianchi identities. Different Einstein solutions are shown to be related by U-dualities. We present three supersymmetric non-trivial Minkowski vacuum solutions and compute the corresponding soft terms. We also construct a non-supersymmetric solution and study its stability.
Flux Pinning in Superconductors
Matsushita, Teruo
2007-01-01
The book covers the flux pinning mechanisms and properties and the electromagnetic phenomena caused by flux pinning, common to metallic, high-Tc and MgB2 superconductors. The condensation energy interaction known for normal precipitates or grain boundaries and the kinetic energy interaction proposed for artificial Nb pins in Nb-Ti, etc., are introduced for the pinning mechanism. Summation theories to derive the critical current density are discussed in detail. Irreversible magnetization and AC loss caused by flux pinning are also discussed. The loss originally stems from the ohmic dissipation of normal electrons in the normal core, driven by the electric field induced by the flux motion. The reader will learn why the resultant loss is of hysteresis type in spite of such a mechanism. The influence of flux pinning on the vortex phase diagram in high-Tc superconductors is discussed, and the dependencies of the irreversibility field on other quantities such as anisotropy of supercondu…
The converged Sn algorithm for nuclear criticality
International Nuclear Information System (INIS)
Ganapol, B. D.; Hadad, K.
2009-01-01
A new discrete ordinates algorithm to determine the multiplication factor of a 1D nuclear reactor, based on Bengt Carlson's Sn method, is presented. The algorithm applies the Romberg and Wynn-epsilon accelerators to accelerate a 1D, one-group Sn solution to its asymptotic limit. We demonstrate the feasibility of the Converged Sn (CSn) solution on several one-group criticality benchmark compilations. The new formulation is especially convenient since it enables highly accurate critical fluxes and eigenvalues using the most fundamental transport algorithm. (authors)
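The Wynn-epsilon step can be sketched on its own. Given a convergent sequence of iterates (here the partial sums of a slowly converging alternating series stand in for the k_eff iterates; this is an illustration, not the CSn code), the epsilon table ε_{k+1}^{(n)} = ε_{k-1}^{(n+1)} + 1/(ε_k^{(n+1)} − ε_k^{(n)}) is built column by column and the last even-numbered column is read off:

```python
def wynn_epsilon(seq):
    """Accelerate a scalar sequence with Wynn's epsilon algorithm.
    Even-numbered columns of the epsilon table hold the accelerated
    estimates; odd columns are intermediates. Assumes no two consecutive
    entries of a column are equal (no zero division)."""
    prev_prev = [0.0] * (len(seq) + 1)   # column eps_{-1}
    prev = list(seq)                     # column eps_0
    best = prev[-1]
    col = 0
    while len(prev) > 1:
        cur = [prev_prev[i + 1] + 1.0 / (prev[i + 1] - prev[i])
               for i in range(len(prev) - 1)]
        prev_prev, prev = prev, cur
        col += 1
        if col % 2 == 0:
            best = prev[-1]              # latest even column
    return best

# Partial sums of ln 2 = 1 - 1/2 + 1/3 - ... converge like 1/n; nine terms
# plus epsilon acceleration recover many more correct digits.
partial, s = [], 0.0
for k in range(1, 10):
    s += (-1) ** (k + 1) / k
    partial.append(s)
accelerated = wynn_epsilon(partial)
```

The same transformation applied to the sequence of one-group Sn eigenvalue iterates is what drives them to their asymptotic limit in the CSn formulation.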
The Chandra Source Catalog: Algorithms
McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-09-01
Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.
Neutron flux monitoring device
International Nuclear Information System (INIS)
Goto, Yasushi; Mitsubori, Minehisa; Ohashi, Kazunori.
1997-01-01
The present invention provides a neutron flux monitoring device for preventing the occurrence of erroneous reactor scram caused by elevation of the indication of a source range monitor (SRM) due to a factor other than an actual increase of neutron fluxes. Namely, a judgment based on measured values obtained by the pulse-counting method and a judgment based on measured values obtained by the Campbell method are combined. A logic of switching the neutron flux measurement method used for monitoring, namely switching to the intermediate range only when both judgments are valid, is adopted. Then, even if the indicated value based on the Campbell method is elevated with no increase of the count rate in the source range, the switch to the intermediate range is not conducted. As a result, erroneous reactor scrams such as 'shorter reactor period' can be avoided. (I.S.)
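The combined-judgment logic amounts to a conjunction: the range change is permitted only when the pulse-counting and Campbell estimates both indicate increased flux. A schematic sketch (function and threshold names are illustrative, not from the patent):

```python
def permit_range_switch(count_rate_cps, campbell_level,
                        count_threshold, campbell_threshold):
    """Switch from source range to intermediate range only when BOTH
    measurement methods agree that the neutron flux has risen; a Campbell
    indication alone (e.g., noise-driven) is not sufficient."""
    return (count_rate_cps >= count_threshold
            and campbell_level >= campbell_threshold)
```

The AND is the whole safety argument: a spurious Campbell elevation with a flat pulse count rate leaves the monitor in the source range, so no "shorter reactor period" trip can follow from it.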
2014-11-14
biomass production. Although maximization of biomass production as used in E-Flux and FBA has been exploited to great advantage in many simulations and … due to appreciable production of fermentation products, particularly ethanol [37]. The experimentally obtained biomass yields by Lee et al. were 0.020 … be larger than a certain level (e.g., 90% in our simulations) of the theoretical maximum. Features of the E-Fmin Algorithm: The main distinguishing …
International Nuclear Information System (INIS)
Honda, M.; Kasahara, K.; Hidaka, K.; Midorikawa, S.
1990-02-01
A detailed Monte Carlo simulation of neutrino fluxes of atmospheric origin is made, taking into account the muon polarization effect on neutrinos from muon decay. We calculate the fluxes with energies above 3 MeV for future experiments. There still remains a significant discrepancy between the calculated (ν_e + ν̄_e)/(ν_μ + ν̄_μ) ratio and that observed by the Kamiokande group. However, the ratio evaluated at the Frejus site shows good agreement with the data. (author)
Energy Technology Data Exchange (ETDEWEB)
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
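A toy analogue of the fixed-size function gas is easy to sketch: functions drawn from a small basis interact by composition, and the product replaces a random member so the ensemble size stays constant. In this illustration the strings of the lambda-calculus-derived language are replaced by plain integer functions, and the basis and parameters are invented, so it only mimics the iterated-map dynamics rather than reproducing Fontana's model:

```python
import random

# A tiny basis of total functions on integers (stand-ins for encoded objects).
BASIS = [lambda x: x + 1, lambda x: 2 * x, lambda x: x % 7]

def interact(f, g):
    """Interaction = function composition, producing a new function."""
    return lambda x: f(g(x))

def function_gas(pop_size=20, steps=200, seed=1):
    """Iterate a fixed-size ensemble of randomly interacting functions:
    two members collide, and their composition replaces a random member."""
    rng = random.Random(seed)
    pop = [rng.choice(BASIS) for _ in range(pop_size)]
    for _ in range(steps):
        f, g = rng.sample(pop, 2)
        pop[rng.randrange(pop_size)] = interact(f, g)
    return pop
```

In the real Turing gas the interesting behavior comes from the semantics: because functions are expressions, composed products can reproduce existing members, and self-replicators reshape the ensemble's organization.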
MAGNETIC FLUX CANCELLATION IN ELLERMAN BOMBS
Energy Technology Data Exchange (ETDEWEB)
Reid, A.; Mathioudakis, M.; Nelson, C. J.; Henriques, V. [Astrophysics Research Centre, School of Mathematics and Physics, Queen’s University Belfast, BT7 1NN, Northern Ireland (United Kingdom); Doyle, J. G. [Armagh Observatory, College Hill, Armagh, BT61 9DG (United Kingdom); Scullion, E. [Trinity College Dublin, College Green, Dublin 2 (Ireland); Ray, T., E-mail: areid29@qub.ac.uk [Dublin Institute for Advanced Studies, 31 Fitzwilliam Place, Dublin 2 (Ireland)
2016-06-01
Ellerman bombs (EBs) are often found to be co-spatial with bipolar photospheric magnetic fields. We use Hα imaging spectroscopy along with Fe I 6302.5 Å spectropolarimetry from the Swedish 1 m Solar Telescope (SST), combined with data from the Solar Dynamics Observatory, to study EBs and the evolution of the local magnetic fields at EB locations. EBs are found via an EB detection and tracking algorithm. Using NICOLE inversions of the spectropolarimetric data, we find that, on average, (3.43 ± 0.49) × 10^24 erg of stored magnetic energy disappears from the bipolar region during EB burning. The inversions also show flux cancellation rates of 10^14–10^15 Mx s^−1 and temperature enhancements of 200 K at the detection footpoints. We investigate the near-simultaneous flaring of EBs due to co-temporal flux emergence from a sunspot, which shows a decrease in transverse velocity when interacting with an existing, stationary area of opposite-polarity magnetic flux, resulting in the formation of the EBs. We also show that these EBs can be fueled further by additional, faster moving, negative magnetic flux regions.
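The quoted cancellation rates in Mx s^−1 correspond to differencing the single-polarity flux in the EB region between successive magnetogram frames. A minimal sketch with synthetic values; the pixel area and cadence are illustrative, not SST parameters:

```python
import numpy as np

def cancellation_rate(br_t0, br_t1, pixel_area_cm2, dt_s):
    """Rate of positive-polarity flux loss (Mx/s) between two frames of a
    line-of-sight magnetogram patch (values in Gauss).
    Flux per frame: sum of positive Br times the pixel area."""
    phi0 = np.sum(br_t0[br_t0 > 0]) * pixel_area_cm2
    phi1 = np.sum(br_t1[br_t1 > 0]) * pixel_area_cm2
    return (phi0 - phi1) / dt_s

# 2x2 patch: the positive polarity weakens from 300 G to 150 G total
# over a 30 s cadence with a (hypothetical) 1e15 cm^2 pixel.
br0 = np.array([[100.0, -50.0], [200.0, 0.0]])
br1 = np.array([[50.0, -50.0], [100.0, 0.0]])
rate = cancellation_rate(br0, br1, 1e15, 30.0)
```

Tracking the same quantity for the negative polarity (and requiring both to decrease together) is what distinguishes genuine cancellation from simple flux dispersal.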
Radiation flux measuring device
International Nuclear Information System (INIS)
Corte, E.; Maitra, P.
1977-01-01
A radiation flux measuring device is described which employs a differential pair of transistors, the output of which is maintained constant, connected to a radiation detector. Means connected to the differential pair produce a signal representing the log of the a-c component of the radiation detector, thereby providing a signal representing the true root mean square logarithmic output. 3 claims, 2 figures
Soluble organic nutrient fluxes
Robert G. Qualls; Bruce L. Haines; Wayne Swank
2014-01-01
Our objectives in this study were to (i) compare fluxes of the dissolved organic nutrients dissolved organic carbon (DOC), dissolved organic nitrogen (DON), and dissolved organic phosphorus (DOP) in a clearcut area and an adjacent mature reference area, and (ii) determine whether concentrations of dissolved organic nutrients or inorganic nutrients were greater in clearcut areas than in reference areas,...
Energy Technology Data Exchange (ETDEWEB)
Grassi, Pietro Antonio [CERN, Theory Unit, CH-1211 Geneva, 23 (Switzerland); Marescotti, Matteo [Dipartimento di Fisica Teorica, Universita di Torino, Via Giuria 1, I-10125, Turin (Italy)
2007-01-15
As has been recently pointed out, physically relevant models derived from string theory require the presence of non-vanishing form fluxes besides the usual geometrical constraints. In the case of NS-NS fluxes, Generalized Complex Geometry encodes this information in a beautiful geometrical structure. On the other hand, the R-R fluxes call for supergeometry as the underlying mathematical framework. In this context, we analyze the possibility of constructing interesting supermanifolds recasting the geometrical data and RR fluxes. To characterize these supermanifolds we have been guided by the fact that topological strings on supermanifolds require super-Ricci flatness of the target space. This can be achieved by adding to a given bosonic manifold enough anticommuting coordinates and new constraints on the bosonic sub-manifold. We study these constraints at the linear and non-linear level for a pure geometrical setting and in the presence of p-form field strengths. We find that certain spaces admit several super-extensions and we give a parameterization in a simple case of d bosonic coordinates and two fermionic coordinates. In addition, we comment on the role of the RR field in the construction of the super-metric. We give several examples based on supergroup manifolds and coset supermanifolds.
International Nuclear Information System (INIS)
Perkins, D.H.
1984-01-01
The atmospheric neutrino fluxes, which are responsible for the main background in proton decay experiments, have been calculated by two independent methods. There are discrepancies between the two sets of results regarding latitude effects and up-down asymmetries, especially for neutrino energies Esub(ν) < 1 GeV. (author)
Indian Academy of Sciences (India)
Flux scaling: Ultimate regime. With the Nusselt number and the mixing-length scales, we obtain the Nusselt number and Reynolds number (w'd/ν) scalings, expected to occur at extremely high Ra in Rayleigh-Benard convection.
Fourier transform and controlling of flux in scalar hysteresis measurement
International Nuclear Information System (INIS)
Kuczmann, Miklos
2008-01-01
The paper deals with a possible realization of eliminating the effect of noise in scalar hysteresis measurements. The measured signals are transformed into the frequency domain and, after applying a digital filter, the spectra of the filtered signals are transformed back to the time domain. The proposed technique results in an accurate noise-removal algorithm. The paper also illustrates a fast controlling algorithm applying the inverse of the actually measured hysteresis loop, and a proportional one to measure distorted flux patterns. By developing the mentioned algorithms, the work aims at the control of more complicated phenomena, i.e. measuring vector hysteresis characteristics
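The filter details are not given in the abstract; the following is a minimal sketch of the general approach, frequency-domain thresholding with NumPy (the test waveform, harmonic content, and threshold are hypothetical, not the paper's):

```python
import numpy as np

def fft_denoise(signal, keep_fraction=0.05):
    """Remove noise by zeroing small spectral components.

    Transform to the frequency domain, discard bins whose magnitude
    falls below keep_fraction of the largest bin, transform back.
    """
    spectrum = np.fft.rfft(signal)
    threshold = keep_fraction * np.abs(spectrum).max()
    spectrum[np.abs(spectrum) < threshold] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# Hypothetical test signal: fundamental plus 3rd harmonic, as in a
# distorted flux waveform, corrupted by broadband noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2048, endpoint=False)
clean = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 150 * t)
noisy = clean + 0.2 * rng.standard_normal(t.size)
filtered = fft_denoise(noisy)
```

Because the two harmonics fall on exact FFT bins here, almost all of the broadband noise is rejected while the signal passes through unchanged.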
Diffusion piecewise homogenization via flux discontinuity factors
International Nuclear Information System (INIS)
Sanchez, Richard; Zmijarevic, Igor
2011-01-01
We analyze the calculation of flux discontinuity factors (FDFs) for use with piecewise subdomain assembly homogenization. These coefficients depend on the numerical mesh used to compute the diffusion problem. When the mesh has a single degree of freedom on subdomain interfaces the solution is unique and can be computed independently per subdomain. For all other cases we have implemented an iterative calculation for the FDFs. Our numerical results show that there is no solution to this nonlinear problem, but that the iterative algorithm converges towards FDF values that reproduce subdomain reaction rates with relatively high precision. In our tests we have included both the GET and black-box FDFs. (author)
Directory of Open Access Journals (Sweden)
Hyun-Seob Song
Full Text Available Prediction of possible flux distributions in a metabolic network provides detailed phenotypic information that links metabolism to cellular physiology. To estimate metabolic steady-state fluxes, the most common approach is to solve a set of macroscopic mass balance equations subjected to stoichiometric constraints while attempting to optimize an assumed optimal objective function. This assumption is justifiable in specific cases but may be invalid when tested across different conditions, cell populations, or other organisms. With an aim to providing a more consistent and reliable prediction of flux distributions over a wide range of conditions, in this article we propose a framework that uses the flux minimization principle to predict active metabolic pathways from mRNA expression data. The proposed algorithm minimizes a weighted sum of flux magnitudes, while biomass production can be bounded to fit an ample range from very low to very high values according to the analyzed context. We have formulated the flux weights as a function of the corresponding enzyme reaction's gene expression value, enabling the creation of context-specific fluxes based on a generic metabolic network. In case studies of wild-type Saccharomyces cerevisiae, and wild-type and mutant Escherichia coli strains, our method achieved high prediction accuracy, as gauged by correlation coefficients and sums of squared error, with respect to the experimentally measured values. In contrast to other approaches, our method was able to provide quantitative predictions for both model organisms under a variety of conditions. Our approach requires no prior knowledge or assumption of a context-specific metabolic functionality and does not require trial-and-error parameter adjustments. Thus, our framework is of general applicability for modeling the transcription-dependent metabolism of bacteria and yeasts.
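The weighted flux-minimization idea described above can be sketched as a linear program. Below is a toy illustration with SciPy; the three-reaction network, expression values, and weighting rule are hypothetical stand-ins, not the paper's model:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network, one metabolite A and three irreversible reactions:
#   R1: -> A (uptake), R2: A -> biomass, R3: A -> byproduct
# Steady state requires S v = 0 with stoichiometric matrix S.
S = np.array([[1.0, -1.0, -1.0]])

# Hypothetical mRNA expression levels; lowly expressed reactions get
# large weights, steering flux away from them (one possible weighting).
expression = np.array([1.0, 2.0, 0.1])
weights = 1.0 / expression

# Minimize sum_i w_i * v_i subject to S v = 0 and a required biomass
# flux v2 >= 1 (all fluxes irreversible here, so |v_i| = v_i).
bounds = [(0, None), (1.0, None), (0, None)]
res = linprog(c=weights, A_eq=S, b_eq=[0.0], bounds=bounds)
v = res.x  # expected: uptake 1, biomass 1, byproduct 0
```

With the byproduct reaction weakly expressed (weight 10), the optimum routes all uptake into biomass, which is the qualitative behaviour the flux-weighting is meant to produce.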
Design of a flux buffer based on the flux shuttle
International Nuclear Information System (INIS)
Gershenson, M.
1991-01-01
This paper discusses the design considerations for a flux buffer based on the flux-shuttle concept. Particular attention is given to the issues of flux popping, stability of operation and saturation levels for a large input. Modulation techniques used in order to minimize 1/f noise, in addition to offsets are also analyzed. Advantages over conventional approaches using a SQUID for a flux buffer are discussed. Results of computer simulations are presented
Lobotomy of flux compactifications
Energy Technology Data Exchange (ETDEWEB)
Dibitetto, Giuseppe [Institutionen för fysik och astronomi, University of Uppsala,Box 803, SE-751 08 Uppsala (Sweden); Guarino, Adolfo [Albert Einstein Center for Fundamental Physics, Institute for Theoretical Physics,Bern University, Sidlerstrasse 5, CH-3012 Bern (Switzerland); Roest, Diederik [Centre for Theoretical Physics, University of Groningen,Nijenborgh 4 9747 AG Groningen (Netherlands)
2014-05-15
We provide the dictionary between four-dimensional gauged supergravity and type II compactifications on T{sup 6} with metric and gauge fluxes in the absence of supersymmetry breaking sources, such as branes and orientifold planes. Secondly, we prove that there is a unique isotropic compactification allowing for critical points. It corresponds to a type IIA background given by a product of two 3-tori with SO(3) twists and results in a unique theory (gauging) with a non-semisimple gauge algebra. Besides the known four AdS solutions surviving the orientifold projection to N=4 induced by O6-planes, this theory contains a novel AdS solution that requires non-trivial orientifold-odd fluxes, hence being a genuine critical point of the N=8 theory.
Physics of magnetic flux ropes
Russell, C. T.; Priest, E. R.; Lee, L. C.
The present work encompasses papers on the structure, waves, and instabilities of magnetic flux ropes (MFRs), photospheric flux tubes (PFTs), the structure and heating of coronal loops, solar prominences, coronal mass ejections and magnetic clouds, flux ropes in planetary ionospheres, the magnetopause, magnetospheric field-aligned currents and flux tubes, and the magnetotail. Attention is given to the equilibrium of MFRs, resistive instability, magnetic reconnection and turbulence in current sheets, dynamical effects and energy transport in intense flux tubes, waves in solar PFTs, twisted flux ropes in the solar corona, an electrodynamical model of solar flares, filament cooling and condensation in a sheared magnetic field, the magnetopause, the generation of twisted MFRs during magnetic reconnection, ionospheric flux ropes above the South Pole, substorms and MFR structures, evidence for flux ropes in the earth magnetotail, and MFRs in 3D MHD simulations.
Pseudo-deterministic Algorithms
Goldwasser , Shafi
2012-01-01
In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...
International Nuclear Information System (INIS)
Williams, D.J.
1990-01-01
Estimates are provided for the amount of methane emitted annually into the atmosphere in Australia for a variety of sources. The sources considered are coal mining, landfill, motor vehicles, the natural gas supply system, rice paddies, bushfires, termites, wetlands and animals. This assessment indicates that the major sources of methane are natural or agricultural in nature and therefore offer little scope for reduction. Nevertheless the remainder are not trivial, and reduction of these fluxes could play a significant part in any Australian action on the greenhouse problem. 19 refs., 7 tabs., 1 fig
Development of computational technique for labeling magnetic flux-surfaces
International Nuclear Information System (INIS)
Nunami, Masanori; Kanno, Ryutaro; Satake, Shinsuke; Hayashi, Takaya; Takamaru, Hisanori
2006-03-01
In recent Large Helical Device (LHD) experiments, radial profiles of ion temperature, electric field, etc. are measured in the m/n=1/1 magnetic island produced by island control coils, where m is the poloidal mode number and n the toroidal mode number. When the transport of the plasma in the radial profiles is numerically analyzed, an average over a magnetic flux-surface in the island is a very useful concept to understand the transport. On averaging, a proper labeling of the flux-surfaces is necessary. In general, it is not easy to label the flux-surfaces in a magnetic field with an island, compared with the case of a magnetic field configuration having nested flux-surfaces. In the present paper, we have developed a new computational technique to label the magnetic flux-surfaces. This technique is constructed by using an optimization algorithm known as simulated annealing. The flux-surfaces are discerned by using two labels: one is a classification of the magnetic field structure, i.e., core, island, ergodic, and outside regions, and the other is a value of the toroidal magnetic flux. We have applied the technique to an LHD configuration with the m/n=1/1 island, and successfully obtained the discrimination of the magnetic field structure. (author)
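The annealing details are not spelled out in the abstract; the following is a generic simulated-annealing skeleton of the kind referred to, applied to a toy integer-labeling cost (the cost function, move set, and cooling schedule are all hypothetical):

```python
import math
import random

def simulated_annealing(cost, neighbor, state,
                        t0=10.0, cooling=0.995, steps=5000, seed=0):
    """Generic simulated annealing: always accept improving moves,
    accept worsening moves with probability exp(-delta/T), and lower
    the temperature T geometrically."""
    rng = random.Random(seed)
    best = current = state
    t = t0
    for _ in range(steps):
        candidate = neighbor(current, rng)
        delta = cost(candidate) - cost(current)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= cooling
    return best

# Toy problem standing in for a label assignment: find the integer x
# minimizing (x - 3)^2, starting far from the optimum.
best = simulated_annealing(
    cost=lambda x: (x - 3) ** 2,
    neighbor=lambda x, rng: x + rng.choice((-1, 1)),
    state=-20,
)
```

In the paper's setting the state would be a surface label assignment and the cost a measure of how badly field-line data disagree with the labels; the accept/cool loop is unchanged.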
The Chandra Source Catalog 2.0: Estimating Source Fluxes
Primini, Francis Anthony; Allen, Christopher E.; Miller, Joseph; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula
2018-01-01
The Second Chandra Source Catalog (CSC2.0) will provide information on approximately 316,000 point or compact extended x-ray sources, derived from over 10,000 ACIS and HRC-I imaging observations available in the public archive at the end of 2014. As in the previous catalog release (CSC1.1), fluxes for these sources will be determined separately from source detection, using a Bayesian formalism that accounts for background, spatial resolution effects, and contamination from nearby sources. However, the CSC2.0 procedure differs from that used in CSC1.1 in three important aspects. First, for sources in crowded regions in which photometric apertures overlap, fluxes are determined jointly, using an extension of the CSC1.1 algorithm, as discussed in Primini & Kashyap (2014ApJ...796...24P). Second, an MCMC procedure is used to estimate marginalized posterior probability distributions for source fluxes. Finally, for sources observed in multiple observations, a Bayesian Blocks algorithm (Scargle, et al. 2013ApJ...764..167S) is used to group observations into blocks of constant source flux. In this poster we present details of the CSC2.0 photometry algorithms and illustrate their performance in actual CSC2.0 datasets. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
International Nuclear Information System (INIS)
Banner, D.
1995-01-01
Critical heat flux (CHF) is of importance for nuclear safety and represents a major limiting factor for reactor cores. Critical heat flux is caused by a sharp reduction in the heat transfer coefficient located at the outer surface of fuel rods. Safety requires that this phenomenon, also called the boiling crisis, be precluded under nominal or incidental conditions (Class I and II events). CHF evaluation in reactor cores is basically a two-step approach. Fuel assemblies are first tested in experimental loops in order to determine CHF limits under various flow conditions. Then, core thermal-hydraulic calculations are performed for safety evaluation. The paper goes into more detail about the boiling crisis in order to pinpoint its complexity and the lack of fundamental understanding in many areas. Experimental test sections needed to collect data over wide thermal-hydraulic and geometric ranges are described. CHF safety margin evaluation in reactor cores is discussed by presenting how uncertainties are handled. From basic considerations to current concerns, the following topics are discussed: knowledge of the boiling crisis, CHF predictors, and advanced thermal-hydraulic codes. (authors). 15 refs., 4 figs
International Nuclear Information System (INIS)
Seki, Eiji; Tai, Ichiro.
1984-01-01
Purpose: To maintain the measuring accuracy and the response time within an allowable range in accordance with the change of neutron fluxes in a nuclear reactor pressure vessel. Constitution: Neutron fluxes within a nuclear reactor pressure vessel are detected by detectors, converted into pulse signals and amplified in a range-switching amplifier. The amplified signals are further converted through an A/D converter, and digital signals from the converter are subjected to a square operation in a square operation circuit. The output from the circuit is input to an integration circuit, which weights two successive signals by the constants 1/2^n and 1 - 1/2^n (n is a positive integer) respectively; the addition is then carried out to calculate the integrated value, and the number of additions is changed by changing n to vary the integrating time. The integrated value is input to a control circuit to control the value of n so that the fluctuation and the calculation time for the integrated value are within a predetermined range and, at the same time, the gain of the range-switching amplifier is controlled. (Seki, T.)
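The 1/2^n weighting amounts to a first-order recursive (exponentially weighted) averaging filter: larger n smooths more but responds more slowly, which is why the control circuit adjusts n on line. A minimal sketch of that trade-off (the step signal is hypothetical):

```python
def ewma_integrator(samples, n):
    """Recursive average y <- (1/2**n) * x + (1 - 1/2**n) * y,
    the digital equivalent of the patent's weighted integration."""
    alpha = 1.0 / 2 ** n
    y = samples[0]
    out = []
    for x in samples:
        y = alpha * x + (1.0 - alpha) * y
        out.append(y)
    return out

# Step change in the squared-flux signal: the filter settles on the
# new level, quickly for small n, slowly (but more smoothly) for large n.
step = [0.0] * 5 + [100.0] * 200
fast = ewma_integrator(step, n=2)   # short integrating time
slow = ewma_integrator(step, n=5)   # long integrating time
```

On noisy input the large-n filter would show correspondingly smaller fluctuation, the other side of the same trade-off.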
Hamiltonian Algorithm Sound Synthesis
大矢, 健一
2013-01-01
Hamiltonian Algorithm (HA) is an algorithm for searching solutions is optimization problems. This paper introduces a sound synthesis technique using Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.
Progressive geometric algorithms
Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.
2015-01-01
Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms
Progressive geometric algorithms
Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.
2014-01-01
Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms
DEFF Research Database (Denmark)
Bucher, Taina
2017-01-01
How does the knowledge of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself.
Energy Technology Data Exchange (ETDEWEB)
Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
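The BR algorithm itself is not reproduced in the abstract; for orientation, here is a minimal sketch of the classic unshifted QR iteration it is benchmarked against (practical QR uses Hessenberg reduction and shifts, omitted here):

```python
import numpy as np

def qr_eigenvalues(a, iterations=500):
    """Unshifted QR iteration: A_{k+1} = R_k Q_k is similar to A_k
    (same eigenvalues), and for well-behaved matrices converges
    toward triangular form with the eigenvalues on the diagonal."""
    a = np.array(a, dtype=float)
    for _ in range(iterations):
        q, r = np.linalg.qr(a)
        a = r @ q
    return np.sort(np.diag(a))

# Symmetric (tridiagonal) test matrix, so all eigenvalues are real
# and the iteration converges to a diagonal matrix.
m = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
approx = qr_eigenvalues(m)
```

The bulge-chasing variants (QR, BR) achieve the same similarity transformations implicitly, without forming Q and R explicitly at each step.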
Algorithmically specialized parallel computers
Snyder, Lawrence; Gannon, Dennis B
1985-01-01
Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster
Directory of Open Access Journals (Sweden)
José Carlos Mendonça
2012-03-01
Full Text Available In this study, MODIS images and the SEBAL algorithm were used to evaluate two propositions for estimating the sensible heat flux (H), based on the selection of the anchor pixels used to determine the surface temperature difference (dT). The proposition using pixels with extreme temperatures was called H-CLASSIC; the other, H-PESAGRO, adopted the lowest temperature for the cold pixel and, for the hot pixel, the value of H obtained as the residual of the Penman-Monteith (FAO 56) equation, estimated with data observed at an agrometeorological station. The H estimates from the two propositions were compared with H values obtained by the energy balance (Bowen ratio) method over an area cultivated with sugarcane. The results show that H-PESAGRO required a smaller number of iterations for the stabilization of the aerodynamic resistance (r_ah) values, and that the H-CLASSIC estimates were 58.35% higher than the H-PESAGRO ones. When compared with the H values estimated by the Bowen ratio method over the sugarcane pixel, the correlation coefficients were r = 0.54 and r = 0.71 for the H-CLASSIC and H-PESAGRO propositions, respectively.
Magnetic flux reconstruction methods for shaped tokamaks
International Nuclear Information System (INIS)
Tsui, Chi-Wa.
1993-12-01
The use of a variational method permits the Grad-Shafranov (GS) equation to be solved by reducing the problem of solving the 2D non-linear partial differential equation to the problem of minimizing a function of several variables. This high-speed algorithm approximately solves the GS equation given a parameterization of the plasma boundary and the current profile (p' and FF' functions). The author treats the current profile parameters as unknowns. The goal is to reconstruct the internal magnetic flux surfaces of a tokamak plasma and the toroidal current density profile from the external magnetic measurements. This is a classic problem of inverse equilibrium determination. The current profile parameters can be evaluated by several different matching procedures. Matching of magnetic flux and field at the probe locations using the Biot-Savart law and magnetic Green's functions provides a robust method of magnetic reconstruction. The matching of poloidal magnetic field on the plasma surface provides a unique method of identifying the plasma current profile. However, the power of this method is greatly compromised by the experimental errors of the magnetic signals. The Casing Principle provides a very fast way to evaluate the plasma contribution to the magnetic signals. It has the potential of being a fast matching method. The performance of this method is hindered by the accuracy of the poloidal magnetic field computed from the equilibrium solver. A flux reconstruction package has been implemented which integrates a vacuum field solver using a filament model for the plasma, a multi-layer perceptron neural network as an interface, and the volume integration of plasma current density using Green's functions as a matching method for the current profile parameters. The flux reconstruction package is applied to compare with the ASEQ and EFIT data. The results are promising
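The probe-matching step with a filament model reduces to a linear least-squares fit of filament currents to magnetic measurements. A toy sketch, with infinite straight filaments (B = μ0 I / 2πr) standing in for the actual Green's functions; the geometry and current values are hypothetical:

```python
import numpy as np

MU0 = 4e-7 * np.pi

def probe_matrix(filaments, probes):
    """Green's-function matrix: field magnitude at each probe per
    unit current in each infinite straight filament (toy model)."""
    g = np.empty((len(probes), len(filaments)))
    for i, p in enumerate(probes):
        for j, f in enumerate(filaments):
            r = np.hypot(p[0] - f[0], p[1] - f[1])
            g[i, j] = MU0 / (2.0 * np.pi * r)
    return g

# Hypothetical plasma model: 3 filaments inside, 8 probes on a circle
# around the plasma region.
filaments = [(1.0, 0.0), (1.2, 0.1), (0.8, -0.1)]
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
probes = [(1.0 + 0.5 * np.cos(a), 0.5 * np.sin(a)) for a in angles]

g = probe_matrix(filaments, probes)
true_currents = np.array([5.0e5, 2.0e5, 1.0e5])
signals = g @ true_currents                        # synthetic measurements
fit, *_ = np.linalg.lstsq(g, signals, rcond=None)  # recovered currents
```

With noiseless synthetic signals the fit recovers the currents essentially exactly; with real probe errors the conditioning of this matrix is exactly what limits the method, as the abstract notes.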
Gauge fluxes in F-theory compactifications
Energy Technology Data Exchange (ETDEWEB)
Lin, Ling
2016-07-13
In this thesis, we study the geometry and physics of gauge fluxes in F-theory compactifications to four dimensions. Motivated by the phenomenological requirement of chiral matter in realistic model building scenarios, we develop methods for a systematic analysis of primary vertical G{sub 4}-fluxes on torus-fibred Calabi-Yau fourfolds. In particular, we extend the well-known description of fluxes on elliptic fibrations with sections to the more general set-up of genus-one fibrations with multi-sections. The latter are known to give rise to discrete abelian symmetries in F-theory. We test our proposal for constructing fluxes in such geometries on an explicit model with SU(5) x Z{sub 2} symmetry, which is connected to an ordinary elliptic fibration with SU(5) x U(1) symmetry by a conifold transition. With our methods we systematically verify anomaly cancellation and tadpole matching in both models. Along the way, we find a novel way of understanding anomaly cancellation in 4D F-theory in purely geometric terms. This observation is further strengthened by a similar analysis of an SU(3) x SU(2) x U(1){sup 2} model. The obvious connection of this particular model with the Standard Model is then investigated in a more phenomenologically motivated survey. There, we will first provide possible matchings of the geometric spectrum with the Standard Model states, which highlights the role of the additional U(1) factor as a selection rule. In a second step, we then utilise our novel methods on flux computations to set up a search algorithm for semi-realistic chiral spectra in our Standard-Model-like fibrations over specific base manifolds B. As a demonstration, we scan over three choices P{sup 3}, Bl{sub 1}P{sup 3} and Bl{sub 2}P{sup 3} for the base. As a result we find a consistent flux that gives the chiral Standard Model spectrum with a vector-like triplet exotic, which may be lifted by a Higgs mechanism.
International Nuclear Information System (INIS)
Munn, W.I.
1981-01-01
The Fast Flux Test Facility (FFTF), located on the Hanford site a few miles north of Richland, Washington, is a major link in the chain of development required to sustain and advance Liquid Metal Fast Breeder Reactor (LMFBR) technology in the United States. This 400 MWt sodium cooled reactor is a three loop design, is operated by Westinghouse Hanford Company for the US Department of Energy, and is the largest research reactor of its kind in the world. The purpose of the facility is three-fold: (1) to provide a test bed for components, materials, and breeder reactor fuels which can significantly extend resource reserves; (2) to produce a complete body of base data for the use of liquid sodium in heat transfer systems; and (3) to demonstrate inherent safety characteristics of LMFBR designs
Flux compactifications and generalized geometries
International Nuclear Information System (INIS)
Grana, Mariana
2006-01-01
Following the lectures given at CERN Winter School 2006, we present a pedagogical overview of flux compactifications and generalized geometries, concentrating on closed string fluxes in type II theories. We start by reviewing the supersymmetric flux configurations with maximally symmetric four-dimensional spaces. We then discuss the no-go theorems (and their evasion) for compactifications with fluxes. We analyse the resulting four-dimensional effective theories for Calabi-Yau and Calabi-Yau orientifold compactifications, concentrating on the flux-induced superpotentials. We discuss the generic mechanism of moduli stabilization and illustrate with two examples: the conifold in IIB and a T^6/(Z_3 x Z_3) torus in IIA. We finish by studying the effective action and flux vacua for generalized geometries in the context of generalized complex geometry
Implicit flux-split schemes for the Euler equations
Thomas, J. L.; Walters, R. W.; Van Leer, B.
1985-01-01
Recent progress in the development of implicit algorithms for the Euler equations using the flux-vector splitting method is described. Comparisons of the relative efficiency of relaxation and spatially-split approximately factored methods on a vector processor for two-dimensional flows are made. For transonic flows, the higher convergence rate per iteration of the Gauss-Seidel relaxation algorithms, which are only partially vectorizable, is amply compensated for by the faster computational rate per iteration of the approximately factored algorithm. For supersonic flows, the fully-upwind line-relaxation method is more efficient since the numerical domain of dependence is more closely matched to the physical domain of dependence. A hybrid three-dimensional algorithm using relaxation in one coordinate direction and approximate factorization in the cross-flow plane is developed and applied to a forebody shape at supersonic speeds and a swept, tapered wing at transonic speeds.
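The flux-vector splitting referred to above can be made concrete with van Leer's splitting for the 1D Euler equations, a standard choice (the specific splitting used in the paper is not reproduced in the abstract, so this is an illustrative assumption):

```python
import numpy as np

GAMMA = 1.4

def euler_flux(rho, u, p):
    """Exact 1D Euler flux [mass, momentum, energy]."""
    energy = p / (GAMMA - 1.0) + 0.5 * rho * u * u
    return np.array([rho * u, rho * u * u + p, u * (energy + p)])

def van_leer_split(rho, u, p):
    """Van Leer flux-vector splitting: F = F+ + F-, with F+ carrying
    only right-running and F- only left-running information."""
    c = np.sqrt(GAMMA * p / rho)
    m = u / c
    if m >= 1.0:          # supersonic to the right: all flux in F+
        return euler_flux(rho, u, p), np.zeros(3)
    if m <= -1.0:         # supersonic to the left: all flux in F-
        return np.zeros(3), euler_flux(rho, u, p)
    fluxes = []
    for s in (+1.0, -1.0):        # s = +1 gives F+, s = -1 gives F-
        f1 = s * 0.25 * rho * c * (m + s) ** 2
        ubar = (GAMMA - 1.0) * u + s * 2.0 * c
        f2 = f1 * ubar / GAMMA
        f3 = f1 * ubar ** 2 / (2.0 * (GAMMA ** 2 - 1.0))
        fluxes.append(np.array([f1, f2, f3]))
    return fluxes[0], fluxes[1]

# Subsonic state: the split parts must sum to the exact flux.
rho, u, p = 1.0, 100.0, 1.0e5
f_plus, f_minus = van_leer_split(rho, u, p)
```

In an upwind scheme, F+ is differenced from the left and F- from the right, which is what ties the numerical domain of dependence to the physical one as the abstract describes.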
Heat Flux Instrumentation Laboratory (HFIL)
Federal Laboratory Consortium — Description: The Heat Flux Instrumentation Laboratory is used to develop advanced, flexible, thin film gauge instrumentation for the Air Force Research Laboratory....
KoFlux: Korean Regional Flux Network in AsiaFlux
Kim, J.
2002-12-01
AsiaFlux, the Asian arm of FLUXNET, held the Second International Workshop on Advanced Flux Network and Flux Evaluation in Jeju Island, Korea on 9-11 January 2002. In order to facilitate comprehensive Asia-wide studies of ecosystem fluxes, the meeting launched KoFlux, a new Korean regional network of long-term micrometeorological flux sites. For a successful assessment of carbon exchange between terrestrial ecosystems and the atmosphere, an accurate measurement of surface fluxes of energy and water is one of the prerequisites. During the 7th Global Energy and Water Cycle Experiment (GEWEX) Asian Monsoon Experiment (GAME) held in Nagoya, Japan on 1-2 October 2001, the Implementation Committee of the Coordinated Enhanced Observing Period (CEOP) was established. One of the immediate tasks of CEOP was and is to identify the reference sites to monitor energy and water fluxes over the Asian continent. Subsequently, to advance the regional and global network of these reference sites in the context of both FLUXNET and CEOP, the Korean flux community has re-organized the available resources to establish a new regional network, KoFlux. We have built up domestic network sites (equipped with wind profiler and radiosonde measurements) over deciduous and coniferous forests, urban and rural rice paddies and coastal farmland. As an outreach through collaborations with research groups in Japan, China and Thailand, we also proposed international flux sites at ecologically and climatologically important locations such as a prairie on the Tibetan plateau and tropical forest with mixed and rapid land use change in northern Thailand. Several sites in KoFlux have already begun to accumulate interesting data, and some highlights are presented at the meeting. The science generated by flux networks on other continents has proven the worthiness of a global array of micrometeorological flux towers. It is our intent that the launch of KoFlux would encourage other scientists to initiate and
Quantum Computation and Algorithms
International Nuclear Information System (INIS)
Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.
1999-01-01
It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
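The recursion equations mentioned in this abstract can be written down explicitly for the case of a single marked element. The following sketch (notation assumed for illustration, not taken from the talk) iterates the standard amplitude recursion, where k is the amplitude of the marked state and l the common amplitude of the unmarked ones:

```python
import math

def grover_amplitudes(n_items, steps):
    """Iterate the exact Grover amplitude recursion for one marked
    element in an unsorted database of n_items entries."""
    k = l = 1.0 / math.sqrt(n_items)  # uniform initial superposition
    for _ in range(steps):
        # One Grover iteration = oracle sign flip + inversion about the mean
        k, l = ((n_items - 2) / n_items) * k + (2 * (n_items - 1) / n_items) * l, \
               ((n_items - 2) / n_items) * l - (2 / n_items) * k
    return k, l
```

For n_items = 64, roughly (π/4)·√64 ≈ 6 iterations bring the marked-state probability k² close to 1, illustrating the quadratic speedup over classical search.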
Is X-ray emissivity constant on magnetic flux surfaces?
International Nuclear Information System (INIS)
Granetz, R.S.; Borras, M.C.
1997-01-01
Knowledge of the elongations and shifts of internal magnetic flux surfaces can be used to determine the q profile in elongated tokamak plasmas. X-ray tomography is thought to be a reasonable technique for independently measuring internal flux surface shapes, because it is widely believed that X-ray emissivity should be constant on a magnetic flux surface. In the Alcator C-Mod tokamak, the X-ray tomography diagnostic system consists of four arrays of 38 chords each. A comparison of reconstructed X-ray contours with magnetic flux surfaces shows a small but consistent discrepancy in the radial profile of elongation. Numerous computational tests have been performed to verify these findings, including tests of the sensitivity to calibration and viewing geometry errors, the accuracy of the tomography reconstruction algorithms, and other subtler effects. We conclude that the discrepancy between the X-ray contours and the magnetic flux surfaces is real, leading to the conclusion that X-ray emissivity is not exactly constant on a flux surface. (orig.)
Evolutionary algorithm for optimization of nonimaging Fresnel lens geometry.
Yamada, N; Nishikawa, T
2010-06-21
In this study, an evolutionary algorithm (EA), which consists of genetic and immune algorithms, is introduced to design the optical geometry of a nonimaging Fresnel lens; this lens generates the uniform flux concentration required for a photovoltaic cell. Herein, a design procedure that incorporates a ray-tracing technique in the EA is described, and the validity of the design is demonstrated. The results show that the EA automatically generated a unique geometry of the Fresnel lens; the use of this geometry resulted in better uniform flux concentration with high optical efficiency.
Klamt, Steffen; Gerstl, Matthias P.; Jungreuthmayer, Christian; Mahadevan, Radhakrishnan; Müller, Stefan
2017-01-01
Elementary flux modes (EFMs) emerged as a formal concept to describe metabolic pathways and have become an established tool for constraint-based modeling and metabolic network analysis. EFMs are characteristic (support-minimal) vectors of the flux cone that contains all feasible steady-state flux vectors of a given metabolic network. EFMs account for (homogeneous) linear constraints arising from reaction irreversibilities and the assumption of steady state; however, other (inhomogeneous) linear constraints, such as minimal and maximal reaction rates frequently used by other constraint-based techniques (such as flux balance analysis [FBA]), cannot be directly integrated. These additional constraints further restrict the space of feasible flux vectors and turn the flux cone into a general flux polyhedron in which the concept of EFMs is not directly applicable anymore. For this reason, there has been a conceptual gap between EFM-based (pathway) analysis methods and linear optimization (FBA) techniques, as they operate on different geometric objects. One approach to overcome these limitations was proposed ten years ago and is based on the concept of elementary flux vectors (EFVs). Only recently has the community started to recognize the potential of EFVs for metabolic network analysis. In fact, EFVs exactly represent the conceptual development required to generalize the idea of EFMs from flux cones to flux polyhedra. This work aims to present a concise theoretical and practical introduction to EFVs that is accessible to a broad audience. We highlight the close relationship between EFMs and EFVs and demonstrate that almost all applications of EFMs (in flux cones) are possible for EFVs (in flux polyhedra) as well. In fact, certain properties can only be studied with EFVs. Thus, we conclude that EFVs provide a powerful and unifying framework for constraint-based modeling of metabolic networks. PMID:28406903
Nonequilibrium molecular dynamics theory, algorithms and applications
Todd, Billy D
2017-01-01
Written by two specialists with over twenty-five years of experience in the field, this valuable text presents a wide range of topics within the growing field of nonequilibrium molecular dynamics (NEMD). It introduces theories which are fundamental to the field - namely, nonequilibrium statistical mechanics and nonequilibrium thermodynamics - and provides state-of-the-art algorithms and advice for designing reliable NEMD code, as well as examining applications for both atomic and molecular fluids. It discusses homogenous and inhomogenous flows and pays considerable attention to highly confined fluids, such as nanofluidics. In addition to statistical mechanics and thermodynamics, the book covers the themes of temperature and thermodynamic fluxes and their computation, the theory and algorithms for homogenous shear and elongational flows, response theory and its applications, heat and mass transport algorithms, applications in molecular rheology, highly confined fluids (nanofluidics), the phenomenon of slip and...
Flux trapping in superconducting cavities
International Nuclear Information System (INIS)
Vallet, C.; Bolore, M.; Bonin, B.; Charrier, J.P.; Daillant, B.; Gratadour, J.; Koechlin, F.; Safa, H.
1992-01-01
The flux trapped in various field cooled Nb and Pb samples has been measured. For ambient fields smaller than 3 Gauss, 100% of the flux is trapped. The consequences of this result on the behavior of superconducting RF cavities are discussed. (author) 12 refs.; 2 figs
DEFF Research Database (Denmark)
Gonzalez-Franquesa, Alba; Patti, Mary-Elizabeth
2018-01-01
Merging transcriptomics or metabolomics data remains insufficient for metabolic flux estimation. Ramirez et al. integrate a genome-scale metabolic model with extracellular flux data to predict and validate metabolic differences between white and brown adipose tissue. This method allows both metab...
Data Acquisition and Flux Calculations
DEFF Research Database (Denmark)
Rebmann, C.; Kolle, O; Heinesch, B
2012-01-01
In this chapter, the basic theory and the procedures used to obtain turbulent fluxes of energy, mass, and momentum with the eddy covariance technique will be detailed. This includes a description of data acquisition, pretreatment of high-frequency data and flux calculation....
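At its core, the flux calculation described in this chapter is a covariance between the vertical wind speed and a scalar over an averaging period. A minimal sketch (variable names are illustrative, not from the chapter):

```python
def eddy_flux(w, c):
    """Eddy-covariance flux: mean product of the fluctuations w' and c'
    of vertical wind speed w and scalar concentration c."""
    n = len(w)
    w_mean = sum(w) / n
    c_mean = sum(c) / n
    # Flux = mean of (w - w_mean) * (c - c_mean) over the averaging period
    return sum((wi - w_mean) * (ci - c_mean) for wi, ci in zip(w, c)) / n
```

In practice the high-frequency series are first pretreated (despiking, coordinate rotation, detrending) before this covariance is taken.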
International Nuclear Information System (INIS)
Chandrasekharan, Shailesh
2000-01-01
Cluster algorithms have been recently used to eliminate sign problems that plague Monte-Carlo methods in a variety of systems. In particular such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions we discuss the ideas underlying the algorithm
Autonomous Star Tracker Algorithms
DEFF Research Database (Denmark)
Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren
1998-01-01
Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.
Solar proton fluxes since 1956
International Nuclear Information System (INIS)
Reedy, R.C.
1977-01-01
The fluxes of protons emitted during solar flares since 1956 were evaluated. The depth-versus-activity profiles of 56Co in several lunar rocks are consistent with the solar-proton fluxes detected by experiments on several satellites. Only about 20% of the solar-proton-induced activities of 22Na and 55Fe in lunar rocks from early Apollo missions were produced by protons emitted from the sun during solar cycle 20 (1965-1975). The depth-versus-activity data for these radionuclides in several lunar rocks were used to determine the fluxes of protons during solar cycle 19 (1954-1964). The average proton fluxes for cycle 19 are about five times those for both the last million years and for cycle 20. These solar-proton flux variations correlate with changes in sunspot activity
AmeriFlux Site and Data Exploration System
Krassovski, M.; Boden, T.; Yang, B.; Jackson, B.
2011-12-01
The AmeriFlux network was established in 1996. The network provides continuous observations of ecosystem-level exchanges of CO2, water, energy and momentum spanning diurnal, synoptic, seasonal, and interannual time scales. The current network, including both active and inactive sites, consists of 141 sites in North, Central, and South America. The Carbon Dioxide Information Analysis Center (CDIAC) at Oak Ridge National Laboratory (ORNL) provides data management support for the AmeriFlux network including long-term data storage and dissemination. AmeriFlux offers a broad suite of value-added data products: Level 1 data products at 30 minute or hourly time intervals provided by the site teams, Level 2 data processed by CDIAC, and Level 3 and 4 files created using CarboEurope algorithms. CDIAC has developed a relational database to house the vast array of AmeriFlux data and information and a web-based interface to the database, the AmeriFlux Site and Data Exploration System (http://ameriflux.ornl.gov), to help users worldwide identify, and more recently, download desired AmeriFlux data. AmeriFlux and CDIAC offer numerous value-added AmeriFlux data products (i.e., Level 1-4 data products, biological data) and most of these data products are or will be available through the new data system. Vital site information (e.g., location coordinates, dominant species, land-use history) is also displayed in the new system. The data system provides numerous ways to explore and extract data. Searches can be done by site, location, measurement status, available data products, vegetation types, and by reported measurements just to name a few. Data can be accessed through the links to full data sets reported by a site, organized by types of data products, or by creating customized datasets based on user search criteria. The new AmeriFlux download module contains features intended to ease compliance with the AmeriFlux fair-use data policy, acknowledge the contributions of submitting
Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa
2018-01-01
The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem,
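As a point of reference, a textbook version of LLL reduction (not the formally verified implementation this work concerns) can be sketched in a few lines; exact rational arithmetic avoids floating-point pitfalls at the cost of speed:

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL lattice basis reduction; rows of `basis` are the
    integer basis vectors. Unoptimized: Gram-Schmidt is recomputed."""
    b = [[Fraction(x) for x in row] for row in basis]
    n = len(b)

    def gso():
        # Gram-Schmidt orthogonalization with mu coefficients
        bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = list(b[i])
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        _, mu = gso()
        for j in range(k - 1, -1, -1):        # size reduction
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                _, mu = gso()
        bstar, mu = gso()
        # Lovász condition decides whether to advance or swap back
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in b]
```

The short, nearly orthogonal vectors it returns are exactly what makes LLL an approximation algorithm for the NP-hard shortest vector problem.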
Flux surface shape and current profile optimization in tokamaks
International Nuclear Information System (INIS)
Dobrott, D.R.; Miller, R.L.
1977-01-01
Axisymmetric tokamak equilibria of noncircular cross section are analyzed numerically to study the effects of flux surface shape and current profile on ideal and resistive interchange stability. Various current profiles are examined for circles, ellipses, dees, and doublets. A numerical code separately analyzes stability in the neighborhood of the magnetic axis and in the remainder of the plasma using the criteria of Mercier and Glasser, Greene, and Johnson. Results are interpreted in terms of flux surface averaged quantities such as magnetic well, shear, and the spatial variation in the magnetic field energy density over the cross section. The maximum stable β is found to vary significantly with shape and current profile. For current profiles varying linearly with poloidal flux, the highest β's found were for doublets. Finally, an algorithm is presented which optimizes the current profile for circles and dees by making the plasma everywhere marginally stable
Parameter optimization for surface flux transport models
Whitbread, T.; Yeates, A. R.; Muñoz-Jaramillo, A.; Petrie, G. J. D.
2017-11-01
Accurate prediction of solar activity calls for precise calibration of solar cycle models. Consequently we aim to find optimal parameters for models which describe the physical processes on the solar surface, which in turn act as proxies for what occurs in the interior and provide source terms for coronal models. We use a genetic algorithm to optimize surface flux transport models using National Solar Observatory (NSO) magnetogram data for Solar Cycle 23. This is applied to both a 1D model that inserts new magnetic flux in the form of idealized bipolar magnetic regions, and also to a 2D model that assimilates specific shapes of real active regions. The genetic algorithm searches for parameter sets (meridional flow speed and profile, supergranular diffusivity, initial magnetic field, and radial decay time) that produce the best fit between observed and simulated butterfly diagrams, weighted by a latitude-dependent error structure which reflects uncertainty in observations. Due to the easily adaptable nature of the 2D model, the optimization process is repeated for Cycles 21, 22, and 24 in order to analyse cycle-to-cycle variation of the optimal solution. We find that the ranges and optimal solutions for the various regimes are in reasonable agreement with results from the literature, both theoretical and observational. The optimal meridional flow profiles for each regime are almost entirely within observational bounds determined by magnetic feature tracking, with the 2D model being able to accommodate the mean observed profile more successfully. Differences between models appear to be important in deciding values for the diffusive and decay terms. In like fashion, differences in the behaviours of different solar cycles lead to contrasts in parameters defining the meridional flow and initial field strength.
Fractional flux excitations and flux creep in a superconducting film
International Nuclear Information System (INIS)
Lyuksyutov, I.F.
1995-01-01
We consider the transport properties of a modulated superconducting film in a magnetic field parallel to the film. Modulation can be either intrinsic, due to the layered structure of the high-T_c superconductors, or artificial, e.g. due to thickness modulation. This system has an infinite set of pinned phases. In the pinned phase the excitation of flux loops with a fractional number of flux quanta by the applied current j results in flux creep with a generated voltage V ∝ exp[-j_0/j]. (orig.)
Heat and Flux. Enabling the Wind Turbine Controller
Energy Technology Data Exchange (ETDEWEB)
Schaak, P. [ECN Wind Energy, Petten (Netherlands)
2006-09-15
In the years 1999-2003 ECN invented and patented the technique 'Heat and Flux'. The idea behind Heat and Flux is that operating turbines at the windward side of a wind farm more transparently than usual, i.e. realising an axial induction factor below the Lanchester-Betz optimum of 1/3, should raise net farm production and lower mechanical turbine loading without causing drawbacks. For scaled farms in a boundary layer wind tunnel this hypothesis has been proved in previous projects. To enable alternative turbine transparencies, the wind turbine controller must support the additional control aim 'desired transparency'. During this study we have determined a general method to design a transparency control algorithm. This method has been implemented in ECN's 'Control Tool' for designing wind turbine control algorithms. The aero-elastic wind turbine code Phatas has been used to verify the resulting control algorithm. Heat and Flux does not fundamentally change the control of horizontal axis variable speed wind turbines. The axial induction can be reduced by an offset on blade pitch or generator torque. Weighing reliability against performance profits, it appeared advisable to adapt only blade angle control.
Nature-inspired optimization algorithms
Yang, Xin-She
2014-01-01
Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning
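Of the algorithms surveyed in this book, simulated annealing is the simplest to sketch. The toy implementation below (objective and parameters chosen purely for illustration) shows the characteristic structure shared by many of these methods: accept worse moves with probability exp(-Δ/T) under a cooling schedule.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.95, iters=2000, seed=0):
    """Minimal simulated annealing for a 1-D objective f: propose a
    random move, always accept improvements, accept worse moves with
    probability exp(-delta/T), and cool T geometrically."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        y = x + rng.uniform(-step, step)      # random neighbour
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                           # geometric cooling schedule
    return best, fbest
```

Parameter tuning (step size, initial temperature, cooling rate), one of the book's listed topics, is exactly what governs how well such a run balances exploration against convergence.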
VISUALIZATION OF PAGERANK ALGORITHM
Perhaj, Ervin
2013-01-01
The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them. The user enters the list through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process, until the difference of PageRank va...
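The computation the thesis visualizes can be sketched as a power iteration over the link graph. A minimal, illustrative implementation (not the thesis's own code) that repeats the update until the PageRank values stop changing:

```python
def pagerank(links, d=0.85, tol=1e-10):
    """Power iteration for PageRank. `links` maps each page to the list
    of pages it links to; d is the damping factor."""
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}
    while True:
        new = {p: (1 - d) / n for p in pages}   # teleportation term
        for p, outs in links.items():
            if outs:
                share = d * pr[p] / len(outs)   # split rank over out-links
                for q in outs:
                    new[q] += share
            else:                               # dangling page: spread evenly
                for q in pages:
                    new[q] += d * pr[p] / n
        if max(abs(new[p] - pr[p]) for p in pages) < tol:
            return new
        pr = new
```

Example: `pagerank({'a': ['b'], 'b': ['a']})` converges to 0.5 for each page, since the two-page cycle is symmetric.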
Akl, Selim G
1985-01-01
Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the
Modified Clipped LMS Algorithm
Directory of Open Access Journals (Sweden)
Lotfizad Mojtaba
2005-01-01
A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves the threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
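The kind of weight update described here can be illustrated schematically. This sketch is an assumption-laden illustration, not the paper's exact formulation: it quantizes the input to three levels inside an otherwise standard LMS update, while the error itself still uses the full input.

```python
def clip3(x, t):
    """Three-level quantizer: -1, 0, +1 with a dead zone |x| <= t."""
    return 0.0 if abs(x) <= t else (1.0 if x > 0 else -1.0)

def mclms_step(w, x, d, mu, t):
    """One weight update of a clipped-LMS-style filter (illustrative):
    the output and error use the raw input tap vector x, the update
    direction uses its three-level quantized version."""
    y = sum(wi * xi for wi, xi in zip(w, x))    # filter output
    e = d - y                                    # estimation error
    w_new = [wi + mu * e * clip3(xi, t) for wi, xi in zip(w, x)]
    return w_new, e
```

Because the update multiplies the error only by -1, 0, or +1 per tap, each weight update needs one multiplication (mu * e) instead of one per tap, which is the source of the reduced computational complexity claimed in the abstract.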
Monte Carlo surface flux tallies
International Nuclear Information System (INIS)
Favorite, Jeffrey A.
2010-01-01
Particle fluxes on surfaces are difficult to calculate with Monte Carlo codes because the score requires a division by the surface-crossing angle cosine, and grazing angles lead to inaccuracies. We revisit the standard practice of dividing by half of a cosine 'cutoff' for particles whose surface-crossing cosines are below the cutoff. The theory behind this approximation is sound, but the application of the theory to all possible situations does not account for two implicit assumptions: (1) the grazing band must be symmetric about 0, and (2) a single linear expansion for the angular flux must be applied in the entire grazing band. These assumptions are violated in common circumstances; for example, for separate in-going and out-going flux tallies on internal surfaces, and for out-going flux tallies on external surfaces. In some situations, dividing by two-thirds of the cosine cutoff is more appropriate. If users were able to control both the cosine cutoff and the substitute value, they could use these parameters to make accurate surface flux tallies. The procedure is demonstrated in a test problem in which Monte Carlo surface fluxes in cosine bins are converted to angular fluxes and compared with the results of a discrete ordinates calculation.
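The standard practice the paper revisits fits in a few lines. A minimal sketch of a surface-crossing flux score with a cosine cutoff (parameter names are illustrative, not from any particular code):

```python
def surface_flux_score(weight, mu, mu_cutoff=0.1):
    """Surface-crossing flux estimator: score weight / |cos(angle)| for a
    particle crossing with direction cosine mu, substituting
    weight / (mu_cutoff / 2) for grazing crossings with |mu| < mu_cutoff."""
    if abs(mu) >= mu_cutoff:
        return weight / abs(mu)
    # Standard grazing-angle substitution: divide by half the cutoff
    return weight / (mu_cutoff / 2.0)
```

The paper's point is that this half-cutoff substitution implicitly assumes a grazing band symmetric about mu = 0 with one linear angular-flux expansion across it; for one-sided tallies (e.g. out-going flux on an external surface) dividing by two-thirds of the cutoff can be the more appropriate choice.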
International Nuclear Information System (INIS)
Kulacsy, K.; Lux, I.
1997-01-01
A new, approximate method is given to calculate the in-core flux from the current of SPNDs, with a delay of only a few seconds. The stability of this stepwise algorithm is proven to be satisfactory, and the results of tests performed both on synthetic and on real data are presented. The reconstructed flux is found to follow both steady state and transient fluxes well. (author)
Development of a Thermal Equilibrium Prediction Algorithm
International Nuclear Information System (INIS)
Aviles-Ramos, Cuauhtemoc
2002-01-01
A thermal equilibrium prediction algorithm is developed and tested using a heat conduction model and data sets from calorimetric measurements. The physical model used in this study is the exact solution of a system of two partial differential equations that govern the heat conduction in the calorimeter. A multi-parameter estimation technique is developed and implemented to estimate the effective volumetric heat generation and thermal diffusivity in the calorimeter measurement chamber, and the effective thermal diffusivity of the heat flux sensor. These effective properties and the exact solution are used to predict the heat flux sensor voltage readings at thermal equilibrium. Thermal equilibrium predictions are carried out considering only 20% of the total measurement time required for thermal equilibrium. A comparison of the predicted and experimental thermal equilibrium voltages shows that the average percentage error from 330 data sets is only 0.1%. The data sets used in this study come from calorimeters of different sizes that use different kinds of heat flux sensors. Furthermore, different nuclear material matrices were assayed in the process of generating these data sets. This study shows that the integration of this algorithm into the calorimeter data acquisition software will result in an 80% reduction of measurement time. This reduction results in a significant cutback in operational costs for the calorimetric assay of nuclear materials. (authors)
Semioptimal practicable algorithmic cooling
International Nuclear Information System (INIS)
Elias, Yuval; Mor, Tal; Weinstein, Yossi
2011-01-01
Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
Conical electromagnetic radiation flux concentrator
Miller, E. R.
1972-01-01
Concentrator provides method of concentrating a beam of electromagnetic radiation into a smaller beam, presenting a higher flux density. Smaller beam may be made larger by sending radiation through the device in the reverse direction.
Physics of Magnetic Flux Ropes
Priest, E R; Lee, L C
1990-01-01
The American Geophysical Union Chapman Conference on the Physics of Magnetic Flux Ropes was held at the Hamilton Princess Hotel, Hamilton, Bermuda on March 27–31, 1989. Topics discussed ranged from solar flux ropes, such as photospheric flux tubes, coronal loops and prominences, to flux ropes in the solar wind, in planetary ionospheres, at the Earth's magnetopause, in the geomagnetic tail and deep in the Earth's magnetosphere. Papers presented at that conference form the nucleus of this book, but the book is more than just a proceedings of the conference. We have solicited articles from all interested in this topic. Thus, there is some material in the book not discussed at the conference. Even in the case of papers presented at the conference, there is generally a much more detailed and rigorous presentation than was possible in the time allowed by the oral and poster presentations.
Notes on neutron flux measurement
International Nuclear Information System (INIS)
Alcala Ruiz, F.
1984-01-01
The main purpose of this work is to provide a useful guide to carry out typical neutron flux measurements. Although the foil activation technique is used in the majority of cases, other techniques, such as those based on fission chambers and self-powered neutron detectors, are also shown. Special interest is given to the description and application of corrections in the measurement of relative and absolute induced activities by several types of detectors (scintillators, G-M and gas proportional counters). The thermal and epithermal neutron fluxes, as determined in this work, are conventional or effective (Westcott fluxes), which are extensively used by reactor experimentalists; however, we also give some expressions relating them to the integrated neutron fluxes, which are used in neutron calculations. (Author) 16 refs
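The foil activation technique mentioned here rests on the activation equation A = φ·σ·N·(1 − e^(−λt)). A minimal sketch of inferring a conventional flux from a measured end-of-irradiation activity (all numeric values below are illustrative, not from the text):

```python
import math

def induced_activity(phi, sigma, n_atoms, half_life, t_irr):
    """Activity at end of irradiation: A = phi * sigma * N * (1 - exp(-lambda*t)),
    with phi the conventional flux and sigma the effective cross section."""
    lam = math.log(2.0) / half_life
    return phi * sigma * n_atoms * (1.0 - math.exp(-lam * t_irr))

def flux_from_activity(activity, sigma, n_atoms, half_life, t_irr):
    """Invert the activation equation to recover the flux from a
    measured (correction-applied) activity."""
    lam = math.log(2.0) / half_life
    return activity / (sigma * n_atoms * (1.0 - math.exp(-lam * t_irr)))
```

For irradiation times long compared with the half-life the saturation factor (1 − e^(−λt)) approaches 1 and the activity saturates at φ·σ·N.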
Specification of ROP flux shape
International Nuclear Information System (INIS)
Min, Byung Joo; Gray, A.
1997-06-01
The CANDU 9 480/SEU core uses 0.9% SEU (Slightly Enriched Uranium) fuel. The use of SEU fuel enables the reactor to increase the radial power form factor from 0.865, which is typical in current natural uranium CANDU reactors, to 0.97 in the nominal CANDU 9 480/SEU core. The difference is a 12% increase in reactor power. An additional 5% increase can be achieved due to a reduced refuelling ripple. The channel power limits were also increased by 3% for a total reactor power increase of 20%. This report describes the calculation of neutron flux distributions in the CANDU 9 480/SEU core under conditions specified by the C and I engineers. The RFSP code was used to calculate the neutron flux shapes for ROP analysis. Detailed flux values at numerous potential detector sites were calculated for each flux shape. (author). 6 tabs., 70 figs., 4 refs
Specification of ROP flux shape
Energy Technology Data Exchange (ETDEWEB)
Min, Byung Joo [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); Gray, A [Atomic Energy of Canada Ltd., Chalk River, ON (Canada)
1997-06-01
The CANDU 9 480/SEU core uses 0.9% SEU (Slightly Enriched Uranium) fuel. The use of SEU fuel enables the reactor to increase the radial power form factor from 0.865, which is typical in current natural uranium CANDU reactors, to 0.97 in the nominal CANDU 9 480/SEU core. The difference is a 12% increase in reactor power. An additional 5% increase can be achieved due to a reduced refuelling ripple. The channel power limits were also increased by 3% for a total reactor power increase of 20%. This report describes the calculation of neutron flux distributions in the CANDU 9 480/SEU core under conditions specified by the C and I engineers. The RFSP code was used to calculate the neutron flux shapes for ROP analysis. Detailed flux values at numerous potential detector sites were calculated for each flux shape. (author). 6 tabs., 70 figs., 4 refs.
High Flux Isotope Reactor (HFIR)
Federal Laboratory Consortium — The HFIR at Oak Ridge National Laboratory is a light-water cooled and moderated reactor that is the United States’ highest flux reactor-based neutron source. HFIR...
Flux networks in metabolic graphs
International Nuclear Information System (INIS)
Warren, P B; Queiros, S M Duarte; Jones, J L
2009-01-01
A metabolic model can be represented as a bipartite graph comprising linked reaction and metabolite nodes. Here it is shown how a network of conserved fluxes can be assigned to the edges of such a graph by combining the reaction fluxes with a conserved metabolite property such as molecular weight. A similar flux network can be constructed by combining the primal and dual solutions to the linear programming problem that typically arises in constraint-based modelling. Such constructions may help with the visualization of flux distributions in complex metabolic networks. The analysis also explains the strong correlation observed between metabolite shadow prices (the dual linear programming variables) and conserved metabolite properties. The methods were applied to recent metabolic models for Escherichia coli, Saccharomyces cerevisiae and Methanosarcina barkeri. Detailed results are reported for E. coli; similar results were found for other organisms
Boundary fluxes for nonlocal diffusion
Cortazar, Carmen; Elgueta, Manuel; Rossi, Julio D.; Wolanski, Noemi
We study a nonlocal diffusion operator in a bounded smooth domain prescribing the flux through the boundary. This problem may be seen as a generalization of the usual Neumann problem for the heat equation. First, we prove existence, uniqueness and a comparison principle. Next, we study the behavior of solutions for some prescribed boundary data including blowing up ones. Finally, we look at a nonlinear flux boundary condition.
International Nuclear Information System (INIS)
Wotzak, G.P.; Kostin, M.D.
1976-01-01
The process in which hot atoms collide with thermal atoms of a gas, transfer kinetic energy to them, and produce additional hot atoms is investigated. A stochastic method is used to obtain numerical results for the spatial and time dependent energy flux of hot atoms in a gas. The results indicate that in hot atom systems a front followed by an intense energy flux of hot atoms may develop
Flux tubes at finite temperature
Energy Technology Data Exchange (ETDEWEB)
Cea, Paolo [INFN, Sezione di Bari,Via G. Amendola 173, I-70126 Bari (Italy); Dipartimento di Fisica dell’Università di Bari,Via G. Amendola 173, I-70126 Bari (Italy); Cosmai, Leonardo [INFN, Sezione di Bari,Via G. Amendola 173, I-70126 Bari (Italy); Cuteri, Francesca; Papa, Alessandro [Dipartimento di Fisica, Università della Calabria & INFN-Cosenza,Ponte Bucci, cubo 31C, I-87036 Rende (Cosenza) (Italy)
2016-06-07
The chromoelectric field generated by a static quark-antiquark pair, with its peculiar tube-like shape, can be nicely described, at zero temperature, within the dual superconductor scenario for the QCD confining vacuum. In this work we investigate, by lattice Monte Carlo simulations of the SU(3) pure gauge theory, the fate of chromoelectric flux tubes across the deconfinement transition. We find that, if the distance between the static sources is kept fixed at about 0.76 fm ≃ 1.6/√σ and the temperature is increased towards and above the deconfinement temperature T_c, the amplitude of the field inside the flux tube gets smaller, while the shape of the flux tube does not vary appreciably across deconfinement. This scenario with flux-tube “evaporation” above T_c has no correspondence in ordinary (type-II) superconductivity, where instead the transition to the phase with normal conductivity is characterized by a divergent fattening of flux tubes as the transition temperature is approached from below. We present also some evidence about the existence of flux-tube structures in the magnetic sector of the theory in the deconfined phase.
Energy Technology Data Exchange (ETDEWEB)
Lombardo, Davide M. [Dipartimento di Fisica, Università di Roma “La Sapienza”,Piazzale Aldo Moro 2, 00185 Roma (Italy); Riccioni, Fabio [INFN - Sezione di Roma, Dipartimento di Fisica, Università di Roma “La Sapienza”,Piazzale Aldo Moro 2, 00185 Roma (Italy); Risoli, Stefano [Dipartimento di Fisica, Università di Roma “La Sapienza”,Piazzale Aldo Moro 2, 00185 Roma (Italy); INFN - Sezione di Roma, Dipartimento di Fisica, Università di Roma “La Sapienza”,Piazzale Aldo Moro 2, 00185 Roma (Italy)
2016-12-21
We consider the N=1 superpotential generated in type-II orientifold models by non-geometric fluxes. In particular, we focus on the family of P fluxes, that are related by T-duality transformations to the S-dual of the Q flux. We determine the general rule that transforms a given flux in this family under a single T-duality transformation. This rule allows one to derive a complete expression for the superpotential for both the IIA and the IIB theory for the particular case of a T{sup 6}/[ℤ{sub 2}×ℤ{sub 2}] orientifold. We then consider how these fluxes modify the generalised Bianchi identities. In particular, we derive a fully consistent set of quadratic constraints coming from the NS-NS Bianchi identities. On the other hand, the P flux Bianchi identities induce tadpoles, and we determine a set of exotic branes that can be consistently included in order to cancel them. This is achieved by determining a universal transformation rule under T-duality satisfied by all the branes in string theory.
International Nuclear Information System (INIS)
Lombardo, Davide M.; Riccioni, Fabio; Risoli, Stefano
2016-01-01
We consider the N=1 superpotential generated in type-II orientifold models by non-geometric fluxes. In particular, we focus on the family of P fluxes, that are related by T-duality transformations to the S-dual of the Q flux. We determine the general rule that transforms a given flux in this family under a single T-duality transformation. This rule allows one to derive a complete expression for the superpotential for both the IIA and the IIB theory for the particular case of a T^6/[ℤ_2×ℤ_2] orientifold. We then consider how these fluxes modify the generalised Bianchi identities. In particular, we derive a fully consistent set of quadratic constraints coming from the NS-NS Bianchi identities. On the other hand, the P flux Bianchi identities induce tadpoles, and we determine a set of exotic branes that can be consistently included in order to cancel them. This is achieved by determining a universal transformation rule under T-duality satisfied by all the branes in string theory.
New resonance cross section calculational algorithms
International Nuclear Information System (INIS)
Mathews, D.R.
1978-01-01
Improved resonance cross section calculational algorithms were developed and tested for inclusion in a fast reactor version of the MICROX code. The resonance energy portion of the MICROX code solves the neutron slowing-down equations for a two-region lattice cell on a very detailed energy grid (about 14,500 energies). In the MICROX algorithms, the exact P0 elastic scattering kernels are replaced by synthetic (approximate) elastic scattering kernels which permit the use of an efficient and numerically stable recursion relation solution of the slowing-down equation. In the work described here, the MICROX algorithms were modified as follows: an additional delta function term was included in the P0 synthetic scattering kernel. The additional delta function term allows one more moment of the exact elastic scattering kernel to be preserved without much extra computational effort. With the improved synthetic scattering kernel, the flux returns more closely to the exact flux below a resonance than with the original MICROX kernel. The slowing-down calculation was extended to a true B1 hyperfine energy grid calculation in each region by using P1 synthetic scattering kernels and transport-corrected P0 collision probabilities to couple the two regions. 1 figure, 6 tables
Introduction to Evolutionary Algorithms
Yu, Xinjie
2010-01-01
Evolutionary algorithms (EAs) are becoming increasingly attractive for researchers from various disciplines, such as operations research, computer science, industrial engineering, electrical engineering, social science, economics, etc. This book presents an insightful, comprehensive, and up-to-date treatment of EAs, such as genetic algorithms, differential evolution, evolution strategy, constraint optimization, multimodal optimization, multiobjective optimization, combinatorial optimization, evolvable hardware, estimation of distribution algorithms, ant colony optimization, particle swarm opti
Recursive forgetting algorithms
DEFF Research Database (Denmark)
Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan
1992-01-01
In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm.
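The uniform-forgetting special case can be sketched as a scalar recursive least squares update (the model, forgetting factor, and data below are illustrative assumptions; the paper's selective scheme is non-uniform in time and space):

```python
def rls_forgetting(data, lam=0.95, theta0=0.0, p0=100.0):
    """Scalar recursive least squares with a uniform exponential forgetting
    factor lam (0 < lam <= 1).  data holds (phi, y) pairs for the model
    y = phi * theta + noise; smaller lam discounts old samples faster."""
    theta, p = theta0, p0
    for phi, y in data:
        k = p * phi / (lam + phi * p * phi)  # update gain
        theta += k * (y - phi * theta)       # correct estimate by prediction error
        p = (p - k * phi * p) / lam          # dividing by lam keeps the gain alive
    return theta

# the true parameter jumps from 1.0 to 3.0 halfway through the record;
# forgetting lets the estimate track the change
data = [(1.0, 1.0)] * 50 + [(1.0, 3.0)] * 50
print(rls_forgetting(data))
```

With lam = 1 the gain would decay to zero and the estimate would freeze near the time average; the forgetting factor is what keeps the estimator adaptive.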
Explaining algorithms using metaphors
Forišek, Michal
2013-01-01
There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo
Algorithms in Algebraic Geometry
Dickenstein, Alicia; Sommese, Andrew J
2008-01-01
In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its
Woo, Andrew
2012-01-01
Digital shadow generation continues to be an important aspect of visualization and visual effects in film, games, simulations, and scientific applications. This resource offers a thorough picture of the motivations, complexities, and categorized algorithms available to generate digital shadows. From general fundamentals to specific applications, it addresses shadow algorithms and how to manage huge data sets from a shadow perspective. The book also examines the use of shadow algorithms in industrial applications, in terms of what algorithms are used and what software is applicable.
Spectral Decomposition Algorithm (SDA)
National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...
Quick fuzzy backpropagation algorithm.
Nikov, A; Stoeva, S
2001-03-01
A modification of the fuzzy backpropagation (FBP) algorithm called the QuickFBP algorithm is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and the FBP algorithm, respectively, are defined and proved for: (1) single output neural networks in the case of training patterns with different targets; and (2) multiple output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weights training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP compared to the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in areas such as adaptive and adaptable interactive systems and data mining.
Portfolios of quantum algorithms.
Maurer, S M; Hogg, T; Huberman, B A
2001-12-17
Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to the NP-complete problems such as 3-satisfiability.
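The core portfolio effect — running independent stochastic solvers in parallel and stopping at the first success lowers both the expected completion time and its spread — can be illustrated with geometrically distributed toy runtimes (the success probability and sample sizes below are illustrative assumptions, not results from the paper):

```python
import random

def geometric(rng, p):
    """Number of independent trials, each succeeding with probability p,
    until the first success -- a toy runtime of one stochastic solver."""
    t = 1
    while rng.random() >= p:
        t += 1
    return t

def portfolio_time(rng, p, k):
    """Run k independent copies in parallel; the portfolio finishes as
    soon as the first copy succeeds."""
    return min(geometric(rng, p) for _ in range(k))

rng = random.Random(7)
single = [geometric(rng, 0.05) for _ in range(2000)]
pair = [portfolio_time(rng, 0.05, 2) for _ in range(2000)]
mean = lambda xs: sum(xs) / len(xs)
print(mean(single), mean(pair))  # the two-copy portfolio finishes sooner on average
```

For a per-step success probability p, the expected single-run time is 1/p while the two-copy portfolio expects roughly 1/(1-(1-p)^2), i.e. nearly half, with a correspondingly smaller variance — the trade-off the paper quantifies for quantum algorithm portfolios.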
Kriging-based algorithm for nuclear reactor neutronic design optimization
International Nuclear Information System (INIS)
Kempf, Stephanie; Forget, Benoit; Hu, Lin-Wen
2012-01-01
Highlights: ► A Kriging-based algorithm was selected to guide research reactor optimization. ► We examined impacts of parameter values upon the algorithm. ► The best parameter values were incorporated into a set of best practices. ► Algorithm with best practices used to optimize thermal flux of concept. ► Final design produces thermal flux 30% higher than other 5 MW reactors. - Abstract: Kriging, a geospatial interpolation technique, has been used in the present work to drive a search-and-optimization algorithm which produces the optimum geometric parameters for a 5 MW research reactor design. The technique has been demonstrated to produce an optimal neutronic solution after a relatively small number of core calculations. It has additionally been successful in producing a design which significantly improves thermal neutron fluxes by 30% over existing reactors of the same power rating. Best practices for use of this algorithm in reactor design were identified and indicated the importance of selecting proper correlation functions.
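The interpolation step at the heart of such a search can be sketched with a one-dimensional simple-kriging predictor built on a Gaussian correlation function (a toy stand-in: the kernel, length scale, and test function are assumptions, not the paper's reactor-design code):

```python
import math

def rbf(a, b, length=1.0):
    """Gaussian (squared-exponential) correlation between two sample sites."""
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def krige(xs, ys, xq, length=1.0, nugget=1e-9):
    """Simple-kriging prediction at xq from samples (xs, ys): a surrogate
    that interpolates known core calculations, which an optimizer can then
    probe cheaply instead of rerunning the expensive simulation."""
    K = [[rbf(a, b, length) + (nugget if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    w = solve(K, ys)
    return sum(wi * rbf(x, xq, length) for wi, x in zip(w, xs))

xs = [0.0, 1.0, 2.0, 3.0]
ys = [math.sin(x) for x in xs]
print(krige(xs, ys, 1.5))
```

The surrogate reproduces the sampled responses exactly and interpolates smoothly between them, which is why a search driven by it needs relatively few expensive core calculations.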
Algorithm 426: Merge sort algorithm [M1]
Bron, C.
1972-01-01
Sorting by means of a two-way merge has a reputation of requiring a clerically complicated and cumbersome program. This ALGOL 60 procedure demonstrates that, using recursion, an elegant and efficient algorithm can be designed, the correctness of which is easily proved [2]. Sorting n objects gives
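The brevity of the recursive formulation can be shown with a Python sketch of a recursive two-way merge sort (an illustration in the spirit of the procedure, not a transliteration of the ALGOL 60 text):

```python
def merge_sort(xs):
    """Recursive two-way merge sort: split, sort each half, merge."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):  # merge the two sorted halves
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])   # append whichever half has elements left
    out.extend(right[j:])
    return out

print(merge_sort([5, 1, 4, 2, 3]))  # → [1, 2, 3, 4, 5]
```

Correctness follows by induction on the list length, mirroring the easy proof the abstract alludes to: each half is sorted by hypothesis, and the merge preserves order.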
Directory of Open Access Journals (Sweden)
Chang-Seok Park
2017-09-01
Full Text Available This paper presents a torque error compensation algorithm for a surface-mounted permanent magnet synchronous machine (SPMSM) through real-time permanent magnet (PM) flux linkage estimation at various temperature conditions, from medium to rated speed. As is well known, the PM flux linkage in SPMSMs varies with the thermal conditions. Since the maximum torque per ampere look-up table, a control method used for copper loss minimization, is developed based on the estimated PM flux linkage, variation of the PM flux linkage results in undesired torque development in SPMSM drives. In this paper, the PM flux linkage is estimated through a stator flux linkage observer and the torque error is compensated in real time using the estimated PM flux linkage. The proposed torque error compensation algorithm is verified in simulation and experiment.
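The effect of compensating with an estimated rather than nominal flux linkage can be sketched from the SPMSM torque relation T = 1.5·p·λ_pm·i_q (the numbers and function names below are illustrative assumptions, not the paper's drive parameters or observer):

```python
def iq_reference(torque_ref, pole_pairs, flux_est):
    """Current command derived from the flux-linkage value the controller
    believes, via T = 1.5 * p * lambda_pm * iq."""
    return torque_ref / (1.5 * pole_pairs * flux_est)

def torque(pole_pairs, flux_actual, iq):
    """Torque actually developed, governed by the true (hot) PM flux."""
    return 1.5 * pole_pairs * flux_actual * iq

# hot magnet: the actual PM flux is 10 % below the nominal (cold) value
nominal, actual = 0.10, 0.09
t_ref, p = 5.0, 4
uncomp = torque(p, actual, iq_reference(t_ref, p, nominal))  # cold value used
comp = torque(p, actual, iq_reference(t_ref, p, actual))     # estimate used
print(uncomp, comp)  # under-produced torque (4.5) vs. compensated (5.0)
```

The uncompensated command under-produces torque by exactly the fractional flux error, which is the error the observer-based estimate removes.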
Higher-spin cluster algorithms: the Heisenberg spin and U(1) quantum link models
Energy Technology Data Exchange (ETDEWEB)
Chudnovsky, V
2000-03-01
I discuss here how the highly-efficient spin-1/2 cluster algorithm for the Heisenberg antiferromagnet may be extended to higher-dimensional representations; some numerical results are provided. The same extensions can be used for the U(1) flux cluster algorithm, but have not yielded signals of the desired Coulomb phase of the system.
Higher-spin cluster algorithms: the Heisenberg spin and U(1) quantum link models
International Nuclear Information System (INIS)
Chudnovsky, V.
2000-01-01
I discuss here how the highly-efficient spin-1/2 cluster algorithm for the Heisenberg antiferromagnet may be extended to higher-dimensional representations; some numerical results are provided. The same extensions can be used for the U(1) flux cluster algorithm, but have not yielded signals of the desired Coulomb phase of the system
Practical modifications to photon planning algorithms to handle asymmetric collimators. 142
International Nuclear Information System (INIS)
Stevens, P.H.
1987-01-01
Current linear accelerators have flattening filters designed to give a uniform dose at depth in water. The resulting variation in photon flux and mean energy across the beam must be accounted for when designing algorithms that include dependent movement of collimators. A suitable algorithm is described based on measurements at 6 and 24 MeV. 2 refs.; 3 figs.; 1 table
Flux flow and flux dynamics in high-Tc superconductors
International Nuclear Information System (INIS)
Bennett, L.H.; Turchinskaya, M.; Swartzendruber, L.J.; Roitburd, A.; Lundy, D.; Ritter, J.; Kaiser, D.L.
1991-01-01
Because high temperature superconductors, including YBCO and BSCCO, are type-II superconductors with relatively low H(sub c 1) values and high H(sub c 2) values, they will be in a critical state for many of their applications. In the critical state, with the applied field between H(sub c 1) and H(sub c 2), flux lines have penetrated the material, can form a flux lattice, and can be pinned by structural defects, chemical inhomogeneities, and impurities. A detailed knowledge of how flux penetrates the material, its behavior under the influence of applied fields and current flow, and the effect of material processing on these properties is required in order to apply, and to improve the properties of, these superconductors. When the applied field is changed rapidly, the time dependence of flux change can be divided into three regions: an initial region which occurs very rapidly, a second region in which the magnetization has a ln(t) behavior, and a saturation region at very long times. A critical field for depinning, H(sub c,p), is defined as the field at which the hysteresis loop changes from irreversible to reversible. As a function of temperature, it is found that H(sub c,p) is well described by a power law with an exponent between 1.5 and 2.5. The behavior of H(sub c,p) for various materials and its relationship to flux flow and flux dynamics are discussed
Prediction of soil CO2 flux in sugarcane management systems using the Random Forest approach
Directory of Open Access Journals (Sweden)
Rose Luiza Moraes Tavares
Full Text Available ABSTRACT: The Random Forest algorithm is a data mining technique used for classifying attributes in order of importance to explain the variation in an attribute-target, such as soil CO2 flux. This study aimed to identify variables that predict soil CO2 flux in sugarcane management systems through the machine-learning algorithm called Random Forest. Two different sugarcane management areas in the state of São Paulo, Brazil, were selected: burned and green. In each area, we assembled a sampling grid with 81 georeferenced points to assess soil CO2 flux through an automated portable soil gas chamber with infrared measuring spectroscopy during the dry season of 2011 and the rainy season of 2012. In addition, we sampled the soil to evaluate physical, chemical, and microbiological attributes. For data interpretation, we used the Random Forest algorithm, based on a combination of decision trees (machine learning algorithms in which every tree depends on the values of a random vector sampled independently, with the same distribution, for all the trees of the forest. The results indicated that clay content in the soil was the most important attribute to explain the CO2 flux in the areas studied during the evaluated period. The use of the Random Forest algorithm originated a model with a good fit (R2 = 0.80 for predicted and observed values.
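The idea of ranking attributes by how much they reduce the variance of the target can be sketched with a single-level split on synthetic data (a greatly simplified proxy for the impurity-based importance a Random Forest reports; the data, the "clay" naming, and the noise levels are illustrative assumptions):

```python
import random

def variance(ys):
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def split_gain(xs, ys):
    """Best variance reduction from one threshold split on one feature:
    a one-level stand-in for impurity-based Random Forest importance."""
    best = 0.0
    order = sorted(zip(xs, ys))
    for i in range(1, len(order)):
        left = [y for _, y in order[:i]]
        right = [y for _, y in order[i:]]
        gain = variance(ys) - (len(left) * variance(left)
                               + len(right) * variance(right)) / len(ys)
        best = max(best, gain)
    return best

rng = random.Random(0)
clay = [rng.random() for _ in range(80)]             # informative attribute
noise = [rng.random() for _ in range(80)]            # uninformative attribute
co2 = [2.0 * c + 0.1 * rng.random() for c in clay]   # flux driven by clay
gains = {"clay": split_gain(clay, co2), "noise": split_gain(noise, co2)}
print(gains)
```

The informative attribute yields a far larger variance reduction than the uninformative one, which is the signal a forest of such trees aggregates into a feature ranking.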
Composite Differential Search Algorithm
Directory of Open Access Journals (Sweden)
Bo Liu
2014-01-01
Full Text Available Differential search algorithm (DS is a relatively new evolutionary algorithm inspired by the Brownian-like random-walk movement which is used by an organism to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search algorithms, namely “DS/rand/1,” “DS/rand/2,” “DS/current to rand/1,” and “DS/current to rand/2” to search the new space and enhance the convergence rate for the global optimization problem. In order to verify the performance of different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed algorithm performs better than, or at least comparably to, the original algorithm when considering the quality of the solution obtained. However, these schemes still cannot achieve the best solution for all functions. In order to further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS is proposed in this paper. This new algorithm combines three newly proposed search schemes, “DS/rand/1,” “DS/rand/2,” and “DS/current to rand/1,” with three control parameters, using a random method to generate the offspring. Experimental results show that CDS has a faster convergence rate and better search ability based on the 23 benchmark functions.
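The "rand/1"-style donor construction that such scheme names refer to can be sketched as follows (a hedged illustration borrowing the differential-evolution form x_r1 + F·(x_r2 − x_r3); the population size, scale factor F, and test function are assumptions, not the paper's exact CDS):

```python
import random

def rand_1(pop, f, rng):
    """'rand/1' donor: x_r1 + F * (x_r2 - x_r3), built from three distinct
    randomly chosen population members."""
    r1, r2, r3 = rng.sample(range(len(pop)), 3)
    return [a + f * (b - c) for a, b, c in zip(pop[r1], pop[r2], pop[r3])]

def sphere(x):  # toy objective to minimise
    return sum(v * v for v in x)

rng = random.Random(3)
dim, npop = 5, 20
pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(npop)]
init_best = min(map(sphere, pop))
for _ in range(300):
    for i in range(npop):
        trial = rand_1(pop, 0.5, rng)
        if sphere(trial) < sphere(pop[i]):  # greedy one-to-one selection
            pop[i] = trial
best = min(map(sphere, pop))
print(init_best, best)
```

The difference vector x_r2 − x_r3 scales the search step to the population's current spread, so steps shrink automatically as the population converges — the self-adaptation the various /1 and /2 schemes all exploit.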
Algorithms and Their Explanations
Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.
2014-01-01
By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of
Finite lattice extrapolation algorithms
International Nuclear Information System (INIS)
Henkel, M.; Schuetz, G.
1987-08-01
Two algorithms for sequence extrapolation, due to von den Broeck and Schwartz, and to Bulirsch and Stoer, are reviewed and critically compared. Applications to three-state and six-state quantum chains and to the (2+1)D Ising model show that the algorithm of Bulirsch and Stoer is superior, in particular if only very few finite-lattice data are available. (orig.)
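Sequence extrapolation of this kind can be sketched with a Neville-style polynomial extrapolation in h = 1/n to h = 0 (a simplified stand-in for the Bulirsch–Stoer scheme, which uses rational rather than polynomial interpolants; the test sequence is an illustrative assumption):

```python
def extrapolate(seq):
    """Neville-style polynomial extrapolation of seq[n-1] = a(n), viewed as
    a function of h = 1/n, to the limit h -> 0."""
    n = len(seq)
    h = [1.0 / (i + 1) for i in range(n)]
    t = list(seq)
    for k in range(1, n):
        for i in range(n - k):
            # combine neighbouring estimates; t[i+1] still holds level k-1
            t[i] = t[i + 1] + (t[i + 1] - t[i]) * h[i + k] / (h[i] - h[i + k])
    return t[0]

# finite-size estimates converging like A + c1/n + c2/n^2, with limit A = 2
seq = [2.0 + 1.0 / n + 0.5 / n ** 2 for n in range(1, 7)]
print(extrapolate(seq))  # ≈ 2.0, while the raw seq[-1] is still ≈ 2.18
```

Because the toy sequence is an exact polynomial in 1/n, the table recovers the limit to machine precision from only six terms — the "very few finite lattice data" regime the comparison is about.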
Recursive automatic classification algorithms
Energy Technology Data Exchange (ETDEWEB)
Bauman, E V; Dorofeyuk, A A
1982-03-01
A variational statement of the automatic classification problem is given. The dependence of the form of the optimal partition surface on the form of the classification objective functional is investigated. A recursive algorithm is proposed for maximising a functional of reasonably general form. The convergence problem is analysed in connection with the proposed algorithm. 8 references.
DEFF Research Database (Denmark)
Husfeldt, Thore
2015-01-01
This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...
8. Algorithm Design Techniques
Indian Academy of Sciences (India)
Resonance – Journal of Science Education, Volume 2, Issue 8. Algorithms – Algorithm Design Techniques. R K Shyamasundar, Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India
Thermality of the Hawking flux
Energy Technology Data Exchange (ETDEWEB)
Visser, Matt [School of Mathematics, Statistics, and Operations Research,Victoria University of Wellington, PO Box 600, Wellington 6140 (New Zealand)
2015-07-03
Is the Hawking flux “thermal”? Unfortunately, the answer to this seemingly innocent question depends on a number of often unstated, but quite crucial, technical assumptions built into modern (mis-)interpretations of the word “thermal”. The original 1850’s notions of thermality — based on classical thermodynamic reasoning applied to idealized “black bodies” or “lamp black surfaces” — when supplemented by specific basic quantum ideas from the early 1900’s, immediately led to the notion of the black-body spectrum, (the Planck-shaped spectrum), but without any specific assumptions or conclusions regarding correlations between the quanta. Many (not all) modern authors (often implicitly and unintentionally) add an extra, quite unnecessary, assumption that there are no correlations in the black-body radiation; but such usage is profoundly ahistorical and dangerously misleading. Specifically, the Hawking flux from an evaporating black hole, (just like the radiation flux from a leaky furnace or a burning lump of coal), is only approximately Planck-shaped over an explicitly bounded range of frequencies. Standard physics (phase space and adiabaticity effects) explicitly bound the frequency range over which the Hawking flux is approximately Planck-shaped from both above and below — the Hawking flux is certainly not exactly Planckian, and there is no compelling physics reason to assume the Hawking photons are uncorrelated.
Thermality of the Hawking flux
International Nuclear Information System (INIS)
Visser, Matt
2015-01-01
Is the Hawking flux “thermal”? Unfortunately, the answer to this seemingly innocent question depends on a number of often unstated, but quite crucial, technical assumptions built into modern (mis-)interpretations of the word “thermal”. The original 1850’s notions of thermality — based on classical thermodynamic reasoning applied to idealized “black bodies” or “lamp black surfaces” — when supplemented by specific basic quantum ideas from the early 1900’s, immediately led to the notion of the black-body spectrum, (the Planck-shaped spectrum), but without any specific assumptions or conclusions regarding correlations between the quanta. Many (not all) modern authors (often implicitly and unintentionally) add an extra, quite unnecessary, assumption that there are no correlations in the black-body radiation; but such usage is profoundly ahistorical and dangerously misleading. Specifically, the Hawking flux from an evaporating black hole, (just like the radiation flux from a leaky furnace or a burning lump of coal), is only approximately Planck-shaped over an explicitly bounded range of frequencies. Standard physics (phase space and adiabaticity effects) explicitly bound the frequency range over which the Hawking flux is approximately Planck-shaped from both above and below — the Hawking flux is certainly not exactly Planckian, and there is no compelling physics reason to assume the Hawking photons are uncorrelated.
Physics of magnetic flux tubes
Ryutova, Margarita
2015-01-01
This book is the first account of the physics of magnetic flux tubes from their fundamental properties to collective phenomena in an ensembles of flux tubes. The physics of magnetic flux tubes is absolutely vital for understanding fundamental physical processes in the solar atmosphere shaped and governed by magnetic fields. High-resolution and high cadence observations from recent space and ground-based instruments taken simultaneously at different heights and temperatures not only show the ubiquity of filamentary structure formation but also allow to study how various events are interconnected by system of magnetic flux tubes. The book covers both theory and observations. Theoretical models presented in analytical and phenomenological forms are tailored for practical applications. These are welded with state-of-the-art observations from early decisive ones to the most recent data that open a new phase-space for exploring the Sun and sun-like stars. Concept of magnetic flux tubes is central to various magn...
Geometric approximation algorithms
Har-Peled, Sariel
2011-01-01
Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.
Group leaders optimization algorithm
Daskin, Anmer; Kais, Sabre
2011-03-01
We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single and multi-dimensional optimization functions along with the energies and the geometric structures of Lennard-Jones clusters are given, as well as the application of the algorithm to quantum circuit design problems. We show that as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm which is a quantum algorithm providing quadratic speedup over the classical counterpart.
International Nuclear Information System (INIS)
Noga, M.T.
1984-01-01
This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in analysis of algorithms with those of classical geometry
Totally parallel multilevel algorithms
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
Directory of Open Access Journals (Sweden)
Francesca Musiani
2013-08-01
Full Text Available Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawing from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms, and algorithms’ regulation of our society.
Where genetic algorithms excel.
Baum, E B; Boneh, D; Garrett, C
2001-01-01
We analyze the performance of a genetic algorithm (GA) we call Culling, and a variety of other algorithms, on a problem we refer to as the Additive Search Problem (ASP). We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Noisy ASP is the first problem we are aware of where a genetic-type algorithm bests all known competitors. We generalize ASP to k-ASP to study whether GAs will achieve "implicit parallelism" in a problem with many more schemata. GAs fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a mean field theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GAs can beat competing methods.
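The culling idea — keep only the top fraction of each generation and refill by mutating survivors — can be sketched on the OneMax toy problem (an illustration only; the paper's algorithm, its Additive Search Problem, and the parameters below all differ from this sketch):

```python
import random

def culling_ga(n_bits, pop_size, keep, gens, rng):
    """Toy culling-style GA on OneMax (maximise the number of 1-bits):
    each generation culls all but the best `keep` fraction and refills the
    population by mutating randomly chosen survivors."""
    fitness = lambda x: sum(x)
    mutate = lambda x: [b ^ (rng.random() < 1.0 / n_bits) for b in x]
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: max(1, int(keep * pop_size))]
        pop = [mutate(rng.choice(survivors)) for _ in range(pop_size)]
    return max(map(fitness, pop))

rng = random.Random(0)
best = culling_ga(30, 60, 0.2, 60, rng)
print(best)  # close to the optimum of 30
```

Breeding only from the culled elite is what the paper analyses: it computes the culling fraction that maximises progress, and shows the optimum is independent of the fitness function and population distribution.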
DEFF Research Database (Denmark)
Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino
2016-01-01
A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem’s input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency. It is shown that for a wide class of network-oblivious algorithms, optimality … of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed.
A Spatial-Temporal Comparison of Lake Mendota CO2 Fluxes and Collection Methods
Baldocchi, A. K.; Reed, D. E.; Desai, A. R.; Loken, L. C.; Schramm, P.; Stanley, E. H.
2017-12-01
Monitoring of carbon fluxes at the lake/atmosphere interface can help us determine baselines from which to understand responses in both space and time that may result from our warming climate or increasing nutrient inputs. Since recent research has shown lakes to be hotspots of global carbon cycling, it is important to quantify carbon sink and source dynamics as well as to verify observations between multiple methods in the context of long-term data collection efforts. Here we evaluate a new method for measuring space and time variation in CO2 fluxes based on novel speedboat-based collection method of aquatic greenhouse gas concentrations and a flux computation and interpolation algorithm. Two-hundred and forty-nine consecutive days of spatial flux maps over the 2016 open ice period were compared to ongoing eddy covariance tower flux measurements on the shore of Lake Mendota, Wisconsin US using a flux footprint analysis. Spatial and temporal alignments of the fluxes from these two observational datasets revealed both similar trends from daily to seasonal timescales as well as biases between methods. For example, throughout the Spring carbon fluxes showed strong correlation although off by an order of magnitude. Isolating physical patterns of agreement between the two methods of the lake/atmosphere CO2 fluxes allows us to pinpoint where biology and physical drivers contribute to the global carbon cycle and help improve modelling of lakes and utilize lakes as leading indicators of climate change.
Flux driven turbulence in tokamaks
International Nuclear Information System (INIS)
Garbet, X.; Ghendrih, P.; Ottaviani, M.; Sarazin, Y.; Beyer, P.; Benkadda, S.; Waltz, R.E.
1999-01-01
This work deals with tokamak plasma turbulence in the case where fluxes are fixed and profiles are allowed to fluctuate. Such systems are intermittent. In particular, radially propagating fronts are usually observed over a broad range of time and spatial scales. The existence of these fronts provides a way to understand the fast transport events sometimes observed in tokamaks. It is also shown that the confinement scaling law can still be of the gyro-Bohm type in spite of these large-scale transport events. Some departure from the gyro-Bohm prediction is observed at low flux, i.e. when the gradients are close to the instability threshold. Finally, it is found that the diffusivity differs between a turbulence calculated at fixed flux and one calculated at fixed temperature gradient, even with the same time-averaged profile. (author)
Methane flux from boreal peatlands
International Nuclear Information System (INIS)
Crill, P.; Bartlett, K.; Roulet, N.
1992-01-01
The peatlands in the boreal zone (roughly 45°-60°N) store a significant reservoir of carbon, much of which is potentially available for exchange with the atmosphere. The anaerobic conditions that cause these soils to accumulate carbon also make wet boreal peatlands significant sources of methane to the global troposphere. It is estimated that boreal wetlands contribute approximately 19.5 Tg methane per year. The data available on the magnitude of boreal methane emissions have accumulated rapidly in the past twenty years. This paper offers a short review of the fluxes measured (ranging roughly from 1 to 2000 mg methane/m²/d), considers environmental controls on the flux, and briefly discusses how climate change might affect future fluxes.
Wide range neutron flux monitor
International Nuclear Information System (INIS)
Endo, Yorimasa; Fukushima, Toshiki.
1983-01-01
Purpose: To provide a wide-range neutron-flux monitor in which the flux-monitoring and alarm functions can be shifted automatically from the pulse-counting system to the Campbell-method system. Constitution: The wide-range neutron-flux monitor comprises (1a) a pulse-counting system and (1b) a Campbell-method system, which receive detection signals from the neutron detectors and separate them into signals for the pulse measuring system and the Campbell measuring system, (2) an overlap detection and calculation circuit for detecting the existence of overlap between the two output signals from the (1a) and (1b) systems, and (3) a trip circuit for judging the abnormal state of the neutron detectors upon input of the detection signals. (Seki, T.)
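The automatic handover between the two measuring systems can be sketched as a simple blending rule. Everything numeric below (calibration constant, range limits, agreement tolerance) is an illustrative assumption, not a value taken from the patent abstract.

```python
def combined_flux(pulse_rate, campbell_msv, k=1.0e5,
                  pulse_max=1.0e5, campbell_min=1.0e4):
    """Blend pulse-counting and Campbell (mean-square-voltage) estimates.

    pulse_rate   : count rate from the pulse channel [1/s]
    campbell_msv : mean-square signal from the Campbell channel, assumed
                   proportional to flux via an illustrative calibration k
    The thresholds define an overlap region where both channels are valid;
    channel agreement is checked there, playing the role of the overlap
    detection circuit and the detector-fault trip.
    """
    campbell_rate = k * campbell_msv          # calibrate to an equivalent rate
    if pulse_rate < campbell_min:
        return pulse_rate, "pulse"            # low flux: pulse counting only
    if pulse_rate > pulse_max:
        return campbell_rate, "campbell"      # high flux: Campbell only
    # Overlap region: verify the two channels agree before handing over.
    if abs(pulse_rate - campbell_rate) > 0.2 * pulse_rate:
        raise RuntimeError("channel disagreement: possible detector fault")
    w = (pulse_rate - campbell_min) / (pulse_max - campbell_min)
    return (1 - w) * pulse_rate + w * campbell_rate, "overlap"
```

In the overlap region the returned value fades linearly from the pulse estimate to the Campbell estimate, so the reading never jumps at the handover point.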
High heat flux facility GLADIS
International Nuclear Information System (INIS)
Greuner, H.; Boeswirth, B.; Boscary, J.; McNeely, P.
2007-01-01
The new ion beam facility GLADIS has started operation at IPP Garching. The facility is equipped with two individual 1.1 MW ion sources for testing actively cooled plasma-facing components under high heat fluxes. Each ion source generates heat loads between 3 and 55 MW/m² with a beam diameter of 70 mm at the target position. These parameters allow effective testing of everything from probes to large components up to 2 m in length. The high heat flux allows the target to be installed inclined to the beam, which increases the heated surface length up to 200 mm for a heat flux of 15 MW/m² in the standard operating regime. The facility thus has the potential capability for testing full-scale ITER divertor targets. Heat load tests on the WENDELSTEIN 7-X pre-series divertor targets have been successfully started. These tests will validate the design and manufacturing for the production of 950 elements.
Heat flux driven ion turbulence
International Nuclear Information System (INIS)
Garbet, X.
1998-01-01
This work is an analysis of ion turbulence in a tokamak in the case where the thermal flux is fixed and the temperature profile is allowed to fluctuate. The system exhibits some features of self-organized critical systems. In particular, avalanches are observed. Also, the frequency spectrum of the thermal flux exhibits a structure similar to that of a sandpile automaton, including a 1/f behavior. However, the time-averaged temperature profile is found to be supercritical, i.e. the temperature gradient stays above the critical value. Moreover, the heat diffusivity is lower for a turbulence calculated at fixed flux than for one calculated at fixed temperature gradient, with the same time-averaged temperature. This behavior is attributed to a stabilizing effect of avalanches. (author)
Ideal flux field dielectric concentrators.
García-Botella, Angel
2011-10-01
The concept of the vector flux field was first introduced as a photometrical theory and later developed in the field of nonimaging optics; it has provided new perspectives in the design of concentrators, overcoming standard ray tracing techniques. The flux field method has shown that reflective concentrators with the geometry of the field lines achieve the theoretical limit of concentration. In this paper we study the role of surfaces orthogonal to the field vector J. For rotationally symmetric systems J is orthogonal to its curl, and then a family of surfaces orthogonal to the lines of J exists, which can be called the family of surfaces of constant pseudopotential. Using the concept of the flux tube, it is possible to demonstrate that refractive concentrators with the shape of these pseudopotential surfaces achieve the theoretical limit of concentration.
Flux flow and flux creep in thick films of YBCO. [Y-Ba-Cu-O
Energy Technology Data Exchange (ETDEWEB)
Rickets, J.; Vinen, W.F.; Abell, J.S.; Shields, T.C. (Superconductivity Research Group, Univ. of Birmingham (United Kingdom))
1991-12-01
The results are described of new experiments designed to study flux creep and flux flow along a single flux percolation path in thick films of YBCO. The flux flow regime is studied by a four-point resistive technique using pulsed currents, and the flux creep regime by observing the rate at which flux enters a superconducting loop in parallel with the resistance that is associated with the flux percolation path. (orig.).
The flux database concerted action
International Nuclear Information System (INIS)
Mitchell, N.G.; Donnelly, C.E.
1999-01-01
This paper summarizes the background to the UIR action on the development of a flux database for radionuclide transfer in soil-plant systems. The action is discussed in terms of the objectives, the deliverables and the progress achieved so far by the flux database working group. The paper describes the background to the current initiative and outlines specific features of the database and supporting documentation. Particular emphasis is placed on the proforma used for data entry, on the database help file and on the approach adopted to indicate data quality. Refs. 3 (author)
Advances on geometric flux optical design method
García-Botella, Ángel; Fernández-Balbuena, Antonio Álvarez; Vázquez, Daniel
2017-09-01
Nonimaging optics is focused on the study of methods to design concentrator and illuminator systems. It can be included in the area of photometry and radiometry, and it is governed by the laws of geometrical optics. The field vector method, which starts with the definition of the irradiance vector E, is one of the techniques used in nonimaging optics. Called the "geometrical flux vector" method, it has provided ideal designs. The main property of this model is its ability to estimate how radiant energy is transferred by the optical system, using the concepts of field line, flux tube and pseudopotential surface, overcoming traditional ray-trace methods. Nevertheless, this model has been developed only at an academic level, where the characteristic optical parameters are ideal rather than real and the studied geometries are simple. The main objective of the present paper is the application of the vector field method to the analysis and design of real concentration and illumination systems. We propose the development of a calculation tool for optical simulation by vector field, using algorithms based on Fermat's principle, as an alternative to traditional ray-trace simulation tools based on the laws of reflection and refraction. This new tool provides, first, traditional simulation results (efficiency, illuminance/irradiance calculations, angular distribution of light) with lower computation time: the photometric information needs only a few tens of field lines, compared with the millions of rays needed nowadays. On the other hand, the tool provides new information, such as maps of the vector field produced by the system, composed of field lines and quasipotential surfaces. We show our first results with the vector field simulation tool.
Directory of Open Access Journals (Sweden)
Hans Schonemann
1996-12-01
Some algorithms for singularity theory and algebraic geometry. The use of Gröbner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Gröbner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Gröbner bases can be found in the textbook [CLO], and an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Gröbner basis algorithm. For a complete description of SINGULAR see [Si].
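The standard-basis computations that SINGULAR performs can be illustrated in miniature with a plain Buchberger algorithm for two variables over the rationals under the lex well-ordering. This is a didactic sketch only; SINGULAR's implementation supports general monomial orderings, localizations and far more sophisticated pair-selection strategies.

```python
from fractions import Fraction

# Polynomials in x, y stored as {(i, j): Fraction} meaning coeff * x**i * y**j,
# with the lex order x > y given by ordinary tuple comparison.

def lead(p):
    return max(p)                                   # lex-largest exponent pair

def scaled(p, coeff, shift):
    """coeff * x**shift[0] * y**shift[1] * p"""
    return {(i + shift[0], j + shift[1]): coeff * c for (i, j), c in p.items()}

def add(p, q):
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, Fraction(0)) + c
        if r[e] == 0:
            del r[e]                                # drop cancelled terms
    return r

def reduce_poly(p, basis):
    """Top-reduce p by basis until its leading term is irreducible (or p == 0)."""
    p = dict(p)
    while p:
        lp = lead(p)
        for g in basis:
            lg = lead(g)
            if lp[0] >= lg[0] and lp[1] >= lg[1]:   # lead(g) divides lead(p)
                shift = (lp[0] - lg[0], lp[1] - lg[1])
                p = add(p, scaled(g, -p[lp] / g[lg], shift))
                break
        else:
            return p                                # leading term irreducible
    return p

def s_poly(f, g):
    lf, lg = lead(f), lead(g)
    lcm = (max(lf[0], lg[0]), max(lf[1], lg[1]))
    a = scaled(f, Fraction(1) / f[lf], (lcm[0] - lf[0], lcm[1] - lf[1]))
    b = scaled(g, Fraction(1) / g[lg], (lcm[0] - lg[0], lcm[1] - lg[1]))
    return add(a, scaled(b, Fraction(-1), (0, 0)))

def buchberger(polys):
    """Naive Buchberger loop: add reduced S-polynomials until all reduce to 0."""
    G = [dict(p) for p in polys]
    pairs = [(i, j) for i in range(len(G)) for j in range(i + 1, len(G))]
    while pairs:
        i, j = pairs.pop()
        r = reduce_poly(s_poly(G[i], G[j]), G)
        if r:
            pairs += [(k, len(G)) for k in range(len(G))]
            G.append(r)
    return G
```

For the circle-and-line system x² + y² − 1, x − y, the computed basis acquires the element 2y² − 1, after which x² − 1/2 top-reduces to zero, giving an ideal-membership test of the kind Gröbner bases make possible.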
A New Modified Firefly Algorithm
Directory of Open Access Journals (Sweden)
Medha Gupta
2016-07-01
Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such new swarm-based metaheuristic, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been successfully used for solving various optimization problems. In this work, we propose a new modified version of the Firefly Algorithm (MoFA) and compare its performance with the standard firefly algorithm and various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.
International Nuclear Information System (INIS)
Bur'yan, V.I.; Kozlova, L.V.; Kuzhil', A.S.; Shikalov, V.F.
2005-01-01
The development of algorithms for correcting the inertia of self-powered neutron detectors (SPNDs) is motivated by the need to increase the response speed of in-core instrumentation systems (ICIS). Faster ICIS response will permit real-time monitoring of fast transient processes in the core and, in the future, the use of rhodium SPND signals for emergency protection functions based on local parameters. In this paper it is proposed to use a mathematical model of neutron flux measurement by SPNDs, in integral form, for the construction of correction algorithms. This approach is, in this case, the most convenient for constructing recurrent flux estimation algorithms. Results are presented comparing estimates of neutron flux and reactivity obtained from ionization chamber readings with SPND signals corrected by the proposed algorithms [ru]
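A minimal sketch of such a recurrent correction, assuming a simplified single-time-constant detector model (real rhodium SPNDs have two beta-decay components, and the paper's integral-form model is more elaborate): if the reading obeys s' = (φ − s)/τ, then φ = s + τ·s', which can be evaluated sample by sample with a backward difference.

```python
def correct_spnd(samples, tau, dt):
    """Recover a fast flux estimate from inertial SPND readings.

    samples : detector readings, one per time step
    tau     : assumed detector time constant [s] (single-lag model)
    dt      : sampling interval [s]
    Model: s' = (phi - s) / tau  =>  phi = s + tau * s'.
    The derivative uses a backward difference, so the correction is a
    simple recurrent (sample-by-sample) algorithm.
    """
    est, prev = [], samples[0]
    for s in samples:
        est.append(s + tau * (s - prev) / dt)   # phi_k ~ s_k + tau * ds/dt
        prev = s
    return est
```

On a step change in flux, the raw reading approaches the new level only over several time constants, while the corrected estimate reaches it almost immediately (at the cost of amplifying measurement noise, which is why practical algorithms add filtering).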
International Nuclear Information System (INIS)
Dinev, D.
1996-01-01
Several new algorithms for sorting dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms take a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after magnet sorting. (orig.)
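The random-search idea can be sketched as follows; the goal function and field-error data in the test are hypothetical stand-ins for the smear computed from real lattice optics.

```python
import random

def sort_magnets(errors, goal, iters=2000, seed=0):
    """Random-search magnet sorting sketch.

    errors : list of per-magnet field errors (any numeric quality measure)
    goal   : maps an ordering of those errors to a cost (e.g. a smear proxy)
    Repeatedly propose swapping two magnets and keep the swap only if the
    goal function decreases, i.e. a greedy random search over permutations.
    """
    rng = random.Random(seed)
    order = list(range(len(errors)))
    best = goal([errors[i] for i in order])
    for _ in range(iters):
        i, j = rng.randrange(len(order)), rng.randrange(len(order))
        order[i], order[j] = order[j], order[i]      # propose a swap
        cost = goal([errors[k] for k in order])
        if cost < best:
            best = cost                              # accept improvement
        else:
            order[i], order[j] = order[j], order[i]  # undo the swap
    return order, best
```

Since only improving swaps are accepted, the final cost is never worse than the initial ordering's cost; the described algorithms refine this basic scheme with a proper metrization of the state space.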
Simple models with ALICE fluxes
Striet, J
2000-01-01
We introduce two simple models which feature an Alice electrodynamics phase. In a well-defined sense, the Alice flux solutions we obtain in these models obey first-order equations similar to those of the Nielsen-Olesen flux tube in the Abelian Higgs model in the Bogomol'nyi limit. Some numerical solutions are presented as well.
Generalized diffusion theory for calculating the neutron transport scalar flux
International Nuclear Information System (INIS)
Alcouffe, R.E.
1975-01-01
A generalization of the neutron diffusion equation is introduced, the solution of which is an accurate approximation to the transport scalar flux. In this generalization the auxiliary transport calculations of the system of interest are utilized to compute an accurate, pointwise diffusion coefficient. A procedure is specified to generate and improve this auxiliary information in a systematic way, leading to improvement in the calculated diffusion scalar flux. This improvement is shown to be contingent upon satisfying the condition of positive calculated diffusion coefficients, and an algorithm that ensures this positivity is presented. The generalized diffusion theory is also shown to be compatible with conventional diffusion theory in the sense that the same methods and codes can be used to calculate a solution for both. The accuracy of the method compared to reference S_N transport calculations is demonstrated for a wide variety of examples. (U.S.)
Flux-weakening control methods for hybrid excitation synchronous motor
Directory of Open Access Journals (Sweden)
Mingming Huang
2015-09-01
The hybrid excitation synchronous motor (HESM), which aims to combine the advantages of permanent-magnet and wound-excitation motors, has the characteristics of low-speed high-torque hill climbing and a wide speed range. First, a new kind of HESM is presented in the paper, and its structure and mathematical model are illustrated. Then, based on space voltage vector control, a novel flux-weakening method for speed adjustment in the high-speed region is presented. The unique feature of the proposed control method is that the HESM drive system keeps the q-axis back-EMF component invariant during the flux-weakening operation. Moreover, a copper loss minimization algorithm is adopted to reduce the copper loss of the HESM in the high-speed region. Finally, the proposed method is validated by simulation and experimental results.
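The basic idea behind any flux-weakening scheme, before the paper's refinements (constant q-axis back-EMF, copper-loss minimization), is to hold rated flux below base speed and reduce flux inversely with speed above it, so the back-EMF never exceeds the inverter voltage limit. A minimal sketch with illustrative parameter values:

```python
def flux_reference(omega, v_max, psi_rated):
    """Generic flux-weakening law (not the paper's specific controller).

    omega     : electrical angular speed [rad/s]
    v_max     : available inverter voltage limit [V]
    psi_rated : rated flux linkage [Wb]
    Below base speed, run at rated flux; above it, weaken the flux so
    the back-EMF (approximately omega * psi) stays at the voltage limit.
    """
    omega_base = v_max / psi_rated       # speed where back-EMF hits the limit
    if omega <= omega_base:
        return psi_rated
    return v_max / omega                 # constant-voltage (field-weakening) region
```

In an HESM this flux reduction can be shared between the d-axis stator current and the auxiliary excitation winding, which is where the copper-loss minimization of the paper comes in.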
Algorithms for parallel computers
International Nuclear Information System (INIS)
Churchhouse, R.F.
1985-01-01
Until relatively recently almost all algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions are considered in these lectures, illustrated by examples. (orig.)
Fluid structure coupling algorithm
International Nuclear Information System (INIS)
McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.
1980-01-01
A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid structure and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed have been extended to three dimensions and implemented in the computer code PELE-3D
Hockney, Roger
1987-01-01
Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, the recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations are shown in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, which was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems that are likely to be encountered, the prediction being read directly from the diagram without complex calculation.
Diagnostic Algorithm Benchmarking
Poll, Scott
2011-01-01
A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.
Inclusive Flavour Tagging Algorithm
International Nuclear Information System (INIS)
Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex
2016-01-01
Identifying the flavour of neutral B mesons production is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about underlying physics process. It reduces the dependence on the performance of lower level identification capacities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tag the flavour of B mesons in any proton-proton experiment. (paper)
Unsupervised learning algorithms
Aydin, Kemal
2016-01-01
This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation have resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...
Vector Network Coding Algorithms
Ebrahimi, Javad; Fragouli, Christina
2010-01-01
We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role to the coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...
Optimization algorithms and applications
Arora, Rajesh Kumar
2015-01-01
Choose the Correct Solution Method for Your Optimization ProblemOptimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, Broyden-Fletcher-Goldfarb-Shanno algorithm, Powell method, penalty function, augmented Lagrange multiplier method, sequential quadratic programming, method of feasible direc
From Genetics to Genetic Algorithms
Indian Academy of Sciences (India)
Genetic algorithms (GAs) are computational optimisation schemes with an … The algorithms solve optimisation problems … Genetic Algorithms in Search, Optimisation and Machine Learning, Addison-Wesley Publishing Company, Inc. 1989.
Algorithmic Principles of Mathematical Programming
Faigle, Ulrich; Kern, Walter; Still, Georg
2002-01-01
Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear
Directory of Open Access Journals (Sweden)
Wang Zi Min
2016-01-01
With the development of social services and the continuing improvement of living standards, there is an urgent need for positioning technology that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in all aspects of life and production, such as logistics tracking, car alarms and security. Using RFID technology for positioning is a new direction in the eyes of various research institutions and scholars. RFID positioning has the advantages of system stability, small error and low cost, and its location algorithm is the focus of this study. This article analyses RFID positioning methods and algorithms layer by layer. First, several common basic RFID methods are introduced; secondly, higher-accuracy network location methods are discussed; finally, the LANDMARC algorithm is described. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, deficiencies in the algorithms are pointed out, and requirements for follow-up study are put forward, with a vision of better future RFID positioning technology.
Directory of Open Access Journals (Sweden)
Surafel Luleseged Tilahun
2012-01-01
The firefly algorithm is one of the new metaheuristic algorithms for optimization problems. The algorithm is inspired by the flashing behavior of fireflies. In the algorithm, randomly generated solutions are considered fireflies, and brightness is assigned depending on their performance on the objective function. One of the rules used to construct the algorithm is that a firefly will be attracted to a brighter firefly, and if there is no brighter firefly, it will move randomly. In this paper we modify this random movement of the brightest firefly by generating random directions in order to determine the one in which the brightness increases. If no such direction is generated, it remains in its current position. Furthermore, the assignment of attractiveness is modified in such a way that the effect of the objective function is magnified. Simulation results show that the modified firefly algorithm performs better than the standard one in finding the best solution, with smaller CPU time.
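A sketch of the modification described above: dimmer fireflies are attracted to brighter ones as in the standard algorithm, while the brightest firefly probes several random directions and moves only if one of them improves its brightness. All parameter values here are illustrative assumptions, not taken from the paper.

```python
import math
import random

def firefly_minimize(f, dim, n=15, iters=60, alpha=0.3, beta0=1.0,
                     gamma=1.0, trials=8, seed=1):
    """Modified firefly algorithm sketch (minimization).

    Dimmer fireflies move toward every brighter one with attractiveness
    beta0 * exp(-gamma * r^2); the brightest firefly samples `trials`
    random directions and steps only along one that improves brightness,
    instead of moving blindly at random.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        pop.sort(key=f)                      # pop[0] is brightest (lowest f)
        for i in range(1, n):
            for j in range(i):               # every brighter firefly j
                r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                beta = beta0 * math.exp(-gamma * r2)
                pop[i] = [a + beta * (b - a) + alpha * rng.uniform(-0.5, 0.5)
                          for a, b in zip(pop[i], pop[j])]
        # Modified rule: the brightest firefly probes random directions and
        # moves only if the move actually improves its brightness.
        best = pop[0]
        for _ in range(trials):
            cand = [a + alpha * rng.uniform(-0.5, 0.5) for a in best]
            if f(cand) < f(best):
                pop[0] = cand
                break
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)     # classic test objective
```

Because the brightest firefly never accepts a worsening move, the best objective value in the population is non-increasing over iterations, which is the point of the modification.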
Improved multivariate polynomial factoring algorithm
International Nuclear Information System (INIS)
Wang, P.S.
1978-01-01
A new algorithm for factoring multivariate polynomials over the integers, based on an algorithm by Wang and Rothschild, is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely the leading-coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It has an algorithm for correctly predetermining the leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically, it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timings are included.
Directory of Open Access Journals (Sweden)
Yixiong Lu
2013-09-01
This study examines the surface turbulent fluxes over sea ice modelled by the bulk algorithms of the Beijing Climate Centre Climate System Model (BCC_CSM), the European Centre for Medium-Range Weather Forecasts (ECMWF) model and the Community Earth System Model (CESM), using data from the fourth Chinese National Arctic Research Expedition (CHINARE 2010) and the Surface Heat Budget of the Arctic Ocean (SHEBA) experiment. Across the model algorithms, wind stresses are replicated well, with small annual biases relative to observations (-0.6% in BCC_CSM, 0.2% in CESM and 17% in ECMWF); annual sensible heat fluxes are consistently underestimated by 83-141%, and annual latent heat fluxes are generally overestimated by 49-73%. Five sets of stability functions for stable stratification are evaluated based on theoretical and observational analyses, and the superior stability functions are employed in a newly proposed bulk algorithm, which also features varying roughness lengths. Compared to BCC_CSM, the new algorithm estimates the friction velocity with significantly reduced bias, 84% smaller in winter and 56% smaller in summer. For the sensible heat flux, the bias of the new algorithm is 30% smaller in winter and 19% smaller in summer than that of BCC_CSM. Finally, the bias of the modelled latent heat fluxes is 27% smaller in summer.
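The bulk algorithms being compared all start from the same transfer relations; a minimal neutral-stability sketch with constant, illustrative transfer coefficients (the actual algorithms make these stability- and roughness-dependent, which is exactly what the proposed improvement addresses):

```python
def bulk_fluxes(u, t_sfc, t_air, q_sfc, q_air,
                rho=1.3, cp=1005.0, lv=2.5e6,
                cd=1.5e-3, ch=1.5e-3, ce=1.5e-3):
    """Neutral-stability bulk formulas for surface turbulent fluxes.

    u            : wind speed [m/s]
    t_sfc, t_air : surface and air temperature [K]
    q_sfc, q_air : surface and air specific humidity [kg/kg]
    rho, cp, lv  : air density, heat capacity, latent heat of vaporization
    cd, ch, ce   : illustrative constant transfer coefficients
    """
    tau = rho * cd * u * u                    # wind stress [N/m^2]
    h = rho * cp * ch * u * (t_sfc - t_air)   # sensible heat flux [W/m^2]
    le = rho * lv * ce * u * (q_sfc - q_air)  # latent heat flux [W/m^2]
    return tau, h, le
```

A stability-dependent algorithm replaces the constant coefficients with functions of the bulk Richardson number (via the stability functions evaluated in the paper) and of surface roughness lengths.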
Solar Modulation of Inner Trapped Belt Radiation Flux as a Function of Atmospheric Density
Lodhi, M. A. K.
2005-01-01
No simple algorithm seems to exist for calculating proton fluxes and lifetimes in the Earth's inner, trapped radiation belt throughout the solar cycle. Most models of the inner trapped belt in use depend upon AP8, which only describes the radiation environment at solar maximum and solar minimum in Cycle 20. One exception is NOAAPRO, which incorporates flight data from the TIROS/NOAA polar orbiting spacecraft. The present study discloses yet another simple formulation for approximating proton fluxes at any time in a given solar cycle, in particular between solar maximum and solar minimum. It is derived from AP8 using a regression algorithm technique from nuclear physics. From flux and its time integral fluence, one can then approximate dose rate and its time integral dose.
Directory of Open Access Journals (Sweden)
Saratram Gopalakrishnan
2015-12-01
In this study, the Elementary Metabolite Unit (EMU) algorithm was employed to calculate intracellular fluxes for Chlorella protothecoides using previously generated growth and mass spectrometry data. While the flux through glycolysis remained relatively constant, the pentose phosphate pathway (PPP) flux increased from 3% to 20% of the glucose uptake during nitrogen-limited growth. The TCA cycle flux decreased from 94% to 38% during nitrogen-limited growth, while the flux of acetyl-CoA into lipids increased from 58% to 109% of the glucose uptake, increasing total lipid accumulation. Phosphoenolpyruvate carboxylase (PEPCase) activity was higher during nitrogen-sufficient growth. The glyoxylate shunt was found to be partially active in both cases, indicating that the nature of the nutrient has an impact on flux distribution. It was found that the total NADPH supply within the cell remained almost constant under both conditions. In summary, algal cells substantially reorganize their metabolism during the switch from carbon-limited (nitrogen-sufficient) to nitrogen-limited (carbon-sufficient) growth. Keywords: Microalgae, Biofuels, Chlorella, MFA, EMU algorithm
OptFlux: an open-source software platform for in silico metabolic engineering.
Rocha, Isabel; Maia, Paulo; Evangelista, Pedro; Vilaça, Paulo; Soares, Simão; Pinto, José P; Nielsen, Jens; Patil, Kiran R; Ferreira, Eugénio C; Rocha, Miguel
2010-04-19
Over the last few years a number of methods have been proposed for the phenotype simulation of microorganisms under different environmental and genetic conditions. These have been used as the basis to support the discovery of successful genetic modifications of the microbial metabolism to address industrial goals. However, the use of these methods has been restricted to bioinformaticians or other expert researchers. The main aim of this work is, therefore, to provide a user-friendly computational tool for Metabolic Engineering applications. OptFlux is an open-source and modular software aimed at being the reference computational application in the field. It is the first tool to incorporate strain optimization tasks, i.e., the identification of Metabolic Engineering targets, using Evolutionary Algorithms/Simulated Annealing metaheuristics or the previously proposed OptKnock algorithm. It also allows the use of stoichiometric metabolic models for (i) phenotype simulation of both wild-type and mutant organisms, using the methods of Flux Balance Analysis, Minimization of Metabolic Adjustment or Regulatory on/off Minimization of Metabolic flux changes, (ii) Metabolic Flux Analysis, computing the admissible flux space given a set of measured fluxes, and (iii) pathway analysis through the calculation of Elementary Flux Modes. OptFlux also contemplates several methods for model simplification and other pre-processing operations aimed at reducing the search space for optimization algorithms. The software supports importing/exporting to several flat file formats and it is compatible with the SBML standard. OptFlux has a visualization module that allows the analysis of the model structure that is compatible with the layout information of Cell Designer, allowing the superimposition of simulation results with the model graph. The OptFlux software is freely available, together with documentation and other resources, thus bridging the gap from research in strain optimization
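OptFlux itself is a Java application, but the core of the Metabolic Flux Analysis task it performs (solving the steady-state balance S v = 0 for unmeasured fluxes given a set of measured ones) can be sketched in a few lines. The network in the test is a hypothetical toy, and the sketch assumes the unmeasured fluxes are exactly determined by the measurements.

```python
from fractions import Fraction

def solve_mfa(S, measured):
    """Metabolic Flux Analysis sketch.

    S        : stoichiometric matrix as a list of rows (metabolites x reactions)
    measured : dict {reaction index: measured flux value}
    Solves S v = 0 for the unmeasured fluxes by exact Gaussian elimination,
    assuming the system is exactly determined (no redundancy analysis,
    no least-squares reconciliation as a full MFA tool would do).
    """
    n = len(S[0])
    unknowns = [j for j in range(n) if j not in measured]
    # Move the measured contributions to the right-hand side.
    rows = []
    for row in S:
        rhs = -sum(Fraction(row[j]) * Fraction(measured[j]) for j in measured)
        rows.append([Fraction(row[j]) for j in unknowns] + [rhs])
    m, k = len(rows), len(unknowns)
    piv = 0
    for col in range(k):
        p = next((r for r in range(piv, m) if rows[r][col] != 0), None)
        if p is None:
            raise ValueError("a flux is not determined by the measurements")
        rows[piv], rows[p] = rows[p], rows[piv]      # pivot into place
        pivval = rows[piv][col]
        rows[piv] = [v / pivval for v in rows[piv]]  # normalize pivot row
        for r in range(m):
            if r != piv and rows[r][col] != 0:
                fct = rows[r][col]
                rows[r] = [a - fct * b for a, b in zip(rows[r], rows[piv])]
        piv += 1
    return {unknowns[c]: rows[c][-1] for c in range(k)}
```

For a toy chain (v0: uptake of A; v1: A to B; v2, v3: drains of B) with v0 and v2 measured, the metabolite balances immediately fix v1 and v3, which is the computation the sketch carries out exactly over the rationals.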
Flavour mixings in flux compactifications
International Nuclear Information System (INIS)
Buchmuller, Wilfried; Schweizer, Julian
2017-01-01
A multiplicity of quark-lepton families can naturally arise as zero-modes in flux compactifications. The flavour structure of quark and lepton mass matrices is then determined by the wave function profiles of the zero-modes. We consider a supersymmetric SO(10) x U(1) model in six dimensions compactified on the orbifold T²/Z₂ with Abelian magnetic flux. A bulk 16-plet charged under the U(1) provides the quark-lepton generations whereas two uncharged 10-plets yield two Higgs doublets. Bulk anomaly cancellation requires the presence of additional 16- and 10-plets. The corresponding zero-modes form vectorlike split multiplets that are needed to obtain a successful flavour phenomenology. We analyze the pattern of flavour mixings for the two heaviest families of the Standard Model and discuss possible generalizations to three and more generations.
Superconducting flux flow digital circuits
International Nuclear Information System (INIS)
Martens, J.S.; Zipperian, T.E.; Hietala, V.M.; Ginley, D.S.; Tigges, C.P.; Phillips, J.M.; Siegal, M.P.
1993-01-01
The authors have developed a family of digital logic circuits based on superconducting flux flow transistors that show high speed, reasonable signal levels, large fan-out, and large noise margins. The circuits are made from high-temperature superconductors (HTS) and have been shown to operate at over 90 K. NOR gates have been demonstrated with fan-outs of more than 5 and fully loaded switching times less than a fixture-limited 50 ps. Ring-oscillator data suggest inverter delay times of about 40 ps for 3-μm linewidths. Simple flip-flops have also been demonstrated showing large noise margins, response times of less than 30 ps, and static power dissipation on the order of 30 nW. Among other uses, this logic family is appropriate as an interface between logic families such as single flux quantum and conventional semiconductor logic.
Heisenberg groups and noncommutative fluxes
International Nuclear Information System (INIS)
Freed, Daniel S.; Moore, Gregory W.; Segal, Graeme
2007-01-01
We develop a group-theoretical approach to the formulation of generalized abelian gauge theories, such as those appearing in string theory and M-theory. We explore several applications of this approach. First, we show that there is an uncertainty relation which obstructs simultaneous measurement of electric and magnetic flux when torsion fluxes are included. Next, we show how to define the Hilbert space of a self-dual field. The Hilbert space is Z_2-graded and we show that, in general, self-dual theories (including the RR fields of string theory) have fermionic sectors. We indicate how rational conformal field theories associated to the two-dimensional Gaussian model generalize to (4k+2)-dimensional conformal field theories. When our ideas are applied to the RR fields of string theory we learn that it is impossible to measure the K-theory class of an RR field. Only the reduction modulo torsion can be measured.
Neutron flux enhancement at LASREF
International Nuclear Information System (INIS)
Sommer, W.F.; Ferguson, P.D.; Wechsler, M.S.
1992-01-01
The accelerator at the Los Alamos Meson Physics Facility produces a 1 mA beam of protons at an energy of 800 MeV. Since 1985, the Los Alamos Spallation Radiation Effects Facility (LASREF) has made use of the neutron flux that is generated as the incident protons interact with the targets and a copper beam stop. A variety of basic and applied experiments in radiation damage and radiation effects have been completed. Recent studies indicate that the flux at LASREF can be increased by at least a factor of 10 from the present level of about 5x10^17 m^-2 s^-1. This requires changing the beam stop material from Cu to W and optimizing the geometry of the beam-target interaction region. These studies are motivated by the need for a large-volume, high-energy, and high-intensity neutron source in the development of materials for advanced energy concepts such as fusion reactors. (orig.)
International Nuclear Information System (INIS)
Floriani, Elena; Lima, Ricardo; Ourrad, Ouerdia; Spinelli, Lionel
2016-01-01
Highlights: • The flux through a Markov chain of a conserved quantity (mass) is studied. • Mass is supplied by an external source and ends in the absorbing states of the chain. • Meaningful for modeling open systems whose dynamics has a Markov property. • The analytical expression of mass distribution is given for a constant source. • The expression of mass distribution is given for periodic or random sources. - Abstract: In this paper we study the flux through a finite Markov chain of a quantity, which we will call mass, that moves through the states of the chain according to the Markov transition probabilities. Mass is supplied by an external source and accumulates in the absorbing states of the chain. We believe that studying how this conserved quantity evolves through the transient (non-absorbing) states of the chain could be useful for the modelling of open systems whose dynamics has a Markov property.
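The accumulation described in the abstract is easy to reproduce numerically. The sketch below is a minimal illustration, not taken from the paper: a three-state chain (two transient states, one absorbing) fed by a constant unit source, with the transition matrix and source chosen purely for illustration.

```python
import numpy as np

# States 0 and 1 are transient; state 2 is absorbing.
P = np.array([
    [0.0, 0.5, 0.5],   # state 0 sends half its mass to 1, half to the sink
    [0.5, 0.0, 0.5],   # state 1 sends half its mass to 0, half to the sink
    [0.0, 0.0, 1.0],   # the absorbing state keeps everything
])

mass = np.zeros(3)
for _ in range(200):
    mass = mass @ P    # redistribute mass along transition probabilities
    mass[0] += 1.0     # constant external source feeding state 0

# The transient states settle at a stationary load (4/3 and 2/3 here),
# while the absorbing state accumulates the rest of the injected mass.
print(mass)
```

Total mass is conserved: after 200 unit injections the vector sums to 200, split between the stationary transient load and the absorbing state.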
Absolute flux scale for radioastronomy
International Nuclear Information System (INIS)
Ivanov, V.P.; Stankevich, K.S.
1986-01-01
The authors propose and provide support for a new absolute flux scale for radio astronomy, which is not encumbered with the inadequacies of the previous scales. In constructing it, the method of relative spectra was used (a powerful tool for choosing reference spectra). A review is given of previous flux scales. The authors compare the AIS scale with the scale they propose. Both scales are based on absolute measurements by the ''artificial moon'' method, and they are practically coincident in the range from 0.96 to 6 GHz. At frequencies above 6 GHz and below 0.96 GHz, the AIS scale is overestimated because of incorrect extrapolation of the spectra of the primary and secondary standards. The major results which have emerged from this review of absolute scales in radio astronomy are summarized.
Rapid reconnection of flux lines
International Nuclear Information System (INIS)
Samain, A.
1982-01-01
The rapid reconnection of flux lines in an incompressible fluid through a singular layer of the current density is discussed. It is shown that the liberated magnetic energy must partially appear in the form of plasma kinetic energy. A laminar structure of the flow is possible, but Alfven velocity must be achieved in eddies of growing size at the ends of the layer. The gross structure of the flow and the magnetic configuration may be obtained from variational principles. (author)
A Parallel Butterfly Algorithm
Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing
2014-01-01
The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.
Directory of Open Access Journals (Sweden)
Hanns Holger Rutz
2016-11-01
Full Text Available Although the concept of the algorithm was established long ago, its current topicality indicates a shift in the discourse. Classical definitions based on logic seem inadequate to describe algorithms' aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity, or the space of algorithmic agency. This is the space or the medium – following Luhmann’s form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled “extimate” writing process, human initiative and algorithmic speculation can no longer be clearly divided. Defining aspects of such a medium are observed by drawing a trajectory across a number of sound pieces. I call the operation of exchange between form and medium reconfiguration, and it is indicated by this trajectory.
Institute of Scientific and Technical Information of China (English)
WANG ShunJin; ZHANG Hua
2007-01-01
Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
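The truncation this abstract describes can be illustrated on a model where every Taylor coefficient of the exact solution is available in closed form. The sketch below uses the linear test equation y' = a*y (so the k-th derivative is a^k y); the equation, step size, and truncation order are illustrative choices, not taken from the paper.

```python
import math

def taylor_step(y, a, h, order):
    """Advance y' = a*y by one step of size h, truncating the Taylor
    series of the exact solution y(t+h) = y(t)*e^{a*h} at `order`."""
    return y * sum((a * h) ** k / math.factorial(k) for k in range(order + 1))

a, h, steps = -1.0, 0.1, 10
y = 1.0
for _ in range(steps):
    y = taylor_step(y, a, h, order=6)

exact = math.exp(a * h * steps)   # exact solution e^{-1}
print(y, exact, abs(y - exact))
```

With a sixth-order truncation the per-step error is of order (a*h)^7/7!, so ten steps still track the exact solution to better than 1e-8.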
Neutron flux control systems validation
International Nuclear Information System (INIS)
Hascik, R.
2003-01-01
The main requirement in nuclear installations is to maintain adequate nuclear safety in all operating conditions. From the nuclear safety point of view, commissioning and start-up after reactor refuelling are appropriate periods for safety-system verification. In this paper, the methodology, performance and results of neutron flux measurement system validation are presented. Standard neutron flux measuring chains incorporated into the reactor protection and control system are used. A standard neutron flux measuring chain comprises a detector, a preamplifier, wiring to the data acquisition unit, the data acquisition unit itself, wiring to the control room, and the control-room display. During a reactor outage only the data acquisition unit, the wiring and the control-room display are verified; the detector, preamplifier and wiring to the data acquisition unit cannot be verified during refuelling because of the low power. Adjustment and accurate functionality of these chains are confirmed by start-up rate (SUR) measurements during start-up tests after refuelling of the reactors. This measurement has a direct impact on nuclear safety and increases the operational nuclear safety level. A brief description of each measuring system is given. Results are illustrated with measurements performed at the Bohunice NPP during reactor start-up tests. The main failures and their elimination are described. (Authors)
Surface fluxes in heterogeneous landscape
Energy Technology Data Exchange (ETDEWEB)
Bay Hasager, C
1997-01-01
The surface fluxes in homogeneous landscapes are calculated by similarity scaling principles. The methodology is well established. In heterogeneous landscapes with spatial changes in the micro-scale range, i.e. from 100 m to 10 km, advective effects are significant. The present work focuses on these effects in an agricultural countryside typical for the midlatitudes. Meteorological and satellite data from a highly heterogeneous landscape in the Rhine Valley, Germany were collected in the large-scale field experiment TRACT (Transport of pollutants over complex terrain) in 1992. Classified satellite images, Landsat TM and ERS SAR, are used as the basis for roughness maps. The roughnesses were measured at meteorological masts in the various cover classes and assigned pixel by pixel to the images. The roughness maps are aggregated, i.e. spatially averaged, into so-called effective roughness lengths. This calculation is performed by a micro-scale aggregation model. The model solves the linearized atmospheric flow equations by a numerical (Fast Fourier Transform) method. The model also calculates pixel-wise maps of friction velocity and momentum flux in heterogeneous landscapes. It is indicated how the aggregation methodology can be used to calculate the heat fluxes based on the relevant satellite data, i.e. temperature and soil moisture information. (au) 10 tabs., 49 ills., 223 refs.
Generalized drift-flux correlation
International Nuclear Information System (INIS)
Takeuchi, K.; Young, M.Y.; Hochreiter, L.E.
1991-01-01
A one-dimensional drift-flux model with five conservation equations is frequently employed in major computer codes, such as TRAC-PD2, and in simulator codes. In this method, the relative velocity between liquid and vapor phases, or slip ratio, is given by correlations, rather than by direct solution of the phasic momentum equations, as in the case of the two-fluid model used in TRAC-PF1. The correlations for churn-turbulent bubbly flow and slug flow regimes were given in terms of drift velocities by Zuber and Findlay. For the annular flow regime, the drift velocity correlations were developed by Ishii et al., using interphasic force balances. Another approach is to define the drift velocity so that flooding and liquid hold-up conditions are properly simulated, as reported here. The generalized correlation is used to reanalyze the MB-2 test data for two-phase flow in a large-diameter pipe. The results are applied to the generalized drift flux velocity, whose relationship to the other correlations is discussed. Finally, the generalized drift flux correlation is implemented in TRAC-PD2. Flow reversal from countercurrent to cocurrent flow is computed in small-diameter U-shaped tubes and is compared with the flooding curve
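For concreteness, the Zuber-Findlay form mentioned in the abstract relates the vapor velocity to the total volumetric flux as v_g = C0*j + V_gj. The sketch below evaluates this relation; the distribution parameter C0, drift velocity V_gj, and flux inputs are illustrative assumptions, not values from the paper.

```python
def vapor_velocity(j_g, j_f, c0=1.2, v_gj=0.25):
    """Vapor-phase velocity (m/s) from the Zuber-Findlay drift-flux
    relation v_g = C0*j + V_gj, with j the total volumetric flux."""
    j = j_g + j_f              # total volumetric flux (superficial velocities sum)
    return c0 * j + v_gj

def void_fraction(j_g, j_f, c0=1.2, v_gj=0.25):
    """Void fraction alpha = j_g / v_g implied by the drift-flux model."""
    return j_g / vapor_velocity(j_g, j_f, c0, v_gj)

alpha = void_fraction(j_g=0.5, j_f=1.0)
# With C0 > 1 and V_gj > 0 the vapor moves faster than the mixture,
# so alpha is smaller than the homogeneous value j_g / j.
print(alpha)
```

This is the closed-form slip model the five-equation codes use in place of solving the phasic momentum equations.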
Detection of algorithmic trading
Bogoev, Dimitar; Karam, Arzé
2017-10-01
We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
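One plausible reading of the quote volatility ratio described above — not necessarily the authors' exact definition — is the fraction of quote updates that reverse the direction of the previous move: an oscillating best ask scores high, a trending one scores low.

```python
def quote_volatility_ratio(quotes):
    """Fraction of quote updates that reverse the previous direction.
    A crude proxy for the rapid bid/ask oscillation associated with
    algorithmic quoting activity (illustrative definition)."""
    flips, last_dir = 0, 0
    for prev, cur in zip(quotes, quotes[1:]):
        d = (cur > prev) - (cur < prev)      # +1 up, -1 down, 0 unchanged
        if d != 0 and last_dir != 0 and d != last_dir:
            flips += 1                       # direction reversal
        if d != 0:
            last_dir = d
    return flips / max(len(quotes) - 1, 1)

oscillating = [100.0, 100.1, 100.0, 100.1, 100.0, 100.1]   # flickering quotes
trending    = [100.0, 100.1, 100.2, 100.3, 100.4, 100.5]   # steady drift
print(quote_volatility_ratio(oscillating), quote_volatility_ratio(trending))
```

The oscillating series reverses on 4 of its 5 updates (ratio 0.8) while the trending series never reverses (ratio 0.0), which is the kind of contrast such a ratio is meant to capture.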
Handbook of Memetic Algorithms
Cotta, Carlos; Moscato, Pablo
2012-01-01
Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems. The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes. “Handbook of Memetic Algorithms” organizes, in a structured way, all the most important results in the field of MAs from their earliest definition until now. A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, and uncertainties, is analysed separately and, for each problem, memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...
Algorithms in invariant theory
Sturmfels, Bernd
2008-01-01
J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.
CERN. Geneva; PUNZI, Giovanni
2015-01-01
Charged-particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving the long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of how the brain processes visual images in nature (the 'RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies when this algorithm is implemented in specialized processors based on current state-of-the-art, high-speed/high-bandwidth digital devices.
Named Entity Linking Algorithm
Directory of Open Access Journals (Sweden)
M. F. Panteleev
2017-01-01
Full Text Available In natural language processing, Named Entity Linking (NEL) is the task of identifying an entity found in a text and linking it to an entity in a knowledge base (for example, DBpedia). Currently there is a diversity of approaches to this problem, but two main classes can be identified: graph-based approaches and machine-learning ones. An algorithm combining graph and machine-learning approaches is proposed, in accordance with the stated assumptions about the interrelations of named entities in a sentence and in general. In the case of graph-based approaches, it is necessary to identify an optimal set of related entities according to some metric that characterizes the distance between these entities in a graph built on the knowledge base. Owing to limitations in processing power, solving this task directly is impossible, so a modification is proposed. An independent solution cannot be built on machine-learning algorithms alone, owing to the small volume of training datasets relevant to the NEL task; however, their use can contribute to improving the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed in order to obtain a measure of the compatibility of attributes of various entities encountered in one context. The efficiency of the proposed algorithm was tested experimentally on an independently generated test dataset, comparing the performance of the model using the proposed algorithm with the open-source product DBpedia Spotlight, which solves the NEL problem. A mock-up based on the proposed algorithm was slower than DBpedia Spotlight but showed higher accuracy, which makes further work in this direction promising. The main directions of development are proposed in order to increase the accuracy and productivity of the system.
Force sensor using changes in magnetic flux
Pickens, Herman L. (Inventor); Richard, James A. (Inventor)
2012-01-01
A force sensor includes a magnetostrictive material and a magnetic field generator positioned in proximity thereto. A magnetic field is induced in and surrounding the magnetostrictive material such that lines of magnetic flux pass through the magnetostrictive material. A sensor positioned in the vicinity of the magnetostrictive material measures changes in one of flux angle and flux density when the magnetostrictive material experiences an applied force that is aligned with the lines of magnetic flux.
Fokkinga, M.M.
1992-01-01
An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as
A cluster algorithm for graphs
S. van Dongen
2000-01-01
A cluster algorithm for graphs, called the Markov Cluster algorithm (MCL algorithm), is introduced. The algorithm provides basically an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight)
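The MCL process referred to above alternates "expansion" (matrix powers, which spread flow along paths) with "inflation" (entrywise powers followed by column re-normalisation, which strengthens strong flows and prunes weak ones). The sketch below is a minimal illustration on a toy graph of two bridged triangles; the inflation parameter and iteration count are illustrative choices.

```python
import numpy as np

def mcl(adjacency, inflation=2.0, iterations=50):
    """Markov Cluster process on an undirected adjacency matrix."""
    A = adjacency + np.eye(len(adjacency))  # add self-loops
    M = A / A.sum(axis=0)                   # make columns stochastic
    for _ in range(iterations):
        M = M @ M                           # expansion: spread flow
        M = M ** inflation                  # inflation: favour strong flows
        M = M / M.sum(axis=0)               # re-normalise columns
    return M

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by the edge 2-3.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

M = mcl(A)
# Rows that retain mass are attractors; their nonzero columns form clusters.
clusters = {frozenset(np.flatnonzero(row > 1e-6).tolist())
            for row in M if row.max() > 1e-6}
print(clusters)
```

On this graph the process converges to the two natural clusters, the triangles on either side of the bridge.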
Algorithms for Reinforcement Learning
Szepesvari, Csaba
2010-01-01
Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'
Animation of planning algorithms
Sun, Fan
2014-01-01
Planning is the process of creating a sequence of steps/actions that will satisfy a goal of a problem. The partial order planning (POP) algorithm is one Artificial Intelligence approach to problem planning. While studying the G52PAS module, I found it difficult for students to understand this planning algorithm just by reading its pseudo-code and doing written exercises. Students cannot see clearly how each actual step works and might miss some steps because of their confusion. ...
Secondary Vertex Finder Algorithm
Heer, Sebastian; The ATLAS collaboration
2017-01-01
If a jet originates from a b-quark, a b-hadron is formed during the fragmentation process. In its dominant decay modes, the b-hadron decays into a c-hadron via the electroweak interaction. Both b- and c-hadrons have lifetimes long enough, to travel a few millimetres before decaying. Thus displaced vertices from b- and subsequent c-hadron decays provide a strong signature for a b-jet. Reconstructing these secondary vertices (SV) and their properties is the aim of this algorithm. The performance of this algorithm is studied with tt̄ events, requiring at least one lepton, simulated at 13 TeV.
Parallel Algorithms and Patterns
Energy Technology Data Exchange (ETDEWEB)
Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
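One of the patterns named above, the prefix scan, can be sketched as the work-efficient (Blelloch) two-phase algorithm. The loops below are written sequentially, but all iterations at a given tree depth are independent and could run concurrently; the power-of-two input length is a simplifying assumption.

```python
def exclusive_scan(values):
    """Blelloch work-efficient exclusive prefix sum.
    Assumes len(values) is a power of two for simplicity."""
    n = len(values)
    a = list(values)
    d = 1
    while d < n:                          # up-sweep: build a tree of partial sums
        for i in range(0, n, 2 * d):      # independent across i -> parallelizable
            a[i + 2 * d - 1] += a[i + d - 1]
        d *= 2
    a[n - 1] = 0                          # clear the root
    d = n // 2
    while d >= 1:                         # down-sweep: push prefixes back down
        for i in range(0, n, 2 * d):      # independent across i -> parallelizable
            t = a[i + d - 1]
            a[i + d - 1] = a[i + 2 * d - 1]
            a[i + 2 * d - 1] += t
        d //= 2
    return a

print(exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))  # [0, 3, 4, 11, 11, 15, 16, 22]
```

Both sweeps do O(n) total work in O(log n) parallel steps, which is what makes this pattern a staple of GPU and distributed computing.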
Randomized Filtering Algorithms
DEFF Research Database (Denmark)
Katriel, Irit; Van Hentenryck, Pascal
2008-01-01
of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed in the expected sense. The second scheme is a Las Vegas algorithm using filtering triggers: its effectiveness is the same as enforcing arc consistency after every domain event, while in the expected case it is faster by a factor of m/n, where n and m are, respectively, the number of nodes and edges...
Reluctance motor employing superconducting magnetic flux switches
International Nuclear Information System (INIS)
Spyker, R.L.; Ruckstadter, E.J.
1992-01-01
This paper reports that superconducting flux switches controlling the magnetic flux in the poles of a motor will enable the implementation of a reluctance motor using one central single-phase winding. A superconducting flux switch consists of a ring of superconducting material surrounding a ferromagnetic pole of the motor. When in the superconducting state, the switch will block all magnetic flux attempting to flow in the ferromagnetic core. When switched to the normal state, the superconducting switch will allow the magnetic flux to flow freely in that pole. By using one high turns-count coil as a flux generator, and selectively channeling flux among the various poles using the superconducting flux switches, 3-phase operation can be emulated with a single-phase central AC source. The motor will also operate when the flux-generating coil is driven by a DC current, provided the magnetic flux switches see a continuously varying magnetic flux. Rotor rotation provides this varying flux due to the change in stator-pole inductance it produces.
An Ordering Linear Unification Algorithm
Institute of Scientific and Technical Information of China (English)
胡运发
1989-01-01
In this paper, we present an ordering linear unification algorithm (OLU). A new idea on substitution of the binding terms is introduced to the algorithm, which is able to overcome some drawbacks of other algorithms, e.g., the MM algorithm [1] and the RG1 and RG2 algorithms [2]. In particular, if directed cyclic graphs are used, the algorithm need not check the binding order; the OLU algorithm can then also be applied to infinite tree data structures, and higher efficiency can be expected. The paper focuses on the OLU algorithm and a partial order structure with respect to the unification algorithm. The algorithm has been implemented in the GKD-PROLOG/VAX 780 interpreting system. Experimental results have shown that the algorithm is very simple and efficient.
Solar flux incident on an orbiting surface after reflection from a planet
Modest, M. F.
1980-01-01
Algorithms describing the solar radiation impinging on an infinitesimal surface after reflection from a gray and diffuse planet are derived. The following conditions apply: only radiation from the sunny half of the planet is taken into account; the radiation must fall on the top of the orbiting surface, and radiation must come from that part of the planet that can be seen from the orbiting body. A simple approximate formula is presented which displays excellent accuracy for all significant situations, with an error which is always less than 5% of the maximum possible reflected flux. Attention is also given to solar albedo flux on a surface directly facing the planet, the influence of solar position on albedo flux, and to solar albedo flux as a function of the surface-planet tilt angle.
International Nuclear Information System (INIS)
Huang, C.-H.; Wu, H.-H.
2006-01-01
In the present study an inverse hyperbolic heat conduction problem is solved by the conjugate gradient method (CGM) in estimating the unknown boundary heat flux based on the boundary temperature measurements. Results obtained in this inverse problem will be justified based on the numerical experiments where three different heat flux distributions are to be determined. Results show that the inverse solutions can always be obtained with any arbitrary initial guesses of the boundary heat flux. Moreover, the drawbacks of the previous study for this similar inverse problem, such as (1) the inverse solution has phase error and (2) the inverse solution is sensitive to measurement error, can be avoided in the present algorithm. Finally, it is concluded that accurate boundary heat flux can be estimated in this study
Space-Time Transformation in Flux-form Semi-Lagrangian Schemes
Directory of Open Access Journals (Sweden)
Peter C. Chu, Chenwu Fan
2010-01-01
Full Text Available With a finite volume approach, a flux-form semi-Lagrangian (TFSL) scheme with space-time transformation was developed to provide a stable and accurate algorithm for solving the advection-diffusion equation. Different from the existing flux-form semi-Lagrangian schemes, the temporal integration of the flux from the present to the next time step is transformed into a spatial integration of the flux at the side of a grid cell (space) for the present time step using the characteristic-line concept. The TFSL scheme not only keeps the good features of the semi-Lagrangian schemes (no Courant number limitation), but also has higher accuracy (of second order in both time and space). The capability of the TFSL scheme is demonstrated by the simulation of equatorial Rossby-soliton propagation. Computational stability and high accuracy make this scheme useful in ocean modeling, computational fluid dynamics, and numerical weather prediction.
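The conservation property that motivates flux-form schemes can be shown with a much simpler relative: a first-order upwind finite-volume step, in which each cell changes only through fluxes across its faces. This is an illustration of the flux-form idea, not the TFSL scheme itself; the grid size, advection speed, and Courant number below are arbitrary choices.

```python
import numpy as np

def upwind_step(q, u, dt, dx):
    """One conservative advection step for u > 0 on a periodic grid."""
    flux = u * q                   # flux leaving each cell through its right face
    inflow = np.roll(flux, 1)      # the same flux entering through the left face
    return q + (dt / dx) * (inflow - flux)

n, dx, u = 100, 1.0, 1.0
dt = 0.5 * dx / u                  # Courant number 0.5, stable for upwind
x = np.arange(n)
q = np.exp(-0.5 * ((x - 20) / 3.0) ** 2)   # initial Gaussian pulse

mass0 = q.sum() * dx
for _ in range(80):
    q = upwind_step(q, u, dt, dx)

# Total mass is conserved to round-off because every face flux appears
# exactly once as an outflow and once as an inflow.
print(abs(q.sum() * dx - mass0))
```

The scheme is diffusive (first order), but mass conservation holds exactly by construction; higher-order flux-form schemes such as TFSL keep this property while reducing the numerical diffusion.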
New Optimization Algorithms in Physics
Hartmann, Alexander K
2004-01-01
Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.
International Nuclear Information System (INIS)
Nascimento, F.M.; Sergeenkov, S.; Araujo-Moreira, F.M.
2012-01-01
By using a specially designed algorithm (based on utilizing the so-called Hierarchical Data Format), we report on successful reconstruction of 3D profiles of local flux distribution within artificially prepared arrays of unshunted Nb-AlOx-Nb Josephson junctions from 2D surface images obtained via the scanning SQUID microscope. The analysis of the obtained results suggests that for large sweep areas, the local flux distribution significantly deviates from the conventional picture and exhibits a more complicated avalanche-type behavior with a prominent dendritic structure. -- Highlights: ► The penetration of external magnetic field into an array of Nb-AlOx-Nb Josephson junctions is studied. ► Using Scanning SQUID Microscope, 2D images of local flux distribution within the array are obtained. ► Using a specially designed pattern recognition algorithm, 3D flux profiles are reconstructed from 2D images.
A propositional CONEstrip algorithm
E. Quaeghebeur (Erik); A. Laurent; O. Strauss; B. Bouchon-Meunier; R.R. Yager (Ronald)
2014-01-01
We present a variant of the CONEstrip algorithm for checking whether the origin lies in a finitely generated convex cone that can be open, closed, or neither. This variant is designed to deal efficiently with problems where the rays defining the cone are specified as linear combinations
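A simplified relative of the CONEstrip question can be posed as a linear program: the origin lies in the interior of the conic hull of given rays if coefficients bounded away from zero can combine the rays to zero. This is only a sketch of the underlying feasibility idea, not Quaeghebeur's full algorithm for open/closed/mixed cones; the `linprog` call and example rays are my own.

```python
import numpy as np
from scipy.optimize import linprog

# Sufficient LP test: does some lambda with lambda_i >= 1 satisfy
# sum_i lambda_i * r_i = 0?  Feasibility puts the origin strictly inside
# the conic hull of the rays.
def origin_strictly_inside(rays):
    R = np.asarray(rays, dtype=float).T          # columns are the rays
    n = R.shape[1]
    res = linprog(c=np.zeros(n), A_eq=R, b_eq=np.zeros(R.shape[0]),
                  bounds=[(1, None)] * n, method="highs")
    return res.status == 0                       # feasible -> origin inside

print(origin_strictly_inside([(1, 0), (-1, 0), (0, 1), (0, -1)]))  # True
print(origin_strictly_inside([(1, 0), (0, 1)]))                    # False
```

The full CONEstrip algorithm iterates tests of this flavor while stripping rays that cannot participate, which handles cones that are neither open nor closed.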
Modular Regularization Algorithms
DEFF Research Database (Denmark)
Jacobsen, Michael
2004-01-01
The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into indepen...
Indian Academy of Sciences (India)
Shortest path problems: a road network on cities, where we want to navigate between cities. ... The rest of the talk: computing connectivities between all pairs of vertices; a good algorithm with respect to both space and time to compute the exact solution. ...
The Copenhagen Triage Algorithm
DEFF Research Database (Denmark)
Hasselbalch, Rasmus Bo; Plesner, Louis Lind; Pries-Heje, Mia
2016-01-01
is non-inferior to an existing triage model in a prospective randomized trial. METHODS: The Copenhagen Triage Algorithm (CTA) study is a prospective two-center, cluster-randomized, cross-over, non-inferiority trial comparing CTA to the Danish Emergency Process Triage (DEPT). We include patients ≥16 years...
de Casteljau's Algorithm Revisited
DEFF Research Database (Denmark)
Gravesen, Jens
1998-01-01
It is demonstrated how all the basic properties of Bezier curves can be derived swiftly and efficiently without any reference to the Bernstein polynomials and essentially with only geometric arguments. This is achieved by viewing one step in de Casteljau's algorithm as an operator (the de Casteljau...
Algorithms in ambient intelligence
Aarts, E.H.L.; Korst, J.H.M.; Verhaegh, W.F.J.; Weber, W.; Rabaey, J.M.; Aarts, E.
2005-01-01
We briefly review the concept of ambient intelligence and discuss its relation with the domain of intelligent algorithms. By means of four examples of ambient intelligent systems, we argue that new computing methods and quantification measures are needed to bridge the gap between the class of
General Algorithm (High level)
Indian Academy of Sciences (India)
General Algorithm (High level). Iteratively: use the tightness property to remove points of P1,...,Pi; use random sampling to get a random sample (of enough points) from the next largest cluster, Pi+1; use the random sampling procedure to approximate ci+1 using the ...
Comprehensive eye evaluation algorithm
Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.
2016-03-01
In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.
Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko
2013-01-01
In biomolecular systems (especially all-atom models) with many degrees of freedom such as proteins and nucleic acids, there exist an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in states of these energy local minima. Enhanced conformational sampling techniques are thus in great demand. A simulation in generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, multicanonical algorithm, simulated tempering, and replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
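The replica-exchange method reviewed above can be illustrated on a toy double-well potential: each replica runs Metropolis Monte Carlo at its own temperature, and neighboring replicas periodically attempt a temperature swap with the standard exchange criterion exp[(1/T_i - 1/T_j)(E_i - E_j)]. The potential, temperature ladder, and move sizes below are all illustrative choices, not the paper's biomolecular settings.

```python
import math, random

# Toy replica-exchange MC on the 1-D double well E(x) = (x^2 - 1)^2.
def energy(x):
    return (x * x - 1.0) ** 2

random.seed(1)
temps = [0.1, 0.3, 1.0, 3.0]          # temperature ladder, cold to hot
xs = [1.0 for _ in temps]             # all replicas start in the right well
low_T_samples = []
for sweep in range(20000):
    for i, T in enumerate(temps):     # ordinary Metropolis move per replica
        trial = xs[i] + random.uniform(-0.5, 0.5)
        dE = energy(trial) - energy(xs[i])
        if dE <= 0 or random.random() < math.exp(-dE / T):
            xs[i] = trial
    if sweep % 10 == 0:               # attempt a swap between a random pair
        i = random.randrange(len(temps) - 1)
        delta = (1 / temps[i] - 1 / temps[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
        if delta >= 0 or random.random() < math.exp(delta):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
    low_T_samples.append(xs[0])

# The cold replica visits both wells instead of staying trapped in one.
print(min(low_T_samples), max(low_T_samples))
```

A single simulation at T = 0.1 would rarely cross the barrier; the random walk in temperature space is what lets the cold replica escape local minima, which is exactly the trapping problem the abstract describes.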
DEFF Research Database (Denmark)
This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all...
Optimal Quadratic Programming Algorithms
Dostal, Zdenek
2009-01-01
Quadratic programming (QP) is one technique that allows for the optimization of a quadratic function in several variables in the presence of linear constraints. This title presents various algorithms for solving large QP problems. It is suitable as an introductory text on quadratic programming for graduate students and researchers
Flux of Cadmium through Euphausiids
International Nuclear Information System (INIS)
Benayoun, G.; Fowler, S.W.; Oregioni, B.
1976-01-01
Flux of the heavy metal cadmium through the euphausiid Meganyctiphanes norvegica was examined. Radiotracer experiments showed that cadmium can be accumulated either directly from water or through the food chain. When comparing equilibrium cadmium concentration factors based on stable element measurements with those obtained from radiotracer experiments, it is evident that exchange between cadmium in the water and that in euphausiid tissue is a relatively slow process, indicating that, in the long term, ingestion of cadmium will probably be the more important route for the accumulation of this metal. Approximately 10% of cadmium ingested by euphausiids was incorporated into internal tissues when the food source was radioactive Artemia. After 1 month cadmium, accumulated directly from water, was found to be most concentrated in the viscera with lesser amounts in eyes, exoskeleton and muscle, respectively. Use of a simple model, based on the assumption that cadmium taken in by the organism must equal cadmium released plus that accumulated in tissue, allowed assessment of the relative importance of various metabolic parameters in controlling the cadmium flux through euphausiids. Fecal pellets, due to their relatively high rate of production and high cadmium content, accounted for 84% of the total cadmium flux through M. norvegica. Comparisons of stable cadmium concentrations in natural euphausiid food and the organism's resultant fecal pellets indicate that the cadmium concentration in ingested material was increased nearly 5-fold during its passage through the euphausiid. From comparisons of all routes by which cadmium can be released from M. norvegica to the water column, it is concluded that fecal pellet deposition represents the principal mechanism effecting the downward vertical transport of cadmium by this species. (author)
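The simple budget model mentioned above (intake equals release plus tissue accumulation) reduces to a few lines of arithmetic. The daily ingestion figure below is a hypothetical placeholder; the ~10% assimilation efficiency and the 84% fecal-pellet share of the total flux are the values quoted in the abstract.

```python
# Steady-state cadmium budget for a euphausiid: intake = release + accumulation.
ingested = 100.0                      # ng Cd per day (hypothetical rate)
assimilated = 0.10 * ingested         # ~10% incorporated into internal tissues
released = ingested - assimilated     # egestion, excretion, moulting, etc.
fecal = 0.84 * ingested               # fecal pellets: 84% of the total Cd flux
other_release = released - fecal      # all remaining release routes
print(assimilated, fecal, other_release)
```

Partitioning the budget this way makes the paper's conclusion quantitative: almost the entire downward flux is carried by fecal pellets, with only a small residual split among the other release routes.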
Benchmarking monthly homogenization algorithms
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
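Metric (i) above, the centered root-mean-square error, subtracts each series' mean before comparing, so a constant offset between the homogenized and true series is not penalized. A minimal sketch with made-up series:

```python
import numpy as np

# Centered RMSE between a homogenized estimate and the true homogeneous series.
def centered_rmse(est, truth):
    est, truth = np.asarray(est, float), np.asarray(truth, float)
    return np.sqrt(np.mean(((est - est.mean()) - (truth - truth.mean())) ** 2))

truth = np.array([10.0, 11.0, 9.0, 10.5])     # "true" monthly temperatures
shifted = truth + 2.0                          # pure offset: shape unchanged
print(centered_rmse(shifted, truth))           # 0.0
```

Only departures from the true anomaly pattern count, which is appropriate for homogenization: an algorithm that shifts a whole series by a constant has not degraded its usefulness for trend or variability studies.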
Framework for Flux Qubit Design
Yan, Fei; Kamal, Archana; Krantz, Philip; Campbell, Daniel; Kim, David; Yoder, Jonilyn; Orlando, Terry; Gustavsson, Simon; Oliver, William; Engineering Quantum Systems Team
A qubit design for higher performance relies on the understanding of how various qubit properties are related to design parameters. We construct a framework for understanding the qubit design in the flux regime. We explore different parameter regimes, looking for features desirable for certain purpose in the context of quantum computing. This research was funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA) via MIT Lincoln Laboratory under Air Force Contract No. FA8721-05-C-0002.
FSFE: Fake Spectra Flux Extractor
Bird, Simeon
2017-10-01
The fake spectra flux extractor generates simulated quasar absorption spectra from a particle or adaptive mesh-based hydrodynamic simulation. It is implemented as a Python module. It can produce both hydrogen and metal line spectra, if the simulation includes metals. The Cloudy table for metal ionization fractions is included. Unlike earlier spectral generation codes, it produces absorption from each particle close to the sight-line individually, rather than first producing an average density in each spectral pixel, thus preserving substantially more of the small-scale velocity structure of the gas. The code supports both Gadget (ascl:0003.001) and AREPO.
International Nuclear Information System (INIS)
Wiegand, W.J. Jr.; Bullis, R.H.; Mongeon, R.J.
1980-01-01
A flowmeter based on ion drift techniques was developed for measuring the rate of flow of a fluid through a given cross-section. Ion collectors are positioned on each side of, and immediately adjacent to, an ion source. When air flows axially through the region in which ions are produced and appropriate electric fields are maintained between the collectors, an electric current flows to each collector due to the net motion of the ions. The electric currents and voltages and other parameters which define the flow are combined in an electric circuit so that the flux of the fluid can be determined. (DN)
Python algorithms mastering basic algorithms in the Python language
Hetland, Magnus Lie
2014-01-01
Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc
Scalar flux modeling in turbulent flames using iterative deconvolution
Nikolaou, Z. M.; Cant, R. S.; Vervisch, L.
2018-04-01
In the context of large eddy simulations, deconvolution is an attractive alternative for modeling the unclosed terms appearing in the filtered governing equations. Such methods have been used in a number of studies for non-reacting and incompressible flows; however, their application in reacting flows is limited in comparison. Deconvolution methods originate from clearly defined operations, and in theory they can be used in order to model any unclosed term in the filtered equations including the scalar flux. In this study, an iterative deconvolution algorithm is used in order to provide a closure for the scalar flux term in a turbulent premixed flame by explicitly filtering the deconvoluted fields. The assessment of the method is conducted a priori using a three-dimensional direct numerical simulation database of a turbulent freely propagating premixed flame in a canonical configuration. In contrast to most classical a priori studies, the assessment is more stringent as it is performed on a much coarser mesh which is constructed using the filtered fields as obtained from the direct simulations. For the conditions tested in this study, deconvolution is found to provide good estimates both of the scalar flux and of its divergence.
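The iterative deconvolution idea can be sketched with a Van Cittert-type iteration: given a filtered field, repeatedly add the filtered residual to approximate the unfiltered field, then explicitly re-filter products of deconvoluted fields to close a term. The top-hat filter and smooth test field below are my own illustrative choices, not the paper's DNS setup.

```python
import numpy as np

# Periodic top-hat filter of width w (a simple stand-in for an LES filter G).
def filt(f, w=5):
    kernel = np.ones(w) / w
    n = f.size
    return np.array([np.dot(kernel, f[np.arange(i - w // 2, i - w // 2 + w) % n])
                     for i in range(n)])

# Van Cittert iteration: f_(k+1) = f_k + (f_bar - G * f_k), started from f_bar.
def deconvolve(f_bar, iters=20):
    f = f_bar.copy()
    for _ in range(iters):
        f = f + (f_bar - filt(f))
    return f

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
f_true = np.sin(x) + 0.3 * np.sin(3 * x)
f_bar = filt(f_true)
f_star = deconvolve(f_bar)
# Re-filtering the deconvoluted field reproduces the filtered field closely.
print(np.linalg.norm(filt(f_star) - f_bar) < 1e-3)
```

In the scalar-flux application, the same machinery is applied to velocity and scalar fields separately, and the unclosed correlation is modeled by filtering the product of the two deconvoluted fields.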
Directory of Open Access Journals (Sweden)
Marc Aubinet
1997-01-01
Full Text Available Different methods for measuring momentum and sensible heat flux densities above a grass-covered fallow are presented and compared. The aerodynamic (AD) and eddy covariance (EC) methods are compared for both momentum and sensible heat measurements. In addition, the temperature fluctuation (TF) method is compared to the EC method for the sensible heat flux measurement. The AD and EC methods are in good agreement for momentum flux measurements. For the sensible heat flux, the AD method is very sensitive to temperature errors; it is therefore unusable during the night and gives biased estimates during the day. The TF method gives only estimates of the magnitude of the sensible heat flux. It is in good agreement with the EC method during the day but diverges completely at night, being unable to discern positive from negative fluxes. Of the three methods, the EC method is the only one that allows continuous measurement of both momentum and sensible heat flux, but it requires substantial data processing. We present in this paper the algorithm used for this processing.
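The eddy covariance estimate at the heart of the comparison is the covariance of the vertical-wind and temperature fluctuations about their block means, scaled by air density and heat capacity: H = ρ c_p ⟨w'T'⟩. The synthetic high-frequency series below is purely illustrative; only the formula reflects the EC method itself.

```python
import numpy as np

# Synthetic 10-minute record at ~10 Hz: vertical wind w and temperature T,
# with T deliberately correlated with w to mimic an upward sensible heat flux.
rng = np.random.default_rng(42)
n = 6000
w = 0.4 * rng.standard_normal(n)                    # vertical wind (m/s)
T = 20.0 + 0.5 * rng.standard_normal(n) + 0.3 * w   # air temperature (deg C)

rho, cp = 1.2, 1005.0          # air density (kg/m^3), heat capacity (J/kg/K)
wp = w - w.mean()              # fluctuations about the averaging-block mean
Tp = T - T.mean()
H = rho * cp * np.mean(wp * Tp)                     # W/m^2, positive = upward
print(H > 0)
```

The "substantial data processing" the abstract mentions covers exactly the steps elided here: coordinate rotation, detrending or block averaging, despiking, and frequency-response corrections applied before the covariance is taken.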
Directory of Open Access Journals (Sweden)
Tyler W. H. Backman
2018-01-01
Full Text Available Determination of internal metabolic fluxes is crucial for fundamental and applied biology because they map how carbon and electrons flow through metabolism to enable cell function. 13C Metabolic Flux Analysis (13C MFA) and Two-Scale 13C Metabolic Flux Analysis (2S-13C MFA) are two techniques used to determine such fluxes. Both operate on the simplifying approximation that metabolic flux from peripheral metabolism into central "core" carbon metabolism is minimal, and can be omitted when modeling isotopic labeling in core metabolism. The validity of this "two-scale" or "bow tie" approximation is supported both by the ability to accurately model experimental isotopic labeling data, and by experimentally verified metabolic engineering predictions using these methods. However, the boundaries of core metabolism that satisfy this approximation can vary across species, and across cell culture conditions. Here, we present a set of algorithms that (1) systematically calculate flux bounds for any specified "core" of a genome-scale model so as to satisfy the bow tie approximation and (2) automatically identify an updated set of core reactions that can satisfy this approximation more efficiently. First, we leverage linear programming to simultaneously identify the lowest fluxes from peripheral metabolism into core metabolism compatible with the observed growth rate and extracellular metabolite exchange fluxes. Second, we use Simulated Annealing to identify an updated set of core reactions that allow for a minimum of fluxes into core metabolism to satisfy these experimental constraints. Together, these methods accelerate and automate the identification of a biologically reasonable set of core reactions for use with 13C MFA or 2S-13C MFA, as well as provide for a substantially lower set of flux bounds for fluxes into the core as compared with previous methods.
We provide an open source Python implementation of these algorithms at https://github.com/JBEI/limitfluxtocore.
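Step (1) above can be shown in miniature: given a measured growth rate and exchange bounds, a linear program finds the smallest flux that must flow from peripheral metabolism into the core. The three-reaction toy network below is entirely hypothetical and unrelated to the `limitfluxtocore` code itself.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> metabolite A -> (peripheral reaction) -> B -> growth.
# Flux vector v = [v_uptake, v_peripheral_in, v_growth]; S v = 0 at steady state.
S = np.array([[1, -1, 0],     # A: produced by uptake, consumed by A->B
              [0, 1, -1]])    # B: produced by A->B, consumed by growth
c = np.array([0, 1, 0])       # objective: minimize the peripheral-to-core flux
bounds = [(0, 10), (0, 10), (5, 5)]   # growth rate measured as 5 (arb. units)
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print(round(res.x[1], 6))     # minimal flux into the core: 5.0
```

On a genome-scale model the same LP is posed over thousands of reactions, and the resulting minimal influxes become the flux bounds that certify (or refute) the bow tie approximation for a candidate core.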
Directory of Open Access Journals (Sweden)
Dazhi Jiang
2015-01-01
Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete statement of the question is "can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on automatic design of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space like most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that algorithms designed automatically by computers can compete with algorithms designed by human beings.
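The baseline the paper compares against, standard differential evolution (DE/rand/1/bin), is compact enough to sketch in full. The sphere function and the parameter values below are typical textbook choices, not the paper's benchmark settings.

```python
import random

# Standard differential evolution, DE/rand/1/bin, minimizing the sphere function.
random.seed(3)
D, NP, F, CR = 5, 20, 0.5, 0.9        # dimension, population, scale, crossover

def sphere(x):
    return sum(v * v for v in x)

pop = [[random.uniform(-5, 5) for _ in range(D)] for _ in range(NP)]
for gen in range(300):
    for i in range(NP):
        a, b, c = random.sample([j for j in range(NP) if j != i], 3)
        jrand = random.randrange(D)    # guarantees at least one mutated gene
        trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                 if (random.random() < CR or j == jrand) else pop[i][j]
                 for j in range(D)]
        if sphere(trial) <= sphere(pop[i]):      # greedy one-to-one selection
            pop[i] = trial

best = min(pop, key=sphere)
print(sphere(best) < 1e-6)
```

Note that the mutation and crossover operators here are fixed by hand; the paper's contribution is precisely to let the algorithm evolve such operators in an operator space alongside the solutions.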
Triode for magnetic flux quanta.
Vlasko-Vlasov, Vitalii; Colauto, Fabiano; Benseman, Timothy; Rosenmann, Daniel; Kwok, Wai-Kwong
We designed a magnetic vortex triode using an array of closely spaced soft magnetic Py strips on top of a Nb superconducting film. The strips act similarly to the grid electrode in an electronic triode, where the electron flow is regulated by the grid potential. In our case, we tune the vortex motion by the magnetic charge potential of the strip edges, using a small magnetic field rotating in the film plane. The magnetic charges emerging at the strip edges, proportional to the magnetization component perpendicular to the edge direction, form linear potential barriers or valleys for vortex motion in the superconducting layer. We directly imaged the normal flux penetration into the Py/Nb films and observed retarded or accelerated entry of the normal vortices depending on the in-plane magnetization direction in the strips. The observed flux behavior is explained by interactions between magnetically charged lines and magnetic monopoles of vortices similar to those between electrically charged strings and point charges. We discuss the possibility of using our design for manipulation of individual vortices in high-speed, low-power superconducting electronic circuits. This work was supported by the U.S. DOE, Office of Science, Materials Sciences and Engineering Division, and Office of BES (contract DE-AC02-06CH11357). F. Colauto thanks the Sao Paulo Research Foundation FAPESP (Grant No. 2015/06.085-3).
Neutron flux enhancement at LASREF
International Nuclear Information System (INIS)
Sommer, W.F.; Ferguson, P.D.; Wechsler, M.S.
1991-01-01
The accelerator at the Los Alamos Meson Physics Facility produces a 1-mA beam of protons at an energy of 800 MeV. Since 1985, the Los Alamos Spallation Radiation Effects Facility (LASREF) has made use of the neutron flux that is generated as the incident protons interact with the nuclei in targets and a copper beam stop. A variety of basic and applied experiments in radiation damage and radiation effects have been completed. Recent studies indicate that the flux at LASREF can be increased by at least a factor of ten from the present level of about 5 × 10^17 m^-2 s^-1. This requires changing the beam-stop material from Cu to W and optimizing the geometry of the beam-target interaction region. These studies are motivated by the need for a large volume, high energy, and high intensity neutron source in the development of materials for advanced energy concepts such as fusion reactors. 18 refs., 7 figs., 2 tabs
Neutron flux enhancement at LASREF
Energy Technology Data Exchange (ETDEWEB)
Sommer, W.F. (Los Alamos National Lab., Los Alamos, NM (United States)); Ferguson, P.D. (Univ. of Missouri, Rolla, MO (United States)); Wechsler, M.S. (Iowa State Univ., Ames, IA (United States))
1992-09-01
The accelerator at the Los Alamos Meson Physics Facility produces a 1 mA beam of protons at an energy of 800 MeV. Since 1985, the Los Alamos Spallation Radiation Effects Facility (LASREF) has made use of the neutron flux that is generated as the incident protons interact with the targets and a copper beam stop. A variety of basic and applied experiments in radiation damage and radiation effects have been completed. Recent studies indicate that the flux at LASREF can be increased by at least a factor of 10 from the present level of about 5 × 10^17 m^-2 s^-1. This requires changing the beam stop material from Cu to W and optimizing the geometry of the beam-target interaction region. These studies are motivated by the need for a large volume, high energy, and high intensity neutron source in the development of materials for advanced energy concepts such as fusion reactors. (orig.).
Reactive Collision Avoidance Algorithm
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation that passive algorithms cannot handle. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
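The bang-off-bang parameterization can be sketched in one dimension: full lateral acceleration for a time t1, coast for t2, then full deceleration for t1 (ending at zero lateral speed), with a search over (t1, t2) standing in for the paper's precomputed look-up table. All numbers below are made up for illustration.

```python
import itertools

A_MAX, T_COLLIDE, D_SAFE = 1.0, 10.0, 5.0   # accel limit, impact time, keep-out

def displacement(t1, t2, t):
    """Lateral offset at time t for an accel-t1 / coast-t2 / decel-t1 profile."""
    if t <= t1:                                  # first burn
        return 0.5 * A_MAX * t * t
    d1, v1 = 0.5 * A_MAX * t1 * t1, A_MAX * t1
    if t <= t1 + t2:                             # coast phase
        return d1 + v1 * (t - t1)
    d2 = d1 + v1 * t2
    tau = min(t - t1 - t2, t1)                   # decel phase, then stopped
    return d2 + v1 * tau - 0.5 * A_MAX * tau * tau

# Coarse grid search: cheapest maneuver that clears D_SAFE by collision time.
best = None
for t1, t2 in itertools.product([i * 0.25 for i in range(1, 21)], repeat=2):
    if 2 * t1 + t2 <= T_COLLIDE and displacement(t1, t2, T_COLLIDE) >= D_SAFE:
        fuel = 2 * t1 * A_MAX                    # total burn time proxies fuel
        if best is None or fuel < best[0]:
            best = (fuel, t1, t2)
print(best)   # -> (2.0, 1.0, 4.0): burn 1 s, coast 4 s, burn 1 s
```

In the real algorithm the search is replaced by a table lookup indexed by collision geometry, which is what makes onboard real-time execution feasible.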
An Explicit Upwind Algorithm for Solving the Parabolized Navier-Stokes Equations
Korte, John J.
1991-01-01
An explicit, upwind algorithm was developed for the direct (noniterative) integration of the 3-D Parabolized Navier-Stokes (PNS) equations in a generalized coordinate system. The new algorithm uses upwind approximations of the numerical fluxes for the pressure and convection terms, obtained by combining flux difference splittings (FDS) formed from the solution of an approximate Riemann problem (RP). The approximate RP is solved using an extension of the method developed by Roe for steady supersonic flow of an ideal gas. Roe's method is extended for use with the 3-D PNS equations expressed in generalized coordinates and to include Vigneron's technique of splitting the streamwise pressure gradient. The difficulty associated with applying Roe's scheme in the subsonic region is overcome. The second-order upwind differencing of the flux derivatives is obtained by adding FDS to either an original forward or backward differencing of the flux derivative. This approach is used to modify an explicit MacCormack differencing scheme into an upwind differencing scheme. The second-order upwind flux approximations, applied with flux limiters, provide a method for numerically capturing shocks without the need for additional artificial damping terms that require adjustment by the user. In addition, a cubic equation is derived for determining Vigneron's pressure splitting coefficient using the updated streamwise flux vector. Decoding the streamwise flux vector with the updated value of Vigneron's pressure splitting improves the stability of the scheme. The new algorithm is applied to 2-D and 3-D supersonic and hypersonic laminar flow test cases. Results are presented for the experimental studies of Holden and of Tracy. In addition, a flow field solution is presented for a generic hypersonic aircraft at a Mach number of 24.5 and angle of attack of 1 degree. The computed results compare well to both experimental data and numerical results from other algorithms. Computational times required
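The flux-difference-splitting idea underlying the scheme can be shown on a scalar skeleton: a first-order Roe-type update for the 1-D Burgers equation, where the interface flux upwinds via the Roe-averaged wave speed a = (u_L + u_R)/2. This is an illustrative sketch of FDS only, not the PNS algorithm itself.

```python
import numpy as np

# First-order Roe-type step for u_t + (u^2/2)_x = 0 on a fixed grid.
def roe_step(u, dx, dt):
    uL, uR = u[:-1], u[1:]
    a = 0.5 * (uL + uR)                       # Roe average for f(u) = u^2/2
    fL, fR = 0.5 * uL ** 2, 0.5 * uR ** 2
    flux = 0.5 * (fL + fR) - 0.5 * np.abs(a) * (uR - uL)   # upwinded flux
    un = u.copy()
    un[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
    return un                                  # end cells held fixed (in/outflow)

n = 200
x = np.linspace(0, 1, n)
u = np.where(x < 0.5, 1.0, 0.0)                # right-moving shock, speed 1/2
dx = x[1] - x[0]
for _ in range(100):
    u = roe_step(u, dx, dt=0.5 * dx)           # CFL 0.5 w.r.t. max|u| = 1
# After t = 50*dx ~ 0.25, the shock should sit near x = 0.5 + 0.25/2 ~ 0.625.
front = x[np.argmin(np.abs(u - 0.5))]
print(abs(front - 0.625) < 0.05)
```

The dissipation term |a|(u_R - u_L) is what lets the scheme capture the shock crisply without tunable artificial damping; the paper's second-order, flux-limited variant sharpens this further in the full PNS setting.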
Partitional clustering algorithms
2015-01-01
This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...
Treatment Algorithm for Ameloblastoma
Directory of Open Access Journals (Sweden)
Madhumati Singh
2014-01-01
Full Text Available Ameloblastoma is the second most common benign odontogenic tumour (Shafer et al. 2006), constituting 1–3% of all cysts and tumours of the jaw, with locally aggressive behaviour, a high recurrence rate, and malignant potential (Chaine et al. 2009). Various treatment algorithms for ameloblastoma have been reported; however, a universally accepted approach remains unsettled and controversial (Chaine et al. 2009). The treatment algorithm to be chosen depends on size (Escande et al. 2009; Sampson and Pogrel 1999), anatomical location (Feinberg and Steinberg 1996), histologic variant (Philipsen and Reichart 1998), and anatomical involvement (Jackson et al. 1996). In this paper various such treatment modalities, which include enucleation and peripheral osteotomy, partial maxillectomy, segmental resection and reconstruction with fibula graft, and radical resection and reconstruction with rib graft, and their recurrence rates are reviewed with a study of five cases.
An Algorithmic Diversity Diet?
DEFF Research Database (Denmark)
Sørensen, Jannick Kirk; Schmidt, Jan-Hinrik
2016-01-01
With the growing influence of personalized algorithmic recommender systems on the exposure of media content to users, the relevance of discussing the diversity of recommendations increases, particularly as far as public service media (PSM) is concerned. An imagined implementation of a diversity diet system, however, triggers not only the classic discussion of the reach–distinctiveness balance for PSM, but also shows that 'diversity' is understood very differently in algorithmic recommender system communities than it is editorially and politically in the context of PSM. The design of a diversity diet system generates questions not just about editorial power, personal freedom and techno-paternalism, but also about the embedded politics of recommender systems as well as the human skills affiliated with PSM editorial work and the nature of PSM content.
Aydemir, Bahar
2017-01-01
The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider (LHC) at CERN is composed of a large number of distributed hardware and software components. The TDAQ system consists of about 3000 computers and more than 25000 applications which, in a coordinated manner, provide the data-taking functionality of the overall system. A number of online services are required to configure, monitor and control ATLAS data taking. In particular, the configuration service provides the configuration of the above components. The configuration of the ATLAS data acquisition system is stored in an XML-based object database named OKS. The DAL (Data Access Library) allows its information to be accessed by C++, Java and Python clients in a distributed environment. Some of the information has a quite complicated structure, so its extraction requires writing special algorithms. These algorithms are available in C++ and have been partially reimplemented in Java. The goal of the projec...
Kramer, Oliver
2017-01-01
This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalisms and thus opens the subject to a broader audience in comparison to manuscripts overloaded by notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.
Boosting foundations and algorithms
Schapire, Robert E
2012-01-01
Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.
Stochastic split determinant algorithms
International Nuclear Information System (INIS)
Horvatha, Ivan
2000-01-01
I propose a large class of stochastic Markov processes associated with probability distributions analogous to that of lattice gauge theory with dynamical fermions. The construction incorporates the idea of approximate spectral split of the determinant through local loop action, and the idea of treating the infrared part of the split through explicit diagonalizations. I suggest that exact algorithms of practical relevance might be based on Markov processes so constructed
Quantum gate decomposition algorithms.
Energy Technology Data Exchange (ETDEWEB)
Slepoy, Alexander
2006-07-01
Quantum computing algorithms can be conveniently expressed in the format of quantum logical circuits. Such circuits consist of sequences of coupled operations, termed ''quantum gates'', acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general ''quantum gates'' operating on n qubits, as composed of a sequence of generic elementary ''gates''.
KAM Tori Construction Algorithms
Wiesel, W.
In this paper we evaluate and compare two algorithms for the calculation of KAM tori in Hamiltonian systems. The direct fitting of a torus Fourier series to a numerically integrated trajectory is the first method, while an accelerated finite Fourier transform is the second method. The finite Fourier transform, with Hanning window functions, is by far superior in both computational loading and numerical accuracy. Some thoughts on applications of KAM tori are offered.
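The windowed-transform approach favored above can be sketched in a few lines: applying a Hanning window before the FFT suppresses spectral leakage when extracting the fundamental frequencies of a numerically integrated trajectory. This is only an illustrative sketch of the basic technique, not the paper's accelerated finite Fourier transform; the signal and function names are ours.

```python
import numpy as np

def dominant_frequency(signal, dt):
    """Estimate the dominant frequency of a sampled trajectory via a
    Hanning-windowed FFT (basic sketch of the windowed-transform idea)."""
    window = np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(signal * window))
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

# Synthetic quasi-periodic trajectory: 0.1 Hz fundamental plus a weaker tone
t = np.arange(0, 200, 0.1)
x = np.cos(2 * np.pi * 0.1 * t) + 0.3 * np.cos(2 * np.pi * 0.031 * t)
print(round(dominant_frequency(x, 0.1), 3))
```

With 2000 samples at dt = 0.1 s the frequency resolution is 0.005 Hz, so the 0.1 Hz fundamental falls exactly on a bin and is recovered despite the second tone.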
Irregular Applications: Architectures & Algorithms
Energy Technology Data Exchange (ETDEWEB)
Feo, John T.; Villa, Oreste; Tumeo, Antonino; Secchi, Simone
2012-02-06
Irregular applications are characterized by irregular data structures, control and communication patterns. Novel irregular high-performance applications that deal with large data sets have recently appeared. Unfortunately, current high-performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, domain specialists and computer scientists that consider both the architecture and the software stack may be able to provide solutions to the challenges of modern irregular applications.
Large scale tracking algorithms
Energy Technology Data Exchange (ETDEWEB)
Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
NEUTRON ALGORITHM VERIFICATION TESTING
International Nuclear Information System (INIS)
COWGILL, M.; MOSBY, W.; ARGONNE NATIONAL LABORATORY-WEST
2000-01-01
Active well coincidence counter assays have been performed on uranium metal highly enriched in 235U. The data obtained in the present program, together with highly enriched uranium (HEU) metal data obtained in other programs, have been analyzed using two approaches, the standard approach and an alternative approach developed at BNL. Analysis of the data with the standard approach revealed that the form of the relationship between the measured reals and the 235U mass varied, being sometimes linear and sometimes a second-order polynomial. In contrast, application of the BNL algorithm, which takes into consideration the totals, consistently yielded linear relationships between the totals-corrected reals and the 235U mass. The constants in these linear relationships varied with geometric configuration and level of enrichment. This indicates that, when the BNL algorithm is used, calibration curves can be established with fewer data points and with more certainty than if a standard algorithm is used. However, this potential advantage has only been established for assays of HEU metal. In addition, the method is sensitive to the stability of natural background in the measurement facility
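The practical payoff of a linear calibration curve can be sketched as follows: a straight-line fit of totals-corrected reals against 235U mass needs only a few calibration points to invert an assay. The calibration values below are invented for illustration, not data from the report.

```python
import numpy as np

# Hypothetical calibration points: totals-corrected reals vs. 235U mass (g).
# Invented numbers illustrating the linear relationship the BNL algorithm yields.
corrected_reals = np.array([120.0, 245.0, 372.0, 498.0])
mass_g = np.array([100.0, 200.0, 300.0, 400.0])

# Least-squares straight-line calibration
slope, intercept = np.polyfit(corrected_reals, mass_g, 1)

def assay_mass(reals):
    """Convert a totals-corrected reals rate to a 235U mass estimate (g)."""
    return slope * reals + intercept

print(round(assay_mass(310.0), 1))
```

A second-order polynomial, by contrast, would require more calibration points and extrapolates far less safely, which is the advantage the abstract attributes to the BNL approach.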
Methods and applications in high flux neutron imaging
International Nuclear Information System (INIS)
Ballhausen, H.
2007-01-01
This treatise develops new methods for high flux neutron radiography and high flux neutron tomography and describes some of their applications in actual experiments. Instead of single images, time series can be acquired with short exposure times due to the available high intensity. To best use the increased amount of information, new estimators are proposed, which extract accurate results from the recorded ensembles, even if the individual piece of data is very noisy and in addition severely affected by systematic errors such as an influence of gamma background radiation. The spatial resolution of neutron radiographies, usually limited by beam divergence and the inherent resolution of the scintillator, can be significantly increased by scanning the sample with a pinhole micro-collimator. This technique circumvents any limitations in present detector design and, due to the available high intensity, could be successfully tested. Imaging with scattered neutrons, as opposed to conventional total-attenuation-based imaging, determines separately the absorption and scattering cross sections within the sample. For the first time, even coherent angle-dependent scattering could be visualized in a spatially resolved manner. New applications of high flux neutron imaging are presented, such as materials engineering experiments on innovative metal joints, time-resolved tomography on multilayer stacks of fuel cells under operation, and others. A new implementation of an algorithm for the algebraic reconstruction of tomography data executes even in the case of missing information, such as limited-angle tomography, and returns quantitative reconstructions. The setup of the world-leading high flux radiography and tomography facility at the Institut Laue-Langevin is presented. A comprehensive appendix covers the physical and technical foundations of neutron imaging. (orig.)
Convex hull ranking algorithm for multi-objective evolutionary algorithms
Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.
2012-01-01
Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity
YANA – a software tool for analyzing flux modes, gene-expression and enzyme activities
Directory of Open Access Journals (Sweden)
Engels Bernd
2005-06-01
Full Text Available Abstract Background A number of algorithms for steady state analysis of metabolic networks have been developed over the years. Of these, Elementary Mode Analysis (EMA) has proven especially useful. Despite its low user-friendliness, METATOOL, as a reliable high-performance implementation of the algorithm, has been the instrument of choice up to now. As reported here, the analysis of metabolic networks has been improved by an editor and analyzer of metabolic flux modes. Analysis routines for expression levels and the most central, well connected metabolites and their metabolic connections are of particular interest. Results YANA features a platform-independent, dedicated toolbox for metabolic networks with a graphical user interface to calculate (integrating METATOOL), edit (including support for the SBML format), visualize, centralize, and compare elementary flux modes. Further, YANA calculates expected flux distributions for a given Elementary Mode (EM) activity pattern and vice versa. Moreover, a dissection algorithm, a centralization algorithm, and an average diameter routine can be used to simplify and analyze complex networks. Proteomics or gene expression data give a rough indication of some individual enzyme activities, whereas the complete flux distribution in the network is often not known. As such data are noisy, YANA features a fast evolutionary algorithm (EA) for the prediction of EM activities with minimum error, including alerts for inconsistent experimental data. We offer the possibility to include further known constraints (e.g. growth constraints) in the EA calculation process. The redox metabolism around glutathione reductase serves as an illustration example. All software and documentation are available for download at http://yana.bioapps.biozentrum.uni-wuerzburg.de. Conclusion A graphical toolbox and an editor for METATOOL as well as a series of additional routines for metabolic network analyses constitute a new user
Photon Counting Using Edge-Detection Algorithm
Gin, Jonathan W.; Nguyen, Danh H.; Farr, William H.
2010-01-01
New applications such as high-datarate, photon-starved, free-space optical communications require photon counting at flux rates into gigaphoton-per-second regimes coupled with subnanosecond timing accuracy. Current single-photon detectors that are capable of handling such operating conditions are designed in an array format and produce output pulses that span multiple sample times. In order to discern one pulse from another and not to overcount the number of incoming photons, a detection algorithm must be applied to the sampled detector output pulses. As flux rates increase, the ability to implement such a detection algorithm becomes difficult within a digital processor that may reside within a field-programmable gate array (FPGA). Systems have been developed and implemented to both characterize gigahertz bandwidth single-photon detectors, as well as process photon count signals at rates into gigaphotons per second in order to implement communications links at SCPPM (serial concatenated pulse position modulation) encoded data rates exceeding 100 megabits per second with efficiencies greater than two bits per detected photon. A hardware edge-detection algorithm and corresponding signal combining and deserialization hardware were developed to meet these requirements at sample rates up to 10 GHz. The photon discriminator deserializer hardware board accepts four inputs, which allows for the ability to take inputs from a quadphoton counting detector, to support requirements for optical tracking with a reduced number of hardware components. The four inputs are hardware leading-edge detected independently. After leading-edge detection, the resultant samples are ORed together prior to deserialization. The deserialization is performed to reduce the rate at which data is passed to a digital signal processor, perhaps residing within an FPGA. The hardware implements four separate analog inputs that are connected through RF connectors. Each analog input is fed to a high-speed 1
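The leading-edge detection step described above can be sketched compactly: marking only the rising threshold crossings ensures a pulse that spans multiple sample times is counted as a single photon. This is an illustrative software sketch of the principle, not the 10 GHz hardware implementation; the trace values are invented.

```python
import numpy as np

def leading_edges(samples, threshold):
    """Mark only rising threshold crossings, so a detector pulse smeared
    over several sample times registers as one photon event."""
    above = samples > threshold
    # np.roll wraps the last element to the front; above[0] handles it
    # correctly as long as the trace starts below threshold.
    return above & ~np.roll(above, 1)

# Two detector pulses, each spanning multiple samples
trace = np.array([0.0, 0.1, 0.9, 1.0, 0.8, 0.1, 0.0, 0.7, 1.1, 0.2])
edges = leading_edges(trace, 0.5)
print(int(edges.sum()))  # one count per pulse, not per above-threshold sample
```

In the hardware described, four such edge-detected channels are then ORed together before deserialization to reduce the rate passed to the digital signal processor.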
Improved 3-D turbomachinery CFD algorithm
Janus, J. Mark; Whitfield, David L.
1988-01-01
The building blocks of a computer algorithm developed for the time-accurate flow analysis of rotating machines are described. The flow model is a finite volume method utilizing a high resolution approximate Riemann solver for interface flux definitions. This block LU implicit numerical scheme possesses apparent unconditional stability. Multi-block composite gridding is used to orderly partition the field into a specified arrangement. Block interfaces, including dynamic interfaces, are treated such as to mimic interior block communication. Special attention is given to the reduction of in-core memory requirements by placing the burden on secondary storage media. Broad applicability is implied, although the results presented are restricted to that of an even blade count configuration. Several other configurations are presently under investigation, the results of which will appear in subsequent publications.
E-core transverse flux machine with integrated fault detection system
DEFF Research Database (Denmark)
Rasmussen, Peter Omand; Runólfsson, Gunnar; Thorsdóttir, Thórunn Ágústa
2011-01-01
extent also thermal. Since the E-core transverse flux machine belongs to the family of SRMs, it has the unique property of intervals without current in the windings. By careful investigation of the voltage and current in these intervals, a very simple method to detect single and partial turn short-circuit faults has been developed. For other types of machines the single and partial turn short circuit is very difficult to deal with and normally requires very comprehensive detection and calculation schemes. The developed detection algorithm combined with the E-core transverse flux machine...
Heat flux management via advanced magnetic divertor configurations and divertor detachment
Energy Technology Data Exchange (ETDEWEB)
Kolemen, E., E-mail: ekolemen@princeton.edu [Princeton University, Princeton, NJ 08544 (United States); Allen, S.L. [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Bray, B.D. [General Atomics, PO Box 85608, San Diego, CA 92186-5608 (United States); Fenstermacher, M.E. [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Humphreys, D.A.; Hyatt, A.W. [General Atomics, PO Box 85608, San Diego, CA 92186-5608 (United States); Lasnier, C.J. [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Leonard, A.W. [General Atomics, PO Box 85608, San Diego, CA 92186-5608 (United States); Makowski, M.A.; McLean, A.G. [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Maingi, R.; Nazikian, R. [Princeton Plasma Physics Laboratory, Princeton, NJ 08543 (United States); Petrie, T.W. [General Atomics, PO Box 85608, San Diego, CA 92186-5608 (United States); Soukhanovskii, V.A. [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Unterberg, E.A. [Oak Ridge National Laboratory, PO Box 2008, Oak Ridge, TN 37831 (United States)
2015-08-15
The snowflake divertor (SFD) control and detachment control to manage the heat flux at the divertor are successfully demonstrated at DIII-D. Results of the development and implementation of these two heat-flux reduction control methods are presented. The SFD control algorithm calculates the position of the two null points in real time and controls shaping-coil currents to achieve and stabilize various snowflake configurations. Detachment control holds the detachment front at a specified distance between the strike point and the X-point throughout the shot.
Anderson, Ray; Skaggs, Todd; Alfieri, Joseph; Kustas, William; Wang, Dong; Ayars, James
2016-04-01
Partitioned land surface fluxes (e.g. evaporation, transpiration, photosynthesis, and ecosystem respiration) are needed as input, calibration, and validation data for numerous hydrological and land surface models. However, one of the most commonly used techniques for measuring land surface fluxes, Eddy Covariance (EC), can only directly measure net, combined water and carbon fluxes (evapotranspiration and net ecosystem exchange/productivity). Analysis of the correlation structure of high frequency EC time series (hereafter flux partitioning or FP) has been proposed to directly partition net EC fluxes into their constituent components using leaf-level water use efficiency (WUE) data to separate stomatal and non-stomatal transport processes. FP has significant logistical and spatial representativeness advantages over other partitioning approaches (e.g. isotopic fluxes, sap flow, microlysimeters), but the performance of the FP algorithm relies on the accuracy of the intercellular CO2 (ci) concentration used to parameterize WUE for each flux averaging interval. In this study, we tested several parameterizations for ci as a function of atmospheric CO2 (ca), including (1) a constant ci/ca ratio for C3 and C4 photosynthetic pathway plants, (2) species-specific ci/ca-Vapor Pressure Deficit (VPD) relationships (quadratic and linear), and (3) generalized C3 and C4 photosynthetic pathway ci/ca-VPD relationships. We tested these ci parameterizations at three agricultural EC towers from 2011-present in C4 and C3 crops (sugarcane - Saccharum officinarum L. and peach - Prunus persica), and validated against sap-flow sensors installed at the peach site. The peach results show that the FP algorithm driven by species-specific parameterizations converged significantly more often (~20% more frequently) than with the constant ci/ca ratio or the generic C3-VPD relationship. The FP algorithm parameterizations with a generic VPD relationship also had slightly higher transpiration (5 Wm-2
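The constant ci/ca parameterization tested above can be sketched with the standard leaf-level relation WUE ≈ ca(1 − ci/ca)/(1.6·VPD), where 1.6 is the water-to-CO2 diffusivity ratio. This is a generic textbook form offered for illustration; the ratio values and units below are typical assumptions, not the study's fitted parameterizations.

```python
def leaf_wue(ca, vpd, ci_ca_ratio):
    """Leaf-level water use efficiency under a constant ci/ca assumption:
    WUE = ca * (1 - ci/ca) / (1.6 * VPD), with ca in ppm and VPD in kPa.
    Typical assumed ratios: ~0.7 for C3 plants, ~0.4 for C4 plants."""
    return ca * (1.0 - ci_ca_ratio) / (1.6 * vpd)

# Example: ca = 400 ppm, VPD = 1.5 kPa
print(round(leaf_wue(400.0, 1.5, 0.7), 1))  # C3 crop (e.g. peach)
print(round(leaf_wue(400.0, 1.5, 0.4), 1))  # C4 crop (e.g. sugarcane)
```

The VPD-dependent parameterizations in the study replace the constant ratio with a fitted ci/ca(VPD) function, which is what improved the algorithm's convergence rate at the peach site.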
International Nuclear Information System (INIS)
Smirnov, A; Alekseev, G; Korablev, A; Esau, I
2010-01-01
The Nordic Seas are an important area of the World Ocean where warm Atlantic waters penetrate far north, forming the mild climate of Northern Europe. These waters represent the northern rim of the global thermohaline circulation. Estimates of the relationships between the net heat flux and mixed layer properties in the Nordic Seas are examined. Oceanographic data are derived from the Oceanographic Data Base (ODB) compiled in the Arctic and Antarctic Research Institute. Ocean weather ship 'Mike' (OWS) data are used to calculate radiative and turbulent components of the net heat flux. The net shortwave flux was calculated using a satellite albedo dataset and the EPA model. The net longwave flux was estimated by the Southampton Oceanography Centre (SOC) method. Turbulent fluxes at the air-sea interface were calculated using the COARE 3.0 algorithm. The net heat flux was calculated by using oceanographic and meteorological data of the OWS 'Mike'. The mixed layer depth was estimated for the period from 2002 until 2009 by the 'Mike' data as well. A good correlation between these two parameters has been found. Sensible and latent heat fluxes, controlled by the surface air temperature/sea surface temperature gradient, are the main contributors to the net heat flux. Significant correlation was found between heat flux variations at the OWS 'Mike' location and sea ice export from the Arctic Ocean.
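The heat budget assembled above reduces to a sum of four separately estimated components. The sketch below shows that bookkeeping with invented wintertime values and an assumed positive-into-ocean sign convention; the component estimation itself (EPA model, SOC method, COARE 3.0) is far more involved.

```python
def net_heat_flux(shortwave, longwave, sensible, latent):
    """Net air-sea heat flux (W/m^2) as the sum of its four components,
    all taken positive into the ocean (sign convention is an assumption).
    In the study, the radiative terms come from the EPA and SOC methods
    and the turbulent terms from the COARE 3.0 algorithm."""
    return shortwave + longwave + sensible + latent

# Illustrative wintertime Nordic Seas values (invented): weak solar input,
# strong longwave, sensible and latent losses to the atmosphere.
q_net = net_heat_flux(shortwave=20.0, longwave=-60.0, sensible=-120.0, latent=-90.0)
print(q_net)
```

A strongly negative net flux like this drives the wintertime deepening of the mixed layer, which is the correlation the abstract reports.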
Directory of Open Access Journals (Sweden)
Vinay KUMAR
2011-06-01
Full Text Available This paper proposes an algorithm for direct flux- and torque-controlled three-phase induction motor drive systems. The method is based on control of the slip speed and decoupled control of the amplitude and angle of the reference stator flux for determining the required stator voltage vector. In the proposed model, an integrator unit is not required to generate the reference stator flux angle for calculating the required stator voltage vector, hence it eliminates the initial-value problems in real time. Within the given sampling time, flux as well as torque errors are controlled by the stator voltage vector, which is evaluated from the reference stator flux. Direct torque control is achieved through the reference stator flux angle, which is generated from the instantaneous slip speed angular frequency and the stator flux angular frequency. The amplitude of the reference stator flux is kept constant at its rated value. This technique gives better performance in the three-phase induction motor than the conventional technique. Simulation results for a 3 hp induction motor drive, for both the proposed and conventional techniques, are presented and compared. From the results it is found that the stator current, flux linkage, and torque ripples are decreased with the proposed technique.
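The integrator-free angle generation described above can be sketched as a discrete angle update: each sampling period, the reference flux angle advances by the stator angular frequency (rotor electrical speed plus slip speed), while the flux amplitude is held at its rated value. This is a schematic sketch of the scheme; the variable names and numbers are ours, not the paper's.

```python
import cmath

def reference_flux_vector(theta_prev, omega_slip, omega_rotor, ts, psi_rated):
    """Advance the reference stator flux angle over one sampling period by
    the stator angular frequency (rotor speed + slip speed), holding the
    flux amplitude at its rated value. Returns the new angle and the
    complex reference flux vector."""
    theta = theta_prev + (omega_rotor + omega_slip) * ts
    return theta, psi_rated * cmath.exp(1j * theta)

# 100 steps of 100 us at ~314 rad/s total stator angular frequency
theta = 0.0
for _ in range(100):
    theta, psi_ref = reference_flux_vector(theta, 10.0, 304.0, 1e-4, 0.9)
print(round(theta, 3), round(abs(psi_ref), 3))
```

Because the angle is advanced incrementally from measured speeds rather than by integrating a voltage, no integrator initial condition is needed, which is the practical point the abstract emphasizes.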
Implementation of a parallel algorithm for spherical SN calculations on the IBM 3090
International Nuclear Information System (INIS)
Haghighat, A.; Lawrence, R.D.
1989-01-01
Parallel SN algorithms based on domain decomposition in angle are straightforward to develop in Cartesian geometry because the computation of the angular fluxes for a specific discrete ordinate can be performed independently of all other angles. This is not the case for curvilinear geometries, where the angular redistribution component of the discretized streaming operator results in coupling between angular fluxes along adjacent discrete ordinates. Previously, the authors developed a parallel algorithm for SN calculations in spherical geometry and examined its iterative convergence for criticality and detector problems with differing scattering/absorption ratios. In this paper, the authors describe the implementation of the algorithm on an IBM 3090 Model 400 (four processors) and present computational results illustrating the efficiency of the algorithm relative to serial execution
Local rectification of heat flux
Pons, M.; Cui, Y. Y.; Ruschhaupt, A.; Simón, M. A.; Muga, J. G.
2017-09-01
We present a chain-of-atoms model where heat is rectified, with different fluxes from the hot to the cold baths located at the chain boundaries when the temperature bias is reversed. The chain is homogeneous except for boundary effects and a local modification of the interactions at one site, the “impurity”. The rectification mechanism is due here to the localized impurity, the only asymmetrical element of the structure, apart from the externally imposed temperature bias, and does not rely on putting in contact different materials or other known mechanisms such as grading or long-range interactions. The effect survives if all interaction forces are linear except the ones for the impurity.
LOFT gamma densitometer background fluxes
International Nuclear Information System (INIS)
Grimesey, R.A.; McCracken, R.T.
1978-01-01
Background gamma-ray fluxes were calculated at the location of the γ densitometers without integral shielding at both the hot-leg and cold-leg primary piping locations. The principal sources for background radiation at the γ densitometers are 16N activity from the primary piping H2O and γ radiation from reactor internal sources. The background radiation was calculated by the point-kernel codes QAD-BSA and QAD-P5A. Reasonable assumptions were required to convert the response functions calculated by point-kernel procedures into the gamma-ray spectrum from reactor internal sources. A brief summary of point-kernel equations and theory is included
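The basic kernel that point-kernel codes such as QAD-BSA integrate over distributed sources can be sketched directly: the uncollided flux from an isotropic point source through an attenuating medium is φ = S·exp(−μr)/(4πr²). The numbers below are illustrative, and buildup factors are omitted from this sketch.

```python
import math

def point_kernel_flux(source_strength, mu, r):
    """Uncollided gamma flux (per unit area per s) from an isotropic point
    source of strength S behind an attenuating medium:
        phi = S * exp(-mu * r) / (4 * pi * r^2)
    mu is the linear attenuation coefficient (1/cm), r the distance (cm).
    Buildup factors for scattered photons are omitted here."""
    return source_strength * math.exp(-mu * r) / (4.0 * math.pi * r ** 2)

# Illustrative: 1e9 photons/s source, mu = 0.5 /cm, detector 10 cm away
print(f"{point_kernel_flux(1e9, 0.5, 10.0):.3e}")
```

A full point-kernel calculation sums this kernel over many source points and energy groups, which is what the QAD codes automate.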
Nuclear transmutation by flux compression
International Nuclear Information System (INIS)
Seifritz, W.
2001-01-01
A new idea for the transmutation of minor actinides and long- (and even short-) lived fission products is presented. It is based on the property of neutron flux compression in nuclear (fast and/or thermal) reactors possessing spatially non-stationary critical masses. An advantage factor for the burn-up fluence of the elements to be transmuted of the order of 100 or more is obtainable compared with the classical way of transmutation. Three typical examples of such transmuters (a subcritical ring reactor with a rotating reflector, a subcritical ring reactor with a rotating spallation source, the so-called ''pulsed energy amplifier'', and a fast burn-wave reactor) are presented and analysed with regard to this purpose. (orig.) [de
Dynamics of warped flux compactifications
International Nuclear Information System (INIS)
Shiu, Gary; Underwood, Bret; Torroba, Gonzalo; Douglas, Michael R.
2008-01-01
We discuss the four dimensional effective action for type IIB flux compactifications, and obtain the quadratic terms taking warp effects into account. The analysis includes both the 4-d zero modes and their KK excitations, which become light at large warping. We identify an 'axial' type gauge for the supergravity fluctuations, which makes the four dimensional degrees of freedom manifest. The other key ingredient is the existence of constraints coming from the ten dimensional equations of motion. Applying these conditions leads to considerable simplifications, enabling us to obtain the low energy lagrangian explicitly. In particular, the warped Kaehler potential for metric moduli is computed and it is shown that there are no mixings with the KK fluctuations and the result differs from previous proposals. The four dimensional potential contains a generalization of the Gukov-Vafa-Witten term, plus usual mass terms for KK modes.
Multiobjective generalized extremal optimization algorithm for simulation of daylight illuminants
Kumar, Srividya Ravindra; Kurian, Ciji Pearl; Gomes-Borges, Marcos Eduardo
2017-10-01
Daylight illuminants are widely used as references for color quality testing and optical vision testing applications. Presently used daylight simulators make use of fluorescent bulbs that are not tunable and occupy more space inside the quality testing chambers. By designing a spectrally tunable LED light source with an optimal number of LEDs, cost, space, and energy can be saved. This paper describes an application of the generalized extremal optimization (GEO) algorithm for selection of the appropriate quantity and quality of LEDs that compose the light source. The multiobjective approach of this algorithm tries to get the best spectral simulation with minimum fitness error toward the target spectrum, correlated color temperature (CCT) the same as the target spectrum, high color rendering index (CRI), and luminous flux as required for testing applications. GEO is a global search algorithm based on phenomena of natural evolution and is especially designed to be used in complex optimization problems. Several simulations have been conducted to validate the performance of the algorithm. The methodology applied to model the LEDs, together with the theoretical basis for CCT and CRI calculation, is presented in this paper. A comparative result analysis of M-GEO evolutionary algorithm with the Levenberg-Marquardt conventional deterministic algorithm is also presented.
Pyrolytic graphite gauge for measuring heat flux
Bunker, Robert C. (Inventor); Ewing, Mark E. (Inventor); Shipley, John L. (Inventor)
2002-01-01
A gauge for measuring heat flux, especially heat flux encountered in a high temperature environment, is provided. The gauge includes at least one thermocouple and an anisotropic pyrolytic graphite body that covers at least part of, and optionally encases the thermocouple. Heat flux is incident on the anisotropic pyrolytic graphite body by arranging the gauge so that the gauge surface on which convective and radiative fluxes are incident is perpendicular to the basal planes of the pyrolytic graphite. The conductivity of the pyrolytic graphite permits energy, transferred into the pyrolytic graphite body in the form of heat flux on the incident (or facing) surface, to be quickly distributed through the entire pyrolytic graphite body, resulting in small substantially instantaneous temperature gradients. Temperature changes to the body can thereby be measured by the thermocouple, and reduced to quantify the heat flux incident to the body.
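Because the anisotropic pyrolytic graphite body equilibrates quickly, the incident flux can be recovered slug-calorimeter style from the measured temperature rise: q ≈ ρ·cp·L·(dT/dt) for a body of thickness L. This reduction step is sketched below with illustrative property values, which are assumptions rather than gauge specifications.

```python
def heat_flux_from_slope(rho, cp, thickness, dT_dt):
    """Slug-calorimeter estimate of absorbed heat flux (W/m^2): since the
    pyrolytic graphite body stays nearly isothermal, the flux follows from
    the body's heat capacity per unit area times the temperature slope:
        q = rho * cp * L * dT/dt
    rho in kg/m^3, cp in J/(kg K), thickness L in m, dT/dt in K/s."""
    return rho * cp * thickness * dT_dt

# Illustrative values: rho = 2200 kg/m^3, cp = 710 J/(kg K),
# 5 mm thick body, thermocouple measuring a 20 K/s temperature rise
print(heat_flux_from_slope(2200.0, 710.0, 0.005, 20.0))
```

The small internal temperature gradients that the pyrolytic graphite's high in-plane conductivity guarantees are what make this near-isothermal approximation reasonable.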
Minkowski vacuum transitions in (nongeometric) flux compactifications
International Nuclear Information System (INIS)
Herrera-Suarez, Wilberth; Loaiza-Brito, Oscar
2010-01-01
In this work we study the generalization of twisted homology to geometric and nongeometric backgrounds. In the process, we describe the necessary conditions for wrapping a network of D-branes on twisted cycles. If the cycle is localized in time, we show how, through mediation by an instantonic brane, some D-branes transform into fluxes on different backgrounds, including nongeometric fluxes. As a consequence, we show that in the case of a IIB six-dimensional torus compactification on a simple orientifold, the flux superpotential is not invariant under this brane-flux transition, allowing connections among different Minkowski vacuum solutions. For the case in which nongeometric fluxes are turned on, we also discuss some topological restrictions for the transition to occur. In this context, we show that some vacuum solutions are protected from changing under a brane-flux transition.
rf SQUID system as tunable flux qubit
Energy Technology Data Exchange (ETDEWEB)
Ruggiero, B. [Istituto di Cibernetica 'E. Caianiello' del Consiglio Nazionale delle Ricerche, I-80078 Pozzuoli (Italy)]. E-mail: b.ruggiero@cib.na.cnr.it; Granata, C. [Istituto di Cibernetica 'E. Caianiello' del Consiglio Nazionale delle Ricerche, I-80078 Pozzuoli (Italy); Vettoliere, A. [Istituto di Cibernetica 'E. Caianiello' del Consiglio Nazionale delle Ricerche, I-80078 Pozzuoli (Italy); Rombetto, S. [Istituto di Cibernetica 'E. Caianiello' del Consiglio Nazionale delle Ricerche, I-80078 Pozzuoli (Italy); Russo, R. [Istituto di Cibernetica 'E. Caianiello' del Consiglio Nazionale delle Ricerche, I-80078 Pozzuoli (Italy); Russo, M. [Istituto di Cibernetica 'E. Caianiello' del Consiglio Nazionale delle Ricerche, I-80078 Pozzuoli (Italy); Corato, V. [Dipartimento di Ingegneria dell'Informazione, Seconda Universita di Napoli, I-81031 Aversa (Italy); Istituto di Cibernetica 'E. Caianiello' del Consiglio Nazionale delle Ricerche, I-80078 Pozzuoli (Italy); Silvestrini, P. [Dipartimento di Ingegneria dell'Informazione, Seconda Universita di Napoli, I-81031 Aversa (Italy); Istituto di Cibernetica 'E. Caianiello' del Consiglio Nazionale delle Ricerche, I-80078 Pozzuoli (Italy)
2006-08-21
We present a fully integrated rf SQUID-based system as a flux qubit, with a high degree of control over the flux transfer function of the superconducting transformer modulating the coupling between the flux qubit and the readout system. This control is achieved by including in the superconducting flux transformer a vertical two-Josephson-junction interferometer (VJI), in which the Josephson current is precisely modulated from a maximum to zero by a transverse magnetic field parallel to the flux transformer plane. The proposed system can also be used in a more general configuration to control the off-diagonal terms in the Hamiltonian of the flux qubit and to turn the coupling between two or more qubits on and off.
Foundations of genetic algorithms 1991
1991-01-01
Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems. This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition
THE APPROACHING TRAIN DETECTION ALGORITHM
S. V. Bibikov
2015-01-01
The paper deals with a detection algorithm for rail vibroacoustic waves caused by an approaching train against a background of increased noise. The urgency of developing a train detection algorithm for conditions of increased rail noise, where railway lines are close to roads or road intersections, is justified. The algorithm is based on a method for detecting weak signals in a noisy environment. The ultimate expression for the information statistic is adjusted. We present the results of algorithm research and t...
Combinatorial optimization algorithms and complexity
Papadimitriou, Christos H
1998-01-01
This clearly written, mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms, local search heuristics for NP-complete problems, more. All chapters are supplemented by thought-provoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering.
Directory of Open Access Journals (Sweden)
Caroline Colijn
2009-08-01
Full Text Available Metabolism is central to cell physiology, and metabolic disturbances play a role in numerous disease states. Despite its importance, the ability to study metabolism at a global scale using genomic technologies is limited. In principle, complete genome sequences describe the range of metabolic reactions that are possible for an organism, but cannot quantitatively describe the behaviour of these reactions. We present a novel method for modeling metabolic states using whole cell measurements of gene expression. Our method, which we call E-Flux (a combination of 'flux' and 'expression'), extends the technique of Flux Balance Analysis by modeling maximum flux constraints as a function of measured gene expression. In contrast to previous methods for metabolically interpreting gene expression data, E-Flux utilizes a model of the underlying metabolic network to directly predict changes in metabolic flux capacity. We applied E-Flux to Mycobacterium tuberculosis, the bacterium that causes tuberculosis (TB). Key components of mycobacterial cell walls are mycolic acids, which are targets for several first-line TB drugs. We used E-Flux to predict the impact of 75 different drugs, drug combinations, and nutrient conditions on mycolic acid biosynthesis capacity in M. tuberculosis, using a public compendium of over 400 expression arrays. We tested our method using a model of mycolic acid biosynthesis as well as on a genome-scale model of M. tuberculosis metabolism. Our method correctly predicts seven of the eight known fatty acid inhibitors in this compendium and makes accurate predictions regarding the specificity of these compounds for fatty acid biosynthesis. Our method also predicts a number of additional potential modulators of TB mycolic acid biosynthesis. E-Flux thus provides a promising new approach for algorithmically predicting metabolic state from gene expression data.
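The core constraint idea of E-Flux, expression-scaled flux ceilings, can be illustrated on a toy unbranched pathway, where maximizing steady-state flux reduces to finding the bottleneck bound. The real method solves a genome-scale linear program; the gene names and numbers below are invented for illustration.

```python
def eflux_bounds(expression, v_max=10.0):
    """E-Flux-style constraint: scale each reaction's flux ceiling by its
    normalized gene expression (one simple choice among those in the paper)."""
    top = max(expression.values())
    return {rxn: v_max * x / top for rxn, x in expression.items()}

def max_linear_pathway_flux(bounds, pathway):
    """For an unbranched pathway at steady state, all reactions carry the
    same flux, so the maximum is the smallest upper bound along the chain."""
    return min(bounds[r] for r in pathway)

# Hypothetical expression data for a three-step mycolic-acid-like pathway.
expr = {"fabD": 80.0, "kasA": 20.0, "pks13": 60.0}
bounds = eflux_bounds(expr)
print(max_linear_pathway_flux(bounds, ["fabD", "kasA", "pks13"]))
```

A drug that lowers expression of any one step lowers that step's ceiling, and the predicted pathway capacity drops with the new bottleneck, which is the effect E-Flux exploits on genome-scale models.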
Essential algorithms a practical approach to computer algorithms
Stephens, Rod
2013-01-01
A friendly and accessible introduction to the most useful algorithms. Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s
Fast heat flux modulation at the nanoscale
van Zwol, P. J.; Joulain, K.; Abdallah, P. Ben; Greffet, J. J.; Chevrier, J.
2011-01-01
We introduce a new concept for electrically controlled heat flux modulation. A flux contrast larger than 10 dB is expected, with switching times on the order of tens of nanoseconds. Heat flux modulation is based on the interplay between radiative heat transfer at the nanoscale and phase-change materials. Such large contrasts are not obtainable in solids or in the far field. As such, this opens up new horizons for temperature modulation and actuation at the nanoscale.
Heat Flux Inhibition by Whistlers: Experimental Confirmation
International Nuclear Information System (INIS)
Eichler, D.
2002-01-01
Heat flux in weakly magnetized collisionless plasma is, according to theoretical predictions, limited by whistler turbulence that is generated by heat flux instabilities near threshold. Observations of solar wind electrons by Gary and coworkers appear to confirm the limit on heat flux as being roughly the product of the magnetic energy density and the electron thermal velocity, in agreement with prediction (Pistinner and Eichler 1998)
Efficient GPS Position Determination Algorithms
National Research Council Canada - National Science Library
Nguyen, Thao Q
2007-01-01
... differential GPS algorithm for a network of users. The stand-alone user GPS algorithm is a direct, closed-form, and efficient new position determination algorithm that exploits the closed-form solution of the GPS trilateration equations and works...
Algorithmic approach to diagram techniques
International Nuclear Information System (INIS)
Ponticopoulos, L.
1980-10-01
An algorithmic approach to diagram techniques of elementary particles is proposed. The definition and axiomatics of the theory of algorithms are presented, followed by the list of instructions of an algorithm formalizing the construction of graphs and the assignment of mathematical objects to them. (T.A.)
Inverse Estimation of Heat Flux and Temperature Distribution in 3D Finite Domain
International Nuclear Information System (INIS)
Muhammad, Nauman Malik
2009-02-01
Inverse heat conduction problems occur in many theoretical and practical applications where it is difficult or practically impossible to measure the input heat flux and the temperature of the layer conducting the heat flux to the body. It thus becomes imperative to devise some means to handle such a problem and estimate the heat flux inversely. The Adaptive State Estimator is one such technique: it incorporates the semi-Markovian concept into a Bayesian estimation technique, thereby developing an inverse input and state estimator consisting of a bank of parallel, adaptively weighted Kalman filters. The problem presented in this study deals with a three-dimensional system: a cube with one face conducting heat flux while all the other sides are insulated, and with temperatures measured on the accessible faces of the cube. The measurements taken on these accessible faces are fed into the estimation algorithm, and the input heat flux and the temperature distribution at each point in the system are calculated. A variety of input heat flux scenarios have been examined to demonstrate the robustness of the estimation algorithm and hence ensure its usability in practical applications. These include a sinusoidal input flux, a combination of rectangular, linearly changing and sinusoidal input fluxes, and finally a step-changing input flux. The estimator's performance limitations have been examined in these input set-ups, and the error associated with each set-up is compared to assess the realistic applicability of the estimation algorithm in such scenarios. Different sensor arrangements, that is, different numbers of sensors and their locations, are also examined to underline the importance of the number of measurements and their location, i.e. close to or farther from the input area. Since in practice it is both economically and physically tedious to install a large number of measurement sensors, an optimized number and location are very important to determine for making the study more
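The bank of adaptively weighted Kalman filters described above can be illustrated, in much-reduced form, by a single augmented-state Kalman filter that treats the unknown input flux as a random-walk state. The lumped 1-D model and all parameter values below are assumptions for the sketch, not the paper's 3-D formulation.

```python
import random

def kalman_flux_estimator(measurements, dt, C, q_var=0.01, r_var=0.01):
    """Augmented-state Kalman filter: the unknown flux q is appended to the
    state x = [T, q] and modeled as a random walk, so the filter estimates
    the input while tracking the temperature.  Dynamics: T' = T + (dt/C) q."""
    T, q = measurements[0], 0.0
    P = [[1.0, 0.0], [0.0, 100.0]]           # initial state covariance
    a = dt / C
    estimates = []
    for z in measurements[1:]:
        # predict: x' = A x, P' = A P A^T + Q, with A = [[1, a], [0, 1]]
        T = T + a * q
        P00 = P[0][0] + a * (P[1][0] + P[0][1]) + a * a * P[1][1]
        P01 = P[0][1] + a * P[1][1]
        P10 = P[1][0] + a * P[1][1]
        P11 = P[1][1] + q_var
        # update with temperature measurement z = T + noise (H = [1, 0])
        S = P00 + r_var
        K0, K1 = P00 / S, P10 / S
        resid = z - T
        T += K0 * resid
        q += K1 * resid
        P = [[(1 - K0) * P00, (1 - K0) * P01],
             [P10 - K1 * P00, P11 - K1 * P01]]
        estimates.append(q)
    return estimates

# Synthetic data: constant true flux of 5 units heating a body with C = 2.
random.seed(1)
dt, C, q_true = 0.1, 2.0, 5.0
temps, T = [20.0], 20.0
for _ in range(200):
    T += dt / C * q_true
    temps.append(T + random.gauss(0.0, 0.05))
est = kalman_flux_estimator(temps, dt, C)
print(round(est[-1], 2))
</```

The estimate converges toward the true flux of 5 from noisy temperature readings alone; the adaptive estimator in the paper runs several such filters with different input hypotheses and weights them by likelihood.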
Study on characteristic points of boiling curve by using wavelet analysis and genetic algorithm
International Nuclear Information System (INIS)
Wei Huiming; Su Guanghui; Qiu Suizheng; Yang Xingbo
2009-01-01
Based on the wavelet analysis theory of signal singularity detection, the critical heat flux (CHF) and the minimum film boiling starting point (q_min) of boiling curves can be detected and analyzed using wavelet multi-resolution analysis. To predict the CHF in engineering, empirical relations were obtained based on a genetic algorithm. The results of wavelet detection and genetic algorithm prediction agree very well with the experimental data. (authors)
Selfish Gene Algorithm Vs Genetic Algorithm: A Review
Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed
2016-11-01
Evolutionary algorithms (EAs) are among the algorithms inspired by nature. Within little more than a decade, hundreds of papers have reported successful applications of EAs. This paper reviews the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms, inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas proposed by the biologist Richard Dawkins in 1989. Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.
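A minimal sketch of the Selfish Gene idea, a "virtual population" of allele frequencies updated by tournaments rather than an explicit population, on the OneMax toy problem; the parameter values are illustrative, not taken from the SFGA literature.

```python
import random

def selfish_gene_onemax(n_bits=20, steps=2000, reward=0.02, seed=0):
    """SFGA-style search on OneMax (maximize the number of 1-bits): no
    explicit population is stored, only per-locus allele frequencies.
    Two individuals are sampled per tournament, and wherever winner and
    loser disagree, the winner's allele has its frequency reinforced."""
    rng = random.Random(seed)
    freq = [0.5] * n_bits                      # P(bit i == 1)
    sample = lambda: [1 if rng.random() < p else 0 for p in freq]
    best = 0
    for _ in range(steps):
        a, b = sample(), sample()
        winner, loser = (a, b) if sum(a) >= sum(b) else (b, a)
        best = max(best, sum(winner))
        for i in range(n_bits):
            if winner[i] != loser[i]:
                step = reward if winner[i] else -reward
                freq[i] = min(0.99, max(0.01, freq[i] + step))
    return best

print(selfish_gene_onemax())
```

Because only disagreeing loci are updated, the frequencies drift toward the fitter allele at each position, which is the "gene's-eye view" of selection the algorithm borrows from Dawkins.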
Dimensional reduction of a generalized flux problem
International Nuclear Information System (INIS)
Moroz, A.
1992-01-01
In this paper, a generalized flux problem with Abelian and non-Abelian fluxes is considered. In the Abelian case we shall show that the generalized flux problem for tight-binding models of noninteracting electrons on either a 2n- or a (2n + 1)-dimensional lattice can always be reduced to an n-dimensional hopping problem. A residual freedom in this reduction enables one to identify equivalence classes of hopping Hamiltonians which have the same spectrum. In the non-Abelian case, the reduction is not possible in general unless the flux tensor factorizes into an Abelian one times an element of the corresponding algebra
Honing process optimization algorithms
Kadyrov, Ramil R.; Charikov, Pavel N.; Pryanichnikova, Valeria V.
2018-03-01
This article considers the relevance of honing processes for creating high-quality mechanical engineering products. The features of the honing process are revealed and such important concepts as the task for optimization of honing operations, the optimal structure of the honing working cycles, stepped and stepless honing cycles, simulation of processing and its purpose are emphasized. It is noted that the reliability of the mathematical model determines the quality parameters of the honing process control. An algorithm for continuous control of the honing process is proposed. The process model reliably describes the machining of a workpiece in a sufficiently wide area and can be used to operate the CNC machine CC743.
Opposite Degree Algorithm and Its Applications
Directory of Open Access Journals (Sweden)
Xiao-Guang Yue
2015-12-01
Full Text Available The Opposite Degree (OD) algorithm is an intelligent algorithm proposed by Yue Xiaoguang et al. The opposite degree algorithm is mainly based on the concept of opposite degree, combined with ideas from neural network design, genetic algorithms, and clustering analysis. The OD algorithm is divided into two sub-algorithms, namely: the opposite degree numerical computation (OD-NC) algorithm and the opposite degree classification computation (OD-CC) algorithm.
The Great Deluge Algorithm applied to a nuclear reactor core design optimization problem
International Nuclear Information System (INIS)
Sacco, Wagner F.; Oliveira, Cassiano R.E. de
2005-01-01
The Great Deluge Algorithm (GDA) is a local search algorithm introduced by Dueck. It is an analogy with a flood: the 'water level' rises continuously and the proposed solution must lie above the 'surface' in order to survive. The crucial parameter is the 'rain speed', which controls convergence of the algorithm similarly to Simulated Annealing's annealing schedule. This algorithm is applied to the reactor core design optimization problem, which consists in adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the average peak-factor in a 3-enrichment-zone reactor, considering restrictions on the average thermal flux, criticality and sub-moderation. This problem was previously attacked by the canonical genetic algorithm (GA) and by a Niching Genetic Algorithm (NGA). NGAs were designed to force the genetic algorithm to maintain a heterogeneous population throughout the evolutionary process, avoiding the phenomenon known as genetic drift, where all the individuals converge to a single solution. The results obtained by the Great Deluge Algorithm are compared to those obtained by both algorithms mentioned above. The three algorithms are submitted to the same computational effort and GDA reaches the best results, showing its potential for other applications in the nuclear engineering field as, for instance, the nuclear core reload optimization problem. One of the great advantages of this algorithm over the GA is that it does not require special operators for discrete optimization. (author)
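The acceptance rule described above, where candidates survive only if they stay below a steadily dropping "water level", can be sketched as follows; a toy 2-D objective stands in for the reactor-core fitness, and the rain speed and step sizes are illustrative.

```python
import random

def great_deluge(objective, x0, neighbor, rain_speed=0.02, steps=5000, seed=0):
    """Great Deluge search (minimization form): accept any neighbor whose
    objective value lies below the current 'water level'; the level drops
    by a fixed 'rain speed' each step, gradually tightening acceptance."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    level = fx
    best, fbest = x, fx
    for _ in range(steps):
        cand = neighbor(x, rng)
        fc = objective(cand)
        if fc <= level:                 # survives the flood
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = cand, fc
        level -= rain_speed             # the water keeps rising
    return best, fbest

# Toy 2-D objective standing in for the core-design fitness.
f = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
move = lambda p, rng: (p[0] + rng.uniform(-0.1, 0.1),
                       p[1] + rng.uniform(-0.1, 0.1))
best, fbest = great_deluge(f, (5.0, 5.0), move)
print(round(fbest, 3))
```

Note the single tuning knob: as the abstract says, the rain speed plays the role that the annealing schedule plays in Simulated Annealing, and no problem-specific operators are needed.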
Invariance algorithms for processing NDE signals
Mandayam, Shreekanth; Udpa, Lalita; Udpa, Satish S.; Lord, William
1996-11-01
Signals that are obtained in a variety of nondestructive evaluation (NDE) processes capture information not only about the characteristics of the flaw, but also reflect variations in the specimen's material properties. Such signal changes may be viewed as anomalies that could obscure defect related information. An example of this situation occurs during in-line inspection of gas transmission pipelines. The magnetic flux leakage (MFL) method is used to conduct noninvasive measurements of the integrity of the pipe-wall. The MFL signals contain information both about the permeability of the pipe-wall and the dimensions of the flaw. Similar operational effects can be found in other NDE processes. This paper presents algorithms to render NDE signals invariant to selected test parameters, while retaining defect related information. Wavelet transform based neural network techniques are employed to develop the invariance algorithms. The invariance transformation is shown to be a necessary pre-processing step for subsequent defect characterization and visualization schemes. Results demonstrating the successful application of the method are presented.
ALFA: an automated line fitting algorithm
Wesson, R.
2016-03-01
I present the automated line fitting algorithm, ALFA, a new code which can fit emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. In contrast to traditional emission line fitting methods which require the identification of spectral features suspected to be emission lines, ALFA instead uses a list of lines which are expected to be present to construct a synthetic spectrum. The parameters used to construct the synthetic spectrum are optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. I show that the results are in excellent agreement with those measured manually for a number of spectra. Where discrepancies exist, the manually measured fluxes are found to be less accurate than those returned by ALFA. Together with the code NEAT, ALFA provides a powerful way to rapidly extract physical information from observations, an increasingly vital function in the era of highly multiplexed spectroscopy. The two codes can deliver a reliable and comprehensive analysis of very large data sets in a few hours with little or no user interaction.
Fast algorithm for Morphological Filters
International Nuclear Information System (INIS)
Lou Shan; Jiang Xiangqian; Scott, Paul J
2011-01-01
In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time-consuming, especially for areal data, and are not generally adopted in practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent to the morphological opening/closing in theory. The algorithm depends on Delaunay triangulation, with time complexity O(n log n). In comparison to the naive algorithms, it generates the opening and closing envelopes without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well for both morphological profile and areal filters. Examples are presented to demonstrate the validity and the superiority in efficiency of this algorithm over the naive algorithm.
Recognition algorithms in knot theory
International Nuclear Information System (INIS)
Dynnikov, I A
2003-01-01
In this paper the problem of constructing algorithms for comparing knots and links is discussed. A survey of existing approaches and basic results in this area is given. In particular, diverse combinatorial methods for representing links are discussed, the Haken algorithm for recognizing a trivial knot (the unknot) and a scheme for constructing a general algorithm (using Haken's ideas) for comparing links are presented, an approach based on representing links by closed braids is described, the known algorithms for solving the word problem and the conjugacy problem for braid groups are described, and the complexity of the algorithms under consideration is discussed. A new method of combinatorial description of knots is given together with a new algorithm (based on this description) for recognizing the unknot by using a procedure for monotone simplification. In the conclusion of the paper several problems are formulated whose solution could help to advance towards the 'algorithmization' of knot theory
Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm
Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad
2018-01-01
Security is a very important issue in data transmission, and there are many methods to make files more secure. One of these methods is cryptography, which secures a file by replacing its contents with encoded text; without the key, the hidden code cannot be decrypted to recover the original file. Among the methods used in cryptography is the hybrid cryptosystem, in which a symmetric algorithm secures the file and an asymmetric algorithm secures the symmetric algorithm's key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting the file with the TEA algorithm, while the LUC algorithm encrypts and decrypts the TEA key. The result of this research is that, when the TEA algorithm encrypts the file, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table in the form of hexadecimal numbers, and the ciphertext size increases by sixteen bytes for every eight characters added to the plaintext.
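TEA itself is compact enough to show in full; the sketch below implements the standard 32-round TEA block cipher on one 64-bit block (the LUC key-exchange layer of the hybrid scheme is omitted).

```python
def tea_encrypt(block, key, rounds=32):
    """Standard TEA encryption of one 64-bit block (two 32-bit halves)
    with a 128-bit key (four 32-bit words)."""
    v0, v1 = block
    k0, k1, k2, k3 = key
    delta, s, mask = 0x9E3779B9, 0, 0xFFFFFFFF
    for _ in range(rounds):
        s = (s + delta) & mask
        v0 = (v0 + (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & mask
        v1 = (v1 + (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & mask
    return v0, v1

def tea_decrypt(block, key, rounds=32):
    """Inverse of tea_encrypt: runs the round schedule backwards."""
    v0, v1 = block
    k0, k1, k2, k3 = key
    delta, mask = 0x9E3779B9, 0xFFFFFFFF
    s = (delta * rounds) & mask
    for _ in range(rounds):
        v1 = (v1 - (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & mask
        v0 = (v0 - (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & mask
        s = (s - delta) & mask
    return v0, v1

key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)
ct = tea_encrypt((0xDEADBEEF, 0xCAFEBABE), key)
print(tea_decrypt(ct, key) == (0xDEADBEEF, 0xCAFEBABE))
```

In the hybrid design of the abstract, this symmetric cipher handles the bulk data while the (asymmetric) LUC algorithm protects the 128-bit key itself.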
Rabideau, Gregg R.; Chien, Steve A.
2010-01-01
AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, with the "best" goals selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
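The strict-priority selection rule described above can be sketched as a greedy pass over goals sorted by priority; the goal set and resource names here are hypothetical illustrations, not AVA's actual representation.

```python
def select_goals(goals, capacity):
    """Strict-priority greedy selection: walk the goals from highest
    priority down, admitting each one whose resource needs still fit.
    A lower-priority goal can never displace a higher-priority one."""
    remaining = dict(capacity)
    selected = []
    for goal in sorted(goals, key=lambda g: g["priority"], reverse=True):
        if all(remaining.get(r, 0) >= amt for r, amt in goal["needs"].items()):
            for r, amt in goal["needs"].items():
                remaining[r] -= amt
            selected.append(goal["name"])
    return selected

# Hypothetical oversubscribed goal set.
goals = [
    {"name": "downlink",  "priority": 9, "needs": {"power": 3, "time": 2}},
    {"name": "image_A",   "priority": 7, "needs": {"power": 4, "time": 3}},
    {"name": "image_B",   "priority": 5, "needs": {"power": 4, "time": 1}},
    {"name": "calibrate", "priority": 2, "needs": {"power": 1, "time": 1}},
]
print(select_goals(goals, {"power": 8, "time": 5}))
```

Because the pass is cheap, it can be re-run whenever a goal is added, removed, or updated, which is the "just-in-time" re-planning behavior the abstract describes.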
Algorithmic Relative Complexity
Directory of Open Access Journals (Sweden)
Daniele Cerra
2011-04-01
Full Text Available Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov’s framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence) found in Shannon’s framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable the application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
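The compression-based approximation the abstract mentions can be sketched with a general-purpose compressor standing in for Kolmogorov complexity (zlib here; the construction of the test strings is arbitrary and only meant to contrast similar versus unrelated data).

```python
import hashlib
import zlib

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def cross_complexity(x: bytes, y: bytes) -> int:
    """Compression-based stand-in for algorithmic cross-complexity: the
    extra compressed bytes needed for x once y has already been encoded."""
    return compressed_size(y + x) - compressed_size(y)

def pseudorandom(tag: str, n_blocks: int = 64) -> bytes:
    """Deterministic, hard-to-compress filler built from hash digests."""
    return b"".join(hashlib.sha256(f"{tag}{i}".encode()).digest()
                    for i in range(n_blocks))

x = pseudorandom("x")
# Describing x in terms of itself is nearly free; in terms of unrelated
# data y, it costs about as much as compressing x from scratch.
print(cross_complexity(x, x) < cross_complexity(x, pseudorandom("y")))
```

This is the same mechanism that makes the derived divergence usable for tasks like authorship attribution: strings by the same author compress well against each other, lowering their mutual cross-complexity.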
Fatigue evaluation algorithms: Review
Energy Technology Data Exchange (ETDEWEB)
Passipoularidis, V.A.; Broendsted, P.
2009-11-15
A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply-by-ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck, to model the degradation caused by failure events at ply level. Residual strength is incorporated as the fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data, both from repeated block tests at a single stress ratio and from spectral fatigue tests using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of wind turbine rotor blade construction. Two versions of the algorithm, one using single-step and the other using incremental application of each load cycle (in case of ply failure), are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)
Differential harmony search algorithm to optimize PWRs loading pattern
Energy Technology Data Exchange (ETDEWEB)
Poursalehi, N., E-mail: npsalehi@yahoo.com [Engineering Department, Shahid Beheshti University, G.C, P.O.Box: 1983963113, Tehran (Iran, Islamic Republic of); Zolfaghari, A.; Minuchehr, A. [Engineering Department, Shahid Beheshti University, G.C, P.O.Box: 1983963113, Tehran (Iran, Islamic Republic of)
2013-04-15
Highlights: ► Exploiting the DHS algorithm in LP optimization reveals its flexibility, robustness and reliability. ► Our experiments show that DHS approaches the optimal LP quickly. ► On average, the final band width of DHS fitness values is narrow relative to HS and GHS. -- Abstract: The objective of this work is to develop a core loading optimization technique using the differential harmony search algorithm, in the context of obtaining an optimal configuration of fuel assemblies in pressurized water reactors. To implement and evaluate the proposed technique, a differential harmony search nodal expansion package for 2-D geometry, DHSNEP-2D, is developed. The package includes two modules: in the first, differential harmony search (DHS) is implemented; the second contains a nodal expansion code which solves the two-dimensional multigroup neutron diffusion equations using a fourth-degree flux expansion with one node per fuel assembly. For evaluation of the DHS algorithm, classical harmony search (HS) and global-best harmony search (GHS) algorithms are also included in DHSNEP-2D in order to compare the outcomes of the techniques. For this purpose, two PWR test cases have been investigated to demonstrate the DHS algorithm's capability of obtaining a near-optimal loading pattern. Results show that the convergence rate and execution times of DHS are quite promising and that the method is reliable for the fuel management operation. Moreover, numerical results show the good performance of DHS relative to other competitive algorithms such as the genetic algorithm (GA), classical harmony search (HS) and global-best harmony search (GHS).
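The classical harmony search used as a baseline above can be sketched in a few lines. This is an illustrative stand-in, not the DHSNEP-2D implementation: a toy 3-D sphere function replaces the loading-pattern objective, and the parameter values (memory size, hmcr, par, bandwidth) are assumptions.

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                   iters=2000, seed=0):
    """Classical harmony search: each new candidate mixes values drawn from
    the harmony memory (rate hmcr), with occasional small pitch adjustments
    (rate par), or fresh random values; it replaces the worst memory entry
    whenever it improves on it."""
    rng = random.Random(seed)
    dim = len(bounds)
    memory = [[rng.uniform(*bounds[d]) for d in range(dim)]
              for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        cand = []
        for d in range(dim):
            lo, hi = bounds[d]
            if rng.random() < hmcr:
                v = rng.choice(memory)[d]
                if rng.random() < par:
                    v += rng.uniform(-0.05, 0.05) * (hi - lo)  # pitch adjust
            else:
                v = rng.uniform(lo, hi)
            cand.append(min(hi, max(lo, v)))
        fc = objective(cand)
        worst = scores.index(max(scores))
        if fc < scores[worst]:
            memory[worst], scores[worst] = cand, fc
    return min(scores)

# Toy stand-in for a loading-pattern objective: a 3-D sphere function.
best = harmony_search(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 3)
print(round(best, 3))
```

The "differential" variant studied in the paper modifies how new candidates are generated from memory; the acceptance and memory structure remain as sketched here.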
Estimation of Carbon Flux of Forest Ecosystem over Qilian Mountains by BIOME-BGC Model
Yan, Min; Tian, Xin; Li, Zengyuan; Chen, Erxue; Li, Chunmei
2014-11-01
The gross primary production (GPP) and net ecosystem exchange (NEE) are important indicators of carbon fluxes. This study aims at evaluating the forest GPP and NEE over the Qilian Mountains at large scale using meteorological, remotely sensed and other ancillary data. To realize this, the widely used ecological-process-based model Biome-BGC and the remote-sensing-based MODIS GPP algorithm were selected for the simulation of the forest carbon fluxes. The combination of these two models was based on calibrating Biome-BGC against the optimized MODIS GPP algorithm. The simulated GPP and NEE values were evaluated against the eddy covariance observed GPP and NEE, and good agreement was reached, with R2 = 0.76 and 0.67, respectively.
Detection and Correction of Step Discontinuities in Kepler Flux Time Series
Kolodziejczak, J. J.; Morris, R. L.
2011-01-01
PDC 8.0 includes an implementation of a new algorithm to detect and correct step discontinuities appearing in roughly one of every 20 stellar light curves during a given quarter. The majority of such discontinuities are believed to result from high-energy particles (either cosmic or solar in origin) striking the photometer and causing permanent local changes (typically -0.5%) in quantum efficiency, though a partial exponential recovery is often observed [1]. Since these features, dubbed sudden pixel sensitivity dropouts (SPSDs), are uncorrelated across targets, they cannot be properly accounted for by the current detrending algorithm. PDC detrending is based on the assumption that features in flux time series are due either to intrinsic stellar phenomena or to systematic errors, and that systematics will exhibit measurable correlations across targets. SPSD events violate these assumptions, and their successful removal not only rectifies the flux values of affected targets but demonstrably improves the overall performance of PDC detrending [1].
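The core idea of such a detector — declare a step where a robust statistic of the samples just after a point drops persistently below that of the samples just before it, then re-level the subsequent data — can be sketched as follows. This is an illustrative toy, not the actual Kepler PDC 8.0 implementation; the window size and relative threshold are made-up values.

```python
def detect_steps(flux, window=5, threshold=0.003):
    """Flag step discontinuities in a flux series (toy sketch).

    A step is declared where the median of the next `window` samples
    drops by more than `threshold` (relative) below the median of the
    previous `window` samples.
    """
    def median(xs):
        s = sorted(xs)
        n = len(s)
        return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

    flagged = []
    for i in range(window, len(flux) - window):
        before = median(flux[i - window:i])
        after = median(flux[i:i + window])
        if before - after > threshold * abs(before):
            flagged.append(i)
    # collapse runs of consecutive flags to a single step location
    steps, run = [], []
    for i in flagged:
        if run and i != run[-1] + 1:
            steps.append(run[len(run) // 2])
            run = []
        run.append(i)
    if run:
        steps.append(run[len(run) // 2])
    return steps


def correct_steps(flux, steps):
    """Re-level the series after each detected step (simplest correction)."""
    flux = list(flux)
    for i in steps:
        offset = flux[i - 1] - flux[i]
        for j in range(i, len(flux)):
            flux[j] += offset
    return flux
```

On a synthetic light curve with a single -0.5% sensitivity drop, the detector locates the step and the correction restores a flat series; the real algorithm must additionally distinguish steps from transits and stellar variability.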
Reactivity anomaly surveillance in the Fast Flux Test Facility through cycle 3
International Nuclear Information System (INIS)
Knutson, B.J.; Harris, R.A.
1984-08-01
The technique for monitoring core reactivity during power operation used at the Fast Flux Test Facility (FFTF) is described. This technique relies on comparing predicted to measured rod positions to detect any anomalous (or unpredicted) core reactivity changes. It is implemented on the Plant Data System (PDS) computer and thus provides rapid indication of any abnormal core conditions. The prediction algorithms use thermal-hydraulic, control rod position and neutron flux sensor information to predict the core reactivity state. Initial results of using this technique, based mainly on theoretical formulations, are presented. The results show that the reactivity changes due to increasing reactor power (power defect) and burnup of the fuel were within approximately 16% of predicted values. To increase the sensitivity and accuracy of this technique, the prediction algorithms were calibrated to actual operating data. The work of calibrating this technique and the results of using the calibrated technique up through the third full operating cycle are summarized
Magnetic flux generator for balanced membrane loudspeaker
DEFF Research Database (Denmark)
Rehder, Jörg; Rombach, Pirmin; Hansen, Ole
2002-01-01
This paper reports the development of a magnetic flux generator with an application in a hearing aid loudspeaker produced in microsystem technology (MST). The technology plans for two different designs for the magnetic flux generator utilizing a softmagnetic substrate or electroplated Ni...
EL-2 reactor: Thermal neutron flux distribution
International Nuclear Information System (INIS)
Rousseau, A.; Genthon, J.P.
1958-01-01
The flux distribution of thermal neutrons in the EL-2 reactor is studied. The reactor core and lattices are described, as well as the experimental reactor facilities, in particular the experimental channels and special facilities. The measurements show that the thermal neutron flux increases in the central channel when enriched uranium is used in place of natural uranium; however, the thermal neutron flux in the other reactor channels is not perturbed by the fuel modification. The macroscopic flux distribution is measured as a function of the radial positioning of the fuel rods. The longitudinal neutron flux distribution in a fuel rod is also measured and shows no difference between enriched and natural uranium fuel rods. In addition, flux distribution measurements have been performed for rods containing other materials, such as steel or aluminium. The neutron flux distribution is also studied in all the experimental channels as well as in the thermal column. The determination of the thermal neutron flux distribution in all experimental facilities, the thermal column and the fuel channels was made with a heavy water level of 1825 mm and is given for an operating power of 1000 kW. (M.P.)
Neutron flux measurement by mobile detectors
International Nuclear Information System (INIS)
Verchain, M.
1987-01-01
Various incore instrumentation systems and their technological evolution are first reviewed. Then, for 1300 MWe PWR nuclear power plants, temperature and neutron flux measurements are described. Mobile fission chambers, with their large measuring range and accurate positioning, allow a good knowledge of the core. Other incore measurements are made possible by flux detector thimble tubes inserted in the reactor core [fr]
Anthropogenic heat flux estimation from space
Chrysoulakis, Nektarios; Marconcini, Mattia; Gastellu-Etchegorry, Jean Philippe; Grimmond, C.S.B.; Feigenwinter, Christian; Lindberg, Fredrik; Frate, Del Fabio; Klostermann, Judith; Mitraka, Zina; Esch, Thomas; Landier, Lucas; Gabey, Andy; Parlow, Eberhard; Olofson, Frans
2016-01-01
H2020-Space project URBANFLUXES (URBan ANthropogenic heat FLUX from Earth observation Satellites) investigates the potential of Copernicus Sentinels to retrieve anthropogenic heat flux, as a key component of the Urban Energy Budget (UEB). URBANFLUXES advances the current knowledge of the impacts
ANthropogenic heat FLUX estimation from Space
Chrysoulakis, Nektarios; Marconcini, Mattia; Gastellu-Etchegorry, Jean Philippe; Grimmond, C.S.B.; Feigenwinter, Christian; Lindberg, Fredrik; Frate, Del Fabio; Klostermann, Judith; Mitraka, Zina; Esch, Thomas; Landier, Lucas; Gabey, Andy; Parlow, Eberhard; Olofson, Frans
2017-01-01
The H2020-Space project URBANFLUXES (URBan ANthropogenic heat FLUX from Earth observation Satellites) investigates the potential of Copernicus Sentinels to retrieve anthropogenic heat flux, as a key component of the Urban Energy Budget (UEB). URBANFLUXES advances the current knowledge of the
FILAMENT INTERACTION MODELED BY FLUX ROPE RECONNECTION
International Nuclear Information System (INIS)
Toeroek, T.; Chandra, R.; Pariat, E.; Demoulin, P.; Schmieder, B.; Aulanier, G.; Linton, M. G.; Mandrini, C. H.
2011-01-01
Hα observations of solar active region NOAA 10501 on 2003 November 20 revealed a very uncommon dynamic process: during the development of a nearby flare, two adjacent elongated filaments approached each other, merged at their middle sections, and separated again, thereby forming stable configurations with new footpoint connections. The observed dynamic pattern is indicative of 'slingshot' reconnection between two magnetic flux ropes. We test this scenario by means of a three-dimensional zero β magnetohydrodynamic simulation, using a modified version of the coronal flux rope model by Titov and Demoulin as the initial condition for the magnetic field. To this end, a configuration is constructed that contains two flux ropes which are oriented side-by-side and are embedded in an ambient potential field. The choice of the magnetic orientation of the flux ropes and of the topology of the potential field is guided by the observations. Quasi-static boundary flows are then imposed to bring the middle sections of the flux ropes into contact. After sufficient driving, the ropes reconnect and two new flux ropes are formed, which now connect the former adjacent flux rope footpoints of opposite polarity. The corresponding evolution of filament material is modeled by calculating the positions of field line dips at all times. The dips follow the morphological evolution of the flux ropes, in qualitative agreement with the observed filaments.
Increased heat fluxes near a forest edge
Klaassen, W; van Breugel, PB; Moors, EJ; Nieveen, JP
2002-01-01
Observations of sensible and latent heat flux above forest downwind of a forest edge show these fluxes to be larger than the available energy over the forest. The enhancement averages 56 W m-2, or 16% of the net radiation, at fetches less than 400 m, equivalent to fetch-to-height ratios less
Increased heat fluxes near a forest edge
Klaassen, W.; Breugel, van P.B.; Moors, E.J.; Nieveen, J.P.
2002-01-01
Observations of sensible and latent heat flux above forest downwind of a forest edge show these fluxes to be larger than the available energy over the forest. The enhancement averages 56 W m-2, or 16% of the net radiation, at fetches less than 400 m, equivalent to fetch-to-height ratios less than
Initiation of CMEs by Magnetic Flux Emergence
Indian Academy of Sciences (India)
The initiation of solar Coronal Mass Ejections (CMEs) is studied in the framework of numerical magnetohydrodynamics (MHD). The initial CME model includes a magnetic flux rope in spherical, axisymmetric geometry. The initial configuration consists of a magnetic flux rope embedded in a gravitationally stratified solar ...
Crystal growth of emerald by flux method
International Nuclear Information System (INIS)
Inoue, Mikio; Narita, Eiichi; Okabe, Taijiro; Morishita, Toshihiko.
1979-01-01
Emerald crystals have been grown in two binary fluxes, Li2O-MoO3 and Li2O-V2O5, using the slow cooling method and the temperature gradient method under various conditions. In the Li2O-MoO3 flux, investigated over molar ratios (MoO3/Li2O) of 2-5, emerald crystallized in the temperature range from 750 to 950 °C, and the most suitable crystallization conditions were found to be a molar ratio of 3-4 and a temperature of about 900 °C. In the Li2O-V2O5 flux, investigated over molar ratios (V2O5/Li2O) of 1.7-5, emerald crystallized in the temperature range from 900 to 1150 °C. The best crystals were obtained at a molar ratio of 3 and temperatures of 1000-1100 °C. The crystallization temperature rose with increasing molar ratio in both fluxes. The emeralds grown in the two binary fluxes were transparent green, with a density of 2.68, a refractive index of 1.56, and two distinct bands in the visible spectrum at 430 and 600 nm. Emerald grown in the Li2O-V2O5 flux was more bluish green than that grown in the Li2O-MoO3 flux. When crystallized by the slow cooling method, the spontaneously nucleated emeralds grown in the former flux were larger than those grown in the latter. As for the solubility of beryl, the Li2O-V2O5 flux was superior to the Li2O-MoO3 flux, whose small solubility of SiO2 posed an experimental problem for the temperature gradient method. The suitability of the two fluxes for the crystal growth of emerald by the flux method is discussed in view of the above-mentioned properties of the two fluxes. (author)
Flux Modulation in the Electrodynamic Loudspeaker
DEFF Research Database (Denmark)
Halvorsen, Morten; Tinggaard, Carsten; Agerkvist, Finn T.
2015-01-01
This paper discusses the effect of flux modulation in the electrodynamic loudspeaker, with the main focus on the effect on the force factor. A measurement setup to measure the AC flux modulation with a static voice coil is explained, and the measurements show good consistency with FEA simulations. Measurements of the generated AC flux modulation show that eddy currents are the main source of magnetic losses, in the form of phase lag and amplitude changes. Use of a copper cap shows a decrease in flux modulation amplitude at the expense of increased power losses. Finally, simulations show that there is a high dependency between the generated AC flux modulation from the voice coil and the AC force factor change.
Plasma crowbars in cylindrical flux compression experiments
International Nuclear Information System (INIS)
Suter, L.J.
1979-01-01
We have done a series of one- and two-dimensional calculations of hard-core Z-pinch flux compression experiments in order to study the effect of a plasma on these systems. These calculations show that including a plasma can reduce the amount of flux lost during the compression. Flux losses to the outer wall of such experiments can be greatly reduced by a plasma conducting sheath which forms along the wall. This conducting sheath consists of a cold, dense, high-β, unmagnetized plasma which has enough pressure to balance a large field gradient. Flux which is lost into the center conductor is not effectively stopped by this plasma sheath until late in the implosion, at which time a layer similar to the one formed at the outer wall is created. Two-dimensional simulations show that flux losses due to arcing along the sliding contact of the experiment can be effectively stopped by the formation of a plasma conducting sheath
Optimal Fungal Space Searching Algorithms.
Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V
2016-10-01
Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First-Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not increase significantly with the size of the maze. These findings suggest that a systematic effort to harvest the natural space-searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.
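For reference, the uninformed Depth-First-Search baseline mentioned above can be sketched on a grid maze as follows. This is a generic illustration, not the study's code; the maze encoding (0 = open, 1 = wall) is an arbitrary choice.

```python
def dfs_maze(maze, start, goal):
    """Depth-first search on a grid maze (0 = open cell, 1 = wall).

    Returns a path from start to goal as a list of (row, col) tuples,
    or None if the goal is unreachable. DFS is uninformed: it finds
    *a* path, not necessarily the shortest one.
    """
    rows, cols = len(maze), len(maze[0])
    stack = [(start, [start])]       # (current cell, path so far)
    visited = {start}
    while stack:
        (r, c), path = stack.pop()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and \
               maze[nr][nc] == 0 and (nr, nc) not in visited:
                visited.add((nr, nc))
                stack.append(((nr, nc), path + [(nr, nc)]))
    return None
```

An informed search such as A* differs only in replacing the stack with a priority queue ordered by path cost plus a heuristic estimate of the remaining distance.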
13C metabolic flux analysis: optimal design of isotopic labeling experiments.
Antoniewicz, Maciek R
2013-12-01
Measuring fluxes by 13C metabolic flux analysis (13C-MFA) has become a key activity in chemical and pharmaceutical biotechnology. Optimal design of isotopic labeling experiments is of central importance to 13C-MFA as it determines the precision with which fluxes can be estimated. Traditional methods for selecting isotopic tracers and labeling measurements did not fully utilize the power of 13C-MFA. Recently, new approaches were developed for optimal design of isotopic labeling experiments based on parallel labeling experiments and algorithms for rational selection of tracers. In addition, advanced isotopic labeling measurements were developed based on tandem mass spectrometry. Combined, these approaches can dramatically improve the quality of 13C-MFA results with important applications in metabolic engineering and biotechnology. Copyright © 2013 Elsevier Ltd. All rights reserved.
Development of an Axial Flux MEMS BLDC Micromotor with Increased Efficiency and Power Density
Directory of Open Access Journals (Sweden)
Xiaofeng Ding
2015-06-01
This paper presents a rigorous design and optimization of an axial flux microelectromechanical systems (MEMS) brushless dc (BLDC) micromotor with dual rotor, improving both efficiency and power density, with an external diameter of only around 10 mm. The stator is made of two layers of windings by MEMS technology. The rotor is developed from film permanent magnets assembled over the rotor yoke. The characteristics of the MEMS micromotor are analyzed and modeled through a 3-D magnetic equivalent circuit (MEC) taking the leakage flux and fringing effect into account. Such a model yields a relatively accurate prediction of the flux in the air gap, back electromotive force (EMF) and electromagnetic torque, whilst being computationally efficient. Based on the 3-D MEC model, the multi-objective firefly algorithm (MOFA) is developed for the optimal design of this special machine. Both 3-D finite element (FE) simulation and experiments are employed to validate the MEC model and the MOFA optimization design.
Computational Platform for Flux Analysis Using 13C-Label Tracing- Phase I SBIR Final Report
Energy Technology Data Exchange (ETDEWEB)
Van Dien, Stephen J.
2005-04-12
Isotopic label tracing is a powerful experimental technique that can be combined with metabolic models to quantify metabolic fluxes in an organism under a particular set of growth conditions. In this work we constructed a genome-scale metabolic model of Methylobacterium extorquens, a facultative methylotroph with potential application in the production of useful chemicals from methanol. A series of labeling experiments were performed using 13C-methanol, and the resulting distribution of labeled carbon in the proteinogenic amino acids was determined by mass spectrometry. Algorithms were developed to analyze this data in context of the metabolic model, yielding flux distributions for wild-type and several engineered strains of M. extorquens. These fluxes were compared to those predicted by model simulation alone, and also integrated with microarray data to give an improved understanding of the metabolic physiology of this organism.
STAR Algorithm Integration Team - Facilitating operational algorithm development
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.
Algorithm aversion: people erroneously avoid algorithms after seeing them err.
Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade
2015-02-01
Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.
The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.
Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P
1999-10-01
In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.
Algorithmic Reflections on Choreography
Directory of Open Access Journals (Sweden)
Pablo Ventura
2016-11-01
In 1996, Pablo Ventura turned his attention to the choreography software Life Forms to find out whether the then-revolutionary new tool could lead to new possibilities of expression in contemporary dance. During the next 2 decades, he devised choreographic techniques and custom software to create dance works that highlight the operational logic of computers, accompanied by computer-generated dance and media elements. This article provides a firsthand account of how Ventura’s engagement with algorithmic concepts guided and transformed his choreographic practice. The text describes the methods that were developed to create computer-aided dance choreographies. Furthermore, the text illustrates how choreography techniques can be applied to correlate formal and aesthetic aspects of movement, music, and video. Finally, the text emphasizes how Ventura’s interest in the wider conceptual context has led him to explore with choreographic means fundamental issues concerning the characteristics of humans and machines and their increasingly profound interdependencies.
Iterative schemes for parallel Sn algorithms in a shared-memory computing environment
International Nuclear Information System (INIS)
Haghighat, A.; Hunter, M.A.; Mattis, R.E.
1995-01-01
Several two-dimensional spatial domain partitioning Sn transport theory algorithms are developed on the basis of different iterative schemes. These algorithms are incorporated into TWOTRAN-II and tested on the shared-memory CRAY Y-MP C90 computer. For a series of fixed-source r-z geometry homogeneous problems, it is demonstrated that the concurrent red-black algorithms may result in large parallel efficiencies (>60%) on C90. It is also demonstrated that for a realistic shielding problem, the use of the negative flux fixup causes high load imbalance, which results in a significant loss of parallel efficiency
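The red-black (checkerboard) ordering behind such concurrent algorithms can be illustrated on a simple model problem: cells of one colour depend only on cells of the other colour, so every cell within a half-sweep can be updated independently and in parallel. The sketch below applies the idea to a Poisson problem rather than the TWOTRAN-II Sn equations; grid size and sweep count are arbitrary choices.

```python
def red_black_sweep(u, f, h, color):
    """One Gauss-Seidel half-sweep over cells of the given colour.

    color 0 updates 'red' cells ((i+j) even), color 1 the 'black' ones.
    Each half-sweep reads only opposite-colour neighbours, so all its
    updates are independent -- the property that enables parallelism.
    Model problem: -Laplace(u) = f with fixed (zero) boundary values.
    """
    n = len(u)
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            if (i + j) % 2 == color:
                u[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] +
                                  u[i][j - 1] + u[i][j + 1] +
                                  h * h * f[i][j])


def solve(n=17, sweeps=500):
    """Solve -Laplace(u) = 1 on the unit square by red-black iteration."""
    u = [[0.0] * n for _ in range(n)]
    f = [[1.0] * n for _ in range(n)]
    h = 1.0 / (n - 1)
    for _ in range(sweeps):
        red_black_sweep(u, f, h, 0)   # red half-sweep
        red_black_sweep(u, f, h, 1)   # black half-sweep
    return u
```

Because a half-sweep never reads a value it has just written, the result is independent of the traversal order within each colour, which is exactly what makes domain-partitioned concurrent execution give the same answer as the serial code.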
An efficient algorithm for 3D space time kinetics simulations for large PHWRs
International Nuclear Information System (INIS)
Jain, Ishi; Fernando, M.P.S.; Kumar, A.N.
2012-01-01
In nuclear reactor physics and allied areas like shielding, various forms of the neutron transport equation or its approximation, the diffusion equation, have to be solved to estimate the neutron flux distribution. This paper presents an efficient algorithm yielding accurate results along with a promising gain in computational work. (author)
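As a concrete illustration of the kind of flux calculation involved, the sketch below solves the one-group diffusion eigenvalue problem for a bare 1-D slab by power (source) iteration. It is a minimal stand-in, not the paper's 3-D nodal space-time algorithm, and the cross-section values are made-up round numbers.

```python
def power_iteration_keff(n=50, length=100.0, D=1.0, sig_a=0.01,
                         nu_sigf=0.012, iters=200):
    """k-effective and flux of a bare 1-D slab by power iteration.

    One-group finite-difference sketch with zero-flux boundaries.
    Analytic check: k = nu_sigf / (sig_a + D*(pi/length)**2).
    """
    h = length / (n + 1)           # mesh spacing; boundaries at 0 and L
    a = -D / h ** 2                # off-diagonal of the diffusion operator
    b = 2 * D / h ** 2 + sig_a     # diagonal
    phi, k = [1.0] * n, 1.0
    for _ in range(iters):
        src = [nu_sigf * p / k for p in phi]
        # Thomas algorithm: tridiagonal solve of M * phi_new = src
        cp, dp = [0.0] * n, [0.0] * n
        cp[0], dp[0] = a / b, src[0] / b
        for i in range(1, n):
            m = b - a * cp[i - 1]
            cp[i] = a / m
            dp[i] = (src[i] - a * dp[i - 1]) / m
        new = [0.0] * n
        new[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            new[i] = dp[i] - cp[i] * new[i + 1]
        k *= sum(new) / sum(phi)   # update the eigenvalue estimate
        phi = new
    return k, phi
```

Each outer iteration is one full flux solve, which is why efficient algorithms for the inner solve dominate the overall cost in realistic 3-D kinetics problems.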
PWR loading pattern optimization using Harmony Search algorithm
International Nuclear Information System (INIS)
Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.
2013-01-01
Highlights: ► Numerical results reveal that the HS method is reliable. ► The great advantage of HS is a significant gain in computational cost. ► On average, the final band width of search fitness values is narrow. ► Our experiments show that the search approaches the optimal value quickly. - Abstract: In this paper a core reloading technique using Harmony Search, HS, is presented in the context of finding an optimal configuration of fuel assemblies, FA, in pressurized water reactors. To implement and evaluate the proposed technique, a Harmony Search combined with a Nodal Expansion Code for 2-D geometry, HSNEC2D, is developed to obtain a nearly optimal arrangement of fuel assemblies in PWR cores. This code consists of two sections: the Harmony Search algorithm and Nodal Expansion modules using fourth-degree flux expansion, which solve two-dimensional multi-group diffusion equations with one node per fuel assembly. Two optimization test problems are investigated to demonstrate the capability of the HS algorithm to converge to a near-optimal loading pattern in the fuel management field and other subjects. The results, convergence rate and reliability of the method are quite promising and show that the HS algorithm performs very well and is comparable to other competitive algorithms such as the Genetic Algorithm and Particle Swarm Intelligence. Furthermore, implementation of the nodal expansion technique along with HS considerably reduces the computational time needed to process and analyze optimization in core fuel management problems
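The core Harmony Search loop — improvise a new harmony from memory, occasionally pitch-adjust it, and replace the worst stored harmony if the new one is better — can be sketched for a continuous test function as follows. This is a generic illustration, not the HSNEC2D code; the parameter values are conventional HS defaults, not the paper's settings.

```python
import random


def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3,
                   bw=0.05, iters=3000, seed=1):
    """Minimal continuous Harmony Search (minimization) sketch.

    hms:  harmony memory size
    hmcr: harmony-memory considering rate
    par:  pitch-adjusting rate
    bw:   pitch bandwidth, as a fraction of each variable's range
    """
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    cost = [objective(h) for h in hm]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:            # take a value from memory
                x = hm[rng.randrange(hms)][d]
                if rng.random() < par:         # pitch adjustment
                    x += rng.uniform(-bw, bw) * (hi - lo)
            else:                              # random re-initialisation
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))    # clamp to bounds
        c = objective(new)
        worst = max(range(hms), key=lambda i: cost[i])
        if c < cost[worst]:                    # replace the worst harmony
            hm[worst], cost[worst] = new, c
    best = min(range(hms), key=lambda i: cost[i])
    return hm[best], cost[best]
```

In a loading-pattern application the continuous vector would be replaced by a discrete assembly permutation and the objective by a core-physics figure of merit computed from a flux solution.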
Directory of Open Access Journals (Sweden)
M. J. Smith
2018-04-01
Direct measurements of marine dimethylsulfide (DMS) fluxes are sparse, particularly in the Southern Ocean. The Surface Ocean Aerosol Production (SOAP) voyage in February–March 2012 examined the distribution and flux of DMS in a biologically active frontal system in the southwest Pacific Ocean. Three distinct phytoplankton blooms were studied with oceanic DMS concentrations as high as 25 nmol L−1. Measurements of DMS fluxes were made using two independent methods: the eddy covariance (EC) technique using atmospheric pressure chemical ionization–mass spectrometry (API-CIMS) and the gradient flux (GF) technique from an autonomous catamaran platform. Catamaran flux measurements are relatively unaffected by airflow distortion and are made close to the water surface, where gas gradients are largest. Flux measurements were complemented by near-surface hydrographic measurements to elucidate physical factors influencing DMS emission. Individual DMS fluxes derived by EC showed significant scatter and, at times, consistent departures from the Coupled Ocean–Atmosphere Response Experiment gas transfer algorithm (COAREG). A direct comparison between the two flux methods was carried out to separate instrumental effects from environmental effects and showed good agreement with a regression slope of 0.96 (r2 = 0.89). A period of abnormal downward atmospheric heat flux enhanced near-surface ocean stratification and reduced turbulent exchange, during which GF and EC transfer velocities showed good agreement but modelled COAREG values were significantly higher. The transfer velocity derived from near-surface ocean turbulence measurements on a spar buoy compared well with the COAREG model in general but showed less variation. This first direct comparison between EC and GF fluxes of DMS provides confidence in compilation of flux estimates from both techniques, as well as in the stable periods when the observations are not well predicted by the COAREG model.
Smith, Murray J.; Walker, Carolyn F.; Bell, Thomas G.; Harvey, Mike J.; Saltzman, Eric S.; Law, Cliff S.
2018-04-01
Direct measurements of marine dimethylsulfide (DMS) fluxes are sparse, particularly in the Southern Ocean. The Surface Ocean Aerosol Production (SOAP) voyage in February-March 2012 examined the distribution and flux of DMS in a biologically active frontal system in the southwest Pacific Ocean. Three distinct phytoplankton blooms were studied with oceanic DMS concentrations as high as 25 nmol L-1. Measurements of DMS fluxes were made using two independent methods: the eddy covariance (EC) technique using atmospheric pressure chemical ionization-mass spectrometry (API-CIMS) and the gradient flux (GF) technique from an autonomous catamaran platform. Catamaran flux measurements are relatively unaffected by airflow distortion and are made close to the water surface, where gas gradients are largest. Flux measurements were complemented by near-surface hydrographic measurements to elucidate physical factors influencing DMS emission. Individual DMS fluxes derived by EC showed significant scatter and, at times, consistent departures from the Coupled Ocean-Atmosphere Response Experiment gas transfer algorithm (COAREG). A direct comparison between the two flux methods was carried out to separate instrumental effects from environmental effects and showed good agreement with a regression slope of 0.96 (r2 = 0.89). A period of abnormal downward atmospheric heat flux enhanced near-surface ocean stratification and reduced turbulent exchange, during which GF and EC transfer velocities showed good agreement but modelled COAREG values were significantly higher. The transfer velocity derived from near-surface ocean turbulence measurements on a spar buoy compared well with the COAREG model in general but showed less variation. This first direct comparison between EC and GF fluxes of DMS provides confidence in compilation of flux estimates from both techniques, as well as in the stable periods when the observations are not well predicted by the COAREG model.
Multisensor data fusion algorithm development
Energy Technology Data Exchange (ETDEWEB)
Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.
1995-12-01
This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
FluxVisualizer, a Software to Visualize Fluxes through Metabolic Networks
Directory of Open Access Journals (Sweden)
Tim Daniel Rose
2018-04-01
FluxVisualizer (Version 1.0, 2017, freely available at https://fluxvisualizer.ibgc.cnrs.fr) is a software tool to visualize flux values on a scalable vector graphics (SVG) representation of a metabolic network by colouring or increasing the width of the reaction arrows of the SVG file. FluxVisualizer does not aim to draw metabolic networks but to use the user's own SVG file, allowing them to apply their representation standards with a minimum of constraints. FluxVisualizer is especially suitable for small to medium size metabolic networks, where a visual representation of the fluxes makes sense. The flux distribution can be an elementary flux mode (EFM), a flux balance analysis (FBA) result or any other flux distribution. It allows the automatic visualization of a series of pathways of the same network, as is needed for a set of EFMs. The software is coded in Python 3 and provides a graphical user interface (GUI) and an application programming interface (API). All functionalities of the program can be used from the API and the GUI, and advanced users can add their own functionalities. The software is able to work with various formats of flux distributions (Metatool, CellNetAnalyzer, COPASI and FAME export files) as well as with Excel files. This simple software can save a lot of time when evaluating flux simulations on a metabolic network.
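The underlying mechanism — look up each SVG element's id in a flux table and restyle the matching arrow — can be sketched with the Python standard library as follows. This is a toy illustration of the approach, not FluxVisualizer's actual code; the colour scheme and width scaling are arbitrary choices.

```python
import xml.etree.ElementTree as ET


def color_fluxes(svg_text, fluxes, max_width=8.0):
    """Restyle SVG elements whose id matches a reaction name.

    Arrow width is scaled by |flux| relative to the largest flux;
    colour signals direction (red forward, blue reverse).
    """
    root = ET.fromstring(svg_text)
    vmax = max(abs(v) for v in fluxes.values()) or 1.0
    for el in root.iter():
        rid = el.get('id')
        if rid in fluxes:
            v = fluxes[rid]
            el.set('stroke', '#d62728' if v >= 0 else '#1f77b4')
            el.set('stroke-width', '%.2f' % (max_width * abs(v) / vmax))
    return ET.tostring(root, encoding='unicode')
```

Because only presentation attributes are touched, the user's original layout (node positions, labels, styling of non-reaction elements) is preserved, which is the design point the abstract emphasizes.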
Monte Carlo methods for flux expansion solutions of transport problems
International Nuclear Information System (INIS)
Spanier, J.
1999-01-01
Adaptive Monte Carlo methods, based on the use of either correlated sampling or importance sampling, to obtain global solutions to certain transport problems have recently been described. The resulting learning algorithms are capable of achieving geometric convergence when applied to the estimation of a finite number of coefficients in a flux expansion representation of the global solution. However, because of the nonphysical nature of the random walk simulations needed to perform importance sampling, conventional transport estimators and source sampling techniques require modification to be used successfully in conjunction with such flux expansion methods. It is shown how these problems can be overcome. First, the traditional path length estimators in wide use in particle transport simulations are generalized to include rather general detector functions (which, in this application, are the individual basis functions chosen for the flux expansion). Second, it is shown how to sample from the signed probabilities that arise as source density functions in these applications, without destroying the zero variance property needed to ensure geometric convergence to zero error.
International Nuclear Information System (INIS)
Park, Tongkyu; Yang, Won Sik; Kim, Sang-Ji
2017-01-01
Highlights: • An enhanced search algorithm for charged fuel enrichment was developed for equilibrium cycle analysis with REBUS-3. • The new search algorithm is not sensitive to the user-specified initial guesses. • The new algorithm reduces the computational time by a factor of 2–3. - Abstract: This paper presents an enhanced search algorithm for the charged fuel enrichment in equilibrium cycle analysis of REBUS-3. The current enrichment search algorithm of REBUS-3 takes a large number of iterations to yield a converged solution or even terminates without a converged solution when the user-specified initial guesses are far from the solution. To resolve the convergence problem and to reduce the computational time, an enhanced search algorithm was developed. The enhanced algorithm is based on the idea of minimizing the number of enrichment estimates by allowing drastic enrichment changes and by optimizing the current search algorithm of REBUS-3. Three equilibrium cycle problems with recycling, without recycling and of high discharge burnup were defined and a series of sensitivity analyses were performed with a wide range of user-specified initial guesses. Test results showed that the enhanced search algorithm is able to produce a converged solution regardless of the initial guesses. In addition, it was able to reduce the number of flux calculations by a factor of 2.9, 1.8, and 1.7 for equilibrium cycle problems with recycling, without recycling, and of high discharge burnup, respectively, compared to the current search algorithm.
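The shape of such an enrichment search can be sketched as a secant iteration on the eigenvalue response, where each function evaluation stands for one expensive flux calculation. This is an illustrative sketch only, not REBUS-3's actual algorithm; the linear `mock_keff` response is a hypothetical stand-in for the real flux solve.

```python
# Illustrative enrichment search: secant iteration on keff(e) - target.
# Each keff() call represents one full flux calculation, so minimizing
# the number of calls is what matters (the point of the enhanced algorithm).

def mock_keff(enrichment):
    # Hypothetical smooth, increasing response of k_eff to charged enrichment.
    return 0.85 + 0.02 * enrichment

def search_enrichment(keff, target=1.0, e0=1.0, e1=20.0, tol=1e-6, max_iter=50):
    """Secant search for the enrichment giving keff(e) == target.
    Returns (enrichment, number_of_flux_calculations)."""
    f0, f1 = keff(e0) - target, keff(e1) - target
    calls = 2
    for _ in range(max_iter):
        if abs(f1) < tol:
            return e1, calls
        e0, e1 = e1, e1 - f1 * (e1 - e0) / (f1 - f0)   # secant step
        f0, f1 = f1, keff(e1) - target
        calls += 1
    return e1, calls
```

Because the secant step allows drastic enrichment changes, a poor initial bracket still converges; for the linear mock response above it lands on the solution after a single corrected step.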
Adaptation of Flux-Corrected Transport Algorithms for Modeling Dusty Flows.
1983-12-20
Lee, L. C.; Ma, Z. W.; Fu, Z. F.; Otto, A.
1993-01-01
A mechanism for the formation of fossil flux transfer events and the low-latitude boundary layer within the framework of multiple X-line reconnection is proposed. Attention is given to conditions for which the bulk of magnetic flux in a flux rope of finite extent has a simple magnetic topology, where the four possible connections of magnetic field lines are: IMF to MSP, MSP to IMF, IMF to IMF, and MSP to MSP. For a sufficient relative shift of the X lines, magnetic flux may enter a flux rope from the magnetosphere and exit into the magnetosphere. This process leads to the formation of magnetic flux ropes which contain a considerable amount of magnetosheath plasma on closed magnetospheric field lines. This process is discussed as a possible explanation for the formation of fossil flux transfer events in the magnetosphere and the formation of the low-latitude boundary layer.
Mao-Gilles Stabilization Algorithm
Directory of Open Access Journals (Sweden)
Jérôme Gilles
2013-07-01
Full Text Available Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.
One improved LSB steganography algorithm
Song, Bing; Zhang, Zhi-hong
2013-03-01
Information hidden in digital images with the plain LSB algorithm is easily detected, with high accuracy, by χ² (chi-square) and RS steganalysis. We improved the LSB algorithm by changing the selection of the embedding locations and the embedding method, combining sub-affine transformation with the matrix coding method, and propose a new LSB algorithm. Experimental results show that the improved algorithm can resist χ² and RS steganalysis effectively.
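For reference, the baseline LSB scheme that the paper improves on can be sketched in a few lines: message bits simply replace the least significant bit of successive pixel values, which is exactly the regular pattern that χ² and RS steganalysis exploit. This is the plain baseline, not the paper's improved variant.

```python
# Baseline LSB embedding/extraction over a flat list of 8-bit pixel values.
# Each message bit overwrites the LSB of one pixel, changing it by at most 1.

def lsb_embed(pixels, message):
    """Hide message bytes in the LSBs of a flat list of 8-bit pixel values."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small")
    stego = list(pixels)
    for k, bit in enumerate(bits):
        stego[k] = (stego[k] & 0xFE) | bit   # overwrite the LSB only
    return stego

def lsb_extract(pixels, n_bytes):
    """Recover n_bytes hidden bytes from pixel LSBs."""
    out = []
    for i in range(n_bytes):
        byte = 0
        for bit in pixels[8 * i: 8 * i + 8]:
            byte = (byte << 1) | (bit & 1)
        out.append(byte)
    return bytes(out)
```

The improved algorithm keeps this bit-replacement core but scatters the embedding positions (via a sub-affine transformation) and reduces the number of changed bits (via matrix coding), which breaks the statistical signature the steganalysis tests look for.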
Unsupervised Classification Using Immune Algorithm
Al-Muallim, M. T.; El-Kouatly, R.
2012-01-01
An unsupervised classification algorithm based on the clonal selection principle, named Unsupervised Clonal Selection Classification (UCSC), is proposed in this paper. The new algorithm is data driven and self-adaptive; it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well-known K-means algorithm using several artificial and real-life data sets. The experiments show that the proposed UCSC...
Graph Algorithm Animation with Grrr
Rodgers, Peter; Vidal, Natalia
2000-01-01
We discuss geometric positioning, highlighting of visited nodes and user defined highlighting that form the algorithm animation facilities in the Grrr graph rewriting programming language. The main purpose of animation was initially the debugging and profiling of Grrr code, but recently it has been extended for the purpose of teaching algorithms to undergraduate students. The animation is restricted to graph based algorithms such as graph drawing, list manipulation or more traditional graph...
Algorithms over partially ordered sets
DEFF Research Database (Denmark)
Baer, Robert M.; Østerby, Ole
1969-01-01
in partially ordered sets, answer the combinatorial question of how many maximal chains might exist in a partially ordered set with n elements, and we give an algorithm for enumerating all maximal chains. We give (in § 3) algorithms which decide whether a partially ordered set is a (lower or upper) semi-lattice, and whether a lattice has distributive, modular, and Boolean properties. Finally (in § 4) we give Algol realizations of the various algorithms...
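The enumeration of all maximal chains mentioned above can be sketched as a depth-first search over the covering (Hasse) relation, starting from the minimal elements. This is an illustrative modern sketch, not the paper's Algol realization.

```python
# Enumerate all maximal chains of a finite poset by DFS over cover relations.
# `less(a, b)` is the strict order; a cover of a is a b > a with nothing between.

def maximal_chains(elements, less):
    covers = {a: [b for b in elements
                  if less(a, b) and not any(less(a, c) and less(c, b) for c in elements)]
              for a in elements}
    minimal = [a for a in elements if not any(less(b, a) for b in elements)]
    chains = []

    def extend(chain):
        succ = covers[chain[-1]]
        if not succ:                 # reached a maximal element: chain is maximal
            chains.append(chain)
        for b in succ:
            extend(chain + [b])

    for a in minimal:
        extend([a])
    return chains
```

For example, the divisors of 6 ordered by divisibility have exactly two maximal chains, 1 < 2 < 6 and 1 < 3 < 6, and the worst-case count over n elements grows factorially, which is the combinatorial question the paper addresses.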
An overview of smart grid routing algorithms
Wang, Junsheng; OU, Qinghai; Shen, Haijuan
2017-08-01
This paper summarizes typical routing algorithms for the smart grid by analyzing the communication services and communication requirements of the intelligent grid. Two kinds of typical routing algorithms, clustering-based and non-clustering-based, are analyzed, and their respective advantages, disadvantages, and applicability are discussed.
Neutron flux enhancement in the NRAD reactor
International Nuclear Information System (INIS)
Weeks, A.A.; Heidel, C.C.; Imel, G.R.
1988-01-01
In 1987 a series of experiments were conducted at the NRAD reactor facility at Argonne National Laboratory - West (ANL-W) to investigate the possibility of increasing the thermal neutron content at the end of the reactor's east beam tube through the use of hydrogenous flux traps. It was desired to increase the thermal flux for a series of experiments to be performed in the east radiography cell, in which the enhanced flux was required in a relatively small volume. Hence, it was feasible to attempt to focus the cross section of the beam to a smaller area. Two flux traps were constructed from unborated polypropylene and tested to determine their effectiveness. Both traps were open to the entire cross-sectional area of the neutron beam (as it emerges from the wall and enters the beam room). The sides then converged such that at the end of the trap the beam would be 'focused' to a greater intensity. The differences in the two flux traps were primarily in length, and hence angle to the beam, as the inlet and outlet cross-sectional areas were held constant. The experiments have contributed to the design of a flux trap in which a thermal flux of nearly 10⁹ was obtained, with an enhancement of 6.61.
Algorithmic complexity of quantum capacity
Oskouei, Samad Khabbazi; Mancini, Stefano
2018-04-01
We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.
Machine Learning an algorithmic perspective
Marsland, Stephen
2009-01-01
Traditional books on machine learning can be divided into two groups - those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text. Theory Backed up by Practical Examples: The book covers neural networks, graphical models, reinforcement learning...
DNABIT Compress - Genome compression algorithm.
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-22
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm could achieve the best compression ratio, as much as 1.58 bits/base, where the existing best methods could not achieve a ratio less than 1.72 bits/base.
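The bit-assignment idea can be illustrated with the simplest fixed code: two bits per base, which already compresses 8-bit text fourfold. DNABIT Compress itself goes further with variable-length codes for repeat fragments (how it gets below 2 bits/base); the sketch below shows only the basic packing, not the paper's algorithm.

```python
# Fixed 2-bits-per-base packing of a DNA string into bytes, and its inverse.
# Illustrates the bit-assignment idea; DNABIT's variable-length repeat codes
# are not reproduced here.

CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = {v: k for k, v in CODE.items()}

def pack(seq):
    """Pack a DNA string into bytes at 2 bits per base; returns (bytes, length)."""
    buf = bytearray()
    acc, nbits = 0, 0
    for ch in seq:
        acc = (acc << 2) | CODE[ch]
        nbits += 2
        if nbits == 8:
            buf.append(acc)
            acc, nbits = 0, 0
    if nbits:
        buf.append(acc << (8 - nbits))   # left-align the final partial byte
    return bytes(buf), len(seq)

def unpack(data, n):
    """Inverse of pack: recover the first n bases."""
    out = []
    for i in range(n):
        byte = data[i // 4]
        shift = 6 - 2 * (i % 4)
        out.append(BASE[(byte >> shift) & 3])
    return "".join(out)
```

A repeat-aware coder then spends fewer bits on fragments it has seen before (exact or reverse repeats), which is where the reported 1.58 bits/base comes from.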
Diversity-Guided Evolutionary Algorithms
DEFF Research Database (Denmark)
Ursem, Rasmus Kjær
2002-01-01
Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few algorithms have used a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the well-known distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results...
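The distance-to-average-point measure and the phase switch it drives can be sketched as follows. The threshold values are illustrative assumptions, not the paper's settings; note that DGEA switches to the exploration (mutation) phase when diversity drops, to push the population apart again.

```python
# Distance-to-average-point diversity and the resulting DGEA-style phase pick.
# Thresholds d_low/d_high are illustrative, not the paper's values.
import math

def diversity(population, diag_len):
    """Mean distance of individuals to the population's average point,
    normalized by the search-space diagonal length."""
    dim = len(population[0])
    avg = [sum(ind[d] for ind in population) / len(population) for d in range(dim)]
    dists = [math.sqrt(sum((ind[d] - avg[d]) ** 2 for d in range(dim)))
             for ind in population]
    return sum(dists) / (len(population) * diag_len)

def phase(population, diag_len, d_low=0.01, d_high=0.25):
    """Low diversity triggers exploration (mutation); high diversity
    triggers exploitation (recombination and selection)."""
    d = diversity(population, diag_len)
    return "explore" if d < d_low else "exploit" if d > d_high else "balanced"
```

A fully converged population has diversity zero and is forced back into mutation, which is the mechanism for escaping local optima.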
FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS
Directory of Open Access Journals (Sweden)
G. Sithole
2015-05-01
Full Text Available The notion of a 'best' segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of its algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen its performance is still uncertain because the landscape/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Thus selecting an appropriate segmentation algorithm is a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are 'goodness methods', 'discrepancy methods' and 'benchmarks'. Benchmarks are considered the most comprehensive method of evaluation. In this paper, shortcomings in current benchmark methods are identified and a framework is proposed that permits both a visual and numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.
Predicting radon flux from uranium mill tailings
International Nuclear Information System (INIS)
Freeman, H.D.; Hartley, J.N.
1983-11-01
Pacific Northwest Laboratory (PNL), under contract to the US Department of Energy (DOE) Uranium Mill Tailings Remedial Action Project (UMTRAP) office, is developing technology for the design of radon barriers for uranium mill tailings piles. To properly design a radon cover for a particular tailings pile, the radon flux emanating from the bare tailings must be known. The tailings characteristics required to calculate the radon flux include radium-226 content, emanating power, bulk density, and radon diffusivity. This paper presents theoretical and practical aspects of estimating the radon flux from a uranium tailings pile. Results of field measurements to verify the calculation methodology are also discussed. 24 references, 4 figures, 4 tables
A time-varying magnetic flux concentrator
International Nuclear Information System (INIS)
Kibret, B; Premaratne, M; Lewis, P M; Thomson, R; Fitzgerald, P B
2016-01-01
It is known that diverse technological applications require the use of focused magnetic fields. This has driven the quest for controlling the magnetic field. Recently, the principles in transformation optics and metamaterials have allowed the realization of practical static magnetic flux concentrators. Extending such progress, here, we propose a time-varying magnetic flux concentrator cylindrical shell that uses electric conductors and ferromagnetic materials to guide magnetic flux to its center. Its performance is discussed based on finite-element simulation results. Our proposed design has potential applications in magnetic sensors, medical devices, wireless power transfer, and near-field wireless communications. (paper)
Energy flux correlations and moving mirrors
International Nuclear Information System (INIS)
Ford, L.H.; Roman, Thomas A.
2004-01-01
We study the quantum stress tensor correlation function for a massless scalar field in a flat two-dimensional spacetime containing a moving mirror. We construct the correlation functions for right-moving and left-moving fluxes for an arbitrary trajectory, and then specialize them to the case of a mirror trajectory for which the expectation value of the stress tensor describes a pair of delta-function pulses, one of negative energy and one of positive energy. The flux correlation function describes the fluctuations around this mean stress tensor, and reveals subtle changes in the correlations between regions where the mean flux vanishes
Eddy Correlation Flux Measurement System (ECOR) Handbook
Energy Technology Data Exchange (ETDEWEB)
Cook, DR
2011-01-31
The eddy correlation (ECOR) flux measurement system provides in situ, half-hour measurements of the surface turbulent fluxes of momentum, sensible heat, latent heat, and carbon dioxide (CO2) (and methane at one Southern Great Plains extended facility (SGP EF) and the North Slope of Alaska Central Facility (NSA CF)). The fluxes are obtained with the eddy covariance technique, which involves correlation of the vertical wind component with the horizontal wind component, the air temperature, the water vapor density, and the CO2 concentration.
Exponentially tapered Josephson flux-flow oscillator
DEFF Research Database (Denmark)
Benabdallah, A.; Caputo, J. G.; Scott, Alwyn C.
1996-01-01
We introduce an exponentially tapered Josephson flux-flow oscillator that is tuned by applying a bias current to the larger end of the junction. Numerical and analytical studies show that above a threshold level of bias current the static solution becomes unstable and gives rise to a train of fluxons moving toward the unbiased smaller end, as in the standard flux-flow oscillator. An exponentially shaped junction provides several advantages over a rectangular junction including: (i) smaller linewidth, (ii) increased output power, (iii) no trapped flux because of the type of current injection...
Diamagnetic flux measurement in Aditya tokamak
International Nuclear Information System (INIS)
Kumar, Sameer; Jha, Ratneshwar; Lal, Praveen; Hansaliya, Chandresh; Gopalkrishna, M. V.; Kulkarni, Sanjay; Mishra, Kishore
2010-01-01
Measurements of diamagnetic flux in Aditya tokamak for different discharge conditions are reported for the first time. The measured diamagnetic flux in a typical discharge is less than 0.6 mWb and therefore it has required careful compensation for various kinds of pick-ups. The hardware and software compensations employed in this measurement are described. We introduce compensation of a pick-up due to plasma current of less than 20 kA in short duration discharges, in which plasma pressure gradient is supposed to be negligible. The flux measurement during radio frequency heating is also presented in order to validate compensation.
High heat flux cooling for accelerator targets
International Nuclear Information System (INIS)
Silverman, I.; Nagler, A.
2002-01-01
Accelerator targets, both for radioisotope production and for high neutron flux sources, generate very high thermal power in the target material which absorbs the particle beam. Generally, the geometric size of the targets is very small and the power density is high. The design of these targets requires dealing with very high heat fluxes and very efficient heat removal techniques in order to preserve the integrity of the target. Normal heat fluxes from these targets are of the order of 1 kW/cm² and may reach levels an order of magnitude higher.
Feng, S.; Lauvaux, T.; Butler, M. P.; Keller, K.; Davis, K. J.; Jacobson, A. R.; Schuh, A. E.; Basu, S.; Liu, J.; Baker, D.; Crowell, S.; Zhou, Y.; Williams, C. A.
2017-12-01
Regional estimates of biogenic carbon fluxes over North America from top-down atmospheric inversions and terrestrial biogeochemical (or bottom-up) models remain inconsistent at annual and sub-annual time scales. While top-down estimates are impacted by limited atmospheric data, uncertain prior flux estimates and errors in the atmospheric transport models, bottom-up fluxes are affected by uncertain driver data, uncertain model parameters and missing mechanisms across ecosystems. This study quantifies both flux errors and transport errors, and their interaction in the CO2 atmospheric simulation. These errors are assessed by an ensemble approach. The WRF-Chem model is set up with 17 biospheric fluxes from the Multiscale Synthesis and Terrestrial Model Intercomparison Project, CarbonTracker-Near Real Time, and the Simple Biosphere model. The spread of the flux ensemble members represents the flux uncertainty in the modeled CO2 concentrations. For the transport errors, WRF-Chem is run using three physical model configurations with three stochastic perturbations to sample the errors from both the physical parameterizations of the model and the initial conditions. Additionally, the uncertainties from boundary conditions are assessed using four CO2 global inversion models which have assimilated tower and satellite CO2 observations. The error structures are assessed in time and space. The flux ensemble members overall overestimate CO2 concentrations. They also show larger temporal variability than the observations. These results suggest that the flux ensemble is overdispersive. In contrast, the transport ensemble is underdispersive. The averaged spatial distribution of modeled CO2 shows strong positive biogenic signal in the southern US and strong negative signals along the eastern coast of Canada. We hypothesize that the former is caused by the 3-hourly downscaling algorithm from which the nighttime respiration dominates the daytime modeled CO2 signals and that the latter
DEFF Research Database (Denmark)
Nica, Florin Valentin Traian; Ritchie, Ewen; Leban, Krisztina Monika
2013-01-01
Nowadays the requirements imposed by the industry and economy ask for better quality and performance while the price must be maintained in the same range. To achieve this goal optimization must be introduced in the design process. Two of the best known optimization algorithms for machine design, the genetic algorithm and particle swarm optimization, are shortly presented in this paper. These two algorithms are tested to determine their performance on five different benchmark test functions. The algorithms are tested based on three requirements: precision of the result, number of iterations and calculation time. Both algorithms are also tested on an analytical design process of a Transverse Flux Permanent Magnet Generator to observe their performances in an electrical machine design application.
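The kind of benchmark comparison described can be sketched with a minimal particle swarm optimizer on one classic test function (the sphere function). The inertia and acceleration parameters below are common defaults, not the paper's settings, and the paper's five benchmark functions are not reproduced.

```python
# Minimal particle swarm optimization on the sphere benchmark function.
# Standard velocity update with inertia w and cognitive/social terms c1, c2.
import random

def sphere(x):
    return sum(v * v for v in x)

def pso(f, dim=2, n=20, iters=200, lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                      # personal bests
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]               # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval
```

Counting iterations to a fixed precision and timing the run for each algorithm on each benchmark function gives exactly the three comparison criteria named above.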
Combined spatial/angular domain decomposition SN algorithms for shared memory parallel machines
International Nuclear Information System (INIS)
Hunter, M.A.; Haghighat, A.
1993-01-01
Several parallel processing algorithms on the basis of spatial and angular domain decomposition methods are developed and incorporated into a two-dimensional discrete ordinates transport theory code. These algorithms divide the spatial and angular domains into independent subdomains so that the flux calculations within each subdomain can be processed simultaneously. Two spatial parallel algorithms (Block-Jacobi, red-black), one angular parallel algorithm (η-level), and their combinations are implemented on an eight processor CRAY Y-MP. Parallel performances of the algorithms are measured using a series of fixed source RZ geometry problems. Some of the results are also compared with those executed on an IBM 3090/600J machine. (orig.)
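The red-black spatial decomposition mentioned above rests on the fact that, with a checkerboard colouring, every update of one colour depends only on neighbours of the other colour, so all same-colour updates are independent and can be processed simultaneously. The sketch below shows the structure serially on a simple 2-D Laplace-type relaxation, not the actual discrete ordinates transport sweep.

```python
# Red-black relaxation of the interior of a 2-D grid. Within each coloured
# half-sweep, no updated point reads another point of the same colour, so the
# inner loops could be distributed across processors without synchronization.

def red_black_sweeps(u, iters):
    n, m = len(u), len(u[0])
    for _ in range(iters):
        for colour in (0, 1):                       # red pass, then black pass
            for i in range(1, n - 1):
                for j in range(1, m - 1):
                    if (i + j) % 2 == colour:       # same-colour updates are independent
                        u[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
    return u
```

In the transport setting the same idea is applied to spatial blocks (Block-Jacobi, red-black) and, independently, to discrete angular directions (the η-level decomposition), and the two can be combined.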
Gu, Yingxin; Howard, Daniel M.; Wylie, Bruce K.; Zhang, Li
2012-01-01
Flux tower networks (e. g., AmeriFlux, Agriflux) provide continuous observations of ecosystem exchanges of carbon (e. g., net ecosystem exchange), water vapor (e. g., evapotranspiration), and energy between terrestrial ecosystems and the atmosphere. The long-term time series of flux tower data are essential for studying and understanding terrestrial carbon cycles, ecosystem services, and climate changes. Currently, there are 13 flux towers located within the Great Plains (GP). The towers are sparsely distributed and do not adequately represent the varieties of vegetation cover types, climate conditions, and geophysical and biophysical conditions in the GP. This study assessed how well the available flux towers represent the environmental conditions or "ecological envelopes" across the GP and identified optimal locations for future flux towers in the GP. Regression-based remote sensing and weather-driven net ecosystem production (NEP) models derived from different extrapolation ranges (10 and 50%) were used to identify areas where ecological conditions were poorly represented by the flux tower sites and years previously used for mapping grassland fluxes. The optimal lands suitable for future flux towers within the GP were mapped. Results from this study provide information to optimize the usefulness of future flux towers in the GP and serve as a proxy for the uncertainty of the NEP map.
Nuclear reactors project optimization based on neural network and genetic algorithm
International Nuclear Information System (INIS)
Pereira, Claudio M.N.A.; Schirru, Roberto; Martinez, Aquilino S.
1997-01-01
This work presents a prototype of a system for nuclear reactor core design optimization based on genetic algorithms and artificial neural networks. A neural network is modeled and trained to predict the flux and the neutron multiplication factor from the enrichment, lattice pitch and cladding thickness, with an average error of less than 2%. The values predicted by the neural network are used by a genetic algorithm in a heuristic search, guided by an objective function that rewards high flux values and penalizes multiplication factors far from the required value. Associating this quick prediction, which may substitute for the reactor physics calculation code, with the global optimization capacity of the genetic algorithm, a quick and effective system for nuclear reactor core design optimization was obtained. (author). 11 refs., 8 figs., 3 tabs
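The shape of that search loop can be sketched as follows. The two mock functions are hypothetical stand-ins for the trained neural-network predictions, and the fitness combines the two objectives exactly as described: reward high flux, penalize multiplication factors far from the required value. This is an illustrative sketch, not the authors' code.

```python
# GA over a single design variable with a surrogate-based fitness: maximize
# predicted flux minus a penalty on |keff - target|. The surrogates below are
# hypothetical stand-ins for the trained neural network.
import random

def nn_flux(e):
    return 10.0 - (e - 3.0) ** 2        # hypothetical surrogate, peaks at e = 3

def nn_keff(e):
    return 0.9 + 0.05 * e               # hypothetical surrogate for k_eff

def fitness(e, k_target=1.0, penalty=50.0):
    return nn_flux(e) - penalty * abs(nn_keff(e) - k_target)

def ga_search(lo=0.0, hi=5.0, pop_size=30, gens=60, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]                 # truncation selection
        pop = elite + [min(hi, max(lo, rng.choice(elite) + rng.gauss(0, 0.1)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)
```

Because every fitness evaluation is a surrogate call rather than a full reactor physics calculation, thousands of candidate designs can be screened cheaply, which is the point of coupling the two methods.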
Neutron flux measurements in PUSPATI Triga Reactor
International Nuclear Information System (INIS)
Gui Ah Auu; Mohamad Amin Sharifuldin Salleh; Mohamad Ali Sufi.
1983-01-01
Neutron flux measurement in the PUSPATI TRIGA Reactor (PTR) was initiated after its commissioning on 28 June 1982. The initial measured thermal neutron fluxes at the bottom of the rotary specimen rack (rotating) and in the in-core pneumatic terminus were 3.81×10¹¹ n/cm²·s and 1.10×10¹² n/cm²·s, respectively, at 100 kW. Work to complete the neutron flux data is still going on. The cadmium ratio and the thermal and epithermal neutron flux are measured in the reactor core, rotary specimen rack, in-core pneumatic terminus and thermal column. Bare and cadmium-covered gold foils and wires are used for the above measurements. The activities of the irradiated gold foils and wires are determined using Ge(Li) and hyperpure germanium detectors. (author)
Pulse power applications of flux compression generators
International Nuclear Information System (INIS)
Fowler, C.M.; Caird, R.S.; Erickson, D.J.; Freeman, B.L.
1981-01-01
Characteristics are presented for two different types of explosive driven flux compression generators and a megavolt pulse transformer. Status reports are given for rail gun and plasma focus programs for which the generators serve as power sources
Modeling radon flux from the earth's surface
International Nuclear Information System (INIS)
Schery, S.D.; Wasiolek, M.A.
1998-01-01
We report development of a ²²²Rn flux density model and its use to estimate the ²²²Rn flux density over the earth's land surface. The resulting maps are generated on a grid spacing of 1° × 1° using as input global data for soil radium, soil moisture, and surface temperature. While only a first approximation, the maps suggest a significant regional variation (a factor of three is not uncommon) and a significant seasonal variation (a factor of two is not uncommon) in ²²²Rn flux density over the earth's surface. The estimated average global flux density from ice-free land is 34 ± 9 mBq m⁻² s⁻¹. (author)
400 Area/Fast Flux Test Facility
Federal Laboratory Consortium — The 400 Area at Hanford is home primarily to the Fast Flux Test Facility (FFTF), a DOE-owned, formerly operating, 400-megawatt (thermal) liquid-metal (sodium)-cooled...
Flux Tube Dynamics in the Dual Superconductor
International Nuclear Information System (INIS)
Lampert, M.; Svetitsky, B.
1999-01-01
We have studied plasma oscillations in a flux tube created in a dual superconductor. The theory contains an Abelian gauge field coupled magnetically to a Higgs field that confines electric charge via the dual Meissner effect. Starting from a static flux tube configuration, with electric charges at either end, we release a fluid of electric charges in the system that accelerate and screen the electric field. The weakening of the electric field allows the flux tube to collapse, and the inertia of the charges forces it open again. We investigate both Type I and Type II superconductors, with plasma frequencies both above and below the threshold for radiation into the Higgs vacuum. (The parameters appropriate to QCD are in the Type II regime; the plasma frequency depends on the mass taken for the fluid constituents.) The coupling of the plasma oscillations to the Higgs field making up the flux tube is the main new feature in our work
High Flux Isotope Reactor technical specifications
International Nuclear Information System (INIS)
1985-11-01
This report gives technical specifications for the High Flux Isotope Reactor (HFIR) on the following: safety limits and limiting safety system settings; limiting conditions for operation; surveillance requirements; design features; and administrative controls
Heat flux microsensor measurements and calibrations
Terrell, James P.; Hager, Jon M.; Onishi, Shinzo; Diller, Thomas E.
1992-01-01
A new thin-film heat flux gage has been fabricated specifically for severe high temperature operation using platinum and platinum-10 percent rhodium for the thermocouple elements. Radiation calibrations of this gage were performed at the AEDC facility over the available heat flux range (approx. 1.0 - 1,000 W/cu cm). The gage output was linear with heat flux with a slight increase in sensitivity with increasing surface temperature. Survivability of gages was demonstrated in quench tests from 500 C into liquid nitrogen. Successful operation of gages to surface temperatures of 750 C has been achieved. No additional cooling of the gages is required because the gages are always at the same temperature as the substrate material. A video of oxyacetylene flame tests with real-time heat flux and temperature output is available.
High energy neutrinos: sources and fluxes
Energy Technology Data Exchange (ETDEWEB)
Stanev, Todor [Bartol Research Institute, Department of Physics and Astronomy, University of Delaware, Newark DE 19716 (United States)
2006-05-15
We discuss briefly the potential sources of high energy astrophysical neutrinos and show estimates of the neutrino fluxes that they can produce. Special attention is paid to the connection between the highest energy cosmic rays and astrophysical neutrinos.
Rotating flux compressor for energy conversion
International Nuclear Information System (INIS)
Chowdhuri, P.; Linton, T.W.; Phillips, J.A.
1983-01-01
The rotating flux compressor (RFC) converts rotational kinetic energy into an electrical output pulse which would have higher energy than the electrical energy initially stored in the compressor. An RFC has been designed in which wedge-shaped rotor blades pass through the air gaps between successive turns of a solenoid, the stator. Magnetic flux is generated by pulsing the stator solenoids when the inductance is a maximum, i.e., when the flux fills the stator-solenoid volume. Connecting the solenoid across a load conserves the flux, which is compressed within the small volume surrounding the stator periphery when the rotor blades cut into the free space between the stator plates, creating a minimum-inductance condition. The unique features of this design are: (1) no electrical connections (brushes) to the rotor; (2) no conventional windings; and (3) no maintenance. The device has been tested at rotor speeds up to 5000 rpm
Modelling drug flux through microporated skin.
Rzhevskiy, Alexey S; Guy, Richard H; Anissimov, Yuri G
2016-11-10
A simple mathematical equation has been developed to predict drug flux through microporated skin. The theoretical model is based on an approach applied previously to water evaporation through leaf stomata. Pore density, pore radius and drug molecular weight are key model parameters. The predictions of the model were compared with results derived from a simple, intuitive method using porated area alone to estimate the flux enhancement. It is shown that the new approach predicts significantly higher fluxes than the intuitive analysis, with transport being proportional to the total pore perimeter rather than area as intuitively anticipated. Predicted fluxes were in good general agreement with experimental data on drug delivery from the literature, and were quantitatively closer to the measured values than those derived from the intuitive, area-based approach.
On the Tensorial Nature of Fluxes in Continuous Media.
Stokes, Vijay Kumar; Ramkrishna, Doraiswami
1982-01-01
Argues that mass and energy fluxes in a fluid are vectors. Topics include the stress tensor, theorem for tensor fields, mass flux as a vector, stress as a second order tensor, and energy flux as a tensor. (SK)
sizing of wind powered axial flux permanent magnet alternator using
African Journals Online (AJOL)
user
2016-10-04
Keywords: Wind-Power, Axial flux, Axial Flux Permanent Machines (AFPM), Axial Flux Permanent Magnet ... energy for power generation, a high constraint is the ... arrangements as Single-Rotor Single-Stator Structure.
International Nuclear Information System (INIS)
Grady, M.
1986-01-01
I describe a fast fermion algorithm which utilizes pseudofermion fields but appears to have little or no systematic error. Test simulations on two-dimensional gauge theories are described. A possible justification for the algorithm being exact is discussed. 8 refs
Quantum algorithms and learning theory
Arunachalam, S.
2018-01-01
This thesis studies strengths and weaknesses of quantum computers. In the first part we present three contributions to quantum algorithms. 1) Consider a search space of N elements. One of these elements is "marked", and our goal is to find it. We describe a quantum algorithm to solve this problem
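The search problem sketched in 1) is the setting of Grover's algorithm. As a hedged illustration (a standard textbook construction, not code from the thesis), the following NumPy statevector simulation shows that roughly (π/4)√N oracle-plus-diffusion iterations concentrate almost all amplitude on the marked element:

```python
import numpy as np

def grover_search(n_items, marked, n_iters=None):
    """Statevector simulation of Grover search over n_items entries."""
    if n_iters is None:
        n_iters = int(np.floor(np.pi / 4 * np.sqrt(n_items)))
    # Start in the uniform superposition over the search space.
    psi = np.full(n_items, 1.0 / np.sqrt(n_items))
    for _ in range(n_iters):
        psi[marked] *= -1.0           # oracle: flip the marked amplitude
        psi = 2.0 * psi.mean() - psi  # diffusion: inversion about the mean
    return psi

psi = grover_search(64, marked=5)
# Probability of measuring the marked element after ~(pi/4)*sqrt(N) iterations
print(abs(psi[5]) ** 2)
```

With N = 64 the loop runs 6 times and the marked element is found with probability above 99%, versus 1/64 for a single classical guess.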
Online co-regularized algorithms
Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.
2012-01-01
We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks
A fast fractional difference algorithm
DEFF Research Database (Denmark)
Jensen, Andreas Noack; Nielsen, Morten Ørregaard
2014-01-01
We provide a fast algorithm for calculating the fractional difference of a time series. In standard implementations, the calculation speed (number of arithmetic operations) is of order T^2, where T is the length of the time series. Our algorithm allows calculation speed of order T log...
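The speed-up comes from the fact that fractional differencing is a linear convolution of the series with the binomial-expansion coefficients of (1-L)^d, which the FFT evaluates in O(T log T). A hedged NumPy sketch (the coefficient recursion is standard; the implementation is illustrative, not the authors' code):

```python
import numpy as np

def frac_coeffs(d, T):
    """Binomial-expansion coefficients of (1 - L)^d: b_k = b_{k-1} (k - 1 - d) / k."""
    b = np.empty(T)
    b[0] = 1.0
    for k in range(1, T):
        b[k] = b[k - 1] * (k - 1 - d) / k
    return b

def fracdiff_naive(x, d):
    """O(T^2): direct convolution y_t = sum_k b_k x_{t-k}."""
    b = frac_coeffs(d, len(x))
    return np.array([np.dot(b[:t + 1][::-1], x[:t + 1]) for t in range(len(x))])

def fracdiff_fft(x, d):
    """O(T log T): the same linear convolution evaluated with the FFT."""
    T = len(x)
    b = frac_coeffs(d, T)
    n = 1 << (2 * T - 1).bit_length()  # zero-pad to a power of two >= 2T - 1
    conv = np.fft.irfft(np.fft.rfft(b, n) * np.fft.rfft(x, n), n)
    return conv[:T]

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
print(np.max(np.abs(fracdiff_naive(x, 0.4) - fracdiff_fft(x, 0.4))))  # tiny
```

Padding to at least 2T - 1 points makes the circular FFT convolution agree with the linear one on the first T entries.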
A Distributed Spanning Tree Algorithm
DEFF Research Database (Denmark)
Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Sven Hauge
We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two-way channels. Each processor has initially a distinct identity and all processors perform the same algorithm. Computation as well...
Algorithms in combinatorial design theory
Colbourn, CJ
1985-01-01
The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.
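As a small, hedged illustration of the generation aspect (a classical construction, not taken from the volume), the cyclic rule L[i][j] = (i + j) mod n yields a Latin square of any order n, which is easy to verify programmatically:

```python
def cyclic_latin_square(n):
    """Generate an order-n Latin square by the cyclic construction (i + j) mod n."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def is_latin(square):
    """Check that every symbol appears exactly once in each row and each column."""
    n = len(square)
    symbols = set(range(n))
    rows_ok = all(set(row) == symbols for row in square)
    cols_ok = all({square[i][j] for i in range(n)} == symbols for j in range(n))
    return rows_ok and cols_ok

print(is_latin(cyclic_latin_square(7)))  # True
```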
Tau reconstruction and identification algorithm
Indian Academy of Sciences (India)
CMS has developed sophisticated tau identification algorithms for tau hadronic decay modes. Production of tau leptons decaying to hadrons is studied at 7 TeV centre-of-mass energy with 2011 collision data collected by the CMS detector, and has been used to measure the performance of tau identification algorithms by ...
Executable Pseudocode for Graph Algorithms
B. Ó Nualláin (Breanndán)
2015-01-01
Algorithms are written in pseudocode. However, the implementation of an algorithm in a conventional, imperative programming language can often be scattered over hundreds of lines of code, thus obscuring its essence. This can lead to difficulties in understanding or verifying the
Where are the parallel algorithms?
Voigt, R. G.
1985-01-01
Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.
Algorithms for Decision Tree Construction
Chikalov, Igor
2011-01-01
The study of algorithms for decision tree construction was initiated in the 1960s. The first algorithms are based on the separation heuristic [13, 31], which at each step tries to divide the set of objects as evenly as possible. Later Garey and Graham [28
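A minimal, hypothetical sketch of the separation heuristic described above, assuming boolean attributes (the data layout and function names are illustrative, not from the cited works):

```python
def most_even_split(objects, attributes):
    """Separation heuristic: pick the boolean attribute that divides
    the object set as evenly as possible (smallest imbalance)."""
    def imbalance(attr):
        true_side = sum(1 for obj in objects if obj[attr])
        return abs(2 * true_side - len(objects))
    return min(attributes, key=imbalance)

objects = [
    {"a": True,  "b": True},
    {"a": True,  "b": False},
    {"a": True,  "b": True},
    {"a": False, "b": False},
]
print(most_even_split(objects, ["a", "b"]))  # "b" splits 2/2, "a" splits 3/1
```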
A distributed spanning tree algorithm
DEFF Research Database (Denmark)
Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Svend Hauge
1988-01-01
We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two way channels. Each processor has initially a distinct identity and all processors perform the same algorithm. Computation as well as comm...
Global alignment algorithms implementations | Fatumo ...
African Journals Online (AJOL)
In this paper, we implemented the two routes for sequence comparison, that is, the dotplot and the Needleman-Wunsch algorithm for global sequence alignment. Our algorithms were implemented in the Python programming language and were tested on a 1.60 GHz, 512 MB RAM Linux platform under SUSE 9.2 and 10.1.
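A self-contained sketch of the Needleman-Wunsch dynamic programme in Python, in the spirit of the paper's implementation (the scoring values are illustrative defaults, not necessarily those of the paper):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score by the Needleman-Wunsch dynamic programme."""
    n, m = len(a), len(b)
    # F[i][j] = best score aligning a[:i] with b[:j]
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,  # align a[i-1] with b[j-1]
                          F[i - 1][j] + gap,    # gap in b
                          F[i][j - 1] + gap)    # gap in a
    return F[n][m]

print(needleman_wunsch("GATTACA", "GCATGCU"))  # 0 with this classic scoring
```

A full implementation would also traceback through F to recover the alignment itself; the table already contains all the information needed.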
Cascade Error Projection Learning Algorithm
Duong, T. A.; Stubberud, A. R.; Daud, T.
1995-01-01
A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.
Boundary fluxes for non-local diffusion
Cortazar, C.; Elgueta, M.; Rossi, J. D.; Wolanski, N.
2006-01-01
We study a nonlocal diffusion operator in a bounded smooth domain prescribing the flux through the boundary. This problem may be seen as a generalization of the usual Neumann problem for the heat equation. First, we prove existence, uniqueness and a comparison principle. Next, we study the behavior of solutions for some prescribed boundary data including blowing up ones. Finally, we look at a nonlinear flux boundary condition.
Determination of flux in the Reactor JEN-1
International Nuclear Information System (INIS)
Manas Diaz, L.; Montes Ponce de leon, J.
1960-01-01
This report summarizes several irradiations that have been made to determine the neutron flux distributions in the core of the JEN-1 reactor. Gold foils of 380 μg and Mn-Ni (12% Ni) foils of 30 mg have been employed. The epithermal flux has been determined by means of the Cd ratio. The resonance integral values given by Macklin and Pomerance have been used. (Author) 9 refs
Vertical Josephson Interferometer for Tunable Flux Qubit
Energy Technology Data Exchange (ETDEWEB)
Granata, C; Vettoliere, A; Lisitskiy, M; Rombetto, S; Russo, M; Ruggiero, B [Istituto di Cibernetica 'E. Caianiello' del Consiglio Nazionale delle Ricerche, I-80078, Pozzuoli (Italy)]; Corato, V; Russo, R; Silvestrini, P [Dipartimento di Ingegneria dell'Informazione, Seconda Universita di Napoli, I-81031, Aversa (Italy) and Istituto di Cibernetica 'E. Caianiello' del CNR, I-80078, Pozzuoli (Italy)]
2006-06-01
We present a niobium-based Josephson device as prototype for quantum computation with flux qubits. The most interesting feature of this device is the use of a Josephson vertical interferometer to tune the flux qubit allowing the control of the off-diagonal Hamiltonian terms of the system. In the vertical interferometer, the Josephson current is precisely modulated from a maximum to zero with fine control by a small transversal magnetic field parallel to the rf superconducting loop plane.
Self-powered neutron flux detector
International Nuclear Information System (INIS)
Kroon, J.
1979-01-01
A self-powered neutron flux detector having an emitter electrode, at least a major portion of which is 95 Mo, encased in a tubular collector electrode and separated therefrom by dielectric material. The 95 Mo emitter electrode has experimentally shown a 98% prompt response, is primarily sensitive to neutron flux, has adequate sensitivity and has low burnup. Preferably the emitter electrode is molybdenum which has been enriched to 75%-99% by weight with 95 Mo
International Nuclear Information System (INIS)
Cashwell, E.D.; Schrandt, R.G.
1980-01-01
The current state of the art of calculating flux at a point with MCNP is discussed. Various techniques are touched upon, but the main emphasis is on the fast improved version of the once-more-collided flux estimator, which has been modified to treat neutrons thermalized by the free gas model. The method is tested on several problems of interest and the results are presented
Data bank of critical heat flux
International Nuclear Information System (INIS)
Balino, J.L.; Ruival, M.H.
1985-01-01
More than 13,000 measurements of critical heat flux are classified in a data bank. From each experiment the following information can be obtained: cooling medium (light water, freon 12 or freon 21), geometry of the test section and thermal-hydraulic parameters. The data management is performed by a computer program called CHFTRAT. A brief study of the influence of different parameters on the critical heat flux is presented, as an example of how to use the program. (M.E.L.) [es
Anisotropic flux pinning in high Tc superconductors
International Nuclear Information System (INIS)
Kolesnik, S.; Igalson, J.; Skoskiewicz, T.; Szymczak, R.; Baran, M.; Pytel, K.; Pytel, B.
1995-01-01
In this paper we present a comparison of the results of FC magnetization measurements on several Pb-Sr-(Y,Ca)-Cu-O crystals representing various levels of flux pinning. The pinning centers in our crystals have been set up during the crystal growth process or introduced by neutron irradiation. Some possible explanations of the observed effects, including surface barrier, flux-center distribution and sample-shape effects, are discussed. ((orig.))
Novel medical image enhancement algorithms
Agaian, Sos; McClendon, Stephen A.
2010-01-01
In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells will also be implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
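As a hedged illustration of the unsharp-masking idea underlying the second algorithm (a generic single-pass version with a box blur, not the authors' cascaded, adaptive variant):

```python
import numpy as np

def unsharp_mask(img, amount=1.0):
    """Sharpen by adding back the high-frequency residual img - blur(img).
    A 3x3 box blur keeps the sketch NumPy-only; output may overshoot [0, 1]."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    blurred = sum(padded[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return img + amount * (img - blurred)

# A step edge: sharpening increases local contrast (overshoot) at the boundary.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
sharp = unsharp_mask(img)
print(sharp[0, 3], sharp[0, 4])  # approx. -0.33 and 1.33
```

In practice the result is clipped (or rescaled) back into the valid intensity range for display; the overshoot at edges is exactly what produces the perceived sharpening.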
Elementary functions algorithms and implementation
Muller, Jean-Michel
2016-01-01
This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...
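A classic member of the "shift-and-add" family covered in Part II is the CORDIC rotation algorithm. The sketch below (standard material, not code from the book) computes sine and cosine using only a precomputed angle table, additions and scalings by powers of two:

```python
import math

def cordic_sin_cos(theta, n_iters=32):
    """Shift-and-add CORDIC rotation: returns (cos theta, sin theta)
    for theta in [-pi/2, pi/2]."""
    angles = [math.atan(2.0 ** -i) for i in range(n_iters)]
    K = 1.0
    for i in range(n_iters):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))  # cumulative rotation scaling
    x, y, z = 1.0, 0.0, theta
    for i in range(n_iters):
        d = 1.0 if z >= 0 else -1.0            # rotate towards the target angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * K, y * K

c, s = cordic_sin_cos(0.7)
print(c - math.cos(0.7), s - math.sin(0.7))  # both tiny
```

In fixed-point hardware the multiplications by 2^-i become bit shifts, which is the point of the method.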
Streaming Algorithms for Line Simplification
DEFF Research Database (Denmark)
Abam, Mohammad; de Berg, Mark; Hachenberger, Peter
2010-01-01
this problem in a streaming setting, where we only have a limited amount of storage, so that we cannot store all the points. We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points and compare the error of our simplification to the error of the optimal simplification with k points. We obtain algorithms with O(1) competitive ratio for three cases: convex paths, where the error is measured using the Hausdorff distance (or Fréchet distance); xy-monotone paths, where the error is measured using the Hausdorff distance (or Fréchet distance); and general paths, where the error is measured using the Fréchet distance. In the first case the algorithm needs O(k) additional storage, and in the latter two cases the algorithm needs O(k^2) additional storage.
A Novel Radiation Transport Algorithm for Radiography Simulations
International Nuclear Information System (INIS)
Inanc, Feyzi
2004-01-01
The simulations used in the NDE community are becoming more realistic with the introduction of more physics. In this work, we have developed a new algorithm that is capable of representing photon and charged particle fluxes through spherical harmonic expansions in a manner similar to the well known discrete ordinates method, with the exception that the Boltzmann operator is treated through exact integration rather than conventional Legendre expansions. This approach provides a means to include radiation interactions for higher energy regimes where there are additional physical mechanisms for photons and charged particles
Linear feature detection algorithm for astronomical surveys - I. Algorithm description
Bektešević, Dino; Vinković, Dejan
2017-11-01
Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
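As a hedged sketch of the Hough-transform step at the core of such pipelines (a minimal accumulator on a synthetic image, not the authors' full big-data implementation):

```python
import numpy as np

def hough_lines(binary_img, n_theta=180):
    """Accumulate votes in (rho, theta) space; peaks correspond to lines."""
    h, w = binary_img.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(binary_img)
    for x, y in zip(xs, ys):
        # Each foreground pixel votes for every line passing through it.
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, thetas, diag

# A horizontal line y = 5 should produce a peak near theta = pi/2, rho = 5.
img = np.zeros((20, 20), dtype=bool)
img[5, :] = True
acc, thetas, diag = hough_lines(img)
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
print(rho_idx - diag, thetas[theta_idx])
```

The peak height equals the number of collinear pixels, which is what makes the transform robust to gaps in the line.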
Neutron flux distribution forecasting device of reactor
International Nuclear Information System (INIS)
Uematsu, Hitoshi
1991-01-01
A neutron flux distribution is forecast by using current data obtained from a reactor. That is, the device of the present invention comprises (1) a neutron flux monitor disposed in various positions in the reactor, (2) a forecasting means for calculating and forecasting a one-dimensional neutron flux distribution relative to imaginable events by using data obtained from the neutron flux monitor and physical models, and (3) a display means for displaying the results forecast in the forecasting means to a reactor operation console. Since the forecast values for the one-dimensional neutron flux distribution relative to the imaginable events are calculated in the device of the present invention by using data obtained from the neutron flux monitor and the physical models, the data on which the calculation is based are new and the period for calculating the forecast values can be shortened. Accordingly, although the forecast values may contain some errors, they can be used effectively as reference data. As a result, the reactor can be operated more appropriately. (I.N.)
Direct ecosystem fluxes of volatile organic compounds from oil palms in South-East Asia
Directory of Open Access Journals (Sweden)
P. K. Misztal
2011-09-01
This paper reports the first direct eddy covariance fluxes of reactive biogenic volatile organic compounds (BVOCs) from oil palms to the atmosphere using proton-transfer-reaction mass spectrometry (PTR-MS), measured at a plantation in Malaysian Borneo. At midday, net isoprene flux constituted the largest fraction (84 %) of all emitted BVOCs measured, at up to 30 mg m^{−2} h^{−1} over 12 days. By contrast, the sum of its oxidation products methyl vinyl ketone (MVK) and methacrolein (MACR) exhibited clear deposition of 1 mg m^{−2} h^{−1}, with a small average canopy resistance of 230 s m^{−1}. Approximately 15 % of the resolved BVOC flux from oil palm trees could be attributed to floral emissions, which are thought to be the largest reported biogenic source of estragole and possibly also toluene. Although on average the midday volume mixing ratio of estragole exceeded that of toluene by almost a factor of two, the corresponding fluxes of these two compounds were nearly the same, amounting to 0.81 and 0.76 mg m^{−2} h^{−1}, respectively. By fitting the canopy temperature and PAR response of the MEGAN emissions algorithm for isoprene and other emitted BVOCs, a basal emission rate of isoprene of 7.8 mg m^{−2} h^{−1} was derived. We parameterise fluxes of depositing compounds using a resistance approach based on direct canopy measurements of deposition. Consistent with Karl et al. (2010), we also propose that it is important to include deposition in flux models, especially for secondary oxidation products, in order to improve flux predictions.
Large estragole fluxes from oil palms in Borneo
Directory of Open Access Journals (Sweden)
P. K. Misztal
2010-05-01
During two field campaigns (OP3 and ACES), which ran in Borneo in 2008, we measured large emissions of estragole (methyl chavicol; IUPAC systematic name 1-allyl-4-methoxybenzene; CAS number 140-67-0) in ambient air above oil palm canopies (0.81 mg m^{−2} h^{−1} and 3.2 ppbv for mean midday fluxes and mixing ratios, respectively) and subsequently from flower enclosures. However, we did not detect this compound at a nearby rainforest. Estragole is a known attractant of the African oil palm weevil (Elaeidobius kamerunicus), which pollinates oil palms (Elaeis guineensis). There has been recent interest in the biogenic emissions of estragole, but it is normally not included in atmospheric models of biogenic emissions and atmospheric chemistry despite its relatively high potential for secondary organic aerosol formation from photooxidation and high reactivity with the OH radical. We report the first direct canopy-scale measurements of estragole fluxes from tropical oil palms by the virtual disjunct eddy covariance technique and compare them with previously reported data for estragole emissions from Ponderosa pine. Flowers, rather than leaves, appear to be the main source of estragole from oil palms; we derive a global estimate of estragole emissions from oil palm plantations of ~0.5 Tg y^{−1}. The observed ecosystem mean fluxes (0.44 mg m^{−2} h^{−1}) and mean ambient volume mixing ratios (3.0 ppbv) of estragole are the highest reported so far. The value for midday mixing ratios is not much different from the total average as, unlike other VOCs (e.g. isoprene), the main peak occurred in the evening rather than in the middle of the day. Despite this, we show that the estragole flux can be parameterised using a modified G06 algorithm for emission. However, the model underestimates the afternoon peak even though a similar approach works well for isoprene. Our measurements suggest that this biogenic
Monthly Sea Surface Salinity and Freshwater Flux Monitoring
Ren, L.; Xie, P.; Wu, S.
2017-12-01
Taking advantage of the complementary nature of Sea Surface Salinity (SSS) measurements from in-situ platforms (CTDs, shipboard sampling, Argo floats, etc.) and satellite retrievals from the Soil Moisture Ocean Salinity (SMOS) satellite of the European Space Agency (ESA), Aquarius (a joint venture between the US and Argentina), and the Soil Moisture Active Passive (SMAP) mission of the National Aeronautics and Space Administration (NASA), a technique is developed at NOAA/NCEP/CPC to construct an analysis of monthly SSS, called the NOAA Blended Analysis of Sea-Surface Salinity (BASS). The algorithm is a two-step approach: first, the bias in the satellite data is removed through Probability Density Function (PDF) matching against co-located in-situ measurements; then, the bias-corrected satellite data are combined with the in-situ measurements through the Optimal Interpolation (OI) method. The BASS SSS product is on a 1° by 1° grid over the global ocean for a 7-year period from 2010. Combined with the NOAA/NCEP/CPC CMORPH satellite precipitation (P) estimates and the Climate Forecast System Reanalysis (CFSR) evaporation (E) fields, a monthly package of SSS and oceanic freshwater flux (E and P) was developed to monitor the global oceanic water cycle and SSS on a monthly basis. The BASS product is a long-term SSS and freshwater flux data set with the temporal homogeneity and inter-component consistency needed to examine and monitor long-term changes. It presents complete spatial coverage and improved resolution and accuracy, which facilitates diagnostic analysis of the relationship and co-variability among SSS, freshwater flux, mixed-layer processes, oceanic circulation, and the assimilation of SSS into global models. At the AGU meeting, we will provide more details on the CPC salinity and freshwater flux data package and its applications in the monitoring and analysis of SSS variations in association with ENSO and other major climate
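A minimal sketch of the PDF-matching (quantile-mapping) bias-correction step on synthetic data (the gain/offset bias and sample sizes are illustrative assumptions, not BASS specifics):

```python
import numpy as np

def pdf_match(satellite, in_situ):
    """Bias-correct satellite values by quantile (PDF) matching against
    co-located in-situ measurements: map each satellite value to the
    in-situ value at the same empirical quantile."""
    ranks = np.searchsorted(np.sort(satellite), satellite, side="right")
    quantiles = ranks / len(satellite)
    return np.quantile(in_situ, np.clip(quantiles, 0.0, 1.0))

rng = np.random.default_rng(1)
truth = rng.normal(35.0, 0.5, 5000)   # hypothetical "in-situ" SSS sample, in psu
biased = truth * 1.2 - 6.5            # satellite retrieval with gain + offset bias
corrected = pdf_match(biased, truth)
print(abs(corrected.mean() - truth.mean()), abs(corrected.std() - truth.std()))
```

After matching, the corrected values reproduce the in-situ distribution (mean and spread) rather than the biased satellite one, which is exactly what the first step of the blended analysis requires before optimal interpolation.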
TRANSHEX, 2-D Thermal Neutron Flux Distribution from Epithermal Flux in Hexagonal Geometry
International Nuclear Information System (INIS)
Patrakka, E.
1994-01-01
1 - Description of program or function: TRANSHEX is a multigroup integral transport program that determines the thermal scalar flux distribution arising from a known epithermal flux in two-dimensional hexagonal geometry. 2 - Method of solution: The program solves the isotropic collision probability equations for a region-averaged scalar flux by an iterative method. Either a successive over-relaxation or an inner-outer iteration technique is applied. Flat flux collision probabilities between trigonal space regions with white boundary condition are utilized. The effect of epithermal flux is taken into consideration as a slowing-down source that is calculated for a given spatial distribution and 1/E energy dependence of the epithermal flux
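The successive over-relaxation option mentioned above is a standard linear iteration; a generic sketch (the SOR update on a small test system, not TRANSHEX's collision-probability equations):

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-10, max_iters=10_000):
    """Successive over-relaxation for A x = b (A with nonzero diagonal).
    omega = 1 reduces to Gauss-Seidel; 1 < omega < 2 over-relaxes."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iters):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated entries x[:i] and old entries x_old[i+1:].
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x
    return x

# Diagonally dominant test system.
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
print(sor_solve(A, b))  # agrees with np.linalg.solve(A, b)
```

For symmetric positive definite systems the iteration converges for any 0 < omega < 2; the best omega depends on the spectral properties of the system.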
The Dropout Learning Algorithm
Baldi, Pierre; Sadowski, Peter
2014-01-01
Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful to understand the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between normalized geometric means of logistic functions with the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
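The ensemble-averaging property, that the expectation of a dropout-perturbed logistic unit is close to the logistic of the mean input, can be checked numerically. A hedged sketch with synthetic weights (small weights keep the input variance modest, where the approximation is tight):

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
w = 0.1 * rng.standard_normal(100)  # small illustrative weights into one unit
a = rng.standard_normal(100)        # activities of the 100 input units
p = 0.5                             # retention probability

# Monte Carlo ensemble average over random dropout masks.
masks = (rng.random((20_000, 100)) < p).astype(float)
ensemble_mean = logistic(masks @ (w * a)).mean()

# Deterministic approximation: the logistic of the expected input,
# i.e. scale the inputs by p instead of sampling masks.
approx = logistic(p * np.dot(w, a))
print(ensemble_mean, approx)        # close agreement
```

In a purely linear network the two quantities coincide exactly, which is the starting point of the paper's analysis of the non-linear case.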
Turbulent fluxes by "Conditional Eddy Sampling"
Siebicke, Lukas
2015-04-01
Turbulent flux measurements are key to understanding ecosystem scale energy and matter exchange, including atmospheric trace gases. While the eddy covariance approach has evolved as an invaluable tool to quantify fluxes of e.g. CO2 and H2O continuously, it is limited to very few atmospheric constituents for which sufficiently fast analyzers exist. High instrument cost, lack of field-readiness or high power consumption (e.g. many recent laser-based systems requiring strong vacuum) further impair application to other tracers. Alternative micrometeorological approaches such as conditional sampling might overcome major limitations. Although the idea of eddy accumulation had already been proposed by Desjardin in 1972 (Desjardin, 1977), at the time it could not be realized for trace gases. Major simplifications by Businger and Oncley (1990) led to its widespread application as 'Relaxed Eddy Accumulation' (REA). However, those simplifications (flux gradient similarity with constant flow rate sampling irrespective of vertical wind velocity and introduction of a deadband around zero vertical wind velocity) have degraded eddy accumulation to an indirect method, introducing issues of scalar similarity and often lack of suitable scalar flux proxies. Here we present a real implementation of a true eddy accumulation system according to the original concept. Key to our approach, which we call 'Conditional Eddy Sampling' (CES), is the mathematical formulation of conditional sampling in its true form of a direct eddy flux measurement paired with a performant real implementation. Dedicated hardware controlled by near-real-time software allows full signal recovery at 10 or 20 Hz, very fast valve switching, instant vertical wind velocity proportional flow rate control, virtually no deadband and adaptive power management. Demonstrated system performance often exceeds requirements for flux measurements by orders of magnitude. The system's exceptionally low power consumption is ideal
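The true eddy accumulation idea, sampling into updraft and downdraft reservoirs at a rate proportional to |w| and recovering the flux from the two reservoir concentrations, can be sketched on synthetic data (the wind and scalar statistics below are illustrative assumptions, not measurements):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
w = rng.normal(0.0, 0.4, n)             # vertical wind velocity (m/s)
w -= w.mean()                           # coordinate rotation: mean(w) = 0 in practice
c = 400.0 + 5.0 * rng.standard_normal(n) + 2.0 * w  # scalar correlated with w

# Reference: direct eddy covariance flux.
F_ec = np.mean((w - w.mean()) * (c - c.mean()))

# True eddy accumulation / conditional sampling: air is sampled at a rate
# proportional to |w| into an updraft and a downdraft reservoir; the flux is
# recovered from the accumulated volumes and reservoir mean concentrations.
up, dn = w > 0, w <= 0
V_up, V_dn = np.sum(w[up]), np.sum(-w[dn])  # accumulated sample volumes
c_up = np.sum(w[up] * c[up]) / V_up         # flow-weighted reservoir means
c_dn = np.sum(-w[dn] * c[dn]) / V_dn
F_ea = (V_up * c_up - V_dn * c_dn) / n
print(F_ec, F_ea)                           # the two estimates agree
```

Because no deadband or constant-flow simplification is applied, the accumulation estimate reduces algebraically to the covariance itself; this is the sense in which the method is a direct flux measurement.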
Accuracy, convergence and stability of finite element CFD algorithms
International Nuclear Information System (INIS)
Baker, A.J.; Iannelli, G.S.; Noronha, W.P.
1989-01-01
The requirement for artificial dissipation is well understood for shock-capturing CFD procedures in aerodynamics. However, numerical diffusion is widely utilized across the board in Navier-Stokes CFD algorithms, ranging from incompressible through supersonic flow applications. The Taylor weak statement (TWS) theory is applicable to any conservation law system containing an evolutionary component, wherein the analytical modifications become functionally dependent on the Jacobian of the corresponding equation system flux vector. The TWS algorithm is developed for a range of fluid mechanics conservation law systems including incompressible Navier-Stokes, depth-averaged free surface hydrodynamic Navier-Stokes, and the compressible Euler and Navier-Stokes equations. This paper presents the TWS statement for this range of problem classes and highlights the important theoretical issues of accuracy, convergence and stability. Numerical results for a variety of benchmark problems are presented to document key features. 8 refs
Anisotropic conductivity imaging with MREIT using equipotential projection algorithm
Energy Technology Data Exchange (ETDEWEB)
Degirmenci, Evren [Department of Electrical and Electronics Engineering, Mersin University, Mersin (Turkey); Eyueboglu, B Murat [Department of Electrical and Electronics Engineering, Middle East Technical University, 06531, Ankara (Turkey)
2007-12-21
Magnetic resonance electrical impedance tomography (MREIT) combines magnetic flux or current density measurements obtained by magnetic resonance imaging (MRI) with surface potential measurements to reconstruct images of true conductivity with high spatial resolution. Most biological tissues have anisotropic conductivity; therefore, anisotropy should be taken into account in conductivity image reconstruction. Almost all of the MREIT reconstruction algorithms proposed to date assume an isotropic conductivity distribution. In this study, a novel MREIT image reconstruction algorithm is proposed to image anisotropic conductivity. Relative anisotropic conductivity values are reconstructed iteratively, using only current density measurements without any potential measurements. To obtain true conductivity values, a single potential or conductivity measurement is sufficient to determine a scaling factor. The proposed technique is evaluated on simulated data for isotropic and anisotropic conductivity distributions, with and without measurement noise. Simulation results show that images of both anisotropic and isotropic conductivity distributions can be reconstructed successfully.
A finite element calculation of flux pumping
Campbell, A. M.
2017-12-01
A flux pump is not only a fascinating example of the power of Faraday's concept of flux lines, but also an attractive way of powering superconducting magnets without large electronic power supplies. However, in HTS it is not possible to do this by driving part of the superconductor normal; it must be done by exceeding the local critical current density. The picture of a magnet pulling flux lines through the material is attractive, but since there is no direct contact between flux lines in the magnet and vortices unless the gap between them is comparable to the coherence length, the process must be explicable in terms of classical electromagnetism and a nonlinear V-I characteristic. In this paper a simple 2D model of a flux pump is used to determine the pumping behaviour from first principles and the geometry. It is analysed with finite element software using the A formulation and FlexPDE. A thin magnet is passed across one or more superconductors connected to a load, which is a large rectangular loop. This means that the self and mutual inductances can be calculated explicitly. A wide strip, a narrow strip and two conductors are considered, as well as an analytic circuit model. In all cases the critical state model is used, so the flux flow resistivity and dynamic resistivity are not directly involved, although an effective resistivity appears when Jc is exceeded. In most of the cases considered there is a large gap between theory and experiment. In particular, the maximum flux transferred to the load area is always less than the flux of the magnet. Also, once the threshold needed for pumping is exceeded, the flux in the load saturates within a few cycles. However, the analytic circuit model allows a simple modification to account for the large reduction in Ic when the magnet is over a conductor. This not only changes the direction of the pumped flux but leads to much more effective pumping.
Schafer, Thibaut; Niederhauser, Elena-Lavinia; Magnin, Gabriel; Vuarnoz, Didier
2018-01-01
Standard algorithms for a building's energy strategy often use electricity and its tariff as the sole selection criterion. This paper introduces an algorithmic regulation using the global warming potential (GWP) of energy fluxes to select which installation will satisfy the building energy demand (BED). In the frame of the Correlation Carbon project conducted by the Smart Living Lab (SLL), a research center dedicated to the building of the future, this paper presents the algorithm behind the design, t...
A technical basis for the flux corrected local conditions critical heat flux correlation
International Nuclear Information System (INIS)
Luxat, J.C.
2008-01-01
The so-called 'flux-corrected' local conditions CHF correlation was developed at Ontario Hydro in the 1980s and was demonstrated to successfully correlate the Onset of Intermittent Dryout (OID) CHF data for 37-element fuel with a downstream-skewed axial heat flux distribution. However, because the heat flux correction factor appeared to be an ad hoc, albeit successful, modifying factor in the correlation, there was reluctance to accept the correlation more generally. This paper presents a thermalhydraulic basis, derived from two-phase flow considerations, that supports the appropriateness of the heat flux correction as a local-effects modifying factor. (author)
Improved autonomous star identification algorithm
International Nuclear Information System (INIS)
Luo Li-Yan; Xu Lu-Ping; Zhang Hua; Sun Jing-Rong
2015-01-01
The log–polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms using the LPT. In the proposed algorithm, the star pattern of a given navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, the algorithm is designed to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm effectively accelerates star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. (paper)
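The rotation invariance motivating this feature vector can be checked numerically: plane distances from a navigation star to its neighbors, and hence their logarithms, are unchanged when the whole image rotates. A minimal sketch with hypothetical star coordinates (not the paper's catalog or matching procedure):

```python
import numpy as np

def feature_vector(star, neighbors):
    """Sorted log-distances from a navigation star to its neighbor stars.

    Hypothetical sketch of a rotation-invariant star pattern: inter-star
    plane distances do not change when the stellar image rotates, so
    their logarithms form a stable feature vector.
    """
    d = np.linalg.norm(neighbors - star, axis=1)
    return np.sort(np.log(d))  # sorting removes ordering ambiguity

rng = np.random.default_rng(0)
star = rng.uniform(-1, 1, 2)            # navigation star (image plane)
neighbors = rng.uniform(-1, 1, (6, 2))  # its neighbor stars

theta = 0.7  # arbitrary image rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

f0 = feature_vector(star, neighbors)
f1 = feature_vector(star @ R.T, neighbors @ R.T)
assert np.allclose(f0, f1)  # star pattern unchanged under rotation
```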
Portable Health Algorithms Test System
Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.
2010-01-01
A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test-data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test-rig data with the ability to augment/modify the data stream (e.g. inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test-data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
Quantum algorithm for linear regression
Wang, Guoming
2017-07-01
We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Unlike previous algorithms, which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in classical form. So by running it once, one completely determines the fitted model and can then use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model and can handle data sets with nonsparse design matrices. It runs in time poly(log2(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary; thus, our algorithm cannot be significantly improved. Furthermore, we give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding the fit, and can be used to check whether the given data set qualifies for linear regression in the first place.
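For orientation, the classical least-squares fit that the quantum algorithm accelerates can be computed directly; the sketch below (synthetic data, with assumed dimensions) exhibits the quantities N, d, and κ that enter the stated running time:

```python
import numpy as np

# Classical least-squares baseline for the problem the quantum algorithm
# addresses: given an N x d design matrix X and targets y, output the d
# fitted parameters in classical form. The classical cost grows
# polynomially in N, whereas the quantum algorithm's dependence on N is
# only poly-logarithmic.
rng = np.random.default_rng(1)
N, d = 200, 3
X = rng.normal(size=(N, d))                  # design matrix
beta_true = np.array([2.0, -1.0, 0.5])       # ground-truth parameters
y = X @ beta_true + 0.01 * rng.normal(size=N)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # fitted parameters
kappa = np.linalg.cond(X)   # the condition number appearing in the bound
print(np.round(beta, 2), round(kappa, 1))
```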
MAGNETIC FLUX EXPULSION IN STAR FORMATION
International Nuclear Information System (INIS)
Zhao Bo; Li Zhiyun; Nakamura, Fumitaka; Krasnopolsky, Ruben; Shang, Hsien
2011-01-01
Stars form in dense cores of magnetized molecular clouds. If the magnetic flux threading the cores is dragged into the stars, the stellar field would be orders of magnitude stronger than observed. This well-known 'magnetic flux problem' demands that most of the core magnetic flux be decoupled from the matter that enters the star. We carry out the first exploration of what happens to the decoupled magnetic flux in three dimensions, using a magnetohydrodynamic (MHD) version of the ENZO adaptive mesh refinement code. The field-matter decoupling is achieved through a sink particle treatment, which is needed to follow the protostellar accretion phase of star formation. We find that the accumulation of the decoupled flux near the accreting protostar leads to a magnetic pressure buildup. The high pressure is released anisotropically along the path of least resistance. It drives a low-density expanding region in which the decoupled magnetic flux is expelled. This decoupling-enabled magnetic structure has never been seen before in three-dimensional MHD simulations of star formation. It generates a strong asymmetry in the protostellar accretion flow, potentially giving a kick to the star. In the presence of an initial core rotation, the structure presents an obstacle to the formation of a rotationally supported disk, in addition to magnetic braking, by acting as a rigid magnetic wall that prevents the rotating gas from completing a full orbit around the central object. We conclude that the decoupled magnetic flux from the stellar matter can strongly affect the protostellar collapse dynamics.
A model for heliospheric flux-ropes
Nieves-Chinchilla, T.; Linton, M.; Vourlidas, A.; Hidalgo, M. A. U.
2017-12-01
This work presents an analytical flux-rope model, which explores different levels of complexity starting from a circular-cylindrical geometry. The framework for this series of models was established by Nieves-Chinchilla et al. (2016) with the circular-cylindrical analytical flux-rope model. The model attempts to describe the magnetic flux-rope topology with a distorted cross-section, a possible consequence of the interaction with the solar wind. In this model, the flux rope is completely described in a non-orthogonal geometry. The Maxwell equations are solved using tensor calculus consistent with the chosen geometry, invariance along the axial direction, and the assumption of no radial current density. The model is generalized in terms of the radial and azimuthal dependence of the poloidal and axial current density components. The misalignment between current density and magnetic field is studied in detail for several example profiles of the axial and poloidal current density components. This theoretical analysis provides a map of the force distribution inside the flux rope. For reconstruction of heliospheric flux ropes, the circular-cylindrical reconstruction technique has been adapted to the new geometry, applied to in situ ICMEs with an entrained flux rope, and tested on cases with clear in situ signatures of distortion. The model adds a piece to the puzzle of the physical-analytical representation of these magnetic structures, to be evaluated with the ultimate goal of reconciling in situ reconstructions with 3D remote-sensing CME reconstructions. Other effects such as axial curvature and/or expansion could be incorporated in the future to fully understand the magnetic structure.
Flux Cancellation Leading to CME Filament Eruptions
Popescu, Roxana M.; Panesar, Navdeep K.; Sterling, Alphonse C.; Moore, Ronald L.
2016-01-01
Solar filaments are strands of relatively cool, dense plasma magnetically suspended in the lower density, hotter solar corona. They trace magnetic polarity inversion lines (PILs) in the photosphere below, and are supported against gravity at heights of up to approx. 100 Mm above the chromosphere by the magnetic field in and around them. This field erupts when it is rendered unstable, often by magnetic flux cancellation or emergence at or near the PIL. We have studied the evolution of photospheric magnetic flux leading to ten observed filament eruptions. Specifically, we look for gradual magnetic changes in the neighborhood of the PIL prior to and during eruption. We use Extreme Ultraviolet (EUV) images from the Atmospheric Imaging Assembly (AIA), and magnetograms from the Helioseismic and Magnetic Imager (HMI), both on board the Solar Dynamics Observatory (SDO), to study filament eruptions and their photospheric magnetic fields. We examine whether flux cancellation and/or emergence leads to filament eruptions. We find that continuous flux cancellation was present at the PIL for many hours prior to each eruption. We present two CME-producing eruptions in detail and find the following: (a) the pre-eruption filament-holding core field is highly sheared and appears in the shape of a sigmoid above the PIL; (b) at the start of the eruption the opposite arms of the sigmoid reconnect in the middle, above the site of (tether-cutting) flux cancellation at the PIL; (c) the filaments first show a slow rise, followed by a fast rise as they erupt. We conclude that these two filament eruptions result from flux cancellation in the middle of the sheared field, and thereafter evolve in agreement with the standard model for a CME/flare filament eruption from a closed bipolar magnetic field [flux cancellation (van Ballegooijen and Martens 1989; Moore and Roumeliotis 1992) and runaway tether-cutting (Moore et al. 2001)].
Worm Algorithm for CP(N-1) Model
Rindlisbacher, Tobias
2017-01-01
The CP(N-1) model in 2D is an interesting toy model for 4D QCD as it possesses confinement, asymptotic freedom and a non-trivial vacuum structure. Due to the lower dimensionality and the absence of fermions, the computational cost of simulating 2D CP(N-1) on the lattice is much lower than that of simulating 4D QCD. However, to our knowledge, no efficient simulation algorithm for the lattice CP(N-1) model that also works at finite density has been tested so far. To this end we propose a new type of worm algorithm appropriate for simulating the lattice CP(N-1) model in a dual, flux-variable-based representation, in which the introduction of a chemical potential does not give rise to any complications. In addition to the usual worm moves, where a defect is simply moved from one lattice site to the next, our algorithm also allows worm-type moves in the internal variable space of single links, which accelerates the Monte Carlo evolution. We use our algorithm to compare the two popular CP(N-1) l...
Inverse Monte Carlo: a unified reconstruction algorithm for SPECT
International Nuclear Information System (INIS)
Floyd, C.E.; Coleman, R.E.; Jaszczak, R.J.
1985-01-01
Inverse Monte Carlo (IMOC) is presented as a unified reconstruction algorithm for Emission Computed Tomography (ECT), providing simultaneous compensation for scatter, attenuation, and the variation of collimator resolution with depth. The technique of inverse Monte Carlo is used to find an inverse solution to the photon transport equation (an integral equation for photon flux from a specified source) for a parameterized source and specific boundary conditions. The system of linear equations so formed is solved to yield the source activity distribution for a set of acquired projections. For the studies presented here, the equations are solved using the EM (Maximum Likelihood) algorithm, although other solution algorithms, such as Least Squares, could be employed. While the present results specifically consider the reconstruction of camera-based Single Photon Emission Computed Tomographic (SPECT) images, the technique is equally valid for Positron Emission Tomography (PET) if a Monte Carlo model of such a system is used. As a preliminary evaluation, experimentally acquired SPECT phantom studies for imaging Tc-99m (140 keV) are presented which demonstrate the quantitative compensation for scatter and attenuation for a two-dimensional (single-slice) reconstruction. The algorithm may be expanded in a straightforward manner to full three-dimensional reconstruction, including compensation for out-of-plane scatter
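The EM (maximum-likelihood) solution step can be sketched as the standard multiplicative MLEM update applied to a linear system p = A f relating projections p to source activity f. Here A is a small random stand-in, since the actual system matrix would come from the Monte Carlo model of the camera (including scatter, attenuation, and depth-dependent resolution):

```python
import numpy as np

# Minimal MLEM (EM / maximum-likelihood) iteration for a linear emission
# system p = A f. The multiplicative update keeps the activity estimate
# nonnegative and drives the forward projection A f toward the measured
# projections p.
rng = np.random.default_rng(2)
A = rng.uniform(0.0, 1.0, size=(8, 4))   # projection bins x source voxels
f_true = np.array([1.0, 3.0, 0.5, 2.0])  # source activity distribution
p = A @ f_true                           # noise-free projections

f = np.ones(4)                           # nonnegative initial estimate
sens = A.sum(axis=0)                     # sensitivity (column sums)
for _ in range(2000):
    f *= (A.T @ (p / (A @ f))) / sens    # multiplicative MLEM update
```

With noisy projections the iteration would be stopped early or regularized; this sketch only shows the update rule itself.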
Array architectures for iterative algorithms
Jagadish, Hosagrahar V.; Rao, Sailesh K.; Kailath, Thomas
1987-01-01
Regular mesh-connected arrays are shown to be isomorphic to a class of so-called regular iterative algorithms. For a wide variety of problems it is shown how to obtain appropriate iterative algorithms and then how to translate these algorithms into arrays in a systematic fashion. Several 'systolic' arrays presented in the literature are shown to be specific cases of the variety of architectures that can be derived by the techniques presented here. These include arrays for Fourier Transform, Matrix Multiplication, and Sorting.
An investigation of genetic algorithms
International Nuclear Information System (INIS)
Douglas, S.R.
1995-04-01
Genetic algorithms mimic biological evolution by natural selection in their search for better individuals within a changing population. They can be used as efficient optimizers. This report discusses the developing field of genetic algorithms. It gives a simple example of the search process and introduces the concept of schema. It also discusses modifications to the basic genetic algorithm that result in species and niche formation, in machine learning and artificial evolution of computer programs, and in the streamlining of human-computer interaction. (author). 3 refs., 1 tab., 2 figs
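The search process such a report illustrates can be sketched with a minimal genetic algorithm on the standard "OneMax" toy objective (maximize the number of 1-bits in a bit string); the population size, horizon, and mutation rate below are arbitrary illustrative choices:

```python
import random

# Minimal genetic algorithm: a population of bit strings evolves by
# tournament selection, one-point crossover, and point mutation toward
# fitter individuals. Fitness is the count of 1-bits ("OneMax").
random.seed(0)
L, POP, GENS = 20, 30, 60

def fitness(ind):
    return sum(ind)  # number of 1-bits

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
for _ in range(GENS):
    def pick():  # binary tournament selection
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b
    children = []
    for _ in range(POP):
        p1, p2 = pick(), pick()
        cut = random.randrange(1, L)      # one-point crossover
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.2:         # occasional point mutation
            i = random.randrange(L)
            child[i] ^= 1
        children.append(child)
    pop = children

best = max(pop, key=fitness)
print(fitness(best))  # typically at or near the optimum of 20
```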
Instance-specific algorithm configuration
Malitsky, Yuri
2014-01-01
This book presents a modular and expandable technique in the rapidly emerging research area of automatic configuration and selection of the best algorithm for the instance at hand. The author presents the basic model behind ISAC and then details a number of modifications and practical applications. In particular, he addresses automated feature generation, offline algorithm configuration for portfolio generation, algorithm selection, adaptive solvers, online tuning, and parallelization. The author's related thesis was honorably mentioned (runner-up) for the ACP Dissertation Award in 2014.
Subcubic Control Flow Analysis Algorithms
DEFF Research Database (Denmark)
Midtgaard, Jan; Van Horn, David
We give the first direct subcubic algorithm for performing control flow analysis of higher-order functional programs. Despite the long-held belief that inclusion-based flow analysis could not surpass the 'cubic bottleneck,' we apply known set compression techniques to obtain an algorithm that runs in time O(n^3/log n) on a unit-cost random-access memory model machine. Moreover, we refine the initial flow analysis into two more precise analyses incorporating notions of reachability. We give subcubic algorithms for these more precise analyses and relate them to an existing analysis from...
Quantum Computations: Fundamentals and Algorithms
International Nuclear Information System (INIS)
Duplij, S.A.; Shapoval, I.I.
2007-01-01
Basic concepts of quantum information theory, the principles of quantum computation, and the possibility of building on this basis a device unique in computational power and operating principle, the quantum computer, are considered. The main building blocks of quantum logic and schemes for implementing quantum computations are presented, along with some known efficient quantum algorithms intended to realize the advantages of quantum computation over classical computation. Among them, a special place is taken by Shor's algorithm for number factorization and Grover's algorithm for unsorted database search. The phenomenon of decoherence, its influence on the stability of a quantum computer, and methods of quantum error correction are described
International Nuclear Information System (INIS)
Li, Yanhao; Wang, Guangjun; Chen, Hong
2015-01-01
Predictive control theory is utilized for the simultaneous estimation of the heat fluxes through the upper, side and lower surfaces of a steel slab in a walking-beam rolling-steel reheating furnace. An inverse algorithm based on dynamic matrix control (DMC) is established: each surface heat flux of the slab is simultaneously estimated through rolling optimization on the basis of temperature measurements at selected points of its interior, using the step-response function as the predictive model of the slab's temperature. The reliability of the DMC results is enhanced without assuming specific functional forms for the heat fluxes over future time. The inverse algorithm applies separate regularization to each surface, which effectively improves the stability of the estimated results given the marked differences in strength between the upper, lower and side surface heat fluxes of the slab. - Highlights: • Predictive control theory is adopted. • An inversion scheme based on DMC is established. • Upper, side and lower surface heat fluxes of the slab are estimated based on DMC. • Separate regularization is proposed to improve the stability of the results
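The rolling-optimization step behind a DMC-style inverse algorithm can be sketched as a regularized least-squares fit of heat-flux increments to interior temperature readings through a step-response (dynamic) matrix. The response curve, dimensions, and regularization weight below are invented for illustration and are not taken from the paper:

```python
import numpy as np

# DMC-style estimation sketch: temperature at an interior sensor responds
# to heat-flux "moves" (increments) through a step-response model s. The
# dynamic matrix S stacks shifted copies of s, and the flux increments
# over the control horizon are found by Tikhonov-regularized least squares
# against the measured temperatures.
rng = np.random.default_rng(3)
P, M = 10, 4                                    # prediction / control horizons
s = 1.0 - np.exp(-0.4 * np.arange(1, P + 1))    # assumed step response

S = np.zeros((P, M))                            # dynamic matrix: S[i, j] = s[i-j]
for j in range(M):
    S[j:, j] = s[: P - j]

dq_true = np.array([5.0, 0.0, -2.0, 1.0])       # true flux increments
T_meas = S @ dq_true + 0.01 * rng.normal(size=P)  # noisy sensor readings

lam = 1e-3                                      # regularization weight
dq = np.linalg.solve(S.T @ S + lam * np.eye(M), S.T @ T_meas)
print(np.round(dq, 1))
```

The weight lam trades estimation stability against bias, echoing the paper's point that regularization is needed to stabilize the inverse solution.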
Measuring Convective Mass Fluxes Over Tropical Oceans
Raymond, David
2017-04-01
Deep convection forms the upward branches of all large-scale circulations in the tropics. Understanding what controls the form and intensity of vertical convective mass fluxes is thus key to understanding tropical weather and climate. These mass fluxes and the corresponding conditions supporting them have been measured by recent field programs (TPARC/TCS08, PREDICT, HS3) in tropical disturbances considered to be possible tropical storm precursors. In reality, this encompasses most strong convection in the tropics. The measurements were made with arrays of dropsondes deployed from high altitude. In some cases Doppler radar provided additional measurements. The results are in some ways surprising. Three factors were found to control the mass flux profiles: the strength of the total surface heat fluxes, the column-integrated relative humidity, and the low- to mid-tropospheric moist convective instability. The first two act as expected, with larger heat fluxes and higher humidity producing more precipitation and stronger lower-tropospheric mass fluxes. However, unexpectedly, smaller (but still positive) convective instability produces more precipitation as well as more bottom-heavy convective mass flux profiles. Furthermore, the column humidity and the convective instability are anti-correlated, at least in the presence of strong convection. On spatial scales of a few hundred kilometers, the virtual temperature structure appears to be in dynamic balance with the pattern of potential vorticity. Since potential vorticity typically evolves on longer time scales than convection, the potential vorticity pattern plus the surface heat fluxes then become the immediate controlling factors for average convective properties. All measurements so far have taken place in regions with relatively flat sea surface temperature (SST) distributions. We are currently seeking funding for a measurement program in the tropical east Pacific, a region that exhibits strong SST gradients and
Modelling of Power Fluxes during Thermal Quenches
International Nuclear Information System (INIS)
Konz, C.; Coster, D. P.; Lackner, K.; Pautasso, G.
2005-01-01
Plasma disruptions, i.e. the sudden loss of magnetic confinement, are unavoidable, at least occasionally, in present-day and future tokamaks. The expected energy fluxes to the plasma-facing components (PFCs) during disruptions in ITER lie in the range of tens of GW/m² on timescales of about a millisecond. Since high energy fluxes can cause severe damage to the PFCs, their design heavily depends on the spatial and temporal distribution of the energy fluxes during disruptions. We investigate the nature of power fluxes during the thermal quench phase of disruptions by means of numerical simulations with the B2 SOLPS fluid code. Based on an ASDEX Upgrade shot, steady-state pre-disruption equilibria are generated which are then subjected to a simulated thermal quench by artificially enhancing the perpendicular transport in the ion and electron channels. The enhanced transport coefficients follow the Rechester and Rosenbluth model (1978) for ergodic transport in a tokamak with destroyed flux surfaces, i.e. χ, D ∼ const × T^(5/2), where the constants differ by the square root of the mass ratio for ions and electrons. By varying the steady-state neutral puffing rate we can modify the divertor conditions in terms of plasma temperature and density. Our numerical findings indicate that the disruption characteristics depend on the pre-disruptive divertor conditions. We study the timescales and the spatial distribution of the divertor power fluxes. The simulated disruptions show rise and decay timescales in the range observed at ASDEX Upgrade. The decay timescale for the central electron temperature of ∼800 μs is typical for non-ITB disruptions. Varying the divertor conditions, we find a distinct transition from a regime with symmetric power fluxes to the inboard and outboard divertors to a regime where the bulk of the power flux goes to the outboard divertor. This asymmetry in the divertor peak fluxes in the higher-puffing case is accompanied by a time delay between the
Turbulent Fogwater Flux Measurements Above A Forest
Burkard, R.; Eugster, W.; Buetzberger, P.; Siegwolf, R.
Many forest ecosystems in elevated regions receive a significant fraction of their water and nutrient input by the interception of fogwater. Recently, several studies have demonstrated the suitability of the eddy covariance technique for the direct measurement of turbulent liquid water fluxes. Since summer 2001, fogwater flux measurement equipment has been running at a montane site above a mixed forest canopy in Switzerland. The measurement equipment consists of a high-speed size-resolving droplet spectrometer and a three-dimensional ultrasonic anemometer. The chemical composition of the fogwater was determined from samples collected with a modified Caltech active strand collector. The deposition of nutrients by fog (occult deposition) was calculated by multiplying the total fogwater flux (the total of the measured turbulent and calculated gravitational flux) during each fog event by the ionic concentrations found in the collected fogwater. Several uncertainties still exist as far as the accuracy of the measurements is concerned. Although there is no universal statistical approach for testing the quality of the liquid water flux data directly, results of independent data quality checks of the two time series involved in the flux computation, and accordingly of the two instruments (ultrasonic anemometer and droplet spectrometer), are presented. Within the measurement period, over 80 fog events with a duration longer than 2.5 hours were analyzed. An enormous physical and chemical heterogeneity among these fog events was found. We assume that some of this heterogeneity is due to the fact that fog or cloud droplets are not conservative entities: the turbulent flux of fog droplets, which can be referred to as the liquid water flux, is affected by phase-change processes and coagulation. The measured coexistence of upward fluxes of small fog droplets (diameter < 10 µm) with the downward transport of larger droplets indicates the influence of such processes. With the
Turbulent transport across invariant canonical flux surfaces
International Nuclear Information System (INIS)
Hollenberg, J.B.; Callen, J.D.
1994-07-01
Net transport due to a combination of Coulomb collisions and turbulence effects in a plasma is investigated using a fluid moment description that allows for kinetic and nonlinear effects via closure relations. The model considered allows for ''ideal'' turbulent fluctuations that distort but preserve the topology of species-dependent canonical flux surfaces ψ_{#,s} ≡ ∫ dS · B_{#,s}, where B_{#,s} ≡ ∇ × [A + (m_s/q_s)u_s] and u_s is the flow velocity of fluid species s. Equations for the net transport relative to these surfaces due to ''nonideal'' dissipative processes are found for the total number of particles and total entropy enclosed by a moving canonical flux surface. The corresponding particle transport flux is calculated using a toroidal axisymmetry approximation of the ideal surfaces. The resulting Lagrangian transport flux includes classical, neoclassical-like, and anomalous contributions and shows for the first time how these various contributions should be summed to obtain the total particle transport flux
Neutron flux enhancement in the NRAD reactor
International Nuclear Information System (INIS)
Weeks, A.A.; Heidel, C.C.; Imel, G.R.
1988-01-01
In 1987 a series of experiments were conducted at the NRAD reactor facility at Argonne National Laboratory - West (ANL-W) to investigate the possibility of increasing the thermal neutron content at the end of the reactor's east beam tube through the use of hydrogenous flux traps. It was desired to increase the thermal flux for a series of experiments to be performed in the east radiography cell, in which the enhanced flux was required in a relatively small volume. Hence, it was feasible to attempt to focus the cross section of the beam to a smaller area. Two flux traps were constructed from unborated polypropylene and tested to determine their effectiveness. Both traps were open to the entire cross-sectional area of the neutron beam (as it emerges from the wall and enters the beam room). The sides then converged such that at the end of the trap the beam would be 'focused' to a greater intensity. The differences in the two flux traps were primarily in length, and hence angle to the beam as the inlet and outlet cross-sectional areas were held constant. It should be noted that merely placing a slab of polypropylene in the beam will not yield significant multiplication as neutrons are primarily scattered away
Neutron flux measurement utilizing Campbell technique
International Nuclear Information System (INIS)
Kropik, M.
2000-01-01
Application of the Campbell technique to neutron flux measurement is described in this contribution. The technique utilizes the AC component (noise) of a neutron chamber signal rather than the usually used DC component. The Campbell theorem, originally derived to describe the noise behaviour of valves, shows that the mean square of the AC component of the chamber signal is proportional to the neutron flux (reactor power). This quadratic dependence of the reactor power on the root-mean-square value usually permits covering the whole power range of the neutron flux measurement with a single channel. A further advantage of the Campbell technique is that the large pulses of the response to neutrons are favoured over the small pulses of the response to gamma rays in the ratio of their mean-square charge transfer; thus, the Campbell technique provides excellent gamma-ray discrimination over the operating current range of a neutron chamber. A neutron flux measurement channel using state-of-the-art components was designed and put into operation. Its linearity, accuracy, dynamic range, time response and gamma discrimination were tested on the VR-1 nuclear reactor in Prague, and its behaviour under high neutron flux (accident conditions) was tested on the TRIGA nuclear reactor in Vienna. (author)
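The Campbell theorem's key scaling — the variance of the chamber noise is proportional to the neutron flux — is easy to illustrate numerically with an idealized Poisson pulse train (chamber and electronics details are abstracted away; this is not the measurement channel described above):

```python
import numpy as np

# Numerical illustration of Campbell's theorem: for a detector signal
# built from randomly arriving pulses (Poisson rate r), the variance of
# the AC component is proportional to r. Doubling the neutron flux should
# therefore double the mean-square fluctuation. Large pulses dominate the
# variance in proportion to q^2, which is why gamma pulses are suppressed.
rng = np.random.default_rng(4)

def mean_square_ac(rate, n=200_000, q=1.0):
    counts = rng.poisson(rate, size=n)   # pulses per sample interval
    signal = q * counts                  # idealized uniform pulse charge q
    return np.var(signal)                # variance of the AC component

v1 = mean_square_ac(10.0)
v2 = mean_square_ac(20.0)
print(round(v2 / v1, 2))  # close to 2: variance scales with flux
```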
CO2 flux from Javanese mud volcanism.
Queißer, M; Burton, M R; Arzilli, F; Chiarugi, A; Marliyani, G I; Anggara, F; Harijoko, A
2017-06-01
Studying the quantity and origin of CO2 emitted by back-arc mud volcanoes is critical to correctly model fluid-dynamical, thermodynamical, and geochemical processes that drive their activity and to constrain their role in the global geochemical carbon cycle. We measured CO2 fluxes of the Bledug Kuwu mud volcano on the Kendeng Fold and thrust belt in the back arc of Central Java, Indonesia, using scanning remote sensing absorption spectroscopy. The data show that the expelled gas is rich in CO2, with a volume fraction of at least 16 vol%. A lower-limit CO2 flux of 1.4 kg s^-1 (117 t d^-1) was determined, in line with the CO2 flux from the Javanese mud volcano LUSI. Extrapolating these results to mud volcanism from the whole of Java suggests an order-of-magnitude total CO2 flux of 3 kt d^-1, comparable with the expected back-arc efflux of magmatic CO2. After discussing geochemical, geological, and geophysical evidence we conclude that the source of CO2 observed at Bledug Kuwu is likely a mixture of thermogenic, biogenic, and magmatic CO2, with faulting controlling potential pathways for magmatic fluids. This study further demonstrates the merit of man-portable active remote sensing instruments for probing natural gas releases, enabling bottom-up quantification of CO2 fluxes.
The Flux Database Concerted Action (invited paper)
International Nuclear Information System (INIS)
Mitchell, N.G.; Donnelly, C.E.
2000-01-01
The background to the IUR action on the development of a flux database for radionuclide transfer in soil-plant systems is summarised. The action is discussed in terms of the objectives, the deliverables and the progress achieved by the flux database working group. The paper describes the background to the current initiative, outlines specific features of the database and supporting documentation, and presents findings from the working group's activities. The aim of the IUR flux database working group is to bring together researchers to collate data from current experimental studies investigating aspects of radionuclide transfer in soil-plant systems. The database will incorporate parameters describing the time-dependent transfer of radionuclides between soil, plant and animal compartments. Work under the EC Concerted Action considers soil-plant interactions. This initiative has become known as the radionuclide flux database. It is emphasised that the word flux is used in this case simply to indicate the flow of radionuclides between compartments in time. (author)
Planar graphs theory and algorithms
Nishizeki, T
1988-01-01
Collected in this volume are most of the important theorems and algorithms currently known for planar graphs, together with constructive proofs for the theorems. Many of the algorithms are written in Pidgin PASCAL, and are the best known; the complexities are linear or O(n log n). The first two chapters provide the foundations of graph-theoretic notions and algorithmic techniques. The remaining chapters discuss the topics of planarity testing, embedding, drawing, vertex- or edge-coloring, maximum independent set, subgraph listing, planar separator theorem, Hamiltonian cycles, and single- or multicommodity flows. Suitable for a course on algorithms, graph theory, or planar graphs, the volume will also be useful for computer scientists and graph theorists at the research level. An extensive reference section is included.
Optimally stopped variational quantum algorithms
Vinci, Walter; Shabani, Alireza
2018-04-01
Quantum processors promise a paradigm shift in high-performance computing which needs to be assessed by accurate benchmarking measures. In this article, we introduce a benchmark for the variational quantum algorithm (VQA), recently proposed as a heuristic algorithm for small-scale quantum processors. In VQA, a classical optimization algorithm guides the processor's quantum dynamics to yield the best solution for a given problem. A complete assessment of the scalability and competitiveness of VQA should take into account both the quality and the time of dynamics optimization. The method of optimal stopping, employed here, provides such an assessment by explicitly including time as a cost factor. Here, we showcase this measure for benchmarking VQA as a solver for quadratic unconstrained binary optimization (QUBO) problems. Moreover, we show that a better choice for the cost function of the classical routine can significantly improve the performance of the VQA and even improve its scaling properties.
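For context, the problem class the benchmark targets, quadratic unconstrained binary optimization, can be solved exactly by brute force on tiny instances. The sketch below is purely illustrative of the cost landscape a VQA explores and is unrelated to the paper's actual solver:

```python
import itertools
import numpy as np

def qubo_min(Q):
    """Brute-force minimiser of the QUBO cost x^T Q x over binary vectors x.

    Exponential in the number of variables, so useful only as a reference
    oracle for very small instances when assessing a heuristic solver.
    """
    n = Q.shape[0]
    best_x, best_c = None, np.inf
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits)
        c = x @ Q @ x  # quadratic binary cost
        if c < best_c:
            best_x, best_c = x, c
    return best_x, best_c
```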
Fluid-structure-coupling algorithm
International Nuclear Information System (INIS)
McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.
1980-01-01
A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid, structure, and coupling algorithms have been verified by the calculation of solved problems from the literature and of air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed here have been extended to three dimensions and implemented in the computer code PELE-3D.
Recursive Algorithm For Linear Regression
Varanasi, S. V.
1988-01-01
Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations and facilitates search for minimum order of linear-regression model that fits set of data satisfactorily.
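The minimum-order search can be sketched as follows. This naive version refits from scratch at each order rather than using the recursive coefficient updates the abstract describes, and the tolerance is an assumed stand-in for the "satisfactory fit" criterion:

```python
import numpy as np

def minimum_order_fit(x, y, tol=1e-6, max_order=10):
    """Return the lowest polynomial order whose least-squares fit has an
    RMS residual below `tol`, together with the fitted coefficients
    (highest power first, as np.polyfit returns them).

    `tol` and `max_order` are illustrative; the original algorithm would
    instead update coefficients recursively as the order increases.
    """
    for order in range(max_order + 1):
        coeffs = np.polyfit(x, y, order)          # refit at this order
        resid = y - np.polyval(coeffs, x)
        if np.sqrt(np.mean(resid ** 2)) < tol:    # satisfactory fit?
            return order, coeffs
    return max_order, coeffs
```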
A quantum causal discovery algorithm
Giarmatzi, Christina; Costa, Fabio
2018-03-01
Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.
Multiagent scheduling models and algorithms
Agnetis, Alessandro; Gawiejnowicz, Stanisław; Pacciarelli, Dario; Soukhal, Ameur
2014-01-01
This book presents multi-agent scheduling models in which subsets of jobs sharing the same resources are evaluated by different criteria. It discusses complexity results, approximation schemes, heuristics and exact algorithms.
Aggregation Algorithms in Heterogeneous Tables
Directory of Open Access Journals (Sweden)
Titus Felix FURTUNA
2006-01-01
Full Text Available Heterogeneous tables arise frequently in aggregation problems. One solution to this problem is to standardize these tables of figures. In this paper, we propose some methods of aggregation based on hierarchical algorithms.
Designing algorithms using CAD technologies
Directory of Open Access Journals (Sweden)
Alin IORDACHE
2008-01-01
Full Text Available A representative example of a modular eLearning-platform application, 'Logical diagrams', is intended to be a useful learning and testing tool for the beginner programmer, but also for the more experienced one. The problem this application tries to solve concerns young programmers who forget the fundamentals of the domain, algorithmics. Logical diagrams are a graphic representation of an algorithm, which uses different geometrical figures (parallelograms, rectangles, rhombuses, circles) with particular meanings, called blocks, that are connected to one another to reveal the flow of the algorithm. The role of this application is to help the user build the diagram for the algorithm and then automatically generate the C code and test it.
International Nuclear Information System (INIS)
Feng, W.J.; Gao, S.W.
2014-01-01
Highlights: • Magnetoelastic problem for a superconducting cylinder with a hole is investigated. • The effects of both flux creep and viscous flux flow on stresses are analyzed. • For the FC case, the maximal hoop tensile stress always occurs at the hole edge. • For the ZFC case, the maximal hoop stress is not certain to occur at the hole edge. - Abstract: The magnetoelastic problem for a superconducting cylinder with a concentric hole placed in a magnetic field is investigated, with flux creep and viscous flux flow taken into account. The stress distributions are derived and numerically calculated for the descending field in both the zero-field-cooling (ZFC) and field-cooling (FC) processes. The effects of the applied magnetic field, flux creep and viscous flux flow on the maximal radial and hoop stresses are discussed in detail, and some novel phenomena are found. Among others, for the FC case the maximal hoop tensile stress always occurs at the hole edge, whilst for the ZFC case the maximal stresses, including both hoop and radial stresses, either occur in the vicinity of the hole or at the position of the flux front in the remagnetization process. For the descending field, in general, both the flux creep and viscosity parameters have important effects on the maximal radial and hoop stresses. All these phenomena are perhaps of vital importance for the application of superconductors.
Fast Flux Watch: A mechanism for online detection of fast flux networks
Directory of Open Access Journals (Sweden)
Basheer N. Al-Duwairi
2014-07-01
Full Text Available Fast flux networks represent a special type of botnet used to provide highly available web services to a backend server, which usually hosts malicious content. Detection of fast flux networks continues to be a challenging issue because of the similar behavior between these networks and other legitimate infrastructures, such as CDNs and server farms. This paper proposes Fast Flux Watch (FF-Watch), a mechanism for online detection of fast flux agents. FF-Watch is envisioned to exist as a software agent at leaf routers that connect stub networks to the Internet. The core mechanism of FF-Watch is based on the inherent feature of fast flux networks: flux agents within stub networks take the role of relaying client requests to point-of-sale websites of spam campaigns. The main idea of FF-Watch is to correlate incoming TCP connection requests to flux agents within a stub network with outgoing TCP connection requests from the same agents to the point-of-sale website. Theoretical and traffic trace driven analysis shows that the proposed mechanism can be utilized to efficiently detect fast flux agents within a stub network.
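The correlation idea (an inbound TCP connection to a stub-network host followed shortly by an outbound connection from that same host suggests relaying) can be sketched in a much simplified form. The record format, window and threshold below are illustrative assumptions, not FF-Watch's actual parameters:

```python
from collections import defaultdict

def detect_flux_agents(connections, window=1.0, threshold=5):
    """Flag hosts that repeatedly relay traffic: an outbound connection
    request issued within `window` seconds of an inbound one counts as a
    relay event, and hosts accumulating `threshold` events are flagged.

    `connections` is a list of (timestamp, host, direction) tuples with
    direction 'in' or 'out'; this toy format stands in for real packet
    traces observed at a leaf router.
    """
    inbound = defaultdict(list)   # host -> inbound request timestamps
    relays = defaultdict(int)     # host -> count of correlated pairs
    for ts, host, direction in sorted(connections):
        if direction == 'in':
            inbound[host].append(ts)
        elif any(0 <= ts - t <= window for t in inbound[host]):
            relays[host] += 1     # outbound shortly after an inbound
    return {h for h, n in relays.items() if n >= threshold}
```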
TropFlux: air-sea fluxes for the global tropical oceans-description and evaluation
Digital Repository Service at National Institute of Oceanography (India)
PraveenKumar, B.; Vialard, J.; Lengaigne, M.; Murty, V.S.N.; McPhaden, M.J.
This paper evaluates several timely, daily air-sea heat flux products (NCEP, NCEP2, ERA-Interim and OAFlux/ISCCP) against observations and present the newly developed TropFlux product. This new product uses bias-corrected ERA-interim and ISCCP data...
About Merging Threshold and Critical Flux Concepts into a Single One: The Boundary Flux
Directory of Open Access Journals (Sweden)
Marco Stoller
2014-01-01
Full Text Available In the last decades much effort has been put into understanding fouling phenomena on membranes. One successful approach to describing fouling on membranes is the critical flux theory. The possibility of measuring a maximum value of the permeate flux for a given system without incurring fouling was a breakthrough in membrane process design. However, in many cases critical fluxes were found to be very low, lower than the economic feasibility of the process. Knowledge of the critical flux value must therefore be considered a good starting point for process design. In recent years a new concept was introduced, the threshold flux, which defines the maximum permeate flow rate characterized by a low, constant fouling-rate regime. This concept, more than the critical flux, is a practical tool for membrane process designers. In this paper a brief review of critical and threshold flux is reported and analyzed. Since the two concepts share many common aspects, they are merged into a new concept, called the boundary flux, whose validation is carried out by analysis of data previously collected by the authors during the treatment of olive vegetation wastewater by ultrafiltration and nanofiltration membranes.
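A minimal reading of the boundary-flux idea, finding the highest operated permeate flux whose observed fouling rate remains acceptable, might look like the sketch below. The data layout and the fouling-rate threshold are assumptions for illustration, not values from the paper:

```python
import numpy as np

def boundary_flux(fluxes, fouling_rates, max_rate):
    """Return the highest permeate flux whose measured fouling rate
    (e.g. the rate of transmembrane-pressure increase, dTMP/dt) stays
    at or below `max_rate`, or None if no operating point qualifies.

    A toy selection rule standing in for the boundary-flux analysis.
    """
    fluxes = np.asarray(fluxes, dtype=float)
    rates = np.asarray(fouling_rates, dtype=float)
    acceptable = fluxes[rates <= max_rate]   # low constant-fouling regime
    return float(acceptable.max()) if acceptable.size else None
```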
Velthof, G.L.; Oenema, O.
1995-01-01
Accurate estimates of total nitrous oxide (N2O) losses from grasslands derived from flux-chamber measurements are hampered by the large spatial and temporal variability of N2O fluxes from these sites. In this study, four methods for the calculation o
EL-2 reactor: Thermal neutron flux distribution; EL-2: Repartition du flux de neutrons thermiques
Energy Technology Data Exchange (ETDEWEB)
Rousseau, A; Genthon, J P [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1958-07-01
The flux distribution of thermal neutrons in the EL-2 reactor is studied. The reactor core and lattices are described, as well as the experimental reactor facilities, in particular the experimental channels and special facilities. The measurements show that the thermal neutron flux increases in the central channel when enriched uranium is used in place of natural uranium. However, the thermal neutron flux in the other reactor channels is not perturbed by the fuel modification. The macroscopic flux distribution is measured as a function of the radial positioning of the fuel rods. The longitudinal neutron flux distribution in a fuel rod is also measured and shows no difference between enriched and natural uranium fuel rods. In addition, flux distribution measurements were carried out for rods containing other materials such as steel or aluminium. The neutron flux distribution is also studied in all the experimental channels as well as in the thermal column. The determination of the distribution of the thermal neutron flux in all experimental facilities, the thermal column and the fuel channels was made with a heavy water level of 1825 mm and is given for an operating power of 1000 kW. (M.P.)
On the choice of the driving temperature for eddy-covariance carbon dioxide flux partitioning
Directory of Open Access Journals (Sweden)
G. Lasslop
2012-12-01
Full Text Available Networks that merge and harmonise eddy-covariance measurements from many different parts of the world have become an important observational resource for ecosystem science. Empirical algorithms have been developed which combine direct observations of the net ecosystem exchange (NEE) of carbon dioxide with simple empirical models to disentangle photosynthetic (GPP) and respiratory (R_eco) fluxes. The increasing use of these estimates for the analysis of climate sensitivities, model evaluation and calibration demands a thorough understanding of assumptions in the analysis process and the resulting uncertainties of the partitioned fluxes. The semi-empirical models used in flux partitioning algorithms require temperature observations as input, but as respiration takes place in many parts of an ecosystem, it is unclear which temperature input (air, surface, bole, or soil at a specific depth) should be used. This choice is a source of uncertainty and potential biases. In this study, we analysed the correlation between different temperature observations and nighttime NEE (which equals nighttime respiration) across FLUXNET sites to understand the potential of the different temperature observations as input for the flux partitioning model. We found that the differences in the correlation between different temperature data streams and nighttime NEE are small and depend on the selection of sites. We investigated the effects of the choice of temperature data by running two flux partitioning algorithms with air and soil temperature. We found that the time lag (phase shift) between air and soil temperatures explains the differences in the GPP and R_eco estimates obtained when using either air or soil temperatures for flux partitioning. The impact of the source of temperature data on other derived ecosystem parameters was estimated, and the strongest impact was found for the temperature sensitivity. Overall, this study suggests that the
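A toy version of the partitioning step illustrates why the temperature input matters: respiration is modelled from temperature, and GPP then follows from the sign convention NEE = R_eco - GPP. Here a simple Q10 model stands in for the semi-empirical respiration models used by the actual algorithms, and all parameter values are illustrative:

```python
import numpy as np

def q10_respiration(temp, r_ref, q10=2.0, t_ref=15.0):
    """Q10 respiration model: respiration increases by a factor `q10`
    per 10 degree C rise above the reference temperature `t_ref`.
    A common simple stand-in for the semi-empirical models in flux
    partitioning; parameter values here are illustrative."""
    return r_ref * q10 ** ((temp - t_ref) / 10.0)

def partition(nee, temp, r_ref, q10=2.0):
    """Partition NEE into (GPP, R_eco) using NEE = R_eco - GPP.

    Which `temp` series is supplied (air or soil) is exactly the choice
    the study examines: a phase shift between the two series shifts the
    modelled R_eco and hence the derived GPP.
    """
    reco = q10_respiration(temp, r_ref, q10)
    return reco - nee, reco  # GPP, R_eco
```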
A filtered backprojection algorithm with characteristics of the iterative Landweber algorithm
Zeng, Gengsheng L.
2012-01-01
Purpose: In order to eventually develop an analytical algorithm with noise characteristics of an iterative algorithm, this technical note develops a window function for the filtered backprojection (FBP) algorithm in tomography that behaves as an iterative Landweber algorithm.
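For context, the plain Landweber iteration whose noise behaviour the window function is designed to reproduce can be sketched as follows, for a generic linear model A x = b with illustrative sizes (this is not the authors' FBP implementation):

```python
import numpy as np

def landweber(A, b, iterations=100, step=None):
    """Landweber iteration x_{k+1} = x_k + w * A^T (b - A x_k).

    The step size w must satisfy 0 < w < 2 / ||A||^2 for convergence;
    here we default to 1 / ||A||^2 (spectral norm), a safe choice.
    In tomography, A would be the projection operator.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        x = x + step * A.T @ (b - A @ x)  # gradient step on ||Ax - b||^2
    return x
```

Early stopping of this iteration acts as regularization, which is the noise behaviour an analytically designed FBP window can try to mimic.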
A retrodictive stochastic simulation algorithm
International Nuclear Information System (INIS)
Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.
2010-01-01
In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
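The retrodictive idea, inferring likely initial states from an observed final state, can be illustrated in the simpler discrete-time Markov-chain setting via Bayes' rule. This is a stand-in for the paper's master-equation algorithm, not a reimplementation of it:

```python
import numpy as np

def retrodict_initial(P, final_state, prior):
    """Posterior over initial states of a one-step Markov chain given the
    observed final state: P(i | final) proportional to prior(i) * P[i, final].

    `P` is the row-stochastic transition matrix P[i, j] = P(j | i).
    A discrete analogue of retrodiction for master-equation dynamics.
    """
    likelihood = P[:, final_state]      # P(final | initial = i) for each i
    post = prior * likelihood
    return post / post.sum()            # normalise to a distribution
```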
Autonomous algorithms for image restoration
Griniasty, Meir
1994-01-01
We describe a general theoretical framework for algorithms that adaptively tune all their parameters during the restoration of a noisy image. The adaptation procedure is based on a mean-field approach known as ``Deterministic Annealing'', and is reminiscent of the ``Deterministic Boltzmann Machine''. The algorithm is less time consuming than its simulated annealing alternative. We apply the theory to several architectures and compare their performances.
Algorithms and Public Service Media
Sørensen, Jannick Kirk; Hutchinson, Jonathon
2018-01-01
When Public Service Media (PSM) organisations introduce algorithmic recommender systems to suggest media content to users, fundamental values of PSM are challenged. Beyond being confronted with ubiquitous computer-ethics problems of causality and transparency, the identity of PSM as curator and agenda-setter is also challenged. The algorithms represent rules for which content to present to whom, and in this sense they may discriminate and bias the exposure of diversity. Furthermore, on a pra...
New algorithms for parallel MRI
International Nuclear Information System (INIS)
Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A
2008-01-01
Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their variants, we consider the problem as a non-linear inverse problem. However, in order to avoid cost-intensive derivatives we use Landweber-Kaczmarz iteration and, in order to improve the overall results, some additional sparsity constraints.
Algorithm for programming function generators
International Nuclear Information System (INIS)
Bozoki, E.
1981-01-01
The present paper deals with a mathematical problem encountered when driving a fully programmable μ-processor-controlled function generator. An algorithm is presented to approximate a desired function by a set of straight segments in such a way that additional (hardware-imposed) restrictions are also satisfied. A computer program which incorporates this algorithm and automatically generates the necessary input for the function generator for a broad class of desired functions is also described
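The segment-approximation task can be sketched with a greedy chord-fitting routine: each segment is extended as far as the chord stays within a given error of the function. The error check and probe grid are assumptions, the hardware restrictions mentioned in the abstract are ignored, and `f` is assumed to accept NumPy arrays:

```python
import numpy as np

def piecewise_linear_segments(f, a, b, max_error, n_probe=64):
    """Greedily choose breakpoints a = x_0 < x_1 < ... < x_n = b such
    that on each [x_i, x_{i+1}] the straight chord through the endpoints
    deviates from f by at most `max_error` (checked on a probe grid).

    A sketch of the segment-fitting idea only; a real function generator
    would add constraints such as a maximum segment count.
    """
    breakpoints = [a]
    x0 = a
    while x0 < b:
        x1 = b
        while True:
            xs = np.linspace(x0, x1, n_probe)
            # chord through (x0, f(x0)) and (x1, f(x1))
            chord = f(x0) + (f(x1) - f(x0)) * (xs - x0) / (x1 - x0)
            if np.max(np.abs(f(xs) - chord)) <= max_error or x1 - x0 < 1e-9:
                break
            x1 = x0 + 0.5 * (x1 - x0)  # halve the segment and retry
        breakpoints.append(x1)
        x0 = x1
    return breakpoints
```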
Neutronic rebalance algorithms for SIMMER
International Nuclear Information System (INIS)
Soran, P.D.
1976-05-01
Four algorithms to solve the two-dimensional neutronic rebalance equations in SIMMER are investigated. Results of the study are presented and indicate that a matrix decomposition technique with a variable convergence criterion is the best solution algorithm in terms of accuracy and calculational speed. Rebalance numerical stability problems are examined. The results of the study can be applied to other neutron transport codes which use discrete ordinates techniques