An Algorithm for Induction Motor Stator Flux Estimation
Directory of Open Access Journals (Sweden)
STOJIC, D. M.
2012-08-01
Full Text Available A new method for induction motor stator flux estimation, for use in sensorless IM drive applications, is presented in this paper. The proposed algorithm solves the problems associated with the pure integration commonly used for stator flux estimation. An observer-based structure, built around the stationary state of the stator flux vector, is proposed in order to eliminate the undesired DC offset component present in integrator-based stator flux estimates. A set of simulation runs shows that the proposed algorithm yields DC-offset-free stator flux estimates for both low and high stator frequency induction motor operation.
Flux-corrected transport principles, algorithms, and applications
Kuzmin, Dmitri; Turek, Stefan
2005-01-01
Addressing students and researchers as well as CFD practitioners, this book describes the state of the art in the development of high-resolution schemes based on the Flux-Corrected Transport (FCT) paradigm. Intended for readers who have a solid background in Computational Fluid Dynamics, the book begins with historical notes by J.P. Boris and D.L. Book. Review articles that follow describe recent advances in the design of FCT algorithms as well as various algorithmic aspects. The topics addressed in the book and its main highlights include: the derivation and analysis of classical FCT schemes with special emphasis on the underlying physical and mathematical constraints; flux limiting for hyperbolic systems; generalization of FCT to implicit time-stepping and finite element discretizations on unstructured meshes and its role as a subgrid scale model for Monotonically Integrated Large Eddy Simulation (MILES) of turbulent flows. The proposed enhancements of the FCT methodology also comprise the prelimiting and '...
Flux estimation algorithms for electric drives: a comparative study
Koteich, Mohamad
2016-01-01
International audience; This paper reviews stator flux estimation algorithms applied to alternating current motor drives. The so-called voltage-model estimation, which consists of integrating the back-electromotive force signal, is addressed. In practice, however, the pure integration is prone to drift problems due to noise, measurement error, stator resistance uncertainty and unknown initial conditions. This limitation becomes more restrictive at low-speed operation. Several soluti...
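The drift mechanism described above is easy to reproduce. The sketch below is an illustrative Python toy, not the reviewed paper's algorithm; the sinusoidal back-EMF, the 10 mV offset, and the 5 rad/s corner frequency are assumptions. It contrasts a pure integrator of the back-EMF with the common low-pass-filter substitute 1/(s + omega_c):

```python
import numpy as np

def flux_pure_integrator(v, i, R_s, dt):
    """Voltage-model stator flux: integrate the back-EMF e = v - R_s*i.
    Any DC offset in e accumulates as an unbounded ramp."""
    e = v - R_s * i
    return np.cumsum(e) * dt

def flux_lpf(v, i, R_s, dt, omega_c=5.0):
    """Common drift fix: replace the pure integrator 1/s with 1/(s + omega_c),
    i.e. psi' = e - omega_c * psi, discretized with forward Euler."""
    e = v - R_s * i
    psi = np.zeros_like(e)
    for k in range(1, len(e)):
        psi[k] = psi[k - 1] + dt * (e[k - 1] - omega_c * psi[k - 1])
    return psi

# A 50 Hz back-EMF with a small DC offset: the pure integrator ramps away,
# the low-pass estimate stays bounded.
dt = 1e-4
t = np.arange(0.0, 2.0, dt)
v = np.sin(2 * np.pi * 50 * t) + 0.01    # 10 mV offset (assumed)
i = np.zeros_like(t)
drift = flux_pure_integrator(v, i, 0.0, dt)
stable = flux_lpf(v, i, 0.0, dt)
```

The price of the low-pass substitute is a magnitude and phase error at low stator frequency, which is exactly why the observer-based corrections surveyed in these papers exist.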
Flux mapping algorithm (FMA) for 700 MWe PHWR
International Nuclear Information System (INIS)
Sonavani, Manoj; Ingle, V.J.; Singhvi, P.K.; Raj, Manish; Fernando, M.P.S.; Kumar, A.N.
2012-01-01
For a large reactor like the 700 MWe PHWR, effective spatial control is essential and is provided by the RRS. For spatial control purposes the reactor core is divided into 14 power zones, each with a corresponding light water zonal compartment. The 14 ZCCs are located in two radial planes, each containing 7 ZCCs. For each zone, power measurement is carried out using an Inconel (3-pitch-long) self-powered neutron detector (SPND) at an appropriate location close to the respective ZCC. The zone power obtained from the healthy zone control detector (ZCD) reading of a particular zone may not correspond to its actual power, because the single detector per zone measures only an average flux while the zone extends over a large core region. Therefore, accurate estimation of zone power calibration factors is required to estimate the zone powers and to provide effective spatial power control that avoids xenon-induced spatial power oscillations in large PHWRs like the 700 and 540 MWe reactors. This accurate calculation of zone power is carried out by the FMS, which uses λ modes in its algorithm. The flux at any point inside the reactor can be represented as a linear combination of these modes; the coefficients used in the expansion are called combining coefficients. If the readings of the detectors are known, the combining coefficients can be estimated by simple matrix operations. Once these combining coefficients are known, the flux at any point inside the reactor can be found. (author)
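The "simple matrix operations" step above can be sketched numerically: evaluate the λ-mode shapes at the detector sites, solve a least-squares system for the combining coefficients, then reuse them anywhere in the core. The mode shapes and detector geometry below are hypothetical placeholders, not the FMA/FMS data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_modes, n_det = 5, 14                    # e.g. 14 zone-control detectors (assumed sizes)
Psi = rng.random((n_det, n_modes))        # mode shapes evaluated at detector sites (hypothetical)
a_true = rng.random(n_modes)              # "true" combining coefficients
d = Psi @ a_true                          # simulated detector readings

# Combining coefficients recovered by a least-squares matrix operation
a_hat, *_ = np.linalg.lstsq(Psi, d, rcond=None)

# Flux anywhere in the core is then the same linear combination of mode shapes
psi_point = rng.random(n_modes)           # mode values at an arbitrary core point (hypothetical)
flux_point = psi_point @ a_hat
```

With more detectors than retained modes, the least-squares fit also gives some robustness against a single failed detector, which is the practical situation the abstract alludes to.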
An Improved Seeding Algorithm of Magnetic Flux Lines Based on Data in 3D Space
Directory of Open Access Journals (Sweden)
Jia Zhong
2015-05-01
Full Text Available This paper will propose an approach to increase the accuracy and efficiency of seeding algorithms of magnetic flux lines in magnetic field visualization. To obtain accurate and reliable visualization results, the density of the magnetic flux lines should map the magnetic induction intensity, and seed points should determine the density of the magnetic flux lines. However, the traditional seeding algorithm, which is a statistical algorithm based on data, will produce errors when computing magnetic flux through subdivision of the plane. To achieve higher accuracy, more subdivisions should be made, which will reduce efficiency. This paper analyzes the errors made when the traditional seeding algorithm is used and gives an improved algorithm. It then validates the accuracy and efficiency of the improved algorithm by comparing the results of the two algorithms with results from the equivalent magnetic flux algorithm.
Flux-corrected transport principles, algorithms, and applications
Löhner, Rainald; Turek, Stefan
2012-01-01
Many modern high-resolution schemes for Computational Fluid Dynamics trace their origins to the Flux-Corrected Transport (FCT) paradigm. FCT maintains monotonicity using a nonoscillatory low-order scheme to determine the bounds for a constrained high-order approximation. This book begins with historical notes by J.P. Boris and D.L. Book who invented FCT in the early 1970s. The chapters that follow describe the design of fully multidimensional FCT algorithms for structured and unstructured grids, limiting for systems of conservation laws, and the use of FCT as an implicit subgrid scale model. The second edition presents 200 pages of additional material. The main highlights of the three new chapters include: FCT-constrained interpolation for Arbitrary Lagrangian-Eulerian methods, an optimization-based approach to flux correction, and FCT simulations of high-speed flows on overset grids. Addressing students and researchers, as well as CFD practitioners, the book is focused on computational aspects and contains m...
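The low-order/high-order blend that defines FCT can be sketched for 1D linear advection: an upwind low-order step provides the bounds, a Lax-Wendroff flux provides the accuracy, and Zalesak's limiter decides how much antidiffusion is safe. This is a generic textbook FCT sketch, not a scheme taken from the book:

```python
import numpy as np

def fct_advect(q, c):
    """One FCT step for q_t + u q_x = 0 on a periodic grid, Courant number c <= 1.
    Low order: upwind; high order: Lax-Wendroff; Zalesak flux limiter."""
    qm1, qp1 = np.roll(q, 1), np.roll(q, -1)
    # Low-order (upwind) transported-diffused solution
    qtd = q - c * (q - qm1)
    # Antidiffusive flux A[i] lives at interface i+1/2; adding it unlimited
    # would recover Lax-Wendroff
    A = 0.5 * c * (1.0 - c) * (qp1 - q)
    Am1 = np.roll(A, 1)                                   # interface i-1/2
    # Local bounds from the low-order solution and the previous step
    stack = [qtd, np.roll(qtd, 1), np.roll(qtd, -1), q, qm1, qp1]
    qmax, qmin = np.max(stack, axis=0), np.min(stack, axis=0)
    Pp = np.maximum(Am1, 0.0) - np.minimum(A, 0.0)        # total antidiffusive inflow
    Pm = np.maximum(A, 0.0) - np.minimum(Am1, 0.0)        # total antidiffusive outflow
    eps = 1e-14
    Rp = np.where(Pp > eps, np.minimum(1.0, (qmax - qtd) / (Pp + eps)), 0.0)
    Rm = np.where(Pm > eps, np.minimum(1.0, (qtd - qmin) / (Pm + eps)), 0.0)
    # Limiting factor at interface i+1/2 (Zalesak)
    C = np.where(A >= 0.0,
                 np.minimum(np.roll(Rp, -1), Rm),
                 np.minimum(Rp, np.roll(Rm, -1)))
    Alim = C * A
    return qtd - (Alim - np.roll(Alim, 1))

# Advect a square pulse one full revolution: FCT keeps it inside [0, 1]
# (no over/undershoots) and conserves the total mass exactly.
q = np.zeros(100)
q[40:60] = 1.0
for _ in range(200):            # 200 steps at c = 0.5 -> one full period
    q = fct_advect(q, 0.5)
```

The flux-difference form of the update is what makes the scheme conservative regardless of how aggressively the limiter clips.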
An implicit flux-split algorithm to calculate hypersonic flowfields in chemical equilibrium
Palmer, Grant
1987-01-01
An implicit, finite-difference, shock-capturing algorithm that calculates inviscid, hypersonic flows in chemical equilibrium is presented. The flux vectors and flux Jacobians are differenced using a first-order flux-split technique. The equilibrium composition of the gas is determined by minimizing the Gibbs free energy at every node point. The code is validated by comparing results over an axisymmetric hemisphere against previously published results. The algorithm is also applied to more practical configurations. The algorithm shows promising accuracy, stability, and versatility.
Inviscid flux-splitting algorithms for real gases with non-equilibrium chemistry
Shuen, Jian-Shun; Liou, Meng-Sing; Van Leer, Bram
1990-01-01
Formulations of inviscid flux splitting algorithms for chemical nonequilibrium gases are presented. A chemical system for air dissociation and recombination is described. Numerical results for one-dimensional shock tube and nozzle flows of air in chemical nonequilibrium are examined.
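In the perfect-gas limit, the Steger-Warming splitting that this line of work extends to real gases can be written compactly: split each eigenvalue into forward- and backward-moving parts and reassemble the flux from each set. The sketch below is the standard perfect-gas 1D Euler form, not the real-gas formulation of the paper; the defining consistency property F+ + F- = F is checked explicitly:

```python
import numpy as np

def steger_warming(rho, u, p, gamma=1.4):
    """Steger-Warming split fluxes F+ and F- for the 1D Euler equations
    (perfect gas; the papers above generalize this to real-gas chemistry)."""
    a = np.sqrt(gamma * p / rho)
    lam = np.array([u, u + a, u - a])
    lp = 0.5 * (lam + np.abs(lam))          # forward-moving part of each wave
    lm = 0.5 * (lam - np.abs(lam))          # backward-moving part

    def assemble(l):
        # Standard Steger-Warming reassembly of (mass, momentum, energy) flux
        w = (3.0 - gamma) * (l[1] + l[2]) * a**2 / (2.0 * (gamma - 1.0))
        return rho / (2.0 * gamma) * np.array([
            2.0 * (gamma - 1.0) * l[0] + l[1] + l[2],
            2.0 * (gamma - 1.0) * l[0] * u + l[1] * (u + a) + l[2] * (u - a),
            (gamma - 1.0) * l[0] * u**2
            + 0.5 * l[1] * (u + a)**2 + 0.5 * l[2] * (u - a)**2 + w,
        ])

    return assemble(lp), assemble(lm)

# Consistency check: F+ + F- must equal the physical flux (rho*u, rho*u^2 + p, u*(E + p))
rho, u, p, gamma = 1.2, 80.0, 1.0e5, 1.4    # illustrative subsonic state
Fp, Fm = steger_warming(rho, u, p, gamma)
E = p / (gamma - 1.0) + 0.5 * rho * u**2
F = np.array([rho * u, rho * u**2 + p, u * (E + p)])
```

In a finite-difference scheme, F+ is then differenced backward and F- forward, which is what gives the splitting its upwind character.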
Scaling algorithms for the calculation of solar radiative fluxes
International Nuclear Information System (INIS)
Suzuki, Tsuneaki; Nakajima, Teruyuki; Tanaka, Masayuki
2007-01-01
We derived new scaling formulae based on the method of successive orders of scattering to calculate solar radiative flux. In this report, we demonstrate a multiple scaling method, in which we introduce scaling factors for each scattering order independently. The formula of radiative transfer by the method of successive orders of scattering cannot be solved rapidly except in the case of optically thin atmospheres. We therefore further derived a double scaling method, which scales the ordinary radiative transfer equation by two scaling factors. We applied the double scaling method to two-stream and four-stream approximations of the discrete ordinates method. Comparing the results of the double scaling method with those of the delta-M method, we found that the double scaling method improved the accuracy of radiative fluxes at large solar zenith angles, especially in the optically thin region, and that in the region where multiple scattering dominates, its accuracy was comparable to that of the delta-M method. Once we determined the scaling factors appropriately, the double scaling method calculated radiative fluxes as rapidly as the delta-M method in the two-stream and four-stream approximations. This method, therefore, is useful for accurate computation of solar radiative fluxes in general circulation models.
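For context, the delta-M method used as the benchmark above rests on the classical delta-scaling (similarity) relations, which truncate a fraction f of the forward-scattering peak and rescale the optical depth τ, single-scattering albedo ω, and asymmetry factor g accordingly:

```latex
\tau' = (1 - \omega f)\,\tau, \qquad
\omega' = \frac{(1 - f)\,\omega}{1 - \omega f}, \qquad
g' = \frac{g - f}{1 - f}
```

The multiple and double scaling methods of the abstract generalize this single-factor idea: instead of one truncation fraction f applied uniformly, separate scaling factors are introduced per scattering order (multiple scaling) or two factors are applied to the ordinary radiative transfer equation (double scaling).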
Flux-split algorithms for flows with non-equilibrium chemistry and vibrational relaxation
Grossman, B.; Cinnella, P.
1990-01-01
The present consideration of numerical computation methods for gas flows with nonequilibrium chemistry and thermodynamics gives attention to an equilibrium model, a general nonequilibrium model, and a simplified model based on vibrational relaxation. Flux-splitting procedures are developed for the fully coupled inviscid equations encompassing fluid dynamics and both chemical and internal energy-relaxation processes. A fully coupled, implicit, large-block structure is presented which embodies novel forms of flux-vector-split and flux-difference-split algorithms valid for nonequilibrium flow; illustrative high-temperature shock tube and nozzle flow examples are given.
Improved semianalytic algorithms for finding the flux from a cylindrical source
International Nuclear Information System (INIS)
Wallace, O.J.
1992-01-01
Hand-calculation methods involving semianalytic approximations of exact flux formulas continue to be useful in shielding calculations because they enable shield design personnel to make quick estimates of dose rates, check calculations made by more exact and time-consuming methods, and rapidly determine the scope of problems. They are also a valuable teaching tool. The most useful approximate flux formula is that for the flux at a lateral detector point from a cylindrical source with an intervening slab shield. Such an approximate formula is given by Rockwell. An improved formula for this case is given by Ono and Tsuro. Shure and Wallace also give this formula together with function tables and a detailed survey of its accuracy. The second section of this paper provides an algorithm for significantly improving the accuracy of the formula of Ono and Tsuro. The flux at a detector point outside the radial and axial extensions of a cylindrical source, again with an intervening slab shield, is another case of interest, but nowhere in the literature is this arrangement of source, shield, and detector point treated. In the third section of this paper, an algorithm for this case is given, based on superposition of sources and the algorithm of Section II. 6 refs., 1 fig., 1 tab.
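The exact integral that such semianalytic formulas approximate can serve as a numerical reference. The sketch below is a brute-force point-kernel quadrature for the uncollided flux only; the geometry (detector on the source midplane, slab normal to the source-detector line) and the single attenuation coefficient are illustrative assumptions, not the Rockwell or Ono-Tsuro formulas:

```python
import numpy as np

def cylinder_flux(S_v, R, H, d, mu_t, t_slab, n=40):
    """Uncollided flux at a lateral midplane detector a distance d from the
    axis of a uniform cylindrical source (radius R, height H), behind a slab
    shield of thickness t_slab and attenuation coefficient mu_t.
    Point-kernel quadrature: phi = integral of S_v * exp(-mu_t * slant) / (4 pi rho^2) dV."""
    r = (np.arange(n) + 0.5) * R / n                    # radial midpoints
    th = (np.arange(2 * n) + 0.5) * np.pi / n           # 2n angles over 2*pi
    z = -H / 2 + (np.arange(n) + 0.5) * H / n           # axial midpoints
    rr, tt, zz = np.meshgrid(r, th, z, indexing="ij")
    dV = (R / n) * (np.pi / n) * (H / n) * rr           # r dr dtheta dz
    x, y = rr * np.cos(tt), rr * np.sin(tt)
    dx, dy, dz = d - x, -y, -zz                          # source point -> detector
    rho2 = dx**2 + dy**2 + dz**2
    # Slant path through the slab along each ray: t_slab / cos(angle to x-axis)
    slant = t_slab * np.sqrt(rho2) / np.abs(dx)
    return float(np.sum(S_v * np.exp(-mu_t * slant) / (4.0 * np.pi * rho2) * dV))

phi = cylinder_flux(S_v=1.0, R=0.5, H=2.0, d=3.0, mu_t=0.5, t_slab=0.2)
```

A table of such quadrature values is one way to survey the accuracy of a hand formula, in the spirit of the function tables mentioned above.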
Xue, Y.; Liu, S.; Hu, Y.; Yang, J.; Chen, Q.
2007-01-01
To improve the accuracy in prediction, Genetic Algorithm based Adaptive Neural Network Ensemble (GA-ANNE) is presented. Intersections are allowed between different training sets based on the fuzzy clustering analysis, which ensures the diversity as well as the accuracy of individual Neural Networks (NNs). Moreover, to improve the accuracy of the adaptive weights of individual NNs, GA is used to optimize the cluster centers. Empirical results in predicting carbon flux of Duke Forest reveal that GA-ANNE can predict the carbon flux more accurately than Radial Basis Function Neural Network (RBFNN), Bagging NN ensemble, and ANNE. © 2007 IEEE.
Flux-split algorithms for flows with non-equilibrium chemistry and thermodynamics
Cinnella, Pasquale
New flux-split algorithms are developed for high-velocity, high-temperature flow situations, where finite-rate chemistry and non-equilibrium thermodynamics greatly affect the physics of the problem. Two flux-vector-split algorithms, of the Steger-Warming and of the Van Leer type, and one flux-difference-split algorithm of the Roe type are established and utilized for the accurate numerical simulation of flows with dissociation, ionization, and combustion phenomena. Several thermodynamic models are used, including a simplified vibrational non-equilibrium model and an equilibrium model based upon refined statistical mechanical properties. The framework provided is flexible enough to accommodate virtually any chemical model and a wide range of non-equilibrium, multi-temperature thermodynamic models. A theoretical study of the main features of flows with free electrons, for conditions that require the use of two translational temperatures in the thermal model, is developed. A simple but powerful asymptotic analysis is presented which allows the establishment of the fundamental gas dynamic properties of flows with multiple translational temperatures. The new algorithms demonstrate their accuracy and robustness for challenging flow problems. The influence of several assumptions on the chemical and thermal behavior of the flows is investigated, and a comparison with results obtained using different numerical approaches, in particular spectral methods, proves favorable to the present techniques.
An improved flux-split algorithm applied to hypersonic flows in chemical equilibrium
Palmer, Grant
1988-01-01
An explicit, finite-difference, shock-capturing numerical algorithm is presented and applied to hypersonic flows assumed to be in thermochemical equilibrium. Real-gas chemistry is either loosely coupled to the gasdynamics by way of a Gibbs free energy minimization package or fully coupled using species mass conservation equations with finite-rate chemical reactions. A scheme is developed that maintains stability in the explicit, finite-rate formulation while allowing relatively high time steps. The codes use flux vector splitting to difference the inviscid fluxes and employ real-gas corrections to viscosity and thermal conductivity. Numerical results are compared against existing ballistic range and flight data. Flows about complex geometries are also computed.
Optimal Design of the Transverse Flux Machine Using a Fitted Genetic Algorithm with Real Parameters
DEFF Research Database (Denmark)
Argeseanu, Alin; Ritchie, Ewen; Leban, Krisztina Monika
2012-01-01
This paper applies a fitted genetic algorithm (GA) to the optimal design of the transverse flux machine (TFM). The main goal is to provide a tool for the optimal design of TFM that is easy to use. The GA optimizes the analytic basic design of two TFM topologies: the C-core and the U-core. First, the GA was designed with real parameters. A further objective of the fitted GA is minimization of the computation time, related to the number of individuals, the number of generations, and the types of operators and their specific parameters....
Hoffmann, Mathias; Schulz-Hanke, Maximilian; Garcia Alba, Joana; Jurisch, Nicole; Hagemann, Ulrike; Sachs, Torsten; Sommer, Michael; Augustin, Jürgen
2016-04-01
Processes driving methane (CH4) emissions in wetland ecosystems are highly complex. In particular, the separation of CH4 emissions into ebullition- and diffusion-derived flux components, a prerequisite for mechanistic process understanding and the identification of potential environmental drivers, is rather challenging. We present a simple calculation algorithm, based on an adaptive R-script, which separates open-water closed-chamber CH4 flux measurements into diffusion- and ebullition-derived components. Hence, flux-component-specific dynamics are revealed and potential environmental drivers identified. Flux separation is based on a statistical approach, using ebullition-related sudden concentration changes obtained during high-resolution CH4 concentration measurements. By applying the lower and upper quartile ± the interquartile range (IQR) as a variable threshold, diffusion-dominated periods of the flux measurement are filtered. Subsequently, flux calculation and separation are performed. The algorithm was verified in a laboratory experiment and tested under field conditions, using flux measurement data (July to September 2013) from a flooded former fen grassland site. Erratic ebullition events contributed 46% of total CH4 emissions, which is comparable to values reported in the literature. Additionally, a shift in the diurnal trend of diffusive fluxes throughout the measurement period, driven by the water temperature gradient, was revealed.
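The quartile ± IQR filtering step translates directly into code. The minimal Python sketch below works on synthetic data with assumed magnitudes; the published algorithm is an adaptive R-script with more careful event handling than this:

```python
import numpy as np

def separate_fluxes(conc, dt):
    """Split a closed-chamber CH4 concentration series into diffusion- and
    ebullition-derived flux following the quartile +/- IQR idea above:
    step-to-step increments outside [Q1 - IQR, Q3 + IQR] are treated as
    ebullition events; the remaining 'quiet' increments define the diffusive rate."""
    dc = np.diff(conc)
    q1, q3 = np.percentile(dc, [25, 75])
    iqr = q3 - q1
    quiet = (dc >= q1 - iqr) & (dc <= q3 + iqr)
    diff_flux = np.mean(dc[quiet]) / dt                   # diffusive rate (conc units / s)
    ebu_flux = np.sum(dc[~quiet]) / (dt * len(dc))        # burst contribution, run-averaged
    return diff_flux, ebu_flux

# Synthetic chamber record: slow linear rise (diffusion) plus two sudden bubbles
rng = np.random.default_rng(1)
t = np.arange(600)                                        # 10 min at 1 Hz (assumed)
conc = 2.0 + 0.001 * t + rng.normal(0, 1e-4, t.size)
conc[200:] += 0.05                                        # ebullition event 1
conc[450:] += 0.08                                        # ebullition event 2
d_flux, e_flux = separate_fluxes(conc, dt=1.0)
```

Because the threshold is derived from the data's own quartiles, the same filter adapts between quiet and bubbly chamber deployments, which is the point of the "variable threshold" in the abstract.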
Data Driven Estimation of Transpiration from Net Water Fluxes: the TEA Algorithm
Nelson, J. A.; Carvalhais, N.; Cuntz, M.; Delpierre, N.; Knauer, J.; Migliavacca, M.; Ogee, J.; Reichstein, M.; Jung, M.
2017-12-01
The eddy covariance method, while powerful, can only provide a net accounting of ecosystem fluxes. Particularly with water cycle components, efforts to partition total evapotranspiration (ET) into the biotic component (transpiration, T) and the abiotic component (here evaporation, E) have seen limited success, with no one method emerging as a standard. Here we demonstrate a novel method that uses ecosystem WUE to predict transpiration in two steps: (1) a filtration step to isolate the signal of ET during periods where E is minimized and ET is likely dominated by the signal of T; and (2) a step which predicts WUE using meteorological variables as well as information derived from the carbon and energy fluxes. To assess the underlying assumptions, we tested the proposed method on three ecological models, allowing validation where the underlying carbon:water relationships, as well as the transpiration estimates, are known. The partitioning method shows high correlation (R² > 0.8) between Tmodel/ET and TTEA/ET across timescales from half-hourly to annual, as well as capturing spatial variability across sites. Apart from predictive performance, we explore the sensitivities of the method to the underlying assumptions, such as the effects of residual evaporation in the training dataset. Furthermore, we show initial transpiration estimates from the algorithm at the global scale, via the FLUXNET dataset.
Flux-corrected transport algorithms preserving the eigenvalue range of symmetric tensor quantities
Lohmann, Christoph
2017-12-01
This paper presents a new approach to constraining the eigenvalue range of symmetric tensors in numerical advection schemes based on the flux-corrected transport (FCT) algorithm and a continuous finite element discretization. In the context of element-based FEM-FCT schemes for scalar conservation laws, the numerical solution is evolved using local extremum diminishing (LED) antidiffusive corrections of a low order approximation which is assumed to satisfy the relevant inequality constraints. The application of a limiter to antidiffusive element contributions guarantees that the corrected solution remains bounded by the local maxima and minima of the low order predictor. The FCT algorithm to be presented in this paper guarantees the LED property for the maximal and minimal eigenvalues of the transported tensor at the low order evolution step. At the antidiffusive correction step, this property is preserved by limiting the antidiffusive element contributions to all components of the tensor in a synchronized manner. The definition of the element-based correction factors for FCT is based on perturbation bounds for auxiliary tensors which are constrained to be positive semidefinite to enforce the generalized LED condition. The derivation of sharp bounds involves calculating the roots of polynomials of degree up to 3. As inexpensive and numerically stable alternatives, limiting techniques based on appropriate estimates are considered. The ability of the new limiters to enforce local bounds for the eigenvalue range is confirmed by numerical results for 2D advection problems.
DEFF Research Database (Denmark)
Ravn, Ib
FLUX denotes a flowing or streaming, i.e., dynamics. If one understands life as process and development rather than as things and mechanics, one arrives at a different picture of the good life than the one suggested by the familiar Western mechanicism. Dynamically understood, the good life involves the best possible ... channeling of the flux or energy that streams through us and makes itself known in our daily activities. Should our thoughts, actions, work, social life and political life be organized according to tight and fixed sets of rules, with no deviation? Or should they, on the contrary, proceed quite unhindered by rules and bonds ...
30 CFR 57.5006 - Air Quality-Surface Only [Reserved]
2010-07-01
30 Mineral Resources 1 (2010-07-01) Air Quality-Surface Only 57.5006 Section 57.5006 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Air Quality, Radiation, Physical Agents, and Diesel Particulate...
Yang, Dongxu; Zhang, Huifang; Liu, Yi; Chen, Baozhang; Cai, Zhaonan; Lü, Daren
2017-08-01
Monitoring atmospheric carbon dioxide (CO2) from space-borne state-of-the-art hyperspectral instruments can provide a high precision global dataset to improve carbon flux estimation and reduce the uncertainty of climate projection. Here, we introduce a carbon flux inversion system for estimating carbon flux with satellite measurements under the support of "The Strategic Priority Research Program of the Chinese Academy of Sciences—Climate Change: Carbon Budget and Relevant Issues". The carbon flux inversion system is composed of two separate parts: the Institute of Atmospheric Physics Carbon Dioxide Retrieval Algorithm for Satellite Remote Sensing (IAPCAS), and CarbonTracker-China (CT-China), developed at the Chinese Academy of Sciences. The Greenhouse gases Observing SATellite (GOSAT) measurements are used in the carbon flux inversion experiment. To improve the quality of the IAPCAS-GOSAT retrieval, we have developed a post-screening and bias correction method, resulting in 25%-30% of the data remaining after quality control. Based on these data, the seasonal variation of XCO2 (column-averaged CO2 dry-air mole fraction) is studied, and a strong relation with vegetation cover and population is identified. Then, the IAPCAS-GOSAT XCO2 product is used in carbon flux estimation by CT-China. The net ecosystem CO2 exchange is -0.34 Pg C yr^-1 (±0.08 Pg C yr^-1), with a large error reduction of 84%, which is a significant improvement on the error reduction when compared with in situ-only inversion.
International Nuclear Information System (INIS)
Shi Xueming; Wu Hongchun; Sun Shouhua; Liu Shuiqing
2003-01-01
The in-core fuel management optimization model based on the genetic algorithm has been established. An encode/decode technique based on the assemblies position is presented according to the characteristics of HFETR. Different reproduction strategies have been studied. The expert knowledge and the adaptive genetic algorithms are incorporated into the code to get the optimized loading patterns that can be used in HFETR
Directory of Open Access Journals (Sweden)
Xuanyu Wang
2017-12-01
Full Text Available Terrestrial latent heat flux (LE) is a key component of the global terrestrial water, energy, and carbon exchanges. Accurate estimation of LE from moderate resolution imaging spectroradiometer (MODIS) data remains a major challenge. In this study, we estimated the daily LE for different plant functional types (PFTs) across North America using three machine learning algorithms: artificial neural network (ANN), support vector machine (SVM), and multivariate adaptive regression spline (MARS), driven by MODIS and Modern Era Retrospective Analysis for Research and Applications (MERRA) meteorology data. These three predictive algorithms, which were trained and validated using observed LE over the period 2000-2007, all proved to be accurate. However, ANN outperformed the other two algorithms for the majority of the tested configurations for most PFTs and was the only method that arrived at 80% precision for LE estimation. We also applied the three machine learning algorithms to MODIS data and MERRA meteorology to map the average annual terrestrial LE of North America during 2002-2004 at a spatial resolution of 0.05°, which proved to be useful for estimating the long-term LE over North America.
McClain, Charles R.; Signorini, Sergio
2002-01-01
Sensitivity analyses of sea-air CO2 flux to gas transfer algorithms, climatological wind speeds, sea surface temperature (SST) and salinity (SSS) were conducted for the global oceans and selected regional domains. Large uncertainties in the global sea-air flux estimates are identified due to different gas transfer algorithms, global climatological wind speeds, and seasonal SST and SSS data. The global sea-air flux ranges from -0.57 to -2.27 Gt/yr, depending on the combination of gas transfer algorithms and global climatological wind speeds used. Different combinations of SST and SSS global fields resulted in changes as large as 35% in the global ocean sea-air flux. An error as small as plus or minus 0.2 in SSS translates into a plus or minus 43% deviation in the mean global CO2 flux. This result emphasizes the need for highly accurate satellite SSS observations for the development of remote sensing sea-air flux algorithms.
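A typical bulk flux calculation of the kind being perturbed in this sensitivity study looks as follows. The sketch uses the quadratic Wanninkhof (1992) gas transfer velocity and Weiss (1974) solubility; the coefficient choices (0.31, the Schmidt-number polynomial, the solubility constants) are precisely the ingredients that differ between the gas transfer algorithms compared above:

```python
import numpy as np

def schmidt_co2(sst):
    """Schmidt number of CO2 in seawater (Wanninkhof 1992 polynomial, SST in degC)."""
    return 2073.1 - 125.62 * sst + 3.6276 * sst**2 - 0.043219 * sst**3

def solubility_k0(sst, sss):
    """CO2 solubility K0 in mol L^-1 atm^-1 (Weiss 1974), SST in degC, SSS in psu."""
    T = sst + 273.15
    return np.exp(-58.0931 + 90.5069 * (100.0 / T) + 22.2940 * np.log(T / 100.0)
                  + sss * (0.027766 - 0.025888 * (T / 100.0)
                           + 0.0050578 * (T / 100.0)**2))

def co2_flux(u10, dpco2, sst, sss):
    """Bulk sea-air flux F = k * K0 * dpCO2, returned in mmol m^-2 d^-1.
    k is the W92 quadratic algorithm: 0.31 * u10^2 * (Sc/660)^-1/2, in cm/h.
    Swapping this k for another parameterization is what drives much of the
    quoted spread in the global flux."""
    k = 0.31 * u10**2 * np.sqrt(660.0 / schmidt_co2(sst))     # cm/h
    return 0.24 * k * solubility_k0(sst, sss) * dpco2         # 0.24 folds in unit conversion

F = co2_flux(u10=7.0, dpco2=-60.0, sst=15.0, sss=35.0)        # ocean uptake -> negative flux
```

Because k scales with the square of wind speed here, biases in the climatological wind product enter the flux quadratically, which is consistent with the large wind-driven spread reported in the abstract.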
Lamarche, L.; Degrez, G.; Prince, A.
A method is described that combines the geometric flexibility of finite element methodology with recent developments of high-resolution finite difference schemes for hyperbolic systems of equations. It is proposed to use the standard weighted residual approach to set up the discrete equations. Upwinding is then achieved via a modified quadrature rule. The Gaussian point is chosen to match the finite difference discretization on a model scalar equation. The extension to systems of equations is then obtained following the flux-splitting approach suggested by Steger and Warming (1981) and Van Leer (1982).
A novel robust and efficient algorithm for charge particle tracking in high background flux
International Nuclear Information System (INIS)
Fanelli, C; Cisbani, E; Dotto, A Del
2015-01-01
The high luminosity that will be reached in the new generation of high energy particle and nuclear physics experiments implies high background rates and large tracker occupancy, and therefore represents a new challenge for particle tracking algorithms. For instance, at Jefferson Laboratory (JLab) (VA, USA), one of the most demanding experiments in this respect, performed with a 12 GeV electron beam, is characterized by a luminosity up to 10^39 cm^-2 s^-1. To this scope, Gaseous Electron Multiplier (GEM) based trackers are under development for a new spectrometer that will operate at these high rates in Hall A of JLab. Within this context, we developed a new tracking algorithm based on a multistep approach: (i) all hardware information - time and charge - is exploited to minimize the number of hits to associate; (ii) a dedicated Neural Network (NN) has been designed for a fast and efficient association of the hits measured by the GEM detector; (iii) the measurements of the associated hits are further improved in resolution through the application of a Kalman filter and a Rauch-Tung-Striebel smoother. The algorithm is briefly presented along with a discussion of the promising first results. (paper)
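Step (iii) is standard state estimation. The sketch below runs a forward Kalman filter and then a Rauch-Tung-Striebel backward pass on a 1D constant-velocity track; the state model and noise levels are toy assumptions, not the GEM tracker's actual configuration:

```python
import numpy as np

def kalman_rts(z, dt=1.0, q=1e-4, r=0.04):
    """Forward Kalman filter plus Rauch-Tung-Striebel smoother for a 1D
    constant-velocity model (state = [position, velocity]) -- the same
    refinement stage applied after hit association in the algorithm above."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    n = len(z)
    xf = np.zeros((n, 2)); Pf = np.zeros((n, 2, 2))     # filtered estimates
    xp = np.zeros((n, 2)); Pp = np.zeros((n, 2, 2))     # one-step predictions
    x, P = np.array([z[0], 0.0]), np.eye(2)
    for k in range(n):
        if k > 0:                                        # predict
            x = F @ x
            P = F @ P @ F.T + Q
        xp[k], Pp[k] = x, P
        S = H @ P @ H.T + R                              # update with hit z[k]
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (z[k] - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        xf[k], Pf[k] = x, P
    xs = xf.copy()                                       # RTS backward pass
    for k in range(n - 2, -1, -1):
        G = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
        xs[k] = xf[k] + G @ (xs[k + 1] - xp[k + 1])
    return xf[:, 0], xs[:, 0]

rng = np.random.default_rng(2)
true_pos = 0.3 * np.arange(50)                           # straight toy track, slope 0.3
hits = true_pos + rng.normal(0, 0.2, 50)                 # noisy associated hits
filt, smooth = kalman_rts(hits)
```

The smoother uses all hits, past and future, at every point, which is why it beats the causal filter on a fully recorded track; that is its role as the final resolution-improvement step here.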
International Nuclear Information System (INIS)
Silva, C.F. da.
1979-09-01
A new formulation of the pseudocontinuous synthesis algorithm is applied to solve the static three-dimensional two-group diffusion equations. The new method avoids ambiguities regarding interface conditions, which are inherent to the differential formulation, by resorting to the finite difference version of the differential equations involved. A considerable number of input/output options, possible core configurations and control rod positionings are implemented, resulting in a very flexible as well as economical code to compute 3D fluxes, power density and reactivities of PWR reactors with partially inserted control rods. The performance of this new code is checked against the IAEA 3D Benchmark problem, and results show that SINT3D yields comparable accuracy with much less computing time and memory than conventional 3D finite difference codes. (Author) [pt
Indian Academy of Sciences (India)
have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming language is called a program. From activities 1-3, we can observe that: • Each activity is a command.
Indian Academy of Sciences (India)
algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language ... [Figure 2: Symbols used in a flowchart language to represent Assignment (e.g., x := sin(theta)), Read (e.g., Read A, B, C), and Print (e.g., Print x, y, z).]
Indian Academy of Sciences (India)
In the previous articles, we have discussed various common data-structures such as arrays, lists, queues and trees and illustrated the widely used algorithm design paradigm referred to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted ...
Energy Technology Data Exchange (ETDEWEB)
Mahowald, Natalie [Cornell Univ., Ithaca, NY (United States)
2016-11-29
Soils in natural and managed ecosystems and wetlands are well known sources of methane, nitrous oxides, and reactive nitrogen gases, but the magnitudes of gas flux to the atmosphere are still poorly constrained. Thus, the reasons for the large increases in atmospheric concentrations of methane and nitrous oxide since the preindustrial time period are not well understood. The low atmospheric concentrations of methane and nitrous oxide, despite being more potent greenhouse gases than carbon dioxide, complicate empirical studies to provide explanations. In addition to climate concerns, the emissions of reactive nitrogen gases from soils are important to the changing nitrogen balance in the earth system, subject to human management, and may change substantially in the future. Thus improved modeling of the emission fluxes of these species from the land surface is important. Currently, there are emission modules for methane and some nitrogen species in the Community Earth System Model’s Community Land Model (CLM-ME/N); however, there are large uncertainties and problems in the simulations, resulting in coarse estimates. In this proposal, we seek to improve these emission modules by combining state-of-the-art process modules for emissions, available data, and new optimization methods. In earth science problems, we often have substantial data and knowledge of processes in disparate systems, and thus we need to combine data and a general process level understanding into a model for projections of future climate that are as accurate as possible. The best methodologies for optimization of parameters in earth system models are still being developed. In this proposal we will develop and apply surrogate algorithms that a) were especially developed for computationally expensive simulations like CLM-ME/N models; b) were (in the earlier surrogate optimization Stochastic RBF) demonstrated to perform very well on computationally expensive complex partial differential equations in
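The surrogate idea, evaluating the expensive model only where a cheap response-surface model looks promising, can be sketched in a few lines. The toy RBF loop below is a simplification standing in for the Stochastic RBF strategy named above, and the cheap quadratic objective stands in for a CLM-ME/N run; both are illustrative assumptions:

```python
import numpy as np

def rbf_fit(X, y, eps=1.0):
    """Fit a Gaussian RBF interpolant through the evaluated points
    (a minimal stand-in for the surrogate models discussed above)."""
    A = np.exp(-eps * (X[:, None] - X[None, :]) ** 2)
    return np.linalg.solve(A + 1e-8 * np.eye(len(X)), y)

def rbf_eval(Xc, w, x, eps=1.0):
    """Evaluate the RBF surrogate with centers Xc and weights w at points x."""
    return np.exp(-eps * (x[:, None] - Xc[None, :]) ** 2) @ w

def surrogate_minimize(f, lo, hi, n_init=5, n_iter=15, seed=0):
    """Sketch of surrogate optimization: each round, refit the cheap surrogate,
    take its (slightly perturbed) grid minimum as the next expensive evaluation."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, n_init)
    y = np.array([f(x) for x in X])
    grid = np.linspace(lo, hi, 401)
    for _ in range(n_iter):
        w = rbf_fit(X, y)
        cand = grid[np.argmin(rbf_eval(X, w, grid))]
        cand += rng.normal(0, 0.01 * (hi - lo))       # small perturbation to keep exploring
        cand = np.clip(cand, lo, hi)
        X = np.append(X, cand)
        y = np.append(y, f(cand))
    return X[np.argmin(y)], y.min()

# Cheap quadratic standing in for an "expensive" simulation
xbest, fbest = surrogate_minimize(lambda x: (x - 1.3) ** 2, -2.0, 3.0)
```

The payoff is the evaluation budget: the expensive model is called only n_init + n_iter times, which is the property that makes this family of methods attractive for computationally expensive land-model calibration.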
Molina, F; Aguilera, P; Romero-Barrientos, J; Arellano, H F; Agramunt, J; Medel, J; Morales, J R; Zambra, M
2017-11-01
We present a methodology to obtain the energy distribution of the neutron flux of an experimental nuclear reactor, using multi-foil activation measurements and the Expectation Maximization unfolding algorithm, which is presented as an alternative to well-known unfolding methods such as GRAVEL. Self-shielding flux corrections for energy bin groups were obtained using MCNP6 Monte Carlo simulations. We have made studies at the Dry Tube of RECH-1, obtaining fluxes of 1.5(4)×10^13 cm^-2 s^-1 for the thermal neutron energy region, 1.9(5)×10^12 cm^-2 s^-1 for the epithermal neutron energy region, and 4.3(11)×10^11 cm^-2 s^-1 for the fast neutron energy region. Copyright © 2017 Elsevier Ltd. All rights reserved.
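The Expectation Maximization (MLEM) update used for this kind of spectrum unfolding can be sketched as follows; the 3×3 response matrix and "true" spectrum below are made-up test values, not RECH-1 data:

```python
def mlem_unfold(R, m, iterations=5000):
    """Expectation-Maximization (MLEM) unfolding. R[i][j] is the response of
    foil i to unit flux in energy group j; m[i] is the measured activity of
    foil i. Each iteration rescales the group fluxes by the ratio of measured
    to predicted foil activities, back-projected through the response matrix."""
    n_foils, n_groups = len(R), len(R[0])
    phi = [1.0] * n_groups                       # flat starting spectrum
    sens = [sum(R[i][j] for i in range(n_foils)) for j in range(n_groups)]
    for _ in range(iterations):
        y = [sum(R[i][j] * phi[j] for j in range(n_groups)) for i in range(n_foils)]
        phi = [phi[j] / sens[j] * sum(R[i][j] * m[i] / y[i] for i in range(n_foils))
               for j in range(n_groups)]
    return phi

# Hypothetical responses for three foils over three energy groups
# (thermal / epithermal / fast); real matrices come from cross-section data.
R = [[0.9, 0.1, 0.0],
     [0.2, 0.6, 0.2],
     [0.0, 0.1, 0.9]]
phi_true = [1.0, 2.0, 3.0]
m = [sum(R[i][j] * phi_true[j] for j in range(3)) for i in range(3)]
phi_est = mlem_unfold(R, m)
```

A useful property of the multiplicative update is that a non-negative starting spectrum stays non-negative, which is physically required of a flux.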
Indian Academy of Sciences (India)
In the program shown in Figure 1, we have repeated the algorithm M times, and we can make the following observations. Each block is essentially a different instance of "code"; that is, the objects differ by the value to which N is initialized before the execution of the "code" block. Thus, we can now avoid the repetition of the ...
Indian Academy of Sciences (India)
algorithms built into the computer corresponding to the logic-circuit rules that are used to .... For the purpose of carrying out arithmetic or logical operations the memory is organized in terms .... In fixed point representation, one essentially uses integer arithmetic operators assuming the binary point to be at some point other ...
Shuen, Jian-Shun; Liou, Meng-Sing; Van Leer, Bram
1989-01-01
The extension of the known flux-vector and flux-difference splittings to real gases via rigorous mathematical procedures is demonstrated. Formulations of both equilibrium and finite-rate chemistry for real-gas flows are described, with emphasis on derivations of finite-rate chemistry. Split-flux formulas from other authors are examined. A second-order upwind-based TVD scheme is adopted to eliminate oscillations and to obtain a sharp representation of discontinuities.
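For reference, the ideal-gas mass-flux component of the van Leer flux-vector splitting that these real-gas formulations generalize can be sketched as:

```python
def van_leer_mass_flux_split(rho, u, a):
    """Van Leer flux-vector splitting of the mass flux rho*u.
    rho: density, u: velocity, a: speed of sound. Returns (F+, F-) such
    that F+ + F- equals the full mass flux, with F+ >= 0 and F- <= 0
    in the subsonic range, giving a smooth upwind-biased split."""
    M = u / a
    if M >= 1.0:          # supersonic to the right: all flux travels forward
        return rho * u, 0.0
    if M <= -1.0:         # supersonic to the left: all flux travels backward
        return 0.0, rho * u
    f_plus = rho * a * (M + 1.0) ** 2 / 4.0
    f_minus = -rho * a * (M - 1.0) ** 2 / 4.0
    return f_plus, f_minus

fp, fm = van_leer_mass_flux_split(rho=1.2, u=100.0, a=340.0)
```

The consistency property F+ + F- = rho*u holds identically, since (M+1)^2 - (M-1)^2 = 4M.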
Hommen, G.; de M. Baar,; Citrin, J.; de Blank, H. J.; Voorhoeve, R. J.; de Bock, M. F. M.; Steinbuch, M.
2013-01-01
The flux surfaces' layout and the magnetic winding number q are important quantities for the performance and stability of tokamak plasmas. Normally, these quantities are iteratively derived by solving the plasma equilibrium for the poloidal and toroidal flux. In this work, a fast, non-iterative
International Nuclear Information System (INIS)
Vasudevan, M.; Arumugam, R.; Paramasivam, S.
2006-01-01
Field oriented control (FOC) and direct torque control (DTC) are becoming the industrial standards for induction motor torque and flux control. This paper aims to give a contribution to a detailed comparison between these two control techniques, emphasizing their advantages and disadvantages. The performance of the two control schemes is evaluated in terms of torque and flux ripple and their transient response to step variations of the torque command. Moreover, a new torque and flux ripple minimization technique is proposed to improve the performance of the DTC drive. The analysis is presented based on experimental results.
Langford, Ben; Cash, James; Acton, W. Joe F.; Valach, Amy C.; Hewitt, C. Nicholas; Fares, Silvano; Goded, Ignacio; Gruening, Carsten; House, Emily; Kalogridis, Athina-Cerise; Gros, Valérie; Schafers, Richard; Thomas, Rick; Broadmeadow, Mark; Nemitz, Eiko
2017-12-01
Biogenic emission algorithms predict that oak forests account for ˜ 70 % of the total European isoprene budget. Yet the isoprene emission potentials (IEPs) that underpin these model estimates are calculated from a very limited number of leaf-level observations and hence are highly uncertain. Increasingly, micrometeorological techniques such as eddy covariance are used to measure whole-canopy fluxes directly, from which isoprene emission potentials can be calculated. Here, we review five observational datasets of isoprene fluxes from a range of oak forests in the UK, Italy and France. We outline procedures to correct the measured net fluxes for losses from deposition and chemical flux divergence, which were found to be on the order of 5-8 and 4-5 %, respectively. The corrected observational data were used to derive isoprene emission potentials at each site in a two-step process. Firstly, six commonly used emission algorithms were inverted to back out time series of isoprene emission potential, and then an average isoprene emission potential was calculated for each site with an associated uncertainty. We used these data to assess how the derived emission potentials change depending upon the specific emission algorithm used and, importantly, on the particular approach adopted to derive an average site-specific emission potential. Our results show that isoprene emission potentials can vary by up to a factor of 4 depending on the specific algorithm used and whether or not it is used in a big-leaf or canopy environment (CE) model format. When using the same algorithm, the calculated average isoprene emission potential was found to vary by as much as 34 % depending on how the average was derived. Using a consistent approach with version 2.1 of the Model for Emissions of Gases and Aerosols from Nature (MEGAN), we derive new ecosystem-scale isoprene emission potentials for the five measurement sites: Alice Holt, UK (10 500 ± 2500 µg m-2 h-1); Bosco Fontana, Italy (1610
Lu, Shi Jing; Salleh, Abdul Hakim Mohamed; Mohamad, Mohd Saberi; Deris, Safaai; Omatu, Sigeru; Yoshioka, Michifumi
2014-09-28
Reconstructions of genome-scale metabolic networks from different organisms have become popular in recent years. Metabolic engineering can simulate the reconstruction process to obtain desirable phenotypes. In previous studies, optimization algorithms have been implemented to identify near-optimal sets of knockout genes for improving metabolite production. However, previous approaches suffered from premature convergence, and the stopping criteria were not clearly defined for each case. Therefore, this study proposes a hybrid of the ant colony optimization algorithm and flux balance analysis (ACOFBA) to predict near-optimal sets of gene knockouts in an effort to maximize growth rates and the production of certain metabolites. Here, we present a case study that uses Baker's yeast, Saccharomyces cerevisiae, as the model organism and targets the rate of vanillin production for optimization. The results of this study are the growth rate of the model organism after gene deletion and a list of knockout genes. The ACOFBA algorithm was found to improve the yield of vanillin in terms of growth rate and production compared with previous algorithms. Copyright © 2014 Elsevier Ltd. All rights reserved.
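The pheromone-guided search over knockout sets can be sketched as below. The toy fitness function stands in for an actual FBA evaluation of the yeast model, and the gene pool, "good" knockout set, and all parameter values are illustrative assumptions:

```python
import random

GENES = list(range(6))          # toy gene pool
GOOD_SET = {0, 3}               # knockouts that, in this toy, maximize production
K = 2                           # knockouts per candidate solution

def fitness(knockouts):
    # Stand-in for an FBA run: reward overlap with the beneficial set.
    return len(set(knockouts) & GOOD_SET)

def weighted_sample(pheromone, k):
    """Pick k distinct genes with probability proportional to pheromone."""
    pool = list(GENES)
    chosen = []
    for _ in range(k):
        total = sum(pheromone[g] for g in pool)
        r, acc = random.uniform(0, total), 0.0
        for g in pool:
            acc += pheromone[g]
            if acc >= r:
                chosen.append(g)
                pool.remove(g)
                break
    return chosen

random.seed(7)
pheromone = {g: 1.0 for g in GENES}
best_set, best_fit = None, -1
for _ in range(30):                         # colony iterations
    for _ant in range(10):
        ks = weighted_sample(pheromone, K)
        f = fitness(ks)
        if f > best_fit:
            best_set, best_fit = set(ks), f
    for g in GENES:                         # pheromone evaporation
        pheromone[g] *= 0.9
    for g in best_set:                      # deposit on the best-so-far set
        pheromone[g] += 1.0
```

The evaporation/deposit balance is what counters premature convergence: evaporation keeps unreinforced genes selectable while deposits bias sampling toward the best solution found.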
1995-01-01
The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 3 details the advanced CERES methods for performing scene identification and inverting each CERES scanner radiance to a top-of-the-atmosphere (TOA) flux. CERES determines cloud fraction, height, phase, effective particle size, layering, and thickness from high-resolution, multispectral imager data. CERES derives cloud properties for each pixel of the Tropical Rainfall Measuring Mission (TRMM) visible and infrared scanner and the Earth Observing System (EOS) moderate-resolution imaging spectroradiometer. Cloud properties for each imager pixel are convolved with the CERES footprint point spread function to produce average cloud properties for each CERES scanner radiance. The mean cloud properties are used to determine an angular distribution model (ADM) to convert each CERES radiance to a TOA flux. The TOA fluxes are used in simple parameterization to derive surface radiative fluxes. This state-of-the-art cloud-radiation product will be used to substantially improve our understanding of the complex relationship between clouds and the radiation budget of the Earth-atmosphere system.
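The radiance-to-flux inversion at the heart of this step can be sketched as follows; the anisotropy factors below are made-up placeholders, whereas the actual CERES ADMs are tabulated per scene type and viewing geometry:

```python
import math

# Hypothetical anisotropy factors R(scene, viewing-zenith bin). Real ADMs
# are empirical tables built from CERES observations; R = 1 everywhere
# would correspond to a perfectly isotropic (Lambertian) scene.
ADM = {
    ("clear_ocean", 0): 0.95, ("clear_ocean", 1): 1.05,
    ("overcast",    0): 1.00, ("overcast",    1): 1.10,
}

def radiance_to_toa_flux(radiance, scene, zenith_bin):
    """Invert a broadband radiance L (W m-2 sr-1) to a TOA flux (W m-2)
    using F = pi * L / R, where R is the scene-dependent anisotropy factor
    selected by the imager-derived cloud properties."""
    R = ADM[(scene, zenith_bin)]
    return math.pi * radiance / R

flux = radiance_to_toa_flux(80.0, "overcast", 0)
```

For the isotropic case (R = 1) this reduces to F = πL, which is why the anisotropy factor can be read as a correction relative to a Lambertian scene.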
Directory of Open Access Journals (Sweden)
Hadi Nazem-Bokaee
2015-09-01
Full Text Available The Total Membrane Influx constrained Flux Balance Analysis (ToMI-FBA) algorithm was developed in this research as a new tool to help researchers decide which microbial host and medium formulation are optimal for expressing a new metabolic pathway. ToMI-FBA relies on genome-scale metabolic flux modeling and a novel in silico cell membrane influx constraint that specifies the flux of atoms (not molecules) into the cell through all possible membrane transporters. The ToMI constraint is constructed through the addition of an extra row and column to the stoichiometric matrix of a genome-scale metabolic flux model. In this research, the mathematical formulation of the ToMI constraint is given along with four case studies that demonstrate its usefulness. In Case Study 1, ToMI-FBA returned an optimal culture medium formulation for the production of isobutanol from Bacillus subtilis. Significant levels of L-valine were recommended to optimize production, and this result has been observed experimentally. In Case Study 2, it is demonstrated how the carbon-to-nitrogen uptake ratio can be specified as an additional ToMI-FBA constraint. This was investigated for maximizing medium-chain-length polyhydroxyalkanoate (mcl-PHA) production from Pseudomonas putida KT2440. In Case Study 3, ToMI-FBA revealed a strategy of adding cellobiose as a means to increase ethanol selectivity during the stationary growth phase of Clostridium acetobutylicum ATCC 824. This strategy was also validated experimentally. Finally, in Case Study 4, B. subtilis was identified as a superior host to Escherichia coli, Saccharomyces cerevisiae, and Synechocystis PCC6803 for the production of artemisinate.
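The described augmentation of the stoichiometric matrix, one extra row and one extra column, can be sketched as follows; the tiny toy network and atom counts are illustrative, not taken from any published model:

```python
def add_tomi_constraint(S, atoms_per_flux):
    """Append the ToMI row and column to a stoichiometric matrix S
    (metabolites x reactions). atoms_per_flux[j] is the number of atoms
    carried into the cell per unit flux of transport reaction j (0 for
    purely intracellular reactions). The new column is a pseudo-reaction
    draining the ToMI pseudo-metabolite, so at steady state its flux
    equals the total membrane atom influx and can be bounded directly."""
    S_new = [row[:] + [0.0] for row in S]       # old metabolites: no ToMI drain
    tomi_row = list(atoms_per_flux) + [-1.0]    # atoms in, drained by pseudo-flux
    S_new.append(tomi_row)
    return S_new

# Toy network: 2 metabolites, 3 reactions (two transporters, one internal).
S = [[1.0, 0.0, -1.0],
     [0.0, 1.0,  1.0]]
atoms = [6.0, 1.0, 0.0]   # e.g. 6 C per glucose uptake, 1 N per ammonium uptake
S_tomi = add_tomi_constraint(S, atoms)
```

Bounding the pseudo-reaction flux then limits total atom influx without prescribing which transporters carry it, which is the point of the constraint.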
Sedlar, F.; Turpin, E.; Kerkez, B.
2014-12-01
As megacities around the world continue to develop at breakneck speeds, future development, investment, and social wellbeing are threatened by a number of environmental and social factors. Chief among these is frequent, persistent, and unpredictable urban flooding. Jakarta, Indonesia, with a population of 28 million, is a prime example of a city plagued by such flooding. Yet although Jakarta has ample hydraulic infrastructure already in place, with more being constructed, the increasing severity of the flooding it experiences stems not from a lack of hydraulic infrastructure but rather from the failure of existing infrastructure. As was demonstrated during the most recent floods in Jakarta, this failure is often the result of excessive amounts of trash in the flood canals. This trash clogs pumps and reduces the overall system capacity. Despite this critical weakness of flood control in Jakarta, no data exist on the overall amount of trash in the flood canals, much less on how it varies temporally and spatially. The recent availability of low-cost photography provides a means to obtain such data. Time-lapse photography postprocessed with computer vision algorithms yields a low-cost, remote, and automatic solution to measuring trash fluxes. When combined with the measurement of key hydrological parameters, a thorough understanding of the relationship between trash fluxes and the hydrology of massive urban areas becomes possible. This work examines algorithm development, quantifying trash parameters, and hydrological measurements, followed by data assimilation into existing hydraulic and hydrological models of Jakarta. The insights afforded by such an approach allow for more efficient operation of hydraulic infrastructure, knowledge of when and where critical levels of trash originate, and the opportunity for community outreach - which is ultimately needed to reduce the trash in the flood canals of Jakarta and megacities around the world.
Muramatsu, Kanako; Furumi, Shinobu; Daigo, Motomasa
2015-10-01
We plan to estimate gross primary production (GPP) using the SGLI sensor on board the GCOM-C1 satellite after it is launched in 2017 by the Japan Aerospace Exploration Agency, as we have developed a GPP estimation algorithm that uses SGLI sensor data. The characteristics of this GPP estimation method correspond to photosynthesis. The rate of plant photosynthesis depends on the plant's photosynthesis capacity and the degree to which photosynthesis is suppressed. The photosynthesis capacity depends on the chlorophyll content of leaves, which is a plant physiological parameter, and the degree of suppression of photosynthesis depends on weather conditions. The framework of the estimation method to determine the light-response curve parameters was developed using flux and satellite data in a previous study [1]. We estimated one of the light-response curve parameters based on the linear relationship between GPP capacity at 2000 μmol m^-2 s^-1 of photosynthetically active radiation and a chlorophyll index (CIgreen) [2,3]. The relationship was determined for seven plant functional types. Decreases in the photosynthetic rate are controlled by stomatal opening and closing. Leaf stomatal conductance is maximal during the morning and decreases in the afternoon. We focused on daily changes in leaf stomatal conductance. We used open shrub flux data and MODIS reflectance data to develop an algorithm for a canopy. We first evaluated the daily changes in GPP capacity estimated from CIgreen and photosynthetically active radiation using light-response curves, and GPP observed during a flux experiment. Next, we estimated the canopy conductance using flux data and a big-leaf model based on the Penman-Monteith equation [4]. We estimated GPP by multiplying GPP capacity by the normalized canopy conductance at 10:30, the time of satellite observations. The results showed that the estimated daily change in GPP was almost the same as the observed GPP. From this result, we defined a normalized canopy
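The two-step estimate described above, a light-response capacity scaled by normalized conductance, can be sketched as below. The coefficients linking CIgreen to photosynthetic capacity and the quantum yield are placeholders, not the fitted values from the study:

```python
def gpp_capacity(ci_green, par, a=20.0, b=2.0, alpha=0.05):
    """Light-response estimate of GPP capacity (umol m-2 s-1).
    Pmax is taken as a hypothetical linear function of the chlorophyll
    index CIgreen (a, b are placeholder coefficients); the light response
    is a rectangular hyperbola in PAR with initial quantum yield alpha."""
    p_max = a * ci_green + b
    return p_max * alpha * par / (alpha * par + p_max)

def gpp(ci_green, par, g_norm):
    """Downscale GPP capacity by the normalized canopy conductance
    (1.0 = fully open stomata; smaller in the afternoon)."""
    return gpp_capacity(ci_green, par) * g_norm

morning = gpp(2.0, par=1500.0, g_norm=1.0)
afternoon = gpp(2.0, par=1500.0, g_norm=0.6)
```

The hyperbola saturates toward Pmax at high PAR, so the CIgreen-derived capacity sets the ceiling while conductance modulates the realized GPP through the day.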
Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator); Baum, Bryan A.; Charlock, Thomas P.; Green, Richard N.; Lee, Robert B., III; Minnis, Patrick; Smith, G. Louis; Coakley, J. A.; Randall, David R.
1995-01-01
The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and the Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 4 details the advanced CERES techniques for computing surface and atmospheric radiative fluxes (using the coincident CERES cloud property and top-of-the-atmosphere (TOA) flux products) and for averaging the cloud properties and TOA, atmospheric, and surface radiative fluxes over various temporal and spatial scales. CERES attempts to match the observed TOA fluxes with radiative transfer calculations that use as input the CERES cloud products and NOAA National Meteorological Center analyses of temperature and humidity. Slight adjustments in the cloud products are made to obtain agreement of the calculated and observed TOA fluxes. The computed products include shortwave and longwave fluxes from the surface to the TOA. The CERES instantaneous products are averaged on a 1.25-deg latitude-longitude grid, then interpolated to produce global, synoptic maps to TOA fluxes and cloud properties by using 3-hourly, normalized radiances from geostationary meteorological satellites. Surface and atmospheric fluxes are computed by using these interpolated quantities. Clear-sky and total fluxes and cloud properties are then averaged over various scales.
Directory of Open Access Journals (Sweden)
Carlos Antonio Costa dos Santos
2010-09-01
Full Text Available The main objective of this study was to contribute to the understanding of the estimation and spatial analysis of surface energy fluxes using the S-SEBI algorithm and Landsat 5 TM images, and to validate the results against measurements obtained at a micrometeorological tower. The study area was the Frutacor farm, with an area planted with banana crops adjacent to bare soil and native vegetation, located in the municipality of Quixeré, in the Low Jaguaribe basin, Ceará State. The S-SEBI algorithm proved to be a good estimator of the surface energy fluxes, in agreement with studies developed in different parts of the world. The comparison between the S-SEBI estimates and the micrometeorological measurements showed good agreement, evidencing that the S-SEBI algorithm is a promising tool for obtaining the spatial distribution of surface energy fluxes over semi-arid regions.
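A sketch of the S-SEBI partitioning applied in such studies is given below; the linear albedo-edge coefficients are invented for illustration, whereas in practice they are fitted to the surface temperature vs. albedo scatter of the image:

```python
def s_sebi_fluxes(t_surf, albedo, rn, g):
    """Partition available energy (Rn - G, W m-2) with the S-SEBI
    evaporative fraction. The 'dry' (sensible-heat-dominated) and 'wet'
    (evaporation-dominated) temperature edges are linear functions of
    albedo fitted to the image scatter; the coefficients here are made up."""
    t_dry = 320.0 - 10.0 * albedo   # hypothetical dry edge (K)
    t_wet = 295.0 + 5.0 * albedo    # hypothetical wet edge (K)
    lam = (t_dry - t_surf) / (t_dry - t_wet)    # evaporative fraction
    lam = max(0.0, min(1.0, lam))
    le = lam * (rn - g)             # latent heat flux
    h = (1.0 - lam) * (rn - g)      # sensible heat flux
    return le, h

le, h = s_sebi_fluxes(t_surf=305.0, albedo=0.2, rn=500.0, g=50.0)
```

By construction the two fluxes always close the surface energy balance, LE + H = Rn - G, which is one reason the method compares well against tower measurements.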
DEFF Research Database (Denmark)
Mahnke, Martina; Uprichard, Emma
2014-01-01
changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...
Critical flux determination by flux-stepping
DEFF Research Database (Denmark)
Beier, Søren; Jonsson, Gunnar Eigil
2010-01-01
In membrane filtration related scientific literature, step-by-step determined critical fluxes are often reported. Using a dynamic microfiltration device, it is shown that critical fluxes determined from two different flux-stepping methods are dependent upon operational parameters such as step..., such values are more or less useless in themselves as critical flux predictors, and constant-flux verification experiments have to be conducted to check whether the determined critical fluxes can predict sustainable flux regimes. However, it is shown that using the step-by-step predicted critical fluxes as start...
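The step-by-step determination discussed here can be sketched as follows; the toy fouling model, step values, and drift threshold are illustrative assumptions:

```python
def critical_flux_by_stepping(tmp_slope_at, fluxes, slope_limit=0.1):
    """Step the permeate flux upward and record the transmembrane-pressure
    drift (dTMP/dt) at each step; take as 'critical flux' the highest step
    whose TMP drift stays below the chosen limit (e.g. kPa/min). The result
    clearly depends on the step sizes and the limit, which is the paper's
    point about operational parameters."""
    critical = None
    for j in fluxes:
        if tmp_slope_at(j) < slope_limit:
            critical = j
        else:
            break                 # fouling onset detected: stop stepping
    return critical

def toy_tmp_slope(flux):
    # Toy membrane: negligible TMP drift below 40 L m-2 h-1, fouling above.
    return 0.0 if flux < 40.0 else 0.05 * (flux - 40.0) + 0.2

j_crit = critical_flux_by_stepping(toy_tmp_slope, [10, 20, 30, 40, 50, 60])
```

With 10-unit steps the method reports 30 even though the toy fouling threshold is 40, illustrating how the step size alone biases the determined value.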
Kim, Ah Yeong; Lee, Min Woo; Cha, Dong Ik; Lim, Hyo Keun; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga
2016-07-01
The aim of this study was to compare the accuracy of and the time required for image fusion between real-time ultrasonography (US) and pre-procedural magnetic resonance (MR) images using automatic registration by a liver surface only method and automatic registration by a liver surface and vessel method. This study consisted of 20 patients referred for planning US to assess the feasibility of percutaneous radiofrequency ablation or biopsy for focal hepatic lesions. The first 10 consecutive patients were evaluated by an experienced radiologist using the automatic registration by liver surface and vessel method, whereas the remaining 10 patients were evaluated using the automatic registration by liver surface only method. For all 20 patients, image fusion was automatically executed after following the protocols and fused real-time US and MR images moved synchronously. The accuracy of each method was evaluated by measuring the registration error, and the time required for image fusion was assessed by evaluating the recorded data using in-house software. The results obtained using the two automatic registration methods were compared using the Mann-Whitney U-test. Image fusion was successful in all 20 patients, and the time required for image fusion was significantly shorter with the automatic registration by liver surface only method than with the automatic registration by liver surface and vessel method (median: 43.0 s, range: 29-74 s vs. median: 83.0 s, range: 46-101 s; p = 0.002). The registration error did not significantly differ between the two methods (median: 4.0 mm, range: 2.1-9.9 mm vs. median: 3.7 mm, range: 1.8-5.2 mm; p = 0.496). The automatic registration by liver surface only method offers faster image fusion between real-time US and pre-procedural MR images than does the automatic registration by liver surface and vessel method. However, the degree of accuracy was similar for the two methods. Copyright © 2016 World Federation for Ultrasound
OpenFLUX: efficient modelling software for 13C-based metabolic flux analysis
Directory of Open Access Journals (Sweden)
Nielsen Lars K
2009-05-01
Full Text Available Abstract Background The quantitative analysis of metabolic fluxes, i.e., in vivo activities of intracellular enzymes and pathways, provides key information on biological systems in systems biology and metabolic engineering. It is based on a comprehensive approach combining (i) tracer cultivation on 13C substrates, (ii) 13C labelling analysis by mass spectrometry and (iii) mathematical modelling for experimental design, data processing, flux calculation and statistics. Whereas the cultivation and the analytical parts are fairly advanced, a lack of appropriate modelling software solutions for all modelling aspects in flux studies is limiting the application of metabolic flux analysis. Results We have developed OpenFLUX as a user-friendly, yet flexible software application for small- and large-scale 13C metabolic flux analysis. The application is based on the new Elementary Metabolite Unit (EMU) framework, significantly enhancing computation speed for flux calculation. From simple notation of metabolic reaction networks defined in a spreadsheet, the OpenFLUX parser automatically generates MATLAB-readable metabolite and isotopomer balances, thus strongly facilitating model creation. The model can be used to perform experimental design, parameter estimation and sensitivity analysis either using the built-in gradient-based search or Monte Carlo algorithms or in user-defined algorithms. Exemplified for a microbial flux study with 71 reactions, 8 free flux parameters and mass isotopomer distributions of 10 metabolites, OpenFLUX allowed the EMU-based model to be compiled automatically from an Excel file containing metabolic reactions and carbon transfer mechanisms, showing its user-friendliness. It reliably reproduced the published data, and optimum flux distributions for the network under study were found quickly ... Conclusion We have developed a fast, accurate application to perform steady-state 13C metabolic flux analysis. OpenFLUX will strongly facilitate and
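The core operation of the EMU framework, convolving mass isotopomer distributions (MIDs) when two EMU fragments condense, can be sketched as:

```python
def convolve_mid(a, b):
    """Convolve two mass isotopomer distributions. The MID of a metabolite
    formed by condensing two EMU fragments is the convolution of their MIDs:
    out[m] is the probability that the combined fragment carries m labeled
    (13C) atoms in total."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# Two 1-carbon EMUs, each 50 % 13C-labeled, condense to a 2-carbon fragment.
mid = convolve_mid([0.5, 0.5], [0.5, 0.5])
```

Because each EMU tracks only the mass distribution of a fragment rather than full positional isotopomers, the balances stay small, which is where the framework's speedup comes from.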
Direct Torque Control Induction Motor Drive with Improved Flux Response
Directory of Open Access Journals (Sweden)
Bhoopendra Singh
2012-01-01
Full Text Available Accurate flux estimation and control of the stator flux by the flux control loop is the determining factor in effective implementation of the DTC algorithm. In this paper a comparison of voltage-model-based flux estimation techniques for flux response improvement is carried out. The effectiveness of these methods is judged on the basis of Root Mean Square Flux Error (RMSFE), Total Harmonic Distortion (THD) of the stator current, and dynamic flux response. The theoretical aspects of these methods are discussed and a comparative analysis is provided with emphasis on digital signal processor (DSP) based controller implementation. The effectiveness of the proposed flux estimation algorithm is investigated through simulation and experimentally validated on a test drive.
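The DC-offset drift that motivates these voltage-model variants (and the estimator in the heading above) can be demonstrated with a minimal simulation; the back-EMF signal, offset, and filter time constant below are illustrative numbers, not drive parameters from the paper:

```python
import math

def estimate_flux(offset, tau=None, dt=1e-4, t_end=10.0, omega=2 * math.pi * 50):
    """Integrate a back-EMF e(t) = cos(omega*t) + offset to estimate stator
    flux (here the resistive drop R*i is folded into the offset). tau=None
    gives the pure integrator; a finite tau gives the leaky (low-pass)
    integrator commonly used to suppress DC drift."""
    psi, t = 0.0, 0.0
    while t < t_end:
        e = math.cos(omega * t) + offset
        leak = psi / tau if tau else 0.0
        psi += (e - leak) * dt          # forward-Euler integration step
        t += dt
    return psi

psi_pure = estimate_flux(offset=0.1)             # drifts roughly as offset * t
psi_leaky = estimate_flux(offset=0.1, tau=0.05)  # settles near offset * tau
```

The pure integrator accumulates the offset without bound, while the leaky version trades a small phase/magnitude error at the fundamental frequency for a bounded DC response, exactly the trade-off the observer-based schemes try to avoid.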
California's Future Carbon Flux
Xu, L.; Pyles, R. D.; Paw U, K.; Gertz, M.
2008-12-01
The diversity of the climate and vegetation systems in the state of California provides a unique opportunity to study carbon dioxide exchange between the terrestrial biosphere and the atmosphere. In order to accurately calculate the carbon flux, this study couples the sophisticated analytical surface layer model ACASA (Advanced Canopy-Atmosphere-Soil Algorithm, developed at the University of California, Davis) with the newest version of the mesoscale model WRF (the Weather Research & Forecasting Model, developed by NCAR and several other agencies). As a multilayer, steady-state model, ACASA incorporates higher-order representations of vertical temperature variations, CO2 concentration, radiation, wind speed, turbulent statistics, and plant physiology. The WRF-ACASA coupling is designed to identify how multiple environmental factors, in particular climate variability, population density, and vegetation distribution, impact future carbon cycle predictions across a wide geographical range such as California.
Validating modeled turbulent heat fluxes across large freshwater surfaces
Lofgren, B. M.; Fujisaki-Manome, A.; Gronewold, A.; Anderson, E. J.; Fitzpatrick, L.; Blanken, P.; Spence, C.; Lenters, J. D.; Xiao, C.; Charusambot, U.
2017-12-01
Turbulent fluxes of latent and sensible heat are important physical processes that influence the energy and water budgets of the Great Lakes. Validation and improvement of bulk flux algorithms to simulate these turbulent heat fluxes are critical for accurate prediction of hydrodynamics, water levels, weather, and climate over the region. Here we consider five heat flux algorithms from several model systems: the Finite-Volume Community Ocean Model (FVCOM), the Weather Research and Forecasting model, and the Large Lake Thermodynamics Model, which are used in research and operational environments and concentrate on different aspects of the Great Lakes' physical system, but interface at the lake surface. The heat flux algorithms were isolated from each model and driven by meteorological data from over-lake stations in the Great Lakes Evaporation Network. The simulation results were compared with eddy covariance flux measurements at the same stations. All models show the capacity to reproduce the seasonal cycle of the turbulent heat fluxes. Overall, the Coupled Ocean Atmosphere Response Experiment (COARE) algorithm in FVCOM has the best agreement with eddy covariance measurements. Simulations with the other four algorithms are overall improved by updating the parameterization of the roughness length scales for temperature and humidity. Agreement between modelled and observed fluxes notably varied with the geographical locations of the stations. For example, at the Long Point station in Lake Erie, observed fluxes are likely influenced by the upwind land surface, while the simulations do not account for the land surface influence, and therefore the agreement there is generally worse.
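All of the compared bulk algorithms share the same basic aerodynamic form, sketched below with constant, illustrative transfer coefficients; real schemes such as COARE make these coefficients functions of atmospheric stability and surface roughness:

```python
RHO = 1.2          # air density, kg m-3
CP = 1004.0        # specific heat of air at constant pressure, J kg-1 K-1
LV = 2.5e6         # latent heat of vaporization, J kg-1

def bulk_fluxes(wind, t_surf, t_air, q_surf, q_air, ch=1.3e-3, ce=1.3e-3):
    """Bulk aerodynamic sensible (H) and latent (LE) heat fluxes in W m-2,
    positive upward (surface warmer/moister than the air). wind in m s-1,
    temperatures in degrees C or K (only the difference matters), specific
    humidities in kg kg-1. ch, ce are illustrative exchange coefficients."""
    h = RHO * CP * ch * wind * (t_surf - t_air)
    le = RHO * LV * ce * wind * (q_surf - q_air)
    return h, le

h, le = bulk_fluxes(wind=8.0, t_surf=12.0, t_air=8.0, q_surf=0.009, q_air=0.006)
```

Updating the roughness-length parameterizations, as the abstract describes, amounts to changing how ch and ce are computed from the measured mean quantities.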
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic
Scaling-up of CO2 fluxes to assess carbon sequestration in rangelands of Central Asia
Bruce K. Wylie; Tagir G. Gilmanov; Douglas A. Johnson; Nicanor Z. Saliendra; Larry L. Tieszen; Ruth Anne F. Doyle; Emilio A. Laca
2006-01-01
Flux towers provide temporal quantification of local carbon dynamics at specific sites. The number and distribution of flux towers, however, are generally inadequate to quantify carbon fluxes across a landscape or ecoregion. Thus, scaling up of flux tower measurements through use of algorithms developed from remote sensing and GIS data is needed for spatial...
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
Tel, G.
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of
Radon flux measurement methodologies
International Nuclear Information System (INIS)
Nielson, K.K.; Rogers, V.C.
1984-01-01
Five methods for measuring radon fluxes are evaluated: the accumulator can, a small charcoal sampler, a large-area charcoal sampler, the "Big Louie" charcoal sampler, and the charcoal tent sampler. An experimental comparison of the five flux measurement techniques was also conducted. Excellent agreement was obtained between the measured radon fluxes and fluxes predicted from radium and emanation measurements.
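The accumulator-can method infers the flux from the initial rate of concentration rise in a sealed volume; a sketch with made-up numbers:

```python
def radon_flux_from_accumulator(times, concentrations, volume, area):
    """Estimate the radon flux J (Bq m-2 s-1) from the initial rise of the
    concentration C (Bq m-3) in a can of the given volume (m3) sealed over
    an area (m2). At early times dC/dt = J * A / V, so J = (dC/dt) * V / A;
    decay and back-diffusion are neglected here. The slope dC/dt is taken
    from an ordinary least-squares fit."""
    n = len(times)
    t_mean = sum(times) / n
    c_mean = sum(concentrations) / n
    slope = (sum((t - t_mean) * (c - c_mean)
                 for t, c in zip(times, concentrations))
             / sum((t - t_mean) ** 2 for t in times))
    return slope * volume / area

# Synthetic accumulation: a true flux of 0.02 Bq m-2 s-1 into a 10 L can
# covering 0.05 m2 gives dC/dt = J * A / V = 0.1 Bq m-3 s-1.
times = [0.0, 60.0, 120.0, 180.0]
conc = [0.1 * t for t in times]
j = radon_flux_from_accumulator(times, conc, volume=0.010, area=0.05)
```

The neglected decay and back-diffusion terms are what limit the usable accumulation time in practice, which is why only the early, linear part of the rise is fitted.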
Fast flux module detection using matroid theory.
Reimers, Arne C; Bruggeman, Frank J; Olivier, Brett G; Stougie, Leen
2015-05-01
Flux balance analysis (FBA) is one of the most often applied methods on genome-scale metabolic networks. Although FBA uniquely determines the optimal yield, the pathway that achieves this is usually not unique. The analysis of the optimal-yield flux space has been an open challenge. Flux variability analysis captures only some properties of the flux space, while elementary mode analysis is intractable due to the enormous number of elementary modes. However, it has been found by Kelk et al. (2012) that the space of optimal-yield fluxes decomposes into flux modules. These decompositions allow a much easier but still comprehensive analysis of the optimal-yield flux space. Using the mathematical definition of module introduced by Müller and Bockmayr (2013b), we discovered useful connections to matroid theory, through which efficient algorithms enable us to compute the decomposition into modules in a few seconds for genome-scale networks. Since every module can be represented by a single reaction that captures its function, we also present a method that uses this decomposition to visualize the interplay of modules. We expect the new method to replace flux variability analysis in the pipelines for metabolic networks.
Surface Flux Modeling for Air Quality Applications
Directory of Open Access Journals (Sweden)
Limei Ran
2011-08-01
Full Text Available For many gasses and aerosols, dry deposition is an important sink of atmospheric mass. Dry deposition fluxes are also important sources of pollutants to terrestrial and aquatic ecosystems. The surface fluxes of some gases, such as ammonia, mercury, and certain volatile organic compounds, can be upward into the air as well as downward to the surface and therefore should be modeled as bi-directional fluxes. Model parameterizations of dry deposition in air quality models have been represented by simple electrical resistance analogs for almost 30 years. Uncertainties in surface flux modeling in global to mesoscale models are being slowly reduced as more field measurements provide constraints on parameterizations. However, at the same time, more chemical species are being added to surface flux models as air quality models are expanded to include more complex chemistry and are being applied to a wider array of environmental issues. Since surface flux measurements of many of these chemicals are still lacking, resistances are usually parameterized using simple scaling by water or lipid solubility and reactivity. Advances in recent years have included bi-directional flux algorithms that require a shift from pre-computation of deposition velocities to fully integrated surface flux calculations within air quality models. Improved modeling of the stomatal component of chemical surface fluxes has resulted from improved evapotranspiration modeling in land surface models and closer integration between meteorology and air quality models. Satellite-derived land use characterization and vegetation products and indices are improving model representation of spatial and temporal variations in surface flux processes. This review describes the current state of chemical dry deposition modeling, recent progress in bi-directional flux modeling, synergistic model development research with field measurements, and coupling with meteorological land surface models.
Ruzmaikin, A.
1997-01-01
Observations show that newly emerging flux tends to appear on the Solar surface at sites where there is flux already. This results in clustering of solar activity. Standard dynamo theories do not predict this effect.
International Nuclear Information System (INIS)
Madhavi, V.; Phatak, P.R.; Bahadur, C.; Bayala, A.K.; Jakati, R.K.; Sathian, V.
2003-01-01
Full text: A compact size neutron flux monitor has been developed incorporating standard boards developed for smart radiation monitors. The sensitivity of the monitors is 0.4 cps/nv. It has been tested up to 2075 nv flux with standard neutron sources. It shows convincing results even in high flux areas, such as 6 m away from the accelerator in RMC (Parel), at 10^6-10^7 nv. These monitors have a local and remote display, an alarm function with potential-free contacts for centralized control, and additional provision of connectivity via RS485/Ethernet. This paper describes the construction, working and results of the above flux monitor
Campbell-Brown, M. D.; Braid, D.
2011-01-01
The flux of meteoroids, or number of meteoroids per unit area per unit time, is critical for calibrating models of meteoroid stream formation and for estimating the hazard to spacecraft from shower and sporadic meteors. Although observations of meteors in the millimetre to centimetre size range are common, flux measurements (particularly for sporadic meteors, which make up the majority of meteoroid flux) are less so. It is necessary to know the collecting area and collection time for a given set of observations, and to correct for observing biases and the sensitivity of the system. Previous measurements of sporadic fluxes are summarized in Figure 1; the values are given as a total number of meteoroids striking the earth in one year to a given limiting mass. The Grün et al. (1985) flux model is included in the figure for reference. Fluxes for sporadic meteoroids impacting the Earth have been calculated for objects in the centimeter size range using Super-Schmidt observations (Hawkins & Upton, 1958); this study used about 300 meteors, and used only the physical area of overlap of the cameras at 90 km to calculate the flux, corrected for the angular speed of meteors, since a large angular speed reduces the maximum brightness of the meteor on the film, and for radiant elevation, which takes into account the geometric reduction in flux when the meteors are not perpendicular to the horizontal. They bring up corrections for both partial trails (which tend to increase the collecting area) and incomplete overlap at heights other than 90 km (which tends to decrease it) as effects that will affect the flux, but estimated that the two effects cancelled one another. Halliday et al. (1984) calculated the flux of meteorite-dropping fireballs with fragment masses greater than 50 g, over the physical area of sky accessible to the MORP fireball cameras, counting only observations in clear weather. In the micron size range, LDEF measurements of small craters on spacecraft have been used to
Constrained Minimization Algorithms
Lantéri, H.; Theys, C.; Richard, C.
2013-03-01
In this paper, we consider the inverse problem of restoring an unknown signal or image, knowing the transformation suffered by the unknowns. More specifically, we deal with transformations described by a linear model linking the unknown signal to an unnoisy version of the data. The measured data are generally corrupted by noise. This aspect of the problem is presented in the introduction for general models. In Section 2, we introduce the linear models, and some examples of linear inverse problems are presented. The specificities of inverse problems are briefly mentioned and shown on a simple example. In Section 3, we give some information on classical distances or divergences. Indeed, an inverse problem is generally solved by minimizing a discrepancy function (divergence or distance) between the measured data and the (here linear) model of such data. Section 4 deals with likelihood maximization and its links with divergence minimization. The physical constraints on the solution are indicated, and the Split Gradient Method (SGM) is detailed in Section 5. A constraint on the lower bound of the solution is introduced first; the positivity constraint is a particular case of such a constraint. We show how to obtain, in a strict sense, the multiplicative form of the algorithms. In a second step, the so-called flux constraint is introduced, and a complete algorithmic form is given. In Section 6, we give some brief information on acceleration methods for such algorithms. A conclusion is given in Section 7.
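A minimal sketch of a split-gradient-style multiplicative update can illustrate the ideas above for a nonnegative least-squares problem. The tiny system, the ISRA-like splitting of the gradient, and the renormalization used for the flux (sum) constraint are illustrative assumptions, not the authors' exact SGM:

```python
def matvec(A, x):
    """Matrix-vector product for a list-of-rows matrix."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def multiplicative_nnls(A, y, iters=50, flux=None):
    """Multiplicative (ISRA-like) update for min ||Ax - y||^2 with x >= 0.
    The gradient splits as grad = A^T A x - A^T y; the update
    x <- x * (A^T y) / (A^T A x) keeps x strictly positive, so the
    positivity constraint holds by construction."""
    At = transpose(A)
    x = [1.0] * len(A[0])
    for _ in range(iters):
        num = matvec(At, y)                 # positive part of the split gradient
        den = matvec(At, matvec(A, x))      # negative part, evaluated at x
        x = [xi * n / d for xi, n, d in zip(x, num, den)]
        if flux is not None:                # project onto the flux constraint sum(x) = flux
            s = sum(x)
            x = [xi * flux / s for xi in x]
    return x
```

With `A` the identity, the update recovers `y` in one step; the optional `flux` argument then rescales the solution so its components sum to the prescribed total.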
Hu, T C
2002-01-01
Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9
Indian Academy of Sciences (India)
2016-01-27
The most probable initial magnetic configuration of a CME is a flux rope consisting of twisted field lines which fill the whole volume of a dark coronal cavity. The flux ropes can be in stable equilibrium in the coronal magnetic field for weeks and even months, but suddenly they lose their stability and erupt with ...
Stochastic flux analysis of chemical reaction networks.
Kahramanoğulları, Ozan; Lynch, James F
2013-12-07
Chemical reaction networks provide an abstraction scheme for a broad range of models in biology and ecology. The two common means for simulating these networks are the deterministic and the stochastic approaches. The traditional deterministic approach, based on differential equations, enjoys a rich set of analysis techniques, including a treatment of reaction fluxes. However, the discrete stochastic simulations, which provide advantages in some cases, lack a quantitative treatment of network fluxes. We describe a method for flux analysis of chemical reaction networks, where flux is given by the flow of species between reactions in stochastic simulations of the network. Extending discrete event simulation algorithms, our method constructs several data structures, and thereby reveals a variety of statistics about resource creation and consumption during the simulation. We use these structures to quantify the causal interdependence and relative importance of the reactions at arbitrary time intervals with respect to the network fluxes. This allows us to construct reduced networks that have the same flux behavior, and to compare these networks, also with respect to their time series. We demonstrate our approach on an extended example based on a published ODE model of Rho GTP-binding proteins, and on other models from biology and ecology. We provide a fully stochastic treatment of flux analysis. As in deterministic analysis, our method delivers the network behavior in terms of species transformations. Moreover, our stochastic analysis can be applied, not only at steady state, but at arbitrary time intervals, and used to identify the flow of specific species between specific reactions. Our case study of Rho GTP-binding proteins reveals the role played by the cyclic reverse fluxes in tuning the behavior of this network.
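The core idea of tracking fluxes in a stochastic simulation can be sketched with Gillespie's direct method, recording per-reaction firing counts as a toy stand-in for the paper's richer data structures. The reversible-isomerization network used below is an invented example, not the authors' case study:

```python
import random

def gillespie_fluxes(x0, reactions, t_end, seed=1):
    """Gillespie direct-method simulation that also records per-reaction
    firing counts ('fluxes'). Each reaction is (rate, stoichiometry dict);
    propensities are mass-action with unit stoichiometric coefficients."""
    rng = random.Random(seed)
    x = dict(x0)
    t = 0.0
    firings = [0] * len(reactions)
    while t < t_end:
        # propensity = rate * product of reactant counts
        props = []
        for rate, stoich in reactions:
            a = rate
            for sp, change in stoich.items():
                if change < 0:
                    a *= x[sp]
            props.append(a)
        total = sum(props)
        if total == 0:
            break
        t += rng.expovariate(total)         # time to next reaction event
        if t >= t_end:
            break
        r = rng.uniform(0, total)           # pick which reaction fires
        k = 0
        while k < len(props) - 1 and r > props[k]:
            r -= props[k]
            k += 1
        for sp, change in reactions[k][1].items():
            x[sp] += change
        firings[k] += 1
    return x, firings
```

For a reversible isomerization `A <-> B`, the final count of `B` equals forward firings minus backward firings, which is exactly the net flux between the two reactions.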
DEFF Research Database (Denmark)
Markham, Annette
layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also...
Directory of Open Access Journals (Sweden)
Anna Bourmistrova
2011-02-01
Full Text Available The autodriver algorithm is an intelligent method to eliminate the need of steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on coinciding the actual vehicle center of rotation and the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increase of forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
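The kinematic condition of steering mentioned above can be sketched for a bicycle-model 4WS vehicle: each wheel's velocity must be perpendicular to the line joining it to the rotation center, which fixes the front and rear steering angles once a desired center is chosen. The geometry (center placed level with the CG at lateral offset R) and the symbol names are simplifying assumptions, not the paper's full dynamic model:

```python
import math

def kinematic_4ws_angles(a, b, R):
    """Front/rear steering angles (bicycle-model 4WS sketch) that place the
    kinematic center of rotation at lateral distance R from the CG, level
    with the CG. a, b: distances from CG to the front/rear axles.
    Derivation: each wheel's velocity is perpendicular to its line to the
    center, giving tan(delta_f) = a/R and tan(delta_r) = -b/R."""
    delta_f = math.atan2(a, R)     # front wheels steer toward the turn
    delta_r = -math.atan2(b, R)    # rear wheels steer opposite (4WS)
    return delta_f, delta_r

def center_offset(a, delta_f):
    """Recover the lateral offset R of the rotation center from the front
    steering angle (consistency check of the kinematic condition)."""
    return a / math.tan(delta_f)
```

As the turn radius R grows, both angles shrink toward zero, recovering straight-line driving.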
Energy Technology Data Exchange (ETDEWEB)
Grefenstette, J.J.
1994-12-31
Genetic algorithms solve problems by using principles inspired by natural population genetics: They maintain a population of knowledge structures that represent candidate solutions, and then let that population evolve over time through competition and controlled variation. GAs are being applied to a wide range of optimization and learning problems in many domains.
Software applications for flux balance analysis.
Lakshmanan, Meiyappan; Koh, Geoffrey; Chung, Bevan K S; Lee, Dong-Yup
2014-01-01
Flux balance analysis (FBA) is a widely used computational method for characterizing and engineering intrinsic cellular metabolism. The increasing number of its successful applications and growing popularity are possibly attributable to the availability of specific software tools for FBA. Each tool has its unique features and limitations with respect to operational environment, user interface and supported analysis algorithms. Presented herein is an in-depth evaluation of currently available FBA applications, focusing mainly on usability, functionality, graphical representation and inter-operability. Overall, most of the applications are able to perform the basic features of model creation and FBA simulation. The COBRA toolbox, OptFlux and FASIMU are versatile enough to support advanced in silico algorithms to identify environmental and genetic targets for strain design. SurreyFBA, WEbcoli, Acorn, FAME, GEMSiRV and MetaFluxNet are distinct tools that provide user-friendly interfaces for model handling. In terms of software architecture, FBA-SimVis and OptFlux have flexible environments, as they enable the plug-in/add-on feature to aid prospective functional extensions. Notably, an increasing trend towards the implementation of more tailored e-services, such as a central model repository and assistance to collaborative efforts, was observed among the web-based applications with the help of advanced web technologies. Furthermore, the most recent applications, such as the Model SEED, FAME, MetaFlux and MicrobesFlux, have even included several routines to facilitate the reconstruction of genome-scale metabolic models. Finally, the future directions of FBA applications are briefly discussed for the benefit of potential tool developers.
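FBA itself is a linear program (maximize an objective flux subject to steady state, S·v = 0, and flux bounds), which the tools above solve in full generality. For a linear pathway the steady-state constraint forces every reaction to carry the same flux, so the optimum collapses to the tightest bound; the sketch below uses that special case with an invented network, not any of the listed tools' APIs:

```python
def chain_fba(upper_bounds):
    """Toy flux balance analysis for a linear pathway, e.g.
    S_ext -> A -> B -> P_ext. At steady state every reaction in the chain
    must carry the same flux, so maximizing the output flux reduces to the
    bottleneck: the smallest upper bound. Real FBA solves a general LP."""
    v = min(upper_bounds.values())
    return {rxn: v for rxn in upper_bounds}

# hypothetical reaction names and capacities, for illustration only
fluxes = chain_fba({'uptake': 10.0, 'r1': 6.5, 'r2': 8.0, 'export': 9.0})
```

Here every reaction ends up carrying the bottleneck flux set by `r1`, the pathway's most constrained step.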
Wilson, Andrew (Inventor); Punnoose, Andrew (Inventor); Strausser, Katherine (Inventor); Parikh, Neil (Inventor)
2011-01-01
A directed flux motor described utilizes the directed magnetic flux of at least one magnet through ferrous material to drive different planetary gear sets to achieve capabilities in six actuated shafts that are grouped three to a side of the motor. The flux motor also utilizes an interwoven magnet configuration which reduces the overall size of the motor. The motor allows for simple changes to modify the torque to speed ratio of the gearing contained within the motor as well as simple configurations for any number of output shafts up to six. The changes allow for improved manufacturability and reliability within the design.
National Aeronautics and Space Administration — SolRad-Net (Solar Radiation Network) is an established network of ground-based sensors providing high-frequency solar flux measurements in quasi-realtime to the...
2004-01-01
Club night "Flux in Tallinn" of the international electronic art symposium ISEA2004, held at club Bon Bon. Estonia was represented by Ropotator, Ars Intel Inc., Urmas Puhkan, Joel Tammik, and Taavi Tulev (pseud. Wochtzchee). Club night coordinator: Andres Lõo
Determining Reactor Neutrino Flux
Cao, Jun
2011-01-01
Flux is an important source of uncertainties for a reactor neutrino experiment. It is determined from thermal power measurements, reactor core simulation, and knowledge of the neutrino spectra of fuel isotopes. Past reactor neutrino experiments have determined the flux to (2-3)% precision. Precision measurements of the mixing angle $\theta_{13}$ by reactor neutrino experiments in the coming years will use near-far detector configurations. Most uncertainties from the reactor will be canceled out. Understa...
Theoretical magnetic flux emergence
MacTaggart, David
2011-01-01
Magnetic flux emergence is the subject of how magnetic fields from the solar interior can rise and expand into the atmosphere to produce active regions. It is the link that joins dynamics in the convection zone with dynamics in the atmosphere. In this thesis, we study many aspects of magnetic flux emergence through mathematical modelling and computer simulations. Our primary aim is to understand the key physical processes that lie behind emergence. The first chapter intro...
Cheung, Mark C. M.; Isobe, Hiroaki
2014-07-01
Magnetic flux emergence from the solar convection zone into the overlying atmosphere is the driver of a diverse range of phenomena associated with solar activity. In this article, we introduce theoretical concepts central to the study of flux emergence and discuss how the inclusion of different physical effects (e.g., magnetic buoyancy, magnetoconvection, reconnection, magnetic twist, interaction with ambient field) in models impact the evolution of the emerging field and plasma.
3D Models of a Transversal Flux Inductor
Directory of Open Access Journals (Sweden)
POPA Monica
2014-05-01
Full Text Available This paper deals with 3D numerical models of a transverse flux inductor with a flexible electromagnetic configuration developed in the Flux3D software. The simplified 3D model is coupled with a simplex optimization algorithm in order to attain maximum uniformity of the transversal profile of the power developed within the metallic sheet. The complex 3D model is used for a thorough analysis of the device.
Automated reactivity anomaly surveillance in the Fast Flux Test Facility
International Nuclear Information System (INIS)
Knutson, B.J.; Harris, R.A.; Honeyman, D.J.; Shook, A.T.; Krohn, C.N.
1985-01-01
The automated technique for monitoring core reactivity during power operation used at the Fast Flux Test Facility (FFTF) is described. This technique relies on comparing predicted to measured rod positions to detect any anomalous (or unpredicted) core reactivity changes. It is implemented on the Plant Data System (PDS) computer and, thus, provides rapid indication of any abnormal core conditions. The prediction algorithms use thermal-hydraulic, control rod position and neutron flux sensor information to predict the core reactivity state
Neutron flux monitoring device
International Nuclear Information System (INIS)
Shimazu, Yoichiro.
1995-01-01
In a neutron flux monitoring device, there are disposed a neutron flux measuring means for outputting signals in accordance with the intensity of neutron fluxes, a calculation means for calculating a self power density spectrum at a frequency band suited to the object to be measured based on the output of the neutron flux measuring means, an alarm set value generation means for outputting an alarm set value as a comparative reference, and an alarm judging means for comparing the alarm set value with the output value of the calculation means to judge whether an alarm is required and to generate one accordingly. Namely, the time series of neutron flux signals is put to Fourier transformation for a predetermined period of time by the calculation means, and from the square sums of the real and imaginary components at each frequency, a self power density spectrum in the frequency band suited to the object to be measured is calculated. Then, when the set reference value is exceeded, an alarm is generated. This can reliably prevent generation of erroneous alarms due to neutron flux noise and can accurately generate an alarm at the appropriate time. (N.H.)
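The calculation the abstract describes (square sums of real and imaginary DFT components, summed over a frequency band, compared against an alarm set value) can be sketched with a plain DFT. The signal, band edges, and threshold below are invented for illustration:

```python
import cmath, math

def band_power(signal, dt, f_lo, f_hi):
    """Self power density spectrum of a sampled signal, summed over the band
    [f_lo, f_hi] Hz. Each bin's power is the square sum of the real and
    imaginary DFT components (plain O(n^2) DFT, stdlib only)."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        f = k / (n * dt)                    # frequency of bin k
        if f_lo <= f <= f_hi:
            X = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(signal))
            power += abs(X) ** 2 / n        # |X|^2 = Re^2 + Im^2
    return power

def check_alarm(signal, dt, f_lo, f_hi, set_value):
    """Alarm judging: compare band power against the alarm set value."""
    return band_power(signal, dt, f_lo, f_hi) > set_value

# illustrative 12.5 Hz test tone sampled at 100 Hz
n, dt = 128, 0.01
sig = [math.sin(2 * math.pi * 12.5 * i * dt) for i in range(n)]
p_in = band_power(sig, dt, 10.0, 15.0)     # band containing the tone
p_out = band_power(sig, dt, 30.0, 40.0)    # quiet band
```

A tone inside the monitored band trips the alarm while an out-of-band threshold does not, which is the noise-rejection property the abstract claims.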
International Nuclear Information System (INIS)
Oda, Naotaka.
1993-01-01
The device of the present invention greatly reduces the analog processing section, such as the analog filter and analog processing circuit. That is, the device of the present invention comprises (1) a neutron flux detection means for detecting neutron fluxes in the reactor, (2) a digital filter means for dividing signals corresponding to the detected neutron fluxes into predetermined frequency bands, and (3) a calculation processing means for applying calculation processing corresponding to the frequency bands to the neutron flux detection signals divided by the digital filter means. With such a constitution, since the neutron detection signals are processed by the digital filter means, accuracy is improved and changing the filter characteristics is facilitated. Further, when a neutron flux level is obtained, calculation processing corresponding to the frequency band can be conducted without the analog processing circuit. Accordingly, maintenance and accuracy are improved by greatly decreasing the number of parts. Further, since problems inherent to analog circuits are solved, neutron fluxes are monitored with high reliability. (I.S.)
Casanova, Henri; Robert, Yves
2008-01-01
"…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi
International Nuclear Information System (INIS)
Linker, J. A.; Caplan, R. M.; Downs, C.; Riley, P.; Mikic, Z.; Lionello, R.; Henney, C. J.; Arge, C. N.; Liu, Y.; Derosa, M. L.; Yeates, A.; Owens, M. J.
2017-01-01
The heliospheric magnetic field is of pivotal importance in solar and space physics. The field is rooted in the Sun’s photosphere, where it has been observed for many years. Global maps of the solar magnetic field based on full-disk magnetograms are commonly used as boundary conditions for coronal and solar wind models. Two primary observational constraints on the models are (1) the open field regions in the model should approximately correspond to coronal holes (CHs) observed in emission and (2) the magnitude of the open magnetic flux in the model should match that inferred from in situ spacecraft measurements. In this study, we calculate both magnetohydrodynamic and potential field source surface solutions using 14 different magnetic maps produced from five different types of observatory magnetograms, for the time period surrounding 2010 July. We have found that for all of the model/map combinations, models that have CH areas close to observations underestimate the interplanetary magnetic flux, or, conversely, for models to match the interplanetary flux, the modeled open field regions are larger than CHs observed in EUV emission. In an alternative approach, we estimate the open magnetic flux entirely from solar observations by combining automatically detected CHs for Carrington rotation 2098 with observatory synoptic magnetic maps. This approach also underestimates the interplanetary magnetic flux. Our results imply that either typical observatory maps underestimate the Sun’s magnetic flux, or a significant portion of the open magnetic flux is not rooted in regions that are obviously dark in EUV and X-ray emission.
Optimal flux patterns in cellular metabolic networks
Energy Technology Data Exchange (ETDEWEB)
Almaas, E
2007-01-20
The availability of whole-cell level metabolic networks of high quality has made it possible to develop a predictive understanding of bacterial metabolism. Using the optimization framework of flux balance analysis, I investigate metabolic response and activity patterns to variations in the availability of nutrient and chemical factors such as oxygen and ammonia by simulating 30,000 random cellular environments. The distribution of reaction fluxes is heavy-tailed for the bacteria H. pylori and E. coli, and the eukaryote S. cerevisiae. While the majority of flux balance investigations have relied on implementations of the simplex method, it is necessary to use interior-point optimization algorithms to adequately characterize the full range of activity patterns on metabolic networks. The interior-point activity pattern is bimodal for E. coli and S. cerevisiae, suggesting that most metabolic reactions are either in frequent use or are rarely active. The trimodal activity pattern of H. pylori indicates that a group of its metabolic reactions (20%) are active in approximately half of the simulated environments. Constructing the high-flux backbone of the network for every environment, there is a clear trend that the more frequently a reaction is active, the more likely it is a part of the backbone. Finally, I briefly discuss the predicted activity patterns of the central-carbon metabolic pathways for the sample of random environments.
MAGNETIC FLUX CANCELLATION IN ELLERMAN BOMBS
Energy Technology Data Exchange (ETDEWEB)
Reid, A.; Mathioudakis, M.; Nelson, C. J.; Henriques, V. [Astrophysics Research Centre, School of Mathematics and Physics, Queen’s University Belfast, BT7 1NN, Northern Ireland (United Kingdom); Doyle, J. G. [Armagh Observatory, College Hill, Armagh, BT61 9DG (United Kingdom); Scullion, E. [Trinity College Dublin, College Green, Dublin 2 (Ireland); Ray, T., E-mail: areid29@qub.ac.uk [Dublin Institute for Advanced Studies, 31 Fitzwilliam Place, Dublin 2 (Ireland)
2016-06-01
Ellerman Bombs (EBs) are often found to be co-spatial with bipolar photospheric magnetic fields. We use Hα imaging spectroscopy along with Fe I 6302.5 Å spectropolarimetry from the Swedish 1 m Solar Telescope (SST), combined with data from the Solar Dynamics Observatory, to study EBs and the evolution of the local magnetic fields at EB locations. EBs are found via an EB detection and tracking algorithm. Using NICOLE inversions of the spectropolarimetric data, we find that, on average, (3.43 ± 0.49) × 10^24 erg of stored magnetic energy disappears from the bipolar region during EB burning. The inversions also show flux cancellation rates of 10^14-10^15 Mx s^-1 and temperature enhancements of 200 K at the detection footpoints. We investigate the near-simultaneous flaring of EBs due to co-temporal flux emergence from a sunspot, which shows a decrease in transverse velocity when interacting with an existing, stationary area of opposite-polarity magnetic flux, resulting in the formation of the EBs. We also show that these EBs can be fueled further by additional, faster moving, negative magnetic flux regions.
Soluble organic nutrient fluxes
Robert G. Qualls; Bruce L. Haines; Wayne Swank
2014-01-01
Our objectives in this study were (i) to compare fluxes of the dissolved organic nutrients dissolved organic carbon (DOC), DON, and dissolved organic phosphorus (DOP) in a clearcut area and an adjacent mature reference area, and (ii) to determine whether concentrations of dissolved organic nutrients or inorganic nutrients were greater in clearcut areas than in reference areas,...
Radiation flux measuring device
International Nuclear Information System (INIS)
Corte, E.; Maitra, P.
1977-01-01
A radiation flux measuring device is described which employs a differential pair of transistors, the output of which is maintained constant, connected to a radiation detector. Means connected to the differential pair produce a signal representing the log of the a-c component of the radiation detector, thereby providing a signal representing the true root mean square logarithmic output. 3 claims, 2 figures
Edwards, P. G.; Protheroe, R. J.
1985-01-01
The result of a new calculation of the atmospheric muon and neutrino fluxes and the energy spectrum of muon-neutrinos produced in individual extensive air showers (EAS) initiated by proton and gamma-ray primaries is reported. Also explained is the possibility of detecting atmospheric ν_μ's due to gamma-rays from these sources.
Indian Academy of Sciences (India)
Flux scaling: ultimate regime. With the Nusselt number and the mixing-length scales, we get the Nusselt number and Reynolds number (w'd/ν) scalings expected to occur at extremely high Ra in Rayleigh-Bénard convection.
International Nuclear Information System (INIS)
Besarati, Saeb M.; Yogi Goswami, D.; Stefanakos, Elias K.
2014-01-01
Highlights: • The HFLCAL method is used to find the flux distribution of individual heliostats. • An optimization algorithm is developed based on the principles of the genetic algorithm. • The objective is to minimize the standard deviation of the flux density distribution. • The optimization algorithm finds which heliostats should aim at which point. • By using the new algorithm the maximum flux is reduced by an order of magnitude. - Abstract: Temperature distribution on the receiver surface of a solar power tower plant is of great importance. High temperature gradients may lead to local hot spots and consequently failure of the receiver. The temperature distribution can be controlled by defining several aiming points on the receiver surface and adjusting the heliostats accordingly. In this paper, a new optimization algorithm, based on the principles of the genetic algorithm, is developed to find the optimal flux distribution on the receiver surface. The objective is to minimize the standard deviation of the flux density distribution by changing the aiming points of individual heliostats. The flux distribution of each heliostat is found by using the HFLCAL model [1], which is validated against experimental data. The results show that after employing the new algorithm the maximum flux density is reduced by an order of magnitude. The effects of the number of aiming points and the size of the aiming surface on the flux density distribution are investigated in detail
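The assignment problem above (which heliostat aims at which point, minimizing the standard deviation of the per-point flux) can be sketched as a small genetic algorithm. The chromosome encoding, operators, and flux values are illustrative assumptions; the paper's method additionally uses the HFLCAL flux model for each heliostat:

```python
import random, statistics

def optimize_aiming(heliostat_flux, n_points, generations=200, pop_size=20, seed=0):
    """Genetic-algorithm sketch: assign each heliostat to one of n_points
    aiming points so the per-point flux totals are as uniform as possible.
    Chromosome = list of aiming-point indices; fitness = std of totals."""
    rng = random.Random(seed)
    n = len(heliostat_flux)

    def fitness(chrom):                      # lower is better
        totals = [0.0] * n_points
        for h, p in enumerate(chrom):
            totals[p] += heliostat_flux[h]
        return statistics.pstdev(totals)

    pop = [[rng.randrange(n_points) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]      # elitist selection
        children = []
        for _ in range(pop_size - len(survivors)):
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child[rng.randrange(n)] = rng.randrange(n_points)  # mutation
            children.append(child)
        pop = survivors + children
    best = min(pop, key=fitness)
    return best, fitness(best)
```

Because the best chromosome always survives, the returned standard deviation is monotonically non-increasing over generations.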
Fully Consistent SIMPLE-Like Algorithms on Collocated Grids
DEFF Research Database (Denmark)
Kolmogorov, Dmitry; Shen, Wen Zhong; Sørensen, Niels N.
2015-01-01
To increase the convergence rate of SIMPLE-like algorithms on collocated grids, a compatibility condition between mass flux interpolation methods and SIMPLE-like algorithms is presented. Results of unsteady flow computations show that the SIMPLEC algorithm, when obeying the compatibility condition, may obtain up to 35% higher convergence rate as compared to the standard SIMPLEC algorithm. Two new interpolation methods, fully compatible with the SIMPLEC algorithm, are presented and compared with some existing interpolation methods, including the standard methods of Choi [9] and Shen et al. [8...
Fourier transform and controlling of flux in scalar hysteresis measurement
International Nuclear Information System (INIS)
Kuczmann, Miklos
2008-01-01
The paper deals with a possible realization of eliminating the effect of noise in scalar hysteresis measurements. The measured signals are transformed into the frequency domain and, after applying a digital filter, the spectra of the filtered signals are transformed back to the time domain. The proposed technique results in an accurate noise-removal algorithm. The paper illustrates a fast controlling algorithm applying the inverse of the actually measured hysteresis loop, and another, proportional one to measure distorted flux patterns. By developing the mentioned algorithms, the work aims at controlling a more complicated phenomenon, i.e., measuring the vector hysteresis characteristics.
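The filtering scheme described (forward transform, digital filter, inverse transform) can be sketched as follows; the waveform, noise level, and cutoff bin are illustrative assumptions, not the paper's measurement data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "measured" signal: a low-frequency hysteresis-like waveform
# plus broadband noise (frequencies and amplitudes are illustrative only).
t = np.linspace(0, 1, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 15 * t)
noisy = clean + 0.2 * rng.standard_normal(t.size)

# Transform to the frequency domain, zero everything above a cutoff bin,
# and transform back -- the scheme described in the abstract.
spectrum = np.fft.rfft(noisy)
cutoff = 30                  # keep bins 0..29 (signal lives at 5 and 15 Hz)
spectrum[cutoff:] = 0.0
filtered = np.fft.irfft(spectrum, n=t.size)

err_noisy = np.sqrt(np.mean((noisy - clean)**2))
err_filtered = np.sqrt(np.mean((filtered - clean)**2))
print(err_noisy, err_filtered)
```

Because the white noise is spread over all frequency bins while the signal occupies only a few low ones, zeroing the high bins removes most of the noise power while leaving the signal essentially intact.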
ULY JUP COSPIN HIGH FLUX TELESCOPE HIGH RES. ION FLUX
National Aeronautics and Space Administration — This data set contains ion flux data recorded by the COSPIN High Flux Telescope (HFT) during the Ulysses Jupiter encounter 1992-Jan-25 to 1992-Feb-18.
A family of functions for mass and energy flux splitting of the Euler equations
Raga, A. C.; Cantó, J.
2009-12-01
Flux vector splitting algorithms for the Euler equations are based on dividing the mass, momentum and energy fluxes into a "forward directed flux" F+ and a "backward directed flux" F- (with F-=0 for Mach numbers M>1 and F+=0 for M<-1). van Leer (1979, 1982) [4,5] proposed using polynomials of the Mach number for computing F+ and F- in the subsonic regime, and derived the lowest order polynomials that satisfy a set of chosen criteria. In this paper, we explore the possibility of increasing the order of these polynomials, with the purpose of reducing the diffusion across slow moving contact discontinuities of the flux vector splitting algorithm. We find that a moderate reduction of the diffusion, resulting in sharper shocks and contact discontinuities, can indeed be obtained with the higher order polynomials for the split fluxes.
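The lowest-order van Leer split mass fluxes referred to above can be written down directly. This sketch shows only the mass component (the paper also treats the energy flux); the continuity of the split fluxes at |M| = 1 and the identity F+ + F- = ρu are checked numerically:

```python
import numpy as np

def van_leer_mass_flux(rho, a, M):
    """Van Leer's split mass fluxes F+ and F- (rho: density, a: sound
    speed, M: Mach number). In the subsonic range the split fluxes are
    the lowest-order polynomials +/- rho*a*(M +/- 1)^2 / 4."""
    if M >= 1.0:                       # fully supersonic to the right
        return rho * a * M, 0.0
    if M <= -1.0:                      # fully supersonic to the left
        return 0.0, rho * a * M
    f_plus = rho * a * (M + 1.0)**2 / 4.0
    f_minus = -rho * a * (M - 1.0)**2 / 4.0
    return f_plus, f_minus

# The two branches meet continuously at |M| = 1 and always sum to the
# physical mass flux rho*u = rho*a*M.
for M in np.linspace(-1.5, 1.5, 13):
    fp, fm = van_leer_mass_flux(1.2, 340.0, M)
    assert np.isclose(fp + fm, 1.2 * 340.0 * M)
```

The higher-order polynomials explored in the paper keep these same matching conditions at |M| = 1 while adding free coefficients that can be tuned to reduce numerical diffusion at contact discontinuities.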
Directory of Open Access Journals (Sweden)
Gaisser Thomas K.
2015-01-01
Full Text Available This review of atmospheric muons and neutrinos emphasizes the high energy range relevant for backgrounds to high-energy neutrinos of astrophysical origin. After a brief historical introduction, the main distinguishing features of atmospheric νμ and νe are discussed, along with the implications of the muon charge ratio for the νµ / ν̅µ ratio. Methods to account for effects of the knee in the primary cosmic-ray spectrum and the energy-dependence of hadronic interactions on the neutrino fluxes are discussed and illustrated in the context of recent results from IceCube. A simple numerical/analytic method is proposed for systematic investigation of uncertainties in neutrino fluxes arising from uncertainties in the primary cosmic-ray spectrum/composition and hadronic interactions.
NEUTRON FLUX INTENSITY DETECTION
Russell, J.T.
1964-04-21
A method of measuring the instantaneous intensity of neutron flux in the core of a nuclear reactor is described. A target gas capable of being transmuted by neutron bombardment to a product having a resonance absorption line at a particular microwave frequency is passed through the core of the reactor. Frequency-modulated microwave energy is passed through the target gas and the attenuation of the energy due to the formation of the transmuted product is measured. (AEC)
Physics of magnetic flux ropes
Russell, C. T.; Priest, E. R.; Lee, L. C.
The present work encompasses papers on the structure, waves, and instabilities of magnetic flux ropes (MFRs), photospheric flux tubes (PFTs), the structure and heating of coronal loops, solar prominences, coronal mass ejections and magnetic clouds, flux ropes in planetary ionospheres, the magnetopause, magnetospheric field-aligned currents and flux tubes, and the magnetotail. Attention is given to the equilibrium of MFRs, resistive instability, magnetic reconnection and turbulence in current sheets, dynamical effects and energy transport in intense flux tubes, waves in solar PFTs, twisted flux ropes in the solar corona, an electrodynamical model of solar flares, filament cooling and condensation in a sheared magnetic field, the magnetopause, the generation of twisted MFRs during magnetic reconnection, ionospheric flux ropes above the South Pole, substorms and MFR structures, evidence for flux ropes in the earth magnetotail, and MFRs in 3D MHD simulations.
Benchmarking gyrokinetic simulations in a toroidal flux-tube
Energy Technology Data Exchange (ETDEWEB)
Chen, Y.; Parker, S. E.; Wan, W. [University of Colorado at Boulder, Boulder, Colorado 80309 (United States); Bravenec, R. [Fourth-State Research, Austin, Texas 78704 (United States)
2013-09-15
A flux-tube model is implemented in the global turbulence code GEM [Y. Chen and S. E. Parker, J. Comput. Phys. 220, 839 (2007)] in order to facilitate benchmarking with Eulerian codes. The global GEM assumes the magnetic equilibrium to be completely given. The initial flux-tube implementation simply selects a radial location as the center of the flux-tube and a radial size of the flux-tube, sets all equilibrium quantities (B, ∇B, etc.) to be equal to the values at the center of the flux-tube, and retains only a linear radial profile of the safety factor needed for boundary conditions. This implementation shows disagreement with Eulerian codes in linear simulations. An alternative flux-tube model based on a complete local equilibrium solution of the Grad-Shafranov equation [J. Candy, Plasma Phys. Controlled Fusion 51, 105009 (2009)] is then implemented. This results in better agreement between Eulerian codes and the particle-in-cell (PIC) method. The PIC algorithm based on the v∥-formalism [J. Reynders, Ph.D. dissertation, Princeton University, 1992] and the gyrokinetic ion/fluid electron hybrid model with kinetic electron closure [Y. Chen and S. E. Parker, Phys. Plasmas 18, 055703 (2011)] are also implemented in the flux-tube geometry and compared with the direct method for both the ion temperature gradient driven modes and the kinetic ballooning modes.
Energy Technology Data Exchange (ETDEWEB)
Lhuillier, D. [Commissariat à l' Énergie Atomique et aux Énergies Alternatives, Centre de Saclay, IRFU/SPhN, 91191 Gif-sur-Yvette (France)
2013-02-15
The status of the prediction of reactor anti-neutrino spectra is presented. The most accurate method is still the conversion of the total β spectra of fissioning isotopes as measured at research reactors. Recent re-evaluations of the conversion process led to an increase of the predicted flux by a few percent and were at the origin of the so-called reactor anomaly. The up-to-date predictions are presented with their main sources of error. Perspectives are given on the complementary ab initio predictions and upcoming experimental cross-checks of the predicted spectrum shape.
International Nuclear Information System (INIS)
Williams, D.J.
1990-01-01
Estimates are provided for the amount of methane emitted annually into the atmosphere in Australia for a variety of sources. The sources considered are coal mining, landfill, motor vehicles, the natural gas supply system, rice paddies, bushfires, termites, wetlands and animals. This assessment indicates that the major sources of methane are natural or agricultural in nature and therefore offer little scope for reduction. Nevertheless, the remainder are not trivial, and reduction of these fluxes could play a significant part in any Australian action on the greenhouse problem. 19 refs., 7 tabs., 1 fig.
Mazzolini, R G
2001-01-01
The author places Grmek's editorial within the flux of the historiographical debate which, since the middle of the 1970s, has concentrated on two major crises due to the end of social science-oriented 'scientific history' and to the 'linguistic turn'. He also argues that Grmek's historiographical work of the 1980s and 1990s was to some extent an alternative to certain observed changes in historical fashion and has achieved greater intelligibility because of its commitment to a rational vision of science and historiography.
Tabe-Bordbar, Shayan; Marashi, Sayed-Amir
2013-12-01
Elementary modes (EMs) are steady-state metabolic flux vectors with a minimal set of active reactions. Each EM corresponds to a metabolic pathway. Therefore, studying EMs is helpful for analyzing the production of biotechnologically important metabolites. However, memory requirements for computing EMs may hamper their applicability, as in most genome-scale metabolic models no EM can be computed due to running out of memory. In this study, we present a method for computing randomly sampled EMs. In this approach, a network reduction algorithm is used for EM computation, which is based on flux balance-based methods. We show that this approach can be used to recover the EMs in medium- and genome-scale metabolic network models, while the EMs are sampled in an unbiased way. The applicability of such results is shown by computing “estimated” control-effective flux values in the Escherichia coli metabolic network.
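The flux balance-based computations that such methods build on can be illustrated with a toy linear program. The three-reaction network below is hypothetical and this is not the authors' EM-sampling algorithm, only the underlying steady-state LP:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (hypothetical): uptake r1 (-> A), conversion r2 (A -> B),
# export r3 (B ->). Rows of the stoichiometric matrix S are metabolites A, B.
S = np.array([[1.0, -1.0, 0.0],
              [0.0,  1.0, -1.0]])

# Flux balance analysis: maximise the export flux v3 subject to the
# steady-state condition S v = 0 and capacity bounds 0 <= v_i <= 10.
# linprog minimises, so the objective coefficient for v3 is negated.
res = linprog(c=[0.0, 0.0, -1.0], A_eq=S, b_eq=[0.0, 0.0],
              bounds=[(0.0, 10.0)] * 3, method="highs")
print(res.x)  # optimal steady-state flux vector
```

At the optimum the whole linear pathway runs at its capacity bound; in this toy network the pathway r1-r2-r3 is also the network's single elementary mode, which is why steady state forces all three fluxes to be equal.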
Skiena, Steven S
2008-01-01
Explaining how to design algorithms and analyze their efficiency, this book covers combinatorial algorithm technology, stressing design over analysis. It presents instruction on methods for designing and analyzing computer algorithms, and contains a catalog of algorithmic resources, implementations and a bibliography.
DEFF Research Database (Denmark)
Bucher, Taina
2017-01-01
of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops....... Examining how algorithms make people feel, then, seems crucial if we want to understand their social power....
Permanent magnet flux-biased magnetic actuator with flux feedback
Groom, Nelson J. (Inventor)
1991-01-01
The invention is a permanent magnet flux-biased magnetic actuator with flux feedback for adjustably suspending an element on a single axis. The magnetic actuator includes a pair of opposing electromagnets and provides bi-directional forces along the single axis to the suspended element. Permanent magnets in flux feedback loops from the opposing electromagnets establish a reference permanent magnet flux-bias to linearize the force characteristics of the electromagnets to extend the linear range of the actuator without the need for continuous bias currents in the electromagnets.
Gauge fluxes in F-theory compactifications
Energy Technology Data Exchange (ETDEWEB)
Lin, Ling
2016-07-13
In this thesis, we study the geometry and physics of gauge fluxes in F-theory compactifications to four dimensions. Motivated by the phenomenological requirement of chiral matter in realistic model building scenarios, we develop methods for a systematic analysis of primary vertical G₄-fluxes on torus-fibred Calabi-Yau fourfolds. In particular, we extend the well-known description of fluxes on elliptic fibrations with sections to the more general set-up of genus-one fibrations with multi-sections. The latter are known to give rise to discrete abelian symmetries in F-theory. We test our proposal for constructing fluxes in such geometries on an explicit model with SU(5) × Z₂ symmetry, which is connected to an ordinary elliptic fibration with SU(5) × U(1) symmetry by a conifold transition. With our methods we systematically verify anomaly cancellation and tadpole matching in both models. Along the way, we find a novel way of understanding anomaly cancellation in 4D F-theory in purely geometric terms. This observation is further strengthened by a similar analysis of an SU(3) × SU(2) × U(1)² model. The obvious connection of this particular model with the Standard Model is then investigated in a more phenomenologically motivated survey. There, we will first provide possible matchings of the geometric spectrum with the Standard Model states, which highlights the role of the additional U(1) factor as a selection rule. In a second step, we then utilise our novel methods on flux computations to set up a search algorithm for semi-realistic chiral spectra in our Standard-Model-like fibrations over specific base manifolds B. As a demonstration, we scan over three choices P³, Bl₁P³ and Bl₂P³ for the base. As a result we find a consistent flux that gives the chiral Standard Model spectrum with a vector-like triplet exotic, which may be lifted by a Higgs mechanism.
Directory of Open Access Journals (Sweden)
José Carlos Mendonça
2012-03-01
Full Text Available In this study, MODIS sensor images and the SEBAL algorithm were used to evaluate two propositions for estimating the sensible heat flux (H), based on the selection of the anchor pixels used to determine the surface temperature difference (dT). The proposition that used pixels with extreme temperatures was called H-CLASSIC; the other, H-PESAGRO, adopted the cold pixel for the lowest temperature and the hot pixel for the value of H obtained as a residual of the Penman-Monteith equation (FAO 56), estimated with data observed at an agrometeorological station. The H values estimated by the two propositions were compared with H values obtained by the energy balance (Bowen ratio) method over an area cultivated with sugarcane. The results show that the H-PESAGRO proposition required a smaller number of iterations for the stabilization of the aerodynamic resistance (r_ah) values, and that the H-CLASSIC estimates were 58.35% higher than those of H-PESAGRO. When compared with the H values estimated by the Bowen ratio method over the sugarcane pixel, the correlation coefficients were r = 0.54 and r = 0.71 for the H-CLASSIC and H-PESAGRO propositions, respectively.
Algorithm refinement for stochastic partial differential equations I. linear diffusion
Alexander, F J; Tartakovsky, D M
2002-01-01
A hybrid particle/continuum algorithm is formulated for Fickian diffusion in the fluctuating hydrodynamic limit. The particles are taken as independent random walkers; the fluctuating diffusion equation is solved by finite differences with deterministic and white-noise fluxes. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass conservation. This methodology is an extension of Adaptive Mesh and Algorithm Refinement to stochastic partial differential equations. Results from a variety of numerical experiments are presented for both steady and time-dependent scenarios. In all cases the mean and variance of density are captured correctly by the stochastic hybrid algorithm. For a nonstochastic version (i.e., using only deterministic continuum fluxes) the mean density is correct, but the variance is reduced except in particle regions away from the interface. Extensions of the methodology to fluid mechanics applications are discussed.
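The continuum half of such a hybrid can be sketched as a conservative finite-difference update with deterministic plus white-noise face fluxes. Because the update is written in flux-difference form, total mass is conserved exactly even with the noise, mirroring the exact conservation obtained by flux matching at the particle/continuum interface (grid sizes and noise amplitude below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Fluctuating diffusion on a 1-D periodic grid. F[i] is the flux through
# the face between cells i and i+1: a deterministic Fickian part plus a
# white-noise part. The update u -= dt/dx * div(F) telescopes, so the
# total mass is preserved exactly regardless of the noise realisation.
n, D, dx, dt, steps = 64, 1.0, 1.0, 0.2, 200   # dt*D/dx^2 = 0.2 (stable)
noise_amp = 0.05                                # illustrative amplitude
u = np.exp(-((np.arange(n) - n / 2)**2) / 20.0)  # initial density bump
mass0 = u.sum()

for _ in range(steps):
    F = -D * (np.roll(u, -1) - u) / dx + noise_amp * rng.standard_normal(n)
    u = u - dt / dx * (F - np.roll(F, 1))  # divergence of the face fluxes

print(mass0, u.sum())
```

Adding the noise to the *fluxes* rather than to the cell values is the design choice that makes conservation automatic: whatever leaves one cell through a face enters its neighbour through the same face.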
In situ magnetotail magnetic flux calculation
Directory of Open Access Journals (Sweden)
M. A. Shukhtina
2015-06-01
Full Text Available We explore two new modifications of the magnetotail magnetic flux (F) calculation algorithm based on the Petrinec and Russell (1996) (PR96) approach to the tail radius determination. Unlike in the PR96 model, the tail radius value is calculated at each time step based on simultaneous magnetotail and solar wind observations. Our former algorithm, described in Shukhtina et al. (2009), required that the "tail approximation" condition be fulfilled, i.e., it could be applied only tailward of x ∼ −15 RE. The new modifications take into account the approximate uniformity of the magnetic field of external sources in the near and middle tail. Tests, based on magnetohydrodynamics (MHD) simulations, show that this approach may be applied at smaller distances, up to x ∼ −3 RE. The tests also show that the algorithm fails during long periods of strong positive interplanetary magnetic field (IMF) Bz. A new empirical formula has also been obtained for the tail radius at the terminator (at x = 0), which improves the calculations.
Algorithmically specialized parallel computers
Snyder, Lawrence; Gannon, Dennis B
1985-01-01
Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include a model for analyzing generalized inter-processor communication, a pipelined architecture for search tree maintenance, and specialized computer organization for raster
Reactor neutron flux measuring device
International Nuclear Information System (INIS)
Okutani, Yasushi; Hayakawa, Toshifumi.
1994-01-01
The present invention concerns a device for displaying an approximate neutron flux distribution, to recognize the neutron flux distribution of the whole reactor in a short period of time. The device displays the results of the neutron flux measurements collected by a data-collecting section, grouped over the measuring points situated at horizontally identical positions in the reactor core. In addition, the measurements taken at points situated at the same height in the reactor core are accumulated, and the results of this integration are displayed graphically. With such procedures, the neutron flux distribution in the entire reactor is displayed approximately. Existing devices could not convey the neutron flux distribution of the entire reactor at a glance, and recognizing it took much time. The device of the present invention allows the neutron flux distribution of the entire reactor to be recognized in a short period of time. (I.S.)
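The accumulation step described in the patent, summing all measurements taken at the same core height to form an axial profile, amounts to a simple group-and-sum; the detector readings below are invented for illustration:

```python
import numpy as np

# Hypothetical detector readings: (radial index, height index, flux value).
readings = np.array([
    # r, z, flux
    [0, 0, 1.1], [0, 1, 2.3], [0, 2, 1.0],
    [1, 0, 1.4], [1, 1, 2.9], [1, 2, 1.2],
    [2, 0, 1.2], [2, 1, 2.5], [2, 2, 0.9],
])

# Axial profile: accumulate all measurements taken at the same height,
# giving an approximate whole-core flux distribution along the core axis.
heights = np.unique(readings[:, 1]).astype(int)
axial = np.array([readings[readings[:, 1] == z, 2].sum() for z in heights])
print(dict(zip(heights.tolist(), axial.tolist())))
```

The same grouping applied over the horizontal coordinate gives the radial profile; displaying the two profiles side by side is the "approximate distribution at a glance" the patent describes.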
Flux compactifications and generalized geometries
International Nuclear Information System (INIS)
Grana, Mariana
2006-01-01
Following the lectures given at CERN Winter School 2006, we present a pedagogical overview of flux compactifications and generalized geometries, concentrating on closed string fluxes in type II theories. We start by reviewing the supersymmetric flux configurations with maximally symmetric four-dimensional spaces. We then discuss the no-go theorems (and their evasion) for compactifications with fluxes. We analyse the resulting four-dimensional effective theories for Calabi-Yau and Calabi-Yau orientifold compactifications, concentrating on the flux-induced superpotentials. We discuss the generic mechanism of moduli stabilization and illustrate with two examples: the conifold in IIB and a T⁶/(Z₃ × Z₃) torus in IIA. We finish by studying the effective action and flux vacua for generalized geometries in the context of generalized complex geometry.
Neutron fluxes in test reactors
Energy Technology Data Exchange (ETDEWEB)
Youinou, Gilles Jean-Michel [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2017-01-01
The purpose of this memo is to communicate the fact that high-power water-cooled test reactors such as the Advanced Test Reactor (ATR), the High Flux Isotope Reactor (HFIR) or the Jules Horowitz Reactor (JHR) cannot provide fast flux levels as high as sodium-cooled fast test reactors. The memo first presents some basic physics considerations about neutron fluxes in test reactors and then uses the ATR, HFIR and JHR as an illustration of the performance of modern high-power water-cooled test reactors.
Data Acquisition and Flux Calculations
DEFF Research Database (Denmark)
Rebmann, C.; Kolle, O; Heinesch, B
2012-01-01
In this chapter, the basic theory and the procedures used to obtain turbulent fluxes of energy, mass, and momentum with the eddy covariance technique will be detailed. This includes a description of data acquisition, pretreatment of high-frequency data and flux calculation.
Heat Flux Instrumentation Laboratory (HFIL)
Federal Laboratory Consortium — Description: The Heat Flux Instrumentation Laboratory is used to develop advanced, flexible, thin film gauge instrumentation for the Air Force Research Laboratory....
KoFlux: Korean Regional Flux Network in AsiaFlux
Kim, J.
2002-12-01
AsiaFlux, the Asian arm of FLUXNET, held the Second International Workshop on Advanced Flux Network and Flux Evaluation in Jeju Island, Korea on 9-11 January 2002. In order to facilitate comprehensive Asia-wide studies of ecosystem fluxes, the meeting launched KoFlux, a new Korean regional network of long-term micrometeorological flux sites. For a successful assessment of carbon exchange between terrestrial ecosystems and the atmosphere, an accurate measurement of surface fluxes of energy and water is one of the prerequisites. During the 7th Global Energy and Water Cycle Experiment (GEWEX) Asian Monsoon Experiment (GAME) held in Nagoya, Japan on 1-2 October 2001, the Implementation Committee of the Coordinated Enhanced Observing Period (CEOP) was established. One of the immediate tasks of CEOP was and is to identify the reference sites to monitor energy and water fluxes over the Asian continent. Subsequently, to advance the regional and global network of these reference sites in the context of both FLUXNET and CEOP, the Korean flux community has re-organized the available resources to establish a new regional network, KoFlux. We have built up domestic network sites (equipped with wind profiler and radiosonde measurements) over deciduous and coniferous forests, urban and rural rice paddies and coastal farmland. As an outreach through collaborations with research groups in Japan, China and Thailand, we have also proposed international flux sites at ecologically and climatologically important locations such as a prairie on the Tibetan plateau and a tropical forest undergoing mixed and rapid land-use change in northern Thailand. Several sites in KoFlux have already begun to accumulate interesting data, and some highlights were presented at the meeting. The science generated by flux networks on other continents has proven the worthiness of a global array of micrometeorological flux towers. It is our intent that the launch of KoFlux would encourage other scientists to initiate and
Klamt, Steffen; Gerstl, Matthias P.; Jungreuthmayer, Christian; Mahadevan, Radhakrishnan; Müller, Stefan
2017-01-01
Elementary flux modes (EFMs) emerged as a formal concept to describe metabolic pathways and have become an established tool for constraint-based modeling and metabolic network analysis. EFMs are characteristic (support-minimal) vectors of the flux cone that contains all feasible steady-state flux vectors of a given metabolic network. EFMs account for (homogeneous) linear constraints arising from reaction irreversibilities and the assumption of steady state; however, other (inhomogeneous) linear constraints, such as minimal and maximal reaction rates frequently used by other constraint-based techniques (such as flux balance analysis [FBA]), cannot be directly integrated. These additional constraints further restrict the space of feasible flux vectors and turn the flux cone into a general flux polyhedron in which the concept of EFMs is not directly applicable anymore. For this reason, there has been a conceptual gap between EFM-based (pathway) analysis methods and linear optimization (FBA) techniques, as they operate on different geometric objects. One approach to overcome these limitations was proposed ten years ago and is based on the concept of elementary flux vectors (EFVs). Only recently has the community started to recognize the potential of EFVs for metabolic network analysis. In fact, EFVs exactly represent the conceptual development required to generalize the idea of EFMs from flux cones to flux polyhedra. This work aims to present a concise theoretical and practical introduction to EFVs that is accessible to a broad audience. We highlight the close relationship between EFMs and EFVs and demonstrate that almost all applications of EFMs (in flux cones) are possible for EFVs (in flux polyhedra) as well. In fact, certain properties can only be studied with EFVs. Thus, we conclude that EFVs provide a powerful and unifying framework for constraint-based modeling of metabolic networks. PMID:28406903
Wind stress and heat fluxes over a Brazilian Coastal Upwelling
Dourado, Marcelo; Candella, Rogério
2017-04-01
Coastal upwelling zones have been intensively studied in the last decades, especially due to their importance to the biological cycle. The coastal upwelling system of the Cabo Frio region (east coast of the Rio de Janeiro state, Brazil) keeps the surface water cold during most of the year, which induces a stable atmospheric boundary layer associated with northeast winds. The main goal of this study is to investigate the wind stress and heat flux exchanges between the ocean and the atmosphere in that area. For this purpose, a set of hourly meteorological and oceanographic data collected by a Wavescan metocean buoy anchored at 23°59'S, 42°W was used, as well as solar radiation and relative humidity from a terrestrial meteorological station of the Instituto Nacional de Meteorologia (InMet). The COARE 3.0 algorithm was used to calculate the latent and sensible heat fluxes. In this discussion, positive values represent fluxes towards the ocean. The average net heat flux over our study period is 88 W m-2. The reduction of the net heat flux is due to the increase of the ocean latent heat loss, although a reduction in incoming shortwave radiation and an increase in ocean longwave cooling also contribute. The latent heat flux is 20 times larger than the sensible heat flux, but the mean value of the latent heat flux, 62 W m-2, is half the typical value found in the open ocean. The temporal variability of both sensible and latent heat fluxes reflects their dependence on wind speed and air-sea temperature differences. When upwelling events, here periods when the diurnal SST is lower than 18 °C, are compared with undisturbed (without upwelling) events, it can be noted that the sensible heat fluxes are positive and 10 times greater in magnitude. This is related to an increase, during these upwelling events, of the air-sea temperature difference and of the wind speed. The cold waters of the upwelling increase the air-sea temperature gradient and, also, the horizontal land
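A heavily simplified bulk-formula sketch of the air-sea fluxes discussed above. This is not the full COARE 3.0 algorithm, which iterates on atmospheric stability and roughness; constant transfer coefficients and all sample values below are assumptions. The sign convention follows the abstract: positive fluxes are directed towards the ocean:

```python
# Simplified bulk aerodynamic estimates of the sensible heat flux.
# NOT the full COARE 3.0 algorithm; constant coefficients for illustration.
RHO_AIR = 1.22      # air density, kg m^-3
CP = 1004.0         # specific heat of air at constant pressure, J kg^-1 K^-1
CH = 1.1e-3         # bulk transfer coefficient (typical open-ocean value)

def sensible_heat_flux(U, t_sea, t_air):
    """Sensible heat flux in W m^-2, positive towards the ocean
    (the convention used in the abstract). U is wind speed in m/s,
    t_sea and t_air are sea-surface and air temperatures in deg C."""
    return RHO_AIR * CP * CH * U * (t_air - t_sea)

# Undisturbed case: warm sea loses heat to the air (negative flux).
H_normal = sensible_heat_flux(U=7.0, t_sea=24.0, t_air=22.5)
# Upwelling case: the sea is colder than the air, so the flux reverses
# sign and the ocean gains sensible heat, as noted in the abstract.
H_upwelling = sensible_heat_flux(U=7.0, t_sea=17.0, t_air=20.0)
print(H_normal, H_upwelling)
```

The sketch reproduces the qualitative behaviour reported above: during upwelling the reversed air-sea temperature gradient flips the sign of the sensible heat flux, and stronger winds amplify its magnitude.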
DEFF Research Database (Denmark)
Gonzalez-Franquesa, Alba; Patti, Mary-Elizabeth
2018-01-01
Merging transcriptomics or metabolomics data remains insufficient for metabolic flux estimation. Ramirez et al. integrate a genome-scale metabolic model with extracellular flux data to predict and validate metabolic differences between white and brown adipose tissue. This method allows both metab...
Nonequilibrium molecular dynamics theory, algorithms and applications
Todd, Billy D
2017-01-01
Written by two specialists with over twenty-five years of experience in the field, this valuable text presents a wide range of topics within the growing field of nonequilibrium molecular dynamics (NEMD). It introduces theories which are fundamental to the field - namely, nonequilibrium statistical mechanics and nonequilibrium thermodynamics - and provides state-of-the-art algorithms and advice for designing reliable NEMD code, as well as examining applications for both atomic and molecular fluids. It discusses homogenous and inhomogenous flows and pays considerable attention to highly confined fluids, such as nanofluidics. In addition to statistical mechanics and thermodynamics, the book covers the themes of temperature and thermodynamic fluxes and their computation, the theory and algorithms for homogenous shear and elongational flows, response theory and its applications, heat and mass transport algorithms, applications in molecular rheology, highly confined fluids (nanofluidics), the phenomenon of slip and...
Stator Flux Observer for Induction Motor Based on Tracking Differentiator
Directory of Open Access Journals (Sweden)
Dafang Wang
2013-01-01
Full Text Available The voltage model is commonly used in direct torque control (DTC) for flux observation in asynchronous motors. In order to improve the low-speed and dynamic performance of the voltage model, a modified low-pass filter (LPF) algorithm is proposed. Firstly, a tracking differentiator is brought in to modulate the measured stator current, which suppresses the measurement noise; then amplitude and phase compensation is applied to the stator electromotive force (EMF), after which the stator flux is obtained through a low-pass filter. This method can eliminate the dynamic error of the flux filtered by the LPF and improve low-speed performance. Experimental results demonstrate the effectiveness and improved dynamic performance of this method.
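Why the LPF needs amplitude and phase compensation can be seen in a steady-state phasor sketch. The frequencies and amplitudes below are illustrative assumptions, and the paper's tracking-differentiator stage is not modelled here:

```python
import numpy as np

# Phasor-domain sketch of why compensation is needed when a low-pass
# filter replaces the pure integrator in the voltage model:
#   pure integrator:  Psi(jw)     = EMF(jw) / (jw)
#   LPF estimate:     Psi_lpf(jw) = EMF(jw) / (jw + wc)
#   compensation:     multiply by (jw + wc) / (jw)

w = 2 * np.pi * 5.0     # low stator frequency, 5 Hz (illustrative)
wc = 2 * np.pi * 2.0    # LPF cut-off frequency, 2 Hz (illustrative)
emf = 10.0 + 0.0j       # stator EMF phasor, arbitrary units

psi_ideal = emf / (1j * w)
psi_lpf = emf / (1j * w + wc)
psi_comp = psi_lpf * (1j * w + wc) / (1j * w)

# At 5 Hz with a 2 Hz cut-off the raw LPF underestimates the flux
# amplitude by about 7% and shifts its phase...
print(abs(psi_lpf) / abs(psi_ideal))
# ...while the compensated estimate matches the ideal integrator.
print(abs(psi_comp - psi_ideal))
```

The practical advantage of the LPF over the pure integrator, which this phasor algebra hides, is that any DC offset in the measured EMF decays instead of being integrated without bound; the compensation then restores the correct gain and phase at the operating frequency.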
Flux surface shape and current profile optimization in tokamaks
International Nuclear Information System (INIS)
Dobrott, D.R.; Miller, R.L.
1977-01-01
Axisymmetric tokamak equilibria of noncircular cross section are analyzed numerically to study the effects of flux surface shape and current profile on ideal and resistive interchange stability. Various current profiles are examined for circles, ellipses, dees, and doublets. A numerical code separately analyzes stability in the neighborhood of the magnetic axis and in the remainder of the plasma using the criteria of Mercier and Glasser, Greene, and Johnson. Results are interpreted in terms of flux surface averaged quantities such as magnetic well, shear, and the spatial variation in the magnetic field energy density over the cross section. The maximum stable β is found to vary significantly with shape and current profile. For current profiles varying linearly with poloidal flux, the highest β's found were for doublets. Finally, an algorithm is presented which optimizes the current profile for circles and dees by making the plasma everywhere marginally stable
Principal Metabolic Flux Mode Analysis.
Bhadra, Sahely; Blomberg, Peter; Castillo, Sandra; Rousu, Juho; Wren, Jonathan
2018-02-06
In the analysis of metabolism, two distinct and complementary approaches are frequently used: principal component analysis (PCA) and stoichiometric flux analysis. PCA is able to capture the main modes of variability in a set of experiments and does not make many prior assumptions about the data, but it does not inherently take into account the flux mode structure of metabolism. Stoichiometric flux analysis methods, such as Flux Balance Analysis (FBA) and Elementary Mode Analysis, on the other hand, are able to capture the metabolic flux modes; however, they are primarily designed for the analysis of single samples at a time and are not best suited for exploratory analysis of large sets of samples. We propose a new methodology for the analysis of metabolism, called Principal Metabolic Flux Mode Analysis (PMFA), which marries the PCA and stoichiometric flux analysis approaches in an elegant regularized optimization framework. In short, the method incorporates a variance maximization objective from PCA coupled with a stoichiometric regularizer, which penalizes projections that are far from any flux modes of the network. For interpretability, we also introduce a sparse variant of PMFA that favours flux modes that contain a small number of reactions. Our experiments demonstrate the versatility and capabilities of our methodology. The proposed method can be applied to genome-scale metabolic networks in an efficient way, as PMFA does not enumerate elementary modes. In addition, the method is more robust on out-of-steady-state experimental data than competing flux mode analysis approaches. Matlab software for PMFA and SPMFA and the data sets used for the experiments are available at https://github.com/aalto-ics-kepaco/PMFA. sahely@iitpkd.ac.in, juho.rousu@aalto.fi, Peter.Blomberg@vtt.fi, Sandra.Castillo@vtt.fi. Detailed results are in the Supplementary files. Supplementary data are available at https://github.com/aalto-ics-kepaco/PMFA/blob/master/Results.zip.
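The core idea of PMFA can be sketched compactly (a simplified reconstruction, not the authors' Matlab code): maximize the projected variance w'Cw while penalizing loadings that violate the stoichiometry, i.e. maximize w'Cw − λ‖Sw‖² over unit vectors w, which reduces to the leading eigenvector of C − λSᵀS.

```python
import numpy as np

def pmfa_component(X, S, lam=10.0):
    """First principal metabolic flux mode (sketch): leading eigenvector
    of C - lam * S^T S, where C is the covariance of the flux data X
    (samples x reactions) and S is the stoichiometric matrix.  The
    penalty pulls the loading vector toward the null space of S."""
    C = np.cov(X, rowvar=False)
    M = C - lam * S.T @ S           # symmetric, so eigh applies
    vals, vecs = np.linalg.eigh(M)  # eigenvalues in ascending order
    return vecs[:, -1]              # eigenvector of the largest eigenvalue
```

As λ grows, the recovered component is forced to lie ever closer to a steady-state flux mode (Sw ≈ 0), even when the raw data have larger variance in a stoichiometrically inadmissible direction.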
Time-varying magnetotail magnetic flux calculation: a test of the method
Directory of Open Access Journals (Sweden)
M. A. Shukhtina
2009-04-01
Full Text Available We modified the Petrinec and Russell (1996) algorithm to allow the computation of time-varying magnetotail magnetic flux based on simultaneous spacecraft measurements in the magnetotail and the near-Earth solar wind. In view of the many assumptions made, we tested the algorithm against an MHD simulation of an artificial event, which provides the input from two artificial spacecraft to compute the magnetic flux F values with our algorithm; the latter are compared with flux values obtained by direct integration over the tail cross-section. The comparison shows similar time variations of the predicted and simulated fluxes as well as their good correlation (cc>0.9) for input taken from the tail lobe, which degrades somewhat when using the "measurements" from the central plasma sheet. The regression relationship between the predicted and computed flux values is rather stable, allowing one to correct the absolute value of the predicted magnetic flux. We conclude that this method is a promising tool to monitor the tail magnetic flux, which is one of the main global magnetotail parameters.
Approximate iterative algorithms
Almudevar, Anthony Louis
2014-01-01
Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis.
Autonomous Star Tracker Algorithms
DEFF Research Database (Denmark)
Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren
1998-01-01
Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performance.
Parameter optimization for surface flux transport models
Whitbread, T.; Yeates, A. R.; Muñoz-Jaramillo, A.; Petrie, G. J. D.
2017-11-01
Accurate prediction of solar activity calls for precise calibration of solar cycle models. Consequently we aim to find optimal parameters for models which describe the physical processes on the solar surface, which in turn act as proxies for what occurs in the interior and provide source terms for coronal models. We use a genetic algorithm to optimize surface flux transport models using National Solar Observatory (NSO) magnetogram data for Solar Cycle 23. This is applied to both a 1D model that inserts new magnetic flux in the form of idealized bipolar magnetic regions, and also to a 2D model that assimilates specific shapes of real active regions. The genetic algorithm searches for parameter sets (meridional flow speed and profile, supergranular diffusivity, initial magnetic field, and radial decay time) that produce the best fit between observed and simulated butterfly diagrams, weighted by a latitude-dependent error structure which reflects uncertainty in observations. Due to the easily adaptable nature of the 2D model, the optimization process is repeated for Cycles 21, 22, and 24 in order to analyse cycle-to-cycle variation of the optimal solution. We find that the ranges and optimal solutions for the various regimes are in reasonable agreement with results from the literature, both theoretical and observational. The optimal meridional flow profiles for each regime are almost entirely within observational bounds determined by magnetic feature tracking, with the 2D model being able to accommodate the mean observed profile more successfully. Differences between models appear to be important in deciding values for the diffusive and decay terms. In like fashion, differences in the behaviours of different solar cycles lead to contrasts in parameters defining the meridional flow and initial field strength.
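The optimization loop described above can be sketched with a minimal real-coded genetic algorithm (tournament selection, uniform crossover, Gaussian mutation). The fitness below is a stand-in quadratic misfit; in the paper it would be the latitude-weighted butterfly-diagram error, and the genes the flow, diffusivity, initial-field and decay parameters.

```python
import random

def genetic_minimize(fitness, bounds, pop=30, gens=60, seed=1):
    """Minimal real-coded GA: tournament selection of size 3, uniform
    crossover, Gaussian mutation clipped to the parameter bounds.
    'fitness' is minimized; returns the best individual found."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    best = min(P, key=fitness)
    for _ in range(gens):
        Q = []
        for _ in range(pop):
            a = min(rng.sample(P, 3), key=fitness)   # tournament parents
            b = min(rng.sample(P, 3), key=fitness)
            child = []
            for j, (lo, hi) in enumerate(bounds):
                x = a[j] if rng.random() < 0.5 else b[j]   # uniform crossover
                if rng.random() < 0.2:                     # mutation
                    x += rng.gauss(0.0, 0.1 * (hi - lo))
                child.append(min(hi, max(lo, x)))
            Q.append(child)
        P = Q
        cand = min(P, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand
    return best
```

The population size, tournament size and mutation rate here are generic defaults, not the values used in the study.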
Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa
2018-01-01
The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as the LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem.
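For reference, the algorithm itself fits in a page. A textbook LLL implementation over exact rationals (a plain sketch, not the verified formalization this entry refers to):

```python
from fractions import Fraction

def lll_reduce(basis, delta=Fraction(3, 4)):
    """Textbook LLL lattice basis reduction for integer input vectors:
    size-reduce against earlier vectors, then swap when the Lovász
    condition fails.  Gram-Schmidt is recomputed from scratch for
    clarity rather than updated incrementally."""
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(p * q for p, q in zip(u, v))

    def gram_schmidt():
        bs = []
        mu = [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            w = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bs[j]) / dot(bs[j], bs[j])
                w = [wi - mu[i][j] * bj for wi, bj in zip(w, bs[j])]
            bs.append(w)
        return bs, mu

    bs, mu = gram_schmidt()
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):          # size reduction
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bs, mu = gram_schmidt()
        if dot(bs[k], bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bs[k - 1], bs[k - 1]):
            k += 1                               # Lovász condition holds
        else:
            b[k - 1], b[k] = b[k], b[k - 1]      # swap and step back
            bs, mu = gram_schmidt()
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]
```

On the classic example basis (1,1,1), (−1,0,2), (3,5,6) this yields the reduced basis (0,1,0), (1,0,1), (−1,0,2).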
Interpreting Flux from Broadband Photometry
Brown, Peter J.; Breeveld, Alice; Roming, Peter W. A.; Siegel, Michael
2016-10-01
We discuss the transformation of observed photometry into flux for the creation of spectral energy distributions (SEDs) and the computation of bolometric luminosities. We do this in the context of supernova studies, particularly as observed with the Swift spacecraft, but the concepts and techniques should be applicable to many other types of sources and wavelength regimes. Traditional methods of converting observed magnitudes to flux densities are not very accurate when applied to UV photometry. Common methods for extinction correction and the integration of pseudo-bolometric fluxes can also lead to inaccurate results. The sources of inaccuracy, though, also apply to other wavelengths. Because of the complicated nature of translating broadband photometry into monochromatic flux densities, comparison between observed photometry and a spectroscopic model is best done by forward modeling the spectrum into the count rates or magnitudes of the observations. We recommend that integrated flux measurements be made using a spectrum or SED which is consistent with the multi-band photometry rather than converting individual photometric measurements to flux densities, linearly interpolating between the points, and integrating. We also highlight some specific areas where the UV flux can be mischaracterized.
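The recommended forward-modeling step, integrating a model spectrum through the filter response rather than de-convolving the photometry, looks like this in outline (photon-weighted convention, illustrative function names):

```python
def band_flux(wave, flux, trans_wave, trans):
    """Photon-weighted mean flux density of a spectrum through a bandpass:
    <f> = integral(f(l) T(l) l dl) / integral(T(l) l dl), the convention
    appropriate for photon-counting detectors."""
    def interp(x, xs, ys):
        # simple linear interpolation; zero outside the bandpass
        if x <= xs[0] or x >= xs[-1]:
            return 0.0
        for i in range(1, len(xs)):
            if x <= xs[i]:
                t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
                return ys[i - 1] + t * (ys[i] - ys[i - 1])

    T = [interp(w, trans_wave, trans) for w in wave]
    num = den = 0.0
    for k in range(1, len(wave)):          # trapezoidal integration
        dw = wave[k] - wave[k - 1]
        num += 0.5 * dw * (flux[k] * T[k] * wave[k]
                           + flux[k - 1] * T[k - 1] * wave[k - 1])
        den += 0.5 * dw * (T[k] * wave[k] + T[k - 1] * wave[k - 1])
    return num / den
```

For a spectrum that is flat in f_lambda, the band-averaged flux equals the monochromatic value regardless of the bandpass shape, which makes a convenient sanity check.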
Nature-inspired optimization algorithms
Yang, Xin-She
2014-01-01
Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, and parameter tuning.
Parallel Sorting Algorithms
Akl, Selim G
1985-01-01
Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the
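A concrete example of the linear-array algorithms covered in such texts is odd-even transposition sort: n rounds of compare-exchanges on alternating neighbour pairs, where each round can be executed in parallel by n/2 processors. A sequential simulation:

```python
def odd_even_transposition_sort(a):
    """Odd-even transposition sort: n rounds of compare-exchange on
    alternating (even/odd) neighbour pairs.  Within one round all the
    pairs are disjoint, so a linear array of processors can do the
    whole round in a single parallel step."""
    a = list(a)
    n = len(a)
    for r in range(n):
        start = r % 2      # even rounds: pairs (0,1),(2,3),...; odd: (1,2),(3,4),...
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

The sequential simulation costs O(n²) comparisons, but with n processors the parallel time is O(n), the standard result for this network.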
Visualization of PageRank algorithm
Perhaj, Ervin
2013-01-01
The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First, we develop an algorithm to calculate the PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them. The user enters the list through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process until the difference of PageRank va...
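The iteration such a tool visualizes is the standard power method. A minimal sketch with damping factor d and uniform redistribution of rank from dangling pages (pages with no outlinks):

```python
def pagerank(links, d=0.85, iters=100):
    """Power-iteration PageRank.  'links' maps each page to the list of
    pages it links to; a dangling page spreads its rank uniformly."""
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        nxt = {p: (1.0 - d) / n for p in pages}   # teleportation term
        for p, outs in links.items():
            if outs:
                share = pr[p] / len(outs)
                for q in outs:
                    nxt[q] += d * share
            else:                                 # dangling node
                for q in pages:
                    nxt[q] += d * pr[p] / n
        pr = nxt
    return pr
```

In a production visualization one would iterate until the change between successive vectors falls below a tolerance, which is exactly the stopping condition the truncated abstract sentence refers to.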
Digital Arithmetic: Division Algorithms
DEFF Research Database (Denmark)
Montuschi, Paolo; Nannarelli, Alberto
2017-01-01
implement it in hardware to not compromise the overall computation performance. This entry explains the basic algorithms, suitable for hardware and software, to implement division in computer systems. Two classes of algorithms implement division or square root: digit-recurrence and multiplicative (e.g., Newton–Raphson) algorithms. The first class of algorithms, the digit-recurrence type, is particularly suitable for hardware implementation as it requires modest resources and provides good performance on contemporary technology. The second class of algorithms, the multiplicative type, requires...
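The digit-recurrence class is easy to make concrete: restoring-style binary division produces one quotient bit per cycle using only shifts, compares, and subtractions, which is why it maps so directly onto hardware. A bit-level sketch for non-negative integers:

```python
def restoring_divide(n, dvs, bits=16):
    """Binary digit-recurrence (restoring, non-performing form) division:
    one quotient bit per iteration, as a hardware divider would compute it.
    Returns (quotient, remainder) for non-negative n < 2**bits, dvs > 0."""
    q, r = 0, 0
    for i in range(bits - 1, -1, -1):
        r = (r << 1) | ((n >> i) & 1)   # shift in the next dividend bit
        if r >= dvs:
            r -= dvs                    # trial subtraction succeeds
            q |= 1 << i                 # quotient bit = 1
        # else: partial remainder kept unchanged, quotient bit = 0
    return q, r
```

Multiplicative (Newton–Raphson) dividers instead iterate x ← x(2 − dx) to converge on the reciprocal, trading iterations for full-width multiplications.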
Modified Clipped LMS Algorithm
Directory of Open Access Journals (Sweden)
Lotfizad Mojtaba
2005-01-01
Full Text Available A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
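The flavour of the update can be sketched as follows: a generic threshold-clipped LMS in the spirit of the paper, where the regressor entering the weight update is quantized to {−1, 0, +1} so each weight update needs no multiplication by the input. The threshold value and filter order below are illustrative, not the paper's.

```python
def clip3(x, t):
    """Three-level quantizer: -1, 0, +1 with dead zone |x| <= t."""
    return 0.0 if abs(x) <= t else (1.0 if x > 0 else -1.0)

def mclms(x, dsig, order=4, mu=0.05, thr=0.1):
    """LMS with threshold-clipped input in the weight update:
    w += mu * e * clip3(u).  The filtering itself is unchanged;
    only the update term uses the quantized regressor."""
    w = [0.0] * order
    for k in range(order, len(x)):
        u = x[k - order:k][::-1]                   # regressor, newest first
        y = sum(wi * ui for wi, ui in zip(w, u))   # filter output
        e = dsig[k] - y                            # error
        w = [wi + mu * e * clip3(ui, thr) for wi, ui in zip(w, u)]
    return w
```

On a noise-free system-identification problem the weights still converge to the true impulse response, since the clipped update shares its fixed point with ordinary LMS.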
Development of a Thermal Equilibrium Prediction Algorithm
International Nuclear Information System (INIS)
Aviles-Ramos, Cuauhtemoc
2002-01-01
A thermal equilibrium prediction algorithm is developed and tested using a heat conduction model and data sets from calorimetric measurements. The physical model used in this study is the exact solution of a system of two partial differential equations that govern the heat conduction in the calorimeter. A multi-parameter estimation technique is developed and implemented to estimate the effective volumetric heat generation and thermal diffusivity in the calorimeter measurement chamber, and the effective thermal diffusivity of the heat flux sensor. These effective properties and the exact solution are used to predict the heat flux sensor voltage readings at thermal equilibrium. Thermal equilibrium predictions are carried out considering only 20% of the total measurement time required for thermal equilibrium. A comparison of the predicted and experimental thermal equilibrium voltages shows that the average percentage error from 330 data sets is only 0.1%. The data sets used in this study come from calorimeters of different sizes that use different kinds of heat flux sensors. Furthermore, different nuclear material matrices were assayed in the process of generating these data sets. This study shows that the integration of this algorithm into the calorimeter data acquisition software will result in an 80% reduction of measurement time. This reduction results in a significant cutback in operational costs for the calorimetric assay of nuclear materials. (authors)
Specification of ROP flux shape
Energy Technology Data Exchange (ETDEWEB)
Min, Byung Joo [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); Gray, A. [Atomic Energy of Canada Ltd., Chalk River, ON (Canada)
1997-06-01
The CANDU 9 480/SEU core uses 0.9% SEU (Slightly Enriched Uranium) fuel. The use of SEU fuel enables the reactor to increase the radial power form factor from 0.865, which is typical in current natural uranium CANDU reactors, to 0.97 in the nominal CANDU 9 480/SEU core. The difference is a 12% increase in reactor power. An additional 5% increase can be achieved due to a reduced refuelling ripple. The channel power limits were also increased by 3%, for a total reactor power increase of 20%. This report describes the calculation of neutron flux distributions in the CANDU 9 480/SEU core under conditions specified by the C and I engineers. The RFSP code was used to calculate neutron flux shapes for ROP analysis. Detailed flux values at numerous potential detector sites were calculated for each flux shape. (author). 6 tabs., 70 figs., 4 refs.
Notes on neutron flux measurement
International Nuclear Information System (INIS)
Alcala Ruiz, F.
1984-01-01
The main purpose of this work is to provide a useful guide for carrying out typical neutron flux measurements. Although the foil activation technique is used in the majority of cases, other techniques, such as those based on fission chambers and self-powered neutron detectors, are also shown. Special interest is given to the description and application of corrections in the measurement of relative and absolute induced activities by several types of detectors (scintillators, G-M and gas proportional counters). The thermal and epithermal neutron fluxes, as determined in this work, are conventional or effective (Westcott fluxes), which are extensively used by reactor experimentalists; however, we also give some expressions relating them to the integrated neutron fluxes, which are used in neutron calculations. (Author) 16 refs
Conical electromagnetic radiation flux concentrator
Miller, E. R.
1972-01-01
Concentrator provides method of concentrating a beam of electromagnetic radiation into a smaller beam, presenting a higher flux density. Smaller beam may be made larger by sending radiation through the device in the reverse direction.
High Flux Isotope Reactor (HFIR)
Federal Laboratory Consortium — The HFIR at Oak Ridge National Laboratory is a light-water cooled and moderated reactor that is the United States’ highest flux reactor-based neutron source. HFIR...
Flux tubes at finite temperature
Energy Technology Data Exchange (ETDEWEB)
Cea, Paolo [INFN, Sezione di Bari,Via G. Amendola 173, I-70126 Bari (Italy); Dipartimento di Fisica dell’Università di Bari,Via G. Amendola 173, I-70126 Bari (Italy); Cosmai, Leonardo [INFN, Sezione di Bari,Via G. Amendola 173, I-70126 Bari (Italy); Cuteri, Francesca; Papa, Alessandro [Dipartimento di Fisica, Università della Calabria & INFN-Cosenza,Ponte Bucci, cubo 31C, I-87036 Rende (Cosenza) (Italy)
2016-06-07
The chromoelectric field generated by a static quark-antiquark pair, with its peculiar tube-like shape, can be nicely described, at zero temperature, within the dual superconductor scenario for the QCD confining vacuum. In this work we investigate, by lattice Monte Carlo simulations of the SU(3) pure gauge theory, the fate of chromoelectric flux tubes across the deconfinement transition. We find that, if the distance between the static sources is kept fixed at about 0.76 fm ≃1.6/√σ and the temperature is increased towards and above the deconfinement temperature T{sub c}, the amplitude of the field inside the flux tube gets smaller, while the shape of the flux tube does not vary appreciably across deconfinement. This scenario with flux-tube “evaporation” above T{sub c} has no correspondence in ordinary (type-II) superconductivity, where instead the transition to the phase with normal conductivity is characterized by a divergent fattening of flux tubes as the transition temperature is approached from below. We present also some evidence about the existence of flux-tube structures in the magnetic sector of the theory in the deconfined phase.
Directory of Open Access Journals (Sweden)
Chang-Seok Park
2017-09-01
Full Text Available This paper presents a torque error compensation algorithm for a surface-mounted permanent magnet synchronous machine (SPMSM) through real-time permanent magnet (PM) flux linkage estimation at various temperature conditions, from medium to rated speed. As is known, the PM flux linkage in SPMSMs varies with thermal conditions. Since the maximum torque per ampere look-up table, a control method used for copper loss minimization, is developed based on the estimated PM flux linkage, variation of the PM flux linkage results in undesired torque development in SPMSM drives. In this paper, the PM flux linkage is estimated through a stator flux linkage observer, and the torque error is compensated in real time using the estimated PM flux linkage. The proposed torque error compensation algorithm is verified in simulation and experiment.
Cloud Model Bat Algorithm
Yongquan Zhou; Jian Xie; Liangliang Li; Mingzhi Ma
2014-01-01
Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformati...
Recursive forgetting algorithms
DEFF Research Database (Denmark)
Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan
1992-01-01
In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm.
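The simplest member of the family analysed is uniform exponential forgetting, where data of age m are down-weighted by λ^m. A scalar recursive-least-squares sketch (the paper's selective scheme makes the forgetting non-uniform in time and space; this shows only the uniform baseline):

```python
def rls_forgetting(xs, ys, lam=0.98):
    """Scalar recursive least squares with exponential forgetting factor
    lam: older data are weighted by lam**age, so the estimate can track
    a drifting parameter theta in y = theta * x."""
    theta, P = 0.0, 1000.0                 # estimate and scalar "covariance"
    for x, y in zip(xs, ys):
        k = P * x / (lam + x * P * x)      # gain
        theta += k * (y - theta * x)       # innovation update
        P = (P - k * x * P) / lam          # covariance update with forgetting
    return theta
```

With λ = 1 this is ordinary RLS and the gain decays to zero; with λ < 1 the gain stays bounded away from zero, which is what buys the tracking ability.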
Explaining algorithms using metaphors
Forišek, Michal
2013-01-01
There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes.
Spectral Decomposition Algorithm (SDA)
National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...
Algorithms in Algebraic Geometry
Dickenstein, Alicia; Sommese, Andrew J
2008-01-01
In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications, and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and its Applications.
Prediction of soil CO2 flux in sugarcane management systems using the Random Forest approach
Directory of Open Access Journals (Sweden)
Rose Luiza Moraes Tavares
Full Text Available ABSTRACT: The Random Forest algorithm is a data mining technique used for classifying attributes in order of importance to explain the variation in an attribute-target, such as soil CO2 flux. This study aimed to identify predictor variables of soil CO2 flux in sugarcane management systems through the machine-learning algorithm called Random Forest. Two different sugarcane management areas in the state of São Paulo, Brazil, were selected: burned and green. In each area, we assembled a sampling grid with 81 georeferenced points to assess soil CO2 flux through an automated portable soil gas chamber with infrared measuring spectroscopy during the dry season of 2011 and the rainy season of 2012. In addition, we sampled the soil to evaluate physical, chemical, and microbiological attributes. For data interpretation, we used the Random Forest algorithm, based on the combination of predicted decision trees (machine-learning algorithms) in which every tree depends on the values of a random vector sampled independently, with the same distribution for all trees of the forest. The results indicated that clay content in the soil was the most important attribute for explaining the CO2 flux in the areas studied during the evaluated period. The use of the Random Forest algorithm yielded a model with a good fit (R2 = 0.80) for predicted and observed values.
Study on torque algorithm of switched reluctance motor
Directory of Open Access Journals (Sweden)
Xiaoguang LI
2016-12-01
Full Text Available To solve the torque ripple problem of the switched reluctance motor under traditional control methods, a direct torque control method for the switched reluctance motor is proposed. The direct torque algorithm controls flux magnitude and direction by querying the appropriate voltage vector in a switching table. Taking torque as the direct control variable reduces the torque ripple of the motor, which broadens the application fields of the switched reluctance motor. Starting from the theory of the direct torque algorithm, direct torque control and chopped current control system simulation models are designed on the MATLAB/Simulink platform. Under the condition that the switched reluctance motor model and its load are consistent, the method is compared with the chopped current algorithm. Finally, the feasibility of the direct torque algorithm is verified through hardware experiments. The results demonstrate that the direct torque algorithm controls torque ripple effectively, which provides a wider application field for the switched reluctance motor.
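The switching-table query at the heart of such an algorithm is tiny. A sketch of the classic DTC lookup (Takahashi-style table for a two-level inverter; the 0-based sector numbering is a convention of this sketch, not taken from the paper):

```python
def dtc_vector(sector, flux_up, torque_up):
    """Classic direct torque control lookup: given the flux-vector
    sector (0..5) and the two hysteresis comparator outputs, pick the
    inverter voltage vector V1..V6 (returned as a 0-based index) that
    raises/lowers flux and torque as demanded."""
    if flux_up:
        step = 1 if torque_up else -1     # adjacent vector, ahead/behind
    else:
        step = 2 if torque_up else -2     # vector further from the flux axis
    return (sector + step) % 6
```

Each control period the estimated flux and torque are compared against their references, the comparator outputs and sector index this lookup, and the selected vector is applied for the whole period, which is where the characteristic ripple-versus-switching-frequency trade-off comes from.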
Higher-spin cluster algorithms: the Heisenberg spin and U(1) quantum link models
Energy Technology Data Exchange (ETDEWEB)
Chudnovsky, V
2000-03-01
I discuss here how the highly-efficient spin-1/2 cluster algorithm for the Heisenberg antiferromagnet may be extended to higher-dimensional representations; some numerical results are provided. The same extensions can be used for the U(1) flux cluster algorithm, but have not yielded signals of the desired Coulomb phase of the system.
Higher-spin cluster algorithms: the Heisenberg spin and U(1) quantum link models
International Nuclear Information System (INIS)
Chudnovsky, V.
2000-01-01
I discuss here how the highly-efficient spin-1/2 cluster algorithm for the Heisenberg antiferromagnet may be extended to higher-dimensional representations; some numerical results are provided. The same extensions can be used for the U(1) flux cluster algorithm, but have not yielded signals of the desired Coulomb phase of the system.
Physics of magnetic flux tubes
Ryutova, Margarita
2015-01-01
This book is the first account of the physics of magnetic flux tubes, from their fundamental properties to collective phenomena in ensembles of flux tubes. The physics of magnetic flux tubes is absolutely vital for understanding fundamental physical processes in the solar atmosphere shaped and governed by magnetic fields. High-resolution and high-cadence observations from recent space and ground-based instruments, taken simultaneously at different heights and temperatures, not only show the ubiquity of filamentary structure formation but also allow one to study how various events are interconnected by systems of magnetic flux tubes. The book covers both theory and observations. Theoretical models presented in analytical and phenomenological forms are tailored for practical applications. These are welded with state-of-the-art observations, from early decisive ones to the most recent data that open a new phase-space for exploring the Sun and Sun-like stars. The concept of magnetic flux tubes is central to various magn...
CERES Fast Longwave And SHortwave Radiative Flux (FLASHFlux) Version4A.
Sawaengphokhai, P.; Stackhouse, P. W., Jr.; Kratz, D. P.; Gupta, S. K.
2017-12-01
The agricultural, renewable energy management, and science communities need global surface and top-of-atmosphere (TOA) radiative fluxes on a low-latency basis. The Clouds and the Earth's Radiant Energy System (CERES) FLASHFlux (Fast Longwave And SHortwave radiative Flux) data products address this need by enhancing the speed of CERES processing, using simplified calibration and a parameterized model of surface fluxes to provide a daily global radiative flux data set within one week of satellite observations. CERES FLASHFlux provides two data products: 1) overpass swath Level 2 Single Scanner Footprint (SSF) data products, separately for Aqua and Terra observations, and 2) daily Level 3 Time Interpolated and Spatially Averaged (TISA) 1° × 1° gridded data that combine Aqua and Terra observations. The CERES FLASHFlux data product is being promoted to Version4A. Updates to FLASHFlux Version4A include a new cloud retrieval algorithm and an improved shortwave surface flux parameterization. We inter-compared FLASHFlux Version4A, FLASHFlux Version3C, CERES Edition 4 Syn1Deg and, at the monthly scale, CERES Edition 4 EBAF (Energy Balanced and Filled) top-of-atmosphere and surface fluxes to evaluate these improvements. We also analyze the impact of the new inputs and cloud algorithm on the surface shortwave and longwave radiative fluxes using ground-site measurements provided by CAVE (CERES/ARM Validation Experiment).
A Metropolis algorithm combined with Nelder-Mead Simplex applied to nuclear reactor core design
Energy Technology Data Exchange (ETDEWEB)
Sacco, Wagner F. [Depto. de Modelagem Computacional, Instituto Politecnico, Universidade do Estado do Rio de Janeiro, R. Alberto Rangel, s/n, P.O. Box 972285, Nova Friburgo, RJ 28601-970 (Brazil)], E-mail: wfsacco@iprj.uerj.br; Filho, Hermes Alves; Henderson, Nelio [Depto. de Modelagem Computacional, Instituto Politecnico, Universidade do Estado do Rio de Janeiro, R. Alberto Rangel, s/n, P.O. Box 972285, Nova Friburgo, RJ 28601-970 (Brazil); Oliveira, Cassiano R.E. de [Nuclear and Radiological Engineering Program, George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0405 (United States)
2008-05-15
A hybridization of the recently introduced Particle Collision Algorithm (PCA) and the Nelder-Mead Simplex algorithm is introduced and applied to a core design optimization problem which was previously attacked by other metaheuristics. The optimization problem consists in adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the average peak-factor in a three-enrichment-zone reactor, considering restrictions on the average thermal flux, criticality and sub-moderation. The new metaheuristic performs better than the genetic algorithm, particle swarm optimization, and the Metropolis algorithms PCA and the Great Deluge Algorithm, thus demonstrating its potential for other applications.
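The Metropolis ingredient of such hybrids is compact: accept a worsening move with probability exp(−Δf/T), so the search can escape local minima before a simplex-style local refinement takes over. A generic sketch on a stand-in objective (this is the general Metropolis/annealing pattern, not the reactor model or the paper's PCA variant):

```python
import math
import random

def metropolis_minimize(f, x0, steps=20000, temp=1.0, cool=0.999, seed=3):
    """Metropolis search with geometric cooling (simulated annealing):
    downhill moves are always accepted, uphill moves with probability
    exp(-(f_new - f_old) / T).  Returns the best point seen."""
    rng = random.Random(seed)
    x = list(x0)
    fx = f(x)
    best, fbest = x[:], fx
    T = temp
    for _ in range(steps):
        cand = [xi + rng.gauss(0.0, 0.1) for xi in x]   # local proposal
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x[:], fx
        T *= cool              # geometric cooling schedule
    return best, fbest
```

In the hybrid described by the paper, a candidate produced this way would then be polished by Nelder-Mead simplex steps, combining global exploration with fast local convergence.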
A Metropolis algorithm combined with Nelder-Mead Simplex applied to nuclear reactor core design
International Nuclear Information System (INIS)
Sacco, Wagner F.; Filho, Hermes Alves; Henderson, Nelio; Oliveira, Cassiano R.E. de
2008-01-01
A hybridization of the recently introduced Particle Collision Algorithm (PCA) and the Nelder-Mead Simplex algorithm is introduced and applied to a core design optimization problem which was previously attacked by other metaheuristics. The optimization problem consists in adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the average peak-factor in a three-enrichment-zone reactor, considering restrictions on the average thermal flux, criticality and sub-moderation. The new metaheuristic performs better than the genetic algorithm, particle swarm optimization, and the Metropolis algorithms PCA and the Great Deluge Algorithm, thus demonstrating its potential for other applications.
Capabilities of VOS-based fluxes for estimating ocean heat budget and its variability
Gulev, S.; Belyaev, K.
2016-12-01
We consider here the prospects of using VOS observations from merchant ships, available from the ICOADS data, for estimating the ocean surface heat budget at different time scales. For this purpose we computed surface turbulent heat fluxes as well as short- and long-wave radiative fluxes from the ICOADS reports for the last several decades in the North Atlantic mid-latitudes. Turbulent fluxes were derived using the COARE-3 algorithm, and for the computation of radiative fluxes new algorithms accounting for cloud types were used. Sampling uncertainties in the VOS-based fluxes were estimated by sub-sampling the recomputed reanalysis (ERA-Interim) fluxes according to the VOS sampling scheme. For the turbulent heat fluxes we suggest an approach to minimize sampling uncertainties. The approach is based on the integration of the turbulent heat fluxes in the coordinates of steering parameters (vertical surface temperature and humidity gradients on one hand and wind speed on the other) for which theoretical probability distributions are known. For short-wave radiative fluxes, sampling uncertainties were minimized by "rotating local observation time around the clock" and using probability density functions for the cloud cover occurrence distributions. The analysis was performed for the North Atlantic latitudinal band from 25 N to 60 N, for which estimates of the meridional heat transport are also available from ocean cross-sections. Over the last 35 years, turbulent fluxes within the region analysed increased by about 6 W/m2, with the major growth during the 1990s and early 2000s. Decreasing incoming short-wave radiation during the same time (about 1 W/m2) implies an upward change of the ocean surface heat loss by about 7-8 W/m2. We discuss different sources of uncertainty in the computations as well as the potential of applying this analysis concept to longer time series going back to the 1920s.
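For orientation, the neutral-stability skeleton of a bulk flux computation is shown below. The actual COARE-3 code iterates the transfer coefficients with Monin-Obukhov stability corrections, so the fixed coefficients here are purely illustrative.

```python
def bulk_turbulent_fluxes(u, t_sea, t_air, q_sea, q_air,
                          rho=1.2, cp=1004.0, Lv=2.5e6,
                          ch=1.2e-3, ce=1.3e-3):
    """Neutral-stability bulk formulas for sensible (qh) and latent (qe)
    heat flux in W/m^2, positive upward (ocean losing heat).
    u: wind speed (m/s); t: temperatures (K or degC, only the difference
    matters); q: specific humidities (kg/kg).  Transfer coefficients
    ch, ce are illustrative constants, not COARE-3 values."""
    qh = rho * cp * ch * u * (t_sea - t_air)    # sensible heat
    qe = rho * Lv * ce * u * (q_sea - q_air)    # latent heat
    return qh, qe
```

The integration approach described in the abstract works in the space of exactly these steering parameters: wind speed u and the sea-air temperature and humidity differences.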
DEFF Research Database (Denmark)
Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino
2016-01-01
A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivi...
DEFF Research Database (Denmark)
Husfeldt, Thore
2015-01-01
This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...
Indian Academy of Sciences (India)
Computing connectivities between all pairs of vertices: a good algorithm with respect to both space and time computes the exact solution. Computing all-pairs distances: a good algorithm with respect to both space and time exists, but only approximate solutions can be found. Optimal bipartite matchings: an optimal matching need not always exist.
Algorithms and Their Explanations
Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.
2014-01-01
By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of
8. Algorithm Design Techniques
Indian Academy of Sciences (India)
Resonance – Journal of Science Education, Volume 2, Issue 8, August 1997, pp. 6-17. Algorithms – Algorithm Design Techniques. Series article by R K Shyamasundar, Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India. Permanent link: http://www.ias.ac.in/article/fulltext/reso/002/08/0006-0017
Introduction to Algorithms -14 ...
Indian Academy of Sciences (India)
As elaborated in the earlier articles, algorithms must be written in an unambiguous formal way. Algorithms intended for automatic execution by computers are called programs and the formal notations used to write programs are called programming languages. The concept of a programming language has been around ...
Flux driven turbulence in tokamaks
International Nuclear Information System (INIS)
Garbet, X.; Ghendrih, P.; Ottaviani, M.; Sarazin, Y.; Beyer, P.; Benkadda, S.; Waltz, R.E.
1999-01-01
This work deals with tokamak plasma turbulence in the case where fluxes are fixed and profiles are allowed to fluctuate. These systems are intermittent. In particular, radially propagating fronts are usually observed over a broad range of time and spatial scales. The existence of these fronts provides a way to understand the fast transport events sometimes observed in tokamaks. It is also shown that the confinement scaling law can still be of the gyroBohm type in spite of these large-scale transport events. Some departure from the gyroBohm prediction is observed at low flux, i.e. when the gradients are close to the instability threshold. Finally, it is found that the diffusivity is not the same for turbulence calculated at fixed flux as for turbulence calculated at fixed temperature gradient, even with the same time-averaged profile. (author)
Looking for high neutron fluxes
International Nuclear Information System (INIS)
Lengeler, Herbert
1994-01-01
The neutron is a powerful and versatile probe of both the structure and dynamics of condensed matter. However, unlike other techniques such as X-ray, electron or light scattering, its interaction with matter is rather weak. Historically, neutron scattering has always been intensity limited and scientists are always looking for more intense sources. These come in two kinds: fission reactors and spallation sources (in which neutrons are released from a target bombarded by particle beams). Unfortunately, the power density of high-flux reactors is approaching a technical limit and it will be difficult to achieve a large increase of neutron fluxes above typical present values, as represented for example by the high-flux reactor at the ILL, Grenoble.
Directory of Open Access Journals (Sweden)
Francesca Musiani
2013-08-01
Full Text Available Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as "encoded procedures for transforming input data into a desired output, based on specified calculations" (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawn from the internet and ICT realm: search engine queries and e-commerce websites' recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions' regulation of algorithms, and algorithms' regulation of our society.
Totally parallel multilevel algorithms
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are the Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, the Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
Group leaders optimization algorithm
Daskin, Anmer; Kais, Sabre
2011-03-01
We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for an evolutionary technique designed around a group architecture. To demonstrate the efficiency of the method, a standard suite of single- and multi-dimensional optimization functions along with the energies and geometric structures of Lennard-Jones clusters are given, as well as the application of the algorithm to quantum circuit design problems. We show that, as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm, which is a quantum algorithm providing quadratic speedup over its classical counterpart.
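The two-qubit Grover search mentioned at the end of the abstract can be checked with a few lines of state-vector algebra: for N = 4 items, a single Grover iteration finds the marked item with certainty. This is a generic textbook illustration (the marked index is arbitrary), not the circuit design from the paper.

```python
import numpy as np

# State-vector simulation of two-qubit Grover search: one iteration of
# oracle + diffusion applied to the uniform superposition over 4 states.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H2 = np.kron(H, H)                      # Hadamard on both qubits

marked = 3                              # index of the "solution" state |11>
oracle = np.eye(4)
oracle[marked, marked] = -1             # phase flip on the marked state

# Diffusion operator (inversion about the mean): H^{x2} (2|0><0| - I) H^{x2}
flip0 = 2 * np.outer(np.eye(4)[0], np.eye(4)[0]) - np.eye(4)
diffusion = H2 @ flip0 @ H2

state = H2 @ np.eye(4)[0]               # uniform superposition from |00>
state = diffusion @ (oracle @ state)    # one Grover iteration

probs = np.abs(state) ** 2              # measurement probabilities
```

After the single iteration the probability of measuring the marked state is 1, which is why the N = 4 case makes a natural benchmark for circuit-design methods.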
DISCONNECTING OPEN SOLAR MAGNETIC FLUX
International Nuclear Information System (INIS)
DeForest, C. E.; Howard, T. A.; McComas, D. J.
2012-01-01
Disconnection of open magnetic flux by reconnection is required to balance the injection of open flux by coronal mass ejections and other eruptive events. Making use of recent advances in heliospheric background subtraction, we have imaged many abrupt disconnection events. These events produce dense plasma clouds whose distinctive shape can now be traced from the corona across the inner solar system via heliospheric imaging. The morphology of each initial event is characteristic of magnetic reconnection across a current sheet, and the newly disconnected flux takes the form of a 'U'-shaped loop that moves outward, accreting coronal and solar wind material. We analyzed one such event on 2008 December 18 as it formed and accelerated at 20 m s^-2 to 320 km s^-1, thereafter expanding self-similarly until it exited our field of view 1.2 AU from the Sun. From acceleration and photometric mass estimates we derive the coronal magnetic field strength to be 8 μT, 6 R_☉ above the photosphere, and the entrained flux to be 1.6 × 10^11 Wb (1.6 × 10^19 Mx). We model the feature's propagation by balancing inferred magnetic tension force against accretion drag. This model is consistent with the feature's behavior and accepted solar wind parameters. By counting events over a 36 day window, we estimate a global event rate of 1 day^-1 and a global solar-minimum unsigned flux disconnection rate of 6 × 10^13 Wb yr^-1 (6 × 10^21 Mx yr^-1) by this mechanism. That rate corresponds to a ∼ −0.2 nT yr^-1 change in the radial heliospheric field at 1 AU, indicating that the mechanism is important to the heliospheric flux balance.
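The quoted global disconnection rate follows directly from the stated event rate and per-event flux; a quick consistency check:

```python
# Back-of-the-envelope check of the abstract's global disconnection rate:
# one event per day, each carrying the ~1.6e11 Wb measured for the
# 2008 December 18 event.
flux_per_event = 1.6e11        # Wb per event (1.6e19 Mx)
events_per_year = 365.25       # global event rate of ~1 per day

annual_rate = flux_per_event * events_per_year
# ~5.8e13 Wb/yr, consistent with the quoted 6e13 Wb/yr (6e21 Mx/yr)
```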
The gradiometer flux qubit without an external flux bias
International Nuclear Information System (INIS)
Wu, C E; Liu, Y; Chi, C C
2006-01-01
We analyse the potential of the gradiometer flux qubit (GFQ), which should be insensitive to flux noise because of the nature of the gradiometer structure. However, to enjoy the benefit of such a design, we must be careful in choosing the initial condition. In the fluxoid quantization condition, the flux integer n, which is set to zero in the usual single-loop flux qubit analysis, plays an important role in the GFQ potential. We found that it is impossible to construct a double-well potential if we choose the wrong initial condition. For a qubit application, n must be a small odd integer, and the best choice would be n = 1. We also provide a precise and efficient numerical method for calculating the energy spectrum of an arbitrary GFQ potential; this will become useful in designing the circuit parameters. The state control and read-out schemes are also optimized so that a minimum of electronics is required, which directly reduces noise from instruments.
The flux database concerted action
International Nuclear Information System (INIS)
Mitchell, N.G.; Donnelly, C.E.
1999-01-01
This paper summarizes the background to the UIR action on the development of a flux database for radionuclide transfer in soil-plant systems. The action is discussed in terms of the objectives, the deliverables and the progress achieved so far by the flux database working group. The paper describes the background to the current initiative and outlines specific features of the database and supporting documentation. Particular emphasis is placed on the proforma used for data entry, on the database help file and on the approach adopted to indicate data quality. Refs. 3 (author)
A Hot Flux Rope Observed by SDO/AIA
Aparna, V.; Tripathi, Durgesh
2016-03-01
A filament eruption was observed on 2010 October 31 in the images recorded by the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO) in its Extreme Ultraviolet (EUV) channels. The filament showed a slow-rise phase followed by a fast rise and was classified as an asymmetric eruption. In addition, multiple localized brightenings which were spatially and temporally associated with the slow-rise phase were identified, leading us to believe that the tether-cutting mechanism initiated the eruption. An associated flux rope was detected in high-temperature channels of AIA, namely 94 and 131 Å, corresponding to 7 and 11 MK plasma, respectively. In addition, these channels are also sensitive to cooler plasma corresponding to 1-2 MK. In this study, we have applied the algorithm devised by Warren et al. to remove cooler emission from the 94 Å channel to deduce only the high-temperature structure of the flux rope and to study its temporal evolution. We found that the flux rope was very clearly seen in the cleaned 94 Å channel image corresponding to Fe XVIII emission, which corresponds to plasma at a temperature of 7 MK. This temperature matched well with that obtained using Differential Emission Measure analysis. This study provides important constraints for the modeling of the thermodynamic structure of flux ropes in coronal mass ejections.
Worm algorithm for the CP^{N−1} model
Directory of Open Access Journals (Sweden)
Tobias Rindlisbacher
2017-05-01
Full Text Available The CP^{N−1} model in 2D is an interesting toy model for 4D QCD as it possesses confinement, asymptotic freedom and a non-trivial vacuum structure. Due to the lower dimensionality and the absence of fermions, the computational cost for simulating 2D CP^{N−1} on the lattice is much lower than that for simulating 4D QCD. However, to our knowledge, no efficient algorithm for simulating the lattice CP^{N−1} model for N>2 has been tested so far which also works at finite density. To this end we propose a new type of worm algorithm which is appropriate to simulate the lattice CP^{N−1} model in a dual, flux-variable-based representation, in which the introduction of a chemical potential does not give rise to any complications. In addition to the usual worm moves where a defect is just moved from one lattice site to the next, our algorithm additionally allows for worm-type moves in the internal variable space of single links, which accelerates the Monte Carlo evolution. We use our algorithm to compare the two popular CP^{N−1} lattice actions and exhibit marked differences in their approach to the continuum limit.
Directory of Open Access Journals (Sweden)
Hans Schönemann
1996-12-01
Full Text Available Some algorithms for singularity theory and algebraic geometry. The use of Gröbner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Gröbner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Gröbner bases can be found in the textbook [CLO], an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (the Buchberger algorithm [B1], [B2]) and tangent cone orderings (the Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Gröbner basis algorithm. For a complete description of SINGULAR see [Si].
A New Modified Firefly Algorithm
Directory of Open Access Journals (Sweden)
Medha Gupta
2016-07-01
Full Text Available Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such swarm-based meta-heuristic algorithm, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been successfully used for solving various optimization problems. In this work, we propose a new modified version of the Firefly algorithm (MoFA) and compare its performance with the standard firefly algorithm along with various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.
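The standard firefly algorithm that MoFA modifies can be sketched in a few lines. The update rule below (attractiveness β0·exp(−γr²) toward brighter fireflies, plus a decaying random step) follows Yang's original 2008 formulation; the parameter values and the 1-D test function are arbitrary choices for illustration, not taken from the paper.

```python
import math
import random

# Minimal sketch of the standard firefly algorithm for minimizing a 1-D
# function. Parameter names (beta0, gamma, alpha) follow common usage.

def firefly_minimize(f, n=15, iters=200, beta0=1.0, gamma=1.0, alpha=0.2,
                     lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    for t in range(iters):
        alpha_t = alpha * 0.97 ** t              # gradually reduce randomness
        for i in range(n):
            for j in range(n):
                if f(xs[j]) < f(xs[i]):          # j is "brighter": move i toward j
                    r2 = (xs[i] - xs[j]) ** 2
                    beta = beta0 * math.exp(-gamma * r2)
                    xs[i] += beta * (xs[j] - xs[i]) + alpha_t * rng.uniform(-0.5, 0.5)
                    xs[i] = min(hi, max(lo, xs[i]))
    return min(xs, key=f)

# Convex test problem with minimum at x = 2
best = firefly_minimize(lambda x: (x - 2.0) ** 2)
```

The brightest firefly never moves, so the population's best value is non-increasing; the decaying random step lets the swarm refine its estimate late in the run.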
Generalized diffusion theory for calculating the neutron transport scalar flux
International Nuclear Information System (INIS)
Alcouffe, R.E.
1975-01-01
A generalization of the neutron diffusion equation is introduced, the solution of which is an accurate approximation to the transport scalar flux. In this generalization the auxiliary transport calculations of the system of interest are utilized to compute an accurate, pointwise diffusion coefficient. A procedure is specified to generate and improve this auxiliary information in a systematic way, leading to improvement in the calculated diffusion scalar flux. This improvement is shown to be contingent upon satisfying the condition of positive calculated diffusion coefficients, and an algorithm that ensures this positivity is presented. The generalized diffusion theory is also shown to be compatible with conventional diffusion theory in the sense that the same methods and codes can be used to calculate a solution for both. The accuracy of the method compared to reference S_N transport calculations is demonstrated for a wide variety of examples. (U.S.)
Flux-weakening control methods for hybrid excitation synchronous motor
Directory of Open Access Journals (Sweden)
Mingming Huang
2015-09-01
Full Text Available The hybrid excitation synchronous motor (HESM), which aims at combining the advantages of permanent-magnet motors and wound-excitation motors, has the characteristics of low-speed high-torque hill climbing and a wide speed range. First, a new kind of HESM is presented in the paper, and its structure and mathematical model are illustrated. Then, based on space voltage vector control, a novel flux-weakening method for speed adjustment in the high-speed region is presented. The unique feature of the proposed control method is that the HESM drive system keeps the q-axis back-EMF component invariable during the flux-weakening operation. Moreover, a copper loss minimization algorithm is adopted to reduce the copper loss of the HESM in the high-speed region. Lastly, the proposed method is validated by simulation and experimental results.
Black branes in flux compactifications
Energy Technology Data Exchange (ETDEWEB)
Torroba, Gonzalo; Wang, Huajia
2013-10-01
We construct charged black branes in type IIA flux compactifications that are dual to (2 + 1)-dimensional field theories at finite density. The internal space is a general Calabi-Yau manifold with fluxes, with internal dimensions much smaller than the AdS radius. Gauge fields descend from the 3-form RR potential evaluated on harmonic forms of the Calabi-Yau, and Kaluza-Klein modes decouple. Black branes are described by a four-dimensional effective field theory that includes only a few light fields and is valid over a parametrically large range of scales. This effective theory determines the low energy dynamics, stability and thermodynamic properties. Tools from flux compactifications are also used to construct holographic CFTs with no relevant scalar operators, that can lead to symmetric phases of condensed matter systems stable to very low temperatures. The general formalism is illustrated with simple examples such as toroidal compactifications and manifolds with a single size modulus. We initiate the classification of holographic phases of matter described by flux compactifications, which include generalized Reissner-Nordström branes, nonsupersymmetric AdS_2 × R^2 and hyperscaling-violating solutions.
High flux compact neutron generators
International Nuclear Information System (INIS)
Reijonen, J.; Lou, T.-P.; Tolmachoff, B.; Leung, K.-N.; Verbeke, J.; Vujic, J.
2001-01-01
Compact high-flux neutron generators are being developed at the Lawrence Berkeley National Laboratory. The neutron production is based on the D-D or D-T reaction. The deuterium or tritium ions are produced from a plasma using either a 2 MHz or 13.56 MHz radio frequency (RF) discharge. The RF discharge yields a high fraction of atomic species in the beam, which enables higher neutron output. In the first tube design, the ion beam is formed using a multiple-hole accelerator column. The beam is accelerated to an energy of 80 keV by means of a three-electrode extraction system. The ion beam then impinges on a titanium target where either the 2.4 MeV D-D or 14 MeV D-T neutrons are generated. The MCNP computation code has predicted a neutron flux of ∼10^11 n/s for the D-D reaction at a beam intensity of 1.5 A at 150 kV. The neutron flux measurements of this tube design will be presented. Recently, new compact high-flux tubes are being developed which can be used for various applications. These tubes also utilize an RF discharge for plasma generation. The design of these tubes and the first measurements will be discussed in this presentation.
Lewis, Dustin A.; Blum, Gabriella; Modirzadeh, Naz K.
2016-01-01
In this briefing report, we introduce a new concept — war algorithms — that elevates algorithmically-derived “choices” and “decisions” to a, and perhaps the, central concern regarding technical autonomy in war. We thereby aim to shed light on and recast the discussion regarding “autonomous weapon systems.” We define “war algorithm” as any algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed co...
Helm, P.N.; van der Helm, P.N.; Huetink, Han; Akkerman, Remko
1998-01-01
A comparison is made between Arbitrary Lagrangian-Eulerian (ALE) finite element formulations for simulation of forming processes based on an artificial dissipation scheme and a limited flux scheme. The first ALE algorithm is based on an averaging procedure used in post-processing of finite element
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformation theory of cloud model to depict the qualitative concept: "bats approach their prey." Furthermore, Lévy flight mode and population information communication mechanism of bats are introduced to balance the advantage between exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on functions optimization.
Unsupervised learning algorithms
Aydin, Kemal
2016-01-01
This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation have resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...
Algorithms for parallel computers
International Nuclear Information System (INIS)
Churchhouse, R.F.
1985-01-01
Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode; (ii) how close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria; (iii) how can we design new algorithms specifically for parallel systems; (iv) for multi-processor systems, how can we handle the software aspects of the interprocessor communications. Aspects of these questions are illustrated by examples in these lectures. (orig.)
Static Analysis of Numerical Algorithms
2016-04-01
Final technical report by Kestrel Technology, LLC, April 2016; approved for public release, distribution unlimited. Dates covered: Nov 2013 – Nov 2015. Contract number FA8750-14-C... The effort combined model-based development of complex avionics control software, with Honeywell Aerospace Advanced Technology, with static analysis of the
Improved Chaff Solution Algorithm
2009-03-01
As part of the Technology Demonstration Program (TDP) on shipborne sensor and weapon system integration (SISWS), an algorithm was developed to automatically determine
Optimization algorithms and applications
Arora, Rajesh Kumar
2015-01-01
Choose the Correct Solution Method for Your Optimization Problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, the Broyden-Fletcher-Goldfarb-Shanno algorithm, the Powell method, penalty functions, the augmented Lagrange multiplier method, sequential quadratic programming, and the method of feasible direc...
Image Segmentation Algorithms Overview
Yuheng, Song; Hao, Yan
2017-01-01
The technology of image segmentation is widely used in medical image processing, face recognition, pedestrian detection, etc. The current image segmentation techniques include region-based segmentation, edge detection segmentation, segmentation based on clustering, segmentation based on weakly-supervised learning in CNN, etc. This paper analyzes and summarizes these algorithms of image segmentation, and compares the advantages and disadvantages of different algorithms. Finally, we make a predi...
Directory of Open Access Journals (Sweden)
Yixiong Lu
2013-09-01
Full Text Available This study examines the modelled surface turbulent fluxes over sea ice from the bulk algorithms of the Beijing Climate Centre Climate System Model (BCC_CSM), the European Centre for Medium-Range Weather Forecasts (ECMWF) model and the Community Earth System Model (CESM) with data from the fourth Chinese National Arctic Research Expedition (CHINARE 2010) and the Surface Heat Budget of the Arctic Ocean (SHEBA) experiment. Of all the model algorithms, wind stresses are replicated well, with small annual biases relative to observations (−0.6% in BCC_CSM, 0.2% in CESM and 17% in ECMWF); annual sensible heat fluxes are consistently underestimated by 83–141%, and annual latent heat fluxes are generally overestimated by 49–73%. Five sets of stability functions for stable stratification are evaluated based on theoretical and observational analyses, and the superior stability functions are employed in a new bulk algorithm proposal, which also features varying roughness lengths. Compared to BCC_CSM, the new algorithm can estimate the friction velocity with significantly reduced bias, 84% smaller in winter and 56% smaller in summer. For the sensible heat flux, the bias of the new algorithm is 30% smaller in winter and 19% smaller in summer than that of BCC_CSM. Finally, the bias of modelled latent heat fluxes is 27% smaller in summer.
Algorithmic Principles of Mathematical Programming
Faigle, Ulrich; Kern, Walter; Still, Georg
2002-01-01
Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear
Solar Modulation of Inner Trapped Belt Radiation Flux as a Function of Atmospheric Density
Lodhi, M. A. K.
2005-01-01
No simple algorithm seems to exist for calculating proton fluxes and lifetimes in the Earth's inner trapped radiation belt throughout the solar cycle. Most models of the inner trapped belt in use depend upon AP8, which only describes the radiation environment at solar maximum and solar minimum in Cycle 20. One exception is NOAAPRO, which incorporates flight data from the TIROS/NOAA polar-orbiting spacecraft. The present study discloses yet another simple formulation for approximating proton fluxes at any time in a given solar cycle, in particular between solar maximum and solar minimum. It is derived from AP8 using a regression algorithm technique from nuclear physics. From flux and its time integral, fluence, one can then approximate dose rate and its time integral, dose.
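The flux-to-fluence and dose-rate-to-dose relations invoked in the last sentence are plain time integrals. A trapezoidal sketch with hypothetical sample values (the numbers are invented for illustration, not from AP8):

```python
# Fluence is the time integral of flux; dose is the time integral of dose
# rate. Both can be approximated from sampled series with the trapezoid rule.

def time_integral(times, values):
    """Trapezoidal integral of a sampled time series."""
    total = 0.0
    for k in range(1, len(times)):
        total += 0.5 * (values[k] + values[k - 1]) * (times[k] - times[k - 1])
    return total

# Hypothetical proton flux samples (protons/cm^2/s) over 10 s:
t = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
flux = [100.0, 120.0, 110.0, 90.0, 95.0, 105.0]

fluence = time_integral(t, flux)   # protons/cm^2
```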
Directory of Open Access Journals (Sweden)
Wang Zi Min
2016-01-01
Full Text Available With the development of social services and further improvements in living standards, there is an urgent need for positioning technology that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in all aspects of life and production, such as logistics tracking, car alarms and security. Using RFID technology for localization is a new direction in the eyes of various research institutions and scholars. RFID positioning systems offer stability, small error and low cost, and their location algorithms are the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer. First, several common basic RFID methods are introduced; secondly, a location method with higher accuracy is presented; finally, the LANDMARC algorithm is described. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, the RFID location algorithms are summarized, their deficiencies are pointed out, and requirements for follow-up study are put forward, together with a vision of better future RFID positioning technology.
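The LANDMARC algorithm mentioned above estimates a tag's position as a weighted average of its k nearest reference tags, where "nearest" is measured in received-signal-strength (RSS) space and the weights are proportional to 1/E_i^2. A minimal sketch with invented reference-tag data:

```python
import math

# LANDMARC-style location estimate: compare the target tag's RSS vector
# against reference tags at known positions, pick the k nearest in RSS
# space, and average their positions with weights proportional to 1/E^2.

def landmarc_locate(target_rss, ref_tags, k=3):
    """ref_tags: list of ((x, y) position, RSS vector across readers)."""
    scored = []
    for pos, rss in ref_tags:
        # Euclidean distance between RSS vectors (LANDMARC's E_i)
        e = math.sqrt(sum((a - b) ** 2 for a, b in zip(target_rss, rss)))
        scored.append((e, pos))
    scored.sort(key=lambda s: s[0])
    nearest = scored[:k]
    weights = [1.0 / (e * e + 1e-12) for e, _ in nearest]  # avoid div by zero
    total = sum(weights)
    x = sum(w * pos[0] for w, (_, pos) in zip(weights, nearest)) / total
    y = sum(w * pos[1] for w, (_, pos) in zip(weights, nearest)) / total
    return x, y

# Invented data: four reference tags on a 2 m grid, RSS seen by two readers.
refs = [((0, 0), [-60, -70]), ((0, 2), [-62, -66]),
        ((2, 0), [-55, -74]), ((2, 2), [-57, -69])]
est = landmarc_locate([-56, -70], refs, k=3)   # lands near the (2, 2) tag
```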
Validation of Improved Broadband Shortwave and Longwave Fluxes Derived From GOES
Khaiyer, Mandana M.; Nordeen, Michele L.; Palikonda, Rabindra; Yi, Yuhong; Minnis, Patrick; Doelling, David R.
2009-01-01
Broadband (BB) shortwave (SW) and longwave (LW) fluxes at the top of atmosphere (TOA) are crucial parameters in the study of climate and can be monitored over large portions of the Earth's surface using satellites. The VISST (Visible Infrared Solar-infrared Split-Window Technique) satellite retrieval algorithm facilitates derivation of these parameters from the Geostationary Operational Environmental Satellites (GOES). However, only narrowband (NB) fluxes are available from GOES, so this derivation requires the use of narrowband-to-broadband (NB-BB) conversion coefficients. The accuracy of these coefficients affects the validity of the derived broadband fluxes. Most recently, NB-BB fits were re-derived using the NB fluxes from VISST/GOES data with BB fluxes observed by the CERES (Clouds and the Earth's Radiant Energy System) instrument aboard Terra, a sun-synchronous polar-orbiting satellite that crosses the equator at 10:30 LT. Subsequent comparison with ARM's (Atmospheric Radiation Measurement) BBHRP (Broadband Heating Rate Profile) BB fluxes revealed that while the derived broadband fluxes agreed well with CERES near the Terra overpass times, the accuracy of both LW and SW fluxes decreased farther from the overpass times: Terra's orbit hampers the ability of the NB-BB fits to capture diurnal variability. To account for this in the LW, seasonal NB-BB fits are derived separately for day and night. Information from hourly SW BB fluxes from the Meteosat-8 Geostationary Earth Radiation Budget (GERB) instrument is employed to include samples over the complete solar zenith angle (SZA) range sampled by Terra. The BB fluxes derived from these improved NB-BB fits are compared to BB fluxes computed with a radiative transfer model.
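At its core, the NB-BB conversion is a regression fit of matched narrowband and broadband flux pairs. The sketch below fits a plain linear model to synthetic data; the operational fits described in the text also carry seasonal, day/night and viewing-angle terms, and the coefficient values here are invented.

```python
import numpy as np

# Fit BB = a + b * NB by least squares on synthetic matched flux pairs,
# standing in for VISST/GOES narrowband fluxes regressed against CERES
# broadband observations.

rng = np.random.default_rng(0)
nb = rng.uniform(50.0, 300.0, size=200)            # narrowband fluxes, W/m^2
bb = 15.0 + 1.8 * nb + rng.normal(0.0, 3.0, 200)   # "observed" BB fluxes

A = np.column_stack([np.ones_like(nb), nb])
(a, b), *_ = np.linalg.lstsq(A, bb, rcond=None)

bb_derived = a + b * nb                            # derived broadband fluxes
rms = np.sqrt(np.mean((bb_derived - bb) ** 2))     # fit accuracy, W/m^2
```

If the training pairs all come from a single local time (as with a sun-synchronous satellite), the fitted coefficients inherit that sampling bias, which is exactly the diurnal-variability problem the abstract describes.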
OptFlux: an open-source software platform for in silico metabolic engineering
Directory of Open Access Journals (Sweden)
Pinto José P
2010-04-01
Full Text Available Abstract Background Over the last few years a number of methods have been proposed for the phenotype simulation of microorganisms under different environmental and genetic conditions. These have been used as the basis to support the discovery of successful genetic modifications of the microbial metabolism to address industrial goals. However, the use of these methods has been restricted to bioinformaticians or other expert researchers. The main aim of this work is, therefore, to provide a user-friendly computational tool for Metabolic Engineering applications. Results OptFlux is an open-source and modular software aimed at being the reference computational application in the field. It is the first tool to incorporate strain optimization tasks, i.e., the identification of Metabolic Engineering targets, using Evolutionary Algorithms/Simulated Annealing metaheuristics or the previously proposed OptKnock algorithm. It also allows the use of stoichiometric metabolic models for (i) phenotype simulation of both wild-type and mutant organisms, using the methods of Flux Balance Analysis, Minimization of Metabolic Adjustment or Regulatory on/off Minimization of Metabolic flux changes, (ii) Metabolic Flux Analysis, computing the admissible flux space given a set of measured fluxes, and (iii) pathway analysis through the calculation of Elementary Flux Modes. OptFlux also contemplates several methods for model simplification and other pre-processing operations aimed at reducing the search space for optimization algorithms. The software supports importing/exporting to several flat file formats and it is compatible with the SBML standard. OptFlux has a visualization module that allows the analysis of the model structure that is compatible with the layout information of Cell Designer, allowing the superimposition of simulation results with the model graph. Conclusions The OptFlux software is freely available, together with documentation and other resources, thus
OptFlux: an open-source software platform for in silico metabolic engineering.
Rocha, Isabel; Maia, Paulo; Evangelista, Pedro; Vilaça, Paulo; Soares, Simão; Pinto, José P; Nielsen, Jens; Patil, Kiran R; Ferreira, Eugénio C; Rocha, Miguel
2010-04-19
Over the last few years a number of methods have been proposed for the phenotype simulation of microorganisms under different environmental and genetic conditions. These have been used as the basis to support the discovery of successful genetic modifications of the microbial metabolism to address industrial goals. However, the use of these methods has been restricted to bioinformaticians or other expert researchers. The main aim of this work is, therefore, to provide a user-friendly computational tool for Metabolic Engineering applications. OptFlux is an open-source and modular software aimed at being the reference computational application in the field. It is the first tool to incorporate strain optimization tasks, i.e., the identification of Metabolic Engineering targets, using Evolutionary Algorithms/Simulated Annealing metaheuristics or the previously proposed OptKnock algorithm. It also allows the use of stoichiometric metabolic models for (i) phenotype simulation of both wild-type and mutant organisms, using the methods of Flux Balance Analysis, Minimization of Metabolic Adjustment or Regulatory on/off Minimization of Metabolic flux changes, (ii) Metabolic Flux Analysis, computing the admissible flux space given a set of measured fluxes, and (iii) pathway analysis through the calculation of Elementary Flux Modes. OptFlux also contemplates several methods for model simplification and other pre-processing operations aimed at reducing the search space for optimization algorithms. The software supports importing/exporting to several flat file formats and it is compatible with the SBML standard. OptFlux has a visualization module that allows the analysis of the model structure that is compatible with the layout information of Cell Designer, allowing the superimposition of simulation results with the model graph. The OptFlux software is freely available, together with documentation and other resources, thus bridging the gap from research in strain optimization
Flavour mixings in flux compactifications
International Nuclear Information System (INIS)
Buchmuller, Wilfried; Schweizer, Julian
2017-01-01
A multiplicity of quark-lepton families can naturally arise as zero-modes in flux compactifications. The flavour structure of quark and lepton mass matrices is then determined by the wave function profiles of the zero-modes. We consider a supersymmetric SO(10) x U(1) model in six dimensions compactified on the orbifold T^2/Z_2 with Abelian magnetic flux. A bulk 16-plet charged under the U(1) provides the quark-lepton generations whereas two uncharged 10-plets yield two Higgs doublets. Bulk anomaly cancellation requires the presence of additional 16- and 10-plets. The corresponding zero-modes form vectorlike split multiplets that are needed to obtain a successful flavour phenomenology. We analyze the pattern of flavour mixings for the two heaviest families of the Standard Model and discuss possible generalizations to three and more generations.
International Nuclear Information System (INIS)
Floriani, Elena; Lima, Ricardo; Ourrad, Ouerdia; Spinelli, Lionel
2016-01-01
Highlights: • The flux through a Markov chain of a conserved quantity (mass) is studied. • Mass is supplied by an external source and ends in the absorbing states of the chain. • Meaningful for modeling open systems whose dynamics has a Markov property. • The analytical expression of the mass distribution is given for a constant source. • The expression of the mass distribution is given for periodic or random sources. - Abstract: In this paper we study the flux through a finite Markov chain of a quantity, which we will call mass, that moves through the states of the chain according to the Markov transition probabilities. Mass is supplied by an external source and accumulates in the absorbing states of the chain. We believe that studying how this conserved quantity evolves through the transient (non-absorbing) states of the chain could be useful for the modeling of open systems whose dynamics has a Markov property.
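The mass-flux mechanism the abstract describes can be sketched numerically; the chain, the source, and all numbers below are illustrative, not from the paper:

```python
# Sketch (not the paper's model): mass flowing through a 3-state chain with
# one absorbing state, fed by a constant unit source at state 0.
# Transient states: 0, 1; absorbing state: 2.
P = [
    [0.5, 0.3, 0.2],  # transition probabilities from state 0
    [0.2, 0.4, 0.4],  # from state 1
    [0.0, 0.0, 1.0],  # state 2 is absorbing
]
source = [1.0, 0.0, 0.0]  # one unit of mass injected per step at state 0

def step(mass, absorbed):
    """Move mass along the transition probabilities, then add the source."""
    new = [0.0, 0.0, 0.0]
    for i in range(3):
        for j in range(3):
            new[j] += mass[i] * P[i][j]
    absorbed += new[2]        # mass reaching state 2 leaves the transient part
    new[2] = 0.0
    return [new[k] + source[k] for k in range(3)], absorbed

mass, absorbed, injected = [0.0, 0.0, 0.0], 0.0, 0.0
for _ in range(200):          # iterate toward the stationary transient profile
    mass, absorbed = step(mass, absorbed)
    injected += 1.0

# Conservation: everything injected is either still in transit or absorbed.
assert abs(injected - (sum(mass) + absorbed)) < 1e-9
```

For this chain the stationary transient mass solves m = mQ + s (Q the transient block), giving m = (2.5, 1.25), at which point exactly one unit per step is absorbed, balancing the source.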
Superconducting flux flow digital circuits
International Nuclear Information System (INIS)
Martens, J.S.; Zipperian, T.E.; Hietala, V.M.; Ginley, D.S.; Tigges, C.P.; Phillips, J.M.; Siegal, M.P.
1993-01-01
The authors have developed a family of digital logic circuits based on superconducting flux flow transistors that show high speed, reasonable signal levels, large fan-out, and large noise margins. The circuits are made from high-temperature superconductors (HTS) and have been shown to operate at over 90 K. NOR gates have been demonstrated with fan-outs of more than 5 and fully loaded switching times less than a fixture-limited 50 ps. Ring-oscillator data suggest inverter delay times of about 40 ps when using 3-μm linewidths. Simple flip-flops have also been demonstrated, showing large noise margins, response times of less than 30 ps, and static power dissipation on the order of 30 nW. Among other uses, this logic family is appropriate as an interface between logic families such as single flux quantum and conventional semiconductor logic.
Surface fluxes in heterogeneous landscape
Energy Technology Data Exchange (ETDEWEB)
Bay Hasager, C.
1997-01-01
The surface fluxes in homogeneous landscapes are calculated by similarity scaling principles. The methodology is well established. In heterogeneous landscapes with spatial changes in the micro scale range, i.e., from 100 m to 10 km, advective effects are significant. The present work focuses on these effects in an agricultural countryside typical for the midlatitudes. Meteorological and satellite data from a highly heterogeneous landscape in the Rhine Valley, Germany, were collected in the large-scale field experiment TRACT (Transport of pollutants over complex terrain) in 1992. Classified satellite images, Landsat TM and ERS SAR, are used as the basis for roughness maps. The roughnesses were measured at meteorological masts in the various cover classes and assigned pixel by pixel to the images. The roughness maps are aggregated, i.e., spatially averaged, into so-called effective roughness lengths. This calculation is performed by a micro scale aggregation model. The model solves the linearized atmospheric flow equations by a numerical (Fast Fourier Transform) method. This model also calculates maps of friction velocity and momentum flux pixel-wise in heterogeneous landscapes. It is indicated how the aggregation methodology can be used to calculate the heat fluxes based on the relevant satellite data, i.e., temperature and soil moisture information. (au) 10 tabs., 49 ills., 223 refs.
Neutron flux control systems validation
International Nuclear Information System (INIS)
Hascik, R.
2003-01-01
In nuclear installations the main requirement is to maintain adequate nuclear safety in all operating conditions. From the nuclear safety point of view, commissioning and start-up after reactor refuelling is an appropriate period for safety systems verification. In this paper, the methodology, performance and results of neutron flux measurement systems validation are presented. Standard neutron flux measuring chains incorporated into the reactor protection and control system are used. A standard neutron flux measuring chain consists of a detector, a preamplifier, wiring to the data acquisition unit, the data acquisition unit itself, wiring to the control-room display, and the display at the control room. During a reactor outage only the data acquisition unit and the wiring and displaying at the reactor control room are verified. It is impossible to verify the detector, preamplifier and wiring to the data acquisition unit during reactor refuelling because of the low power. The adjustment and accurate functionality of these chains are confirmed by start-up rate (SUR) measurement during start-up tests after refuelling of the reactors. This measurement has a direct impact on nuclear safety and increases the operational nuclear safety level. A brief description of each measuring system is given. Results are illustrated with measurements performed at the Bohunice NPP during reactor start-up tests. The main failures and their elimination are described (Authors)
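The start-up rate measurement mentioned above rests on the standard relation SUR = 26.06/τ, in decades per minute, where τ is the stable reactor period in seconds. A minimal sketch with hypothetical power readings:

```python
import math

# Hypothetical readings: the relation SUR = 26.06 / tau (decades per minute,
# tau = stable reactor period in seconds) is standard, but the numbers here
# are illustrative only.
p1, p2 = 1.0e-6, 2.0e-6   # relative neutron power at times t1 and t2
t1, t2 = 0.0, 34.66       # seconds (power doubles over this interval)

tau = (t2 - t1) / math.log(p2 / p1)   # stable period from the exponential rise
sur = 60.0 / (tau * math.log(10.0))   # decades per minute (= 26.06 / tau)

# doubling in 34.66 s -> tau = 34.66 / ln 2, about 50 s -> SUR about 0.52 DPM
```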
Determination of Energy Fluxes Over Agricultural Surfaces
Directory of Open Access Journals (Sweden)
Josefina Argete
1994-12-01
Full Text Available An energy budget was conducted over two kinds of surfaces: grass and a corn canopy. The net radiative flux and the soil heat flux were directly measured, while the latent and sensible heat fluxes were calculated from the vertical profiles of wet- and dry-bulb temperature and wind speed. The crop storage flux was also estimated. Using the gradient or aerodynamic equations, the calculated fluxes, when compared to the measured fluxes in the context of an energy budget, gave an SEE = 63 Wm-2 over grass and SEE = 81 Wm-2 over the corn canopy. The calculated fluxes compared reasonably well with those obtained using the Penman equations. For energy budget research with limited instrumentation, the aerodynamic method performed satisfactorily in estimating the daytime fluxes, when atmospheric conditions are fully convective, but failed when conditions were stably stratified, as during nighttime.
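A gradient-based partitioning of the kind used in such budgets can be sketched with the Bowen-ratio closure of Rn = H + LE + G; all values below are illustrative, not the study's measurements:

```python
# Bowen-ratio partitioning of available energy, a common gradient-based
# closure of the surface energy budget Rn = H + LE + G. All numbers are
# illustrative, not measurements from the study.
gamma = 0.066        # psychrometric constant, kPa/K
Rn, G = 400.0, 50.0  # net radiation and soil heat flux, W/m2
dT, de = 1.2, 0.8    # vertical dry-bulb temperature (K) and vapour pressure (kPa) differences

bowen = gamma * dT / de          # Bowen ratio H/LE from the two gradients
LE = (Rn - G) / (1.0 + bowen)    # latent heat flux, W/m2
H = bowen * LE                   # sensible heat flux, W/m2

assert abs(Rn - (H + LE + G)) < 1e-9   # the budget closes by construction
```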
A Parallel Butterfly Algorithm
Poulson, Jack
2014-02-04
The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.
Directory of Open Access Journals (Sweden)
Hanns Holger Rutz
2016-11-01
Full Text Available Although the concept of algorithms was established long ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity or the space of algorithmic agency. This is the space or the medium – following Luhmann’s form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled “extimate” writing process, human initiative and algorithmic speculation can no longer be clearly divided out. An observation is attempted of defining aspects of such a medium by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium I call reconfiguration, and it is indicated by this trajectory.
Wohlfahrt, Georg; Amelynck, Crist; Ammann, Christof; Arneth, Almut; Bamberger, Ines; Goldstein, Allen; Hansel, Armin; Heinesch, Bernhard; Holst, Thomas; Hörtnagl, Lukas; Karl, Thomas; Neftel, Albrecht; McKinney, Karena; Munger, William; Schade, Gunnar; Schoon, Niels
2014-05-01
Methanol (CH3OH) is, after methane, the second most abundant VOC in the troposphere and globally represents nearly 20% of the total biospheric VOC emissions. With typical concentrations of 1-10 ppb in the continental boundary layer, methanol plays a crucial role in atmospheric chemistry, which needs to be evaluated in the light of ongoing changes in land use and climate. Previous global methanol budgets have approached the net land flux by summing up the various emission terms (namely primary biogenic and anthropogenic emissions, plant decay and biomass burning) and by subtracting dry and wet deposition, resulting in a net land flux in the range of 75-245 Tg y-1. The data underlying these budget calculations largely stem from small-scale leaf gas exchange measurements and while recently column-integrated remotely sensed methanol concentrations have become available for constraining budget calculations, there have been few attempts to contrast model calculations with direct net ecosystem-scale methanol flux measurements. Here we use eddy covariance methanol flux measurements from 8 sites in Europe and North America to study the magnitude of and controls on the diurnal and seasonal variability in the net ecosystem methanol flux. In correspondence with leaf-level literature, our data show that methanol emission and its strong environmental and biotic control (by temperature and stomatal conductance) prevailed at the more productive (agricultural) sites and at a perturbed forest site. In contrast, at more natural, less productive sites substantial deposition of methanol occurred, in particular during periods of surface wetness. These deposition processes are poorly represented by currently available temperature/light and/or production-driven modelling algorithms. A new framework for modelling the bi-directional land-atmosphere methanol exchange is proposed which accounts for the production of methanol in leaves, the regulation of leaf methanol emission by stomatal
Space-Time Transformation in Flux-form Semi-Lagrangian Schemes
Directory of Open Access Journals (Sweden)
Peter C. Chu; Chenwu Fan
2010-01-01
Full Text Available With a finite volume approach, a flux-form semi-Lagrangian (TFSL) scheme with space-time transformation was developed to provide a stable and accurate algorithm for solving the advection-diffusion equation. Different from the existing flux-form semi-Lagrangian schemes, the temporal integration of the flux from the present to the next time step is transformed into a spatial integration of the flux at the side of a grid cell (space) for the present time step using the characteristic-line concept. The TFSL scheme not only keeps the good features of the semi-Lagrangian schemes (no Courant number limitation), but also has higher accuracy (second order in both time and space). The capability of the TFSL scheme is demonstrated by the simulation of equatorial Rossby-soliton propagation. Computational stability and high accuracy make this scheme useful in ocean modeling, computational fluid dynamics, and numerical weather prediction.
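The core idea, evaluating the time integral of the face flux as a space integral over the upstream departure interval, can be illustrated with a minimal 1-D toy using piecewise-constant reconstruction and constant velocity. This is a sketch of the general flux-form semi-Lagrangian idea, not the TFSL scheme itself:

```python
# Minimal 1-D illustration of the flux-form semi-Lagrangian idea for constant
# velocity u > 0 with piecewise-constant reconstruction: the time integral of
# the flux through a face equals the space integral of the field over the
# upstream departure interval, so Courant numbers above 1 are allowed.
# A toy sketch, not the TFSL scheme of the paper.
N, dx = 16, 1.0
u, dt = 1.0, 3.0                 # Courant number u*dt/dx = 3 (> 1 on purpose)
q = [0.0] * N
q[2] = 1.0                       # initial cell-average field: a single spike

d = u * dt / dx                  # departure distance in cell widths
K, f = int(d), d - int(d)        # whole upstream cells plus a fractional cell

# Mass through the left face of cell i = integral of q over the departure interval.
F = []
for i in range(N):
    m = sum(q[(i - k) % N] for k in range(1, K + 1))
    m += f * q[(i - K - 1) % N]
    F.append(m * dx)

# Conservative update: each cell gains what enters and loses what leaves.
q_new = [q[i] + (F[i] - F[(i + 1) % N]) / dx for i in range(N)]

assert abs(sum(q_new) - sum(q)) < 1e-12   # mass is conserved by construction
assert q_new[5] == 1.0                    # integer Courant number: exact shift by 3 cells
```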
DEFF Research Database (Denmark)
Hu, Jiefeng; Zhu, Jianguo; Qu, Yanqing
2013-01-01
Voltage and frequency droop method is commonly used in microgrids to achieve proper autonomous power sharing without relying on intercommunication systems. This paper proposes a new control strategy for parallel connected inverters in microgrid applications by drooping the flux instead of the inverter.... In addition, a small-signal model is developed in order to design the main control parameters and study the system dynamics and stability. The proposed control scheme includes a direct flux control (DFC) algorithm, which avoids the use of PI controllers and PWM modulators. Furthermore, in order to reduce...... the flux ripple, a model predictive control (MPC) scheme is integrated into the DFC. The obtained results show that the proposed flux droop strategy can achieve active and reactive power sharing with much lower frequency deviation and better transient performance than the conventional droop method, thus
Numerical Simulations of a Flux Rope Ejection
Indian Academy of Sciences (India)
2016-01-27
Coronal mass ejections (CMEs) are the most violent phenomena observed on the Sun. One of the most successful models to explain CMEs is the flux rope ejection model, where a magnetic flux rope is expelled from the solar corona after a long phase during which the flux rope stays in equilibrium while ...
Surface fluxes over natural landscapes using scintillometry
Meijninger, W.M.L.
2003-01-01
Motivated by the demand for reliable area-averaged fluxes associated with natural landscapes, this thesis investigates a relatively new measurement technique known as the scintillation method. For homogeneous areas the surface fluxes can be derived with reasonable accuracy. However, fluxes
Models of Flux Tubes from Constrained Relaxation
Indian Academy of Sciences (India)
Equilibria corresponding to the energy extrema while conserving these invariants for parallel flows yield three classes of ... parallel heat flux, due to the boundary condition B · n = 0, that the total energy, is conserved. In all HR, K, S, and the total mass, ... Zero net current flux tubes are qualitatively similar to the flux tube with ...
Algorithms in invariant theory
Sturmfels, Bernd
2008-01-01
J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.
Detection of algorithmic trading
Bogoev, Dimitar; Karam, Arzé
2017-10-01
We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
CERN. Geneva; PUNZI, Giovanni
2015-01-01
Charged-particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving a long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of the processing of visual images by the brain as it happens in nature (the 'RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies, when this algorithm is implemented in specialized processors, based on current state-of-the-art, high-speed/high-bandwidth digital devices.
Handbook of Memetic Algorithms
Cotta, Carlos; Moscato, Pablo
2012-01-01
Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems. The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes. “Handbook of Memetic Algorithms” organizes, in a structured way, all the most important results in the field of MAs from their earliest definition until now. A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, and uncertainties, is analysed separately and, for each problem, memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...
Named Entity Linking Algorithm
Directory of Open Access Journals (Sweden)
M. F. Panteleev
2017-01-01
Full Text Available In natural language processing, Named Entity Linking (NEL) is the task of identifying an entity mentioned in a text and linking it to an entity in a knowledge base (for example, DBpedia). Currently there is a diversity of approaches to this problem, but two main classes can be identified: graph-based approaches and machine-learning-based ones. An algorithm combining graph and machine learning approaches is proposed, based on stated assumptions about the interrelations of named entities in a sentence and in the text in general. In the case of graph-based approaches, it is necessary to identify an optimal set of related entities according to some metric that characterizes the distance between these entities in a graph built on a knowledge base. Owing to limitations in processing power, solving this task directly is impossible, so a modification is proposed. An independent solution cannot be built on machine learning algorithms alone, due to the small volumes of training datasets relevant to the NEL task; however, their use can contribute to improving the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed in order to obtain a measure of the compatibility of attributes of various entities encountered in one context. The efficiency of the proposed algorithm was tested experimentally. A test dataset was independently generated, and on its basis the performance of the proposed algorithm was compared with the open-source product DBpedia Spotlight, which solves the NEL problem. The mockup based on the proposed algorithm ran slower than DBpedia Spotlight but showed higher accuracy, which justifies further work in this direction. The main directions of development are proposed in order to increase the accuracy and the performance of the system.
A cluster algorithm for graphs
S. van Dongen
2000-01-01
A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm essentially provides an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be weighted (with nonnegative weights)
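The MCL process can be sketched in a few lines; the graph below (two triangles joined by one edge) and the parameter choices are illustrative:

```python
# A small pure-Python sketch of the MCL process: alternate expansion
# (matrix squaring) and inflation (entrywise powers followed by column
# rescaling) on a column-stochastic matrix until the flow settles.
# The toy graph is two triangles joined by one bridge edge (2-3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6

M = [[0.0] * n for _ in range(n)]
for i, j in edges:
    M[i][j] = M[j][i] = 1.0
for i in range(n):
    M[i][i] = 1.0                      # self-loops, as MCL recommends

def normalize(A):                      # make each column sum to one
    for j in range(n):
        s = sum(A[i][j] for i in range(n))
        for i in range(n):
            A[i][j] /= s

def expand(A):                         # flow spreads: A <- A @ A
    return [[sum(A[i][k] * A[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inflate(A, r=2.0):                 # strong flow is boosted, weak flow decays
    A = [[A[i][j] ** r for j in range(n)] for i in range(n)]
    normalize(A)
    return A

normalize(M)
for _ in range(20):
    M = inflate(expand(M))

# Each column ends up concentrated on an attractor row; columns sharing
# an attractor form a cluster. The bridge edge is pruned, splitting the
# graph into its two triangles.
cluster = [max(range(n), key=lambda i: M[i][j]) for j in range(n)]
```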
Fokkinga, M.M.
1992-01-01
An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as
Parallel Algorithms and Patterns
Energy Technology Data Exchange (ETDEWEB)
Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
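One of the listed patterns, the prefix scan, can be sketched as follows: a Hillis-Steele inclusive scan whose log-step rounds would run concurrently in a real parallel implementation (this example is not from the presentation):

```python
# A sketch of one classic parallel pattern, the inclusive prefix scan
# (Hillis-Steele form). Each of the log2(n) rounds could run all of its
# element updates concurrently; here the rounds are simulated sequentially.
def inclusive_scan(xs):
    out = list(xs)
    step = 1
    while step < len(out):
        # In a real parallel implementation every index i >= step performs
        # this update simultaneously, reading the previous round's values.
        prev = list(out)
        for i in range(step, len(out)):
            out[i] = prev[i - step] + prev[i]
        step *= 2
    return out

# prefix sums of 1..8
print(inclusive_scan([1, 2, 3, 4, 5, 6, 7, 8]))  # [1, 3, 6, 10, 15, 21, 28, 36]
```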
Wireless communications algorithmic techniques
Vitetta, Giorgio; Colavolpe, Giulio; Pancaldi, Fabrizio; Martin, Philippa A
2013-01-01
This book introduces the theoretical elements at the basis of various classes of algorithms commonly employed in the physical layer (and, in part, in MAC layer) of wireless communications systems. It focuses on single user systems, so ignoring multiple access techniques. Moreover, emphasis is put on single-input single-output (SISO) systems, although some relevant topics about multiple-input multiple-output (MIMO) systems are also illustrated.Comprehensive wireless specific guide to algorithmic techniquesProvides a detailed analysis of channel equalization and channel coding for wi
Algorithms for Reinforcement Learning
Szepesvari, Csaba
2010-01-01
Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'
The Application of New Optical Meteor Flux Routines to the 2014 May Camelopardalid Outburst
Blaauw, Rhiannon; Campbell-Brown, Margaret; Kingery, Aaron
2015-01-01
NASA's Meteoroid Environment Office (MEO) is charged with monitoring the meteoroid environment in near-Earth space for the protection of satellites and spacecraft. The MEO has recently established eight wide-field meteor cameras, four cameras each at two separate stations, to calculate automated meteor fluxes in the millimeter size range. Each camera consists of a 17 mm focal length Schneider lens on a Watec 902H2 Ultimate CCD video camera, producing a 21.7 x 15.5 degree field of view. This configuration has a limiting meteor magnitude of about +5. One station is located at Marshall Space Flight Center in Huntsville, Alabama and the other is 31.8 kilometers away at a school in Decatur, Alabama. Both single-station and double-station fluxes are calculated every morning using data from the previous night. The flux algorithms employed here differ from others currently in use in that they do not assume a single height for all meteors observed in the common camera volume. In the MEO system, the volume is broken up into a set of height intervals, with the collecting areas determined by the position of the active shower or sporadic source radiant. The flux per height interval is calculated and summed to obtain the total meteor flux. As the mass is also computed from the photometry, a mass flux can also be calculated. First, a weather algorithm indicates if sky conditions are clear enough to calculate fluxes, at which point a limiting magnitude algorithm is employed. The limiting magnitude algorithm performs a fit of stellar magnitudes versus camera intensities. The stellar limiting magnitude is derived from this and converted to a limiting meteor magnitude for the active shower or sporadic source. The fluxes are scaled to an average limiting magnitude throughout the night, and zenithal hourly rates (ZHRs) are output daily along with flux values. In addition to this process, results will be presented as applied to the 2014 May Camelopardalid outburst, using data from several
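The per-height-interval flux sum described above amounts to dividing each interval's meteor count by its collecting area and the clear observing time, then summing; the counts, areas, and hours below are hypothetical:

```python
# Hypothetical numbers sketching the per-height-interval flux sum described
# above: the common camera volume is split into height intervals, each with
# its own collecting area, and the interval fluxes are summed.
hours = 5.2                       # clear observing time for the night
intervals = [                     # (meteor count, collecting area in km^2)
    (3, 210.0),                   # e.g. an 80-90 km interval
    (7, 260.0),                   # e.g. 90-100 km
    (2, 305.0),                   # e.g. 100-110 km
]

# flux per interval = count / (area * time); total flux = sum over intervals
total_flux = sum(n / (area * hours) for n, area in intervals)
# total_flux is in meteors per km^2 per hour
```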
Directory of Open Access Journals (Sweden)
Tyler W. H. Backman
2018-01-01
Full Text Available Determination of internal metabolic fluxes is crucial for fundamental and applied biology because they map how carbon and electrons flow through metabolism to enable cell function. 13C Metabolic Flux Analysis (13C MFA) and Two-Scale 13C Metabolic Flux Analysis (2S-13C MFA) are two techniques used to determine such fluxes. Both operate on the simplifying approximation that metabolic flux from peripheral metabolism into central “core” carbon metabolism is minimal, and can be omitted when modeling isotopic labeling in core metabolism. The validity of this “two-scale” or “bow tie” approximation is supported both by the ability to accurately model experimental isotopic labeling data, and by experimentally verified metabolic engineering predictions using these methods. However, the boundaries of core metabolism that satisfy this approximation can vary across species, and across cell culture conditions. Here, we present a set of algorithms that (1) systematically calculate flux bounds for any specified “core” of a genome-scale model so as to satisfy the bow tie approximation and (2) automatically identify an updated set of core reactions that can satisfy this approximation more efficiently. First, we leverage linear programming to simultaneously identify the lowest fluxes from peripheral metabolism into core metabolism compatible with the observed growth rate and extracellular metabolite exchange fluxes. Second, we use Simulated Annealing to identify an updated set of core reactions that allow for a minimum of fluxes into core metabolism to satisfy these experimental constraints. Together, these methods accelerate and automate the identification of a biologically reasonable set of core reactions for use with 13C MFA or 2S-13C MFA, as well as provide for a substantially lower set of flux bounds for fluxes into the core as compared with previous methods.
We provide an open source Python implementation of these algorithms at https://github.com/JBEI/limitfluxtocore.
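A deliberately tiny illustration of the first idea, the linear program that finds the lowest peripheral-to-core influx compatible with the observed growth rate, is sketched below. In this one-pathway toy the LP optimum has a closed form; the actual tool at the URL above solves the general problem over a genome-scale model.

```python
# Toy version (not the limitfluxtocore code) of minimizing influx into core
# metabolism: steady state requires v_core + v_influx = growth_flux, with
# 0 <= v_core <= core_capacity and v_influx >= 0. The LP minimum is then:
def min_peripheral_influx(growth_flux, core_capacity):
    return max(0.0, growth_flux - core_capacity)

print(min_peripheral_influx(1.5, 1.0))  # core alone cannot sustain growth
print(min_peripheral_influx(0.8, 1.0))  # core suffices; influx can be zero
```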
New Optimization Algorithms in Physics
Hartmann, Alexander K
2004-01-01
Many physicists are not aware that they can solve their problems by applying optimization algorithms. Because the number of such algorithms is steadily increasing, many recently developed algorithms have not yet been presented comprehensively. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application; the algorithms selected cover concepts and methods ranging from statistical physics to optimization problems emerging in theoretical computer science.
Flux of Cadmium through Euphausiids
International Nuclear Information System (INIS)
Benayoun, G.; Fowler, S.W.; Oregioni, B.
1976-01-01
Flux of the heavy metal cadmium through the euphausiid Meganyctiphanes norvegica was examined. Radiotracer experiments showed that cadmium can be accumulated either directly from water or through the food chain. When comparing equilibrium cadmium concentration factors based on stable element measurements with those obtained from radiotracer experiments, it is evident that exchange between cadmium in the water and that in euphausiid tissue is a relatively slow process, indicating that, in the long term, ingestion of cadmium will probably be the more important route for the accumulation of this metal. Approximately 10% of cadmium ingested by euphausiids was incorporated into internal tissues when the food source was radioactive Artemia. After 1 month cadmium, accumulated directly from water, was found to be most concentrated in the viscera with lesser amounts in eyes, exoskeleton and muscle, respectively. Use of a simple model, based on the assumption that cadmium taken in by the organism must equal cadmium released plus that accumulated in tissue, allowed assessment of the relative importance of various metabolic parameters in controlling the cadmium flux through euphausiids. Fecal pellets, due to their relatively high rate of production and high cadmium content, accounted for 84% of the total cadmium flux through M. norvegica. Comparisons of stable cadmium concentrations in natural euphausiid food and the organism's resultant fecal pellets indicate that the cadmium concentration in ingested material was increased nearly 5-fold during its passage through the euphausiid. From comparisons of all routes by which cadmium can be released from M. norvegica to the water column, it is concluded that fecal pellet deposition represents the principal mechanism effecting the downward vertical transport of cadmium by this species. (author)
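The mass-balance model described here, in which cadmium taken in must equal cadmium released plus cadmium retained, can be illustrated with a toy budget; the route fractions below are placeholders, not the paper's measured values.

```python
# Back-of-the-envelope mass balance in the spirit of the model above.
# All fractions are illustrative, not data from the study.
def cadmium_budget(ingested, fractions):
    """Split an ingested amount across release/retention routes whose
    fractions must sum to 1 (e.g. fecal pellets, excretion, tissue)."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    return {route: ingested * f for route, f in fractions.items()}

budget = cadmium_budget(100.0, {"fecal_pellets": 0.84,
                                "excretion": 0.06,
                                "tissue": 0.10})
print(budget["fecal_pellets"])  # 84.0
```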
International Nuclear Information System (INIS)
Wiegand, W.J. Jr.; Bullis, R.H.; Mongeon, R.J.
1980-01-01
A flowmeter based on ion drift techniques was developed for measuring the rate of flow of a fluid through a given cross-section. Ion collectors are positioned on each side of, and immediately adjacent to, an ion source. When air flows axially through the region in which ions are produced and appropriate electric fields are maintained between the collectors, an electric current flows to each collector due to the net motion of the ions. The electric currents and voltages and other parameters which define the flow are combined in an electric circuit so that the flux of the fluid can be determined. (DN)
Ball, Stanley
1986-01-01
Presents a developmental taxonomy which promotes sequencing activities to enhance the potential of matching these activities with learner needs and readiness, suggesting that the order commonly found in the classroom needs to be inverted. The proposed taxonomy (story, skill, and algorithm) involves problem-solving emphasis in the classroom. (JN)
Ferguson, David L.; Henderson, Peter B.
1987-01-01
Designed initially for use in college computer science courses, the model and computer-aided instructional environment (CAIE) described helps students develop algorithmic problem solving skills. Cognitive skills required are discussed, and implications for developing computer-based design environments in other disciplines are suggested by…
Improved Approximation Algorithm for
Byrka, Jaroslaw; Li, S.; Rybicki, Bartosz
2014-01-01
We study the k-level uncapacitated facility location problem (k-level UFL) in which clients need to be connected with paths crossing open facilities of k types (levels). In this paper we first propose an approximation algorithm that for any constant k, in polynomial time, delivers solutions of
Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko
2013-01-01
In biomolecular systems (especially all-atom models) with many degrees of freedom, such as proteins and nucleic acids, there exist an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in these local minima. Enhanced conformational sampling techniques are thus in great demand. A simulation in a generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely the multicanonical algorithm, simulated tempering, and the replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
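As one concrete piece of the replica-exchange method reviewed above, the standard Metropolis acceptance test for swapping configurations between neighboring temperatures can be sketched as follows (a textbook formula with the Boltzmann constant set to 1, not code from this review):

```python
import math
import random

# Accept a swap between replicas at temperatures T_i, T_j holding
# configurations with potential energies E_i, E_j, with probability
# min(1, exp(delta)), where delta = (1/T_i - 1/T_j) * (E_i - E_j).
def accept_exchange(E_i, E_j, T_i, T_j, rng=random.Random(0)):
    delta = (1.0 / T_i - 1.0 / T_j) * (E_i - E_j)
    return delta >= 0 or rng.random() < math.exp(delta)

print(accept_exchange(2.0, 1.0, 1.0, 2.0))  # True: delta >= 0, always accepted
```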
DEFF Research Database (Denmark)
This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all...
Algorithmic information theory
Grünwald, P.D.; Vitányi, P.M.B.; Adriaans, P.; van Benthem, J.
2008-01-01
We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining 'information'. We discuss the extent to which Kolmogorov's and Shannon's information theory have a common purpose, and where they are
Algorithmic information theory
Grünwald, P.D.; Vitányi, P.M.B.
2008-01-01
We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining `information'. We discuss the extent to which Kolmogorov's and Shannon's information theory have a common purpose, and where they are
Indian Academy of Sciences (India)
Resonance – Journal of Science Education, Volume 1, Issue 9. Introduction to Algorithms: Turtle Graphics. R K Shyamasundar, Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.
Modular Regularization Algorithms
DEFF Research Database (Denmark)
Jacobsen, Michael
2004-01-01
The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into independent modules. These modules are then combined to form new regularization algorithms with other properties than those we started out with. Several variations are tested using the Matlab toolbox MOORe Tools created in connection with this thesis. Object-oriented programming techniques are explained and used to set up the ill-posed problems in the toolbox. Hereby, we are able to write regularization algorithms that automatically exploit structure in the ill-posed problem without being rewritten explicitly. We explain how to implement a stopping criterion for a parameter choice method based upon...
Algorithms for SCC Decomposition
J. Barnat; J. Chaloupka (Jakub); J.C. van de Pol (Jaco)
2008-01-01
We study and improve the OBF technique [Barnat, J. and P. Moravec, Parallel algorithms for finding SCCs in implicitly given graphs, in: Proceedings of the 5th International Workshop on Parallel and Distributed Methods in Verification (PDMC 2006), LNCS (2007)], which was used in
An Explicit Upwind Algorithm for Solving the Parabolized Navier-Stokes Equations
Korte, John J.
1991-01-01
An explicit, upwind algorithm was developed for the direct (noniterative) integration of the 3-D Parabolized Navier-Stokes (PNS) equations in a generalized coordinate system. The new algorithm uses upwind approximations of the numerical fluxes for the pressure and convection terms, obtained by combining flux difference splittings (FDS) formed from the solution of an approximate Riemann problem (RP). The approximate RP is solved using an extension of the method developed by Roe for steady supersonic flow of an ideal gas. Roe's method is extended for use with the 3-D PNS equations expressed in generalized coordinates and to include Vigneron's technique of splitting the streamwise pressure gradient. The difficulty associated with applying Roe's scheme in the subsonic region is overcome. The second-order upwind differencing of the flux derivatives is obtained by adding FDS to either an original forward or backward differencing of the flux derivative. This approach is used to modify an explicit MacCormack differencing scheme into an upwind differencing scheme. The second-order upwind flux approximations, applied with flux limiters, provide a method for numerically capturing shocks without the need for additional artificial damping terms that require adjustment by the user. In addition, a cubic equation is derived for determining Vigneron's pressure-splitting coefficient using the updated streamwise flux vector. Decoding the streamwise flux vector with the updated value of Vigneron's pressure splitting improves the stability of the scheme. The new algorithm is applied to 2-D and 3-D supersonic and hypersonic laminar flow test cases. Results are presented for the experimental studies of Holden and of Tracy. In addition, a flow field solution is presented for a generic hypersonic aircraft at a Mach number of 24.5 and angle of attack of 1 degree. The computed results compare well with both experimental data and numerical results from other algorithms. Computational times required
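The flux-difference-splitting idea underlying such upwind schemes can be illustrated in its simplest setting, first-order upwinding for scalar linear advection. The paper's algorithm treats the full 3-D PNS equations; this scalar analogue is illustrative only.

```python
# Minimal sketch, assuming linear advection u_t + a*u_x = 0 on a periodic
# grid: the upwind flux difference is taken from the side the wave comes from.
def upwind_step(u, a, dt, dx):
    """One explicit first-order upwind step with periodic boundaries."""
    n = len(u)
    out = [0.0] * n
    for i in range(n):
        if a >= 0:
            out[i] = u[i] - a * dt / dx * (u[i] - u[i - 1])
        else:
            out[i] = u[i] - a * dt / dx * (u[(i + 1) % n] - u[i])
    return out

u = [0.0, 1.0, 0.0, 0.0]
print(upwind_step(u, a=1.0, dt=0.5, dx=1.0))  # [0.0, 0.5, 0.5, 0.0]
```

With a CFL number a*dt/dx at or below 1, the scheme is stable and, unlike centered differencing, needs no added artificial damping.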
Python algorithms mastering basic algorithms in the Python language
Hetland, Magnus Lie
2014-01-01
Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc
Tokamak disruption heat flux simulator
International Nuclear Information System (INIS)
Langhoff, M.; Hess, G.; Gahl, J.; Ingram, R.
1990-01-01
A coaxial plasma gun system, operating in the deflagration mode, has been built and fired at the University of New Mexico. This system, powered by a 100 kJ capacitor bank, was designed to give a variable pulse length of approximately 50-100 μs. The gun is intended to deliver to a target an energy deposition density of 1 kJ/cm² via impact with a deuterium plasma possessing a highly directed energy. This system should simulate on the target, over an area of approximately 10 cm², the heat flux of a tokamak plasma disruption on plasma-facing components. Current diagnostics for the system are rather rudimentary but sufficient for determination of plasma pulse characteristics and energy transfer to target. Electrical measurements include bank voltage measured via resistive voltage dividers, and bank current measured via Rogowski coil. The shape of the plasma, its position relative to the target area, and the final impact area, is determined via open-shutter photography and the use of witness plates. Total energy deposited onto targets will be determined through simple calorimetry and careful target mass measurements. Preliminary results describing the ablation of carbon targets exposed to disruption-like heat fluxes will be presented as well as a description of the experimental apparatus
Neutron flux enhancement at LASREF
International Nuclear Information System (INIS)
Sommer, W.F.; Ferguson, P.D.; Wechsler, M.S.
1991-01-01
The accelerator at the Los Alamos Meson Physics Facility produces a 1-mA beam of protons at an energy of 800 MeV. Since 1985, the Los Alamos Spallation Radiation Effects Facility (LASREF) has made use of the neutron flux that is generated as the incident protons interact with the nuclei in targets and a copper beam stop. A variety of basic and applied experiments in radiation damage and radiation effects have been completed. Recent studies indicate that the flux at LASREF can be increased by at least a factor of ten from the present level of about 5 × 10¹⁷ m⁻² s⁻¹. This requires changing the beam-stop material from Cu to W and optimizing the geometry of the beam-target interaction region. These studies are motivated by the need for a large volume, high energy, and high intensity neutron source in the development of materials for advanced energy concepts such as fusion reactors. 18 refs., 7 figs., 2 tabs
Fast autodidactic adaptive equalization algorithms
Hilal, Katia
Autodidactic (blind) equalization by adaptive filtering is addressed in a mobile radio communication context. A general method, based on an adaptive stochastic-gradient Bussgang-type algorithm, is used to derive two low-cost algorithms: one equivalent to the initial algorithm, and the other with improved convergence properties thanks to a block criterion minimization. Two starting algorithms are reworked: the Godard algorithm and the decision-controlled algorithm. Using a normalization procedure, and block normalization, their performance is improved and their common points are evaluated. These common points are used to propose an algorithm that retains the advantages of both initial algorithms: it inherits the robustness of the Godard algorithm and the precision and phase correction of the decision-controlled algorithm. The work is completed by a study of the stable states of Bussgang-type algorithms and of the stability of the Godard algorithms, initial and normalized. Simulations of these algorithms, carried out in a mobile radio communications context under severe propagation-channel conditions, showed a 75% reduction in the number of samples required for processing relative to the initial algorithms; the improvement in residual error was much smaller. These performances come close to making autodidactic equalization usable in mobile radio systems.
A MEDLINE categorization algorithm
Directory of Open Access Journals (Sweden)
Gehanno Jean-Francois
2006-02-01
Full Text Available Abstract Background Categorization is designed to enhance resource description by organizing content description so as to enable the reader to grasp quickly and easily what are the main topics discussed in it. The objective of this work is to propose a categorization algorithm to classify a set of scientific articles indexed with the MeSH thesaurus, and in particular those of the MEDLINE bibliographic database. In a large bibliographic database such as MEDLINE, finding materials of particular interest to a specialty group, or relevant to a particular audience, can be difficult. The categorization refines the retrieval of indexed material. In the CISMeF terminology, metaterms can be considered as super-concepts. They were primarily conceived to improve recall in the CISMeF quality-controlled health gateway. Methods The MEDLINE categorization algorithm (MCA) is based on semantic links existing between MeSH terms and metaterms on the one hand and between MeSH subheadings and metaterms on the other hand. These links are used to automatically infer a list of metaterms from any MeSH term/subheading indexing. Medical librarians manually select the semantic links. Results The MEDLINE categorization algorithm lists the medical specialties relevant to a MEDLINE file by decreasing order of their importance. The MEDLINE categorization algorithm is available on a Web site. It can run on any MEDLINE file in a batch mode. As an example, the top 3 medical specialties for the set of 60 articles published in BioMed Central Medical Informatics & Decision Making, which are currently indexed in MEDLINE are: information science, organization and administration and medical informatics. Conclusion We have presented a MEDLINE categorization algorithm in order to classify the medical specialties addressed in any MEDLINE file in the form of a ranked list of relevant specialties. The categorization method introduced in this paper is based on the manual indexing of resources
Reactive Collision Avoidance Algorithm
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
A MEDLINE categorization algorithm
Darmoni, Stefan J; Névéol, Aurelie; Renard, Jean-Marie; Gehanno, Jean-Francois; Soualmia, Lina F; Dahamna, Badisse; Thirion, Benoit
2006-01-01
Background Categorization is designed to enhance resource description by organizing content description so as to enable the reader to grasp quickly and easily what are the main topics discussed in it. The objective of this work is to propose a categorization algorithm to classify a set of scientific articles indexed with the MeSH thesaurus, and in particular those of the MEDLINE bibliographic database. In a large bibliographic database such as MEDLINE, finding materials of particular interest to a specialty group, or relevant to a particular audience, can be difficult. The categorization refines the retrieval of indexed material. In the CISMeF terminology, metaterms can be considered as super-concepts. They were primarily conceived to improve recall in the CISMeF quality-controlled health gateway. Methods The MEDLINE categorization algorithm (MCA) is based on semantic links existing between MeSH terms and metaterms on the one hand and between MeSH subheadings and metaterms on the other hand. These links are used to automatically infer a list of metaterms from any MeSH term/subheading indexing. Medical librarians manually select the semantic links. Results The MEDLINE categorization algorithm lists the medical specialties relevant to a MEDLINE file by decreasing order of their importance. The MEDLINE categorization algorithm is available on a Web site. It can run on any MEDLINE file in a batch mode. As an example, the top 3 medical specialties for the set of 60 articles published in BioMed Central Medical Informatics & Decision Making, which are currently indexed in MEDLINE are: information science, organization and administration and medical informatics. Conclusion We have presented a MEDLINE categorization algorithm in order to classify the medical specialties addressed in any MEDLINE file in the form of a ranked list of relevant specialties. The categorization method introduced in this paper is based on the manual indexing of resources with MeSH (terms
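The inference step described in this abstract, from MeSH indexing to a ranked list of metaterms via manually curated semantic links, can be sketched as follows; the link table below is invented for illustration and is not CISMeF data.

```python
# Hypothetical sketch of the categorization idea: map each MeSH term to
# metaterms (medical specialties) via a curated link table, then rank
# specialties by how many indexed terms point to them.
from collections import Counter

MESH_TO_METATERMS = {  # invented links, for illustration only
    "Medical Informatics": ["information science", "medical informatics"],
    "Decision Making": ["medical informatics"],
    "Hospital Administration": ["organization and administration"],
}

def categorize(mesh_terms):
    """Return metaterms in decreasing order of importance (link count)."""
    counts = Counter()
    for term in mesh_terms:
        counts.update(MESH_TO_METATERMS.get(term, []))
    return [metaterm for metaterm, _ in counts.most_common()]

print(categorize(["Medical Informatics", "Decision Making"]))
# ['medical informatics', 'information science']
```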
YANA – a software tool for analyzing flux modes, gene-expression and enzyme activities
Directory of Open Access Journals (Sweden)
Engels Bernd
2005-06-01
Full Text Available Abstract Background A number of algorithms for steady state analysis of metabolic networks have been developed over the years. Of these, Elementary Mode Analysis (EMA) has proven especially useful. Despite its low user-friendliness, METATOOL as a reliable high-performance implementation of the algorithm has been the instrument of choice up to now. As reported here, the analysis of metabolic networks has been improved by an editor and analyzer of metabolic flux modes. Analysis routines for expression levels and the most central, well connected metabolites and their metabolic connections are of particular interest. Results YANA features a platform-independent, dedicated toolbox for metabolic networks with a graphical user interface to calculate (integrating METATOOL), edit (including support for the SBML format), visualize, centralize, and compare elementary flux modes. Further, YANA calculates expected flux distributions for a given Elementary Mode (EM) activity pattern and vice versa. Moreover, a dissection algorithm, a centralization algorithm, and an average diameter routine can be used to simplify and analyze complex networks. Proteomics or gene expression data give a rough indication of some individual enzyme activities, whereas the complete flux distribution in the network is often not known. As such data are noisy, YANA features a fast evolutionary algorithm (EA) for the prediction of EM activities with minimum error, including alerts for inconsistent experimental data. We offer the possibility to include further known constraints (e.g. growth constraints) in the EA calculation process. The redox metabolism around glutathione reductase serves as an illustration example. All software and documentation are available for download at http://yana.bioapps.biozentrum.uni-wuerzburg.de. Conclusion A graphical toolbox and an editor for METATOOL as well as a series of additional routines for metabolic network analyses constitute a new user
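One direction of the EM-activity computation mentioned above is straightforward to sketch: given elementary modes as flux vectors over the reactions and an activity for each mode, the expected flux distribution is their weighted sum (toy numbers, not YANA code).

```python
# Sketch of computing a flux distribution from Elementary Mode activities:
# each mode is a flux vector over reactions; the network flux is the
# activity-weighted sum of the modes.
def flux_from_em_activities(modes, activities):
    n_reactions = len(modes[0])
    flux = [0.0] * n_reactions
    for mode, activity in zip(modes, activities):
        for i, v in enumerate(mode):
            flux[i] += activity * v
    return flux

# Two modes over three reactions, with activities 2.0 and 3.0.
print(flux_from_em_activities([[1, 1, 0], [0, 1, 1]], [2.0, 3.0]))
# [2.0, 5.0, 3.0]
```

The inverse direction, estimating EM activities from noisy flux or expression data, is the harder problem the abstract's evolutionary algorithm addresses.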
Methods and applications in high flux neutron imaging
International Nuclear Information System (INIS)
Ballhausen, H.
2007-01-01
This treatise develops new methods for high flux neutron radiography and high flux neutron tomography and describes some of their applications in actual experiments. Instead of single images, time series can be acquired with short exposure times due to the available high intensity. To best use the increased amount of information, new estimators are proposed, which extract accurate results from the recorded ensembles, even if the individual piece of data is very noisy and in addition severely affected by systematic errors such as an influence of gamma background radiation. The spatial resolution of neutron radiographies, usually limited by beam divergence and inherent resolution of the scintillator, can be significantly increased by scanning the sample with a pinhole-micro-collimator. This technique circumvents any limitations in present detector design and, due to the available high intensity, could be successfully tested. Imaging with scattered neutrons as opposed to conventional total attenuation based imaging determines separately the absorption and scattering cross sections within the sample. For the first time even coherent angle dependent scattering could be visualized space-resolved. New applications of high flux neutron imaging are presented, such as materials engineering experiments on innovative metal joints, time-resolved tomography on multilayer stacks of fuel cells under operation, and others. A new implementation of an algorithm for the algebraic reconstruction of tomography data executes even in case of missing information, such as limited angle tomography, and returns quantitative reconstructions. The setup of the world-leading high flux radiography and tomography facility at the Institut Laue-Langevin is presented. A comprehensive appendix covers the physical and technical foundations of neutron imaging. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Smirnov, A; Alekseev, G [SI 'Arctic and Antarctic Research Institute', St. Petersburg (Russian Federation); Korablev, A; Esau, I, E-mail: avsmir@aari.nw.r [Nansen Environmental and Remote Sensing Centre, Bergen (Norway)]
2010-08-15
The Nordic Seas are an important area of the World Ocean where warm Atlantic waters penetrate far north, forming the mild climate of Northern Europe. These waters represent the northern rim of the global thermohaline circulation. Estimates of the relationships between the net heat flux and mixed layer properties in the Nordic Seas are examined. Oceanographic data are derived from the Oceanographic Data Base (ODB) compiled in the Arctic and Antarctic Research Institute. Ocean weather ship 'Mike' (OWS) data are used to calculate radiative and turbulent components of the net heat flux. The net shortwave flux was calculated using a satellite albedo dataset and the EPA model. The net longwave flux was estimated by the Southampton Oceanography Centre (SOC) method. Turbulent fluxes at the air-sea interface were calculated using the COARE 3.0 algorithm. The net heat flux was calculated by using oceanographic and meteorological data of the OWS 'Mike'. The mixed layer depth was also estimated from the 'Mike' data for the period from 2002 to 2009. A good correlation between these two parameters has been found. Sensible and latent heat fluxes, controlled by the surface air temperature/sea surface temperature gradient, are the main contributors to the net heat flux. Significant correlation was found between heat flux variations at the OWS 'Mike' location and sea ice export from the Arctic Ocean.
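The net-flux bookkeeping used in such studies is simply the sum of the radiative and turbulent components; a minimal sketch follows, where the sign convention and the numbers are placeholders, not OWS 'Mike' values.

```python
# Net surface heat flux as the sum of its four components (W m^-2).
# Convention assumed here: fluxes into the ocean are positive.
def net_heat_flux(q_shortwave, q_longwave, q_sensible, q_latent):
    return q_shortwave + q_longwave + q_sensible + q_latent

print(net_heat_flux(180.0, -60.0, -40.0, -90.0))  # -10.0 (net ocean heat loss)
```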
Genetic Algorithms and Local Search
Whitley, Darrell
1996-01-01
The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
Genetic Algorithm for Optimization: Preprocessor and Algorithm
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
Genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known or determined a priori for all problems. Depending on the problem at hand, these parameters need to be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best for the problem. We also stress the need for such a preprocessor, both for quality (error) and for cost (complexity), in producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature/character of the function/system, the search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA without having or using knowledge of the character of the system, we consciously do a much better job of producing a solution by using the information generated in the very first step of the preprocessor. We therefore unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
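A minimal GA exposing the parameters such a preprocessor would tune (population size, crossover and mutation probabilities, fitness criterion) is sketched below on the OneMax problem; all settings are illustrative defaults, not preprocessor output.

```python
import random

# Minimal GA sketch: binary strings, elitism, truncation-style selection,
# one-point crossover, and bit-flip mutation. Parameters are illustrative.
def run_ga(fitness, n_bits=16, pop_size=30, p_cross=0.9, p_mut=0.02,
           generations=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = [row[:] for row in scored[:2]]          # elitism: keep best two
        while len(nxt) < pop_size:
            a, b = rng.sample(scored[:10], 2)         # select from the top 10
            if rng.random() < p_cross:                # one-point crossover
                cut = rng.randrange(1, n_bits)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutate
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = run_ga(sum)  # maximize the number of 1-bits (OneMax)
print(sum(best))
```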
Implementation of a parallel algorithm for spherical SN calculations on the IBM 3090
International Nuclear Information System (INIS)
Haghighat, A.; Lawrence, R.D.
1989-01-01
Parallel SN algorithms based on domain decomposition in angle are straightforward to develop in Cartesian geometry because the computation of the angular fluxes for a specific discrete ordinate can be performed independently of all other angles. This is not the case for curvilinear geometries, where the angular redistribution component of the discretized streaming operator results in coupling between angular fluxes along adjacent discrete ordinates. Previously, the authors developed a parallel algorithm for SN calculations in spherical geometry and examined its iterative convergence for criticality and detector problems with differing scattering/absorption ratios. In this paper, the authors describe the implementation of the algorithm on an IBM 3090 Model 400 (four processors) and present computational results illustrating the efficiency of the algorithm relative to serial execution
Algorithms for Global Positioning
DEFF Research Database (Denmark)
Borre, Kai; Strang, Gilbert
The emergence of satellite technology has changed the lives of millions of people. In particular, GPS has brought an unprecedented level of accuracy to the field of geodesy. This text is a guide to the algorithms and mathematical principles that account for the success of GPS technology, and replaces the authors' previous work, Linear Algebra, Geodesy, and GPS (1997). An initial discussion of the basic concepts, characteristics and technical aspects of different satellite systems is followed by the necessary mathematical content, which is presented in a detailed and self-contained fashion. At the heart of the matter are the positioning algorithms on which GPS technology relies, the discussion of which will affirm the mathematical contents of the previous chapters. Numerous ready-to-use MATLAB codes are included for the reader. This comprehensive guide will be invaluable for engineers...
Kramer, Oliver
2017-01-01
This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalisms and thus opens the subject to a broader audience in comparison to manuscripts overloaded by notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.
Aydemir, Bahar
2017-01-01
The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider (LHC) at CERN is composed of a large number of distributed hardware and software components. The TDAQ system consists of about 3000 computers and more than 25000 applications which, in a coordinated manner, provide the data-taking functionality of the overall system. A number of online services are required to configure, monitor and control ATLAS data taking. In particular, the configuration service provides the configuration of the above components. The configuration of the ATLAS data acquisition system is stored in an XML-based object database named OKS. The DAL (Data Access Library) allows C++, Java and Python clients to access its information in a distributed environment. Some information has a quite complicated structure, so its extraction requires writing special algorithms. The algorithms are available in C++ and have been partially reimplemented in Java. The goal of the projec...
Partitional clustering algorithms
2015-01-01
This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...
Fatigue Evaluation Algorithms: Review
DEFF Research Database (Denmark)
Passipoularidis, Vaggelis; Brøndsted, Povl
A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply by ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck...... series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio as well as against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of a wind turbine rotor...... blade construction. Two versions of the algorithm, the one using single-step and the other using incremental application of each load cycle (in case of ply failure) are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects...
Carbon dioxide, water vapour and energy fluxes over a semi ...
Indian Academy of Sciences (India)
vapour fluxes in Mangrove ecosystems, Sundarbans (India). The above observations are ... with the help of a PAR sensor. Soil heat flux plates were used for the measurement of soil heat flux. ... where Rn is net radiation, G is the soil heat flux, H is the sensible heat flux and LE is the latent heat flux. We have ...
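The surface energy balance referenced in this abstract (Rn = G + H + LE) implies that the latent heat flux can be estimated as a residual of the other measured terms. A minimal sketch of that bookkeeping; the numeric values are purely illustrative and not taken from the paper:

```python
def latent_heat_flux(rn, g, h):
    """Residual latent heat flux LE = Rn - G - H (all fluxes in W/m^2)."""
    return rn - g - h

# Hypothetical midday values (illustrative only):
le = latent_heat_flux(450.0, 35.0, 120.0)
print(le)  # 295.0
```

In practice eddy-covariance sites measure H and LE directly and use this identity to check energy balance closure rather than to derive LE.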
Boosting foundations and algorithms
Schapire, Robert E
2012-01-01
Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.
Likelihood Inflating Sampling Algorithm
Entezari, Reihaneh; Craiu, Radu V.; Rosenthal, Jeffrey S.
2016-01-01
Markov Chain Monte Carlo (MCMC) sampling from a posterior distribution corresponding to a massive data set can be computationally prohibitive since producing one sample requires a number of operations that is linear in the data size. In this paper, we introduce a new communication-free parallel method, the Likelihood Inflating Sampling Algorithm (LISA), that significantly reduces computational costs by randomly splitting the dataset into smaller subsets and running MCMC methods independently ...
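The LISA idea described above can be sketched as follows: the data are split into K subsets and each worker runs an independent MCMC chain whose subset likelihood is raised to the power K, approximating the full-data posterior without communication. The model (normal mean with known variance), sampler settings, and data below are illustrative assumptions, not details from the paper:

```python
import math
import random

def inflated_loglik(theta, subset, k):
    # LISA-style inflation (sketch): the subset log-likelihood is scaled
    # by K, the number of subsets. Illustrative model: y_i ~ Normal(theta, 1).
    return k * sum(-0.5 * (y - theta) ** 2 for y in subset)

def metropolis(logpost, x0, n_iter, step, rng):
    """Plain random-walk Metropolis sampler."""
    x, lp = x0, logpost(x0)
    samples = []
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, step)
        lp_prop = logpost(prop)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

rng = random.Random(0)
data = [rng.gauss(2.0, 1.0) for _ in range(300)]
k = 3
subsets = [data[i::k] for i in range(k)]  # strided split of the iid data
# Each subset is sampled independently (communication-free):
chains = [metropolis(lambda t, s=s: inflated_loglik(t, s, k),
                     0.0, 2000, 0.2, rng) for s in subsets]
# Combine by averaging post-burn-in chain means (one simple combiner):
est = sum(sum(c[500:]) / len(c[500:]) for c in chains) / k
print(round(est, 1))
```

With a true mean of 2.0, each inflated subset posterior concentrates near its subset mean, so the combined estimate lands close to the full-data posterior mean.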
ALGORITHM OF OBJECT RECOGNITION
Directory of Open Access Journals (Sweden)
Loktev Alexey Alexeevich
2012-10-01
Full Text Available The second important problem to be resolved by the algorithm and its software, which performs automatic design of a complex closed-circuit television system, is the recognition of objects whose images are transmitted by the video camera. Since the image of almost any object depends on many factors, including its orientation with respect to the camera, lighting conditions, parameters of the registering system, and static and dynamic parameters of the object itself, it is quite difficult to formalize the image and represent it in the form of a certain mathematical model. Therefore, methods of computer-aided visualization depend substantially on the problems to be solved and can rarely be generalized. The majority of these methods are non-linear; therefore, increased computing power and more complex algorithms are needed to process the image. This paper covers the research of visual object recognition and the implementation of the algorithm in the form of a software application that operates in real-time mode.
Large scale tracking algorithms
Energy Technology Data Exchange (ETDEWEB)
Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
NEUTRON ALGORITHM VERIFICATION TESTING
Energy Technology Data Exchange (ETDEWEB)
COWGILL,M.; MOSBY,W.; ARGONNE NATIONAL LABORATORY-WEST
2000-07-19
Active well coincidence counter assays have been performed on uranium metal highly enriched in {sup 235}U. The data obtained in the present program, together with highly enriched uranium (HEU) metal data obtained in other programs, have been analyzed using two approaches, the standard approach and an alternative approach developed at BNL. Analysis of the data with the standard approach revealed that the form of the relationship between the measured reals and the {sup 235}U mass varied, being sometimes linear and sometimes a second-order polynomial. In contrast, application of the BNL algorithm, which takes into consideration the totals, consistently yielded linear relationships between the totals-corrected reals and the {sup 235}U mass. The constants in these linear relationships varied with geometric configuration and level of enrichment. This indicates that, when the BNL algorithm is used, calibration curves can be established with fewer data points and with more certainty than if a standard algorithm is used. However, this potential advantage has only been established for assays of HEU metal. In addition, the method is sensitive to the stability of natural background in the measurement facility.
Stubbs, Allston Julius; Atilla, Halis Atil
2016-01-01
Summary Background Despite the rapid advancement of imaging and arthroscopic techniques about the hip joint, missed diagnoses are still common. Because the hip is a deep joint, localization of symptoms is more difficult than for the shoulder and knee joints. Hip pathology is not easily isolated and is often related to intra- and extra-articular abnormalities. In light of these diagnostic challenges, we recommend an algorithmic approach to effectively diagnose and treat hip pain. Methods In this review, hip pain is evaluated from diagnosis to treatment in a clear decision model. First we discuss emergency hip situations, followed by the differentiation of intra- and extra-articular causes of hip pain. We differentiate intra-articular hip pain as arthritic or non-arthritic, and extra-articular pain as generated by surrounding or remote tissue. Further, extra-articular hip pain is evaluated according to pain location. Finally we summarize the surgical treatment approach with an algorithmic diagram. Conclusion Diagnosis of hip pathology is difficult because the etiologies of pain are varied. An algorithmic approach to hip restoration, from diagnosis to rehabilitation, is crucial to successfully identify and manage hip pathologies. Level of evidence: V. PMID:28066734
An efficient algorithm for function optimization: modified stem cells algorithm
Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad
2013-03-01
In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO) and Artificial Bee Colony (ABC) algorithms can give near-optimal solutions to linear and non-linear problems in many applications; however, in some cases they can become trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells, and it successfully avoids the local optima problem. In this paper, we have made small changes to the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results demonstrate the superiority of the Modified Stem Cells Algorithm (MSCA).
Convex hull ranking algorithm for multi-objective evolutionary algorithms
Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.
2012-01-01
Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity
Pyrolytic graphite gauge for measuring heat flux
Bunker, Robert C. (Inventor); Ewing, Mark E. (Inventor); Shipley, John L. (Inventor)
2002-01-01
A gauge for measuring heat flux, especially heat flux encountered in a high temperature environment, is provided. The gauge includes at least one thermocouple and an anisotropic pyrolytic graphite body that covers at least part of, and optionally encases the thermocouple. Heat flux is incident on the anisotropic pyrolytic graphite body by arranging the gauge so that the gauge surface on which convective and radiative fluxes are incident is perpendicular to the basal planes of the pyrolytic graphite. The conductivity of the pyrolytic graphite permits energy, transferred into the pyrolytic graphite body in the form of heat flux on the incident (or facing) surface, to be quickly distributed through the entire pyrolytic graphite body, resulting in small substantially instantaneous temperature gradients. Temperature changes to the body can thereby be measured by the thermocouple, and reduced to quantify the heat flux incident to the body.
Local rectification of heat flux
Pons, M.; Cui, Y. Y.; Ruschhaupt, A.; Simón, M. A.; Muga, J. G.
2017-09-01
We present a chain-of-atoms model where heat is rectified, with different fluxes from the hot to the cold baths located at the chain boundaries when the temperature bias is reversed. The chain is homogeneous except for boundary effects and a local modification of the interactions at one site, the “impurity”. The rectification mechanism is due here to the localized impurity, the only asymmetrical element of the structure, apart from the externally imposed temperature bias, and does not rely on putting in contact different materials or other known mechanisms such as grading or long-range interactions. The effect survives if all interaction forces are linear except the ones for the impurity.
Nuclear transmutation by flux compression
International Nuclear Information System (INIS)
Seifritz, W.
2001-01-01
A new idea for the transmutation of minor actinides and long- (and even short-) lived fission products is presented. It is based on the property of neutron flux compression in nuclear (fast and/or thermal) reactors possessing spatially non-stationary critical masses. An advantage factor for the burn-up fluence of the elements to be transmuted of the order of 100 or more is obtainable compared with the classical way of transmutation. Three typical examples of such transmuters (a subcritical ring reactor with a rotating reflector, a subcritical ring reactor with a rotating spallation source (the so-called 'pulsed energy amplifier'), and a fast burn-wave reactor) are presented and analysed with regard to this purpose. (orig.) [de
Insects, infestations and nutrient fluxes
Michalzik, B.
2012-04-01
Forest ecosystems are characterized by a high temporal and spatial variability in the vertical transfer of energy and matter within the canopy and the soil compartment. The mechanisms and controlling factors behind canopy processes and system-internal transfer dynamics are imperfectly understood at the moment. Seasonal flux diversities and inhomogeneities in throughfall composition have been reported from coniferous and deciduous forests, and in most cases leaf leaching has been considered as principle driver for differences in the amount and quality of nutrients and organic compounds (Tukey and Morgan 1963). Since herbivorous insects and the processes they initiate received less attention in past times, ecologists now emphasize the need for linking biological processes occurring in different ecosystem strata to explain rates and variability of nutrient cycling (Bardgett et al. 1998, Wardle et al. 2004). Consequently, herbivore insects in the canopies of forests are increasingly identified to play an important role for the (re)cycling and availability of nutrients, or, more generally, for the functioning of ecosystems not only in outbreak situations but also at endemic (non-outbreak) density levels (Stadler et al. 2001, Hunter et al. 2003). Before, little attention was paid to insect herbivores when quantifying element and energy fluxes through ecosystems, although the numerous and different functions insects fulfill in ecosystems (e.g. as pollinators, herbivores or detritivores) were unanimously recognized (Schowalter 2000). Amongst the reasons for this restraint was the argument that the total biomass of insects tends to be relatively low compared to the biomass of trees or the pool of soil organic matter (Ohmart et al. 1983). A second argument which was put forward to justify the inferior role of insects in nutrient cycling were the supposed low defoliation losses between 5-10% of the annual leaf biomass, or net primary production, due to insect herbivory under
Multiobjective generalized extremal optimization algorithm for simulation of daylight illuminants
Kumar, Srividya Ravindra; Kurian, Ciji Pearl; Gomes-Borges, Marcos Eduardo
2017-10-01
Daylight illuminants are widely used as references for color quality testing and optical vision testing applications. Presently used daylight simulators make use of fluorescent bulbs that are not tunable and occupy more space inside the quality testing chambers. By designing a spectrally tunable LED light source with an optimal number of LEDs, cost, space, and energy can be saved. This paper describes an application of the generalized extremal optimization (GEO) algorithm for selection of the appropriate quantity and quality of LEDs that compose the light source. The multiobjective approach of this algorithm tries to get the best spectral simulation with minimum fitness error toward the target spectrum, correlated color temperature (CCT) the same as the target spectrum, high color rendering index (CRI), and luminous flux as required for testing applications. GEO is a global search algorithm based on phenomena of natural evolution and is especially designed to be used in complex optimization problems. Several simulations have been conducted to validate the performance of the algorithm. The methodology applied to model the LEDs, together with the theoretical basis for CCT and CRI calculation, is presented in this paper. A comparative result analysis of M-GEO evolutionary algorithm with the Levenberg-Marquardt conventional deterministic algorithm is also presented.
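The extremal-optimization step underlying GEO can be sketched as follows: each bit of a binary design vector is tentatively flipped, the flips are ranked by how little they degrade the fitness, and one bit is flipped for real with probability proportional to rank^(-tau). The toy bit-matching objective below stands in for the paper's spectral-matching objective; all names and parameter values are illustrative assumptions:

```python
import random

def geo_minimize(fitness, n_bits, tau=1.5, n_iter=400, rng=None):
    """Minimal GEO-style search over binary strings (illustrative sketch)."""
    rng = rng or random.Random()
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    best = x[:]
    # Selection probability ~ rank^(-tau); rank 1 = least-damaging flip.
    weights = [(r + 1) ** -tau for r in range(n_bits)]
    for _ in range(n_iter):
        deltas = []
        for i in range(n_bits):
            x[i] ^= 1                      # tentative flip
            deltas.append((fitness(x), i))
            x[i] ^= 1                      # undo
        deltas.sort()                      # best (lowest-cost) flips first
        i = rng.choices([i for _, i in deltas], weights=weights)[0]
        x[i] ^= 1                          # flip the chosen bit for real
        if fitness(x) < fitness(best):
            best = x[:]
    return best

# Toy objective: Hamming distance to a hypothetical target configuration.
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
fit = lambda x: sum(a != b for a, b in zip(x, target))
best = geo_minimize(fit, len(target), rng=random.Random(7))
print(fit(best))
```

Small tau makes the search more random; large tau makes it more greedy, which is the single control knob that makes GEO attractive for complex design spaces such as LED selection.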
Heat Flux Inhibition by Whistlers: Experimental Confirmation
International Nuclear Information System (INIS)
Eichler, D.
2002-01-01
Heat flux in weakly magnetized collisionless plasma is, according to theoretical predictions, limited by whistler turbulence that is generated by heat flux instabilities near threshold. Observations of solar wind electrons by Gary and coworkers appear to confirm the limit on heat flux as being roughly the product of the magnetic energy density and the electron thermal velocity, in agreement with prediction (Pistinner and Eichler 1998)
Study on characteristic points of boiling curve by using wavelet analysis and genetic algorithm
International Nuclear Information System (INIS)
Wei Huiming; Su Guanghui; Qiu Suizheng; Yang Xingbo
2009-01-01
Based on the wavelet analysis theory of signal singularity detection, the critical heat flux (CHF) and the minimum film boiling starting point (q_min) of boiling curves can be detected and analyzed using wavelet multi-resolution analysis. To predict the CHF in engineering applications, empirical relations were obtained based on a genetic algorithm. The results of wavelet detection and genetic algorithm prediction agree very well with experimental data. (authors)
Iterative Algorithms for Nonexpansive Mappings
Directory of Open Access Journals (Sweden)
Yao Yonghong
2008-01-01
Full Text Available Abstract We suggest and analyze two new iterative algorithms for a nonexpansive mapping in Banach spaces. We prove that the proposed iterative algorithms converge strongly to a fixed point of the mapping.
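A classical iterative scheme for nonexpansive mappings of the kind analyzed here is the Krasnoselskii-Mann averaged iteration x_{n+1} = (1 - a) x_n + a T(x_n); the example below is a generic illustration of that scheme, not the specific algorithms proposed in the paper. The map T (rotation by 90 degrees in the plane, an isometry and hence nonexpansive) is an illustrative choice whose unique fixed point is the origin:

```python
def mann_iteration(T, x0, a=0.5, n_iter=100):
    """Krasnoselskii-Mann averaged iteration x_{n+1} = (1-a)x_n + a*T(x_n)."""
    x = x0
    for _ in range(n_iter):
        x = (1 - a) * x + a * T(x)
    return x

# Rotation by 90 degrees, modeled in the complex plane. Plain Picard
# iteration x_{n+1} = T(x_n) cycles forever; the averaged scheme converges.
T = lambda z: 1j * z
x = mann_iteration(T, 1.0 + 0.0j)
print(abs(x) < 1e-10)  # True: converged to the fixed point 0
```

Each averaged step shrinks the distance to the fixed point by |(1 - a) + a*i| ≈ 0.707, which is exactly the damping that the averaging term contributes.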
Gao, Nuo; Zhu, S. A.; He, Bin
2006-06-01
We have developed a new magnetic resonance electrical impedance tomography (MREIT) algorithm, the RSM-MREIT algorithm, for noninvasive imaging of the electrical conductivity distribution using only one component of magnetic flux density. The proposed RSM-MREIT algorithm uses the response surface methodology (RSM) algorithm to optimize the conductivity distribution by minimizing the errors between the measured and calculated magnetic flux densities. A series of computer simulations has been conducted to assess the performance of the proposed RSM-MREIT algorithm in estimating the electrical conductivity values of the scalp, the skull and the brain tissue in a three-shell piecewise homogeneous head model. Computer simulation studies were conducted in both a spherical and a realistic-geometry head model with a single variable (the brain-to-skull conductivity ratio) and three variables (the conductivities of the brain, the skull, and the scalp). The relative error between the target and estimated conductivity values was less than 12% for both the single-variable and three-variable simulations. These promising simulation results demonstrate the feasibility of the proposed RSM-MREIT algorithm in estimating electrical conductivity values in a piecewise homogeneous model of the human head, and suggest that the RSM-MREIT algorithm merits further investigation.
Dimensional reduction of a generalized flux problem
International Nuclear Information System (INIS)
Moroz, A.
1992-01-01
In this paper, a generalized flux problem with Abelian and non-Abelian fluxes is considered. In the Abelian case we shall show that the generalized flux problem for tight-binding models of noninteracting electrons on either a 2n- or a (2n + 1)-dimensional lattice can always be reduced to an n-dimensional hopping problem. A residual freedom in this reduction enables one to identify equivalence classes of hopping Hamiltonians which have the same spectrum. In the non-Abelian case, the reduction is not possible in general unless the flux tensor factorizes into an Abelian one times an element of the corresponding algebra
Foundations of genetic algorithms 1991
1991-01-01
Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems. This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition
Parallel Architectures and Bioinspired Algorithms
Pérez, José; Lanchares, Juan
2012-01-01
This monograph presents examples of best practices when combining bioinspired algorithms with parallel architectures. The book includes recent work by leading researchers in the field and offers a map with the main paths already explored and new ways towards the future. Parallel Architectures and Bioinspired Algorithms will be of value to both specialists in Bioinspired Algorithms, Parallel and Distributed Computing, as well as computer science students trying to understand the present and the future of Parallel Architectures and Bioinspired Algorithms.
Essential algorithms a practical approach to computer algorithms
Stephens, Rod
2013-01-01
A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures...
Efficient GPS Position Determination Algorithms
National Research Council Canada - National Science Library
Nguyen, Thao Q
2007-01-01
... differential GPS algorithm for a network of users. The stand-alone user GPS algorithm is a direct, closed-form, and efficient new position determination algorithm that exploits the closed-form solution of the GPS trilateration equations and works...
Recent results on howard's algorithm
DEFF Research Database (Denmark)
Miltersen, P.B.
2012-01-01
Howard’s algorithm is a fifty-year-old, generally applicable algorithm for sequential decision making in the face of uncertainty. It is routinely used in practice in numerous application areas that are so important that they usually go by their acronyms, e.g., OR, AI, and CAV. While Howard’s algorithm...
Multisensor estimation: New distributed algorithms
Directory of Open Access Journals (Sweden)
Plataniotis K. N.
1997-01-01
Full Text Available The multisensor estimation problem is considered in this paper. New distributed algorithms, which are able to locally process the information and which deliver identical results to those generated by their centralized counterparts are presented. The algorithms can be used to provide robust and computationally efficient solutions to the multisensor estimation problem. The proposed distributed algorithms are theoretically interesting and computationally attractive.
Rezaeian, P.; Ataenia, V.; Shafiei, S.
2017-12-01
In this paper, the flux of photons inside the irradiation cell of the Gammacell-220 is calculated using an analytical method based on multipole moment expansion. The flux of photons inside the irradiation cell is expressed as a function of monopole, dipole and quadrupole terms in the Cartesian coordinate system. For the source distribution of the Gammacell-220, the values of the multipole moments are specified by direct integration. To validate the presented method, the flux distribution inside the irradiation cell was determined by MCNP simulations as well as experimental measurements. To measure the flux inside the irradiation cell, Amber dosimeters were employed. The calculated values of the flux were in agreement with the values obtained by simulations and measurements, especially in the central zones of the irradiation cell. To show that the present method is a good approximation for determining the flux in the irradiation cell, the values of the multipole moments were also obtained by fitting the simulation and experimental data using the Levenberg-Marquardt algorithm. The present method leads to reasonable results for all source distributions, even those without any symmetry, which makes it a powerful tool for source load planning.
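Because low-order multipole moments enter a truncated flux model linearly, they can be recovered from sampled flux values by least squares. The sketch below uses a monopole-plus-dipole model, phi(r) = q/|r| + (p . r)/|r|^3 up to constants, and a plain linear solve in place of the Levenberg-Marquardt fit used in the paper; the geometry and moment values are illustrative assumptions, not the Gammacell-220 source:

```python
import numpy as np

def design_matrix(points):
    """Columns: monopole term 1/|r| and dipole terms r_i/|r|^3."""
    r = np.linalg.norm(points, axis=1)
    return np.column_stack([1.0 / r, points / r[:, None] ** 3])

rng = np.random.default_rng(1)
pts = rng.uniform(1.0, 3.0, size=(50, 3))       # hypothetical sample points
true = np.array([2.0, 0.3, -0.1, 0.5])          # [q, px, py, pz], illustrative
flux = design_matrix(pts) @ true                # noiseless synthetic flux
fit, *_ = np.linalg.lstsq(design_matrix(pts), flux, rcond=None)
print(np.allclose(fit, true))  # True
```

With noisy measurements or a nonlinear model parametrization, a Levenberg-Marquardt solver (as in the paper) replaces the linear solve, but the fitted quantities are the same multipole moments.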
Selfish Gene Algorithm Vs Genetic Algorithm: A Review
Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed
2016-11-01
Evolutionary algorithms (EAs) are among the algorithms inspired by nature, and within little more than a decade hundreds of papers have reported their successful application. This paper considers the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms, inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas proposed by the biologist Richard Dawkins in 1989. Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, its history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.
The Great Deluge Algorithm applied to a nuclear reactor core design optimization problem
International Nuclear Information System (INIS)
Sacco, Wagner F.; Oliveira, Cassiano R.E. de
2005-01-01
The Great Deluge Algorithm (GDA) is a local search algorithm introduced by Dueck. It is an analogy with a flood: the 'water level' rises continuously and the proposed solution must lie above the 'surface' in order to survive. The crucial parameter is the 'rain speed', which controls convergence of the algorithm similarly to Simulated Annealing's annealing schedule. This algorithm is applied to the reactor core design optimization problem, which consists in adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the average peak-factor in a 3-enrichment-zone reactor, considering restrictions on the average thermal flux, criticality and sub-moderation. This problem was previously attacked by the canonical genetic algorithm (GA) and by a Niching Genetic Algorithm (NGA). NGAs were designed to force the genetic algorithm to maintain a heterogeneous population throughout the evolutionary process, avoiding the phenomenon known as genetic drift, where all the individuals converge to a single solution. The results obtained by the Great Deluge Algorithm are compared to those obtained by both algorithms mentioned above. The three algorithms are submitted to the same computational effort and GDA reaches the best results, showing its potential for other applications in the nuclear engineering field as, for instance, the nuclear core reload optimization problem. One of the great advantages of this algorithm over the GA is that it does not require special operators for discrete optimization. (author)
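The 'water level' and 'rain speed' mechanism described above can be sketched as follows, cast here as minimization: a candidate move survives only if its cost lies below the current level, which is tightened by the rain speed after each accepted move. The toy quadratic objective stands in for the paper's reactor-design cost function, and all parameter values are illustrative assumptions:

```python
import random

def great_deluge(f, x0, level, rain_speed, step, n_iter, rng):
    """Minimal Great Deluge search for minimization (illustrative sketch)."""
    x, best = x0, x0
    for _ in range(n_iter):
        cand = x + rng.uniform(-step, step)
        if f(cand) < level:          # solution must stay above the "surface"
            x = cand
            level -= rain_speed      # the water keeps rising
        if f(x) < f(best):
            best = x
    return best

rng = random.Random(42)
best = great_deluge(lambda x: (x - 3.0) ** 2, x0=10.0, level=60.0,
                    rain_speed=0.05, step=0.5, n_iter=5000, rng=rng)
print(round(best, 2))
```

The rain speed plays the role that the cooling schedule plays in simulated annealing: too fast and the search is trapped early, too slow and convergence is wasteful.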
Self-organization in magnetic flux ropes
Lukin, Vyacheslav S.
2014-06-01
This cross-disciplinary special issue on 'Self-organization in magnetic flux ropes' follows in the footsteps of another collection of manuscripts dedicated to the subject of magnetic flux ropes, a volume on 'Physics of magnetic flux ropes' published in the American Geophysical Union's Geophysical Monograph Series in 1990 [1]. Twenty-four years later, this special issue, composed of invited original contributions highlighting ongoing research on the physics of magnetic flux ropes in astrophysical, space and laboratory plasmas, can be considered an update on our state of understanding of this fundamental constituent of any magnetized plasma. Furthermore, by inviting contributions from research groups focused on the study of the origins and properties of magnetic flux ropes in a variety of different environments, we have attempted to underline both the diversity of and the commonalities among magnetic flux ropes throughout the solar system and, indeed, the universe. So, what is a magnetic flux rope? The answer will undoubtedly depend on whom you ask. A flux rope can be as narrow as a few Larmor radii and as wide as the Sun (see, e.g., the contributions by Heli Hietala et al and by Angelous Vourlidas). As described below by Ward Manchester IV et al , they can stretch from the Sun to the Earth in the form of interplanetary coronal mass ejections. Or, as in the Swarthmore Spheromak Experiment described by David Schaffner et al , they can fit into a meter-long laboratory device tended by college students. They can be helical and line-tied (see, e.g., Walter Gekelman et al or J Sears et al ), or toroidal and periodic (see, e.g., John O'Bryan et al or Philippa Browning et al ). They can form in the low plasma beta environment of the solar corona (Tibor Török et al ), the order unity beta plasmas of the solar wind (Stefan Eriksson et al ) and the plasma pressure dominated stellar convection zones (Nicholas Nelson and Mark Miesch). In this special issue, Setthivoine You
Yousefi-Talouki, Arzhang; Pescetto, Paolo; Pellegrino, Gian-Mario Luigi
2017-01-01
This paper proposes a sensorless direct flux vector control scheme for synchronous reluctance motor drives. Torque is controlled at constant switching frequency, via the closed loop regulation of the stator flux linkage vector and of the current component in quadrature with it, using the stator flux oriented reference frame. A hybrid flux and position observer combines back-electromotive force integration with pulsating voltage injection around zero speed. Around zero speed, the position obse...
An Algorithmic Diversity Diet?
DEFF Research Database (Denmark)
Sørensen, Jannick Kirk; Schmidt, Jan-Hinrik
2016-01-01
diet system however triggers not only the classic discussion of the reach – distinctiveness balance for PSM, but also shows that ‘diversity’ is understood very differently in algorithmic recommender system communities than it is editorially and politically in the context of PSM. The design...... of a diversity diet system generates questions not just about editorial power, personal freedom and techno-paternalism, but also about the embedded politics of recommender systems as well as the human skills affiliated with PSM editorial work and the nature of PSM content....
Randomized Filtering Algorithms
DEFF Research Database (Denmark)
Katriel, Irit; Van Hentenryck, Pascal
2008-01-01
of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed......Filtering every global constraint of a CSP to arc consistency at every search step can be costly and solvers often compromise on either the level of consistency or the frequency at which arc consistency is enforced. In this paper we propose two randomized filtering schemes for dense instances...
OptFlux: an open-source software platform for in silico metabolic engineering
DEFF Research Database (Denmark)
Rocha, I.; Maia, P.; Evangelista, P.
2010-01-01
software aimed at being the reference computational application in the field. It is the first tool to incorporate strain optimization tasks, i.e., the identification of Metabolic Engineering targets, using Evolutionary Algorithms/Simulated Annealing metaheuristics or the previously proposed Opt...... to address industrial goals. However, the use of these methods has been restricted to bioinformaticians or other expert researchers. The main aim of this work is, therefore, to provide a user-friendly computational tool for Metabolic Engineering applications. Results: OptFlux is an open-source and modular...... algorithms. The software supports importing/exporting to several flat file formats and it is compatible with the SBML standard. OptFlux has a visualization module that allows the analysis of the model structure that is compatible with the layout information of Cell Designer, allowing the superimposition...
Differential harmony search algorithm to optimize PWRs loading pattern
International Nuclear Information System (INIS)
Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.
2013-01-01
Highlights: ► Exploiting the DHS algorithm in LP optimization reveals its flexibility, robustness and reliability. ► The upshot of our experiments with DHS shows that the search approach to the optimal LP is quick. ► On average, the final band width of DHS fitness values is narrow relative to HS and GHS. -- Abstract: The objective of this work is to develop a core loading optimization technique using the differential harmony search algorithm in the context of obtaining an optimal configuration of fuel assemblies in pressurized water reactors. To implement and evaluate the proposed technique, a differential harmony search nodal expansion package for 2-D geometry, DHSNEP-2D, is developed. The package includes two modules: in the first module differential harmony search (DHS) is implemented, and the nodal expansion code, which solves two-dimensional multi-group neutron diffusion equations using fourth-degree flux expansion with one node per fuel assembly, forms the second module. For evaluation of the DHS algorithm, classical harmony search (HS) and global-best harmony search (GHS) algorithms are also included in DHSNEP-2D in order to compare the outcomes of the techniques. For this purpose, two PWR test cases have been investigated to demonstrate the DHS algorithm's capability in obtaining a near optimal loading pattern. Results show that the convergence rate and execution times of DHS are quite promising, and that the method is reliable for the fuel management operation. Moreover, numerical results show the good performance of DHS relative to other competitive algorithms such as the genetic algorithm (GA), classical harmony search (HS) and global-best harmony search (GHS) algorithms
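The harmony search family compared in this abstract shares a common loop: improvise a new harmony coordinate-by-coordinate from a memory of stored solutions (with occasional pitch adjustment or random re-sampling) and replace the worst stored harmony if the new one is better. Below is a minimal sketch of the classical HS variant applied to an illustrative test function rather than a loading-pattern fitness; all parameter values are assumptions, not those of DHSNEP-2D:

```python
import random

def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=0):
    """Minimize f over a box using classical harmony search."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Harmony memory: hms random candidate vectors and their costs.
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    cost = [f(x) for x in hm]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:            # take the value from memory
                v = hm[rng.randrange(hms)][d]
                if rng.random() < par:         # pitch adjustment
                    v += rng.uniform(-bw, bw) * (hi - lo)
            else:                              # random re-sampling
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))    # clamp to the box
        c = f(new)
        worst = max(range(hms), key=lambda i: cost[i])
        if c < cost[worst]:                    # replace the worst harmony
            hm[worst], cost[worst] = new, c
    best = min(range(hms), key=lambda i: cost[i])
    return hm[best], cost[best]

# Illustrative fitness: the sphere function, optimum at the origin.
x, fx = harmony_search(lambda v: sum(t * t for t in v), dim=3, bounds=(-5.0, 5.0))
```

Broadly speaking, differential variants such as the DHS of this paper modify the pitch-adjustment step with difference-vector perturbations in the style of differential evolution, while the memory and replacement mechanics stay the same.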
Recognition algorithms in knot theory
International Nuclear Information System (INIS)
Dynnikov, I A
2003-01-01
In this paper the problem of constructing algorithms for comparing knots and links is discussed. A survey of existing approaches and basic results in this area is given. In particular, diverse combinatorial methods for representing links are discussed, the Haken algorithm for recognizing a trivial knot (the unknot) and a scheme for constructing a general algorithm (using Haken's ideas) for comparing links are presented, an approach based on representing links by closed braids is described, the known algorithms for solving the word problem and the conjugacy problem for braid groups are described, and the complexity of the algorithms under consideration is discussed. A new method of combinatorial description of knots is given together with a new algorithm (based on this description) for recognizing the unknot by using a procedure for monotone simplification. In the conclusion of the paper several problems are formulated whose solution could help to advance towards the 'algorithmization' of knot theory
Fast algorithm for Morphological Filters
International Nuclear Information System (INIS)
Lou Shan; Jiang Xiangqian; Scott, Paul J
2011-01-01
In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time consuming, especially for areal data, and not generally adopted in real practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent to the morphological opening/closing in theory. The algorithm depends on Delaunay triangulation, with time complexity O(n log n). In comparison to the naive algorithms it generates the opening and closing envelope without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well for both morphological profile and areal filters. Examples are presented to demonstrate the validity and superiority in efficiency of this algorithm over the naive algorithm.
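For intuition, the naive envelope computation that the alpha-shape method replaces can be sketched in one dimension: a closing is a dilation followed by an erosion with a flat structuring element, which fills valleys narrower than the element. This is an illustrative sketch only; the paper's areal filters operate on 2-D data with ball structuring elements and, as the abstract notes, the fast algorithm avoids this dilation-erosion combination entirely:

```python
def dilate(z, half):
    # Sliding maximum with a flat structuring element of width 2*half + 1.
    n = len(z)
    return [max(z[max(0, i - half):min(n, i + half + 1)]) for i in range(n)]

def erode(z, half):
    # Sliding minimum with the same flat structuring element.
    n = len(z)
    return [min(z[max(0, i - half):min(n, i + half + 1)]) for i in range(n)]

def closing(z, half):
    # Closing envelope: dilation followed by erosion.
    return erode(dilate(z, half), half)

# Toy profile: two peaks with a valley narrower than the element between them.
profile = [0, 0, 0, 0, 0, 5, 0, 0, 0, 5, 0, 0, 0, 0, 0]
env = closing(profile, 2)  # the valley between the peaks is filled
```

The naive version costs O(n * element width) per pass; the alpha-shape approach reaches the same envelope in O(n log n) via Delaunay triangulation.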
Flux compactifications, gauge algebras and De Sitter
Dibitetto, Giuseppe; Linares, Roman; Roest, Diederik
2010-01-01
The introduction of (non-)geometric fluxes allows for N = 1 moduli stabilisation in a De Sitter vacuum. The aim of this Letter is to assess to what extent this is true in N = 4 compactifications. First we identify the correct gauge algebra in terms of gauge and (non-)geometric fluxes. We then show
Neutron flux measurement by mobile detectors
International Nuclear Information System (INIS)
Verchain, M.
1987-01-01
Various incore instrumentation systems and their technological evolution are first reviewed. Then, for a 1300 MWe PWR nuclear power plant, temperature and neutron flux measurements are described. Mobile fission chambers, with their large measuring range and accurate location, allow good knowledge of the core. Other incore measurements are possible thanks to flux detector thimble tubes inserted in the reactor core [fr
EL-2 reactor: Thermal neutron flux distribution
International Nuclear Information System (INIS)
Rousseau, A.; Genthon, J.P.
1958-01-01
The flux distribution of thermal neutrons in the EL-2 reactor is studied. The reactor core and lattices are described, as well as the experimental reactor facilities, in particular the experimental channels and special facilities. The measurements show that the thermal neutron flux increases in the central channel when enriched uranium is used in place of natural uranium. However, the thermal neutron flux is not perturbed in the other reactor channels by the fuel modification. The macroscopic flux distribution is measured according to the radial positioning of fuel rods. The longitudinal neutron flux distribution in a fuel rod is also measured and shows no difference between enriched and natural uranium fuel rods. In addition, measurements of the flux distribution have been performed for rods containing other materials such as steel or aluminium. The neutron flux distribution is also studied in all the experimental channels as well as in the thermal column. The determination of the distribution of the thermal neutron flux in all experimental facilities, the thermal column and the fuel channels has been made with a heavy water level of 1825 mm and is given for an operating power of 1000 kW. (M.P.)
Increased heat fluxes near a forest edge
Klaassen, W; van Breugel, PB; Moors, EJ; Nieveen, JP
2002-01-01
Observations of sensible and latent heat flux above forest downwind of a forest edge show these fluxes to be larger than the available energy over the forest. The enhancement averages to 56 W m(-2), or 16% of the net radiation, at fetches less than 400 m, equivalent to fetch to height ratios less
Increased heat fluxes near a forest edge
Klaassen, W.; Breugel, van P.B.; Moors, E.J.; Nieveen, J.P.
2002-01-01
Observations of sensible and latent heat flux above forest downwind of a forest edge show these fluxes to be larger than the available energy over the forest. The enhancement averages to 56 W m(-2), or 16% of the net radiation, at fetches less than 400 m, equivalent to fetch to height ratios less than
Initiation of CMEs by Magnetic Flux Emergence
Indian Academy of Sciences (India)
The initiation of solar Coronal Mass Ejections (CMEs) is studied in the framework of numerical magnetohydrodynamics (MHD). The initial CME model includes a magnetic flux rope in spherical, axisymmetric geometry. The initial configuration consists of a magnetic flux rope embedded in a gravitationally stratified solar ...
Heat flux viscosity in collisional magnetized plasmas
Energy Technology Data Exchange (ETDEWEB)
Liu, C., E-mail: cliu@pppl.gov [Princeton University, Princeton, New Jersey 08544 (United States); Fox, W. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States); Bhattacharjee, A. [Princeton University, Princeton, New Jersey 08544 (United States); Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States)
2015-05-15
Momentum transport in collisional magnetized plasmas due to gradients in the heat flux, a “heat flux viscosity,” is demonstrated. Even though no net particle flux is associated with a heat flux, in a plasma there can still be momentum transport owing to the velocity dependence of the Coulomb collision frequency, analogous to the thermal force. This heat-flux viscosity may play an important role in numerous plasma environments, in particular, in strongly driven high-energy-density plasma, where strong heat flux can dominate over ordinary plasma flows. The heat flux viscosity can influence the dynamics of the magnetic field in plasmas through the generalized Ohm's law and may therefore play an important role as a dissipation mechanism allowing magnetic field line reconnection. The heat flux viscosity is calculated directly using the finite-difference method of Epperlein and Haines [Phys. Fluids 29, 1029 (1986)], which is shown to be more accurate than Braginskii's method [S. I. Braginskii, Rev. Plasma Phys. 1, 205 (1965)], and confirmed with one-dimensional collisional particle-in-cell simulations. The resulting transport coefficients are tabulated for ease of application.
Anthropogenic heat flux estimation from space
Chrysoulakis, Nektarios; Marconcini, Mattia; Gastellu-Etchegorry, Jean Philippe; Grimmond, C.S.B.; Feigenwinter, Christian; Lindberg, Fredrik; Del Frate, Fabio; Klostermann, Judith; Mitraka, Zina; Esch, Thomas; Landier, Lucas; Gabey, Andy; Parlow, Eberhard; Olofson, Frans
2016-01-01
H2020-Space project URBANFLUXES (URBan ANthropogenic heat FLUX from Earth observation Satellites) investigates the potential of Copernicus Sentinels to retrieve anthropogenic heat flux, as a key component of the Urban Energy Budget (UEB). URBANFLUXES advances the current knowledge of the impacts
ANthropogenic heat FLUX estimation from Space
Chrysoulakis, Nektarios; Marconcini, Mattia; Gastellu-Etchegorry, Jean Philippe; Grimmond, C.S.B.; Feigenwinter, Christian; Lindberg, Fredrik; Del Frate, Fabio; Klostermann, Judith; Mitraka, Zina; Esch, Thomas; Landier, Lucas; Gabey, Andy; Parlow, Eberhard; Olofson, Frans
2017-01-01
The H2020-Space project URBANFLUXES (URBan ANthropogenic heat FLUX from Earth observation Satellites) investigates the potential of Copernicus Sentinels to retrieve anthropogenic heat flux, as a key component of the Urban Energy Budget (UEB). URBANFLUXES advances the current knowledge of the
Optical magnetic flux generation in superconductor
Indian Academy of Sciences (India)
Abstract. The generation of the magnetic flux quanta inside the superconductors is studied as a new effect to destroy ... Ultrafast phenomena; femtosecond laser; optical magnetic flux generation. PACS Nos 85.25. ...
Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm
Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad
2018-01-01
Security is a very important issue in data transmission, and there are many methods to make files more secure. One of these methods is cryptography. Cryptography secures a file by writing a hidden code that covers the original file, so that people who are not party to the cryptography cannot decrypt the hidden code to read the original file. Among the many methods used in cryptography is the hybrid cryptosystem: a method that uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric algorithm's key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting the file with the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The result of this research is that, when the TEA algorithm encrypts the file, the ciphertext takes the form of characters from the ASCII (American Standard Code for Information Interchange) table written as hexadecimal numbers, and the ciphertext size increases by sixteen bytes for every eight characters added to the plaintext length.
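The symmetric half of the hybrid scheme can be illustrated with the standard 32-round TEA block cipher, which encrypts one 64-bit block (two 32-bit words) under a 128-bit key. The key and plaintext below are arbitrary examples, and the LUC step that would protect the key in the hybrid cryptosystem is omitted here:

```python
MASK = 0xFFFFFFFF          # keep all arithmetic modulo 2**32
DELTA = 0x9E3779B9         # TEA's key-schedule constant

def tea_encrypt(v, key):
    """Encrypt one 64-bit block (two 32-bit words) with a 128-bit key."""
    v0, v1 = v
    s = 0
    for _ in range(32):
        s = (s + DELTA) & MASK
        v0 = (v0 + ((((v1 << 4) & MASK) + key[0]) ^ ((v1 + s) & MASK)
                    ^ ((v1 >> 5) + key[1]))) & MASK
        v1 = (v1 + ((((v0 << 4) & MASK) + key[2]) ^ ((v0 + s) & MASK)
                    ^ ((v0 >> 5) + key[3]))) & MASK
    return v0, v1

def tea_decrypt(v, key):
    """Invert tea_encrypt by running the rounds in reverse."""
    v0, v1 = v
    s = (DELTA * 32) & MASK
    for _ in range(32):
        v1 = (v1 - ((((v0 << 4) & MASK) + key[2]) ^ ((v0 + s) & MASK)
                    ^ ((v0 >> 5) + key[3]))) & MASK
        v0 = (v0 - ((((v1 << 4) & MASK) + key[0]) ^ ((v1 + s) & MASK)
                    ^ ((v1 >> 5) + key[1]))) & MASK
        s = (s - DELTA) & MASK
    return v0, v1

key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)  # example 128-bit key
ct = tea_encrypt((0xDEADBEEF, 0xCAFEBABE), key)
pt = tea_decrypt(ct, key)   # round-trips back to the plaintext block
```

In the full hybrid design, only this block key would be encrypted with the (slower) asymmetric algorithm, while the bulk file data is processed block by block with TEA.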
Studies on Design Automation and Arithmetic Circuit Design for Single-Flux-Quantum Digital Circuits
小畑, 幸嗣; Obata, Koji
2008-01-01
Superconductive single-flux-quantum (SFQ) circuit technology attracts attention as a next-generation technology for integrated circuits because of its ultra-fast computation speed and low power consumption. In SFQ digital circuits, unlike CMOS digital circuits, a pulse is used as the carrier of information and the representation of logic values is different from that in CMOS digital circuits. Therefore, design automation algorithms and structures of arithmetic circuits suitable for SFQ digital cir...
Energy Technology Data Exchange (ETDEWEB)
El-Kharashi, Eyhab Aly, E-mail: EyhabElkharahi@hotmail.com [Faculty of Engineering, Electrical Power and Machines Department, Ain Shams University, 1 El-Sarayat Street, Abdou Basha Square, Abbassia 11517, Cairo (Egypt)
2011-11-15
Highlights: ► The paper uses the multi-circular rotor in the switched reluctance motor to increase its output torque and its efficiency. ► Finite elements are used to model the new SRM accurately. ► Matlab/Simulink is used to dynamically model the new SRM. ► The paper compares the torque capability of the multi-circular rotor SRM. ► The new SRM produces approximately double the torque of its equivalent conventional SRM. - Abstract: The paper introduces a new type of electrical machine which has significantly higher output torque. The toothed rotor in the conventional electrical machine is replaced by a multi-circular rotor to increase the saliency and to shorten the flux loops; consequently, the output torque increases. The paper presents the design steps of this new type of electrical machine and also examines its performance. In addition, the paper compares the percentage increase in output torque of the proposed new electric machine over its equivalent conventional motor. The paper then proceeds to discuss the relation between the switch-on angle and the maximum speed, the torque ripples, and the efficiency.
Crystal growth of emerald by flux method
International Nuclear Information System (INIS)
Inoue, Mikio; Narita, Eiichi; Okabe, Taijiro; Morishita, Toshihiko.
1979-01-01
Emerald crystals have been formed in two binary fluxes, Li2O-MoO3 and Li2O-V2O5, using the slow cooling method and the temperature gradient method under various conditions. In the Li2O-MoO3 flux, investigated in the range of 2-5 in molar ratio (MoO3/Li2O), emerald was crystallized in the temperature range from 750 to 950 °C, and the suitable crystallization conditions were found to be a molar ratio of 3-4 and a temperature of about 900 °C. In the Li2O-V2O5 flux, investigated in the range of 1.7-5 in molar ratio (V2O5/Li2O), emerald was crystallized in the temperature range from 900 to 1150 °C. The most suitable crystals were obtained at a molar ratio of 3 and in the temperature range of 1000-1100 °C. The crystallization temperature rose with an increase in the molar ratio in both fluxes. The emeralds grown in the two binary fluxes were transparent green, with a density of 2.68, a refractive index of 1.56, and two distinct bands in the visible spectrum at 430 and 600 nm. The emerald grown in the Li2O-V2O5 flux was more bluish green than that grown in the Li2O-MoO3 flux. The spontaneously nucleated emerald grown in the former flux was larger than that grown in the latter when crystallized by the slow cooling method. As for the solubility of beryl in the two fluxes, the Li2O-V2O5 flux was superior to the Li2O-MoO3 flux, whose small solubility of SiO2 posed an experimental problem for the temperature gradient method. The suitability of the two fluxes for the crystal growth of emerald by the flux method is discussed from the viewpoint of the above-mentioned properties of the two fluxes. (author)
Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John
2005-01-01
The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
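The binarization step described above can be sketched as follows. This is a hypothetical minimal version, not the CEM preprocessing code: the onshore reference direction and the sample grid values are illustrative assumptions.

```python
def binarize_wind(wind_dir_deg, onshore_from_deg=90.0):
    """Return 1 for onshore wind, 0 for offshore.

    wind_dir_deg: meteorological wind direction in degrees (the direction
    the wind blows FROM). onshore_from_deg is an assumed reference: for an
    east-facing coastline, onshore flow comes from the east (90 degrees).
    A wind within +/-90 degrees of that reference counts as onshore.
    """
    diff = (wind_dir_deg - onshore_from_deg + 180.0) % 360.0 - 180.0
    return 1 if abs(diff) < 90.0 else 0

# Hypothetical 2 x 3 grid of wind directions at one 5-minute time step;
# the result plays the role of d(i,j;n) for that n.
directions = [[80.0, 100.0, 270.0],
              [355.0, 170.0, 45.0]]
d = [[binarize_wind(w) for w in row] for row in directions]
```

The same transformation applied to forecast fields yields D(i,j;n), after which CEM can compare the two binary grids.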
Algorithmic Relative Complexity
Directory of Open Access Journals (Sweden)
Daniele Cerra
2011-04-01
Full Text Available Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov’s framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence) found in Shannon’s framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable the application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
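The compression-based approximation mentioned in the abstract can be sketched with any general-purpose compressor: the extra compressed length of x appended to y, C(y + x) - C(y), stands in for the resources needed to specify x in terms of y. The use of zlib and the sample strings below are illustrative choices, not the estimator of the article:

```python
import zlib

def clen(s: bytes) -> int:
    # Compressed length: a crude, computable stand-in for Kolmogorov complexity.
    return len(zlib.compress(s, 9))

def cross_complexity(x: bytes, y: bytes) -> int:
    # Approximate cost of describing x in terms of y: C(y + x) - C(y).
    # The compressor can reuse matches against y when encoding x.
    return clen(y + x) - clen(y)

x = b"Information content and compression are tightly related concepts."
y_similar = b"Information content and compression are closely related notions."
y_unrelated = bytes(range(1, 255))  # unrelated, high-entropy filler

near = cross_complexity(x, y_similar)    # x is cheap to describe via y_similar
far = cross_complexity(x, y_unrelated)   # y_unrelated provides no help
```

A string is cheaper to specify in terms of a similar string than an unrelated one, which is exactly the behaviour a compression-based divergence measure exploits for tasks such as authorship attribution.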
Fatigue evaluation algorithms: Review
Energy Technology Data Exchange (ETDEWEB)
Passipoularidis, V.A.; Broendsted, P.
2009-11-15
A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply by ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck, to model the degradation caused by failure events at ply level. Residual strength is incorporated as the fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio as well as against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of a wind turbine rotor blade construction. Two versions of the algorithm, one using single-step and the other using incremental application of each load cycle (in case of ply failure), are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)
Rabideau, Gregg R.; Chien, Steve A.
2010-01-01
AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribes available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. Thereby enabling shorter response times and greater autonomy for the system under control.
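Strict-priority selection from an oversubscribed goal set can be sketched with a simple greedy pass: goals are visited in decreasing priority and admitted only while the shared resource has room, so a lower-priority goal can never displace a higher-priority one. This is a simplified illustration of the selection criterion only, not the incremental AVA v2 algorithm; the goal names, priorities and capacity are hypothetical:

```python
def select_goals(goals, capacity):
    """Greedy strict-priority selection of resource-consuming goals.

    goals: list of (priority, resource_demand, name) tuples; a higher
    priority always wins and is never pre-empted by a lower one.
    capacity: total amount of the shared resource available.
    """
    chosen, used = [], 0
    for prio, demand, name in sorted(goals, key=lambda g: -g[0]):
        if used + demand <= capacity:   # admit only if resources remain
            chosen.append(name)
            used += demand
    return chosen

# Hypothetical oversubscribed goal set: total demand 11 against capacity 8.
goals = [(5, 4, "downlink"), (3, 3, "image_A"), (4, 2, "image_B"), (1, 2, "cal")]
picked = select_goals(goals, capacity=8)
```

Because the pass is cheap, it can be re-run whenever a goal is added, removed, or updated, which mirrors the "just-in-time" re-selection described above, although the flight software does this incrementally rather than from scratch.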
E-core transverse flux machine with integrated fault detection system
DEFF Research Database (Denmark)
Rasmussen, Peter Omand; Runólfsson, Gunnar; Thorsdóttir, Thórunn Ágústa
2011-01-01
The E-core transverse flux machine, which is a variation of the classical switched reluctance machine (SRM), has all the basic properties to be considered a very fault tolerant machine. Every single coil in the machine is isolated from the others magnetically, electrically and to some...... extent also thermally. Since the E-core transverse flux machine belongs to the family of SRMs it has the unique property of intervals without current in the windings. By careful investigation of the voltage and current in these intervals a very simple method to detect single and partial turn short...... circuit faults has been developed. For other types of machines the single and partial turn short circuit is very difficult to deal with and normally requires very comprehensive detection and calculation schemes. The developed detection algorithm combined with the E-core transverse flux machine...
Development of an Axial Flux MEMS BLDC Micromotor with Increased Efficiency and Power Density
Directory of Open Access Journals (Sweden)
Xiaofeng Ding
2015-06-01
Full Text Available This paper presents a rigorous design and optimization of an axial flux microelectromechanical systems (MEMS) brushless dc (BLDC) micromotor with dual rotor, improving both efficiency and power density, with an external diameter of only around 10 mm. The stator is made of two layers of windings by MEMS technology. The rotor is developed from film permanent magnets assembled over the rotor yoke. The characteristics of the MEMS micromotor are analyzed and modeled through a 3-D magnetic equivalent circuit (MEC) taking the leakage flux and fringing effect into account. Such a model yields a relatively accurate prediction of the flux in the air gap, back electromotive force (EMF) and electromagnetic torque, whilst being computationally efficient. Based on the 3-D MEC model, the multi-objective firefly algorithm (MOFA) is developed for the optimal design of this special machine. Both 3-D finite element (FE) simulation and experiments are employed to validate the MEC model and the MOFA optimization design.
Computational Platform for Flux Analysis Using 13C-Label Tracing- Phase I SBIR Final Report
Energy Technology Data Exchange (ETDEWEB)
Van Dien, Stephen J.
2005-04-12
Isotopic label tracing is a powerful experimental technique that can be combined with metabolic models to quantify metabolic fluxes in an organism under a particular set of growth conditions. In this work we constructed a genome-scale metabolic model of Methylobacterium extorquens, a facultative methylotroph with potential application in the production of useful chemicals from methanol. A series of labeling experiments were performed using 13C-methanol, and the resulting distribution of labeled carbon in the proteinogenic amino acids was determined by mass spectrometry. Algorithms were developed to analyze this data in context of the metabolic model, yielding flux distributions for wild-type and several engineered strains of M. extorquens. These fluxes were compared to those predicted by model simulation alone, and also integrated with microarray data to give an improved understanding of the metabolic physiology of this organism.
Directory of Open Access Journals (Sweden)
S. Metzger
2013-04-01
Full Text Available The goal of this study is to characterize the sensible (H) and latent (LE) heat exchange for different land covers in the heterogeneous steppe landscape of the Xilin River catchment, Inner Mongolia, China. Eddy-covariance flux measurements at 50–100 m above ground were conducted in July 2009 using a weight-shift microlight aircraft. Wavelet decomposition of the turbulence data enables a spatial discretization of 90 m of the flux measurements. For a total of 8446 flux observations during 12 flights, MODIS land surface temperature (LST) and enhanced vegetation index (EVI) in each flux footprint are determined. Boosted regression trees are then used to infer an environmental response function (ERF) between all flux observations (H, LE) and biophysical (LST, EVI) and meteorological drivers. Numerical tests show that ERF predictions covering the entire Xilin River catchment (≈3670 km2) are accurate to ≤18% (1 σ). The predictions are then summarized for each land cover type, providing individual estimates of source strength (36 W m−2 for H, 46 W m−2 for LE) and spatial variability (11 W m−2 for H, 14 W m−2 for LE) to a precision of ≤5%. Lastly, ERF predictions of land cover specific Bowen ratios are compared between subsequent flights at different locations in the Xilin River catchment. Agreement of the land cover specific Bowen ratios to within 12 ± 9% emphasizes the robustness of the presented approach. This study indicates the potential of ERFs for (i) extending airborne flux measurements to the catchment scale, (ii) assessing the spatial representativeness of long-term tower flux measurements, and (iii) designing, constraining and evaluating flux algorithms for remote sensing and numerical modelling applications.
Heat Flux Distribution of Antarctica Unveiled
Martos, Yasmina M.; Catalán, Manuel; Jordan, Tom A.; Golynsky, Alexander; Golynsky, Dmitry; Eagles, Graeme; Vaughan, David G.
2017-11-01
Antarctica is the largest reservoir of ice on Earth. Understanding its ice sheet dynamics is crucial to unraveling past global climate change and making robust climatic and sea level predictions. Of the basic parameters that shape and control ice flow, the most poorly known is geothermal heat flux. Direct observations of heat flux are difficult to obtain in Antarctica, and until now continent-wide heat flux maps have only been derived from low-resolution satellite magnetic and seismological data. We present a high-resolution heat flux map and associated uncertainty derived from spectral analysis of the most advanced continental compilation of airborne magnetic data. Small-scale spatial variability and features consistent with known geology are better reproduced than in previous models, between 36% and 50%. Our high-resolution heat flux map and its uncertainty distribution provide an important new boundary condition to be used in studies on future subglacial hydrology, ice sheet dynamics, and sea level change.
Flux Modulation in the Electrodynamic Loudspeaker
DEFF Research Database (Denmark)
Halvorsen, Morten; Tinggaard, Carsten; Agerkvist, Finn T.
2015-01-01
This paper discusses the effect of flux modulation in the electrodynamic loudspeaker with main focus on the effect on the force factor. A measurement setup to measure the AC flux modulation with static voice coil is explained, and the measurements show good consistency with FEA simulations....... Measurements of the generated AC flux modulation show that eddy currents are the main source of magnetic losses, in the form of phase lag and amplitude changes. Use of a copper cap shows a decrease in flux modulation amplitude at the expense of increased power losses. Finally, simulations show...... that there is a high dependency between the generated AC flux modulation from the voice coil and the AC force factor change....
Spacecraft-produced neutron fluxes on Skylab
Quist, T. C.; Furst, M.; Burnett, D. S.; Baum, J. H.; Peacock, C. L., Jr.; Perry, D. G.
1977-01-01
Estimates of neutron fluxes in different energy ranges are reported for the Skylab spacecraft. Detectors composed of uranium, thorium, and bismuth foils with mica as a fission track recorder, as well as boron foils with cellulose acetate as an alpha-particle recorder, were deployed at different positions in the Orbital Workshop. It was found that the Skylab neutron flux was dominated by high energy (greater than 1 MeV) contributions and that there was no significant time variation in the fluxes. Firm upper limits of 7-15 neutrons/sq cm-sec, depending on the detector location in the spacecraft, were established for fluxes above 1 MeV. Below 1 MeV, the neutron fluxes were about an order of magnitude lower. The neutrons are interpreted as originating from the interactions of leakage protons from the radiation belt with the spacecraft.
Research of the Border Mobility Influence on the Half-Space Temperature Field Under Heat Flux
Directory of Open Access Journals (Sweden)
P. A. Vlasov
2014-01-01
Full Text Available Among the problems of unsteady heat conduction, problems that can be solved in analytical closed form hold a special place. Such solutions can be used both for parametric optimization of the thermal protection of structures and for testing computational algorithms. The previous paper presented an analytical solution of the problem of finding the half-space temperature field with a uniformly moving boundary under an external heat flux of constant power. In this paper we consider a similar problem, but the law of motion of the boundary is assumed to be an arbitrary nondecreasing function, and the power of the heat flux can vary over time. An analytical dependence of the problem solution on the temperature of the moving boundary was obtained by using the Fourier transform in the spatial variable. To determine the temperature of the moving boundary, a Volterra integral equation of the second kind was derived. This equation was solved numerically using a specially developed computational algorithm. The obtained representation was used to investigate the most characteristic features of the formation of the temperature field in the studied region under various laws of boundary motion and different operating conditions of the external heat flux. Computational experiments showed that the asymptotic nature of this dependence confirms the results obtained in the previous work. It was established that the nonlinear character of both the boundary motion law and the law of variation of the external heat flux power mainly affects the specifics of the transition process.
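The numerical step described above, solving a Volterra integral equation of the second kind for the boundary temperature, can be sketched with a standard trapezoidal scheme. This is a generic illustration, not the paper's specially developed algorithm; the kernel and free term below are toy stand-ins.

```python
import numpy as np

def solve_volterra2(g, K, t):
    """Solve u(t) = g(t) + int_0^t K(t,s) u(s) ds on the uniform grid t
    with the trapezoidal rule (g and K are callables)."""
    n = len(t)
    h = t[1] - t[0]
    u = np.zeros(n)
    u[0] = g(t[0])
    for i in range(1, n):
        # trapezoid weights: h/2 at the endpoints, h in between
        s = 0.5 * h * K(t[i], t[0]) * u[0]
        s += h * sum(K(t[i], t[j]) * u[j] for j in range(1, i))
        # the endpoint term contains u[i]; move it to the left-hand side
        u[i] = (g(t[i]) + s) / (1.0 - 0.5 * h * K(t[i], t[i]))
    return u

# toy check: u(t) = 1 + int_0^t u(s) ds has exact solution exp(t)
t = np.linspace(0.0, 1.0, 201)
u = solve_volterra2(lambda x: 1.0, lambda x, s: 1.0, t)
```

The trapezoidal rule gives second-order accuracy; at t = 1 the computed value should be close to e.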
An Overview of the Naval Research Laboratory Ocean Surface Flux (NFLUX) System
May, J. C.; Rowley, C. D.; Barron, C. N.
2016-02-01
The Naval Research Laboratory (NRL) ocean surface flux (NFLUX) system is an end-to-end data processing and assimilation system used to provide near-real time satellite-based surface heat flux fields over the global ocean. Swath-level air temperature (TA), specific humidity (QA), and wind speed (WS) estimates are produced using multiple polynomial regression algorithms with inputs from satellite sensor data records from the Special Sensor Microwave Imager/Sounder, the Advanced Microwave Sounding Unit-A, the Advanced Technology Microwave Sounder, and the Advanced Microwave Scanning Radiometer-2 sensors. Swath-level WS estimates are also retrieved from satellite environmental data records from WindSat, the MetOp scatterometers, and the Oceansat scatterometer. Swath-level solar and longwave radiative flux estimates are produced utilizing the Rapid Radiative Transfer Model for Global Circulation Models (RRTMG). Primary inputs to the RRTMG include temperature and moisture profiles and cloud liquid and ice water paths from the Microwave Integrated Retrieval System. All swath-level satellite estimates undergo an automated quality control process and are then assimilated with atmospheric model forecasts to produce 3-hourly gridded analysis fields. The turbulent heat flux fields, latent and sensible heat flux, are determined from the Coupled Ocean-Atmosphere Response Experiment (COARE) 3.0 bulk algorithms using inputs of TA, QA, WS, and a sea surface temperature model field. Quality-controlled in situ observations over a one-year time period from May 2013 through April 2014 form the reference for validating ocean surface state parameter and heat flux fields. The NFLUX fields are evaluated alongside the Navy's operational global atmospheric model, the Navy Global Environmental Model (NAVGEM). NFLUX is shown to have smaller biases and lower or similar root mean square errors compared to NAVGEM.
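The final bulk-formula step described above can be illustrated with a deliberately simplified, constant-coefficient sketch. The real COARE 3.0 algorithm iterates stability-dependent exchange coefficients; the coefficient and input values below are generic assumptions, not NFLUX's.

```python
RHO_AIR = 1.22      # air density, kg/m^3 (assumed constant)
CP_AIR = 1004.0     # specific heat of air, J/(kg K)
LV = 2.5e6          # latent heat of vaporization, J/kg
CH = CE = 1.2e-3    # constant bulk exchange coefficients (simplification)

def bulk_heat_fluxes(sst, ta, qs, qa, ws):
    """Simplified constant-coefficient bulk formulas for the turbulent
    heat fluxes (the real COARE 3.0 scheme iterates on stability).
    Returns (sensible, latent) in W/m^2, positive upward."""
    sensible = RHO_AIR * CP_AIR * CH * ws * (sst - ta)
    latent = RHO_AIR * LV * CE * ws * (qs - qa)
    return sensible, latent

# e.g. 1 K air-sea temperature difference, 2 g/kg humidity difference, 7 m/s wind
h, e = bulk_heat_fluxes(sst=300.0, ta=299.0, qs=0.022, qa=0.020, ws=7.0)
```

With these inputs the sketch yields a sensible heat flux of roughly 10 W/m² and a latent heat flux of roughly 51 W/m², illustrating why the latent term usually dominates over the tropical ocean.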
An efficient algorithm for 3D space time kinetics simulations for large PHWRs
International Nuclear Information System (INIS)
Jain, Ishi; Fernando, M.P.S.; Kumar, A.N.
2012-01-01
In nuclear reactor physics and allied areas like shielding, various forms of neutron transport equation or its approximation namely the diffusion equation have to be solved to estimate neutron flux distribution. This paper presents an efficient algorithm yielding accurate results along with promising gain in computational work. (author)
Evaluation and intercomparison of the five major dry deposition algorithms in North America
To quantify differences between dry deposition algorithms commonly used in North America, five models were selected to calculate dry deposition velocity (Vd) for O3 and SO2 over a temperate mixed forest in southern Ontario, Canada where a five-year flux database had previously be...
Iterative schemes for parallel Sn algorithms in a shared-memory computing environment
International Nuclear Information System (INIS)
Haghighat, A.; Hunter, M.A.; Mattis, R.E.
1995-01-01
Several two-dimensional spatial domain partitioning Sn transport theory algorithms are developed on the basis of different iterative schemes. These algorithms are incorporated into TWOTRAN-II and tested on the shared-memory CRAY Y-MP C90 computer. For a series of fixed-source r-z geometry homogeneous problems, it is demonstrated that the concurrent red-black algorithms may result in large parallel efficiencies (>60%) on the C90. It is also demonstrated that for a realistic shielding problem, the use of the negative flux fixup causes high load imbalance, which results in a significant loss of parallel efficiency.
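The red-black idea behind the concurrent algorithms above is that cells of one color depend only on cells of the other color, so each half-sweep can be processed in parallel. A minimal illustrative analogue on a model Laplace problem (not the TWOTRAN-II Sn solver):

```python
import numpy as np

def red_black_sweep(u):
    """One red-black Gauss-Seidel sweep for the 2D Laplace equation.
    All 'red' cells (i+j even) depend only on 'black' neighbours and
    vice versa, so each half-sweep could run concurrently."""
    for parity in (0, 1):
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                if (i + j) % 2 == parity:
                    u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j]
                                      + u[i, j-1] + u[i, j+1])
    return u

# toy Dirichlet problem: top edge held at 1, the other edges at 0
u = np.zeros((17, 17))
u[0, :] = 1.0
for _ in range(300):
    red_black_sweep(u)
```

By symmetry the converged center value of this problem is 0.25; in a shared-memory setting each half-sweep would be distributed over the processors, which is exactly the concurrency the red-black ordering buys.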
Applications of algorithmic differentiation to phase retrieval algorithms.
Jurling, Alden S; Fienup, James R
2014-07-01
In this paper, we generalize the techniques of reverse-mode algorithmic differentiation to include elementary operations on multidimensional arrays of complex numbers. We explore the application of the algorithmic differentiation to phase retrieval error metrics and show that reverse-mode algorithmic differentiation provides a framework for straightforward calculation of gradients of complicated error metrics without resorting to finite differences or laborious symbolic differentiation.
Optimal Fungal Space Searching Algorithms.
Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V
2016-10-01
Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, or the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First-Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not grow appreciably with the size of the maze. These findings suggest that a systematic effort to harvest the natural space-searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.
Monte Carlo methods for flux expansion solutions of transport problems
International Nuclear Information System (INIS)
Spanier, J.
1999-01-01
Adaptive Monte Carlo methods, based on the use of either correlated sampling or importance sampling, to obtain global solutions to certain transport problems have recently been described. The resulting learning algorithms are capable of achieving geometric convergence when applied to the estimation of a finite number of coefficients in a flux expansion representation of the global solution. However, because of the nonphysical nature of the random walk simulations needed to perform importance sampling, conventional transport estimators and source sampling techniques require modification to be used successfully in conjunction with such flux expansion methods. It is shown how these problems can be overcome. First, the traditional path length estimators in wide use in particle transport simulations are generalized to include rather general detector functions (which, in this application, are the individual basis functions chosen for the flux expansion). Second, it is shown how to sample from the signed probabilities that arise as source density functions in these applications, without destroying the zero variance property needed to ensure geometric convergence to zero error.
FluxFix: automatic isotopologue normalization for metabolic tracer analysis.
Trefely, Sophie; Ashwell, Peter; Snyder, Nathaniel W
2016-11-25
Isotopic tracer analysis by mass spectrometry is a core technique for the study of metabolism. Isotopically labeled atoms from substrates, such as [13C]-labeled glucose, can be traced by their incorporation over time into specific metabolic products. Mass spectrometry is often used for the detection and differentiation of the isotopologues of each metabolite of interest. For meaningful interpretation, mass spectrometry data from metabolic tracer experiments must be corrected to account for the naturally occurring isotopologue distribution. The calculations required for this correction are time-consuming and error-prone, and existing programs are often platform specific, non-intuitive, commercially licensed and/or limited in accuracy by using theoretical isotopologue distributions, which are prone to artifacts from noise or unresolved interfering signals. Here we present FluxFix ( http://fluxfix.science ), an application freely available on the internet that quickly and reliably transforms signal intensity values into percent mole enrichment for each isotopologue measured. 'Unlabeled' data, representing the measured natural isotopologue distribution for a chosen analyte, is entered by the user. This data is used to generate a correction matrix according to a well-established algorithm. The correction matrix is applied to labeled data, also entered by the user, thus generating the corrected output data. FluxFix is compatible with direct copy and paste from spreadsheet applications including Excel (Microsoft) and Google sheets and automatically adjusts to account for input data dimensions. The program is simple, easy to use, agnostic to the mass spectrometry platform, generalizable to known or unknown metabolites, and can take input data from either a theoretical natural isotopologue distribution or an experimentally measured one. Our freely available web-based calculator, FluxFix ( http://fluxfix.science ), quickly and reliably corrects metabolic tracer data for
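The correction-matrix approach described above can be sketched in a few lines. The analyte and its measured natural distribution below are hypothetical, and this is an illustration of the general algorithm rather than FluxFix's exact implementation.

```python
import numpy as np

def correct_isotopologues(raw, natural):
    """Correct a measured isotopologue intensity vector for natural
    abundance via the standard correction matrix: column j is the
    natural distribution of the species whose lowest mass is M+j.
    Returns percent mole enrichment for each isotopologue."""
    n = len(raw)
    m = np.zeros((n, n))
    for j in range(n):
        m[j:, j] = natural[: n - j]
    corrected = np.linalg.solve(m, np.asarray(raw, dtype=float))
    corrected = np.clip(corrected, 0.0, None)  # guard against noise
    return 100.0 * corrected / corrected.sum()

# hypothetical 3-isotopologue analyte whose measured natural
# distribution is 90% M+0, 9% M+1, 1% M+2
natural = np.array([0.90, 0.09, 0.01])
enrich = correct_isotopologues([90.0, 54.0, 10.0], natural)
```

Here the raw intensities [90, 54, 10] deconvolve to true mole amounts in the ratio 100:50:5, so the returned enrichment vector sums to 100% with about 3.2% in the M+2 pool.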
PWR loading pattern optimization using Harmony Search algorithm
International Nuclear Information System (INIS)
Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.
2013-01-01
Highlights: ► Numerical results reveal that the HS method is reliable. ► The great advantage of HS is significant gain in computational cost. ► On the average, the final band width of search fitness values is narrow. ► Our experiments show that the search approaches the optimal value fast. - Abstract: In this paper a core reloading technique using Harmony Search, HS, is presented in the context of finding an optimal configuration of fuel assemblies, FA, in pressurized water reactors. To implement and evaluate the proposed technique a Harmony Search along Nodal Expansion Code for 2-D geometry, HSNEC2D, is developed to obtain nearly optimal arrangement of fuel assemblies in PWR cores. This code consists of two sections including Harmony Search algorithm and Nodal Expansion modules using fourth degree flux expansion which solves two dimensional-multi group diffusion equations with one node per fuel assembly. Two optimization test problems are investigated to demonstrate the HS algorithm capability in converging to near optimal loading pattern in the fuel management field and other subjects. Results, convergence rate and reliability of the method are quite promising and show the HS algorithm performs very well and is comparable to other competitive algorithms such as Genetic Algorithm and Particle Swarm Intelligence. Furthermore, implementation of nodal expansion technique along HS causes considerable reduction of computational time to process and analysis optimization in the core fuel management problems
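A minimal continuous-variable Harmony Search sketch, illustrating the improvisation, pitch-adjustment, and memory-update cycle that the HSNEC2D loading-pattern search builds on. All parameter values are generic defaults, not those of the paper, and the objective here is a toy function rather than a core configuration.

```python
import random

def harmony_search(obj, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=1):
    """Minimal Harmony Search: improvise new vectors from a memory of
    good ones (rate hmcr), pitch-adjust them (rate par), and replace the
    worst memory member whenever the new harmony improves on it."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [obj(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:              # memory consideration
                x = memory[rng.randrange(hms)][d]
                if rng.random() < par:           # pitch adjustment
                    x += rng.uniform(-1, 1) * 0.05 * (hi - lo)
            else:                                # random consideration
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        s = obj(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:
            memory[worst], scores[worst] = new, s
    return min(zip(scores, memory))

best_score, best = harmony_search(lambda v: sum(x * x for x in v),
                                  bounds=[(-5, 5)] * 3)
```

In a loading-pattern context the continuous vector would be replaced by a discrete assembly arrangement and the objective by the nodal-expansion flux solution, but the memory/improvisation mechanics are the same.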
Algorithms and their others: Algorithmic culture in context
Directory of Open Access Journals (Sweden)
Paul Dourish
2016-08-01
Full Text Available Algorithms, once obscure objects of technical art, have lately been subject to considerable popular and scholarly scrutiny. What does it mean to adopt the algorithm as an object of analytic attention? What is in view, and out of view, when we focus on the algorithm? Using Niklaus Wirth's 1975 formulation that “algorithms + data structures = programs” as a launching-off point, this paper examines how an algorithmic lens shapes the way in which we might inquire into contemporary digital culture.
International Nuclear Information System (INIS)
Park, Tongkyu; Yang, Won Sik; Kim, Sang-Ji
2017-01-01
Highlights: • An enhanced search algorithm for charged fuel enrichment was developed for equilibrium cycle analysis with REBUS-3. • The new search algorithm is not sensitive to the user-specified initial guesses. • The new algorithm reduces the computational time by a factor of 2–3. - Abstract: This paper presents an enhanced search algorithm for the charged fuel enrichment in equilibrium cycle analysis of REBUS-3. The current enrichment search algorithm of REBUS-3 takes a large number of iterations to yield a converged solution or even terminates without a converged solution when the user-specified initial guesses are far from the solution. To resolve the convergence problem and to reduce the computational time, an enhanced search algorithm was developed. The enhanced algorithm is based on the idea of minimizing the number of enrichment estimates by allowing drastic enrichment changes and by optimizing the current search algorithm of REBUS-3. Three equilibrium cycle problems with recycling, without recycling and of high discharge burnup were defined and a series of sensitivity analyses were performed with a wide range of user-specified initial guesses. Test results showed that the enhanced search algorithm is able to produce a converged solution regardless of the initial guesses. In addition, it was able to reduce the number of flux calculations by a factor of 2.9, 1.8, and 1.7 for equilibrium cycle problems with recycling, without recycling, and of high discharge burnup, respectively, compared to the current search algorithm.
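The idea of minimizing the number of enrichment estimates can be illustrated with a generic secant iteration on a hypothetical k_eff(enrichment) response, where each function evaluation stands in for a full flux calculation. This is only a sketch of the concept, not the actual REBUS-3 search algorithm.

```python
def search_enrichment(k_eff, target=1.0, e0=0.05, e1=0.20,
                      tol=1e-6, max_iter=50):
    """Secant search for the charged fuel enrichment that makes the
    (hypothetical) response k_eff(enrichment) hit the target value.
    Each k_eff call models one expensive flux calculation, so the
    goal is to converge in as few evaluations as possible."""
    f0, f1 = k_eff(e0) - target, k_eff(e1) - target
    for _ in range(max_iter):
        if abs(f1) < tol:
            return e1
        e0, e1 = e1, e1 - f1 * (e1 - e0) / (f1 - f0)
        f0, f1 = f1, k_eff(e1) - target
    return e1

# toy monotone response: k_eff = 0.8 + 2.0 * enrichment, root at 0.1
e_star = search_enrichment(lambda e: 0.8 + 2.0 * e)
```

For a nearly linear response the secant update converges in very few evaluations, which is the spirit of reducing the number of flux calculations reported above.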
Fighting Censorship with Algorithms
Mahdian, Mohammad
In countries such as China or Iran where Internet censorship is prevalent, users usually rely on proxies or anonymizers to freely access the web. The obvious difficulty with this approach is that once the address of a proxy or an anonymizer is announced for use to the public, the authorities can easily filter all traffic to that address. This poses a challenge as to how proxy addresses can be announced to users without leaking too much information to the censorship authorities. In this paper, we formulate this question as an interesting algorithmic problem. We study this problem in a static and a dynamic model, and give almost tight bounds on the number of proxy servers required to give access to n people k of whom are adversaries. We will also discuss how trust networks can be used in this context.
Algorithmic Reflections on Choreography
Directory of Open Access Journals (Sweden)
Pablo Ventura
2016-11-01
Full Text Available In 1996, Pablo Ventura turned his attention to the choreography software Life Forms to find out whether the then-revolutionary new tool could lead to new possibilities of expression in contemporary dance. During the next 2 decades, he devised choreographic techniques and custom software to create dance works that highlight the operational logic of computers, accompanied by computer-generated dance and media elements. This article provides a firsthand account of how Ventura’s engagement with algorithmic concepts guided and transformed his choreographic practice. The text describes the methods that were developed to create computer-aided dance choreographies. Furthermore, the text illustrates how choreography techniques can be applied to correlate formal and aesthetic aspects of movement, music, and video. Finally, the text emphasizes how Ventura’s interest in the wider conceptual context has led him to explore with choreographic means fundamental issues concerning the characteristics of humans and machines and their increasingly profound interdependencies.
The Copenhagen Triage Algorithm
DEFF Research Database (Denmark)
Hasselbalch, Rasmus Bo; Plesner, Louis Lind; Pries-Heje, Mia
2016-01-01
BACKGROUND: Crowding in the emergency department (ED) is a well-known problem resulting in an increased risk of adverse outcomes. Effective triage might counteract this problem by identifying the sickest patients and ensuring early treatment. In the last two decades, systematic triage has become the standard in EDs worldwide. However, triage models are also time consuming, supported by limited evidence, and could potentially do more harm than good. The aim of this study is to develop a quicker triage model using data from a large cohort of unselected ED patients and to evaluate in a prospective randomized trial whether this new model is non-inferior to an existing triage model. METHODS: The Copenhagen Triage Algorithm (CTA) study is a prospective two-center, cluster-randomized, cross-over, non-inferiority trial comparing CTA to the Danish Emergency Process Triage (DEPT). We include patients ≥16 years...
Neutron flux enhancement in the NRAD reactor
International Nuclear Information System (INIS)
Weeks, A.A.; Heidel, C.C.; Imel, G.R.
1988-01-01
In 1987 a series of experiments was conducted at the NRAD reactor facility at Argonne National Laboratory - West (ANL-W) to investigate the possibility of increasing the thermal neutron content at the end of the reactor's east beam tube through the use of hydrogenous flux traps. It was desired to increase the thermal flux for a series of experiments to be performed in the east radiography cell, in which the enhanced flux was required in a relatively small volume. Hence, it was feasible to attempt to focus the cross section of the beam onto a smaller area. Two flux traps were constructed from unborated polypropylene and tested to determine their effectiveness. Both traps were open to the entire cross-sectional area of the neutron beam (as it emerges from the wall and enters the beam room). The sides then converged such that at the end of the trap the beam would be 'focused' to a greater intensity. The differences between the two flux traps were primarily in length, and hence angle to the beam, as the inlet and outlet cross-sectional areas were held constant. The experiments have contributed to the design of a flux trap in which a thermal flux of nearly 10⁹ was obtained, with an enhancement of 6.61
CO2 flux geothermometer for geothermal exploration
Harvey, M. C.; Rowland, J. V.; Chiodini, G.; Rissmann, C. F.; Bloomberg, S.; Fridriksson, T.; Oladottir, A. A.
2017-09-01
A new geothermometer (TCO2 Flux) is proposed based on soil diffuse CO2 flux and shallow temperature measurements made on areas of steam heated, thermally altered ground above active geothermal systems. This CO2 flux geothermometer is based on a previously reported CO2 geothermometer that was designed for use with fumarole analysis. The new geothermometer provides a valuable additional exploration tool for estimating subsurface temperatures in high-temperature geothermal systems. Mean TCO2 Flux estimates fall within the range of deep drill hole temperatures at Wairakei (New Zealand), Tauhara (New Zealand), Rotokawa (New Zealand), Ohaaki (New Zealand), Reykjanes (Iceland) and Copahue (Argentina). The spatial distribution of geothermometry estimates is consistent with the location of major upflow zones previously reported at the Wairakei and Rotokawa geothermal systems. TCO2 Flux was also evaluated at White Island (New Zealand) and Reporoa (New Zealand), where limited sub-surface data exists. Mode TCO2 Flux at White Island is high (320 °C), the highest of the systems considered in this study. However, the geothermometer relies on mineral-water equilibrium in neutral pH reservoir fluids, and would not be reliable in such an active and acidic environment. Mean TCO2 Flux at Reporoa (310 °C) is high, which indicates Reporoa has a separate upflow from the nearby Waiotapu geothermal system; an outflow from Waiotapu would not be expected to have such high temperature.
Spectral correction algorithm for multispectral CdTe x-ray detectors
Christensen, Erik D.; Kehres, Jan; Gu, Yun; Feidenhans'l, Robert; Olsen, Ulrik L.
2017-09-01
Compared to the dual energy scintillator detectors widely used today, pixelated multispectral X-ray detectors show the potential to improve material identification in various radiography and tomography applications used for industrial and security purposes. However, detector effects, such as charge sharing and photon pileup, distort the measured spectra in high flux pixelated multispectral detectors. These effects significantly reduce the detectors' capabilities to be used for material identification, which requires accurate spectral measurements. We have developed a semi-analytical computational algorithm for multispectral CdTe X-ray detectors which corrects the measured spectra for severe spectral distortions caused by the detector. The algorithm is developed for the Multix ME100 CdTe X-ray detector, but could potentially be adapted for any pixelated multispectral CdTe detector. The calibration of the algorithm is based on simple attenuation measurements of commercially available materials using standard laboratory sources, making the algorithm applicable in any X-ray setup. The validation of the algorithm has been done using experimental data acquired with both standard lab equipment and synchrotron radiation. The experiments show that the algorithm is fast, reliable even at X-ray flux up to 5 Mph/s/mm², and greatly improves the accuracy of the measured X-ray spectra, making the algorithm very useful for both security and industrial applications where multispectral detectors are used.
An overview of smart grid routing algorithms
Wang, Junsheng; OU, Qinghai; Shen, Haijuan
2017-08-01
This paper summarizes typical routing algorithms for the smart grid by analyzing the communication services and communication requirements of the intelligent grid. Two classes of routing algorithm, clustering algorithms among them, are analyzed, and the advantages, disadvantages, and applicability of each typical algorithm are discussed.
Eddy Correlation Flux Measurement System (ECOR) Handbook
Energy Technology Data Exchange (ETDEWEB)
Cook, DR
2011-01-31
The eddy correlation (ECOR) flux measurement system provides in situ, half-hour measurements of the surface turbulent fluxes of momentum, sensible heat, latent heat, and carbon dioxide (CO2), and of methane at one Southern Great Plains extended facility (SGP EF) and at the North Slope of Alaska Central Facility (NSA CF). The fluxes are obtained with the eddy covariance technique, which involves correlation of the vertical wind component with the horizontal wind component, the air temperature, the water vapor density, and the CO2 concentration.
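The core of the eddy covariance technique reduces to the mean product of fluctuations over the averaging period. A sketch with synthetic data follows; the air density, heat capacity, and signal model are illustrative assumptions, not ECOR's processing chain (which also includes coordinate rotation and density corrections).

```python
import numpy as np

def eddy_flux(w, c, rho_cp=1.22 * 1004.0):
    """Eddy-covariance flux: the mean product of the fluctuations of
    the vertical wind w (m/s) and a scalar c over the averaging period.
    For temperature, multiplying by rho*cp gives sensible heat in W/m^2."""
    wp = w - w.mean()
    cp = c - c.mean()
    return rho_cp * np.mean(wp * cp)

# synthetic half-hour record at 10 Hz with correlated fluctuations
rng = np.random.default_rng(0)
n = 18000
w = 0.3 * rng.standard_normal(n)                 # vertical wind, m/s
temp = 300.0 + (0.2 / 0.3) * w + 0.05 * rng.standard_normal(n)  # K
flux = eddy_flux(w, temp)
```

With the chosen correlation the covariance of w and temperature is about 0.06 K m/s, so the sketch should report a sensible heat flux of roughly 73 W/m².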
Magnetic refrigeration using flux compression in superconductors
International Nuclear Information System (INIS)
Israelsson, U.E.; Strayer, D.M.; Jackson, H.W.; Petrac, D.
1990-01-01
The feasibility of using flux compression in high-temperature superconductors to produce the large time-varying magnetic fields required in a field cycled magnetic refrigerator operating between 20 K and 4 K is presently investigated. This paper describes the refrigerator concept and lists limitations and advantages in comparison with conventional refrigeration techniques. The maximum fields obtainable by flux compression in high-temperature superconductor materials, as presently prepared, are too low to serve in such a refrigerator. However, reports exist of critical current values that are near usable levels for flux pumps in refrigerator applications. 9 refs
Genetic Algorithms in Noisy Environments
THEN, T. W.; CHONG, EDWIN K. P.
1993-01-01
Genetic Algorithms (GA) have been widely used in the areas of searching, function optimization, and machine learning. In many of these applications, the effect of noise is a critical factor in the performance of the genetic algorithms. While it has been shown in previous studies that genetic algorithms are still able to perform effectively in the presence of noise, the problem of locating the global optimal solution at the end of the search has never been effectively addressed. Furthermore,...
Mao-Gilles Stabilization Algorithm
Jérôme Gilles
2013-01-01
Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different sce...
Mao-Gilles Stabilization Algorithm
Directory of Open Access Journals (Sweden)
Jérôme Gilles
2013-07-01
Full Text Available Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.
Unsupervised Classification Using Immune Algorithm
Al-Muallim, M. T.; El-Kouatly, R.
2012-01-01
An unsupervised classification algorithm based on the clonal selection principle, named Unsupervised Clonal Selection Classification (UCSC), is proposed in this paper. The newly proposed algorithm is data driven and self-adaptive; it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well-known K-means algorithm using several artificial and real-life data sets. The experiments show that the proposed U...
Fuzzy HRRN CPU Scheduling Algorithm
Bashir Alam; R. Biswas; M. Alam
2011-01-01
There are several scheduling algorithms, such as FCFS, SRTN, RR, and priority scheduling. Scheduling decisions of these algorithms are based on parameters which are assumed to be crisp. However, in many circumstances these parameters are vague. The vagueness of these parameters suggests that the scheduler should use a fuzzy technique in scheduling the jobs. In this paper we propose a novel CPU scheduling algorithm, Fuzzy HRRN, that incorporates fuzziness into basic HRRN using the fuzzy inference system (FIS) technique.
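For reference, the crisp HRRN policy that the fuzzy variant builds on can be sketched as follows; the fuzzy inference step itself is not reproduced here, only the classic response-ratio selection it replaces.

```python
def hrrn_schedule(jobs):
    """Classic (crisp) HRRN scheduling. jobs: list of
    (name, arrival, service). At each completion the ready job with the
    highest response ratio (waiting + service) / service runs next;
    long-waiting jobs age upward, so starvation is avoided."""
    time, order = 0, []
    pending = sorted(jobs, key=lambda j: j[1])
    while pending:
        ready = [j for j in pending if j[1] <= time] or [pending[0]]
        time = max(time, ready[0][1])         # idle until the next arrival
        # response ratio = (waiting time + service time) / service time
        pick = max(ready, key=lambda j: (time - j[1] + j[2]) / j[2])
        time += pick[2]
        order.append(pick[0])
        pending.remove(pick)
    return order

# A(arrives 0, needs 8), B(1, 4), C(2, 1): after A finishes, the short
# job C has the highest response ratio and preempts B's turn
order = hrrn_schedule([("A", 0, 8), ("B", 1, 4), ("C", 2, 1)])
```

The fuzzy version described in the abstract would replace the crisp ratio with a value inferred from fuzzy membership functions over the same parameters.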
Machine Learning an algorithmic perspective
Marsland, Stephen
2009-01-01
Traditional books on machine learning can be divided into two groups - those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text.Theory Backed up by Practical ExamplesThe book covers neural networks, graphical models, reinforcement le
Algorithmic complexity of quantum capacity
Oskouei, Samad Khabbazi; Mancini, Stefano
2018-04-01
We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.
Diversity-Guided Evolutionary Algorithms
DEFF Research Database (Denmark)
Ursem, Rasmus Kjær
2002-01-01
Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few algorithms have used a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the well-known distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results...
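The distance-to-average-point measure mentioned above is simple to state: the mean Euclidean distance from each individual to the population centroid, normalised by the diagonal of the search space. A sketch with illustrative bounds and populations (the thresholds the DGEA uses to switch phases are not reproduced here):

```python
import numpy as np

def diversity(pop, bounds_diag):
    """Distance-to-average-point measure: mean Euclidean distance from
    each individual to the population centroid, normalised by the
    length of the search-space diagonal."""
    centroid = pop.mean(axis=0)
    return np.mean(np.linalg.norm(pop - centroid, axis=1)) / bounds_diag

# a converged population has low diversity, a spread-out one high
rng = np.random.default_rng(0)
tight = np.full((20, 2), 3.0) + 0.01 * rng.standard_normal((20, 2))
spread = rng.uniform(-5, 5, size=(20, 2))
diag = np.linalg.norm([10.0, 10.0])   # search space [-5, 5]^2

d_tight = diversity(tight, diag)
d_spread = diversity(spread, diag)
```

In the DGEA, the algorithm would switch to exploration (mutation) when this measure drops below a low threshold and back to exploitation once diversity recovers.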
Combined spatial/angular domain decomposition SN algorithms for shared memory parallel machines
International Nuclear Information System (INIS)
Hunter, M.A.; Haghighat, A.
1993-01-01
Several parallel processing algorithms on the basis of spatial and angular domain decomposition methods are developed and incorporated into a two-dimensional discrete ordinates transport theory code. These algorithms divide the spatial and angular domains into independent subdomains so that the flux calculations within each subdomain can be processed simultaneously. Two spatial parallel algorithms (Block-Jacobi, red-black), one angular parallel algorithm (η-level), and their combinations are implemented on an eight processor CRAY Y-MP. Parallel performances of the algorithms are measured using a series of fixed source RZ geometry problems. Some of the results are also compared with those executed on an IBM 3090/600J machine. (orig.)
Nuclear reactors project optimization based on neural network and genetic algorithm
International Nuclear Information System (INIS)
Pereira, Claudio M.N.A.; Schirru, Roberto; Martinez, Aquilino S.
1997-01-01
This work presents a prototype of a system for nuclear reactor core design optimization based on genetic algorithms and artificial neural networks. A neural network is modeled and trained to predict the flux and the neutron multiplication factor based on the enrichment, network pitch, and cladding thickness, with an average error of less than 2%. The values predicted by the neural network are used by a genetic algorithm in its heuristic search, guided by an objective function that rewards high flux values and penalizes multiplication factors far from the required value. By associating this quick prediction - which may substitute for the reactor physics calculation code - with the global optimization capacity of the genetic algorithm, a quick and effective system for nuclear reactor core design optimization was obtained. (author). 11 refs., 8 figs., 3 tabs
International Nuclear Information System (INIS)
Wei, C.Q.; Lee, L.C.; Wang, S.; Akasofu, S.I.
1991-01-01
Spacecraft observations suggest that flux transfer events and interplanetary magnetic clouds may be associated with magnetic flux ropes, which are magnetic flux tubes containing helical magnetic field lines. In magnetic flux ropes, the azimuthal magnetic field (B_θ) is superposed on the axial field (B_z). In this paper the time evolution of a localized magnetic flux rope is studied. A two-dimensional compressible magnetohydrodynamic simulation code with cylindrical symmetry is developed to study the wave modes associated with the evolution of flux ropes. It is found that in the initial phase both the fast magnetosonic wave and the Alfven wave develop in the flux rope. After this initial phase, the Alfven wave becomes the dominant wave mode for the evolution of the magnetic flux rope and the radial expansion velocity of the flux rope is found to be negligible. Numerical results further show that even for a large initial azimuthal component of the magnetic field (B_θ ≅ 1-4 B_z) the propagation velocity along the axial direction of the flux rope remains the Alfven velocity. Diagnoses show that after the initial phase the transverse kinetic energy equals the transverse magnetic energy, which is characteristic of the Alfven mode. It is also found that the localized magnetic flux rope tends to evolve into two separate magnetic flux ropes propagating in opposite directions. The simulation results are used to study the evolution of magnetic flux ropes associated with flux transfer events observed at the Earth's dayside magnetopause and magnetic clouds in interplanetary space
On the design of general-purpose flux limiters for finite element schemes. I. Scalar convection
Kuzmin, D.
2006-12-01
The algebraic flux correction (AFC) paradigm is extended to finite element discretizations with a consistent mass matrix. It is shown how to render an implicit Galerkin scheme positivity-preserving and remove excessive artificial diffusion in regions where the solution is sufficiently smooth. To this end, the original discrete operators are modified in a mass-conserving fashion so as to enforce the algebraic constraints to be satisfied by the numerical solution. A node-oriented limiting strategy is employed to control the raw antidiffusive fluxes which consist of a convective part and a contribution of the consistent mass matrix. The former offsets the artificial diffusion due to 'upwinding' of the spatial differential operator and lends itself to an upwind-biased flux limiting. The latter eliminates the error induced by mass lumping and calls for the use of a symmetric flux limiter. The concept of a target flux and a new definition of upper/lower bounds make it possible to combine the advantages of algebraic FCT and TVD schemes introduced previously by the author and his coworkers. Unlike other high-resolution schemes for unstructured meshes, the new algorithm reduces to a consistent (high-order) Galerkin scheme in smooth regions and is designed to provide an optimal treatment of both stationary and time-dependent problems. Its performance is illustrated by application to the linear advection equation for a number of 1D and 2D configurations.
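The node-oriented limiting strategy can be illustrated with a 1D scalar Zalesak-type sketch in Python, a simplified analogue of the AFC limiter described above, not the author's finite element implementation:

```python
import numpy as np

def zalesak_limit(u_low, fluxes):
    """1D Zalesak-style FCT limiter (scalar sketch).

    u_low  : low-order, bound-preserving solution, shape (n,)
    fluxes : raw antidiffusive flux f[i] across face i+1/2 (moves mass
             from node i to node i+1 when positive), shape (n-1,)
    Returns per-face coefficients alpha in [0, 1] such that applying
    alpha*f keeps every node inside its local min/max of u_low.
    """
    n = len(u_low)
    u_max = np.array([u_low[max(0, i - 1):i + 2].max() for i in range(n)])
    u_min = np.array([u_low[max(0, i - 1):i + 2].min() for i in range(n)])
    # sums of potential positive (P+) and negative (P-) updates per node
    P_plus, P_minus = np.zeros(n), np.zeros(n)
    for i, f in enumerate(fluxes):
        P_plus[i] += max(-f, 0.0);     P_minus[i] += min(-f, 0.0)
        P_plus[i + 1] += max(f, 0.0);  P_minus[i + 1] += min(f, 0.0)
    Q_plus, Q_minus = u_max - u_low, u_min - u_low   # admissible increments
    with np.errstate(divide="ignore", invalid="ignore"):
        R_plus = np.where(P_plus > 0, np.minimum(1.0, Q_plus / P_plus), 1.0)
        R_minus = np.where(P_minus < 0, np.minimum(1.0, Q_minus / P_minus), 1.0)
    alpha = np.empty(len(fluxes))
    for i, f in enumerate(fluxes):
        # positive f raises node i+1 and lowers node i, and vice versa
        alpha[i] = min(R_plus[i + 1], R_minus[i]) if f >= 0 else \
                   min(R_plus[i], R_minus[i + 1])
    return alpha
```

Adding the limited fluxes alpha*f to u_low then recovers high-order accuracy where the raw fluxes were harmless while provably respecting the local bounds elsewhere.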
Accuracy of surface heat fluxes from observations of operational satellites
Digital Repository Service at National Institute of Oceanography (India)
Pankajakshan, T.; Sugimori, Y.
Uncertainties in the flux estimates, resulting from the use of bulk method and remotely sensed data are worked out and are presented for individual and total fluxes. These uncertainties in satellite derived fluxes are further compared...
Tetrakis-amido high flux membranes
McCray, S.B.
1989-10-24
Composite RO membranes of a microporous polymeric support and a polyamide reaction product of a tetrakis-aminomethyl compound and a polyacylhalide are disclosed, said membranes exhibiting high flux and good chlorine resistance.
400 Area/Fast Flux Test Facility
Federal Laboratory Consortium — The 400 Area at Hanford is home primarily to the Fast Flux Test Facility (FFTF), a DOE-owned, formerly operating, 400-megawatt (thermal) liquid-metal (sodium)-cooled...
Pulse power applications of flux compression generators
International Nuclear Information System (INIS)
Fowler, C.M.; Caird, R.S.; Erickson, D.J.; Freeman, B.L.
1981-01-01
Characteristics are presented for two different types of explosive driven flux compression generators and a megavolt pulse transformer. Status reports are given for rail gun and plasma focus programs for which the generators serve as power sources
Rotating flux compressor for energy conversion
International Nuclear Information System (INIS)
Chowdhuri, P.; Linton, T.W.; Phillips, J.A.
1983-01-01
The rotating flux compressor (RFC) converts rotational kinetic energy into an electrical output pulse which would have higher energy than the electrical energy initially stored in the compressor. An RFC has been designed in which wedge-shaped rotor blades pass through the air gaps between successive turns of a solenoid, the stator. Magnetic flux is generated by pulsing the stator solenoids when the inductance is a maximum, i.e., when the flux fills the stator-solenoid volume. Connecting the solenoid across a load conserves the flux which is compressed within the small volume surrounding the stator periphery when the rotor blades cut into the free space between the stator plates, creating a minimum-inductance condition. The unique features of this design are: (1) no electrical connections (brushes) to the rotor; (2) no conventional windings; and (3) no maintenance. The device has been tested up to 5000 rpm of rotor speed
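The energy gain follows from flux conservation alone; a short worked calculation (with purely illustrative inductance and current values, not the device's parameters):

```python
# Ideal flux compression: the flux Phi = L*I is conserved, so reducing the
# circuit inductance from L1 to L2 raises the current to I2 = I1*L1/L2 and
# the stored energy from E1 = L1*I1**2/2 to E2 = E1*(L1/L2). The extra
# energy is supplied by the mechanical work done against the magnetic field.
L1, L2 = 10e-6, 1e-6          # inductance before/after compression, H
I1 = 100.0                    # seed current, A
I2 = I1 * L1 / L2             # current after compression: 1000 A
E1 = 0.5 * L1 * I1 ** 2       # initial stored energy: 0.05 J
E2 = 0.5 * L2 * I2 ** 2       # final stored energy: 0.5 J (tenfold gain)
```

The ratio E2/E1 equals L1/L2, which is why the design drives the inductance to a minimum when the rotor blades fill the gaps.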
Modelling drug flux through microporated skin.
Rzhevskiy, Alexey S; Guy, Richard H; Anissimov, Yuri G
2016-11-10
A simple mathematical equation has been developed to predict drug flux through microporated skin. The theoretical model is based on an approach applied previously to water evaporation through leaf stomata. Pore density, pore radius and drug molecular weight are key model parameters. The predictions of the model were compared with results derived from a simple, intuitive method using porated area alone to estimate the flux enhancement. It is shown that the new approach predicts significantly higher fluxes than the intuitive analysis, with transport being proportional to the total pore perimeter rather than area as intuitively anticipated. Predicted fluxes were in good general agreement with experimental data on drug delivery from the literature, and were quantitatively closer to the measured values than those derived from the intuitive, area-based approach. Copyright © 2016 Elsevier B.V. All rights reserved.
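The perimeter-versus-area scaling can be illustrated with a short calculation (hypothetical pore counts and radii, not data from the paper): at fixed total porated area, perimeter-limited transport favors many small pores.

```python
import math

def enhancement_ratio(n_small, r_small, r_big):
    """Compare many small pores against fewer big pores of equal total area.

    If transport scales with total pore perimeter (n * 2*pi*r), as the new
    model predicts, rather than total area (n * pi*r**2), as intuitively
    expected, then at fixed total area the small-pore layout carries
    r_big / r_small times more flux.
    """
    area = n_small * math.pi * r_small ** 2
    n_big = area / (math.pi * r_big ** 2)        # big-pore count, same area
    perim_small = n_small * 2 * math.pi * r_small
    perim_big = n_big * 2 * math.pi * r_big
    return perim_small / perim_big               # = r_big / r_small
```

An area-based estimate would return 1 for any such pair, which is exactly the discrepancy between the intuitive and perimeter-based predictions.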
Flux Tube Dynamics in the Dual Superconductor
International Nuclear Information System (INIS)
Lampert, M.; Svetitsky, B.
1999-01-01
We have studied plasma oscillations in a flux tube created in a dual superconductor. The theory contains an Abelian gauge field coupled magnetically to a Higgs field that confines electric charge via the dual Meissner effect. Starting from a static flux tube configuration, with electric charges at either end, we release a fluid of electric charges in the system that accelerate and screen the electric field. The weakening of the electric field allows the flux tube to collapse, and the inertia of the charges forces it open again. We investigate both Type I and Type II superconductors, with plasma frequencies both above and below the threshold for radiation into the Higgs vacuum. (The parameters appropriate to QCD are in the Type II regime; the plasma frequency depends on the mass taken for the fluid constituents.) The coupling of the plasma oscillations to the Higgs field making up the flux tube is the main new feature in our work
DEFF Research Database (Denmark)
Sogachev, Andrey; Leclerc, Monique Y.; Zhang, Gensheng
2008-01-01
the concentration and flux fields against those of a uniform forested surface. We use an atmospheric boundary layer two-equation closure model that accounts for the flow dynamics and vertical divergence of CO2 sources/sinks within a plant canopy. This paper characterizes the spatial variation of CO2 fluxes...... as a function of both sources/sinks distribution and the vertical structure of the canopy. Results suggest that the ground source plays a major role in the formation of wave-like vertical CO2 flux behavior downwind of a forest edge, despite the fact that the contribution of foliage sources/sinks changes...... monotonously. Such a variation is caused by scalar advection in the trunk space and reveals itself as a decrease or increase in vertical fluxes over the forest relative to carbon dioxide exchange of the underlying forest. The effect was more pronounced in model forests where the leaf area is concentrated...
Direct ecosystem fluxes of volatile organic compounds from oil palms in South-East Asia
Directory of Open Access Journals (Sweden)
P. K. Misztal
2011-09-01
Full Text Available This paper reports the first direct eddy covariance fluxes of reactive biogenic volatile organic compounds (BVOCs) from oil palms to the atmosphere using proton-transfer-reaction mass spectrometry (PTR-MS), measured at a plantation in Malaysian Borneo. At midday, net isoprene flux constituted the largest fraction (84 %) of all emitted BVOCs measured, at up to 30 mg m^{−2} h^{−1} over 12 days. By contrast, the sum of its oxidation products methyl vinyl ketone (MVK) and methacrolein (MACR) exhibited clear deposition of 1 mg m^{−2} h^{−1}, with a small average canopy resistance of 230 s m^{−1}. Approximately 15 % of the resolved BVOC flux from oil palm trees could be attributed to floral emissions, which are thought to be the largest reported biogenic source of estragole and possibly also toluene. Although on average the midday volume mixing ratio of estragole exceeded that of toluene by almost a factor of two, the corresponding fluxes of these two compounds were nearly the same, amounting to 0.81 and 0.76 mg m^{−2} h^{−1}, respectively. By fitting the canopy temperature and PAR response of the MEGAN emissions algorithm for isoprene and other emitted BVOCs, a basal emission rate of isoprene of 7.8 mg m^{−2} h^{−1} was derived. We parameterise fluxes of depositing compounds with a resistance approach using direct canopy measurements of deposition. Consistent with Karl et al. (2010), we also propose that it is important to include deposition in flux models, especially for secondary oxidation products, in order to improve flux predictions.
Magnetic flux generator for balanced membrane loudspeaker
DEFF Research Database (Denmark)
Rehder, Jörg; Rombach, Pirmin; Hansen, Ole
2002-01-01
This paper reports the development of a magnetic flux generator with an application in a hearing aid loudspeaker produced in microsystem technology (MST). The technology plans for two different designs for the magnetic flux generator, utilizing a soft-magnetic substrate or electroplated NiCoFe as core material, are presented, and the production and characterization of four different mono- and double-layer planar coil types are reported.
Determination flux in the Reactor JEN-1
International Nuclear Information System (INIS)
Manas Diaz, L.; Montes Ponce de leon, J.
1960-01-01
This report summarizes several irradiations that have been made to determine the neutron flux distributions in the core of the JEN-1 reactor. Gold foils of 380 μg and Mn-Ni foils (12% Ni) of 30 mg have been employed. The epithermal flux has been determined by means of the Cd ratio. The resonance integral values given by Macklin and Pomerance have been used. (Author) 9 refs
International Nuclear Information System (INIS)
Cashwell, E.D.; Schrandt, R.G.
1980-01-01
The current state of the art of calculating flux at a point with MCNP is discussed. Various techniques are touched upon, but the main emphasis is on the fast improved version of the once-more-collided flux estimator, which has been modified to treat neutrons thermalized by the free-gas model. The method is tested on several problems of interest and the results are presented
Controlling fluxes for microbial metabolic engineering
Sachdeva, Gairik
2014-01-01
This thesis presents novel synthetic biology tools and design principles usable for microbial metabolic engineering. Controlling metabolic fluxes is essential for biological manufacturing of fuels, materials, and high value chemicals. Insulating the flow of metabolites is a successful natural strategy for metabolic flux regulation. Recently, approaches using scaffolds, both in vitro and in vivo, to spatially co-localize enzymes have reported significant gains in product yields. RNA is suitabl...
Mold Flux Crystallization and Mold Thermal Behavior
Peterson, Elizabeth Irene
Mold flux plays a small but critical role in the continuous casting of steel. The carbon-coated powder is added at the top of the water-cooled copper mold; over time it melts and infiltrates the gap between the copper mold and the solidifying steel strand. Mold powders serve five primary functions: (1) chemical insulation, (2) thermal insulation, (3) lubrication between the steel strand and mold, (4) absorption of inclusions, and (5) promotion of even heat flux. All five functions are critical to slab casting, but surface defect prevention is primarily controlled through even heat flux. Glassy fluxes have high heat transfer and result in a thicker steel shell. Steels with large volumetric shrinkage on cooling must have a crystalline flux to reduce the radiative heat transfer and avoid the formation of cracks in the shell. Crystallinity plays a critical role in steel shell formation; therefore, it is important to study the thermal conditions that promote each phase and its morphology. Laboratory tests were performed to generate continuous cooling transformation (CCT) and time-temperature-transformation (TTT) diagrams. Continuous cooling transformation tests were performed in an instrumented eight-cell step-chill mold. Results showed that cuspidine was the only phase formed in conventional fluxes and all observed structures were dendritic. An isothermal tin-bath quench method was also developed to isothermally age glassy samples. Isothermal tests yielded different microstructures and different phases than those observed by continuous cooling. Comparison of aged tests with industrial flux films indicates similar faceted structures along the mold wall, suggesting that mold flux first solidifies as a glass along the mold wall, but the elevated temperature devitrifies the glassy structure, forming crystals that cannot form by continuous cooling.
Backtrack Orbit Search Algorithm
Knowles, K.; Swick, R.
2002-12-01
A Mathematical Solution to a Mathematical Problem. With the dramatic increase in satellite-borne sensor resolution, traditional methods of spatially searching for orbital data have become inadequate. As data volumes increase, end-users of the data have become increasingly intolerant of false positives. And, as computing power rapidly increases, end-users have come to expect equally rapid search speeds. Meanwhile, data archives have an interest in delivering the minimum amount of data that meets users' needs. This keeps their costs down and allows them to serve more users in a more timely manner. Many methods of spatial search for orbital data have been tried in the past and found wanting. The ever-popular lat/lon bounding box on a flat Earth is highly inaccurate. Spatial search based on nominal "orbits" is somewhat more accurate at much higher implementation cost and slower performance. Spatial search of orbital data based on predict orbit models is very accurate at a much higher maintenance cost and slower performance. This poster describes the Backtrack Orbit Search Algorithm--an alternative spatial search method for orbital data. Backtrack has a degree of accuracy that rivals predict methods while being faster, less costly to implement, and less costly to maintain than other methods.
Diagnostic algorithm for syncope.
Mereu, Roberto; Sau, Arunashis; Lim, Phang Boon
2014-09-01
Syncope is a common symptom with many causes. Affecting a large proportion of the population, both young and old, it represents a significant healthcare burden. The diagnostic approach to syncope should be focused on the initial evaluation, which includes a detailed clinical history, physical examination and 12-lead electrocardiogram. Following the initial evaluation, patients should be risk-stratified into high or low-risk groups in order to guide further investigations and management. Patients with high-risk features should be investigated further to exclude significant structural heart disease or arrhythmia. The ideal currently-available investigation should allow ECG recording during a spontaneous episode of syncope, and when this is not possible, an implantable loop recorder may be considered. In the emergency room setting, acute causes of syncope must also be considered including severe cardiovascular compromise due to pulmonary, cardiac or vascular pathology. While not all patients will receive a conclusive diagnosis, risk-stratification in patients to guide appropriate investigations in the context of a diagnostic algorithm should allow a benign prognosis to be maintained. Copyright © 2014 Elsevier B.V. All rights reserved.
Toward an Algorithmic Pedagogy
Directory of Open Access Journals (Sweden)
Holly Willis
2007-01-01
Full Text Available The demand for an expanded definition of literacy to accommodate visual and aural media is not particularly new, but it gains urgency as college students transform, becoming producers of media in many of their everyday social activities. The response among those who grapple with these issues as instructors has been to advocate for new definitions of literacy and particularly, an understanding of visual literacy. These efforts are exemplary, and promote a much-needed rethinking of literacy and models of pedagogy. However, in what is more akin to a manifesto than a polished argument, this essay argues that we need to push farther: What if we moved beyond visual rhetoric, as well as a game-based pedagogy and the adoption of a broad range of media tools on campus, toward a pedagogy grounded fundamentally in a media ecology? Framing this investigation in terms of a media ecology allows us to take account of the multiply determining relationships wrought not just by individual media, but by the interrelationships, dependencies and symbioses that take place within the dynamic system that is today’s high-tech university. An ecological approach allows us to examine what happens when new media practices collide with computational models, providing a glimpse of possible transformations not only in ways of being but in ways of teaching and learning. How, then, may pedagogical practices be transformed computationally or algorithmically and to what ends?
Bidirectional solar wind electron heat flux events
International Nuclear Information System (INIS)
Gosling, J.T.; Baker, D.N.; Bame, S.J.; Feldman, W.C.; Zwickl, R.D.; Smith, E.J.
1987-01-01
Normally the ≳80-eV electrons which carry the solar wind electron heat flux are collimated along the interplanetary magnetic field (IMF) in the direction pointing outward away from the sun. Occasionally, however, collimated fluxes of ≳80-eV electrons are observed traveling both parallel and antiparallel to the IMF. Here we present the results of a survey of such bidirectional electron heat flux events as observed with the plasma and magnetic field experiments aboard ISEE 3 at times when the spacecraft was not magnetically connected to the earth's bow shock. The onset of a bidirectional electron heat flux at ISEE 3 usually signals spacecraft entry into a distinct solar wind plasma and field entity, most often characterized by anomalously low proton and electron temperatures, a strong, smoothly varying magnetic field, a low plasma beta, and a high total pressure. Significant field rotations often occur at the beginning and/or end of bidirectional heat flux events, and, at times, the large field rotations characteristic of "magnetic clouds" are present. Approximately half of all bidirectional heat flux events are associated with and follow interplanetary shocks, while the other events have no obvious shock associations
Monthly Sea Surface Salinity and Freshwater Flux Monitoring
Ren, L.; Xie, P.; Wu, S.
2017-12-01
Taking advantage of the complementary nature of Sea Surface Salinity (SSS) measurements from in-situ platforms (CTDs, shipboard sensors, Argo floats, etc.) and satellite retrievals from the Soil Moisture Ocean Salinity (SMOS) satellite of the European Space Agency (ESA), Aquarius (a joint venture between the US and Argentina), and the Soil Moisture Active Passive (SMAP) mission of the National Aeronautics and Space Administration (NASA), a technique was developed at NOAA/NCEP/CPC to construct an analysis of monthly SSS, called the NOAA Blended Analysis of Sea-Surface Salinity (BASS). The algorithm is a two-step approach: first, the bias in the satellite data is removed through Probability Density Function (PDF) matching against co-located in situ measurements; then, the bias-corrected satellite data are combined with the in situ measurements through the Optimal Interpolation (OI) method. The BASS SSS product is on a 1° by 1° grid over the global ocean for a 7-year period from 2010. Combined with the NOAA/NCEP/CPC CMORPH satellite precipitation (P) estimates and the Climate Forecast System Reanalysis (CFSR) evaporation (E) fields, a monthly package of SSS and oceanic freshwater flux (E and P) was developed to monitor the global oceanic water cycle and SSS on a monthly basis. The BASS product is a suite of long-term SSS and freshwater flux data sets with the temporal homogeneity and inter-component consistency needed for examining long-term changes and for monitoring. It presents complete spatial coverage and improved resolution and accuracy, which facilitates diagnostic analysis of the relationship and co-variability among SSS, freshwater flux, mixed-layer processes, oceanic circulation, and assimilation of SSS into global models. At the AGU meeting, we will provide more details on the CPC salinity and freshwater flux data package and its applications in the monitoring and analysis of SSS variations in association with the ENSO and other major climate
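The first step of the blending scheme, PDF matching, amounts to quantile mapping. A minimal sketch, assuming simple 1D arrays of co-located satellite and in-situ values (the operational product works on gridded fields):

```python
import numpy as np

def pdf_match(satellite, insitu):
    """Bias-correct satellite values by quantile (PDF) matching.

    Each satellite value is replaced by the in-situ value at the same
    empirical quantile, which removes systematic distributional bias
    while preserving the satellite field's rank ordering.
    """
    sat_sorted = np.sort(satellite)
    ref_sorted = np.sort(insitu)
    # empirical quantile of each satellite value within its own sample
    ranks = np.searchsorted(sat_sorted, satellite, side="right") / len(sat_sorted)
    quantiles = np.linspace(1 / len(ref_sorted), 1.0, len(ref_sorted))
    return np.interp(ranks, quantiles, ref_sorted)
```

The second step, Optimal Interpolation, would then weight the corrected satellite field against the sparse in-situ observations by their error covariances; only the bias-correction step is sketched here.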
Large estragole fluxes from oil palms in Borneo
Directory of Open Access Journals (Sweden)
P. K. Misztal
2010-05-01
Full Text Available During two field campaigns (OP3 and ACES), which ran in Borneo in 2008, we measured large emissions of estragole (methyl chavicol; IUPAC systematic name 1-allyl-4-methoxybenzene; CAS number 140-67-0) in ambient air above oil palm canopies (0.81 mg m^{−2} h^{−1} and 3.2 ppbv for mean midday fluxes and mixing ratios respectively) and subsequently from flower enclosures. However, we did not detect this compound at a nearby rainforest. Estragole is a known attractant of the African oil palm weevil (Elaeidobius kamerunicus), which pollinates oil palms (Elaeis guineensis). There has been recent interest in the biogenic emissions of estragole but it is normally not included in atmospheric models of biogenic emissions and atmospheric chemistry despite its relatively high potential for secondary organic aerosol formation from photooxidation and high reactivity with the OH radical. We report the first direct canopy-scale measurements of estragole fluxes from tropical oil palms by the virtual disjunct eddy covariance technique and compare them with previously reported data for estragole emissions from Ponderosa pine. Flowers, rather than leaves, appear to be the main source of estragole from oil palms; we derive a global estimate of estragole emissions from oil palm plantations of ~0.5 Tg y^{−1}. The observed ecosystem mean fluxes (0.44 mg m^{−2} h^{−1}) and mean ambient volume mixing ratios (3.0 ppbv) of estragole are the highest reported so far. The value for midday mixing ratios is not much different from the total average as, unlike other VOCs (e.g. isoprene), the main peak occurred in the evening rather than in the middle of the day. Despite this, we show that the estragole flux can be parameterised using a modified G06 algorithm for emission. However, the model underestimates the afternoon peak even though a similar approach works well for isoprene. Our measurements suggest that this biogenic
Streaming Algorithms for Line Simplification
DEFF Research Database (Denmark)
Abam, Mohammad; de Berg, Mark; Hachenberger, Peter
2010-01-01
this problem in a streaming setting, where we only have a limited amount of storage, so that we cannot store all the points. We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points and compare the error of our...
Echo Cancellation I: Algorithms Simulation
Directory of Open Access Journals (Sweden)
P. Sovka
2000-04-01
Full Text Available The echo cancellation system used in mobile communications is analyzed. Convergence behavior and misadjustment of several LMS algorithms are compared; misadjustment means error in the filter weight estimates. The resulting echo suppression for the discussed algorithms, with simulated as well as real speech signals, is evaluated. The optimal echo cancellation configuration is suggested.
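A minimal normalized-LMS (NLMS) echo canceller illustrates the family of algorithms being compared (an illustrative sketch, not the paper's exact configurations; the step size and tap count are assumptions):

```python
import numpy as np

def nlms_echo_canceller(far_end, mic, taps=32, mu=0.5, eps=1e-8):
    """Normalized LMS adaptive filter for echo cancellation.

    Adapts an FIR estimate of the echo path from the far-end signal and
    subtracts the estimated echo from the microphone signal; the returned
    error signal is the echo-suppressed output. Misadjustment shows up as
    residual error in the converged weights w.
    """
    w = np.zeros(taps)                          # echo-path estimate
    x = np.zeros(taps)                          # delay line, newest first
    out = np.zeros(len(mic))
    for n in range(len(mic)):
        x = np.roll(x, 1)
        x[0] = far_end[n]
        e = mic[n] - w @ x                      # cancel estimated echo
        w += mu * e * x / (x @ x + eps)         # NLMS weight update
        out[n] = e
    return out, w
```

The normalization by the input power x @ x is what distinguishes NLMS from plain LMS and makes the convergence rate insensitive to the far-end signal level.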
Algorithms on ensemble quantum computers.
Boykin, P Oscar; Mor, Tal; Roychowdhury, Vwani; Vatan, Farrokh
2010-06-01
In ensemble (or bulk) quantum computation, all computations are performed on an ensemble of computers rather than on a single computer. Measurements of qubits in an individual computer cannot be performed; instead, only expectation values (over the complete ensemble of computers) can be measured. As a result of this limitation on the model of computation, many algorithms cannot be processed directly on such computers, and must be modified, as the common strategy of delaying the measurements usually does not resolve this ensemble-measurement problem. Here we present several new strategies for resolving this problem. Based on these strategies we provide new versions of some of the most important quantum algorithms, versions that are suitable for implementing on ensemble quantum computers, e.g., on liquid NMR quantum computers. These algorithms are Shor's factorization algorithm, Grover's search algorithm (with several marked items), and an algorithm for quantum fault-tolerant computation. The first two algorithms are simply modified using randomizing and sorting strategies. For the last algorithm, we develop a classical-quantum hybrid strategy for removing measurements. We use it to present a novel quantum fault-tolerant scheme. More explicitly, we present schemes for fault-tolerant measurement-free implementation of the Toffoli and σ_z^{1/4} gates, as these operations cannot be implemented "bitwise", and their standard fault-tolerant implementations require measurement.
International Nuclear Information System (INIS)
Grady, M.
1986-01-01
I describe a fast fermion algorithm which utilizes pseudofermion fields but appears to have little or no systematic error. Test simulations on two-dimensional gauge theories are described. A possible justification for the algorithm being exact is discussed. 8 refs
Global alignment algorithms implementations | Fatumo ...
African Journals Online (AJOL)
In this paper, we implemented the two routes for sequence comparison, that is, the dotplot and the Needleman-Wunsch algorithm for global sequence alignment. Our algorithms were implemented in the Python programming language and were tested on a Linux platform (1.60 GHz, 512 MB of RAM, SUSE 9.2 and 10.1 versions).
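For reference, the Needleman-Wunsch recurrence can be written compactly in Python (the scoring values below are illustrative, not the paper's parameters):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment score by dynamic programming (Needleman-Wunsch).

    F[i][j] holds the best score for aligning the prefixes a[:i] and b[:j];
    each cell takes the maximum over a match/mismatch step and a gap in
    either sequence.
    """
    n, m = len(a), len(b)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap                      # leading gaps in b
    for j in range(1, m + 1):
        F[0][j] = j * gap                      # leading gaps in a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,  # align a[i-1] with b[j-1]
                          F[i - 1][j] + gap,    # gap in b
                          F[i][j - 1] + gap)    # gap in a
    return F[n][m]
```

A traceback over the filled matrix would recover the alignment itself; only the score is returned here.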
Recovery Rate of Clustering Algorithms
Li, Fajie; Klette, Reinhard; Wada, T; Huang, F; Lin, S
2009-01-01
This article provides a simple and general way for defining the recovery rate of clustering algorithms using a given family of old clusters for evaluating the performance of the algorithm when calculating a family of new clusters. Under the assumption of dealing with simulated data (i.e., known old
Diversity-Guided Evolutionary Algorithms
DEFF Research Database (Denmark)
Ursem, Rasmus Kjær
2002-01-01
Population diversity is undoubtably a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few algorit...
Quantum algorithms and learning theory
Arunachalam, S.
2018-01-01
This thesis studies strengths and weaknesses of quantum computers. In the first part we present three contributions to quantum algorithms. 1) consider a search space of N elements. One of these elements is "marked" and our goal is to find this. We describe a quantum algorithm to solve this problem
Where are the parallel algorithms?
Voigt, R. G.
1985-01-01
Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.
Online co-regularized algorithms
Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.
2012-01-01
We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks
Algorithms in combinatorial design theory
Colbourn, CJ
1985-01-01
The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.
Executable Pseudocode for Graph Algorithms
B. Ó Nualláin (Breanndán)
2015-01-01
Algorithms are written in pseudocode. However, the implementation of an algorithm in a conventional, imperative programming language can often be scattered over hundreds of lines of code, thus obscuring its essence. This can lead to difficulties in understanding or verifying the
On exact algorithms for treewidth
Bodlaender, H.L.; Fomin, F.V.; Koster, A.M.C.A.; Kratsch, D.; Thilikos, D.M.
2006-01-01
We give experimental and theoretical results on the problem of computing the treewidth of a graph by exact exponential time algorithms using exponential space or using only polynomial space. We first report on an implementation of a dynamic programming algorithm for computing the treewidth of a
Cascade Error Projection Learning Algorithm
Duong, T. A.; Stubberud, A. R.; Daud, T.
1995-01-01
A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.
Turbulent fluxes by "Conditional Eddy Sampling"
Siebicke, Lukas
2015-04-01
Turbulent flux measurements are key to understanding ecosystem-scale energy and matter exchange, including atmospheric trace gases. While the eddy covariance approach has evolved as an invaluable tool to quantify fluxes of e.g. CO2 and H2O continuously, it is limited to very few atmospheric constituents for which sufficiently fast analyzers exist. High instrument cost, lack of field-readiness or high power consumption (e.g. many recent laser-based systems requiring strong vacuum) further impair application to other tracers. Alternative micrometeorological approaches such as conditional sampling might overcome major limitations. Although the idea of eddy accumulation was already proposed by Desjardin in 1972 (Desjardin, 1977), at the time it could not be realized for trace gases. Major simplifications by Businger and Oncley (1990) led to its widespread application as 'Relaxed Eddy Accumulation' (REA). However, those simplifications (flux-gradient similarity with constant flow rate sampling irrespective of vertical wind velocity, and introduction of a deadband around zero vertical wind velocity) have degraded eddy accumulation to an indirect method, introducing issues of scalar similarity and often a lack of suitable scalar flux proxies. Here we present a real implementation of a true eddy accumulation system according to the original concept. Key to our approach, which we call 'Conditional Eddy Sampling' (CES), is the mathematical formulation of conditional sampling in its true form of a direct eddy flux measurement, paired with a performant real implementation. Dedicated hardware controlled by near-real-time software allows full signal recovery at 10 or 20 Hz, very fast valve switching, instant vertical-wind-velocity-proportional flow rate control, virtually no deadband and adaptive power management. Demonstrated system performance often exceeds requirements for flux measurements by orders of magnitude. The system's exceptionally low power consumption is ideal
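The accumulation principle, sampling air at a rate proportional to the vertical wind into updraft and downdraft reservoirs, can be sketched as an idealized discrete-time model (not the authors' hardware implementation):

```python
import numpy as np

def true_eddy_accumulation_flux(w, c, dt):
    """True eddy accumulation flux from discrete samples.

    w  : vertical wind velocity series (assumed zero-mean), m/s
    c  : tracer concentration series
    dt : sampling interval, s
    Air is notionally sampled at a rate proportional to |w| into an
    updraft or a downdraft reservoir; the net accumulated tracer amount
    per unit time equals the eddy covariance flux <w'c'>.
    """
    up = w > 0
    amount_up = np.sum(w[up] * c[up]) * dt        # collected in updrafts
    amount_dn = np.sum(w[~up] * c[~up]) * dt      # in downdrafts (negative)
    total_time = len(w) * dt
    return (amount_up + amount_dn) / total_time
```

Because the sampling rate tracks w exactly, no flux-gradient similarity assumption or deadband is needed, which is the distinction the abstract draws between true eddy accumulation and REA.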
Accuracy, convergence and stability of finite element CFD algorithms
International Nuclear Information System (INIS)
Baker, A.J.; Iannelli, G.S.; Noronha, W.P.
1989-01-01
The requirement for artificial dissipation is well understood for shock-capturing CFD procedures in aerodynamics. However, numerical diffusion is widely utilized across the board in Navier-Stokes CFD algorithms, ranging from incompressible through supersonic flow applications. The Taylor weak statement (TWS) theory is applicable to any conservation law system containing an evolutionary component, wherein the analytical modifications become functionally dependent on the Jacobian of the corresponding equation system flux vector. The TWS algorithm is developed for a range of fluid mechanics conservation law systems including the incompressible Navier-Stokes, depth-averaged free surface hydrodynamic Navier-Stokes, and compressible Euler and Navier-Stokes equations. This paper presents the TWS statement for this problem class range and highlights the important theoretical issues of accuracy, convergence and stability. Numerical results for a variety of benchmark problems are presented to document key features. 8 refs
Novel medical image enhancement algorithms
Agaian, Sos; McClendon, Stephen A.
2010-01-01
In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells will also be implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
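The alpha-trimmed mean filter used as the backbone of the first algorithm can be sketched as follows (an illustrative re-implementation of the standard filter, not the authors' code; window size and trim fraction are assumed defaults):

```python
import numpy as np

def alpha_trimmed_mean(img, size=3, alpha=0.2):
    """Alpha-trimmed mean filter: at each pixel, sort the size x size
    neighbourhood, discard the alpha fraction of lowest and highest
    values, and average the rest."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    trim = int(alpha * size * size)          # values dropped from each end
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = np.sort(padded[i:i + size, j:j + size], axis=None)
            kept = win[trim:win.size - trim]
            out[i, j] = kept.mean()
    return out
```

Unlike a plain mean filter, the trimming rejects impulse outliers (salt-and-pepper noise) before averaging, which is why it can sharpen without amplifying isolated bright pixels.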
Elementary functions algorithms and implementation
Muller, Jean-Michel
2016-01-01
This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...
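A representative "shift-and-add" algorithm of the kind covered in Part II is CORDIC, which computes sine and cosine using only additions and scalings by powers of two (a floating-point sketch for readability; hardware implementations use fixed-point shifts and a precomputed gain):

```python
import math

def cordic_sincos(theta, n=32):
    """Compute (cos(theta), sin(theta)) by CORDIC: rotate a unit vector
    through the fixed angles atan(2^-i), choosing the direction at each
    step so the residual angle z is driven to zero. Valid for |theta|
    up to about 1.74 rad without argument reduction."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    K = 1.0
    for i in range(n):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # compensate rotation gain
    x, y, z = K, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x, y
```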
A finite element calculation of flux pumping
Campbell, A. M.
2017-12-01
A flux pump is not only a fascinating example of the power of Faraday’s concept of flux lines, but also an attractive way of powering superconducting magnets without large electronic power supplies. However it is not possible to do this in HTS by driving a part of the superconductor normal, it must be done by exceeding the local critical density. The picture of a magnet pulling flux lines through the material is attractive, but as there is no direct contact between flux lines in the magnet and vortices, unless the gap between them is comparable to the coherence length, the process must be explicable in terms of classical electromagnetism and a nonlinear V-I characteristic. In this paper a simple 2D model of a flux pump is used to determine the pumping behaviour from first principles and the geometry. It is analysed with finite element software using the A formulation and FlexPDE. A thin magnet is passed across one or more superconductors connected to a load, which is a large rectangular loop. This means that the self and mutual inductances can be calculated explicitly. A wide strip, a narrow strip and two conductors are considered. Also an analytic circuit model is analysed. In all cases the critical state model is used, so the flux flow resistivity and dynamic resistivity are not directly involved, although an effective resistivity appears when J c is exceeded. In most of the cases considered here is a large gap between the theory and the experiments. In particular the maximum flux transferred to the load area is always less than the flux of the magnet. Also once the threshold needed for pumping is exceeded the flux in the load saturates within a few cycles. However the analytic circuit model allows a simple modification to allow for the large reduction in I c when the magnet is over a conductor. This not only changes the direction of the pumped flux but leads to much more effective pumping.
Estimating surface fluxes using eddy covariance and numerical ogive optimization
DEFF Research Database (Denmark)
Sievers, J.; Papakyriakou, T.; Larsen, Søren Ejling
2015-01-01
Estimating representative surface fluxes using eddy covariance leads invariably to questions concerning inclusion or exclusion of low-frequency flux contributions. For studies where fluxes are linked to local physical parameters and up-scaled through numerical modelling efforts, low-frequency contributions interfere with our ability to isolate local biogeochemical processes of interest, as represented by turbulent fluxes. No method currently exists to disentangle low-frequency contributions on flux estimates. Here, we present a novel comprehensive numerical scheme to identify and separate out low-frequency contributions to vertical turbulent surface fluxes. For high flux rates (|sensible heat flux| > 40 W m-2, |latent heat flux| > 20 W m-2 and |CO2 flux| > 100 mmol m-2 d-1) we found that the average relative difference between fluxes estimated by ogive optimization and the conventional method was low (5–20 %), suggesting...
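The ogive underlying such schemes is the cumulative integral of the w-c cospectrum from the highest frequency downward; where the curve flattens, lower frequencies contribute no further flux. A minimal sketch of the construction (the function name and sampling rate are illustrative, and this is the general ogive, not the paper's optimization scheme):

```python
import numpy as np

def ogive(w, c, fs=10.0):
    """Ogive of the w-c cospectrum: accumulate the cospectral density
    from the highest frequency down to the lowest, so og[0] equals the
    total covariance (the eddy-covariance flux)."""
    n = len(w)
    W = np.fft.rfft(w - w.mean())
    C = np.fft.rfft(c - c.mean())
    co = (W * np.conj(C)).real / (n * n)   # one-sided cospectral terms
    co[1:] *= 2.0                          # fold in negative frequencies
    if n % 2 == 0:
        co[-1] /= 2.0                      # Nyquist bin is not mirrored
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    og = np.cumsum(co[::-1])[::-1]         # accumulate high -> low freq
    return freqs, og
```

By Parseval's theorem the lowest-frequency value of the ogive recovers the full covariance, which is what makes the curve a natural diagnostic for low-frequency flux contributions.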
A model for heliospheric flux-ropes
Nieves-Chinchilla, T.; Linton, M.; Vourlidas, A.; Hidalgo, M. A. U.
2017-12-01
This work presents an analytical flux-rope model, which explores different levels of complexity starting from a circular-cylindrical geometry. The framework of this series of models was established by Nieves-Chinchilla et al. (2016) with the circular-cylindrical analytical flux rope model. The model attempts to describe the magnetic flux rope topology with a distorted cross-section as a possible consequence of the interaction with the solar wind. In this model, the flux rope is completely described in a non-orthogonal geometry. The Maxwell equations are solved using tensor calculus consistent with the chosen geometry, invariance along the axial direction, and the assumption of no radial current density. The model is generalized in terms of the radial and azimuthal dependence of the poloidal and axial current density components. The misalignment between current density and magnetic field is studied in detail for several example profiles of the axial and poloidal current density components. This theoretical analysis provides a map of the force distribution inside the flux rope. For reconstruction of heliospheric flux ropes, the circular-cylindrical reconstruction technique has been adapted to the new geometry, applied to in situ ICMEs with an entrained flux rope, and tested on cases with clear in situ signatures of distortion. The model adds a piece to the puzzle of the physical-analytical representation of these magnetic structures, which should be evaluated with the ultimate goal of reconciling in situ reconstructions with 3D remote-sensing CME reconstructions. Other effects such as axial curvature and/or expansion could be incorporated in the future to fully understand the magnetic structure.
Production flux of sea spray aerosol
Energy Technology Data Exchange (ETDEWEB)
de Leeuw, G.; Lewis, E.; Andreas, E. L.; Anguelova, M. D.; Fairall, C. W.; O’Dowd, C.; Schulz, M.; Schwartz, S. E.
2011-05-07
Knowledge of the size- and composition-dependent production flux of primary sea spray aerosol (SSA) particles and its dependence on environmental variables is required for modeling cloud microphysical properties and aerosol radiative influences, interpreting measurements of particulate matter in coastal areas and its relation to air quality, and evaluating rates of uptake and reactions of gases in sea spray drops. This review examines recent research pertinent to SSA production flux, which deals mainly with production of particles with r80 (equilibrium radius at 80% relative humidity) less than 1 µm and as small as 0.01 µm. Production of sea spray particles and its dependence on controlling factors has been investigated in laboratory studies that have examined the dependences on water temperature, salinity, and the presence of organics, and in field measurements with micrometeorological techniques that use newly developed fast optical particle sizers. Extensive measurements show that water-insoluble organic matter contributes substantially to the composition of SSA particles with r80 < 0.25 µm and, in locations with high biological activity, can be the dominant constituent. Order-of-magnitude variation remains in estimates of the size-dependent production flux per white area, the quantity central to formulations of the production flux based on the whitecap method. This variation indicates that the production flux may depend on quantities such as the volume flux of air bubbles to the surface that are not accounted for in current models. Variation in estimates of the whitecap fraction as a function of wind speed contributes additional, comparable uncertainty to production flux estimates.
Derivative processes for modelling metabolic fluxes
Žurauskienė, Justina; Kirk, Paul; Thorne, Thomas; Pinney, John; Stumpf, Michael
2014-01-01
Motivation: One of the challenging questions in modelling biological systems is to characterize the functional forms of the processes that control and orchestrate molecular and cellular phenotypes. Recently proposed methods for the analysis of metabolic pathways, for example, dynamic flux estimation, can only provide estimates of the underlying fluxes at discrete time points but fail to capture the complete temporal behaviour. To describe the dynamic variation of the fluxes, we additionally require the assumption of specific functional forms that can capture the temporal behaviour. However, it also remains unclear how to address the noise which might be present in experimentally measured metabolite concentrations. Results: Here we propose a novel approach to modelling metabolic fluxes: derivative processes that are based on multiple-output Gaussian processes (MGPs), which are a flexible non-parametric Bayesian modelling technique. The main advantages that follow from the MGP approach include the natural non-parametric representation of the fluxes and the ability to impute missing data in between the measurements. Our derivative process approach allows us to model changes in metabolite derivative concentrations and to characterize the temporal behaviour of metabolic fluxes from time course data. Because the derivative of a Gaussian process is itself a Gaussian process, we can readily link metabolite concentrations to metabolic fluxes and vice versa. Here we discuss how this can be implemented in an MGP framework and illustrate its application to simple models, including nitrogen metabolism in Escherichia coli. Availability and implementation: R code is available from the authors upon request. Contact: j.norkunaite@imperial.ac.uk; m.stumpf@imperial.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24578401
Portable Health Algorithms Test System
Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.
2010-01-01
A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
Measuring Convective Mass Fluxes Over Tropical Oceans
Raymond, David
2017-04-01
Deep convection forms the upward branches of all large-scale circulations in the tropics. Understanding what controls the form and intensity of vertical convective mass fluxes is thus key to understanding tropical weather and climate. These mass fluxes and the corresponding conditions supporting them have been measured by recent field programs (TPARC/TCS08, PREDICT, HS3) in tropical disturbances considered to be possible tropical storm precursors. In reality, this encompasses most strong convection in the tropics. The measurements were made with arrays of dropsondes deployed from high altitude. In some cases Doppler radar provided additional measurements. The results are in some ways surprising. Three factors were found to control the mass flux profiles: the strength of the total surface heat fluxes, the column-integrated relative humidity, and the low- to mid-tropospheric moist convective instability. The first two act as expected, with larger heat fluxes and higher humidity producing more precipitation and stronger lower tropospheric mass fluxes. However, unexpectedly, smaller (but still positive) convective instability produces more precipitation as well as more bottom-heavy convective mass flux profiles. Furthermore, the column humidity and the convective instability are anti-correlated, at least in the presence of strong convection. On spatial scales of a few hundred kilometers, the virtual temperature structure appears to be in dynamic balance with the pattern of potential vorticity. Since potential vorticity typically evolves on longer time scales than convection, the potential vorticity pattern plus the surface heat fluxes then become the immediate controlling factors for average convective properties. All measurements so far have taken place in regions with relatively flat sea surface temperature (SST) distributions. We are currently seeking funding for a measurement program in the tropical east Pacific, a region that exhibits strong SST gradients and
Turbulent Fogwater Flux Measurements Above A Forest
Burkard, R.; Eugster, W.; Buetzberger, P.; Siegwolf, R.
Many forest ecosystems in elevated regions receive a significant fraction of their water and nutrient input by the interception of fogwater. Recently, several studies have demonstrated the suitability of the eddy covariance technique for the direct measurement of turbulent liquid water fluxes. Since summer 2001 a fogwater flux measurement system has been running at a montane site above a mixed forest canopy in Switzerland. The measurement equipment consists of a high-speed size-resolving droplet spectrometer and a three-dimensional ultrasonic anemometer. The chemical composition of the fogwater was determined from samples collected with a modified Caltech active strand collector. The deposition of nutrients by fog (occult deposition) was calculated by multiplying the total fogwater flux (total of measured turbulent and calculated gravitational flux) during each fog event by the ionic concentrations found in the collected fogwater. Several uncertainties still exist as far as the accuracy of the measurements is concerned. Although there is no universal statistical approach for testing the quality of the liquid water flux data directly, results of independent data quality checks of the two time series involved in the flux computation, and accordingly the two instruments (ultrasonic anemometer and droplet spectrometer), are presented. Within the measurement period, over 80 fog events with a duration longer than 2.5 hours were analyzed. An enormous physical and chemical heterogeneity among these fog events was found. We assume that some of this heterogeneity is due to the fact that fog or cloud droplets are not conservative entities: the turbulent flux of fog droplets, which can be referred to as the liquid water flux, is affected by phase change processes and coagulation. The measured coexistence of upward fluxes of small fog droplets (diameter < 10 µm) with the downward transport of larger droplets indicates the influence of such processes. With the
Learning from nature: Nature-inspired algorithms
DEFF Research Database (Denmark)
Albeanu, Grigore; Madsen, Henrik; Popentiu-Vladicescu, Florin
2016-01-01
During the last decade, nature has inspired researchers to develop new algorithms. The largest collection of nature-inspired algorithms is biology-inspired: swarm intelligence (particle swarm optimization, ant colony optimization, cuckoo search, bees' algorithm, bat algorithm, firefly algorithm etc...
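Particle swarm optimization, the first algorithm listed, can be sketched in a few lines (the textbook variant with assumed inertia and acceleration coefficients, minimizing a simple sphere function for illustration):

```python
import numpy as np

def pso(f, dim=2, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimization: each particle remembers its
    personal best position, the swarm tracks a global best, and each
    velocity update blends inertia, cognitive, and social pulls."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    w_in, c1, c2 = 0.7, 1.5, 1.5            # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w_in * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best, val = pso(lambda p: float(np.sum(p ** 2)))
```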
Neutron flux measurement utilizing Campbell technique
International Nuclear Information System (INIS)
Kropik, M.
2000-01-01
Application of the Campbell technique for neutron flux measurement is described in the contribution. This technique utilizes the AC component (noise) of a neutron chamber signal rather than the usually used DC component. The Campbell theorem, originally derived to describe the noise behaviour of valves, states that the mean square of the AC component of the chamber signal is proportional to the neutron flux (reactor power). The quadratic dependence of the reactor power on the root mean square value usually permits the whole power range of the neutron flux measurement to be covered by a single channel. A further advantage of the Campbell technique is that large pulses of the response to neutrons are favoured over small pulses of the response to gamma rays in the ratio of their mean square charge transfer, and thus the Campbell technique provides excellent gamma-ray discrimination in the current operational range of a neutron chamber. A neutron flux measurement channel using state-of-the-art components was designed and put into operation. Its linearity, accuracy, dynamic range, time response and gamma discrimination were tested on the VR-1 nuclear reactor in Prague, and its behaviour under high neutron flux (accident conditions) was tested on the TRIGA nuclear reactor in Vienna. (author)
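The quadratic weighting at the heart of the Campbell technique can be illustrated numerically: for random pulses of charge q arriving at rate r, Campbell's theorem gives a signal variance proportional to r·q², so large neutron pulses dominate small gamma pulses even at a much lower count rate (the idealized single-bin pulse shape here is an assumption, not a real chamber response):

```python
import numpy as np

def campbell_variance(rate, q, n_samples=200_000, dt=1e-6, seed=1):
    """Variance (mean square of the AC component) of a summed pulse
    signal: pulses of charge q arrive as a Poisson process at `rate`,
    binned into samples of width dt. Expected variance ~ rate*dt*q**2."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(rate * dt, n_samples)   # pulses per sample bin
    signal = q * counts
    return signal.var()

v_n = campbell_variance(rate=1e5, q=10.0)   # neutrons: large charge
v_g = campbell_variance(rate=1e6, q=0.5)    # gammas: 10x the rate, 1/20 charge
```

Despite a ten-fold higher gamma rate, the neutron contribution to the variance is far larger, which is the gamma discrimination the abstract describes; the variance also scales linearly with rate, giving the flux measurement.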
CO2 flux from Javanese mud volcanism
Queißer, M.; Burton, M. R.; Arzilli, F.; Chiarugi, A.; Marliyani, G. I.; Anggara, F.; Harijoko, A.
2017-01-01
Studying the quantity and origin of CO2 emitted by back-arc mud volcanoes is critical to correctly model fluid-dynamical, thermodynamical, and geochemical processes that drive their activity and to constrain their role in the global geochemical carbon cycle. We measured CO2 fluxes of the Bledug Kuwu mud volcano on the Kendeng Fold and thrust belt in the back arc of Central Java, Indonesia, using scanning remote sensing absorption spectroscopy. The data show that the expelled gas is rich in CO2 with a volume fraction of at least 16 vol %. A lower limit CO2 flux of 1.4 kg s-1 (117 t d-1) was determined, in line with the CO2 flux from the Javanese mud volcano LUSI. Extrapolating these results to mud volcanism from the whole of Java suggests an order of magnitude total CO2 flux of 3 kt d-1, comparable with the expected back-arc efflux of magmatic CO2. After discussing geochemical, geological, and geophysical evidence we conclude that the source of CO2 observed at Bledug Kuwu is likely a mixture of thermogenic, biogenic, and magmatic CO2, with faulting controlling potential pathways for magmatic fluids. This study further demonstrates the merit of man-portable active remote sensing instruments for probing natural gas releases, enabling bottom-up quantification of CO2 fluxes. PMID:28944134
Neutron flux enhancement in the NRAD reactor
International Nuclear Information System (INIS)
Weeks, A.A.; Heidel, C.C.; Imel, G.R.
1988-01-01
In 1987 a series of experiments was conducted at the NRAD reactor facility at Argonne National Laboratory - West (ANL-W) to investigate the possibility of increasing the thermal neutron content at the end of the reactor's east beam tube through the use of hydrogenous flux traps. It was desired to increase the thermal flux for a series of experiments to be performed in the east radiography cell, in which the enhanced flux was required in a relatively small volume. Hence, it was feasible to attempt to focus the cross section of the beam onto a smaller area. Two flux traps were constructed from unborated polypropylene and tested to determine their effectiveness. Both traps were open to the entire cross-sectional area of the neutron beam (as it emerges from the wall and enters the beam room). The sides then converged such that at the end of the trap the beam would be 'focused' to a greater intensity. The differences between the two flux traps were primarily in length, and hence in angle to the beam, as the inlet and outlet cross-sectional areas were held constant. It should be noted that merely placing a slab of polypropylene in the beam will not yield significant multiplication, as neutrons are primarily scattered away
The Flux Database Concerted Action (invited paper)
International Nuclear Information System (INIS)
Mitchell, N.G.; Donnelly, C.E.
2000-01-01
The background to the IUR action on the development of a flux database for radionuclide transfer in soil-plant systems is summarised. The action is discussed in terms of the objectives, the deliverables and the progress achieved by the flux database working group. The paper describes the background to the current initiative, outlines specific features of the database and supporting documentation, and presents findings from the working group's activities. The aim of the IUR flux database working group is to bring together researchers to collate data from current experimental studies investigating aspects of radionuclide transfer in soil-plant systems. The database will incorporate parameters describing the time-dependent transfer of radionuclides between soil, plant and animal compartments. Work under the EC Concerted Action considers soil-plant interactions. This initiative has become known as the radionuclide flux database. It is emphasised that the word flux is used in this case simply to indicate the flow of radionuclides between compartments in time. (author)
Complex networks an algorithmic perspective
Erciyes, Kayhan
2014-01-01
Network science is a rapidly emerging field of study that encompasses mathematics, computer science, physics, and engineering. A key issue in the study of complex networks is to understand the collective behavior of the various elements of these networks. Although the results from graph theory have proven to be powerful in investigating the structures of complex networks, few books focus on the algorithmic aspects of complex network analysis. Filling this need, Complex Networks: An Algorithmic Perspective supplies the basic theoretical algorithmic and graph theoretic knowledge needed by every r
An investigation of genetic algorithms
International Nuclear Information System (INIS)
Douglas, S.R.
1995-04-01
Genetic algorithms mimic biological evolution by natural selection in their search for better individuals within a changing population. They can be used as efficient optimizers. This report discusses the developing field of genetic algorithms. It gives a simple example of the search process and introduces the concept of a schema. It also discusses modifications to the basic genetic algorithm that result in species and niche formation, in machine learning and artificial evolution of computer programs, and in the streamlining of human-computer interaction. (author). 3 refs., 1 tab., 2 figs
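The search process the report illustrates can be sketched with a minimal genetic algorithm on the OneMax problem (tournament selection, one-point crossover, bit-flip mutation; all parameters are illustrative, not from the report):

```python
import random

def onemax_ga(length=32, pop_size=60, generations=120, seed=42):
    """Minimal genetic algorithm maximizing the number of 1-bits in a
    bitstring: two-way tournament selection, one-point crossover, and
    per-bit mutation with probability 1/length. Returns the best
    individual found."""
    rng = random.Random(seed)
    fitness = sum
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, length)              # one-point crossover
            child = [bit ^ (rng.random() < 1.0 / length)  # bit-flip mutation
                     for bit in p1[:cut] + p2[cut:]]
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)
    return best

best = onemax_ga()
```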
Instance-specific algorithm configuration
Malitsky, Yuri
2014-01-01
This book presents a modular and expandable technique in the rapidly emerging research area of automatic configuration and selection of the best algorithm for the instance at hand. The author presents the basic model behind ISAC and then details a number of modifications and practical applications. In particular, he addresses automated feature generation, offline algorithm configuration for portfolio generation, algorithm selection, adaptive solvers, online tuning, and parallelization. The author's related thesis was honorably mentioned (runner-up) for the ACP Dissertation Award in 2014,
Quantum Computations: Fundamentals and Algorithms
International Nuclear Information System (INIS)
Duplij, S.A.; Shapoval, I.I.
2007-01-01
Basic concepts of quantum information theory, the principles of quantum computation, and the possibility of creating on this basis a device unique in its computational power and operating principle, named the quantum computer, are considered. The main blocks of quantum logic and schemes for implementing quantum computations are presented, as well as some effective quantum algorithms known today, intended to realize the advantages of quantum computation over classical computation. Among them a special place is taken by Shor's algorithm for number factorization and Grover's algorithm for unsorted database search. The phenomenon of decoherence, its influence on quantum computer stability, and methods of quantum error correction are described
Algorithms Design Techniques and Analysis
Alsuwaiyel, M H
1999-01-01
Problem solving is an essential part of every scientific discipline. It has two components: (1) problem identification and formulation, and (2) solution of the formulated problem. One can solve a problem using ad hoc techniques or follow techniques that have produced efficient solutions to similar problems. This requires an understanding of various algorithm design techniques: how and when to use them to formulate solutions, and the context appropriate for each of them. This book advocates the study of algorithm design techniques by presenting most of the useful algorithm desi
Subcubic Control Flow Analysis Algorithms
DEFF Research Database (Denmark)
Midtgaard, Jan; Van Horn, David
We give the first direct subcubic algorithm for performing control flow analysis of higher-order functional programs. Despite the long held belief that inclusion-based flow analysis could not surpass the "cubic bottleneck," we apply known set compression techniques to obtain an algorithm that runs in time O(n^3/log n) on a unit cost random-access memory model machine. Moreover, we refine the initial flow analysis into two more precise analyses incorporating notions of reachability. We give subcubic algorithms for these more precise analyses and relate them to an existing analysis from...
Use of Data Mining and Computer Vision Algorithms in Studies of Magnetic Reconnection
Sipes, T.; Karimabadi, H.; Gosling, J. T.; Phan, T.; Yilmaz, A.
2011-12-01
Knowledge discovery from large data sets collected from spacecraft measurements as well as petascale simulations remains a major obstacle to scientific progress. For example, our recent 3D kinetic simulation of reconnection included over 3 trillion particles and generated well over 200 TB of data. Similarly, identification of interesting features in spacecraft data can be quite time consuming and by definition focuses on simpler features, as the human eye has limited capability in deciphering complex patterns and dependencies. Machine learning algorithms offer a solution to this problem. Here we present our latest results on the use of machine learning algorithms in the analysis of (i) 2D and 3D kinetic simulations of reconnection and (ii) reconnection events in the solar wind using Wind data. The results are quite promising and point to the power of these techniques to find hidden relationships. For example, identification of flux ropes in the solar wind remains quite controversial since, unlike the magnetopause where one can search for bipolar signatures of the magnetic field component in the boundary normal coordinates, there is no generally agreed-upon method of identifying them. As a preparation for this, we show results of our technique applied to time series generated from simulations of flux ropes. We find that the algorithms were not only able to detect flux ropes in the simulation data very accurately, but they were also able to distinguish crossings across a flux rope from those along the axis of a flux rope. In the case of spacecraft data, our models were able to detect crossings of reconnection exhausts and distinguish them from non-exhausts. Finally, we use machine learning algorithms to compare the crossings of reconnection exhausts from simulations and spacecraft observations in the solar wind.
Borrero, Ernesto E; Escobedo, Fernando A
2008-07-14
In this work, we present an adaptive algorithm to optimize the phase space sampling for simulations of rare events in complex systems via forward flux sampling (FFS) schemes. In FFS, interfaces are used to partition the phase space along an order parameter lambda connecting the initial and final regions of interest. Since the kinetic "bottleneck" regions along the order parameter are not usually known beforehand, an adaptive procedure is used that first finds these regions by estimating the rate constants associated with reaching subsequent interfaces; thereafter, the FFS simulation is reset to concentrate the sampling on those bottlenecks. The approach can optimize for either the number and position of the interfaces (i.e., optimized lambda phase staging) or the number M of fired trial runs per interface (i.e., the {M(i)} set) to minimize the statistical error in the rate constant estimation per simulation period. For example, the optimization of the lambda staging leads to a net constant flux of partial trajectories between interfaces and hence a constant flux of connected paths throughout the region between the two end states. The method is demonstrated for several test systems, including the folding of a lattice protein. It is shown that the proposed approach leads to an optimized lambda staging and {M(i)} set which increase the computational efficiency of the sampling algorithm.
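The rate-constant estimate that the adaptive procedure optimizes has the standard FFS form k = Φ₀ · Π_i P(λ_{i+1} | λ_i): the flux through the first interface times the product of stage-to-stage success probabilities. A toy sketch on a one-dimensional walker (the dynamics, interface positions, and basin boundary are all illustrative, not the paper's systems):

```python
import random

def ffs_rate(interfaces, step=0.1, bias=-0.05, noise=0.25,
             n_trials=2000, floor=-0.5, seed=7):
    """Toy forward flux sampling for a 1D overdamped walker drifting
    back toward its basin: estimate k as the fraction of fresh starts
    reaching the first interface times the product of conditional
    crossing probabilities between successive interfaces."""
    rng = random.Random(seed)

    def reaches(x, target):
        # propagate until the walker crosses `target` (success) or
        # returns to the basin below `floor` (failure)
        while floor < x < target:
            x += bias * step + noise * (step ** 0.5) * rng.gauss(0.0, 1.0)
        return x >= target

    # flux out of the basin through the first interface (crude stand-in
    # for a proper flux measurement)
    k = sum(reaches(0.0, interfaces[0]) for _ in range(n_trials)) / n_trials
    # staged conditional probabilities P(lambda_{i+1} | lambda_i)
    cur = interfaces[0]
    for nxt in interfaces[1:]:
        k *= sum(reaches(cur, nxt) for _ in range(n_trials)) / n_trials
        cur = nxt
    return k

k_near = ffs_rate([0.2, 0.4])
k_far = ffs_rate([0.2, 0.4, 0.6, 0.8])
```

Because every extra stage multiplies in a probability below one, pushing the final interface further out lowers the estimated rate; the paper's adaptive scheme places the interfaces (and the trial counts per interface) to minimize the statistical error of exactly this product.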
Fast neutron flux analyzer with real-time digital pulse shape discrimination
Energy Technology Data Exchange (ETDEWEB)
Ivanova, A.A., E-mail: a.a.ivanova@inp.nsk.su [Budker Institute of Nuclear Physics SB RAS, 630090 Novosibirsk (Russian Federation); Zubarev, P.V. [Budker Institute of Nuclear Physics SB RAS, 630090 Novosibirsk (Russian Federation); Novosibirsk State Technical University, 630092 Novosibirsk (Russian Federation); Ivanenko, S.V. [Budker Institute of Nuclear Physics SB RAS, 630090 Novosibirsk (Russian Federation); Khilchenko, A.D. [Budker Institute of Nuclear Physics SB RAS, 630090 Novosibirsk (Russian Federation); Novosibirsk State Technical University, 630092 Novosibirsk (Russian Federation); Kotelnikov, A.I. [Budker Institute of Nuclear Physics SB RAS, 630090 Novosibirsk (Russian Federation); Polosatkin, S.V. [Budker Institute of Nuclear Physics SB RAS, 630090 Novosibirsk (Russian Federation); Novosibirsk State Technical University, 630092 Novosibirsk (Russian Federation); Novosibirsk State University, 630090 Novosibirsk (Russian Federation); Puryga, E.A.; Shvyrev, V.G. [Budker Institute of Nuclear Physics SB RAS, 630090 Novosibirsk (Russian Federation); Novosibirsk State Technical University, 630092 Novosibirsk (Russian Federation); Sulyaev, Yu.S. [Budker Institute of Nuclear Physics SB RAS, 630090 Novosibirsk (Russian Federation); Novosibirsk State University, 630090 Novosibirsk (Russian Federation)
2016-08-11
Investigation of subthermonuclear plasma confinement and heating in magnetic fusion devices such as GOL-3 and GDT at the Budker Institute (Novosibirsk, Russia) requires sophisticated equipment for neutron and gamma diagnostics, and data acquisition systems upgraded with online data processing. Measurement of fast neutron flux with stilbene scintillation detectors raises the problem of discriminating neutrons (n) from background cosmic particles (muons) and neutron-induced gamma rays (γ). This paper describes a fast neutron flux analyzer with a real-time digital pulse-shape discrimination (DPSD) algorithm implemented in an FPGA for the GOL-3 and GDT devices. The analyzer was tested and calibrated with 137Cs and 252Cf radiation sources, and the Figures of Merit (FOM) calculated for different energy cuts are presented. - Highlights: • Electronic equipment for measurement of fast neutron flux with a stilbene scintillator is presented. • An FPGA-implemented digital pulse-shape discrimination algorithm based on the charge comparison method is shown. • Calibration of the analyzer was carried out with 137Cs and 252Cf. • Figure of Merit (FOM) values for energy cuts from 1/8 Cs to 2 Cs range from 1.264 to 2.34, respectively.
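The charge-comparison method named in the highlights, and the Figure of Merit used to grade it, can be sketched on synthetic data. The pulse shapes, gate length, and noise level below are invented for the example and are not the GOL-3/GDT hardware parameters; the FOM formula (peak separation over the sum of the FWHMs, with a Gaussian assumption) is the standard one.

```python
import math
import random
import statistics

random.seed(0)

def pulse(slow_fraction, n_samples=64, tau_fast=3.0, tau_slow=25.0):
    """Synthetic scintillation pulse: fast + slow decay components + noise."""
    return [(1 - slow_fraction) * math.exp(-t / tau_fast)
            + slow_fraction * math.exp(-t / tau_slow)
            + random.gauss(0.0, 0.005) for t in range(n_samples)]

def psd_parameter(samples, tail_start=10):
    """Charge comparison: ratio of tail charge to total charge."""
    return sum(samples[tail_start:]) / sum(samples)

# Neutron pulses excite relatively more of the slow component than gamma pulses.
gammas   = [psd_parameter(pulse(slow_fraction=0.05)) for _ in range(500)]
neutrons = [psd_parameter(pulse(slow_fraction=0.20)) for _ in range(500)]

def fom(a, b):
    """Figure of Merit: peak separation over the sum of the FWHMs (Gaussian assumption)."""
    fwhm = lambda xs: 2.355 * statistics.stdev(xs)
    return abs(statistics.mean(a) - statistics.mean(b)) / (fwhm(a) + fwhm(b))

print(f"FOM = {fom(gammas, neutrons):.2f}")
```

In the real analyzer the two charge integrals are accumulated in the FPGA as the waveform streams in, so the n/γ decision is available in real time.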
#FluxFlow: Visual Analysis of Anomalous Information Spreading on Social Media.
Zhao, Jian; Cao, Nan; Wen, Zhen; Song, Yale; Lin, Yu-Ru; Collins, Christopher
2014-12-01
We present FluxFlow, an interactive visual analysis system for revealing and analyzing anomalous information spreading in social media. Every day, millions of messages are created, commented on, and shared by people on social media websites such as Twitter and Facebook. This provides valuable data for researchers and practitioners in many application domains, such as marketing, to inform decision-making. Distilling valuable social signals from the huge crowd's messages, however, is challenging due to the heterogeneous and dynamic crowd behaviors. The challenge lies in discerning anomalous information behaviors, such as the spreading of rumors or misinformation, from more conventional patterns, such as popular topics and newsworthy events, in a timely fashion. FluxFlow incorporates advanced machine learning algorithms to detect anomalies, and offers a set of novel visualization designs for presenting the detected threads for deeper analysis. We evaluated FluxFlow with real datasets containing Twitter feeds captured during significant events such as Hurricane Sandy. Through quantitative measurements of the algorithmic performance and qualitative interviews with domain experts, we show that the back-end anomaly detection model is effective in identifying anomalous retweeting threads, and that the front-end interactive visualizations are intuitive and useful for analysts to discover insights in data and comprehend the underlying analytical model.
Simplified Fuzzy Control for Flux-Weakening Speed Control of IPMSM Drive
Directory of Open Access Journals (Sweden)
M. J. Hossain
2011-01-01
Full Text Available This paper presents a simplified fuzzy logic-based speed control scheme for an interior permanent magnet synchronous motor (IPMSM) above the base speed using a flux-weakening method. In this work, nonlinear expressions for the d-axis and q-axis currents of the IPMSM have been derived and incorporated in the control algorithm in order to implement the fuzzy-based flux-weakening strategy and operate the motor above the base speed. The fundamentals of fuzzy logic algorithms as related to motor control applications are also illustrated. A simplified fuzzy logic speed controller (FLC) for the IPMSM drive has been designed and incorporated in the drive system to maintain high performance standards. The efficacy of the proposed simplified FLC-based IPMSM drive is verified by simulation under various dynamic operating conditions. The simplified FLC is found to be robust and efficient. Laboratory test results for a proportional-integral (PI) controller-based IPMSM drive have been compared with the simulated results of the fuzzy controller-based flux-weakening IPMSM drive system.
About Merging Threshold and Critical Flux Concepts into a Single One: The Boundary Flux
Directory of Open Access Journals (Sweden)
Marco Stoller
2014-01-01
Full Text Available In the last decades much effort has been put into understanding fouling phenomena on membranes. One successful approach to describing fouling on membranes is the critical flux theory. The possibility of measuring a maximum permeate flux for a given system without incurring fouling was a breakthrough in membrane process design. However, in many cases critical fluxes were found to be very low, too low for the process to be economically feasible. Knowledge of the critical flux value must therefore be considered a good starting point for process design. In recent years a new concept was introduced, the threshold flux, which defines the maximum permeate flow rate characterized by a low, constant fouling rate regime. This concept, more than the critical flux, is a practical tool for membrane process designers. In this paper a brief review of critical and threshold flux is reported and analyzed. Since the two concepts share many common aspects, they are merged into a new concept, called the boundary flux, which is validated by the analysis of data previously collected by the authors during the treatment of olive vegetation wastewater by ultrafiltration and nanofiltration membranes.
A Parallel, Finite-Volume Algorithm for Large-Eddy Simulation of Turbulent Flows
Bui, Trong T.
1999-01-01
A parallel, finite-volume algorithm has been developed for large-eddy simulation (LES) of compressible turbulent flows. This algorithm includes piecewise linear least-square reconstruction, trilinear finite-element interpolation, Roe flux-difference splitting, and second-order MacCormack time marching. Parallel implementation is done using the message-passing programming model. In this paper, the numerical algorithm is described. To validate the numerical method for turbulence simulation, LES of fully developed turbulent flow in a square duct is performed for a Reynolds number of 320 based on the average friction velocity and the hydraulic diameter of the duct. Direct numerical simulation (DNS) results are available for this test case, and the accuracy of this algorithm for turbulence simulations can be ascertained by comparing the LES solutions with the DNS results. The effects of grid resolution, upwind numerical dissipation, and subgrid-scale dissipation on the accuracy of the LES are examined. Comparison with DNS results shows that the standard Roe flux-difference splitting dissipation adversely affects the accuracy of the turbulence simulation. For accurate turbulence simulations, only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed.
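The dissipation scaling in the last sentence is easy to make concrete for a scalar 1D conservation law as a stand-in (Burgers' flux f(u) = u²/2), rather than the authors' 3D compressible scheme: the Roe numerical flux with its upwind dissipation term multiplied by a factor eps, where eps = 1.0 is standard Roe and eps ~ 0.03-0.05 is the range the paper found adequate for LES.

```python
def roe_flux(u_l, u_r, eps=0.05):
    """Scalar Roe flux for Burgers' equation with scaled upwind dissipation."""
    f_l, f_r = 0.5 * u_l * u_l, 0.5 * u_r * u_r
    a = 0.5 * (u_l + u_r)                       # Roe-averaged wave speed for Burgers
    # Central average plus eps-scaled flux-difference-splitting dissipation:
    return 0.5 * (f_l + f_r) - 0.5 * eps * abs(a) * (u_r - u_l)

full = roe_flux(1.0, 0.2, eps=1.0)   # standard Roe dissipation
les  = roe_flux(1.0, 0.2)            # 5% of it, per the paper's finding
print(full, les)
```

With eps reduced, the numerical flux sits closer to the central average, leaving dissipation of the resolved turbulent scales to the subgrid-scale model instead of the scheme itself.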
Eddy covariance based methane flux in Sundarbans mangroves, India
Indian Academy of Sciences (India)
Eddy covariance based methane flux in Sundarbans mangroves, India ... Eddy covariance; mangrove forests; methane flux; Sundarbans. ... In order to quantify the methane flux in mangroves, an eddy covariance flux tower was recently erected in the largest unpolluted and undisturbed mangrove ecosystem in Sundarbans ...
Fluxes of nitrogen in Chaliyar River Estuary, India
Digital Repository Service at National Institute of Oceanography (India)
Xavier, J.K.; Joseph, T.; Paimpillii, J.S.
the instantaneous nutrient fluxes. The net fluxes are the algebraic sums of the instantaneous fluxes over the tidal cycle sampled divided by the number of observations in the tidal cycle. Net fluxes for all the stations with its cross sectional averages for each...
Generating energy dependent neutron flux maps for effective ...
African Journals Online (AJOL)
For activation analysis and irradiation scheme of miniature neutron source reactor, designers or engineers usually require information on thermal neutron flux levels and other energy group flux levels (such as fast, resonance and epithermal). A methodology for readily generating such flux maps and flux profiles for any ...
Non-geometric fluxes and mixed-symmetry potentials
Bergshoeff, E.A.; Penas, V.A.; Riccioni, F.; Risoli, S.
2015-01-01
We discuss the relation between generalised fluxes and mixed-symmetry potentials. We refer to the fluxes that cannot be described even locally in the framework of supergravity as ‘non-geometric’. We first consider the NS fluxes, and point out that the non-geometric R flux is dual to a mixed-symmetry
Neutron-diffraction investigations of flux-lines in superconductors
Energy Technology Data Exchange (ETDEWEB)
Forgan, E.M. [Birmingham Univ. (United Kingdom); Lee, S.L. [Saint Andrews Univ. (United Kingdom); McKPaul, D. [Warwick Univ., Coventry (United Kingdom); Mook, H.A. [Oak Ridge National Lab., TN (United States); Cubitt, R. [Institut Max von Laue - Paul Langevin (ILL), 38 - Grenoble (France)
1997-04-01
SANS has proved an extremely useful tool for investigating flux-line structures within the bulk of superconductors. With high-T{sub c} materials, the scattered intensities are weak, but careful measurements are giving important new information about flux lattices, flux pinning and flux-lattice melting. (author). 10 refs.
Standardized Automated CO2/H2O Flux Systems for Individual Research Groups and Flux Networks
Burba, George; Begashaw, Israel; Fratini, Gerardo; Griessbaum, Frank; Kathilankal, James; Xu, Liukang; Franz, Daniela; Joseph, Everette; Larmanou, Eric; Miller, Scott; Papale, Dario; Sabbatini, Simone; Sachs, Torsten; Sakai, Ricardo; McDermitt, Dayle
2017-04-01
In recent years, spatial and temporal flux data coverage has improved significantly, on multiple scales from a single station to continental networks, due to standardization, automation, and management of data collection, and better handling of the extensive amounts of generated data. With more stations and networks, larger data flows from each station, and smaller operating budgets, modern tools are required to handle the entire process effectively and efficiently. Such tools are needed to maximize the time dedicated to authoring publications and answering research questions, and to minimize the time and expense spent on data acquisition, processing, and quality control. Thus, these tools should produce standardized verifiable datasets and provide a way to cross-share the standardized data with external collaborators to leverage available funding and promote data analyses and publications. LI-COR gas analyzers are widely used in past and present flux networks such as AmeriFlux, ICOS, AsiaFlux, OzFlux, NEON, CarboEurope, and FluxNet-Canada. These analyzers have gone through several major improvements over the past 30 years. In 2016, a three-pronged development was completed to create an automated flux system which can accept multiple sonic anemometer and datalogger models, compute final and complete fluxes on-site, merge final fluxes with supporting weather, soil, and radiation data, monitor station outputs and send automated alerts to researchers, and allow secure sharing and cross-sharing of station and data access. Two types of these research systems were developed: open-path (LI-7500RS) and enclosed-path (LI-7200RS). Key developments included: • Improvement of gas analyzer performance • Standardization and automation of final flux calculations on-site, and in real time • Seamless integration with the latest site management and data sharing tools In terms of gas analyzer performance, the RS analyzers are based on the established LI-7500/A and LI-7200
Median filtering algorithms for multichannel detectors
Hovhannisyan, A.; Chilingarian, A.
2011-05-01
Particle detectors of worldwide networks continuously measure various secondary particle fluxes incident on the Earth's surface. At the Aragats Space Environmental Center (ASEC), the data of 12 cosmic ray particle detectors with a total of ~280 measuring channels (count rates of electron, muon and neutron channels) are sent each minute via wireless bridges to a MySQL database. These time series are used for different tasks of off-line physical analysis and for online forewarning services. Long time series usually contain several types of errors (gaps due to failures of high- or low-voltage power supplies, spurious spikes due to radio interference, abrupt changes of the mean values of several channels, and/or slow trends in mean values due to aging of electronic components, etc.). To avoid erroneous physical inference and false alarms from alerting systems, we introduce offline and online filters to "purify" multiple time series. In the present paper we classify possible errors in time series and introduce median filtering algorithms for online and off-line "purification" of multiple time series.
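A minimal sketch of the kind of median filter described: a point is replaced by the window median when it deviates from that median by more than a few robust standard deviations (MAD-based). The window length, threshold, and count-rate numbers are arbitrary illustration choices, not ASEC parameters.

```python
import statistics

def median_despike(series, window=5, threshold=3.0):
    """Replace samples that deviate strongly from the local window median."""
    half = window // 2
    out = list(series)
    for i in range(half, len(series) - half):
        win = series[i - half:i + half + 1]
        med = statistics.median(win)
        mad = statistics.median(abs(x - med) for x in win)
        scale = 1.4826 * mad or 1.0          # robust sigma estimate; guard zero MAD
        if abs(series[i] - med) > threshold * scale:
            out[i] = med                     # replace spike by the local median
    return out

counts = [100, 101, 99, 100, 500, 100, 102, 98, 100]  # one radio-interference spike
clean = median_despike(counts)
print(clean)
```

The same window logic runs online by applying it to the trailing `window` samples as each minute of data arrives, at the cost of a `half`-sample reporting delay.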
Characterization of ion fluxes and heat fluxes for PMI relevant conditions on Proto-MPEX
Beers, Clyde; Shaw, Guinevere; Biewer, Theodore; Rapp, Juergen
2016-10-01
Plasma characterization, in particular the particle flux and the electron and ion temperature distributions nearest to an exposed target, is critical to quantifying Plasma Surface Interaction (PSI). In the Proto-Material Plasma Exposure eXperiment (Proto-MPEX), the ion fluxes and heat fluxes are derived from double Langmuir probes (DLP) and Thomson scattering in front of the target, assuming Bohm conditions at the sheath entrance. Power fluxes derived from ne and Te measurements are compared to heat fluxes measured with IR thermography. The comparison allows conclusions on the sheath heat transmission coefficient to be drawn experimentally. Different experimental conditions (low and high density plasmas, 0.5-6 x 10^19 m^-3) with different magnetic configurations are compared. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
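The quantities being compared can be sketched with the standard Bohm-sheath relations, Γ = n_e·c_s and q = γ_sh·Γ·T_e. The density, temperatures, and transmission coefficient below are illustrative values in the quoted range, not Proto-MPEX measurements; resolving γ_sh experimentally is exactly the point of the IR-thermography comparison.

```python
import math

E_CHARGE = 1.602e-19      # J per eV
M_D = 3.344e-27           # deuterium ion mass, kg

def bohm_flux(n_e, t_e_ev, t_i_ev, m_i=M_D):
    """Ion particle flux (m^-2 s^-1) at the sheath entrance, Gamma = n_e * c_s."""
    c_s = math.sqrt(E_CHARGE * (t_e_ev + t_i_ev) / m_i)   # ion sound speed
    return n_e * c_s

def heat_flux(n_e, t_e_ev, t_i_ev, gamma_sh=7.0):
    """Sheath heat flux (W m^-2) for a transmission coefficient gamma_sh."""
    return gamma_sh * bohm_flux(n_e, t_e_ev, t_i_ev) * t_e_ev * E_CHARGE

# Illustrative numbers: n_e in the quoted 0.5-6e19 m^-3 range, Te ~ Ti ~ 5 eV
q = heat_flux(n_e=2e19, t_e_ev=5.0, t_i_ev=5.0)
print(f"heat flux ~ {q / 1e6:.2f} MW/m^2")
```

Comparing this q against the IR-measured surface heat flux, with Γ and T_e fixed by the probe and Thomson data, backs out γ_sh.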
Automated flux chamber for investigating gas flux at water-air interfaces.
Duc, Nguyen Thanh; Silverstein, Samuel; Lundmark, Lars; Reyier, Henrik; Crill, Patrick; Bastviken, David
2013-01-15
Aquatic ecosystems are major sources of greenhouse gases (GHG). Representative measurements of GHG fluxes from aquatic ecosystems to the atmosphere are vital for a quantitative understanding of the relationships between biogeochemistry and climate. Fluxes exhibit high temporal variability at diel or longer scales, which is not captured by traditional short-term deployments (often on the order of 30 min) of floating flux chambers. High temporal frequency measurements are necessary but also extremely labor intensive if manual flux chamber methods are used. Therefore, we designed an inexpensive and easily mobile automated flux chamber (AFC) for extended deployments. The AFC was designed to measure in situ accumulation of gas in the chamber and also to collect gas samples in an array of sample bottles for subsequent analysis in the laboratory, providing two independent ways of measuring CH4 concentration. We here present the AFC design and function together with data from initial laboratory tests and from a field deployment.
Freezing E3-brane instantons with fluxes
Energy Technology Data Exchange (ETDEWEB)
Bianchi, M.; Martucci, L. [Dipartimento di Fisica, Universita di Roma Tor Vergata (Italy); I.N.F.N., Sezione di Roma Tor Vergata (Italy); Collinucci, A. [Theory Group, Physics Department, CERN, Geneva (Switzerland); Physique Theorique et Mathematique Universite Libre de Bruxelles (Belgium)
2012-07-15
E3-instantons that generate non-perturbative superpotentials in IIB N = 1 compactifications have a much more frequent occurrence than currently believed. Worldvolume fluxes will typically lift the E3-brane geometric moduli and their fermionic superpartners, leaving only the two required universal fermionic zero-modes. We consistently incorporate SL(2,Z) monodromies and worldvolume fluxes in the effective theory of the E3-brane fermions and study the resulting zero-mode spectrum, highlighting the relation between F-theory and perturbative IIB results. This leads us to a IIB derivation of the index for generation of superpotential terms, which reproduces and generalizes available results. Furthermore, we show how E3 worldvolume fluxes can be explicitly constructed in a one-modulus compactification, such that the instanton has exactly two fermionic zero-modes. This construction is readily applicable to numerous scenarios. (Copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Heat-Flux Gage thermophosphor system
Energy Technology Data Exchange (ETDEWEB)
Tobin, K.W.
1991-08-01
This document describes the installation, hardware requirements, and application of the Heat-Flux Gage (Version 1.0) software package developed by the Oak Ridge National Laboratory, Applied Technology Division. The developed software is a single component of a thermographic-phosphor-based temperature and heat-flux measurement system. The heat-flux transducer was developed by EG&G Energy Measurements Systems and consists of a 1- by 1-in. polymethylpentene sheet coated on the front and back with a repeating thermographic phosphor pattern. The phosphor chosen for this application is gadolinium oxysulphide doped with terbium. This compound has a sensitive temperature response from 10 to 65.6 °C (50-150 °F) for the 415- and 490-nm spectral emission lines. 3 refs., 17 figs.
Cosmic ray flux anisotropies caused by astrospheres
Scherer, K.; Strauss, R. D.; Ferreira, S. E. S.; Fichtner, H.
2016-09-01
Huge astrospheres or stellar wind bubbles influence the propagation of cosmic rays at energies up to the TeV range and can act as small-scale sinks decreasing the cosmic ray flux. We model such a sink (in 2D) by a sphere of radius 10 pc embedded within a sphere of radius 1 kpc. The cosmic ray flux is calculated by means of backward stochastic differential equations from an observer located at r0 to the outer boundary. It turns out that such small-scale sinks can influence the cosmic ray flux at the observer's location by a few permille (i.e. a few 0.1%), which is in the range of the observations by IceCube, Milagro and other large-area telescopes.
From Hubble's NGSL to Absolute Fluxes
Heap, Sara R.; Lindler, Don
2012-01-01
Hubble's Next Generation Spectral Library (NGSL) consists of R ~ 1000 spectra of 374 stars of assorted temperature, gravity, and metallicity. Each spectrum covers the wavelength range 0.18-1.00 microns. The library can be viewed and/or downloaded from the website, http://archive.stsci.edu/prepds/stisngsll. Stars in the NGSL are now being used as absolute flux standards at ground-based observatories. However, the uncertainty in the absolute flux is about 2%, which does not meet the requirements of dark-energy surveys. We are therefore developing an observing procedure that should yield fluxes with uncertainties less than 1% and will take part in an HST proposal to observe up to 15 stars using this new procedure.
The FLUKA atmospheric neutrino flux calculation
Battistoni, G.; Montaruli, T.; Sala, P.R.
2003-01-01
The 3-dimensional (3-D) calculation of the atmospheric neutrino flux by means of the FLUKA Monte Carlo model is described here in full detail, starting from the latest data on primary cosmic ray spectra. The importance of a 3-D calculation and of its consequences has already been discussed in a previous paper. Here instead the focus is on the absolute flux. We stress the relevant aspects of the hadronic interaction model of FLUKA in the atmospheric neutrino flux calculation. This model is constructed and maintained so as to provide a high degree of accuracy in the description of particle production. The accuracy achieved in the comparison with data from accelerators, cross-checked with data on particle production in the atmosphere, certifies the reliability of shower calculations in the atmosphere. The results presented here can already be used for analysis by current experiments on atmospheric neutrinos. However, they represent an intermediate step towards a final release, since this calculation does not yet include the...
MHD energy fluxes for late type dwarfs
Rosner, R.; Musielak, Z. E.
1987-01-01
The efficiency of MHD wave generation by turbulent motions in stratified stellar atmospheres with embedded uniform magnetic fields is calculated. In contradiction with previous results, it is shown that there is no significant increase in the efficiency of wave generation because of the presence of magnetic fields, at least within the theory's limits of applicability. It is shown that MHD energy fluxes for late-type stars are less than those obtained for acoustic waves in a magnetic-field-free atmosphere, and do not vary enough for a given spectral type in order to explain observed UV and X-ray fluxes. Thus, the results show that MHD energy fluxes obtained if stellar surface magnetic fields are uniform cannot explain the observed stellar coronal emissions.
Color magnetic flux tubes in dense QCD
International Nuclear Information System (INIS)
Eto, Minoru; Nitta, Muneto
2009-01-01
QCD is expected to be in the color-flavor locking phase in high baryon density, which exhibits color superconductivity. The most fundamental topological objects in the color superconductor are non-Abelian vortices which are topologically stable color magnetic flux tubes. We present numerical solutions of the color magnetic flux tube for diverse choices of the coupling constants based on the Ginzburg-Landau Lagrangian. We also analytically study its asymptotic profiles and find that they are different from the case of usual superconductors. We propose the width of color magnetic fluxes and find that it is larger than naive expectation of the Compton wavelength of the massive gluon when the gluon mass is larger than the scalar mass.
Real Time Flux Control in PM Motors
Energy Technology Data Exchange (ETDEWEB)
Otaduy, P.J.
2005-09-27
Significant research at the Oak Ridge National Laboratory (ORNL) Power Electronics and Electric Machinery Research Center (PEEMRC) is being conducted to develop ways to increase (1) torque, (2) speed range, and (3) efficiency of traction electric motors for hybrid electric vehicles (HEV) within existing current and voltage bounds. Current is limited by the inverter semiconductor devices' capability and voltage is limited by the stator wire insulation's ability to withstand the maximum back-electromotive force (emf), which occurs at the upper end of the speed range. One research track has been to explore ways to control the path and magnitude of magnetic flux while the motor is operating. The phrase, real time flux control (RTFC), refers to this mode of operation in which system parameters are changed while the motor is operating to improve its performance and speed range. RTFC has potential to meet an increased torque demand by introducing additional flux through the main air gap from an external source. It can augment the speed range by diverting flux away from the main air gap to reduce back-emf at high speeds. Conventional RTFC technology is known as vector control [1]. Vector control decomposes the stator current into two components; one that produces torque and a second that opposes (weakens) the magnetic field generated by the rotor, thereby requiring more overall stator current and reducing the efficiency. Efficiency can be improved by selecting a RTFC method that reduces the back-emf without increasing the average current. This favors methods that use pulse currents or very low currents to achieve field weakening. Foremost in ORNL's effort to develop flux control is the work of J. S. Hsu. Early research [2,3] introduced direct control of air-gap flux in permanent magnet (PM) machines and demonstrated it with a flux-controlled generator. The configuration eliminates the problem of demagnetization because it diverts all the flux from the
Open string wavefunctions in flux compactifications
Cámara, Pablo G
2009-01-01
We consider compactifications of type I supergravity on manifolds with SU(3) structure, in the presence of RR fluxes and magnetized D9-branes, and analyze the generalized Dirac and Laplace-Beltrami operators associated to the D9-brane worldvolume fields. These compactifications are T-dual to standard type IIB toroidal orientifolds with NSNS and RR 3-form fluxes and D3/D7 branes. By using techniques of representation theory and harmonic analysis, the spectrum of open string wavefunctions can be computed for Lie groups and their quotients, as we illustrate with explicit twisted tori examples. We find a correspondence between irreducible unitary representations of the Kaloper-Myers algebra and families of Kaluza-Klein excitations. We perform the computation of 2- and 3-point couplings for matter fields in the above flux compactifications, and compare our results with those of 4d effective supergravity.
U-dual fluxes and Generalized Geometry
Aldazabal, G; Camara, Pablo G; Grana, M
2010-01-01
We perform a systematic analysis of generic string flux compactifications, making use of Exceptional Generalized Geometry (EGG) as an organizing principle. In particular, we establish the precise map between fluxes, gaugings of maximal 4d supergravity and EGG, identifying the complete set of gaugings that admit an uplift to 10d heterotic or type IIB supergravity backgrounds. Our results reveal a rich structure, involving new deformations of 10d supergravity backgrounds, such as the RR counterparts of the β-deformation. These new deformations are expected to provide the natural extension of the β-deformation to full-fledged F-theory backgrounds. Our analysis also provides some clues on the 10d origin of some of the less understood gaugings of 4d supergravity. Finally, we derive the explicit expression for the effective superpotential in arbitrary N = 1 heterotic or type IIB orientifold compactifications, for all the allowed fluxes.
Hamiltonian boundary term and quasilocal energy flux
International Nuclear Information System (INIS)
Chen, C.-M.; Nester, James M.; Tung, R.-S.
2005-01-01
The Hamiltonian for a gravitating region includes a boundary term which determines not only the quasilocal values but also, via the boundary variation principle, the boundary conditions. Using our covariant Hamiltonian formalism, we found four particular quasilocal energy-momentum boundary term expressions; each corresponds to a physically distinct and geometrically clear boundary condition. Here, from a consideration of the asymptotics, we show how a fundamental Hamiltonian identity naturally leads to the associated quasilocal energy flux expressions. For electromagnetism one of the four is distinguished: the only one which is gauge invariant; it gives the familiar energy density and Poynting flux. For Einstein's general relativity two different boundary condition choices correspond to quasilocal expressions which asymptotically give the ADM energy, the Trautman-Bondi energy and, moreover, an associated energy flux (both outgoing and incoming). Again there is a distinguished expression: the one which is covariant
Evaluating Energy Flux in Vibrofluidized Granular Bed
Directory of Open Access Journals (Sweden)
N. A. Sheikh
2013-01-01
Full Text Available Granular flows require a sustained input of energy for fluidization. The level of fluidization depends on the amount of heat flux provided to the flow. In general, the dissipation of the grains upon interaction balances the heat input, and the resultant flow patterns can be described using hydrodynamic models. However, the predicted heat flux of the cell increases with packing fraction. Here, a comparison is made between the proposed theoretical models and MD simulation data. It is observed that the variation of the packing fraction in the granular cell influences the heat flux at the base. For elastic grain-base interaction, the predictions vary appreciably compared to the MD simulations, suggesting the need to accurately model the velocity distribution of the grains for averaging.
Warped Kähler potentials and fluxes
International Nuclear Information System (INIS)
Martucci, Luca
2017-01-01
The four-dimensional effective theory for type IIB warped flux compactifications proposed in https://www.doi.org/10.1007/JHEP03(2015)067 is completed by taking into account the backreaction of the Kähler moduli on the three-form fluxes. The only required modification consists in a flux-dependent contribution to the chiral fields parametrising the Kähler moduli. The resulting supersymmetric effective theory satisfies the no-scale condition and consistently combines previous partial results present in the literature. Similar results hold for M-theory warped compactifications on Calabi-Yau fourfolds, whose effective field theory and Kähler potential are also discussed.
Type IIB flux compactifications on twistor bundles
Energy Technology Data Exchange (ETDEWEB)
Imaanpur, Ali, E-mail: aimaanpu@modares.ac.ir
2014-02-05
We construct a U(1) bundle over N(1,1), usually considered as an SO(3) bundle over CP^2, and show that type IIB supergravity can be consistently compactified over it. With the five-form flux turned on, there is a solution for which the metric becomes Einstein. We further turn on 3-form fluxes and show that there is a one-parameter family of solutions. In particular, there is a limiting solution of large 3-form fluxes for which two U(1) fiber directions of the metric shrink to zero size. We also discuss compactifications over N(1,1) to AdS_3. All solutions turn out to be non-supersymmetric.
Enumerating Flux Vacua With Enhanced Symmetries
Energy Technology Data Exchange (ETDEWEB)
DeWolfe, O.
2004-11-12
We study properties of flux vacua in type IIB string theory in several simple but illustrative models. We initiate the study of the relative frequencies of vacua with vanishing superpotential W = 0 and with certain discrete symmetries. For the models we investigate we also compute the overall rate of growth of the number of vacua as a function of the D3-brane charge associated to the fluxes, and the distribution of vacua on the moduli space. The latter two questions can also be addressed by the statistical theory developed by Ashok, Denef and Douglas, and our results are in good agreement with their predictions. Analysis of the first two questions requires methods which are more number-theoretic in nature. We develop some elementary techniques of this type, which are based on arithmetic properties of the periods of the compactification geometry at the points in moduli space where the flux vacua are located.
Atmosphere–Surface Fluxes of CO2 using Spectral Techniques
DEFF Research Database (Denmark)
Sørensen, Lise Lotte; Larsen, Søren Ejling
2010-01-01
Different flux estimation techniques are compared here in order to evaluate air–sea exchange measurement methods used on moving platforms. Techniques using power spectra and cospectra to estimate fluxes are presented and applied to measurements of wind speed and sensible heat, latent heat and CO2 fluxes. Momentum and scalar fluxes are calculated from the dissipation technique utilizing the inertial subrange of the power spectra and from estimation of the cospectral amplitude, and both flux estimates are compared to covariance-derived fluxes. It is shown how even data having a poor signal-to-noise ratio can be used for flux estimations.
Madugundu, Rangaswamy; Al-Gaadi, Khalid A.; Tola, ElKamil; Hassaballa, Abdalhaleem A.; Patil, Virupakshagouda C.
2017-12-01
Accurate estimation of evapotranspiration (ET) is essential for hydrological modeling and efficient crop water management in hyper-arid climates. In this study, we applied the METRIC algorithm on Landsat-8 images, acquired from June to October 2013, for the mapping of ET of a 50 ha center-pivot irrigated alfalfa field in the eastern region of Saudi Arabia. The METRIC-estimated energy balance components and ET were evaluated against the data provided by an eddy covariance (EC) flux tower installed in the field. Results indicated that the METRIC algorithm provided accurate ET estimates over the study area, with RMSE values of 0.13 and 4.15 mm d-1. The METRIC algorithm was observed to perform better in full canopy conditions compared to partial canopy conditions. On average, the METRIC algorithm overestimated the hourly ET by 6.6 % in comparison to the EC measurements; however, the daily ET was underestimated by 4.2 %.
Adaptive Maneuvering Target Tracking Algorithm
Directory of Open Access Journals (Sweden)
Chunling Wu
2014-07-01
Full Text Available Based on the current statistical model, a new adaptive maneuvering target tracking algorithm, CS-MSTF, is presented. The new algorithm keeps the high tracking precision that the current statistical (CS) model and the strong tracking filter (STF) offer in tracking a maneuvering target, and makes two modifications. First, STF has the defect that it achieves excellent performance in the maneuvering segment at the cost of precision in the non-maneuvering segment, so the new algorithm modifies the prediction error covariance matrix and the fading factor to improve the tracking precision in both the maneuvering and non-maneuvering segments. Second, the estimation error covariance matrix is calculated using the Joseph form, which is numerically more stable and robust. Monte Carlo simulations show that the CS-MSTF algorithm performs better than CS-STF and estimates efficiently.
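The Joseph-form covariance update mentioned in the abstract can be sketched as follows; the two-state filter matrices here are illustrative toy values, not those of the CS-MSTF algorithm:

```python
import numpy as np

def joseph_update(P, K, H, R):
    """Joseph-form covariance update: P+ = (I - K H) P (I - K H)^T + K R K^T.
    Remains symmetric positive semi-definite under round-off, unlike the
    shorter form P+ = (I - K H) P."""
    I = np.eye(P.shape[0])
    A = I - K @ H
    return A @ P @ A.T + K @ R @ K.T

# toy 2-state / 1-measurement example (all numbers illustrative)
P = np.diag([4.0, 1.0])          # prior covariance
H = np.array([[1.0, 0.0]])       # measurement matrix
R = np.array([[0.5]])            # measurement noise covariance
S = H @ P @ H.T + R              # innovation covariance
K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
P_post = joseph_update(P, K, H, R)
```

For the optimal gain the result coincides with the simple form (I - KH)P, but the Joseph form stays well-behaved when the gain is suboptimal or perturbed.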
Recursive Algorithm For Linear Regression
Varanasi, S. V.
1988-01-01
Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations, facilitates search for minimum order of linear-regression model that fits set of data satisfactorily.
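The brief does not reproduce its recursions, which are order-recursive. A related and more widely documented recursion, sketched here as a stand-in, is the sample-wise recursive least squares update, which likewise avoids refitting from scratch:

```python
import numpy as np

def rls_fit(X, y, lam=1e3):
    """Sample-wise recursive least squares: update the coefficient vector w
    and the inverse normal matrix P one observation at a time, avoiding a
    full refit after each new sample."""
    n = X.shape[1]
    w = np.zeros(n)
    P = lam * np.eye(n)          # large initial P acts as a diffuse prior
    for x, t in zip(X, y):
        Px = P @ x
        k = Px / (1.0 + x @ Px)  # gain vector
        w = w + k * (t - x @ w)  # correct w by the prediction error
        P = P - np.outer(k, Px)  # rank-1 downdate of the inverse matrix
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true                   # noise-free synthetic data
w = rls_fit(X, y)
```

On noise-free data the recursion recovers the generating coefficients to within the small bias introduced by the diffuse prior.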
Designing algorithms using CAD technologies
Directory of Open Access Journals (Sweden)
Alin IORDACHE
2008-01-01
Full Text Available A representative example of an eLearning-platform modular application, 'Logical diagrams', is intended to be a useful learning and testing tool for the beginner programmer, but also for the more experienced one. The problem this application is trying to solve concerns young programmers who forget about the fundamentals of this domain, algorithmics. Logical diagrams are a graphic representation of an algorithm, which uses different geometrical figures (parallelograms, rectangles, rhombuses, circles) with particular meanings, called blocks, connected between them to reveal the flow of the algorithm. The role of this application is to help the user build the diagram for the algorithm and then automatically generate the C code and test it.
A quantum causal discovery algorithm
Giarmatzi, Christina; Costa, Fabio
2018-03-01
Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.
Multiagent scheduling models and algorithms
Agnetis, Alessandro; Gawiejnowicz, Stanisław; Pacciarelli, Dario; Soukhal, Ameur
2014-01-01
This book presents multi-agent scheduling models in which subsets of jobs sharing the same resources are evaluated by different criteria. It discusses complexity results, approximation schemes, heuristics and exact algorithms.
Efficient Algorithms for Subgraph Listing
Directory of Open Access Journals (Sweden)
Niklas Zechner
2014-05-01
Full Text Available Subgraph isomorphism is a fundamental problem in graph theory. In this paper we focus on listing subgraphs isomorphic to a given pattern graph. First, we look at the algorithm due to Chiba and Nishizeki for listing complete subgraphs of fixed size, and show that it cannot be extended to general subgraphs of fixed size. Then, we consider the algorithm due to Ga̧sieniec et al. for finding multiple witnesses of a Boolean matrix product, and use it to design a new output-sensitive algorithm for listing all triangles in a graph. As a corollary, we obtain an output-sensitive algorithm for listing subgraphs and induced subgraphs isomorphic to an arbitrary fixed pattern graph.
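For illustration, a plain degree-ordered triangle-listing routine in the spirit of the classical Chiba–Nishizeki method; this is not the paper's output-sensitive algorithm, and the example graph is arbitrary:

```python
from itertools import combinations

def list_triangles(edges):
    """List each triangle exactly once by orienting every edge from its
    lower-ranked endpoint (rank = degree, ties by label) to its
    higher-ranked one, then checking wedges at the lowest-ranked vertex."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    rank = {v: (len(adj[v]), v) for v in adj}
    out = {v: {u for u in adj[v] if rank[u] > rank[v]} for v in adj}
    tris = []
    for v in adj:
        for u, w in combinations(sorted(out[v]), 2):
            # u, w both outrank v; the u-w edge lives in exactly one of
            # out[u] / out[w], so test both directions
            if w in out[u] or u in out[w]:
                tris.append(tuple(sorted((v, u, w))))
    return sorted(tris)

# K4 contains 4 triangles
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
tris = list_triangles(edges)
```

Each triangle is reported only from its minimum-rank vertex, so no deduplication pass is needed.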
Can Polar Fields Explain Missing Open Flux?
Linker, J.; Downs, C.; Caplan, R. M.; Riley, P.; Mikic, Z.; Lionello, R.
2017-12-01
The "open" magnetic field is the portion of the Sun's magnetic field that extends out into the heliosphere and becomes the interplanetary magnetic field (IMF). Both the IMF and the Sun's magnetic field in the photosphere have been measured for many years. In the standard paradigm of coronal structure, the open magnetic field originates primarily in coronal holes. The regions that are magnetically closed trap the coronal plasma and give rise to the streamer belt. This basic picture is qualitatively reproduced by models of coronal structure using photospheric magnetic fields as input. If this paradigm is correct, there are two primary observational constraints on the models: (1) The open field regions in the model should approximately correspond to coronal holes observed in emission, and (2) the magnitude of the open magnetic flux in the model should match that inferred from in situ spacecraft measurements. Linker et al. (2017, ApJ, submitted) investigated the July 2010 time period for a range of observatory maps and both PFSS and MHD models. We found that all of the model/map combinations underestimated the interplanetary magnetic flux, unless the modeled open field regions were larger than observed coronal holes. An estimate of the open magnetic flux made entirely from solar observations (combining detected coronal hole boundaries with observatory synoptic magnetic maps) also underestimated the interplanetary magnetic flux. The magnetic field near the Sun's poles is poorly observed and may not be well represented in observatory maps. In this paper, we explore whether an underestimate of the polar magnetic flux during this time period could account for the overall underestimate of open magnetic flux. Research supported by NASA, AFOSR, and NSF.
Eddy Correlation Flux Measurement System Handbook
Energy Technology Data Exchange (ETDEWEB)
Cook, D. R. [Argonne National Lab. (ANL), Argonne, IL (United States)
2016-01-01
The eddy correlation (ECOR) flux measurement system provides in situ, half-hour measurements of the surface turbulent fluxes of momentum, sensible heat, latent heat, and carbon dioxide (CO2) (and methane at one Southern Great Plains extended facility (SGP EF) and the North Slope of Alaska Central Facility (NSA CF)). The fluxes are obtained with the eddy covariance technique, which involves correlation of the vertical wind component with the horizontal wind component, the air temperature, the water vapor density, and the CO2 concentration. The instruments used are:
• a fast-response, three-dimensional (3D) wind sensor (sonic anemometer) to obtain the orthogonal wind components and the speed of sound (SOS) (used to derive the air temperature)
• an open-path infrared gas analyzer (IRGA) to obtain the water vapor density and the CO2 concentration, and
• an open-path infrared gas analyzer (IRGA) to obtain methane density and methane flux at one SGP EF and at the NSA CF.
The ECOR systems are deployed at the locations where other methods for surface flux measurements (e.g., energy balance Bowen ratio [EBBR] systems) are difficult to employ, primarily at the north edge of a field of crops. A Surface Energy Balance System (SEBS) has been installed collocated with each deployed ECOR system in SGP, NSA, Tropical Western Pacific (TWP), ARM Mobile Facility 1 (AMF1), and ARM Mobile Facility 2 (AMF2). The surface energy balance system consists of upwelling and downwelling solar and infrared radiometers within one net radiometer, a wetness sensor, and soil measurements. The SEBS measurements allow the comparison of ECOR sensible and latent heat fluxes with the energy balance determined from the SEBS and provide information on wetting of the sensors for data quality purposes. The SEBS at one SGP and one NSA site also support upwelling and downwelling PAR measurements to qualify those two locations as Ameriflux sites.
Wet Deposition Flux of Reactive Organic Carbon
Safieddine, S.; Heald, C. L.
2016-12-01
Reactive organic carbon (ROC) is the sum of non-methane volatile organic compounds (NMVOCs) and primary and secondary organic aerosols (OA). ROC plays a key role in driving the chemistry of the atmosphere, affecting the hydroxyl radical concentrations, methane lifetime, ozone formation, heterogeneous chemical reactions, and cloud formation, thereby impacting human health and climate. Uncertainties in the lifecycle of ROC in the atmosphere remain large. In part this can be attributed to the large uncertainties associated with the wet deposition fluxes. Little is known about the global magnitude of wet deposition as a sink of both gas- and particle-phase organic carbon, making this an important area for research and sensitivity testing in order to better understand the global ROC budget. In this study, we simulate the wet deposition fluxes of the reactive organic carbon of the troposphere using a global chemistry transport model, GEOS-Chem. We start by showing the current modeled global distribution of ROC wet deposition fluxes and investigate the sensitivity of these fluxes to variability in Henry's law solubility constants and spatial resolution. The average carbon oxidation state (OSc) is a useful metric that depicts the degree of oxidation of atmospheric reactive carbon. Here, we present for the first time the simulated gas- and particle-phase OSc of the global troposphere. We compare the OSc in the wet-deposited reactive carbon flux and the dry-deposited reactive carbon flux to the OSc of atmospheric ROC to gain insight into the degree of oxidation in deposited material and, more generally, the aging of organic material in the troposphere.
Depicting CH4 fluxes and drivers dynamics
Dengel, S.; Billesbach, D. P.; Hughes, H.; Humphreys, E.; Lee, J.; Noormets, A.; Verfaillie, J. G.
2016-12-01
Since the advancement of CH4 eddy covariance flux measurements, monitoring of CH4 emissions has become more widespread. Because CH4 fluxes are not as predictable or as easily interpretable as CO2 fluxes, understanding their emission patterns is often still challenging. As these fluxes are spatially (ecosystem and latitudinal) and temporally very diverse, and often event-based, a better understanding and interpretation of results is required. Improved understanding also increases the reliability of gap-filling methods, since annual greenhouse gas budgets rely on high-quality data. Generalized additive models (Wood 2001) can easily be applied to sites; such models establish a relationship between the response variable, in this case CH4 flux, and explanatory variables (drivers). Relevant for CH4 flux dynamics is the smoothing function that is applied, in which each predictor variable is separated into sections and a polynomial function is fitted. On the one hand, such models are rarely used because they are difficult to interpret, since no parameter values are returned. On the other hand, such models are very good for prediction and exploratory analysis in estimating the functional nature of a response. Applying such models to CH4 eddy flux data improves our understanding of the dynamics of CH4 emissions and their respective meteorological drivers. Furthermore, such models combined with tree models (capturing interactions between the explanatory variables) can visualise precise dynamics and be easily applied to individual sites. These models are simple tools for understanding these complex fluxes, as they can include a variety of drivers whose relevance is tested by the model. Model input variables should be as independent as possible (avoiding cross-correlation) and redundant inputs should be avoided, as models should follow the principle of parsimony: simple, but not too simple. Wood SN (2001). mgcv: GAMs and generalized ridge regression for R. R News.
Estimation of Land Surface Fluxes and Their Uncertainty via Variational Data Assimilation Approach
Abdolghafoorian, A.; Farhadi, L.
2016-12-01
Accurate estimation of land surface heat and moisture fluxes as well as root zone soil moisture is crucial in various hydrological, meteorological, and agricultural applications. "In situ" measurements of these fluxes are costly and cannot be readily scaled to large areas relevant to weather and climate studies. Therefore, there is a need for techniques to make quantitative estimates of heat and moisture fluxes using land surface state variables. In this work, we applied a novel approach based on the variational data assimilation (VDA) methodology to estimate land surface fluxes and soil moisture profile from the land surface states. This study accounts for the strong linkage between terrestrial water and energy cycles by coupling the dual source energy balance equation with the water balance equation through the mass flux of evapotranspiration (ET). Heat diffusion and moisture diffusion into the column of soil are adjoined to the cost function as constraints. This coupling results in more accurate prediction of land surface heat and moisture fluxes and consequently soil moisture at multiple depths with high temporal frequency as required in many hydrological, environmental and agricultural applications. One of the key limitations of VDA technique is its tendency to be ill-posed, meaning that a continuum of possibilities exists for different parameters that produce essentially identical measurement-model misfit errors. On the other hand, the value of heat and moisture flux estimation to decision-making processes is limited if reasonable estimates of the corresponding uncertainty are not provided. In order to address these issues, in this research uncertainty analysis will be performed to estimate the uncertainty of retrieved fluxes and root zone soil moisture. The assimilation algorithm is tested with a series of experiments using a synthetic data set generated by the simultaneous heat and water (SHAW) model. We demonstrate the VDA performance by comparing the
A retrodictive stochastic simulation algorithm
International Nuclear Information System (INIS)
Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.
2010-01-01
In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
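A minimal forward (predictive) stochastic simulation algorithm for a pure-decay reaction, i.e. the scheme the retrodictive variant complements; the rate, counts, and seeds are illustrative:

```python
import random

def gillespie_decay(n0, k, t_max, rng):
    """Standard (predictive) Gillespie SSA for pure decay A -> 0 at rate k.
    The paper's retrodictive variant runs an analogous recursion backward
    from a known final state; this sketch shows only the forward algorithm."""
    t, n = 0.0, n0
    while n > 0:
        a = k * n                  # total propensity of the decay channel
        t += rng.expovariate(a)    # exponentially distributed waiting time
        if t > t_max:
            break
        n -= 1                     # fire one decay event
    return n

# mean survivor count after t_max should approach n0 * exp(-k * t_max) ~ 36.8
survivors = [gillespie_decay(100, 0.5, 2.0, random.Random(s)) for s in range(200)]
```

Averaging over independent runs recovers the deterministic decay law, which is a convenient sanity check for any SSA implementation.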
Autonomous algorithms for image restoration
Griniasty, Meir
1994-01-01
We describe a general theoretical framework for algorithms that adaptively tune all their parameters during the restoration of a noisy image. The adaptation procedure is based on a mean field approach known as ``Deterministic Annealing'', and is reminiscent of the ``Deterministic Boltzmann Machine''. The algorithm is less time-consuming than its simulated annealing alternative. We apply the theory to several architectures and compare their performances.
New algorithms for parallel MRI
International Nuclear Information System (INIS)
Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A
2008-01-01
Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their flavors, we consider the problem as a non-linear inverse problem. However, in order to avoid cost-intensive derivative evaluations, we use Landweber-Kaczmarz iteration and, in order to improve the overall results, some additional sparsity constraints.
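The underlying fixed-point scheme can be sketched as plain Landweber iteration on a generic linear model; the paper's Landweber-Kaczmarz variant additionally cycles over coil subproblems and imposes sparsity constraints, which this sketch omits, and the matrix here is synthetic:

```python
import numpy as np

def landweber(A, y, omega=None, iters=2000):
    """Plain Landweber iteration x_{k+1} = x_k + omega * A^T (y - A x_k),
    a derivative-free fixed-point scheme for the linear inverse problem
    A x = y; omega < 2 / ||A||_2^2 guarantees convergence."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + omega * A.T @ (y - A @ x)   # step along the residual
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10))      # well-conditioned synthetic operator
x_true = rng.normal(size=10)
y = A @ x_true                     # noise-free data
x = landweber(A, y)
```

On consistent noise-free data the iterates converge to the least-squares solution; with noisy data, early stopping acts as the regularizer.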
When the greedy algorithm fails
Bang-Jensen, Jørgen; Gutin, Gregory; Yeo, Anders
2004-01-01
We provide a characterization of the cases when the greedy algorithm may produce the unique worst possible solution for the problem of finding a minimum weight base in an independence system when the weights are taken from a finite range. We apply this theorem to TSP and the minimum bisection problem. The practical message of this paper is that the greedy algorithm should be used with great care, since for many optimization problems its usage seems impractical even for generating a starting s...
A* Algorithm for Graphics Processors
Inam, Rafia; Cederman, Daniel; Tsigas, Philippas
2010-01-01
Today's computer games have thousands of agents moving at the same time in areas inhabited by a large number of obstacles. In such an environment it is important to be able to calculate multiple shortest paths concurrently in an efficient manner. The highly parallel nature of the graphics processor suits this scenario perfectly. We have implemented a graphics processor based version of the A* path finding algorithm together with three algorithmic improvements that allow it to work faster and ...
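A sequential CPU reference of A* on a 4-connected grid, the search the paper maps onto the graphics processor one agent at a time; the grid, costs, and Manhattan heuristic are illustrative choices:

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected grid (0 = free, 1 = obstacle)
    with a Manhattan-distance heuristic. Returns path length or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]   # (f = g + h, g, node)
    best = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best.get(node, float("inf")):
            continue                     # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
length = astar(grid, (0, 0), (2, 0))   # must detour around the wall
```

A GPU version replaces the single priority queue with per-agent or partitioned queues so that thousands of such searches proceed concurrently.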
Algorithm for programming function generators
International Nuclear Information System (INIS)
Bozoki, E.
1981-01-01
The present paper deals with a mathematical problem, encountered when driving a fully programmable μ-processor controlled function generator. An algorithm is presented to approximate a desired function by a set of straight segments in such a way that additional restrictions (hardware imposed) are also satisfied. A computer program which incorporates this algorithm and automatically generates the necessary input for the function generator for a broad class of desired functions is also described
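One simple way to realize the segment-approximation step is greedy bisection under a maximum-error tolerance; the hardware restrictions of the paper's generator are not modeled, and the bisection strategy and tolerance check are assumptions of this sketch:

```python
import math

def piecewise_linear(f, a, b, tol, max_segments=10000):
    """Greedily cover [a, b] with straight segments so that each chord
    deviates from f by at most tol (checked on a 33-point sample grid)."""
    def chord_err(x0, x1):
        # max deviation between f and the chord through (x0,f(x0)),(x1,f(x1))
        return max(abs(f(x0 + t * (x1 - x0)) -
                       (f(x0) + t * (f(x1) - f(x0))))
                   for t in (i / 32 for i in range(33)))
    knots = [a]
    x0 = a
    while x0 < b and len(knots) < max_segments:
        x1 = b
        while chord_err(x0, x1) > tol:   # halve the span until it fits
            x1 = x0 + (x1 - x0) / 2
        knots.append(x1)
        x0 = x1
    return knots

knots = piecewise_linear(math.sin, 0.0, math.pi, 0.01)
```

The returned knot list is the breakpoint table a programmable function generator would be loaded with; tightening tol trades more segments for accuracy.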
Cascade Error Projection: A New Learning Algorithm
Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.
1995-01-01
A new neural network architecture and a hardware-implementable learning algorithm are proposed. The algorithm, called cascade error projection (CEP), handles lack of precision and circuit noise better than existing algorithms.
Diffusive flux of energy in binary mixtures
International Nuclear Information System (INIS)
Sampaio, R.S.
1976-04-01
The diffusive flux of energy j̃ is studied through the reduced diffusive flux of energy K̃, which obeys equations of the form sim(∂K̃/∂(grad ρ_α)) = sim(∂K̃/∂(grad θ)) = 0. By a representation theorem, herein proved, a general representation for K̃ is obtained, which is simplified for the case of binary mixtures using the principle of objectivity. Some consequences of this representation are discussed, such as the symmetry of the partial stresses T̃_1 and T̃_2 and the difference between the normal stresses. [pt]
Observation of a Coulomb flux tube
Greensite, Jeff; Chung, Kristian
2018-03-01
In Coulomb gauge there is a longitudinal color electric field associated with a static quark-antiquark pair. We have measured the spatial distribution of this field, and find that it falls off exponentially with transverse distance from a line joining the two quarks. In other words there is a Coulomb flux tube, with a width that is somewhat smaller than that of the minimal energy flux tube associated with the asymptotic string tension. A confinement criterion for gauge theories with matter fields is also proposed.
Atmospheric electron flux at airplane altitude
International Nuclear Information System (INIS)
Enomoto, R.; Chiba, J.; Ogawa, K.; Sumiyoshi, T.; Takasaki, F.; Kifune, T.; Matsubara, Y.; Nishimura, J.
1991-01-01
We have developed a new detector to systematically measure the cosmic-ray electron flux at airplane altitudes. We loaded a lead-glass-based electron telescope onto a commercial cargo airplane. The first experiment was carried out using the air route between Narita (Japan) and Sydney (Australia); during this flight we measured the electron flux at various altitudes and latitudes. The thresholds of the electron energies were 1, 2, and 4 GeV. The results agree with a simple estimation using one-dimensional shower theory. A comparison with a Monte Carlo calculation was made
Growth of zircaloy 4 under neutron flux
International Nuclear Information System (INIS)
Morize, P.; Baicry, J.; Morlot, G.; Sciers, P.; Lehmann, D.
1982-06-01
Between 300 and 385 °C, and under neutron fluxes between 0.5 and 2×10^14 n/cm^2/s, the growth of zircaloy tubes is nil in the plane perpendicular to the axis, and can be represented in the axial direction by the equation Δl/l = 4.66×10^-14 (φt)^0.49. In the range investigated, neither the irradiation temperature nor the instantaneous flux has any effect on the metallurgical state (relieved or recrystallized). [fr]
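Evaluating the fitted axial growth law Δl/l = 4.66×10⁻¹⁴ (φt)^0.49 quoted in the abstract (fluence φt in n/cm²) at an illustrative operating point:

```python
def axial_growth(flux_n_cm2_s, seconds):
    """Axial growth strain of zircaloy-4 tubes under neutron flux, using
    the fitted law Delta_l/l = 4.66e-14 * (phi*t)**0.49 from the abstract;
    growth in the transverse plane is nil."""
    fluence = flux_n_cm2_s * seconds      # phi * t, in n/cm^2
    return 4.66e-14 * fluence ** 0.49

# e.g. one year at 1e14 n/cm^2/s (an illustrative point inside the
# studied flux range, not a value from the paper)
strain = axial_growth(1e14, 365 * 24 * 3600.0)
```

The near-square-root exponent means doubling the fluence increases the axial strain by only about 40%.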
Dual neutron flux/temperature measurement sensor
Mihalczo, John T.; Simpson, Marc L.; McElhaney, Stephanie A.
1994-01-01
Simultaneous measurement of neutron flux and temperature is provided by a single sensor which includes a phosphor mixture having two principal constituents. The first constituent is neutron-sensitive 6LiF and the second is a rare-earth-activated Y2O3 thermophosphor. The mixture is coated on the end of a fiber optic, while the opposite end of the fiber optic is coupled to a light detector. The detected light scintillations are quantified for neutron flux determination, and the decay is measured for temperature determination.
Gravitational effects on planetary neutron flux spectra
Feldman, W. C.; Drake, D. M.; O'Dell, R. D.; Brinkley, F. W., Jr.; Anderson, R. C.
1989-01-01
The effects of gravity on the planetary neutron flux spectra for planet Mars, and the lifetime of the neutron, were investigated using a modified one-dimensional diffusion accelerated neutral-particle transport code, coupled with a multigroup cross-section library tailored specifically for Mars. The results showed the presence of a qualitatively new feature in planetary neutron leakage spectra in the form of a component of returning neutrons with kinetic energies less than the gravitational binding energy (0.132 eV for Mars). The net effect is an enhancement in flux at the lowest energies that is largest at and above the outermost layer of planetary matter.
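The quoted 0.132 eV threshold is simply the surface gravitational binding energy E = G M m_n / R of a neutron on Mars, which a quick check with standard constants reproduces:

```python
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M    = 6.417e23    # mass of Mars, kg
R    = 3.3895e6    # mean radius of Mars, m
m_n  = 1.675e-27   # neutron mass, kg
eV   = 1.602e-19   # joules per electronvolt

# surface gravitational binding energy of a neutron on Mars, in eV
E_bind_eV = G * M * m_n / (R * eV)   # ~0.132 eV
```

Neutrons leaking with kinetic energy below this value cannot escape and fall back, producing the low-energy flux enhancement described in the abstract.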
Flux pinning characteristics of YBCO coated conductor
International Nuclear Information System (INIS)
Matsushita, T.; Watanabe, T.; Fukumoto, Y.; Yamauchi, K.; Kiuchi, M.; Otabe, E.S.; Kiss, T.; Watanabe, T.; Miyata, S.; Ibi, A.; Muroga, T.; Yamada, Y.; Shiohara, Y.
2005-01-01
Flux pinning properties of PLD-processed YBCO coated conductors deposited on IBAD substrate are investigated. The thickness of YBCO layer is changed in the range of 0.27-1.0 μm. The thickness dependence of critical current density, n-value and irreversibility field are measured in a wide range of magnetic field. The results are compared with the theoretical flux creep-flow model. It is found that these pinning properties are strongly influenced by the thickness as well as the pinning strength. Optimum condition for high field application of this superconductor is discussed
Planck intermediate results - LII. Planet flux densities
DEFF Research Database (Denmark)
Akrami, Y.; Ashdown, M.; Aumont, J.
2017-01-01
Measurements of flux density are described for five planets, Mars, Jupiter, Saturn, Uranus, and Neptune, across the six Planck High Frequency Instrument frequency bands (100–857 GHz) and these are then compared with models and existing data. In our analysis, we have also included estimates of the...... experiments. In particular, we observe that the flux densities measured by Planck HFI and WMAP agree to within 2%. These results allow experiments operating in the mm-wavelength range to cross-calibrate against Planck and improve models of radiative transport used in planetary science....
International Nuclear Information System (INIS)
Joiner, W.C.H.
1979-12-01
Flux flow noise power spectra were investigated, and information obtained through such spectra is applied to describe flux flow and pinning in situations where volume pinning force data are also available. In one case, the application of noise data to Pb80In20 samples after recovery and after high-temperature annealing is discussed. This work is consistent with a recent model for flux flow noise generation. In the second case we discuss experiments designed to change the fluxoid transit path length, which according to the model should affect both the noise amplitude and the parameter α specifying the longest subpulse times in terms of the average transit time, τ_c. Transient flux flow voltages when a current is switched on after field cycling a Pb60In40 sample have been discovered. Noise spectra have been measured during the transient. These observations are discussed along with a simple model which fits the data. A surprising result is that the transient decay times increase with the applied current. Other characteristics of Pb60In40 after cold working are also discussed.
Rotational Invariant Dimensionality Reduction Algorithms.
Lai, Zhihui; Xu, Yong; Yang, Jian; Shen, Linlin; Zhang, David
2017-11-01
A common intrinsic limitation of the traditional subspace learning methods is the sensitivity to the outliers and the image variations of the object since they use the norm as the metric. In this paper, a series of methods based on the -norm are proposed for linear dimensionality reduction. Since the -norm based objective function is robust to the image variations, the proposed algorithms can perform robust image feature extraction for classification. We use different ideas to design different algorithms and obtain a unified rotational invariant (RI) dimensionality reduction framework, which extends the well-known graph embedding algorithm framework to a more generalized form. We provide the comprehensive analyses to show the essential properties of the proposed algorithm framework. This paper indicates that the optimization problems have global optimal solutions when all the orthogonal projections of the data space are computed and used. Experimental results on popular image datasets indicate that the proposed RI dimensionality reduction algorithms can obtain competitive performance compared with the previous norm based subspace learning algorithms.
Artificial Flora (AF Optimization Algorithm
Directory of Open Access Journals (Sweden)
Long Cheng
2018-02-01
Full Text Available Inspired by the process of migration and reproduction of flora, this paper proposes a novel artificial flora (AF) algorithm. This algorithm can be used to solve some complex, non-linear, discrete optimization problems. Although a plant cannot move, it can spread seeds within a certain range to let offspring find the most suitable environment. The stochastic process is easy to copy, and the spreading space is vast; therefore, it is suitable for use in an intelligent optimization algorithm. First, the algorithm randomly generates the original plants, including their positions and propagation distances. Then, the position and the propagation distance of each original plant are substituted as parameters into the propagation function to generate offspring plants. Finally, the optimal offspring are selected as new original plants through the selection function, and the previous original plants become the former plants. The iteration continues until the optimal solution is found. In this paper, six classical evaluation functions are used as the benchmark functions. The simulation results show that the proposed algorithm has high accuracy and stability compared with the classical particle swarm optimization and artificial bee colony algorithms.
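A minimal sketch of the idea described above; the Gaussian seed-spread rule, the 0.9 shrink factor, and all parameter values are simplifying assumptions of this sketch, not the paper's exact propagation and selection formulas:

```python
import random

def artificial_flora(f, dim, bounds, n_plants=10, n_seeds=5,
                     iters=100, rng=random.Random(0)):
    """Flora-style search: each plant spreads seeds randomly within its
    propagation distance, and the fittest seeds become the next
    generation's original plants."""
    lo, hi = bounds
    plants = [([rng.uniform(lo, hi) for _ in range(dim)],
               (hi - lo) / 4) for _ in range(n_plants)]   # (position, spread)
    best_x, best_f = None, float("inf")
    for _ in range(iters):
        seeds = []
        for pos, d in plants:
            for _ in range(n_seeds):
                cand = [min(hi, max(lo, x + rng.gauss(0, d))) for x in pos]
                seeds.append((cand, d * 0.9))   # spread shrinks over time
        seeds.sort(key=lambda s: f(s[0]))
        plants = seeds[:n_plants]               # selection of offspring
        if f(plants[0][0]) < best_f:
            best_x, best_f = plants[0][0], f(plants[0][0])
    return best_x, best_f

sphere = lambda x: sum(v * v for v in x)        # toy benchmark function
x, fx = artificial_flora(sphere, dim=3, bounds=(-5.0, 5.0))
```

The shrinking spread plays the role of an annealing schedule: early generations explore widely, later ones refine around the best region found.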
An Overview of Flux Pumps for HTS Coils
Coombs, Timothy Arthur; Geng, Jianzhao; Fu, L; Matsuda, K
2016-01-01
High-Tc superconducting (HTS) flux pumps are capable of injecting flux into closed HTS magnets without electrical contact. They are becoming a promising alternative to current sources for powering HTS coils. This paper reviews the recent progress in flux pumps for HTS coil magnets. Different types of HTS flux pumps are introduced, their physics is explained, and comparisons are made. J. Geng would like to acknowledge Cambridge Trust for offering Cambridge International Scholar...
Integrated passive flux measurement in groundwater: design and performance of iFLUX samplers
Verreydt, Goedele; Razaei, Meisam; Meire, Patrick; Van Keer, Ilse; Bronders, Jan; Seuntjens, Piet
2017-04-01
The monitoring and management of soil and groundwater is a challenge. Current methods for the determination of movement or flux of pollution in groundwater use no direct measurements but only simulations based on concentration measurements and Darcy velocity estimations. This entails large uncertainties which cause remediation failures and higher costs for contaminated site owners. On top of that, the lack of useful data makes it difficult to get approval for a risk-based management approach which completely avoids costly remedial actions. The iFLUX technology is a key development of Dr. Goedele Verreydt at the University of Antwerp and VITO. It is supported by the passive flux measurement technology as invented by Prof. Mike Annable and his team at the University of Florida. The iFLUX technology includes an in situ measurement device for capturing dynamic groundwater quality and quantity, the iFLUX sampler, and an associated interpretation and visualization method. The iFLUX sampler is a modular passive sampler that provides simultaneous in situ point determinations of a time-averaged target compound mass flux and water flux. The sampler is typically installed in a monitoring well where it intercepts the groundwater flow and captures the compounds of interest. The sampler consists of permeable cartridges which are each packed with a specific sorbent matrix. The sorbent matrix of the water flux cartridge is impregnated with known amounts of water soluble resident tracers. These tracers are leached from the matrix at rates proportional to the groundwater flux. The measurements of the contaminants and the remaining resident tracer are used to determine groundwater and target compound fluxes. Exposure times range from 1 week to 6 months, depending on the expected concentration and groundwater flow velocity. The iFLUX sampler technology has been validated and tested at several field projects. Currently, 4 cartridges are tested and available: 1 waterflux cartridge to
A flux footprint analysis to understand ecosystem fluxes in an intensively managed landscape
Hernandez Rodriguez, L. C.; Goodwell, A. E.; Kumar, P.
2017-12-01
Flux tower studies in agricultural sites have mainly been done at the plot scale, where the footprint of the instruments is small, so that the data reveal the behaviour of the nearby crop on which the study is focused. In the Midwestern United States, the agricultural ecosystem and its associated drainage, evapotranspiration, and nutrient dynamics are dominant influences on interactions between the soil, land, and atmosphere. In this study, we address large-scale ecohydrologic fluxes and states in an intensively managed landscape based on data from a 25 m high eddy covariance flux tower. We show the calculated upwind distance and flux footprint for a flux tower located in Central Illinois as part of the Intensively Managed Landscapes Critical Zone Observatory (IMLCZO). In addition, we calculate the daily energy balance during the summer of 2016 from the flux tower measurements and compare it with the modelled energy balance of a representative corn crop located in the flux tower footprint using the Multi-Layer Canopy model, MLCan. The changes in flux footprint over the course of hours, days, and the growing season have significant implications for the measured fluxes of carbon and energy at the flux tower. We use MLCan to simulate these fluxes under land covers of corn and soybeans. Our results demonstrate how the instrument height affects the footprint of the captured eddy covariance fluxes, and we explore the implications for hydrological analysis. The convective turbulent atmosphere during the daytime shows a wide footprint of more than 10 km2, reaching 3 km in length for the 90% contribution, where buoyancy is the dominant mechanism driving turbulence. In contrast, the stable atmosphere during the night-time shows a narrower but more elongated footprint that extends beyond 8 km2 and grows in the direction of the prevailing wind, exceeding 4 km in length. This study improves our understanding of agricultural ecosystem behaviour in terms of the magnitude and variability of fluxes and
Optimization of 13C isotopic tracers for metabolic flux analysis in mammalian cells.
Walther, Jason L; Metallo, Christian M; Zhang, Jie; Stephanopoulos, Gregory
2012-03-01
Mammalian cells consume and metabolize various substrates from their surroundings for energy generation and biomass synthesis. Glucose and glutamine, in particular, are the primary carbon sources for proliferating cancer cells. While this combination of substrates generates static labeling patterns for use in (13)C metabolic flux analysis (MFA), the inability of single tracers to effectively label all pathways poses an obstacle for comprehensive flux determination within a given experiment. To address this issue we applied a genetic algorithm to optimize mixtures of (13)C-labeled glucose and glutamine for use in MFA. We identified tracer combinations that minimized confidence intervals in an experimentally determined flux network describing central carbon metabolism in tumor cells. Additional simulations were used to determine the robustness of the [1,2-(13)C(2)]glucose/[U-(13)C(5)]glutamine tracer combination with respect to perturbations in the network. Finally, we experimentally validated the improved performance of this tracer set relative to glucose tracers alone in a cancer cell line. This versatile method allows researchers to determine the optimal tracer combination to use for a specific metabolic network, and our findings applied to cancer cells significantly enhance the ability of MFA experiments to precisely quantify fluxes in higher organisms. Copyright © 2011 Elsevier Inc. All rights reserved.
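The tracer-mixture search described above can be illustrated with a toy genetic algorithm. The objective below is a made-up surrogate for the flux confidence-interval criterion (its optimum at a 60/40 mixture is an assumption chosen purely for illustration), not the paper's metabolic network model:

```python
import random

# Toy genetic-algorithm sketch of mixture optimization: evolve the fraction
# of one hypothetical tracer to minimize a surrogate "confidence interval".

def surrogate_ci(fraction):
    # Illustrative stand-in objective, smallest near a 60/40 mixture.
    return (fraction - 0.6) ** 2 + 0.01

def evolve(pop_size=20, generations=50, mutation=0.05, seed=1):
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]  # fraction of tracer A
    for _ in range(generations):
        pop.sort(key=surrogate_ci)                 # lower CI is better
        parents = pop[: pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b)                  # arithmetic crossover
            child += rng.gauss(0.0, mutation)      # Gaussian mutation
            children.append(min(1.0, max(0.0, child)))
        pop = parents + children
    return min(pop, key=surrogate_ci)

best_fraction = evolve()
```

In the real setting the objective is the width of flux confidence intervals from a full MFA fit, which is far more expensive to evaluate; the GA structure, however, is the same.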
Gao, Nuo; Zhu, S. A.; He, Bin
2005-06-01
We have developed a new algorithm for magnetic resonance electrical impedance tomography (MREIT), which uses only one component of the magnetic flux density to reconstruct the electrical conductivity distribution within the body. The radial basis function (RBF) network and simplex method are used in the present approach to estimate the conductivity distribution by minimizing the errors between the 'measured' and model-predicted magnetic flux densities. Computer simulations were conducted in a realistic-geometry head model to test the feasibility of the proposed approach. Single-variable and three-variable simulations were performed to estimate the brain-skull conductivity ratio and the conductivity values of the brain, skull and scalp layers. When SNR = 15 for magnetic flux density measurements with the target skull-to-brain conductivity ratio being 1/15, the relative error (RE) between the target and estimated conductivity was 0.0737 ± 0.0746 in the single-variable simulations. In the three-variable simulations, the RE was 0.1676 ± 0.0317. Effects of electrode position uncertainty were also assessed by computer simulations. The present promising results suggest the feasibility of estimating important conductivity values within the head from noninvasive magnetic flux density measurements.
DFBAlab: a fast and reliable MATLAB code for dynamic flux balance analysis.
Gomez, Jose A; Höffner, Kai; Barton, Paul I
2014-12-18
Dynamic Flux Balance Analysis (DFBA) is a dynamic simulation framework for biochemical processes. DFBA can be performed using different approaches such as static optimization (SOA), dynamic optimization (DOA), and direct approaches (DA). Few existing simulators address the theoretical and practical challenges of nonunique exchange fluxes or infeasible linear programs (LPs). Both are common sources of failure and inefficiencies for these simulators. DFBAlab, a MATLAB-based simulator that uses the LP feasibility problem to obtain an extended system and lexicographic optimization to yield unique exchange fluxes, is presented. DFBAlab is able to simulate complex dynamic cultures with multiple species rapidly and reliably, including differential-algebraic equation (DAE) systems. In addition, DFBAlab's running time scales linearly with the number of species models. Three examples are presented where the performance of COBRA, DyMMM and DFBAlab are compared. Lexicographic optimization is used to determine unique exchange fluxes which are necessary for a well-defined dynamic system. DFBAlab does not fail during numerical integration due to infeasible LPs. The extended system obtained through the LP feasibility problem in DFBAlab provides a penalty function that can be used in optimization algorithms.
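The static-optimization (SOA) flavour of DFBA can be sketched with a toy one-reaction "network" in which the inner flux-balance problem collapses to taking the maximum allowed substrate uptake, so no LP solver is needed. All kinetic constants below are illustrative assumptions, not DFBAlab defaults:

```python
# Minimal SOA dynamic FBA sketch: at each time step, solve the (here trivial)
# flux-balance problem, then advance biomass and substrate by forward Euler.

def dfba_soa(x0=0.05, s0=10.0, vmax=10.0, km=0.5, yield_x=0.1,
             dt=0.01, t_end=8.0):
    x, s = x0, s0                 # biomass (gDW/L), substrate (mmol/L)
    t = 0.0
    trajectory = [(t, x, s)]
    while t < t_end:
        # "Inner LP": uptake bounded by Michaelis-Menten kinetics.
        v_uptake = vmax * s / (km + s) if s > 0 else 0.0
        mu = yield_x * v_uptake   # growth rate implied by optimal flux
        x += mu * x * dt          # biomass ODE
        s -= v_uptake * x * dt    # substrate ODE
        s = max(s, 0.0)           # substrate cannot go negative
        t += dt
        trajectory.append((t, x, s))
    return trajectory

traj = dfba_soa()
t_end, x_end, s_end = traj[-1]
```

A genome-scale version replaces the one-line "inner LP" with a real linear program; the nonuniqueness and infeasibility issues DFBAlab addresses arise exactly at that step.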
Prediction of Greenhouse Gas (GHG) Fluxes from Coastal Salt Marshes using Artificial Neural Network
Ishtiaq, K. S.; Abdul-Aziz, O. I.
2017-12-01
Coastal salt marshes are among the most productive ecosystems on earth. Given the complex interactions between the ambient environment and ecosystem biological exchanges, it is difficult to predict salt marsh greenhouse gas (GHG) fluxes (CO2 and CH4) from their environmental drivers. In this study, we developed an artificial neural network (ANN) model to robustly predict the salt marsh GHG fluxes using a limited number of input variables (photosynthetically active radiation, soil temperature and porewater salinity). The ANN parameterization involved an optimized 3-layer feed-forward network trained with the Levenberg-Marquardt algorithm. Four tidal salt marshes of Waquoit Bay, MA, incorporating a gradient in land use, salinity and hydrology, were considered as the case study sites. The wetlands were dominated by native Spartina alterniflora, and characterized by high salinity and frequent flooding. The developed ANN model showed a good performance (training R2 = 0.87 - 0.96; testing R2 = 0.84 - 0.88) in predicting the fluxes across the case study sites. The model can be used to estimate wetland GHG fluxes and potential carbon balance under different IPCC climate change and sea level rise scenarios. The model can also aid the development of GHG offset protocols by setting monitoring guidelines for the restoration of coastal salt marshes.
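The 3-layer feed-forward structure described above can be sketched as follows. The weights here are random placeholders and the Levenberg-Marquardt training step is not reproduced, so this shows the network topology (three drivers in, one flux out) rather than a trained model:

```python
import math
import random

# Forward pass of a 3-layer feed-forward net: 3 inputs (PAR, soil
# temperature, porewater salinity), tanh hidden layer, linear output.

def forward(x, w_hidden, b_hidden, w_out, b_out):
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out

rng = random.Random(0)
n_in, n_hidden = 3, 5
# Placeholder weights; a real model would fit these to flux observations.
w_hidden = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
b_hidden = [rng.uniform(-1, 1) for _ in range(n_hidden)]
w_out = [rng.uniform(-1, 1) for _ in range(n_hidden)]
b_out = 0.0

# Example input: normalized PAR, soil temperature, salinity
flux = forward([0.8, 0.6, 0.3], w_hidden, b_hidden, w_out, b_out)
```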
An improved model for sensible heat flux estimation based on landcover classification
Zhou, Ti; Xin, Xiaozhou; Jiao, Jingjun; Peng, Zhiqing
2014-10-01
Remote sensing (RS) has been recognized as the most feasible means of providing spatially distributed regional evapotranspiration (ET). However, classical RS flux algorithms (SEBS, S-SEBI, SEBAL, etc.) can hardly be used with coarser-resolution RS data from sensors such as MODIS or AVHRR, because they do not account for surface heterogeneity in mixed pixels, even though they are suitable for assessing surface fluxes with high-resolution RS data. A new model named FAFH is developed in this study to improve the accuracy of flux estimation in mixed pixels based on high-resolution landcover classification data. The area fraction and relative sensible heat fraction of each heterogeneous land-use type within a coarse-resolution pixel are calculated first, and then used to form a weighted average of the modified sensible heat. The study is carried out in the core agricultural land of Zhangye, in the middle reaches of the Heihe river, based on the flux and landcover classification products of HJ-1B from our earlier work. The results indicate that FAFH increases the accuracy of sensible heat estimates by 5% in absolute terms and 10.64% in relative terms over the whole research area.
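The area-fraction weighting step at the core of the approach can be sketched as a simple weighted average over land-cover types inside a coarse pixel. The fractions and component fluxes below are illustrative numbers, not HJ-1B retrievals:

```python
# Area-fraction weighting for a mixed coarse pixel: the pixel-scale sensible
# heat flux is the sum of (area fraction) x (component flux) contributions.

def mixed_pixel_sensible_heat(components):
    """components: list of (area_fraction, sensible_heat_flux_W_m2) pairs."""
    total_fraction = sum(f for f, _ in components)
    assert abs(total_fraction - 1.0) < 1e-6, "area fractions must sum to 1"
    return sum(f * h for f, h in components)

# Example coarse pixel: 60% cropland, 30% bare soil, 10% water
h_coarse = mixed_pixel_sensible_heat([(0.6, 80.0), (0.3, 150.0), (0.1, 20.0)])
# 0.6*80 + 0.3*150 + 0.1*20 = 95.0 W/m2
```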
Algebraic Algorithm Design and Local Search
National Research Council Canada - National Science Library
Graham, Robert
1996-01-01
.... Algebraic techniques have been applied successfully to algorithm synthesis by the use of algorithm theories and design tactics, an approach pioneered in the Kestrel Interactive Development System (KIDS...
Golden Sine Algorithm: A Novel Math-Inspired Algorithm
Directory of Open Access Journals (Sweden)
TANYILDIZI, E.
2017-05-01
Full Text Available In this study, the Golden Sine Algorithm (Gold-SA) is presented as a new metaheuristic method for solving optimization problems. Gold-SA has been developed as a new population-based search algorithm. This math-based algorithm is inspired by the sine, a trigonometric function. In the algorithm, as many random individuals as search agents are created, with a uniform distribution in each dimension. In each iteration, the Gold-SA operator searches for a better solution by moving the current position closer to the target value. The solution space is narrowed by the golden section, so that only the regions expected to yield good results are scanned instead of the whole solution space. In the tests performed, Gold-SA achieves better results than other population-based methods. In addition, Gold-SA has fewer algorithm-dependent parameters and operators than other metaheuristic methods and converges faster, which adds to the method's appeal.
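A simplified sketch in the spirit of the abstract is shown below: each agent takes a sine-modulated step toward the current best solution, with golden-ratio coefficients scaling the move. This illustrates the idea only and is not the exact published Gold-SA update rule:

```python
import math
import random

GOLD = (math.sqrt(5) - 1) / 2  # golden ratio conjugate, ~0.618

def gold_sa_sketch(objective, dim=2, agents=15, iters=200, seed=3):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(agents)]
    best = min(pop, key=objective)[:]
    for _ in range(iters):
        for agent in pop:
            r1 = rng.uniform(0, 2 * math.pi)  # sine argument
            r2 = rng.uniform(0, math.pi)      # step magnitude
            for d in range(dim):
                # Sine-modulated contraction plus a pull toward the best,
                # weighted by golden-section coefficients (illustrative form).
                agent[d] = (agent[d] * abs(math.sin(r1))
                            - r2 * math.sin(r1)
                            * abs(GOLD * best[d] - (1 - GOLD) * agent[d]))
            if objective(agent) < objective(best):
                best = agent[:]
    return best

sphere = lambda x: sum(v * v for v in x)  # standard test function
best = gold_sa_sketch(sphere)
```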
13CFLUX2--high-performance software suite for (13)C-metabolic flux analysis.
Weitzel, Michael; Nöh, Katharina; Dalman, Tolga; Niedenführ, Sebastian; Stute, Birgit; Wiechert, Wolfgang
2013-01-01
(13)C-based metabolic flux analysis ((13)C-MFA) is the state-of-the-art method to quantitatively determine in vivo metabolic reaction rates in microorganisms. 13CFLUX2 contains all tools for composing flexible computational (13)C-MFA workflows to design and evaluate carbon labeling experiments. A specially developed XML language, FluxML, highly efficient data structures and simulation algorithms achieve a maximum of performance and effectiveness. Support of multicore CPUs, as well as compute clusters, enables scalable investigations. 13CFLUX2 outperforms existing tools in terms of universality, flexibility and built-in features. Therewith, 13CFLUX2 paves the way for next-generation high-resolution (13)C-MFA applications on the large scale. 13CFLUX2 is implemented in C++ (ISO/IEC 14882 standard) with Java and Python add-ons to run under Linux/Unix. A demo version and binaries are available at www.13cflux.net.
Direct determination of the solar neutrino fluxes from solar neutrino data
González-García, M. C.; Salvado, Jordi
2009-01-01
We determine the solar neutrino fluxes from a global analysis of the solar and terrestrial neutrino data in the framework of three-neutrino oscillations. Using a Bayesian approach we reconstruct the posterior probability distribution function for the eight normalization parameters of the solar neutrino fluxes plus the relevant oscillation parameters with and without imposing the luminosity constraint. This is done by means of a Markov Chain Monte Carlo employing the Metropolis-Hastings algorithm. We also describe how these results can be applied to test the predictions of the Standard Solar Models. Our results show that, at present, both models with low and high metallicity can describe the data with good statistical global agreement.
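The Metropolis-Hastings sampling step used in the analysis above can be illustrated on a toy one-dimensional posterior (a standard normal) rather than the full eight-flux solar model:

```python
import math
import random

# Minimal Metropolis-Hastings sampler with a symmetric Gaussian proposal;
# the target is an unnormalized standard normal, a stand-in for the real
# multi-parameter solar-flux posterior.

def log_posterior(x):
    return -0.5 * x * x

def metropolis_hastings(n_samples=20000, step=1.0, seed=7):
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        log_alpha = log_posterior(proposal) - log_posterior(x)
        if math.log(rng.random()) < log_alpha:  # accept/reject
            x = proposal
        samples.append(x)
    return samples

samples = metropolis_hastings()
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The chain's empirical mean and variance should approach 0 and 1 for this target; in the paper's setting the same machinery reconstructs the joint posterior over flux normalizations and oscillation parameters.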
Fast neutron flux analyzer with real-time digital pulse shape discrimination
Ivanova, A. A.; Zubarev, P. V.; Ivanenko, S. V.; Khilchenko, A. D.; Kotelnikov, A. I.; Polosatkin, S. V.; Puryga, E. A.; Shvyrev, V. G.; Sulyaev, Yu. S.
2016-08-01
Investigation of subthermonuclear plasma confinement and heating in magnetic fusion devices such as GOL-3 and GDT at the Budker Institute (Novosibirsk, Russia) requires sophisticated equipment for neutron-, gamma- diagnostics and upgrading data acquisition systems with online data processing. Measurement of fast neutron flux with stilbene scintillation detectors raised the problem of discrimination of the neutrons (n) from background cosmic particles (muons) and neutron-induced gamma rays (γ). This paper describes a fast neutron flux analyzer with real-time digital pulse-shape discrimination (DPSD) algorithm FPGA-implemented for the GOL-3 and GDT devices. This analyzer was tested and calibrated with the help of 137Cs and 252Cf radiation sources. The Figures of Merit (FOM) calculated for different energy cuts are presented.
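The Figure of Merit used to assess n/γ discrimination is conventionally the separation of the neutron and gamma peaks in the discrimination parameter divided by the sum of their FWHMs. The peak positions and widths below are illustrative values, not GOL-3/GDT calibration results:

```python
import math

# FOM = |mu_n - mu_g| / (FWHM_n + FWHM_g), with FWHM = 2*sqrt(2*ln 2)*sigma
# for Gaussian peaks in the pulse-shape discrimination parameter.

def figure_of_merit(mu_n, sigma_n, mu_g, sigma_g):
    fwhm_factor = 2.0 * math.sqrt(2.0 * math.log(2.0))  # ~2.355
    return abs(mu_n - mu_g) / (fwhm_factor * (sigma_n + sigma_g))

# Example: gamma peak at 0.20 and neutron peak at 0.35 in the PSD parameter
fom = figure_of_merit(mu_n=0.35, sigma_n=0.02, mu_g=0.20, sigma_g=0.02)
# An FOM above ~1 is commonly taken to indicate usable n/gamma separation.
```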
Apparatus for measuring low thermal fluxes
International Nuclear Information System (INIS)
Aranovitch, R.; Warnery, M.
1972-01-01
Device for the measurement of slight wall heat fluxes, made up of a metallic contact plate combined with a shaft; temperature-measurement elements are spaced along the shaft, which is kept at a cold, adjustable reference temperature lower than that of the walls; heat insulation is provided for the exposed part of the plate and for the shaft
Terrestrial water fluxes dominated by transpiration: Comment
Daniel R. Schlaepfer; Brent E. Ewers; Bryan N. Shuman; David G. Williams; John M. Frank; William J. Massman; William K. Lauenroth
2014-01-01
The fraction of evapotranspiration (ET) attributed to plant transpiration (T) is an important source of uncertainty in terrestrial water fluxes and land surface modeling (Lawrence et al. 2007, Miralles et al. 2011). Jasechko et al. (2013) used stable oxygen and hydrogen isotope ratios from 73 large lakes to investigate the relative roles of evaporation (E) and T in ET...
EUV mirror based absolute incident flux detector
Berger, Kurt W.
2004-03-23
A device for the in-situ monitoring of EUV radiation flux includes an integrated reflective multilayer stack. This device operates on the principle that a finite amount of in-band EUV radiation is transmitted through the entire multilayer stack. This device offers improvements over existing vacuum photo-detector devices since its calibration does not change with surface contamination.
Solitary wave propagation in solar flux tubes
International Nuclear Information System (INIS)
Erdelyi, Robert; Fedun, Viktor
2006-01-01
The aim of the present work is to investigate the excitation, time-dependent dynamic evolution, and interaction of nonlinear propagating (i.e., solitary) waves on vertical cylindrical magnetic flux tubes in compressible solar atmospheric plasma. The axisymmetric flux tube has a field strength of 1000 G at its footpoint, which is typical for photospheric regions. Nonlinear waves that develop into solitary waves are excited by a footpoint driver. The propagation of the nonlinear signal is investigated by solving numerically a set of fully nonlinear 2.0D magnetohydrodynamic (MHD) equations in cylindrical coordinates. For the initial conditions, axisymmetric solutions of the linear dispersion relation for wave modes in a magnetic flux tube are applied. In the present case, we focus on the sausage mode only. The dispersion relation is solved numerically for a range of plasma parameters. The equilibrium state is perturbed by a Gaussian at the flux tube footpoint. Two solitary solutions are found by solving the full nonlinear MHD equations. First, the nonlinear wave propagation with external sound speed is investigated. Next, the solitary wave propagating close to the tube speed, also found in the numerical solution, is studied. In contrast to previous analytical and numerical works, here no approximations were made to find the solitary solutions. A natural application of the present study may be spicule formation in the low chromosphere. Future possible improvements in modeling and the relevance of the photospheric chromospheric transition region coupling by spicules is suggested
Annual Cycles of Surface Shortwave Radiative Fluxes
Wilber, Anne C.; Smith, G. Louis; Gupta, Shashi K.; Stackhouse, Paul W.
2006-01-01
The annual cycles of surface shortwave flux are investigated using the 8-yr dataset of the surface radiation budget (SRB) components for the period July 1983-June 1991. These components include the downward, upward, and net shortwave radiant fluxes at the earth's surface. The seasonal cycles are quantified in terms of principal components that describe the temporal variations and empirical orthogonal functions (EOFs) that describe the spatial patterns. The major part of the variation is simply due to the variation of the insolation at the top of the atmosphere, especially for the first term, which describes 92.4% of the variance for the downward shortwave flux. However, for the second term, which describes 4.1% of the variance, the effect of clouds is quite important and the effect of clouds dominates the third term, which describes 2.4% of the variance. To a large degree the second and third terms are due to the response of clouds to the annual cycle of solar forcing. For net shortwave flux at the surface, similar variances are described by each term. The regional values of the EOFs are related to climate classes, thereby defining the range of annual cycles of shortwave radiation for each climate class.
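The EOF decomposition used above can be sketched by extracting the leading eigenvector of the spatial covariance matrix of the flux anomalies via power iteration. The two-"station" annual-cycle dataset below is synthetic, a stand-in for the gridded SRB fields:

```python
import math
import random

# Leading EOF: first eigenvector of the space-space covariance matrix of
# anomalies, found by power iteration (no external linear-algebra library).

def leading_eof(data):
    n_t, n_x = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n_t for j in range(n_x)]
    anom = [[row[j] - means[j] for j in range(n_x)] for row in data]
    cov = [[sum(anom[t][i] * anom[t][j] for t in range(n_t)) / n_t
            for j in range(n_x)] for i in range(n_x)]
    v = [1.0] * n_x
    for _ in range(200):  # power iteration for the top eigenvector
        w = [sum(cov[i][j] * v[j] for j in range(n_x)) for i in range(n_x)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Two synthetic "stations" sharing one annual cycle plus small noise;
# the leading EOF should weight both nearly equally.
rng = random.Random(0)
data = [[math.sin(2 * math.pi * t / 12) + 0.05 * rng.gauss(0, 1),
         math.sin(2 * math.pi * t / 12) + 0.05 * rng.gauss(0, 1)]
        for t in range(96)]
eof1 = leading_eof(data)
```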
Predicting flux decline of reverse osmosis membranes
Schippers, J.C.; Hanemaayer, J.H.; Smolders, C.A.; Kostense, A.
1981-01-01
A mathematical model predicting flux decline of reverse osmosis membranes due to colloidal fouling has been verified. This mathematical model is based on the theory of cake or gel filtration and the Modified Fouling Index (MFI). Research was conducted using artificial colloidal solutions and a
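The MFI underlying the model can be sketched as the slope of t/V versus V in the cake-controlled region of a constant-pressure filtration record. The record below is synthetic, generated from an assumed slope purely to illustrate the fit:

```python
# Modified Fouling Index sketch: for cake filtration at constant pressure,
# t/V is linear in V, and the MFI is the slope of that line (s/L^2 here).

def mfi_from_record(volumes, times):
    """Least-squares slope of t/V versus V over the record."""
    y = [t / v for t, v in zip(times, volumes)]
    n = len(volumes)
    mx = sum(volumes) / n
    my = sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(volumes, y))
    den = sum((xi - mx) ** 2 for xi in volumes)
    return num / den

# Synthetic record: t/V = a + MFI*V with a = 40 s/L and MFI = 5 s/L^2
true_mfi, a = 5.0, 40.0
volumes = [0.5 * k for k in range(1, 21)]            # cumulative volume, L
times = [v * (a + true_mfi * v) for v in volumes]    # filtration time, s
mfi = mfi_from_record(volumes, times)
```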
Self-powered neutron flux detector assembly
International Nuclear Information System (INIS)
Allan, C.J.; McIntyre, I.L.
1980-01-01
A self-powered neutron flux detector has both the central emitter electrode and its surrounding collector electrode made of Inconel 600. The lead cables may also be made of Inconel. Other nickel alloys, or iron, nickel, titanium, chromium, zirconium or their alloys may also be used for the electrodes
Radiation linewidth of flux-flow oscillators
DEFF Research Database (Denmark)
Koshelets, V.P.; Dmitriev, P.N.; Ermakov, A.B.
2001-01-01
(applied magnetic field) are taken. A profile of the FFO radiation line is measured in different regimes of FFO operation and compared to the theoretical models. A Lorentzian shape of the FFO line is observed both at Fiske steps (FSs) in the resonant regime and on the flux-flow step (FFS) at high voltages...
SLC positron source flux concentrator modulator
International Nuclear Information System (INIS)
de Lamare, J.; Kulikov, A.; Cassel, R.; Nesterov, V.
1991-06-01
The modulator for the SLC e+ source flux concentrator provides 16 kA in a 5 μs sinusoidal half wave current for a pure inductive load, at 120 Hz. The modulator incorporates 10 EEV CX1622 thyratrons in a switching network. It provides reliable operation with acceptable thyratron lifetime. 3 refs., 3 figs., 1 tab
Average fluxes from heterogeneous vegetated regions
Klaassen, W.
Using a surface-layer model, fluxes of heat and momentum have been calculated for flat regions with regularly spaced step changes in surface roughness and stomatal resistance. The distance between successive step changes is limited to 10 km in order to fill the gap between micro-meteorological
Models of Flux Tubes from Constrained Relaxation
Indian Academy of Sciences (India)
J. Astrophys. Astr. (2000) 21, 299-302. Models of Flux Tubes from Constrained Relaxation. A. Mangalam & V. Krishan, Indian Institute of Astrophysics, Koramangala, Bangalore 560 034, India. E-mail: mangalam@iiap.ernet.in; vinod@iiap.ernet.in. Abstract. We study the relaxation of a compressible plasma to ...
Optical magnetic flux generation in superconductor
Indian Academy of Sciences (India)
Abstract. The generation of the magnetic flux quanta inside the superconductors is studied as a new effect to destroy superconductivity using femtosecond (fs) laser. The vortices are successfully generated in the YBa2Cu3O7−δ thin film striplines by the fs laser. It is revealed that the vortex distribution in the strip reflects the fs ...
Physicochemical Flux and Phytoplankton diversity in Shagari ...
African Journals Online (AJOL)
2007-03-20
Physicochemical Flux and Phytoplankton diversity in Shagari Reservoir, Sokoto, Nigeria. 1 I.M. Magami, 1 T. Adamu and 2 A.A. Aliero. 1 Zoology Unit, Department of Biological Sciences, Usmanu Danfodiyo University, Sokoto, Nigeria. 2 Botany Unit, Department of Biological Sciences, Usmanu Danfodiyo ...
Examining gas flux responses to restoration
Wetlands play an important role in the flux of gases such as carbon dioxide, methane, and nitrous oxide. Wetland ecosystems are characterized by slow decomposition and, often, high productivity, making them net sinks of carbon dioxide. However, under some conditions, such as ti...
Modelling radiocesium fluxes in forest ecosystems
International Nuclear Information System (INIS)
Shaw, G.; Kliashtorin, A.; Mamikhin, S.; Shcheglov, A.; Rafferty, B.; Dvornik, A.; Zhuchenko, T.; Kuchma, N.
1996-01-01
Monitoring of radiocesium inventories and fluxes has been carried out in forest ecosystems in Ukraine, Belarus and Ireland to determine distributions and rates of migration. This information has been used to construct and calibrate mathematical models which are being used to predict the likely longevity of contamination of forests and forest products such as timber following the Chernobyl accident
Demystifying Electric Flux and Gauss's Law
McManus, Jeff
2017-01-01
Many physics students have experienced the difficulty of internalizing concepts in electrostatics. After studying concrete, measurable details in mechanics, they are challenged by abstract ideas such as electric fields, flux, Gauss's law, and electric potential. There are a few well-known hands-on activities that help students get experience with…
Mathematical algorithms for approximate reasoning
Murphy, John H.; Chay, Seung C.; Downs, Mary M.
1988-01-01
Most state-of-the-art expert system environments contain a single, often ad hoc, strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment containing a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To distinguish these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. The approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e., fuzzy logic), pessimistic reasoning (i.e., worst-case analysis), optimistic reasoning (i.e., best-case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
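Several of the combination rules named above reduce to simple probability formulas for two assertions with probabilities p and q. The mapping below (independence, mutual exclusivity, and fuzzy min/max for the maximum-overlap case) is one reading of the abstract's categories:

```python
# Probability combination rules for two assertions with probabilities p and q.

def or_independent(p, q):
    return p + q - p * q      # disjunction under statistical independence

def or_exclusive(p, q):
    return min(1.0, p + q)    # disjunction under mutual exclusivity

def and_independent(p, q):
    return p * q              # conjunction under statistical independence

def or_fuzzy(p, q):
    return max(p, q)          # fuzzy-logic disjunction (maximum overlap)

def and_fuzzy(p, q):
    return min(p, q)          # fuzzy-logic conjunction (maximum overlap)

combined = or_independent(0.6, 0.5)  # 0.6 + 0.5 - 0.3 = 0.8
```

The pessimistic (worst-case) and optimistic (best-case) modes correspond to taking the lower and upper bounds these rules place on the combined probability.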
What controls sediment flux in dryland channels?
Michaelides, K.; Singer, M. B.
2010-12-01
Theories for the development of longitudinal and grain size profiles in perennial fluvial systems are well developed, allowing for generalization of sediment flux and sorting in these fluvial systems over decadal to millennial time scales under different forcings (e.g., sediment supply, climate changes, etc). However, such theoretical frameworks are inadequate for understanding sediment flux in dryland channels subject to spatially and temporally discontinuous streamflow, where transport capacity is usually much lower than sediment supply. In such fluvial systems, channel beds are poorly sorted with weak vertical layering, poorly defined bar forms, minimal downstream fining, and straight longitudinal profiles. Previous work in dryland channels has documented sediment flux at higher rates than their humid counterparts once significant channel flow develops, pulsations in bed material transport under constant discharge, and oscillations in dryland channel width that govern longitudinal patterns in erosion and deposition. These factors point to less well appreciated controls on sediment flux in dryland valley floors that invite further study. This paper investigates the relative roles of hydrology, bed material grain size, and channel width on sediment flux rates in the Rambla de Nogalte in southeastern Spain. Topographic valley cross sections and hillslope and channel particle sizes were collected from an ephemeral-river reach. Longitudinal grain-size variation on the hillslopes and on the channel bed were analysed in order to determine the relationship between hillslope supply characteristics and channel grain-size distribution and longitudinal changes. Local fractional estimates of bed-material transport in the channel were calculated using a range of channel discharge scenarios in order to examine the effect of channel hydrology on sediment transport. Numerical modelling was conducted to investigate runoff connectivity from hillslopes to channel and to examine the