Conference on Large Scale Optimization
Hearn, D; Pardalos, P
1994-01-01
On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U. S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real-world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT&T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abro...
Large Scale Correlation Clustering Optimization
Bagon, Shai
2011-01-01
Clustering is a fundamental task in unsupervised learning. The focus of this paper is the Correlation Clustering functional, which combines positive and negative affinities between the data points. The contribution of this paper is twofold: (i) a theoretical analysis of the functional; (ii) new optimization algorithms which can cope with large scale problems (>100K variables) that are infeasible using existing methods. Our theoretical analysis provides a probabilistic generative interpretation for the functional, and justifies its intrinsic "model-selection" capability. Furthermore, we draw an analogy between optimizing this functional and the well known Potts energy minimization. This analogy allows us to suggest several new optimization algorithms, which exploit the intrinsic "model-selection" capability of the functional to automatically recover the underlying number of clusters. We compare our algorithms to existing methods on both synthetic and real data. In addition we suggest two new applications t...
Optimization of Large-Scale Structural Systems
DEFF Research Database (Denmark)
Jensen, F. M.
solutions to small problems with one or two variables to the optimization of large structures such as bridges, ships and offshore structures. The methods used for solving these problems have evolved from classical differential calculus and the calculus of variations to very advanced numerical techniques...
Optimizing Large-Scale ODE Simulations
Mulansky, Mario
2014-01-01
We present a strategy to speed up Runge-Kutta-based ODE simulations of large systems with nearest-neighbor coupling. We identify the cache/memory bandwidth as the crucial performance bottleneck. To reduce the required bandwidth, we introduce a granularity in the simulation and identify the optimal cluster size in a performance study. This leads to a considerable performance increase and transforms the algorithm from bandwidth bound to CPU bound. By additionally employing SIMD instructions we are able to boost the efficiency even further. In the end, a total performance increase of up to a factor three is observed when using cache optimization and SIMD instructions compared to a standard implementation. All simulation codes are written in C++ and made publicly available. By using the modern C++ libraries Boost.odeint and Boost.SIMD, these optimizations can be implemented with minimal programming effort.
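The granularity idea above can be sketched compactly. The following is a minimal Python/NumPy illustration of a Runge-Kutta step for a nearest-neighbour coupled system with a cluster size parameter; the paper's actual implementations are in C++ with Boost.odeint and Boost.SIMD, the coupling term here is an assumed diffusive one, and in the C++ setting the stages themselves are evaluated cluster-by-cluster (with halo exchange) so each cluster stays cache-resident, whereas this short sketch only blocks the final combine.

```python
import numpy as np

def deriv(x):
    # assumed nearest-neighbour diffusive coupling, fixed (zero-flux free) ends
    d = np.zeros_like(x)
    d[1:-1] = x[:-2] - 2.0 * x[1:-1] + x[2:]
    return d

def rk4_step_clustered(x, dt, cluster=64):
    # classical RK4 stages; a cache-optimized C++ version would evaluate
    # these stage-by-stage per cluster instead of over the whole array
    k1 = deriv(x)
    k2 = deriv(x + 0.5 * dt * k1)
    k3 = deriv(x + 0.5 * dt * k2)
    k4 = deriv(x + dt * k3)
    out = np.empty_like(x)
    for start in range(0, len(x), cluster):
        s = slice(start, start + cluster)   # process one cluster at a time
        out[s] = x[s] + dt / 6.0 * (k1[s] + 2 * k2[s] + 2 * k3[s] + k4[s])
    return out
```

Because the per-element combine is linear, the clustered and unclustered results agree exactly; the cluster size only changes the memory access pattern.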
Optimal Dispatching of Large-scale Water Supply System
Institute of Scientific and Technical Information of China (English)
Anonymous
2003-01-01
This paper deals with the use of optimal control techniques in large-scale water distribution networks. According to the network characteristics and actual state of the water supply system in China, the implicit model, which may be solved by utilizing the hierarchical optimization method, is established. In particular, based on the analysis of a water supply system containing variable-speed pumps, a software tool has been developed successfully. The application of this model to the city of Shenyang (China) is compared to an experiential strategy. The results of this study show that the developed model is a very promising optimization method to control large-scale water supply systems.
Topology Optimization of Large Scale Stokes Flow Problems
DEFF Research Database (Denmark)
Aage, Niels; Poulsen, Thomas Harpsøe; Gersborg-Hansen, Allan
2008-01-01
This note considers topology optimization of large scale 2D and 3D Stokes flow problems using parallel computations. We solve problems with up to 1,125,000 elements in 2D and 128,000 elements in 3D on a shared memory computer consisting of Sun UltraSparc IV CPUs.
Optimal management of large scale aquifers under uncertainty
Ghorbanidehno, H.; Kokkinaki, A.; Kitanidis, P. K.; Darve, E. F.
2016-12-01
Water resources systems, and especially groundwater reservoirs, are a valuable resource that is often being endangered by contamination and over-exploitation. Optimal control techniques can be applied for groundwater management to ensure the long-term sustainability of this vulnerable resource. Linear Quadratic Gaussian (LQG) control is an optimal control method that combines a Kalman filter for real time estimation with a linear quadratic regulator for dynamic optimization. The LQG controller can be used to determine the optimal controls (e.g. pumping schedule) upon receiving feedback about the system from incomplete noisy measurements. However, applying LQG control for systems of large dimension is computationally expensive. This work presents the Spectral Linear Quadratic Gaussian (SpecLQG) control, a new fast LQG controller that can be used for large scale problems. SpecLQG control combines the Spectral Kalman filter, which is a fast Kalman filter algorithm, with an efficient low rank LQR, and provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification and optimal control for linear and weakly non-linear systems. The computational cost of SpecLQG controller scales linearly with the number of unknowns, a great improvement compared to the quadratic cost of basic LQG. We demonstrate the accuracy and computational efficiency of SpecLQG control using two applications: first, a linear validation case for pumping schedule management in a small homogeneous confined aquifer; and second, a larger scale nonlinear case with unknown heterogeneities in aquifer properties and boundary conditions.
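The two LQG building blocks named above, a linear quadratic regulator and a Kalman filter, can be sketched generically. This is a textbook Python/NumPy illustration of the basic (not spectral/low-rank) machinery, with placeholder system matrices; it is not the SpecLQG algorithm itself.

```python
import numpy as np

def lqr_gain(A, B, Q, R, iters=500):
    # iterate the discrete-time Riccati equation to a fixed point,
    # then return the optimal state-feedback gain K (control u = -K x)
    P = Q.copy()
    for _ in range(iters):
        BT_P = B.T @ P
        K = np.linalg.solve(R + BT_P @ B, BT_P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

def kalman_update(x_hat, P, y, C, Rn):
    # measurement update: fuse a noisy observation y = C x + v
    S = C @ P @ C.T + Rn
    K = P @ C.T @ np.linalg.inv(S)
    x_new = x_hat + K @ (y - C @ x_hat)
    P_new = (np.eye(len(x_hat)) - K @ C) @ P
    return x_new, P_new
```

The quadratic cost of basic LQG that the abstract mentions comes from the dense matrix operations above; SpecLQG replaces them with spectral and low-rank counterparts.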
BILGO: Bilateral greedy optimization for large scale semidefinite programming
Hao, Zhifeng
2013-10-03
Many machine learning tasks (e.g. metric and manifold learning problems) can be formulated as convex semidefinite programs. To enable the application of these tasks on a large-scale, scalability and computational efficiency are considered as desirable properties for a practical semidefinite programming algorithm. In this paper, we theoretically analyze a new bilateral greedy optimization (denoted BILGO) strategy in solving general semidefinite programs on large-scale datasets. As compared to existing methods, BILGO employs a bilateral search strategy during each optimization iteration. In such an iteration, the current semidefinite matrix solution is updated as a bilateral linear combination of the previous solution and a suitable rank-1 matrix, which can be efficiently computed from the leading eigenvector of the descent direction at this iteration. By optimizing for the coefficients of the bilateral combination, BILGO reduces the cost function in every iteration until the KKT conditions are fully satisfied, thus, it tends to converge to a global optimum. In fact, we prove that BILGO converges to the global optimal solution at a rate of O(1/k), where k is the iteration counter. The algorithm thus successfully combines the efficiency of conventional rank-1 update algorithms and the effectiveness of gradient descent. Moreover, BILGO can be easily extended to handle low rank constraints. To validate the effectiveness and efficiency of BILGO, we apply it to two important machine learning tasks, namely Mahalanobis metric learning and maximum variance unfolding. Extensive experimental results clearly demonstrate that BILGO can solve large-scale semidefinite programs efficiently.
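A toy version of the bilateral rank-1 update can be sketched as follows. The objective below is an assumed simple quadratic over symmetric matrices, chosen only so that the bilateral coefficients have a closed form; it stands in for, and is much simpler than, the semidefinite programs treated in the paper.

```python
import numpy as np

def f(X, M):
    # assumed toy convex objective: half the squared Frobenius distance to M
    return 0.5 * np.sum((X - M) ** 2)

def bilgo_step(X, M):
    grad = X - M
    # rank-1 direction from the leading eigenvector of the descent direction
    _, V = np.linalg.eigh(-grad)          # eigh: eigenvalues in ascending order
    v = V[:, -1]
    D = np.outer(v, v)                    # PSD rank-1 matrix
    # bilateral combination X' = a*X + b*D with a, b >= 0 preserves PSD-ness;
    # for this quadratic f the best (a, b) solve a 2x2 linear system
    G = np.array([[np.sum(X * X), np.sum(X * D)],
                  [np.sum(X * D), np.sum(D * D)]])
    rhs = np.array([np.sum(X * M), np.sum(D * M)])
    a, b = np.linalg.solve(G + 1e-12 * np.eye(2), rhs)
    a, b = max(a, 0.0), max(b, 0.0)       # project onto the feasible signs
    cand = a * X + b * D
    return cand if f(cand, M) <= f(X, M) else X
```

The key structural point mirrors the paper: each iterate stays PSD by construction, and only a leading eigenvector (not a full eigendecomposition of the iterate) drives the update.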
Practical Optimal Control of Large-scale Water Distribution Network
Institute of Scientific and Technical Information of China (English)
Lv Mou(吕谋); Song Shuang
2004-01-01
According to the network characteristics and actual state of the water supply system in China, the implicit model, which can be solved by the hierarchical optimization method, was established. In particular, based on the analysis of a water supply system containing variable-speed pumps, a software tool has been developed successfully. The application of this model to the city of Hangzhou (China) was compared to an experiential strategy. The results of this study showed that the developed model is a promising optimization method to control large-scale water supply systems.
Large-Scale Optimization for Bayesian Inference in Complex Systems
Energy Technology Data Exchange (ETDEWEB)
Willcox, Karen [MIT]; Marzouk, Youssef [MIT]
2013-11-12
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to
Optimal Multilevel Control for Large Scale Interconnected Systems
Directory of Open Access Journals (Sweden)
Ahmed M. A. Alomar
2014-04-01
A mathematical model of the finishing mill, as an example of a large-scale interconnected dynamical system, is presented. First, the system response due to disturbance alone is presented. Then, the control technique applied to the finishing hot rolling steel mill is optimal multilevel control using state feedback. An optimal controller is developed based on the integrated system model, but due to the complexity of the controllers and the tremendous computational effort involved, a multilevel technique is used in designing and implementing the controllers. The basis of the multilevel technique is described and a computational algorithm is discussed for the control of the finishing mill system. To reduce the mass storage, memory requirements and computational time of the processor, a sub-optimal multilevel technique is applied to design the controllers of the finishing mill. A comparison between these controllers is presented, together with conclusions.
Geospatial Optimization of Siting Large-Scale Solar Projects
Energy Technology Data Exchange (ETDEWEB)
Macknick, J.; Quinby, T.; Caulfield, E.; Gerritsen, M.; Diffendorfer, J.; Haines, S.
2014-03-01
Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
Near optimal bispectrum estimators for large-scale structure
Schmittfull, Marcel; Seljak, Uroš
2014-01-01
Clustering of large-scale structure provides significant cosmological information through the power spectrum of density perturbations. Additional information can be gained from higher-order statistics like the bispectrum, especially to break the degeneracy between the linear halo bias $b_1$ and the amplitude of fluctuations $\sigma_8$. We propose new simple, computationally inexpensive bispectrum statistics that are near optimal for specific applications such as bias determination. Corresponding to the Legendre decomposition of nonlinear halo bias and gravitational coupling at second order, these statistics are given by the cross-spectra of the density with three quadratic fields: the squared density, a tidal term, and a shift term. For halos and galaxies the first two have associated nonlinear bias terms $b_2$ and $b_{s^2}$, respectively, while the shift term has none in the absence of velocity bias (valid in the $k \rightarrow 0$ limit). Thus the linear bias $b_1$ is best determined by the shift cross-spec...
Near optimal bispectrum estimators for large-scale structure
Schmittfull, Marcel; Baldauf, Tobias; Seljak, Uroš
2015-02-01
Clustering of large-scale structure provides significant cosmological information through the power spectrum of density perturbations. Additional information can be gained from higher-order statistics like the bispectrum, especially to break the degeneracy between the linear halo bias b1 and the amplitude of fluctuations σ8. We propose new simple, computationally inexpensive bispectrum statistics that are near optimal for specific applications such as bias determination. Corresponding to the Legendre decomposition of nonlinear halo bias and gravitational coupling at second order, these statistics are given by the cross-spectra of the density with three quadratic fields: the squared density, a tidal term, and a shift term. For halos and galaxies the first two have associated nonlinear bias terms b2 and bs2 , respectively, while the shift term has none in the absence of velocity bias (valid in the k →0 limit). Thus the linear bias b1 is best determined by the shift cross-spectrum, while the squared density and tidal cross-spectra mostly tighten constraints on b2 and bs2 once b1 is known. Since the form of the cross-spectra is derived from optimal maximum-likelihood estimation, they contain the full bispectrum information on bias parameters. Perturbative analytical predictions for their expectation values and covariances agree with simulations on large scales, k ≲0.09 h /Mpc at z =0.55 with Gaussian R =20 h-1 Mpc smoothing, for matter-matter-matter, and matter-matter-halo combinations. For halo-halo-halo cross-spectra the model also needs to include corrections to the Poisson stochasticity.
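The quadratic-field cross-spectra at the heart of these estimators can be illustrated in one dimension. This Python/NumPy sketch computes the cross-spectrum of a toy density field with its own square, the analogue of the squared-density statistic that carries the b2 information; the paper works with 3D fields, smoothing, and the tidal and shift terms as well, so everything below is an illustrative simplification.

```python
import numpy as np

def cross_spectrum(a, b):
    # single-realization estimate of Re <A(k) B*(k)> for real 1-D fields
    A, B = np.fft.rfft(a), np.fft.rfft(b)
    return (A * np.conj(B)).real / len(a)

# toy Gaussian "density" field (a stand-in for the smoothed overdensity delta)
rng = np.random.default_rng(1)
delta = rng.standard_normal(512)
delta -= delta.mean()

# cross-spectrum of delta with its mean-subtracted square: the 1-D analogue
# of the <delta^2, delta> statistic in the Legendre decomposition above
sq = delta ** 2 - np.mean(delta ** 2)
p_sq_delta = cross_spectrum(sq, delta)
```

For a single Gaussian realization this cross-spectrum scatters around zero; in the cosmological setting it is the gravitationally induced (and bias-induced) non-Gaussianity that gives it a nonzero expectation value.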
Optimal Wind Energy Integration in Large-Scale Electric Grids
Albaijat, Mohammad H.
The major concern in electric grid operation is operating in the most economical and reliable fashion to ensure affordability and continuity of electricity supply. This dissertation investigates the effects of such challenges, which affect electric grid reliability and economic operations. These challenges are: 1. Congestion of transmission lines, 2. Transmission lines expansion, 3. Large-scale wind energy integration, and 4. Phasor Measurement Units (PMUs) optimal placement for highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, it is necessary to evaluate expansion of transmission line capacity with methods that ensure optimal electric grid operation. Therefore, the expansion of transmission line capacity must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion in electric grids. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission line systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. Traditional questions requiring answers are "Where" to add, "How much transmission line capacity" to add, and "Which voltage level". Because of electric grid deregulation, transmission lines expansion is more complicated as it is now open to investors, whose main interest is to generate revenue, to build new transmission lines. Adding new transmission capacity will help the system to relieve the transmission system congestion, create
Segment-Based Predominant Learning Swarm Optimizer for Large-Scale Optimization.
Yang, Qiang; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Deng, Jeremiah D; Li, Yun; Zhang, Jun
2016-10-24
Large-scale optimization has become a significant yet challenging area in evolutionary computation. To solve this problem, this paper proposes a novel segment-based predominant learning swarm optimizer (SPLSO), in which several predominant particles guide the learning of a particle. First, a segment-based learning strategy is proposed to randomly divide the whole set of dimensions into segments. During the update, variables in different segments are evolved by learning from different exemplars while the ones in the same segment are evolved by the same exemplar. Second, to accelerate search speed and enhance search diversity, a predominant learning strategy is also proposed, which lets several predominant particles guide the update of a particle with each predominant particle responsible for one segment of dimensions. By combining these two learning strategies together, SPLSO evolves all dimensions simultaneously and possesses competitive exploration and exploitation abilities. Extensive experiments are conducted on two large-scale benchmark function sets to investigate the influence of each algorithmic component, and comparisons with several state-of-the-art meta-heuristic algorithms dealing with large-scale problems demonstrate the competitive efficiency and effectiveness of the proposed optimizer. Further, the scalability of the optimizer on problems with dimensionality up to 2000 is also verified.
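The two strategies can be sketched in Python. The random segment partition and the per-segment exemplar update below follow the verbal description above; the parameter values (inertia w, acceleration c) and the way exemplars are supplied are simplified assumptions, not the authors' exact scheme.

```python
import random

def random_segments(dim, n_segments, rng):
    # randomly divide the dimension indices into disjoint segments
    idx = list(range(dim))
    rng.shuffle(idx)
    cuts = sorted(rng.sample(range(1, dim), n_segments - 1))
    return [idx[a:b] for a, b in zip([0] + cuts, cuts + [dim])]

def splso_update(x, v, exemplars, segments, w=0.7, c=1.5, rng=None):
    # each segment of dimensions learns from its own predominant exemplar:
    # a PSO-style velocity update applied segment-by-segment
    rng = rng or random.Random(0)
    x, v = list(x), list(v)
    for seg, ex in zip(segments, exemplars):
        for d in seg:
            v[d] = w * v[d] + c * rng.random() * (ex[d] - x[d])
            x[d] += v[d]
    return x, v
```

Note that the partition covers every dimension exactly once, so one pass updates the whole position vector, matching the "evolves all dimensions simultaneously" property claimed above.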
Optimization of Survivability Analysis for Large-Scale Engineering Networks
Poroseva, S V
2012-01-01
Engineering networks fall into the category of large-scale networks with heterogeneous nodes such as sources and sinks. The survivability analysis of such networks requires the analysis of the connectivity of the network components for every possible combination of faults to determine a network response to each combination of faults. From the computational complexity point of view, the problem belongs to the class of exponential time problems at least. Partially, the problem complexity can be reduced by mapping the initial topology of a complex large-scale network with multiple sources and multiple sinks onto a set of smaller sub-topologies with multiple sources and a single sink connected to the network of sources by a single link. In this paper, the mapping procedure is applied to the Florida power grid.
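The exhaustive fault-combination analysis described above can be sketched on a toy graph. The functionality criterion used here (every surviving sink must still reach some source) is one plausible reading of "network response", chosen for illustration; node names and the graph are invented.

```python
from itertools import combinations

def survives(nodes, edges, sources, sinks, failed):
    # the network is considered functional if every surviving sink
    # can still reach at least one surviving source
    alive = set(nodes) - set(failed)
    adj = {n: set() for n in alive}
    for u, v in edges:
        if u in alive and v in alive:
            adj[u].add(v)
            adj[v].add(u)
    # flood-fill (BFS/DFS) from all surviving sources at once
    frontier = [s for s in sources if s in alive]
    seen = set(frontier)
    while frontier:
        n = frontier.pop()
        for m in adj[n] - seen:
            seen.add(m)
            frontier.append(m)
    return all(s in seen for s in sinks if s in alive)

def survivability(nodes, edges, sources, sinks, n_faults):
    # exhaustive enumeration of fault combinations -- exponential in
    # general, which is exactly the complexity the paper seeks to reduce
    combos = list(combinations(nodes, n_faults))
    ok = sum(survives(nodes, edges, sources, sinks, c) for c in combos)
    return ok / len(combos)
```

The paper's mapping onto single-sink sub-topologies replaces the full enumeration over the whole grid with enumerations over much smaller pieces.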
Using Agent Base Models to Optimize Large Scale Network for Large System Inventories
Shameldin, Ramez Ahmed; Bowling, Shannon R.
2010-01-01
The aim of this paper is to use Agent-Based Models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for reducing capital expenses. The models use computational algorithms and procedures implemented in Matlab to simulate agent-based behavior, and run on computing clusters that provide high-performance parallel computation. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.
SOLVING TRUST REGION PROBLEM IN LARGE SCALE OPTIMIZATION
Institute of Scientific and Technical Information of China (English)
Bing-sheng He
2000-01-01
This paper presents a new method for solving the basic problem in the “model trust region” approach to large scale minimization: compute a vector x such that (1/2)x^T H x + c^T x is minimized, subject to the constraint ||x||_2 <= a. The method is a combination of the conjugate gradient (CG) method and a projection and contraction (PC) method. The first (CG) method, with x0 = 0 as the starting point, either directly offers a solution of the problem or, as soon as the norm of an iterate exceeds a, gives a suitable starting point and a favourable choice of a crucial scaling parameter for the second (PC) method. Some numerical examples are given, which indicate that the method is applicable.
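The CG phase described above can be sketched as a Steihaug-style truncated CG: run CG from x0 = 0 on the quadratic and stop on the ball's boundary as soon as an iterate would leave it (or negative curvature appears). This Python/NumPy sketch covers only that phase, not the projection and contraction method that the paper then applies.

```python
import numpy as np

def _to_boundary(x, p, radius):
    # positive step t with ||x + t*p||_2 = radius (larger quadratic root)
    a, b, c = p @ p, 2.0 * (x @ p), x @ x - radius ** 2
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

def truncated_cg(H, c, radius, tol=1e-10, max_iter=200):
    # CG for min (1/2) x^T H x + c^T x started at x0 = 0;
    # stop on the boundary if an iterate would leave ||x|| <= radius
    x = np.zeros_like(c, dtype=float)
    r = -np.asarray(c, dtype=float)      # residual of H x = -c
    p = r.copy()
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break                        # interior solution found
        Hp = H @ p
        pHp = p @ Hp
        if pHp <= 0.0:                   # negative curvature direction
            return x + _to_boundary(x, p, radius) * p
        alpha = (r @ r) / pHp
        if np.linalg.norm(x + alpha * p) >= radius:
            return x + _to_boundary(x, p, radius) * p
        r_new = r - alpha * Hp
        beta = (r_new @ r_new) / (r @ r)
        x, r, p = x + alpha * p, r_new, r_new + beta * p
    return x
```

When the constraint is inactive this reduces to plain CG on H x = -c; when it is active, the returned boundary point is where a PC-type method could take over.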
Optimization of large scale food production using Lean Manufacturing principles
DEFF Research Database (Denmark)
Engelund, Eva Høy; Friis, Alan; Breum, Gitte
2009-01-01
This paper discusses how the production principles of Lean Manufacturing (Lean) can be applied in a large-scale meal production. Lean principles are briefly presented, followed by a field study of how a kitchen at a Danish hospital has implemented Lean in the daily production. In the kitchen...... not be negatively affected by the rationalisation of production procedures. The field study shows that Lean principles can be applied in meal production and can result in increased production efficiency and systematic improvement of product quality without negative effects on the working environment. The results...... show that Lean can be applied and used to manage the production of meals in the kitchen....
Mathematical programming methods for large-scale topology optimization problems
DEFF Research Database (Denmark)
Rojas Labanda, Susana
, and at the same time, reduce the number of function evaluations. Nonlinear optimization methods, such as sequential quadratic programming and interior point solvers, have almost not been embraced by the topology optimization community. Thus, this work is focused on the introduction of this kind of second......This thesis investigates new optimization methods for structural topology optimization problems. The aim of topology optimization is finding the optimal design of a structure. The physical problem is modelled as a nonlinear optimization problem. This powerful tool was initially developed...... for the classical minimum compliance problem. Two of the state-of-the-art optimization algorithms are investigated and implemented for this structural topology optimization problem. A Sequential Quadratic Programming (TopSQP) and an interior point method (TopIP) are developed exploiting the specific mathematical...
Large-Scale PDE-Constrained Optimization in Applications
Hazra, Subhendu Bikash
2010-01-01
Dealing with the simulation based optimization problems, this title presents the systematic development of the methods and algorithms. It covers the time dependent optimization problems with applications in environmental engineering, and also deals with steady state optimization problems, in which the PDEs are solved
Large-scale Optimization of Contoured Beam Reflectors and Reflectarrays
DEFF Research Database (Denmark)
Borries, Oscar; Sørensen, Stig B.; Jørgensen, Erik
2016-01-01
Designing a contoured beam reflector or performing a direct optimization of a reflectarray requires a mathematical optimization procedure to determine the optimum design of the antenna. A popular approach, used in the market-leading TICRA software POS, can result in computation times on the order...
Improved Large-Scale Process Cooling Operation through Energy Optimization
Directory of Open Access Journals (Sweden)
Kriti Kapoor
2013-11-01
This paper presents a study based on real plant data collected from chiller plants at the University of Texas at Austin. It highlights the advantages of operating the cooling processes based on an optimal strategy. A multi-component model is developed for the entire cooling process network. The model is used to formulate and solve a multi-period optimal chiller loading problem, posed as a mixed-integer nonlinear programming (MINLP) problem. The results showed that an average energy savings of 8.57% could be achieved using optimal chiller loading as compared to the historical energy consumption data from the plant. The scope of the optimization problem was expanded by including a chilled water thermal storage in the cooling system. The effect of optimal thermal energy storage operation on the net electric power consumption by the cooling system was studied. The results include a hypothetical scenario where the campus purchases electricity at wholesale market prices and an optimal hour-by-hour operating strategy is computed to use the thermal energy storage tank.
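The structure of the optimal chiller loading problem, binary on/off decisions combined with a continuous load split, can be illustrated with a brute-force toy for a single period and two chillers. The chiller power curves and all numbers below are invented for illustration and are not from the plant data or the paper's MINLP formulation.

```python
from itertools import product

# hypothetical chiller curves: electric power = a + b*q + c*q**2,
# where q is the cooling load served (illustrative units)
CHILLERS = [
    {"a": 100.0, "b": 0.8, "c": 0.0020, "q_max": 500.0},
    {"a": 60.0,  "b": 0.9, "c": 0.0045, "q_max": 400.0},
]

def power(ch, q):
    return ch["a"] + ch["b"] * q + ch["c"] * q * q

def best_loading(demand, step=1.0):
    # integer part of the MINLP: which chillers run (mask);
    # continuous part: how the demand is split between them (grid search)
    best_p, best_plan = float("inf"), None
    for mask in product([0, 1], repeat=len(CHILLERS)):
        caps = [c["q_max"] * m for c, m in zip(CHILLERS, mask)]
        if sum(caps) < demand:
            continue                      # this on/off pattern is infeasible
        q0 = max(0.0, demand - caps[1])   # chiller 1 must cover the rest
        while q0 <= min(caps[0], demand) + 1e-9:
            q1 = demand - q0
            p = sum(power(c, q) * m
                    for c, q, m in zip(CHILLERS, (q0, q1), mask))
            if p < best_p:
                best_p, best_plan = p, (mask, (q0, q1))
            q0 += step
    return best_p, best_plan
```

Even this toy shows the characteristic trade-off: running a second chiller adds its idle power (the a term), so at moderate demand the optimum may load a single machine rather than split.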
Dynamic Modeling, Optimization, and Advanced Control for Large Scale Biorefineries
DEFF Research Database (Denmark)
Prunescu, Remus Mihail
with building a plantwide model-based optimization layer, which searches for optimal values regarding the pretreatment temperature, enzyme dosage in liquefaction, and yeast seed in fermentation such that profit is maximized [7]. When biomass is pretreated, by-products are also created that affect the downstream...... processes acting as inhibitors in enzymatic hydrolysis and fermentation. Therefore, the biorefinery is treated in an integrated manner capturing the trade-offs between the conversion steps. Sensitivity and uncertainty analysis is also performed in order to identify the modeling bottlenecks and which...
Optimization of nanofountain probe microfabrication enables large-scale nanopatterning
Safi, Asmahan; Kang, Wonmo; Czapleski, David; Divan, Ralu; Moldovan, Nicolae; Espinosa, Horacio D.
2013-12-01
A technological gap in nanomanufacturing has prevented the translation of many nanomaterial discoveries into real-world commercialized products. Bridging this gap requires a paradigm shift in methods for fabricating nanoscale devices in a reliable and repeatable fashion. Here we present the optimized fabrication of a robust and scalable nanoscale delivery platform, the nanofountain probe (NFP), for parallel direct-write of functional materials. Microfabrication of a new generation of NFP was realized with the aim of increasing the uniformity of the device structure. Optimized probe geometry was integrated into the design and fabrication process by modifying the precursor mask dimensions and by using an isotropic selective dry etching of the outer shell that defines the protrusion area. Probes with well-conserved sharp tips and controlled protrusion lengths were obtained. Sealing effectiveness of the channels was optimized. A conformal tetraethyl orthosilicate based oxide layer increased the sealing efficacy while minimizing the required thickness. A compensation scheme based on the residual stresses in each layer was implemented to minimize bending of the cantilever after releasing the device. The device was tested by patterning ferritin catalyst arrays on silicon dioxide with sub-100 nm resolution. The optimized probes increased the control over the parallel patterning resolution which enables manufacturing of ordered arrays of nanomaterials.
Hybrid Nested Partitions and Math Programming Framework for Large-scale Combinatorial Optimization
2010-03-31
optimization problems: 1) exact algorithms and 2) metaheuristic algorithms. This project will integrate concepts from these two technologies to develop...generic optimization frameworks to find provably good solutions to large-scale discrete optimization problems often encountered in many real applications...integer programming decomposition approaches, such as Dantzig-Wolfe decomposition and Lagrangian relaxation, and metaheuristics such as the Nested
Optimal Experimental Design for Large-Scale Bayesian Inverse Problems
Ghattas, Omar
2014-01-06
We develop a Bayesian framework for the optimal experimental design of the shock tube experiments which are being carried out at the KAUST Clean Combustion Research Center. The unknown parameters are the pre-exponential parameters and the activation energies in the reaction rate expressions. The control parameters are the initial mixture composition and the temperature. The approach is based on first building a polynomial based surrogate model for the observables relevant to the shock tube experiments. Based on these surrogates, a novel MAP based approach is used to estimate the expected information gain in the proposed experiments, and to select the best experimental set-ups yielding the optimal expected information gains. The validity of the approach is tested using synthetic data generated by sampling the PC surrogate. We finally outline a methodology for validation using actual laboratory experiments, and extending experimental design methodology to the cases where the control parameters are noisy.
On the Order Optimality of Large-scale Underwater Networks
Shin, Won-Yong; Medard, Muriel; Stojanovic, Milica; Tarokh, Vahid
2011-01-01
Capacity scaling laws are analyzed in an underwater acoustic network with $n$ regularly located nodes on a square, in which both bandwidth and received signal power can be limited significantly. A narrow-band model is assumed where the carrier frequency is allowed to scale as a function of $n$. In the network, we characterize an attenuation parameter that depends on the frequency scaling as well as the transmission distance. Cut-set upper bounds on the throughput scaling are then derived in both extended and dense networks having unit node density and unit area, respectively. It is first analyzed that under extended networks, the upper bound is inversely proportional to the attenuation parameter, thus resulting in a highly power-limited network. Interestingly, it is seen that the upper bound for extended networks is intrinsically related to the attenuation parameter but not the spreading factor. On the other hand, in dense networks, we show that there exists either a bandwidth or power limitation, or both, ac...
Optimization of large-scale fabrication of dielectric elastomer transducers
DEFF Research Database (Denmark)
Hassouneh, Suzan Sager
to the corrugations, the films were able to adhere in different configurations (back-to-back, front-to-back and front-to-front). The first approach involved adhering PDMS to PDMS (back-to-back), for which two routes were followed. The first route involved using an aminosilane as an adhesion agent after modifying...... and as received CNTs and modified CNTs were investigated. The unmodified CNTs were mixed with an ionic liquid, and two dispersion methods were investigated. The first method involved the ultrasonication of CNT/IL, which showed that conductivity increased in line with increasing CNT at concentrations lower than 5...... grafted covalently to the CNT surface with poly(methacryloyl polydimethylsiloxane), resulting in the obtained conductivities being comparable to commercially available Elastosil LR3162, even at low functionalisation. The optimized methods allow new processes for the production of DE film with corrugations...
Three dimensional large scale aerodynamic shape optimization based on shape calculus
Schmidt, Stephan; Gauger, Nicolas; Ilic, Caslav; Schulz, Volker
2011-01-01
Large-scale three-dimensional aerodynamic shape optimization based on the compressible Euler equations is considered. Shape calculus is used to derive an exact surface formulation of the gradients, enabling the computation of shape gradient information for each surface mesh node without having to calculate further mesh sensitivities. Special attention is paid to the applicability to large-scale three-dimensional problems like the optimization of an Onera M6 wing or a complete blended-wing–bod...
Hybrid constraint programming and metaheuristic methods for large scale optimization problems
2011-01-01
This work presents hybrid Constraint Programming (CP) and metaheuristic methods for the solution of Large Scale Optimization Problems; it aims at integrating concepts and mechanisms from metaheuristic methods into a CP-based tree search environment in order to exploit the advantages of both approaches. The modeling and solution of large scale combinatorial optimization problems is a topic which has attracted the interest of many researchers in the Operations Research field; combinatori...
Modified Augmented Lagrange Multiplier Methods for Large-Scale Chemical Process Optimization
Institute of Scientific and Technical Information of China (English)
(no author listed)
2001-01-01
Chemical process optimization can be described as large-scale nonlinear constrained minimization. The modified augmented Lagrange multiplier methods (MALMM) for large-scale nonlinear constrained minimization are studied in this paper. The Lagrange function contains the penalty terms on equality and inequality constraints and the methods can be applied to solve a series of bound constrained sub-problems instead of a series of unconstrained sub-problems. The steps of the methods are examined in full detail. Numerical experiments are made for a variety of problems, from small to very large-scale, which show the stability and effectiveness of the methods in large-scale problems.
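As a concrete sketch, a generic augmented Lagrangian iteration for a single equality constraint looks as follows (a textbook scheme for illustration, not the paper's exact MALMM formulation; the crude inner gradient-descent loop stands in for the bound-constrained subproblem solver a real implementation would use):

```python
import numpy as np

def grad(fun, x, eps=1e-6):
    """Central finite-difference gradient."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (fun(x + e) - fun(x - e)) / (2 * eps)
    return g

def augmented_lagrangian(f, h, x0, lam=0.0, rho=10.0, outer=20, inner=500):
    """Minimize f(x) subject to h(x) = 0 via a sequence of penalized subproblems."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        # Subproblem: minimize f(x) + lam*h(x) + (rho/2)*h(x)^2
        # (crude gradient descent stands in for a proper inner solver).
        for _ in range(inner):
            L = lambda z: f(z) + lam * h(z) + 0.5 * rho * h(z) ** 2
            x = x - 0.01 * grad(L, x)
        lam = lam + rho * h(x)  # first-order multiplier update
    return x, lam

# Example: minimize x^2 + y^2 subject to x + y = 1;
# the optimum is x = y = 0.5 with multiplier lam = -1.
x, lam = augmented_lagrangian(lambda z: z[0] ** 2 + z[1] ** 2,
                              lambda z: z[0] + z[1] - 1.0,
                              x0=[0.0, 0.0])
```

The multiplier update drives the constraint violation to zero without letting the penalty weight grow unboundedly, which is the practical advantage over a pure penalty method.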
Institute of Scientific and Technical Information of China (English)
(no author listed)
2000-01-01
The performance of analytical derivative and sparse matrix techniques applied to a traditional dense sequential quadratic programming (SQP) method is studied, and a strategy utilizing those techniques is presented. Computational results on two typical chemical optimization problems demonstrate significant enhancement in efficiency, which shows this strategy is promising and suitable for large-scale process optimization problems.
Energy Technology Data Exchange (ETDEWEB)
Gotzig, B. [Laboratoire d'Electrotechnique de Grenoble (France); Schneider Electric S.A., Grenoble (France)]; Hadjsaid, N.; Feuillet, R. [Laboratoire d'Electrotechnique de Grenoble (France)]; Jeannot, R. [Schneider Electric S.A., Grenoble (France)]
1998-12-31
Real-time optimization of large scale distribution systems requires computationally efficient algorithms. In this paper a fast general branch exchange algorithm is proposed. Depending on the objective function being optimized, both line loss reduction in the normal state and restoration of de-energized loads can be carried out. Tests were carried out on a real large scale distribution network. They demonstrate that the method is fast and can be used in distribution management systems in real time. (author)
Ge, Hongwei; Sun, Liang; Tan, Guozhen; Chen, Zheng; Chen, C L Philip
2017-09-01
Large scale optimization problems arise in diverse fields. Decomposing the large scale problem into small scale subproblems regarding the variable interactions and optimizing them cooperatively are critical steps in an optimization algorithm. To explore the variable interactions and perform the problem decomposition tasks, we develop a two stage variable interaction reconstruction algorithm. A learning model is proposed to explore part of the variable interactions as prior knowledge. A marginalized denoising model is proposed to construct the overall variable interactions using the prior knowledge, with which the problem is decomposed into small scale modules. To optimize the subproblems and relieve premature convergence, we propose a cooperative hierarchical particle swarm optimization framework, where the operators of contingency leadership, interactional cognition, and self-directed exploitation are designed. Finally, we conduct theoretical analysis for further understanding of the proposed algorithm. The analysis shows that the proposed algorithm can guarantee converging to the global optimal solutions if the problems are correctly decomposed. Experiments are conducted on the CEC2008 and CEC2010 benchmarks. The results demonstrate the effectiveness, convergence, and usefulness of the proposed algorithm.
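The interaction-exploration step can be illustrated with a minimal differential-grouping-style test (a simplified stand-in for the paper's two-stage reconstruction algorithm; the test function, base point, and tolerance are illustrative): two variables interact if the effect of perturbing one changes when the other is also perturbed.

```python
import itertools

def interacts(f, n, i, j, base=0.0, delta=1.0, tol=1e-9):
    """Return True if the effect of perturbing x_i changes when x_j moves."""
    x = [base] * n
    def ev(changes):
        y = x[:]
        for k, v in changes:
            y[k] = v
        return f(y)
    d1 = ev([(i, base + delta)]) - ev([])                        # move x_i alone
    d2 = ev([(i, base + delta), (j, base + delta)]) - ev([(j, base + delta)])
    return abs(d1 - d2) > tol                                    # differs => interaction

# f(x) = (x0 + x1)^2 + x2^2 : x0 and x1 interact, x2 is separable.
f = lambda x: (x[0] + x[1]) ** 2 + x[2] ** 2
pairs = [(i, j) for i, j in itertools.combinations(range(3), 2)
         if interacts(f, 3, i, j)]
# pairs == [(0, 1)]
```

Grouping variables by such detected interactions yields the independent subproblems that a cooperative optimizer can then treat separately.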
Model-based plant-wide optimization of large-scale lignocellulosic bioethanol plants
DEFF Research Database (Denmark)
Prunescu, Remus Mihail; Blanke, Mogens; Jakobsen, Jon Geest
2017-01-01
with respect to maximum economic profit of a large scale biorefinery plant using a systematic model-based plantwide optimization methodology. The following key process parameters are identified as decision variables: pretreatment temperature, enzyme dosage in enzymatic hydrolysis, and yeast loading per batch...... in fermentation. The plant is treated in an integrated manner taking into account the interactions and trade-offs between the conversion steps. A sensitivity and uncertainty analysis follows at the optimal solution considering both model and feed parameters. It is found that the optimal point is more sensitive...
Optimization of large-scale heterogeneous system-of-systems models.
Energy Technology Data Exchange (ETDEWEB)
Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)
2012-01-01
Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.
Efficient Interpretation of Large-Scale Real Data by Static Inverse Optimization
Zhang, Hong; Ishikawa, Masumi
We have already proposed a methodology for static inverse optimization to interpret real data from a viewpoint of optimization. In this paper we propose a method for efficiently generating constraints by divide-and-conquer to interpret large-scale data by static inverse optimization. It radically decreases the computational cost of generating constraints by deleting non-Pareto optimal data from the given data. To evaluate the effectiveness of the proposed method, simulation experiments using 3-D artificial data are carried out. As an application to real data, the criterion functions underlying the decision making of about 5,000 tenants living along the Yamanote and Soubu-Chuo lines in Tokyo are estimated, providing an interpretation of rented housing data from a viewpoint of optimization.
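The deletion of non-Pareto optimal data can be sketched as follows (assuming minimization of every criterion; the paper's actual criteria and orientation may differ): a point is kept only if no other point is at least as good in all criteria and strictly better in one.

```python
def pareto_filter(points):
    """Keep only points not dominated by any other (minimizing all criteria)."""
    def dominates(q, p):
        return (all(qi <= pi for qi, pi in zip(q, p))
                and any(qi < pi for qi, pi in zip(q, p)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

data = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
front = pareto_filter(data)
# front == [(1, 5), (2, 2), (4, 1)]
```

Only the surviving front then enters constraint generation, which is what cuts the computational cost for large data sets.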
A Dynamic Optimization Strategy for the Operation of Large Scale Seawater Reverse Osmosis System
Directory of Open Access Journals (Sweden)
Aipeng Jiang
2014-01-01
In this work, a strategy was proposed for the efficient solution of the dynamic model of an SWRO system. Since the dynamic model is formulated as a set of differential-algebraic equations, simultaneous strategies based on collocation on finite elements were used to transform the dynamic optimization problem into a large scale nonlinear programming problem, named Opt2. Then, simulation of the RO process and storage tanks was carried out element by element and step by step with fixed control variables. All the obtained values of these variables were then used as the initial values for the optimal solution of the SWRO system. Finally, in order to accelerate computing efficiency while keeping enough accuracy in the solution of Opt2, a simple but efficient finite element refinement rule was used to reduce the scale of Opt2. The proposed strategy was applied to a large scale SWRO system with 8 RO plants and 4 storage tanks as a case study. Computing results show that the proposed strategy is quite effective for optimal operation of the large scale SWRO system; the optimal problem can be successfully solved within tens of iterations and several minutes when the load and other operating parameters fluctuate.
Stochastic Optimization of Large Scale Multi-Reservoir Systems subject to environmental flow demands
Fernandes Marques, Guilherme; Tilmant, Amaury
2014-05-01
Among the environmental impacts caused by dams, the alteration of flow regimes is one of the most critical to river ecosystems given its influence in long river reaches and its continuous pattern. While the reoperation of reservoir systems to recover some of the natural flow regime is expected to mitigate the impacts, associated costs and losses will be imposed on different power plants depending on flows, power plant and reservoir characteristics, the system's topology and other aspects. In a large scale reservoir system this economic impact is not trivial, and it should be properly evaluated to identify coordinated operating solutions that avoid penalizing a single reservoir. This paper applies an efficient stochastic dual dynamic programming method to reservoir optimization subject to environmental flow targets of specific magnitude and return period, whose effects on fish recruitment are already known. This allows the evaluation of the economic and power generation impacts in a large scale hydropower system when subject to environmental flow demands. The present paper contributes methods and results that are useful in (a) quantifying the foregone hydropower and revenues resulting from meeting a specific environmental flow demand, (b) identifying the distribution and reallocation of the foregone hydropower and revenue across a large scale system, and (c) identifying optimal reservoir operating strategies to meet environmental flow demands in a large scale multi-reservoir system.
Institute of Scientific and Technical Information of China (English)
Hu Xiaoping; Xiao Yougang; Wang Guangbin
2006-01-01
Combined with the second rotary kiln of the Alumina Factory of Great Wall Aluminum Company, the mechanical characteristics of a statically indeterminate large-scale rotary kiln with variable cross-sections are analyzed. In order to adjust the running axis of the rotary kiln, taking the force equilibrium of the rollers and the minimum relative axis deflection as the optimization goals, a multi-objective optimization model of the mechanical running conditions of the rotary kiln is set up. The mechanical running conditions of the second rotary kiln after multi-objective optimization adjustment are compared with those before adjustment and after routine adjustment. The comparison shows that multi-objective optimization adjustment can make the axis as straight as possible and distribute the kiln loads evenly.
Directory of Open Access Journals (Sweden)
Jian Wang
2014-01-01
A large-scale parallel-unit seawater reverse osmosis desalination plant contains many reverse osmosis (RO) units. If the operating conditions change, these RO units will not work at the optimal design points which were computed before the plant was built. The operational optimization problem (OOP) of the plant is to find an operating schedule that minimizes the total running cost when the change happens. In this paper, the OOP is modelled as a mixed-integer nonlinear programming problem. A two-stage differential evolution algorithm is proposed to solve this OOP. Experimental results show that the proposed method is satisfactory in solution quality.
Hasegawa, Mikio; Tran, Ha Nguyen; Miyamoto, Goh; Murata, Yoshitoshi; Harada, Hiroshi; Kato, Shuzo
We propose a neurodynamical approach to a large-scale optimization problem in Cognitive Wireless Clouds, in which a huge number of mobile terminals with multiple different air interfaces autonomously utilize the most appropriate infrastructure wireless networks, by sensing available wireless networks, selecting the most appropriate one, and reconfiguring themselves with seamless handover to the target networks. To deal with such a cognitive radio network, game theory has been applied in order to analyze the stability of the dynamical systems consisting of the mobile terminals' distributed behaviors, but it is not a tool for globally optimizing the state of the network. As a natural optimization dynamical system model suitable for large-scale complex systems, we introduce the neural network dynamics which converges to an optimal state since its property is to continually decrease its energy function. In this paper, we apply such neurodynamics to the optimization problem of radio access technology selection. We compose a neural network that solves the problem, and we show that it is possible to improve total average throughput simply by using distributed and autonomous neuron updates on the terminal side.
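The energy-decreasing property that this approach relies on can be demonstrated with a minimal discrete Hopfield-type network (an illustrative sketch, not the authors' formulation for radio access technology selection): with symmetric weights and zero self-coupling, each asynchronous neuron update can only keep or decrease the energy E(x) = -1/2 x^T W x - b^T x.

```python
import numpy as np

def energy(W, b, x):
    return -0.5 * x @ W @ x - b @ x

def run(W, b, x, sweeps=10):
    """Asynchronous binary neuron updates; records the energy after each one."""
    energies = [energy(W, b, x)]
    for _ in range(sweeps):
        for i in range(len(x)):
            x[i] = 1.0 if W[i] @ x + b[i] > 0 else 0.0  # local field rule
            energies.append(energy(W, b, x))
    return x, energies

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
W = A + A.T                    # symmetric coupling ...
np.fill_diagonal(W, 0.0)       # ... with no self-coupling
b = rng.standard_normal(n)
x0 = rng.integers(0, 2, n).astype(float)
x, energies = run(W, b, x0)
# The recorded energies form a non-increasing sequence.
```

Encoding an objective (such as total throughput) into W and b turns this monotone energy descent into distributed optimization by purely local neuron updates, which is the property the paper exploits on the terminal side.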
Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms
Hasanov, Khalid
2014-03-04
© 2014, Springer Science+Business Media New York. Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in 1990s a system with few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems. The idea is to reduce the communication cost by introducing hierarchy and hence more parallelism in the communication scheme. We apply this approach to SUMMA, the state-of-the-art parallel algorithm for matrix–matrix multiplication, and demonstrate both theoretically and experimentally that the modified Hierarchical SUMMA significantly improves the communication cost and the overall performance on large-scale platforms.
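The benefit of introducing hierarchy can be illustrated with a toy communication-cost model (an assumed linear broadcast cost, not the paper's detailed SUMMA analysis): if broadcasting a block among p processes costs (p-1) message transmissions, splitting the processes into p/s groups of size s replaces one large broadcast with an inter-group and an intra-group phase.

```python
import math

def flat_cost(p, m, beta=1.0):
    """Linear-cost model: broadcast a block of size m among p processes."""
    return (p - 1) * beta * m

def hierarchical_cost(p, s, m, beta=1.0):
    """Two phases: a broadcast among p/s group leaders, then within each group."""
    return ((p // s - 1) + (s - 1)) * beta * m

p, m = 4096, 1.0
s = int(math.sqrt(p))              # s = 64 minimizes the two-level cost
print(flat_cost(p, m))             # 4095.0
print(hierarchical_cost(p, s, m))  # 126.0
```

Under this model the optimal group size is s = sqrt(p), reducing the cost from O(p) to O(sqrt(p)), which conveys why hierarchy pays off at large scale even though the real SUMMA cost analysis is more involved.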
Integration of Large-Scale Optimization and Game Theory for Sustainable Water Quality Management
Tsao, J.; Li, J.; Chou, C.; Tung, C.
2009-12-01
Sustainable water quality management requires total mass control of pollutant discharge based on both the principle of not exceeding the assimilative capacity of a river and equity among generations. The stream assimilative capacity is the carrying capacity of a river for the maximum waste load without violating the water quality standard, and the spirit of total mass control is to optimize the waste load allocation among subregions. For the goal of sustainable watershed development, this study uses large-scale optimization theory to optimize profit and find the marginal values of loadings as a reference for a fair price, and then determines the best way to reach equilibrium through water quality trading for the whole watershed. Game theory, in turn, plays an important role in maximizing both individual and overall profits. This study proves that a water quality trading market is viable in some situations, and also shows that all participants obtain a better outcome.
Optimally amplified large-scale streaks and drag reduction in turbulent pipe flow.
Willis, Ashley P; Hwang, Yongyun; Cossu, Carlo
2010-09-01
The optimal amplifications of small coherent perturbations within turbulent pipe flow are computed for Reynolds numbers up to one million. Three standard frameworks are considered: the optimal growth of an initial condition, the response to harmonic forcing and the Karhunen-Loève (proper orthogonal decomposition) analysis of the response to stochastic forcing. Similar to analyses of the turbulent plane channel flow and boundary layer, it is found that streaks elongated in the streamwise direction can be greatly amplified from quasistreamwise vortices, despite linear stability of the mean flow profile. The most responsive perturbations are streamwise uniform and, for sufficiently large Reynolds number, the most responsive azimuthal mode is of wave number m=1. The response of this mode increases with the Reynolds number. A secondary peak, where m corresponds to azimuthal wavelengths λ_{θ}^{+}≈70-90 in wall units, also exists in the amplification of initial conditions and in premultiplied response curves for the forced problems. Direct numerical simulations at Re=5300 confirm that the forcing of m=1, 2, and 4 optimal structures results in the large response of coherent large-scale streaks. For moderate amplitudes of the forcing, low-speed streaks become narrower and more energetic, whereas high-speed streaks become more spread. It is further shown that drag reduction can be achieved by forcing steady large-scale structures, as anticipated from earlier investigations. Here the energy balance is calculated. At Re=5300 it is shown that, due to the small power required by the forcing of optimal structures, a net power saving of the order of 10% can be achieved following this approach, which could be relevant for practical applications.
Thermal System Analysis and Optimization of Large-Scale Compressed Air Energy Storage (CAES)
Directory of Open Access Journals (Sweden)
Zhongguang Fu
2015-08-01
As an important solution to issues regarding peak load and renewable energy resources on grids, large-scale compressed air energy storage (CAES) power generation technology has recently become a popular research topic in the area of large-scale industrial energy storage. At present, the combination of high-expansion-ratio turbines with advanced gas turbine technology is an important breakthrough in energy storage technology. In this study, a new gas turbine power generation system is coupled with current CAES technology. Moreover, the thermodynamic cycle system is optimized by calculating the parameters of the thermodynamic system. Results show that the thermal efficiency of the new system increases by at least 5% over that of the existing system.
A modular approach to large-scale design optimization of aerospace systems
Hwang, John T.
Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft
Directory of Open Access Journals (Sweden)
Rashida Adeeb Khanum
2016-02-01
JADE is an adaptive scheme of the nature-inspired algorithm Differential Evolution (DE). It performed considerably better on a set of well-studied benchmark test problems. In this paper, we evaluate the performance of a new JADE with two external archives for unconstrained continuous large-scale global optimization problems, labeled Reflected Adaptive Differential Evolution with Two External Archives (RJADE/TA). The only archive of JADE stores failed solutions. In contrast, the proposed second archive stores superior solutions at regular intervals of the optimization process to avoid premature convergence towards local optima. The superior solutions which are sent to the archive are reflected by new potential solutions. At the end of the search process, the best solution is selected from the second archive and the current population. The performance of the RJADE/TA algorithm is then extensively evaluated on two test beds: first, on the 28 benchmark functions constructed for the 2013 Congress on Evolutionary Computation special session, and second, on ten benchmark problems from the CEC2010 Special Session and Competition on Large-Scale Global Optimization. Experimental results demonstrated a very competitive performance of the algorithm.
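The two-archive idea can be sketched with a bare-bones DE/rand/1/bin loop that periodically banks the best individual in a second archive and picks the final answer from archive and population together (JADE's parameter adaptation and the reflection step are omitted; all parameter values here are illustrative):

```python
import random

def de_with_archive(f, dim, n=30, gens=200, F=0.5, CR=0.9, period=50, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    archive = []                                   # second archive of superior solutions
    for g in range(1, gens + 1):
        for i in range(n):
            a, b, c = rng.sample([p for k, p in enumerate(pop) if k != i], 3)
            jrand = rng.randrange(dim)             # guarantees one mutated component
            trial = [a[j] + F * (b[j] - c[j])
                     if rng.random() < CR or j == jrand else pop[i][j]
                     for j in range(dim)]
            if f(trial) <= f(pop[i]):              # greedy one-to-one selection
                pop[i] = trial
        if g % period == 0:
            archive.append(min(pop, key=f))        # bank the current best
    return min(archive + pop, key=f)               # best of archive and population

best = de_with_archive(lambda x: sum(v * v for v in x), dim=5)
```

Banking elite solutions outside the evolving population preserves good regions of the search space even if the population later drifts toward a local optimum.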
Merkx, E P J; Ten Kate, O M; van der Kolk, E
2017-06-12
The phenomenon of self-absorption is by far the largest influential factor in the efficiency of luminescent solar concentrators (LSCs), but also the most challenging one to capture computationally. In this work we present a model using a multiple-generation light transport (MGLT) approach to quantify light transport through single-layer luminescent solar concentrators of arbitrary shape and size. We demonstrate that MGLT offers a significant speed increase over Monte Carlo (raytracing) when optimizing the luminophore concentration in large LSCs and more insight into light transport processes. Our results show that optimizing luminophore concentration in a lab-scale device does not yield an optimal optical efficiency after scaling up to realistically sized windows. Each differently sized LSC therefore has to be optimized individually to obtain maximal efficiency. We show that, for strongly self-absorbing LSCs with a high quantum yield, parasitic self-absorption can turn into a positive effect at very high absorption coefficients. This is due to a combination of increased light trapping and stronger absorption of the incoming sunlight. We conclude that, except for scattering losses, MGLT can compute all aspects in light transport through an LSC accurately and can be used as a design tool for building-integrated photovoltaic elements. This design tool is therefore used to calculate many building-integrated LSC power conversion efficiencies.
The Substitution Secant/Finite Difference Method for Large Scale Sparse Unconstrained Optimization
Institute of Scientific and Technical Information of China (English)
Hong-wei Zhang; Jun-xiang Li
2005-01-01
This paper studies a substitution secant/finite difference (SSFD) method for solving large scale sparse unconstrained optimization problems. This method is a combination of a secant method and a finite difference method, which depends on a consistent partition of the columns of the lower triangular part of the Hessian matrix. A q-superlinear convergence result and an r-convergence rate estimate show that this method has good local convergence properties. The numerical results show that this method may be competitive with some currently used algorithms.
Towards Optimal One Pass Large Scale Learning with Averaged Stochastic Gradient Descent
Xu, Wei
2011-01-01
For large scale learning problems, it is desirable if we can obtain the optimal model parameters by going through the data in only one pass. Polyak and Juditsky (1992) showed that asymptotically the test performance of the simple average of the parameters obtained by stochastic gradient descent (SGD) is as good as that of the parameters which minimize the empirical cost. However, to our knowledge, despite its optimal asymptotic convergence rate, averaged SGD (ASGD) received little attention in recent research on large scale learning. One possible reason is that it may take a prohibitively large number of training samples for ASGD to reach its asymptotic region for most real problems. In this paper, we present a finite sample analysis for the method of Polyak and Juditsky (1992). Our analysis shows that it indeed usually takes a huge number of samples for ASGD to reach its asymptotic region for improperly chosen learning rate. More importantly, based on our analysis, we propose a simple way to properly set lea...
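The effect of Polyak-Juditsky averaging can be illustrated on a simple least-squares problem (an illustrative sketch with an assumed decaying step size and synthetic data, not the paper's finite-sample analysis or learning-rate rule): run plain SGD in one pass over the data and also maintain the running average of the iterates.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 20000
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = X @ w_true + 0.5 * rng.standard_normal(n)

w = np.zeros(d)        # SGD iterate
w_avg = np.zeros(d)    # running (Polyak-Ruppert) average of the iterates
for t in range(n):     # a single pass over the data
    g = (X[t] @ w - y[t]) * X[t]       # gradient of 1/2 (x^T w - y)^2
    w -= 0.2 / (1 + t) ** 0.5 * g      # eta_t = 0.2 / sqrt(t + 1)
    w_avg += (w - w_avg) / (t + 1)

err_last = np.linalg.norm(w - w_true)
err_avg = np.linalg.norm(w_avg - w_true)
# err_avg is typically noticeably smaller than err_last
```

Averaging smooths out the noise in the individual SGD iterates at essentially no extra cost per step, which is why one-pass ASGD can match the performance of the empirical risk minimizer asymptotically.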
Design and Optimization of Fast Switching Valves for Large Scale Digital Hydraulic Motors
DEFF Research Database (Denmark)
Roemer, Daniel Beck
The present thesis is on the design, analysis and optimization of fast switching valves for digital hydraulic motors with high power ratings. The need for such high power motors originates in the potential use of hydrostatic transmissions in wind turbine drive trains, as digital hydraulic machines...... have been shown to improve the overall efficiency and efficient operation range compared to traditional hydraulic machines. Digital hydraulic motors use electronically controlled independent seat valves connected to the pressure chambers, which must be fast acting and exhibit low pressure losses...... of seat valves suitable for large scale digital hydraulic motors and detailed analysis methods for the pressure chambers of such machines. In addition, modeling methods of seat valves within this field have been developed, and a design method utilizing these models including optimization of subdomains has......
Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems
Energy Technology Data Exchange (ETDEWEB)
Ghattas, Omar [The University of Texas at Austin
2013-10-15
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
On large-scale nonlinear programming techniques for solving optimal control problems
Energy Technology Data Exchange (ETDEWEB)
Faco, J.L.D.
1994-12-31
The formulation of decision problems by Optimal Control Theory allows the consideration of their dynamic structure and parameter estimation. This paper deals with techniques for choosing directions in the iterative solution of discrete-time optimal control problems. A unified formulation incorporates nonlinear performance criteria and dynamic equations, time delays, bounded state and control variables, a free planning horizon and a variable initial state vector. In general such problems are characterized by a large number of variables, especially when arising from the discretization of continuous-time optimal control or calculus of variations problems. In a GRG context the staircase structure of the Jacobian matrix of the dynamic equations is exploited in the choice of basic and superbasic variables and when changes of basis occur along the process. The search directions of the bound-constrained nonlinear programming problem in the reduced space of the superbasic variables are computed by large-scale NLP techniques. A modified Polak-Ribière conjugate gradient method and a limited-storage quasi-Newton BFGS method are analyzed, and modifications to deal with the bounds on the variables are suggested based on projected gradient devices with specific line searches. Some practical models are presented for electric generation planning and fishery management, and the application of the code GRECO - Gradient REduit pour la Commande Optimale - is discussed.
Directory of Open Access Journals (Sweden)
B. Y. Qu
2017-01-01
Full Text Available Portfolio optimization problems involve selecting different assets to invest in so as to maximize the overall return and simultaneously minimize the overall risk. The complexity of the optimal asset allocation problem increases with the number of assets available to invest in, and the optimization becomes computationally challenging when there are more than a few hundred assets to select from. To reduce the complexity of large-scale portfolio optimization, this paper proposes two asset preselection procedures that consider the return and risk of individual assets and pairwise correlations to remove assets that are unlikely to be selected into any portfolio. With these preselection methods, the number of assets considered for inclusion in a portfolio can be increased to thousands. To test the effectiveness of the proposed methods, a Normalized Multiobjective Evolutionary Algorithm based on Decomposition (NMOEA/D) and several other commonly used multiobjective evolutionary algorithms are applied and compared. Six experiments with different settings are carried out. The experimental results show that, with the proposed methods, the simulation time is reduced while return-risk trade-off performance is significantly improved. Meanwhile, NMOEA/D outperforms the other compared algorithms on all experiments in the comparative analysis.
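A minimal sketch of such a preselection filter; the two rules used here (drop dominated assets, then prune near-duplicate highly correlated pairs) are illustrative simplifications, not the paper's exact procedures:

```python
def preselect_assets(returns, risks, corr, corr_max=0.95):
    """Drop assets unlikely to enter any efficient portfolio:
    (1) keep an asset only if no other asset dominates it
        (strictly higher return AND strictly lower risk), and
    (2) among highly correlated pairs, keep the one with the
        better return-to-risk ratio."""
    n = len(returns)
    keep = set(range(n))
    # Rule 1: remove dominated assets.
    for i in range(n):
        for j in range(n):
            if i != j and returns[j] > returns[i] and risks[j] < risks[i]:
                keep.discard(i)
                break
    # Rule 2: prune near-duplicates by pairwise correlation.
    for i in range(n):
        for j in range(n):
            if i < j and i in keep and j in keep and corr[i][j] > corr_max:
                worse = i if returns[i] / risks[i] < returns[j] / risks[j] else j
                keep.discard(worse)
    return sorted(keep)
```

The surviving index set is then handed to the (much more expensive) multiobjective optimizer, which is where the reported speedup comes from.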
Optimal Capacity Allocation of Large-Scale Wind-PV-Battery Units
Directory of Open Access Journals (Sweden)
Kehe Wu
2014-01-01
Full Text Available An optimal capacity allocation for large-scale wind-photovoltaic (PV)-battery units is proposed. First, an output power model was established according to meteorological conditions. Then, a wind-PV-battery unit was connected to the power grid as a power-generation unit with a rated capacity under a fixed coordinated operation strategy. Second, the utilization rate of renewable energy sources and maximum wind-PV complementarity were considered, and the objective function of full life-cycle net present cost (NPC) was calculated through a hybrid iteration/adaptive hybrid genetic algorithm (HIAGA). The optimal capacity ratio among wind generator, PV array, and battery device was also calculated simultaneously. A simulation was conducted based on the wind-PV-battery unit in Zhangbei, China. Results showed that a wind-PV-battery unit could effectively minimize the NPC of power-generation units under stable grid-connected operation. Finally, a sensitivity analysis of the wind-PV-battery unit demonstrated that the optimization result is closely related to the available wind-solar resources and government support. Regions with rich wind resources and a reasonable government energy policy can improve the economic efficiency of their power-generation units.
Tavakoli, Ruhollah
2010-01-01
The structure of many real-world optimization problems includes minimization of a nonlinear (or quadratic) functional subject to bound and singly linear constraints (in the form of either equality or bilateral inequality), which are commonly called continuous knapsack problems. Since there are efficient methods to solve large-scale bound-constrained nonlinear programs, it is desirable to adapt these methods to knapsack problems while preserving their efficiency and convergence theories. The goal of this paper is to introduce a general framework to extend a box-constrained optimization solver to solve knapsack problems. This framework includes two main ingredients, both O(n) methods in computational cost and required memory: projection onto the knapsack constraint, and null-space manipulation of the related linear constraint. The main focus of this work is on the extension of the Hager-Zhang active set algorithm (SIAM J. Optim. 2006, pp. 526--557). The main reasons for this ch...
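The projection ingredient can be illustrated with a simple bisection on the KKT multiplier. This transparent variant costs O(n log(1/tol)) rather than the O(n) of the method the abstract refers to, and it assumes a feasible problem:

```python
def knapsack_project(y, a, b, lo, hi, tol=1e-10):
    """Euclidean projection of y onto {x : lo <= x <= hi, sum(a_i*x_i) = b}.
    KKT conditions give x_i = clip(y_i + lam*a_i, lo_i, hi_i) for a scalar
    multiplier lam; the constraint residual g(lam) is monotone nondecreasing
    in lam, so lam is found by bisection."""
    def x_of(lam):
        return [min(max(yi + lam * ai, l), h)
                for yi, ai, l, h in zip(y, a, lo, hi)]

    def g(lam):  # residual of the linear constraint
        return sum(ai * xi for ai, xi in zip(a, x_of(lam))) - b

    # Bracket the root by doubling (loops terminate for feasible problems).
    lam_lo, lam_hi = -1.0, 1.0
    while g(lam_lo) > 0:
        lam_lo *= 2
    while g(lam_hi) < 0:
        lam_hi *= 2
    while lam_hi - lam_lo > tol:
        mid = 0.5 * (lam_lo + lam_hi)
        if g(mid) < 0:
            lam_lo = mid
        else:
            lam_hi = mid
    return x_of(0.5 * (lam_lo + lam_hi))
```

With a = (1, ..., 1), lo = 0, hi = 1 and b = 1 this reduces to projection onto the capped simplex, a common special case.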
Directory of Open Access Journals (Sweden)
Mr. Yogesh Rai
2011-09-01
Full Text Available Many methods have been researched to prolong sensor network lifetime using mobile technologies. In mobile sink research, track-based methods and anchor-point-based methods are the representative operation schemes for mobile sinks. However, the existing methods decrease Quality of Service (QoS) and create routing hotspots in the vicinity of the mobile sink. In large-scale wireless sensor networks, clustering is an effective technique for improving the utilization of limited energy and prolonging the network lifetime. However, the problem of unbalanced energy dissipation exists in the multi-hop clustering model, where cluster heads closer to the sink have to relay heavier traffic and consume more energy than farther nodes. In this paper we analyze several aspects of the optimal clustering architecture for maximizing lifetime in large-scale wireless sensor networks. We also provide some analytical concepts for energy-aware head rotation and routing protocols to further balance the energy consumption among all nodes.
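A toy sketch of the energy-aware head rotation idea: each round, the node with the most residual energy becomes cluster head and pays the higher relaying cost, so the role rotates and drain is balanced. The cost constants and cluster layout are hypothetical:

```python
def rotate_cluster_heads(clusters, energy):
    """One round of energy-aware head rotation: in each cluster, elect the
    node with the largest residual energy as head. `clusters` maps a cluster
    id to its member node ids; `energy` maps node id to residual energy."""
    return {cid: max(members, key=lambda n: energy[n])
            for cid, members in clusters.items()}

def spend_round(heads, clusters, energy, head_cost=5.0, member_cost=1.0):
    """Charge the per-round energy cost: heads relay traffic and pay more."""
    for cid, members in clusters.items():
        for n in members:
            energy[n] -= head_cost if n == heads[cid] else member_cost
```

After one round the previous head has lost more energy than the members, so the next election picks a different node, which is exactly the balancing effect the abstract argues for.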
Zhang, Yong-Feng; Chiang, Hsiao-Dong
2016-06-20
A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.
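For reference, a plain global-best PSO; the paper's consensus-based, Trust-Tech-assisted methodology layers additional machinery (consensus grouping, Trust-Tech refinement of local optima) on top of this basic scheme:

```python
import random

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=1):
    """Minimal global-best particle swarm optimizer (plain PSO only)."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                 # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]          # global best
    w, c1, c2 = 0.72, 1.49, 1.49          # commonly used coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
            fx = f(X[i])
            if fx < pbest[i]:
                P[i], pbest[i] = X[i][:], fx
                if fx < gbest:
                    G, gbest = X[i][:], fx
    return G, gbest

# Sphere function: global minimum 0 at the origin.
best_x, best_f = pso(lambda x: sum(v * v for v in x), dim=5)
```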
Parallel Optimization of Polynomials for Large-scale Problems in Stability and Control
Kamyar, Reza
In this thesis, we focus on some of the NP-hard problems in control theory. Thanks to the converse Lyapunov theory, these problems can often be modeled as optimization over polynomials. To avoid the problem of intractability, we establish a trade off between accuracy and complexity. In particular, we develop a sequence of tractable optimization problems --- in the form of Linear Programs (LPs) and/or Semi-Definite Programs (SDPs) --- whose solutions converge to the exact solution of the NP-hard problem. However, the computational and memory complexity of these LPs and SDPs grow exponentially with the progress of the sequence - meaning that improving the accuracy of the solutions requires solving SDPs with tens of thousands of decision variables and constraints. Setting up and solving such problems is a significant challenge. The existing optimization algorithms and software are only designed to use desktop computers or small cluster computers --- machines which do not have sufficient memory for solving such large SDPs. Moreover, the speed-up of these algorithms does not scale beyond dozens of processors. This, in fact, is why we seek parallel algorithms for setting up and solving large SDPs on large clusters and/or supercomputers. We propose parallel algorithms for stability analysis of two classes of systems: 1) Linear systems with a large number of uncertain parameters; 2) Nonlinear systems defined by polynomial vector fields. First, we develop a distributed parallel algorithm which applies Pólya's and/or Handelman's theorems to some variants of parameter-dependent Lyapunov inequalities with parameters defined over the standard simplex. The result is a sequence of SDPs which possess a block-diagonal structure. We then develop a parallel SDP solver which exploits this structure in order to map the computation, memory and communication to a distributed parallel environment. Numerical tests on a supercomputer demonstrate the ability of the algorithm to
MULTICRITERIA PROBLEM OF FINDING THE OPTIMAL PATHS FOR LARGE-SCALE TRANSPORT SYSTEM
Directory of Open Access Journals (Sweden)
Pavlov D. A.
2015-11-01
Full Text Available This article explores multicriteria problems that arise in organizing routes in a large-scale transport management system. As the mathematical tool for constructing the model we use prefractal graphs, which naturally reflect the communication structure of a transport system and capture two of its important features: locality and differentiation. Locality is provided by the creation of internal routes (city, district-wide, etc.). Differentiation is understood as the division of routes into intraregional, interregional and international. The objective reduces to covering a prefractal graph by simple paths that intersect on edges and nodes. On the set of feasible solutions, a vector criterion function with certain criteria is defined. In terms of the transport system, these criteria have concrete substantive interpretations, allowing transport routes to be designed with the features of the system in mind. In this article, we construct polynomial algorithms for finding solutions that are optimal with respect to certain criteria; for the criteria that are not optimized, lower and upper bounds on the allocated routes are given. For all the given algorithms, estimates of computational complexity are derived and proved, confirming the advantage of prefractal and fractal graph methods over classical graph-theoretic methods.
Panja, Debabrata
2007-01-01
We present a new statistical method to optimally link local weather extremes to large-scale atmospheric circulation structures. The method is illustrated using July-August daily mean temperature at 2m height (T2m) time-series over the Netherlands and 500 hPa geopotential height (Z500) time-series over the Euroatlantic region of the ECMWF reanalysis dataset (ERA40). The method identifies patterns in the Z500 time-series that optimally describe, in a precise mathematical sense, the relationship with local warm extremes in the Netherlands. Two patterns are identified; the most important one corresponds to a blocking high pressure system leading to subsidence and calm, dry and sunny conditions over the Netherlands. The second one corresponds to a rare, easterly flow regime bringing warm, dry air into the region. The patterns are robust; they are also identified in shorter subsamples of the total dataset. The method is generally applicable and might prove useful in evaluating the performance of climate models in s...
SWAP-Assembler 2: Optimization of De Novo Genome Assembler at Large Scale
Energy Technology Data Exchange (ETDEWEB)
Meng, Jintao; Seo, Sangmin; Balaji, Pavan; Wei, Yanjie; Wang, Bingqiang; Feng, Shengzhong
2016-08-16
In this paper, we analyze and optimize the most time-consuming steps of the SWAP-Assembler, a parallel genome assembler, so that it can scale to a large number of cores for huge genomes with sequencing data ranging from terabytes to petabytes in size. According to the performance analysis results, the most time-consuming steps are input parallelization, k-mer graph construction, and graph simplification (edge merging). For input parallelization, the input data is divided into virtual fragments of nearly equal size, and the start and end positions of each fragment are automatically aligned to the beginning of a read. In k-mer graph construction, to improve communication efficiency, the message size is kept constant between any two processes by increasing the number of nucleotides processed per round in proportion to the number of processes. Memory usage is also decreased because only a small part of the input data is processed in each round. In graph simplification, the communication protocol reduces the number of communication loops from four to two and decreases the idle communication time. The optimized assembler is denoted SWAP-Assembler 2 (SWAP2). In our experiments using a 4-terabyte dataset from the 1000 Genomes Project (the largest dataset ever used for assembly) on the supercomputer Mira, the results show that SWAP2 scales to 131,072 cores with an efficiency of 40%. We also compared our work with both the HipMer assembler and the SWAP-Assembler. On the 300-gigabyte Yanhuang dataset, SWAP2 shows a 3X speedup and 4X better scalability compared with the HipMer assembler and is 45 times faster than the SWAP-Assembler. The SWAP2 software is available at https://sourceforge.net/projects/swapassembler.
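The input-parallelization step can be sketched as follows: cut the input into nearly equal fragments, then move each cut forward to the next record boundary so no read is split between fragments. A FASTA-style format (records starting with '>') stands in here for the assembler's actual read handling:

```python
def split_at_records(text, n_parts):
    """Split `text` (FASTA-style: records start with '>') into n_parts
    fragments of nearly equal size, moving each cut forward to the next
    record boundary so no read straddles two fragments."""
    size = len(text)
    cuts = [0]
    for k in range(1, n_parts):
        pos = k * size // n_parts
        nxt = text.find("\n>", pos)   # next record start at or after the cut
        cuts.append(size if nxt == -1 else nxt + 1)
    cuts.append(size)
    # Drop empty fragments in case two cuts snapped to the same boundary.
    return [text[a:b] for a, b in zip(cuts, cuts[1:]) if a < b]
```

Every fragment then starts at a record boundary and the concatenation of all fragments reproduces the input exactly, so each process can parse its fragment independently.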
A primal-dual interior point method for large-scale free material optimization
DEFF Research Database (Denmark)
Weldeyesus, Alemseged Gebrehiwot; Stolpe, Mathias
2015-01-01
optimization problem is a nonlinear semidefinite program with many small matrix inequalities for which a special-purpose optimization method should be developed. The objective of this article is to propose an efficient primal-dual interior point method for FMO that can robustly and accurately solve large...
DEFF Research Database (Denmark)
Seyyed Sakha, Masoud; Shaker, Hamid Reza
2017-01-01
expensive. The computational burden is significant in particular for large-scale systems. In this paper, we develop a new technique for placing sensor and actuator in large-scale systems by using Restricted Genetic Algorithm (RGA). The RGA is a kind of genetic algorithm which is developed specifically...
Chaonong Xu; Chi Zhang; Yongjun Xu; Zhiguang Wang
2015-01-01
The idea of network protocol design based on optimization theory has been proposed and used practically in the Internet for about 15 years. However, for large-scale wireless ad hoc networks, although a protocol can be viewed as a recursive solving of a global optimization problem, protocol design still faces a huge challenge because an effective distributed algorithm for solving the global optimization problem is still lacking. We address the problem by putting forward a systematic design method based...
Design optimization studies for large-scale contoured beam deployable satellite antennas
Tanaka, Hiroaki
2006-05-01
Satellite communications systems over the past two decades have become more sophisticated and evolved new applications that require much higher flux densities. These new requirements to provide high data rate services to very small user terminals have in turn led to the need for large aperture space antenna systems with higher gain. Conventional parabolic reflectors constructed of metal have become, over time, too massive to support these new missions in a cost effective manner and also have posed problems of fitting within the constrained volume of launch vehicles. Designers of new space antenna systems have thus begun to explore new design options. These design options for advanced space communications networks include such alternatives as inflatable antennas using polyimide materials, antennas constructed of piezo-electric materials, phased array antenna systems (especially in the EHF bands) and deployable antenna systems constructed of wire mesh or cabling systems. This article updates studies being conducted in Japan of such deployable space antenna systems [H. Tanaka, M.C. Natori, Shape control of space antennas consisting of cable networks, Acta Astronautica 55 (2004) 519-527]. In particular, this study shows how the design of such large-scale deployable antenna systems can be optimized based on various factors including the frequency bands to be employed with such innovative reflector design. In particular, this study investigates how contoured-beam space antennas can be effectively constructed out of so-called cable networks or mesh-like reflectors. This design can be accomplished via "plane wave synthesis" and the "force density method", iterating the design to achieve the optimum solution. We have concluded that the best design is achieved by plane wave synthesis. Further, we demonstrate that the nodes on the reflector are best determined by a pseudo-inverse calculation of the matrix that can be interpolated so as to achieve the minimum
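The force density method itself reduces to a linear solve: with a force density q_e assigned to each cable, node equilibrium sum_j q_ij (x_j - x_i) = 0 is linear in the free node coordinates. A small self-contained sketch (a toy four-cable net, not the antenna model):

```python
def force_density_solve(edges, q, fixed, free, dims=3):
    """Solve cable-net equilibrium by the force density method. `edges` is a
    list of node-id pairs, `q` the force density per edge, `fixed` a dict of
    anchored node coordinates, `free` the list of unknown nodes. Uses a tiny
    dense Gaussian elimination; for illustration only."""
    idx = {n: k for k, n in enumerate(free)}
    m = len(free)
    pos = [[0.0] * dims for _ in range(m)]
    for d in range(dims):
        A = [[0.0] * m for _ in range(m)]
        b = [0.0] * m
        for (i, j), qe in zip(edges, q):
            for u, v in ((i, j), (j, i)):
                if u in idx:
                    r = idx[u]
                    A[r][r] += qe
                    if v in idx:
                        A[r][idx[v]] -= qe
                    else:
                        b[r] += qe * fixed[v][d]   # anchored neighbour
        # Gaussian elimination with partial pivoting.
        for col in range(m):
            piv = max(range(col, m), key=lambda r: abs(A[r][col]))
            A[col], A[piv] = A[piv], A[col]
            b[col], b[piv] = b[piv], b[col]
            for r in range(col + 1, m):
                f = A[r][col] / A[col][col]
                for k in range(col, m):
                    A[r][k] -= f * A[col][k]
                b[r] -= f * b[col]
        for r in range(m - 1, -1, -1):
            s = b[r] - sum(A[r][k] * pos[k][d] for k in range(r + 1, m))
            pos[r][d] = s / A[r][r]
    return {n: tuple(pos[idx[n]]) for n in free}
```

With four corners of a unit square anchored and one free node cabled to all of them with equal force densities, the free node settles at the centroid, the expected equilibrium.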
Reduction of Large-scale Turbulence and Optimization of Flows in the Madison Dynamo Experiment
Taylor, N. Z.
2011-10-01
The Madison Dynamo Experiment seeks to observe a magnetic field grow at the expense of kinetic energy in a flow of liquid sodium. The enormous Reynolds numbers of the experiment and its two-vortex geometry create strong turbulence, which in turn leads to transport of magnetic flux consistent with an increase of the effective resistivity. The increased effective resistivity implies that faster flows are required for the dynamo to operate. Three major results from the experiment will be reported in this talk. 1) A new probe technique has been developed for measuring both the fluctuating velocity and magnetic fields, which has allowed a direct measurement of the turbulent EMF. 2) The scale of the largest eddies in the experiment has been reduced by an equatorial baffle on the vessel boundary. This modification of the flow at the boundary results in strong field generation and amplification by the mean velocity of the flow, and the role of turbulence in generating currents is reduced. The motor power required to drive a given flow speed is reduced by 20%, the effective Rm, as measured by the toroidal windup of the field (omega effect), increased by a factor of ~2.4, and the turbulent EMF (previously measured to be as large as the induction by the mean flow) is eliminated. These results all indicate that the equatorial baffle has eliminated the largest-scale eddies in the flow. 3) Flow optimization is now possible by adjusting the pitch of vanes installed on the vessel wall. An analysis of the kinematic prediction for dynamo excitation reveals that the threshold for excitation is quite sensitive to the helical pitch of the flow. Computational fluid dynamics simulations of the flow showed that by adjusting the angle of the vanes on the vessel wall (which control the helical pitch of the flow) we should be able to minimize the critical velocity at which dynamo onset occurs. Experiments are now underway to exploit this new capability in tailoring the large-scale
Institute of Scientific and Technical Information of China (English)
Lihui CEN; Yugeng XI
2008-01-01
By considering the flow control of urban sewer networks to minimize the electricity consumption of pumping stations, a decomposition-coordination strategy for energy savings based on network community division is developed in this paper. A mathematical model characterizing the steady-state flow of urban sewer networks is first constructed, consisting of a set of algebraic equations with the transportation capacities of the network structure captured as constraints. Since sewer networks have no apparent natural hierarchical structure in general, it is very difficult to identify the clustered groups. A fast network division approach that calculates the betweenness of each edge is successfully applied to identify the groups, and a sewer network with arbitrary configuration can then be decomposed into subnetworks. By integrating the coupling constraints of the subnetworks, the original problem is separated into N optimization subproblems in accordance with the network decomposition. Each subproblem is solved locally, and the solutions to the subproblems are coordinated to form an appropriate global solution. Finally, an application to a specific large-scale sewer network is investigated to demonstrate the validity of the proposed algorithm.
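The edge-betweenness division step can be sketched with a Girvan-Newman-style loop: compute edge betweenness (Brandes accumulation), remove the highest-betweenness edge, repeat until the network splits into groups. A minimal two-group version:

```python
from collections import deque

def edge_betweenness(adj):
    """Brandes-style edge betweenness for an undirected graph given as an
    adjacency dict {node: [neighbors]} (each edge counted from both sides)."""
    bet = {}
    for s in adj:
        dist = {s: 0}
        sigma = {v: 0 for v in adj}
        sigma[s] = 1
        preds = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:                       # BFS shortest-path counting
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):          # back-propagate dependencies
            for v in preds[w]:
                c = sigma[v] / sigma[w] * (1.0 + delta[w])
                e = frozenset((v, w))
                bet[e] = bet.get(e, 0.0) + c
                delta[v] += c
    return bet

def split_communities(adj):
    """Remove the highest-betweenness edge until the graph splits in two."""
    adj = {v: list(ns) for v, ns in adj.items()}
    def components():
        seen, comps = set(), []
        for v in adj:
            if v not in seen:
                comp, stack = set(), [v]
                while stack:
                    u = stack.pop()
                    if u not in comp:
                        comp.add(u)
                        stack.extend(adj[u])
                seen |= comp
                comps.append(comp)
        return comps
    while len(components()) < 2:
        bet = edge_betweenness(adj)
        u, w = tuple(max(bet, key=bet.get))
        adj[u].remove(w)
        adj[w].remove(u)
    return components()
```

On a "barbell" of two triangles joined by a bridge, the bridge carries the most shortest paths, so its removal recovers the two natural groups.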
Spider Optimization: Probing the Systematics of a Large Scale B-Mode Experiment
MacTavish, C J; Battistelli, E S; Benton, S; Bihary, R; Bock, J J; Bond, J R; Brevik, J; Bryan, S; Contaldi, C R; Crill, B P; Doré, O; Fissel, L; Golwala, S R; Halpern, M; Hilton, G; Holmes, W; Hristov, V V; Irwin, K; Jones, W C; Kuo, C L; Lange, A E; Lawrie, C; Martin, T G; Mason, P; Montroy, T E; Netterfield, C B; Riley, D; Ruhl, J E; Trangsrud, A; Tucker, C; Turner, A; Viero, M; Wiebe, D
2007-01-01
Spider is a long-duration, balloon-borne polarimeter designed to measure large scale Cosmic Microwave Background (CMB) polarization with very high sensitivity and control of systematics. The instrument will map over half the sky with degree angular resolution in I, Q and U Stokes parameters, in four frequency bands from 96 to 275 GHz. Spider's ultimate goal is to detect the primordial gravity wave signal imprinted on the CMB B-mode polarization. One of the challenges in achieving this goal is the minimization of the contamination of B-modes by systematic effects. This paper explores a number of instrument systematics and observing strategies in order to optimize B-mode sensitivity. This is done by injecting realistic-amplitude, time-varying systematics in a set of simulated time-streams. Tests of the impact of detector noise characteristics, pointing jitter, payload pendulations, polarization angle offsets, beam systematics and receiver gain drifts are shown. Spider's default observing strategy is to spin con...
Thermal System Analysis and Optimization of Large-Scale Compressed Air Energy Storage (CAES)
Zhongguang Fu; Ke Lu; Yiming Zhu
2015-01-01
As an important solution to issues regarding peak load and renewable energy resources on grids, large-scale compressed air energy storage (CAES) power generation technology has recently become a popular research topic in the area of large-scale industrial energy storage. At present, the combination of high-expansion ratio turbines with advanced gas turbine technology is an important breakthrough in energy storage technology. In this study, a new gas turbine power generation system is coupled ...
Langhans, Simone D; Hermoso, Virgilio; Linke, Simon; Bunn, Stuart E; Possingham, Hugh P
2014-01-01
River rehabilitation aims to protect biodiversity or restore key ecosystem services, but the success rate is often low. This is seldom because of insufficient funding for rehabilitation works but because trade-offs between the costs and ecological benefits of management actions are rarely incorporated in the planning, and because monitoring is often inadequate for managers to learn by doing. In this study, we demonstrate a new approach to planning cost-effective river rehabilitation at large scales. The framework is based on the use of cost functions (the relationship between the costs of rehabilitation and the expected ecological benefit) to optimize the spatial allocation of the rehabilitation actions needed to achieve given rehabilitation goals (in our case established by the Swiss water act). To demonstrate the approach with a simple example, we link the costs of the three types of management actions most commonly used in Switzerland (culvert removal, widening of one riverside buffer and widening of both riversides) to the improvement in riparian zone quality. We then use Marxan, a widely applied conservation planning software, to identify priority areas in which to implement these rehabilitation measures in two neighbouring Swiss cantons (Aargau, AG and Zürich, ZH). The best rehabilitation plans identified for the two cantons met all the targets (i.e., restoring different types of morphological deficits with different actions), rehabilitating 80,786 m (AG) and 106,036 m (ZH) of the river network at a total cost of 106.1 million CHF (AG) and 129.3 million CHF (ZH). The best rehabilitation plan for the canton of AG consisted of more, and better connected, sub-catchments that were generally less expensive compared to those of its neighbouring canton. The framework developed in this study can be used to inform river managers how and where best to spend their rehabilitation budget for a given set of actions, ensures the cost-effective achievement of desired rehabilitation outcomes, and helps
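A greedy cost-effectiveness heuristic conveys the flavor of such prioritization; Marxan itself uses simulated annealing over explicit conservation targets, and all numbers here are hypothetical:

```python
def plan_rehabilitation(segments, target_m):
    """Greedy stand-in for Marxan-style prioritization: pick river segments
    in descending order of ecological benefit per unit cost until the target
    length (metres) of rehabilitated river is met.
    Each segment is (id, length_m, cost, benefit); all values hypothetical."""
    ranked = sorted(segments, key=lambda s: s[3] / s[2], reverse=True)
    chosen, total_len, total_cost = [], 0, 0.0
    for sid, length, cost, benefit in ranked:
        if total_len >= target_m:
            break
        chosen.append(sid)
        total_len += length
        total_cost += cost
    return chosen, total_len, total_cost
```

The cost-function framing of the abstract corresponds to how `benefit` and `cost` are attached to each candidate action before selection.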
METHOD BASED ON DUAL-QUADRATIC PROGRAMMING FOR FRAME STRUCTURAL OPTIMIZATION WITH LARGE SCALE
Institute of Scientific and Technical Information of China (English)
无
2006-01-01
The optimality criteria (OC) method and mathematical programming (MP) were combined to formulate the sectional optimization model of frame structures. Different methods were adopted to deal with the different constraints. The stress constraints, as local constraints, were approximated by zero-order approximation and transformed into movable sectional lower limits using the full-stress criterion. The displacement constraints, as global constraints, were transformed into explicit expressions with the unit virtual load method. Thus an approximate explicit model for the sectional optimization of frame structures was built with stress and displacement constraints. To improve resolution efficiency, dual quadratic programming was adopted to transform the original optimization model into a dual problem according to duality theory and solve it iteratively in its dual space. A method called the approximate scaling step was adopted to reduce computation and smooth the iterative process. Negative constraints were deleted to reduce the size of the optimization model. With MSC/Nastran as the structural solver and MSC/Patran as the development platform, sectional optimization software for frame structures was implemented, considering stress and displacement constraints. The examples show that efficiency and accuracy are improved.
Strategic optimization of large-scale vertical closed-loop shallow geothermal systems
Hecht-Méndez, J.; de Paly, M.; Beck, M.; Blum, P.; Bayer, P.
2012-04-01
Vertical closed-loop geothermal systems, or ground source heat pump (GSHP) systems with multiple vertical borehole heat exchangers (BHEs), are attractive technologies that provide heating and cooling to large facilities such as hotels, schools, large office buildings or district heating systems, and the worldwide number of installed systems continues to increase. By running arrays of multiple BHEs, the energy demand of a given facility is fulfilled by exchanging heat with the ground. For practical and technical reasons, square arrays of BHEs are commonly used, and the total energy extraction from the subsurface is accomplished by operating each BHE equally. Moreover, standard design practice disregards the presence of groundwater flow. We present a simulation-optimization approach that is able to regulate the individual operation of multiple BHEs depending on the given hydro-geothermal conditions. The developed approach optimizes the overall performance of the geothermal system while mitigating the environmental impact. As an example, a synthetic case with a geothermal system using 25 BHEs to supply a seasonal heating energy demand is defined. The optimization approach is evaluated by finding optimal energy extractions for 15 scenarios with different constant groundwater flow velocities. Ground temperature development is simulated using the optimal energy extractions and contrasted against standard practice. It is demonstrated that optimized systems always level out the ground temperature distribution and generate smaller subsurface temperature changes than non-optimized ones. Mean underground temperature changes within the studied BHE field are between 13% and 24% smaller when the optimized system is used. By applying the optimized energy extraction patterns, the temperature of the heat carrier fluid in the BHEs, which controls the overall performance of the system, can also be raised by more than 1 °C.
DEFF Research Database (Denmark)
Li, Rui; Roberti, Roberto
2017-01-01
This paper addresses the railway track possession scheduling problem (RTPSP), where a large-scale railway infrastructure project consisting of multiple construction works is to be planned. The RTPSP is to determine when to perform the construction works and in which track possessions while satisf...
DEFF Research Database (Denmark)
Zhao, Haoran; Wu, Qiuwei; Huang, Shaojun
2015-01-01
This paper proposes algorithms for optimal siting and sizing of Energy Storage System (ESS) for the operation planning of power systems with large scale wind power integration. The ESS in this study aims to mitigate the wind power fluctuations during the interval between two rolling Economic Dispatches (EDs) in order to maintain generation-load balance. The charging and discharging of ESS is optimized considering operation cost of conventional generators, capital cost of ESS and transmission losses. The statistics from simulated system operations are then coupled to the planning process to determine the optimal siting and sizing of storage units throughout the network. These questions are investigated using an IEEE benchmark system.
Energy Technology Data Exchange (ETDEWEB)
Stengel, D N; Luenberger, D G; Larson, R E; Cline, T B
1979-02-01
A new approach to modeling and analysis of systems is presented that exploits the underlying structure of the system. The development of the approach focuses on a new modeling form, called 'descriptor variable' systems, that was first introduced in this research. Key concepts concerning the classification and solution of descriptor-variable systems are identified, and theories are presented for the linear case, the time-invariant linear case, and the nonlinear case. Several standard systems notions are demonstrated to have interesting interpretations when analyzed via descriptor-variable theory. The approach developed also focuses on the optimization of large-scale systems. Descriptor variable models are convenient representations of subsystems in an interconnected network, and optimization of these models via dynamic programming is described. A general procedure for the optimization of large-scale systems, called spatial dynamic programming, is presented where the optimization is spatially decomposed in the way standard dynamic programming temporally decomposes the optimization of dynamical systems. Applications of this approach to large-scale economic markets and power systems are discussed.
Optimization of a Large-scale Microseismic Monitoring Network in Northern Switzerland
Kraft, T.; Husen, S.; Mignan, A.; Bethmann, F.
2011-12-01
We have performed a computer-aided network optimization for a regional-scale microseismic network in northeastern Switzerland. The goal of the optimization was to find the geometry and size of the network that assures a location precision of 0.5 km in epicenter and 2.0 km in focal depth for earthquakes of magnitude ML >= 1.0, taking into account 67 existing stations in Switzerland, Germany and Austria, and the expected detectability of ML 1 earthquakes in the study area. The optimization was based on the simulated annealing approach of Hardt and Scherbaum (1993), which aims to minimize the volume of the error ellipsoid of the linearized earthquake location problem (D-criterion). We have extended their algorithm to calculate traveltimes of seismic body waves using a finite-difference raytracer and the three-dimensional velocity model of Switzerland, to calculate seismic body-wave amplitudes at arbitrary stations assuming a Brune source model and using scaling relations recently derived for Switzerland, and to estimate the noise level at arbitrary locations within Switzerland using a first-order ambient seismic noise model based on 14 land-use classes defined by the EU project CORINE and open GIS data. Considering the 67 existing stations, optimizations for networks of 10 to 35 new stations were calculated with respect to 2240 synthetic earthquakes of magnitudes between ML = 0.8-1.1. We incorporated the case of non-detections by considering only earthquake-station pairs with an expected signal-to-noise ratio larger than 10 for the considered body wave. Station noise levels were derived from measured ground motion for existing stations and from the first-order ambient noise model for new sites. The stability of the optimization result was tested by repeated optimization runs with changing initial conditions. Due to the highly nonlinear nature and size of the problem, station locations in the individual solutions show small
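The core of such a design loop, simulated annealing over candidate station subsets scored by a D-criterion, can be sketched for a toy 2-D epicentre problem; unit direction vectors stand in for the real travel-time derivatives, and all geometry is hypothetical:

```python
import math
import random

def d_criterion(stations, event=(0.0, 0.0)):
    """det(G^T G) for a 2-D epicentre problem, where each row of G is the
    unit direction from the event to a station (a crude stand-in for
    travel-time derivatives); larger det means a smaller error ellipse."""
    gxx = gxy = gyy = 0.0
    for sx, sy in stations:
        dx, dy = sx - event[0], sy - event[1]
        r = math.hypot(dx, dy)
        gxx += (dx / r) ** 2
        gxy += (dx / r) * (dy / r)
        gyy += (dy / r) ** 2
    return gxx * gyy - gxy * gxy

def anneal_network(candidates, n_pick, iters=2000, seed=0):
    """Simulated annealing over station subsets (assumes more candidates
    than picks), maximizing the D-criterion via single-station swaps."""
    rng = random.Random(seed)
    picked = rng.sample(range(len(candidates)), n_pick)
    score = d_criterion([candidates[i] for i in picked])
    best, best_score = picked[:], score
    for t in range(iters):
        temp = 1.0 * (1.0 - t / iters) + 1e-6      # linear cooling
        trial = picked[:]
        out = rng.randrange(n_pick)
        pool = [i for i in range(len(candidates)) if i not in trial]
        trial[out] = rng.choice(pool)
        s = d_criterion([candidates[i] for i in trial])
        if s > score or rng.random() < math.exp((s - score) / temp):
            picked, score = trial, s                # Metropolis acceptance
            if s > best_score:
                best, best_score = picked[:], s
    return [candidates[i] for i in best], best_score
```

Given candidates mixing an azimuthally spread ring with a tight one-sided cluster, the annealer favors azimuthally spread picks, the same geometric intuition behind minimizing the error ellipsoid of the location problem.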
Chen, Hanbo; Liu, Tao; Zhao, Yu; Zhang, Tuo; Li, Yujie; Li, Meng; Zhang, Hongmiao; Kuang, Hui; Guo, Lei; Tsien, Joe Z; Liu, Tianming
2015-07-15
Tractography based on diffusion tensor imaging (DTI) data has been used by a large number of recent studies as a tool to investigate the structural connectome. Despite its great success in offering unique 3D neuroanatomy information, DTI is an indirect observation with limited resolution and accuracy, and its reliability is still unclear. Thus, it is essential to answer this fundamental question: how reliable is DTI tractography in constructing a large-scale connectome? To answer this question, we employed neuron-tracing data from 1772 experiments on the mouse brain released by the Allen Mouse Brain Connectivity Atlas (AMCA) as the ground truth to assess the performance of DTI tractography in inferring white matter fiber pathways and inter-regional connections. For the first time in the neuroimaging field, the performance of whole-brain DTI tractography in constructing a large-scale connectome has been evaluated by comparison with tracing data. Our results suggest that only with optimized tractography parameters and an appropriate scale of brain parcellation can DTI produce relatively reliable fiber pathways and a large-scale connectome. Meanwhile, a considerable number of errors were also identified in optimized DTI tractography results, which we believe could potentially be alleviated by efforts to develop better DTI tractography approaches. In this scenario, our framework could serve as a reliable and quantitative test bed to identify errors in tractography results, which will facilitate the development of such novel tractography algorithms and the selection of optimal parameters.
Cost optimizing of large-scale offshore wind farms. Summary and conclusion. Final report
Energy Technology Data Exchange (ETDEWEB)
NONE
1999-07-01
The project comprises investigation of the technical and economical possibilities of large-scale offshore wind farms at 3 locations in the eastern Danish waters: Roedsand and Gedser Rev, located south of the islands of Falster and Lolland, and Omoe Staagrunde, located south-west of the island of Zealand, plus experiences obtained from British and German offshore wind energy projects. The project included wind and wave measurements at the above 3 locations, data collection, data processing, meteorological analysis, modelling of wind turbine structure, studies of grid connection, design and optimisation of foundations, plus estimates of investments and operation and maintenance costs. All costs are in ECU on a 1997 basis. The main conclusions of the project, financed by the European Commission, are: Areas are available for large-scale offshore wind farms in the Danish waters; A large wind potential is found on the sites; Park layouts for projects consisting of around 100 wind turbines each have been developed; Design of the foundations has been optimised radically compared to previous designs; A large potential for optimising the wind turbine design and operation has been found; Grid connection of the first proposed large wind farms is possible with only minor reinforcement of the transmission system; The visual impact is not prohibitive for the projects; A production cost of 4-5 ECU-cent/kWh is competitive with current onshore projects. All in all, the results from this project have proven very useful for the further development of large-scale wind farms in the Danish waters, and thereby an inspiration for similar projects in other (European) countries. (LN)
Large Scale Multi-area Static/Dynamic Economic Dispatch using Nature Inspired Optimization
Pandit, Manjaree; Jain, Kalpana; Dubey, Hari Mohan; Singh, Rameshwar
2017-04-01
Economic dispatch (ED) ensures that generation is allocated to the power units such that the total fuel cost is minimized and all operating equality/inequality constraints are satisfied. Classical ED does not take transmission constraints into consideration, but in present restructured power systems the tie-line limits play a very important role in deciding operational policies. ED is a dynamic problem which is performed on-line in the central load dispatch centre under changing load scenarios. The dynamic multi-area ED (MAED) problem is more complex due to the additional tie-line, ramp-rate and area-wise power-balance constraints. Nature-inspired (NI) heuristic optimization methods are gaining popularity over traditional methods for complex problems. This work presents modified particle swarm optimization (PSO) based techniques in which parameter automation is used to improve search efficiency by avoiding stagnation at a sub-optimal result. This work validates the performance of the PSO variants against the traditional solver GAMS for single-area as well as multi-area economic dispatch (MAED) on three test cases of a large 140-unit standard test system having complex constraints.
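A minimal sketch of a PSO-based dispatch of the kind described above, assuming quadratic fuel-cost curves and handling the power-balance equality with a penalty term. The linearly decreasing inertia weight stands in for the paper's "parameter automation"; all coefficients and the penalty factor are invented for illustration.

```python
import random

def pso_dispatch(cost_coef, pmin, pmax, demand, n_particles=30, iters=200, seed=0):
    """Toy PSO for single-area economic dispatch.  Each unit has fuel
    cost a + b*P + c*P^2; the power-balance equality sum(P) = demand is
    enforced with a penalty, and inertia decreases linearly."""
    rng = random.Random(seed)
    n = len(cost_coef)

    def fitness(p):
        fuel = sum(a + b * x + c * x * x for (a, b, c), x in zip(cost_coef, p))
        return fuel + 1e4 * abs(sum(p) - demand)  # penalty for imbalance

    def clip(p):
        return [min(max(x, lo), hi) for x, lo, hi in zip(p, pmin, pmax)]

    pos = [clip([rng.uniform(lo, hi) for lo, hi in zip(pmin, pmax)])
           for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [list(p) for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = list(pbest[pbest_f.index(min(pbest_f))])
    for it in range(iters):
        w = 0.9 - 0.5 * it / iters  # inertia 0.9 -> 0.4 ("parameter automation")
        for i in range(n_particles):
            for d in range(n):
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 2.0 * rng.random() * (g[d] - pos[i][d]))
            pos[i] = clip([x + v for x, v in zip(pos[i], vel[i])])
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = list(pos[i]), f
                if f < fitness(g):
                    g = list(pos[i])
    return g, fitness(g)
```

A multi-area version would add tie-line flows as extra decision variables and per-area balance penalties, but the swarm mechanics stay the same.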
Energy Technology Data Exchange (ETDEWEB)
Friedman, A. [Minnesota Univ., Minneapolis, MN (United States). Inst. for Mathematics and Its Applications
1996-12-01
The summer program in Large Scale Optimization concentrated largely on process engineering, aerospace engineering, inverse problems and optimal design, and molecular structure and protein folding. The program brought together applications people, optimizers, and mathematicians interested in learning about these topics. Three proceedings volumes are being prepared. The year in Materials Sciences deals with disordered media and percolation, phase transformations, composite materials, and microstructure; topological and geometric methods as well as the statistical mechanics approach to polymers (including Monte Carlo simulations for polymers); and miscellaneous other topics such as nonlinear optical materials, particulate flow, and thin films. All these activities saw strong interaction among materials scientists, mathematicians, physicists, and engineers. About eight proceedings volumes are being prepared.
Cost optimizing of large-scale offshore wind farms. Appendix K to Q. Final report
Energy Technology Data Exchange (ETDEWEB)
NONE
1999-07-01
This Volume 4 contains reports prepared by SEAS Distribution A.m.b.A., Risoe National Laboratory, Nellemann, Nielsen and Rauschenberger A/S (NNR), Universidad Politecnica de Madrid, National Wind Power Ltd. and Stadtwerke Rostock AG. Appendix K - Wind and wave measurements. Appendix L - Establishment of design basis (wind, wave and ice loads). Appendix M - Wake effects and wind farm modelling. Appendix N - Functional requirements and optimisation of wind turbines. Appendix O - Operation and maintenance system. Appendix O.1 - Helicopter Service (alternative). Appendix P - Cost optimising of large scale offshore wind farms in UK waters. Appendix Q - Cost optimising of large scale offshore wind farms in German waters. Appendices K, L and N have been prepared by Risoe National Laboratory. Appendix M has been prepared by Universidad Politecnica de Madrid. Appendix O has been prepared by SEAS Distribution A.m.b.A. Appendix O.1 has been prepared by Nellemann, Nielsen and Rauschenberger A/S. Appendix P has been prepared by National Wind Power Ltd. Appendix Q has been prepared by Stadtwerke Rostock AG. (au)
Cost optimizing of large-scale offshore wind farms in UK waters
Energy Technology Data Exchange (ETDEWEB)
Bean, D. [National Wind Power Ltd. (United Kingdom)]
1999-07-01
As part of the study 'Cost Optimising of Large Scale Offshore Wind Farms', National Wind Power's objective is to broaden the scope of the study into the UK context. The suitability of an offshore wind farm development has been reviewed for a variety of regions around the UK, culminating in the selection of a reference site off the east coast of England. A design basis for this site has been derived, and a preliminary foundation design has been performed within the consortium. Due primarily to the increased wave exposure at the UK reference site, the resulting gravity and monopile designs were larger and therefore more expensive than their Danish counterparts. A summary of the required consents for an offshore wind farm in UK waters is presented, together with an update on the recent consultation process initiated by the UK Government on offshore wind energy. (au) EFP-96; JOULE-3. 22 refs.
Optimization of culture media for large-scale lutein production by heterotrophic Chlorella vulgaris.
Jeon, Jin Young; Kwon, Ji-Sue; Kang, Soon Tae; Kim, Bo-Ra; Jung, Yuchul; Han, Jae Gap; Park, Joon Hyun; Hwang, Jae Kwan
2014-01-01
Lutein is a carotenoid with a purported role in protecting eyes from oxidative stress, particularly the high-energy photons of blue light. Statistical optimization of the growth medium was performed to support higher lutein production by heterotrophically cultivated Chlorella vulgaris. The effect of medium composition on lutein production by C. vulgaris was examined using a fractional factorial design (FFD) and a central composite design (CCD). The results indicated that the presence of magnesium sulfate, EDTA-2Na, and trace metal solution significantly affected lutein production. The optimum concentrations for lutein production were found to be 0.34 g/L, 0.06 g/L, and 0.4 mL/L for MgSO4·7H2O, EDTA-2Na, and trace metal solution, respectively. These values were validated using a 5-L jar fermenter. Lutein concentration increased by almost 80% (from 139.64 ± 12.88 mg/L to 252.75 ± 12.92 mg/L) after 4 days. Moreover, the lutein concentration was not reduced as the cultivation was scaled up to 25,000 L (260.55 ± 3.23 mg/L) and 240,000 L (263.13 ± 2.72 mg/L). These observations suggest C. vulgaris as a potential lutein source.
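The coded run matrix for a central composite design of the kind used in this study can be generated mechanically. The sketch below is generic, not tied to the paper's factors: it produces the 2^k factorial corners, 2k axial (star) points, and center replicates, with the rotatable axial distance as default.

```python
from itertools import product

def central_composite(n_factors, alpha=None, n_center=1):
    """Coded design matrix for a central composite design: 2^k factorial
    corners, 2k axial points at +/-alpha, and center replicates.  alpha
    defaults to the rotatable value (2^k)**0.25."""
    if alpha is None:
        alpha = (2 ** n_factors) ** 0.25
    # factorial corners at +/-1 in every factor
    runs = [list(p) for p in product([-1.0, 1.0], repeat=n_factors)]
    # axial (star) points: one factor at +/-alpha, the rest at 0
    for i in range(n_factors):
        for a in (-alpha, alpha):
            pt = [0.0] * n_factors
            pt[i] = a
            runs.append(pt)
    # center replicates
    runs += [[0.0] * n_factors for _ in range(n_center)]
    return runs
```

For the three significant factors reported here (MgSO4·7H2O, EDTA-2Na, trace metals), `central_composite(3)` would give 8 + 6 + 1 = 15 coded runs to be mapped onto the concentration ranges under study.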
Optimized Large-Scale CMB Likelihood And Quadratic Maximum Likelihood Power Spectrum Estimation
Gjerløw, E; Eriksen, H K; Górski, K M; Gruppuso, A; Jewell, J B; Plaszczynski, S; Wehus, I K
2015-01-01
We revisit the problem of exact CMB likelihood and power spectrum estimation with the goal of minimizing computational cost through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al. (1997), and here we develop it into a fully working computational framework for large-scale polarization analysis, adopting WMAP as a worked example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loève and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32, and a...
Zhao, Shi-Zheng; Suganthan, Ponnuthurai Nagaratnam; Das, Swagatam
In order to solve large-scale continuous optimization problems, Self-adaptive DE (SaDE) is enhanced by incorporating the JADE mutation strategy and hybridized with a modified multi-trajectory search (MMTS) algorithm (SaDE-MMTS). The JADE mutation strategy, "DE/current-to-pbest", a variation of the classic "DE/current-to-best", is used for generating mutant vectors. After the mutation phase, binomial (uniform) crossover, exponential crossover, and a no-crossover option are used to generate each pair of target and trial vectors. By utilizing the self-adaptation in SaDE, both the trial-vector generation strategies and their associated control parameter values are gradually self-adapted by learning from their previous experience in generating promising solutions. Consequently, a suitable offspring-generation strategy along with associated parameter settings is determined adaptively to match different phases of the search process. MMTS is applied frequently to refine several diversely distributed solutions at different search stages, satisfying both global and local search requirements. The initialization of step sizes is also defined by self-adaptation during every MMTS step. The success rates of both SaDE and the MMTS are determined and compared; consequently, future function evaluations for both search algorithms are assigned proportionally to their recent past performance. The proposed SaDE-MMTS is employed to solve the 20 numerical optimization problems of the CEC'2010 Special Session and Competition on Large Scale Global Optimization, and competitive results are presented.
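The "DE/current-to-pbest" mutation at the core of this hybrid can be sketched in a few lines. This is a generic illustration of the JADE operator (v = x_i + F·(x_pbest − x_i) + F·(x_r1 − x_r2)), with JADE's optional external archive omitted and parameter names chosen for clarity.

```python
import random

def de_current_to_pbest(pop, fits, i, p_frac, f_w, rng):
    """One JADE-style "DE/current-to-pbest/1" mutant for individual i:
    x_pbest is drawn at random from the best p_frac of the population,
    and (r1, r2) are distinct individuals different from i."""
    n = len(pop)
    order = sorted(range(n), key=lambda j: fits[j])       # ascending fitness
    top = order[:max(1, int(p_frac * n))]                  # best p% indices
    pbest = pop[rng.choice(top)]
    r1, r2 = rng.sample([j for j in range(n) if j != i], 2)
    return [xi + f_w * (pb - xi) + f_w * (a - b)
            for xi, pb, a, b in zip(pop[i], pbest, pop[r1], pop[r2])]
```

In SaDE-MMTS this mutant would then pass through binomial or exponential crossover (or none) to form the trial vector, with F and the strategy choice adapted from past success rates.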
Optimization theory for large systems
Lasdon, Leon S
2011-01-01
This important text examines the most significant algorithms for optimizing large systems and clarifies relations between optimization procedures. Much of the data appears as charts and graphs, and will be highly valuable to readers in selecting a method and estimating computer time and cost in problem-solving. An initial chapter on linear and nonlinear programming presents all necessary background for the subjects covered in the rest of the book. The second chapter illustrates how large-scale mathematical programs arise from real-world problems. Appendixes. List of Symbols.
Bechtold, M.; Tiemeyer, B.; Laggner, A.; Leppelt, T.; Frahm, E.; Belting, S.
2014-09-01
Fluxes of the three main greenhouse gases (GHG) CO2, CH4 and N2O from peat and other soils with high organic carbon contents are strongly controlled by water table depth. Information about the spatial distribution of water level is thus a crucial input parameter when upscaling GHG emissions to large scales. Here, we investigate the potential of statistical modeling for the regionalization of water levels in organic soils when data covers only a small fraction of the peatlands of the final map. Our study area is Germany. Phreatic water level data from 53 peatlands in Germany were compiled in a new data set comprising 1094 dip wells and 7155 years of data. For each dip well, numerous possible predictor variables were determined using nationally available data sources, which included information about land cover, ditch network, protected areas, topography, peatland characteristics and climatic boundary conditions. We applied boosted regression trees to identify dependencies between predictor variables and dip-well-specific long-term annual mean water level (WL) as well as a transformed form (WLt). The latter was obtained by assuming a hypothetical GHG transfer function and is linearly related to GHG emissions. Our results demonstrate that model calibration on WLt is superior. It increases the explained variance of the water level in the sensitive range for GHG emissions and avoids model bias in subsequent GHG upscaling. The final model explained 45% of WLt variance and was built on nine predictor variables that are based on information about land cover, peatland characteristics, drainage network, topography and climatic boundary conditions. Their individual effects on WLt and the observed parameter interactions provide insight into natural and anthropogenic boundary conditions that control water levels in organic soils. Our study also demonstrates that a large fraction of the observed WLt variance cannot be explained by nationally available predictor variables and
Directory of Open Access Journals (Sweden)
M. Bechtold
2014-04-01
Fluxes of the three main greenhouse gases (GHG) CO2, CH4 and N2O from peat and other organic soils are strongly controlled by water table depth. Information about the spatial distribution of water level is thus a crucial input parameter when upscaling GHG emissions to large scales. Here, we investigate the potential of statistical modeling for the regionalization of water levels in organic soils when data covers only a small fraction of the peatlands of the final map. Our study area is Germany. Phreatic water level data from 53 peatlands in Germany were compiled in a new dataset comprising 1094 dip wells and 7155 years of data. For each dip well, numerous possible predictor variables were determined using nationally available data sources, which included information about land cover, ditch network, protected areas, topography, peatland characteristics and climatic boundary conditions. We applied boosted regression trees to identify dependencies between predictor variables and dip-well-specific long-term annual mean water level (WL) as well as a transformed form of it (WLt). The latter was obtained by assuming a hypothetical GHG transfer function and is linearly related to GHG emissions. Our results demonstrate that model calibration on WLt is superior. It increases the explained variance of the water level in the sensitive range for GHG emissions and avoids model bias in subsequent GHG upscaling. The final model explained 45% of WLt variance and was built on nine predictor variables that are based on information about land cover, peatland characteristics, drainage network, topography and climatic boundary conditions. Their individual effects on WLt and the observed parameter interactions provide insights into natural and anthropogenic boundary conditions that control water levels in organic soils. Our study also demonstrates that a large fraction of the observed WLt variance cannot be explained by nationally available predictor variables and that...
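Boosted regression trees of the kind used in this study fit many shallow trees to residuals. A toy gradient-boosting loop with one-dimensional stumps and squared loss illustrates the mechanism; real BRT implementations add multi-feature trees, stochastic subsampling and tuned shrinkage, and none of the names below come from the paper.

```python
def fit_stump(x, resid):
    """Best single-split stump on one feature: returns
    (threshold, left_mean, right_mean) minimizing squared error."""
    best = (float("inf"), 0.0, 0.0, 0.0)
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, resid) if xi <= t]
        right = [r for xi, r in zip(x, resid) if xi > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if sse < best[0]:
            best = (sse, t, lm, rm)
    return best[1:]

def boost(x, y, rounds=50, lr=0.1):
    """Toy gradient boosting for squared loss: each round fits a stump
    to the current residuals and adds it with shrinkage lr."""
    pred = [0.0] * len(y)
    stumps = []
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        t, lm, rm = fit_stump(x, resid)
        stumps.append((t, lm, rm))
        pred = [p + lr * (lm if xi <= t else rm) for p, xi in zip(pred, x)]
    return stumps, pred
```

The study's model is the same idea scaled up: nine predictor variables per dip well, with WLt as the response, and interaction effects read off the fitted ensemble.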
Large-Scale Multi-Objective Optimization for the Management of Seawater Intrusion, Santa Barbara, CA
Stanko, Z. P.; Nishikawa, T.; Paulinski, S. R.
2015-12-01
The City of Santa Barbara, located in coastal southern California, is concerned that excessive groundwater pumping will lead to chloride (Cl) contamination of its groundwater system from seawater intrusion (SWI). In addition, the city wishes to estimate the effect of continued pumping on the groundwater basin under a variety of initial and climatic conditions. A SEAWAT-based groundwater-flow and solute-transport model of the Santa Barbara groundwater basin was optimized to produce optimal pumping schedules under 5 different scenarios. Borg, a multi-objective genetic algorithm, was coupled with the SEAWAT model to identify optimal management strategies. The optimization problems were formulated as multi-objective so that the tradeoffs between maximizing pumping, minimizing SWI, and minimizing drawdowns can be examined by the city. Decisions can then be made on a pumping schedule in light of current preferences and climatic conditions. Borg was used to produce Pareto-optimal results for all 5 scenarios, which vary in their initial conditions (high water levels, low water levels, or current basin state), simulated climate (normal or drought conditions), and problem formulation (objective equations and decision-variable aggregation). Results show mostly well-defined Pareto surfaces with a few singularities. Furthermore, the results identify the precise pumping schedule per well that is suitable given the desired restrictions on drawdown and Cl concentrations. A system of decision-making is then possible based on observations of the basin's hydrologic state and climatic trends, without having to run any further optimizations. In addition, selected Pareto-optimal solutions were assessed with sensitivity information using the simulation model alone. A wide range of possible groundwater pumping scenarios is available and depends heavily on the future climate scenario and the Pareto-optimal solution selected while managing the pumping wells.
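Extracting the non-dominated (Pareto-optimal) set from a pool of candidate pumping schedules, as Borg reports it, reduces to a dominance filter. A minimal sketch for minimization objectives (for this problem, e.g. negative total pumping, Cl concentration, drawdown; the example values are invented):

```python
def pareto_front(points):
    """Return the non-dominated subset of `points`, each point a tuple
    of objectives to be minimized.  A point is dominated if some other
    point is <= in every objective and < in at least one."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p)))
            and any(q[i] < p[i] for i in range(len(p)))
            for q in points if q != p)
        if not dominated:
            front.append(p)
    return front
```

This brute-force filter is O(n^2) in the number of candidates; Borg and NSGA-style algorithms maintain the front incrementally, but the dominance test is the same.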
DEFF Research Database (Denmark)
Hou, Peng; Hu, Weihao; Soltani, Mohsen;
2015-01-01
With the increasing size of wind farms, the impact of the wake effect on wind farm energy yields becomes more and more evident. The arrangement of the wind turbines' (WT) locations will influence the capital investment and contribute to the wake losses which reduce energy production. ... As a consequence, the optimized placement of the wind turbines may be done by considering the wake effect as well as the component costs within the wind farm. In this paper, a mathematical model which includes the variation of both wind direction and wake deficit is proposed. The problem is formulated by using ... to find the optimized layout, which minimizes the LPC. The optimization procedure is applicable for optimized placement of wind turbines within wind farms and extendible for different wind conditions and capacities of wind farms. ...
Energy Technology Data Exchange (ETDEWEB)
Mohamed, K.M.; Bettle, M.C.; Gerber, A.G.; Hall, J.W. [University of New Brunswick, Fredericton, NB (Canada). Dept. of Mechanical Engineering
2010-10-10
This study evaluates large-scale low-grade energy recovery (LS-LGER) from a conventional coal-fired Rankine cycle (RC) as a 'green' option to offset the cost of treating pollution. An energy and exergy analysis of a reference generating station isolates the key areas for investigation into LS-LGER. This is followed by a second-law analysis and a detailed optimization study for a revised RC configuration, which provides a conservative estimate of the possible energy recovery. Cycle optimization based on specific power output, and including compact heat-exchanger designs, indicates plant efficiency improvements (with high-capacity equipment) of approximately 2 percentage points with reduced environmental impact.
Hamaus, Nico; Desjacques, Vincent
2011-01-01
One of the main signatures of primordial non-Gaussianity of the local type is a scale-dependent correction to the bias of large-scale structure tracers such as galaxies or clusters, whose amplitude depends on the bias of the tracers itself. The dominant source of noise in the power spectrum of the tracers is caused by sampling variance on large scales (where the non-Gaussian signal is strongest) and shot noise arising from their discrete nature. Recent work has argued that one can avoid sampling variance by comparing multiple tracers of different bias, and suppress shot noise by optimally weighting halos of different mass. Here we combine these ideas and investigate how well the signatures of non-Gaussian fluctuations in the primordial potential can be extracted from the two-point correlations of halos and dark matter. On the basis of large N-body simulations with local non-Gaussian initial conditions and their halo catalogs we perform a Fisher matrix analysis of the two-point statistics. Compared to the st...
Model-Constrained Optimization Methods for Reduction of Parameterized Large-Scale Systems
2007-05-01
... expensive to solve, e.g. for applications such as optimal design or probabilistic analyses. Model order reduction is a powerful tool that permits the ...
Directory of Open Access Journals (Sweden)
Chao Kang
2014-01-01
Cordycepin is one of the most important bioactive compounds produced by species of Cordyceps sensu lato, but it is hard to produce large amounts of this substance in industrial production. In this work, single factor design, Plackett-Burman design, and central composite design were employed to establish the key factors and identify optimal culture conditions which improved cordycepin production. Using these culture conditions, a maximum production of cordycepin was 2008.48 mg/L for 700 mL working volume in the 1000 mL glass jars and total content of cordycepin reached 1405.94 mg/bottle. This method provides an effective way for increasing the cordycepin production at a large scale. The strategies used in this study could have a wide application in other fermentation processes.
Kang, Chao; Wen, Ting-Chi; Kang, Ji-Chuan; Meng, Ze-Bing; Li, Guang-Rong; Hyde, Kevin D.
2014-01-01
Cordycepin is one of the most important bioactive compounds produced by species of Cordyceps sensu lato, but it is hard to produce large amounts of this substance in industrial production. In this work, single factor design, Plackett-Burman design, and central composite design were employed to establish the key factors and identify optimal culture conditions which improved cordycepin production. Using these culture conditions, a maximum production of cordycepin was 2008.48 mg/L for 700 mL working volume in the 1000 mL glass jars and total content of cordycepin reached 1405.94 mg/bottle. This method provides an effective way for increasing the cordycepin production at a large scale. The strategies used in this study could have a wide application in other fermentation processes. PMID:25054182
Large-scale multi-zone optimal power dispatch using hybrid hierarchical evolution technique
Directory of Open Access Journals (Sweden)
Manjaree Pandit
2014-03-01
A new hybrid technique based on hierarchical evolution is proposed for large, non-convex, multi-zone economic dispatch (MZED) problems considering all practical constraints. Evolutionary/swarm intelligence-based optimisation techniques are reported to be effective only for small/medium-sized power systems. The proposed hybrid hierarchical evolution (HHE) algorithm is specifically developed for solving large systems. The HHE integrates the exploration and exploitation capabilities of particle swarm optimisation and differential evolution in a novel manner such that the search efficiency is improved substantially. Most hybrid techniques export or exchange features or operations from one algorithm to the other, but in HHE their entire individual features are retained. The effectiveness of the proposed algorithm has been verified on six test systems having different sizes and complexity levels. A non-convex MZED solution for such large and complex systems has not yet been reported.
fast_protein_cluster: parallel and optimized clustering of large-scale protein modeling data.
Hung, Ling-Hong; Samudrala, Ram
2014-06-15
fast_protein_cluster is a fast, parallel and memory efficient package used to cluster 60 000 sets of protein models (with up to 550 000 models per set) generated by the Nutritious Rice for the World project. fast_protein_cluster is an optimized and extensible toolkit that supports Root Mean Square Deviation after optimal superposition (RMSD) and Template Modeling score (TM-score) as metrics. RMSD calculations using a laptop CPU are 60× faster than qcprot and 3× faster than current graphics processing unit (GPU) implementations. New GPU code further increases the speed of RMSD and TM-score calculations. fast_protein_cluster provides novel k-means and hierarchical clustering methods that are up to 250× and 2000× faster, respectively, than Clusco, and identify significantly more accurate models than Spicker and Clusco. fast_protein_cluster is written in C++ using OpenMP for multi-threading support. Custom streaming Single Instruction Multiple Data (SIMD) extensions and advanced vector extension intrinsics code accelerate CPU calculations, and OpenCL kernels support AMD and Nvidia GPUs. fast_protein_cluster is available under the MIT license. (http://software.compbio.washington.edu/fast_protein_cluster)
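The k-means step underlying such model-clustering tools can be sketched with plain Lloyd iterations. This toy version, unrelated to fast_protein_cluster's actual C++ code, clusters coordinate vectors with squared Euclidean distance (a stand-in for the RMSD/TM-score metrics) and uses a naive first-k initialization rather than the optimized seeding a production tool would use.

```python
def kmeans(points, k, iters=20):
    """Plain Lloyd's k-means on coordinate vectors.  Returns the
    cluster assignment of each point and the final centroids."""
    # naive init: first k points; real implementations use random
    # restarts or k-means++-style seeding
    centers = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid by squared distance
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
        # update step: centroid = mean of assigned points
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign, centers
```

fast_protein_cluster replaces the distance kernel with SIMD/GPU RMSD-after-superposition and TM-score computations; the iteration structure is the same.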
Energy Technology Data Exchange (ETDEWEB)
Garzillo, A.; Innorta, M.; Marannino, P.; Mognetti, F.; Cova, B.
1988-09-01
This paper presents some criteria applied to the optimization of voltage profiles and the distribution of reactive power generation among various resources in daily scheduling and VAR planning. The mathematical models employed in the representation of the two problems are quite similar in spite of the different objective functions and control-variable sets. The solution is based upon the implementation of two optimal reactive power flow (ORPF) programs. The first ORPF determines a feasible operating point in the daily scheduling application, or the minimum investment in installations required by system security in the VAR planning application. It utilizes a linear algorithm (gradient projection) suggested by Rosen, which has been found to be a favourable alternative to the commonly used simplex method. The second ORPF determines the minimum-losses operating point in the reactive power dispatch, or the most beneficial installation of reactive compensation in VAR planning. The solution of the economy problems is carried out by the Han-Powell algorithm, which essentially solves a set of quadratic sub-problems. In the adopted procedure, the quadratic sub-problems are solved by exploiting an active-constraint strategy in the QUADRI subroutine, used as an alternative to the well-known Beale method.
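Rosen's gradient projection method projects the steepest-descent direction onto the active constraints; when the only constraints are simple box limits on the control variables (as for many reactive-power controls), the projection is just clamping. The sketch below is a stripped-down illustration of that special case, not the full method, which handles general linear constraints.

```python
def projected_gradient(grad, x0, lo, hi, step=0.1, iters=100):
    """Projected-gradient descent on box constraints lo <= x <= hi:
    take a steepest-descent step, then project (clamp) back into
    the box.  `grad` returns the gradient at a point."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [min(max(xi - step * gi, l), h)
             for xi, gi, l, h in zip(x, g, lo, hi)]
    return x
```

For instance, minimizing (x − 3)^2 subject to 0 ≤ x ≤ 2 converges to the active bound x = 2, exactly the behaviour a VAR-planning ORPF relies on when an optimal setting sits on an equipment limit.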
Energy Technology Data Exchange (ETDEWEB)
Kim, Do Yun, E-mail: dykim0129@kaist.ac.kr; NO, Hee Cheon, E-mail: hcno@kaist.ac.kr; Kim, Ho Sik, E-mail: hskim25@kaist.ac.kr
2015-11-15
Highlights: • Optimization methodology for fin geometry on the steel containment is established. • Optimum spacing is 7 cm in the PASS containment. • Optimum thickness is 0.9-1.8 cm when the fin height is 10-25 cm. • Optimal fin geometry is determined for a given fin height by an overall-effectiveness correlation. • 13% of material volume and 43% of containment volume are reduced by using fins. - Abstract: Heat removal capability through a steel containment is important in accident situations to preserve the integrity of a nuclear power plant which adopts a steel containment concept. The heat transfer rate can be enhanced by using fins on the external surface of the steel containment. The fins, however, increase flow resistance and thereby degrade the heat transfer rate at the same time. Therefore, this study investigates an optimization methodology for large-scale fin geometry on a vertical base where the natural convection flow regime is turbulent. The rectangular plate fins adopted in the steel containment of a Public Acceptable Simple SMR (PASS) are used as a reference. The heat transfer rate through the fins is obtained from CFD tools. In order to optimize fin geometry, an overall-effectiveness concept is introduced as a fin performance parameter. The optimization procedure starts by finding the optimum spacing; then the optimum thickness is calculated, and finally an optimal fin geometry is suggested. Scale analysis is conducted to show the existence of an optimum spacing, which turns out to be 7 cm in the case of PASS. The optimum thickness is obtained from the overall-effectiveness correlation, which is derived from a total heat transfer coefficient correlation. The total heat transfer coefficient correlation of a vertical fin array is suggested, considering both natural convection and radiation. However, the optimum thickness changes as the fin height varies. Therefore, optimal fin geometry is obtained as a function of fin height. With the assumption that the heat...
Optimization and spatial pattern of large-scale aquifer thermal energy storage
Sommer, W.T.; Valstar, J.; Leusbrock, I.; Grotenhuis, J.T.C.; Rijnaarts, H.H.M.
2015-01-01
Aquifer thermal energy storage (ATES) is a cost-effective technology that enables the reduction of energy use and CO2 emissions associated with the heating and cooling of buildings by storage and recovery of large quantities of thermal energy in the subsurface. Reducing the distance between wells in
Efficient large-scale graph data optimization for intelligent video surveillance
Shang, Quanhong; Zhang, Shujun; Wang, Yanbo; Sun, Chen; Wang, Zepeng; Zhang, Luming
2017-08-01
Society is rapidly adopting cameras in a wide variety of locations and applications: traffic monitoring, parking-lot surveillance, vehicles and smart spaces. These cameras provide data every day that must be analyzed in an effective way. Recent advances in sensor manufacturing, communications and computing are stimulating the development of new applications that transform the traditional vision system into a pervasive smart-camera network. The analysis of visual cues in multi-camera networks enables a wide range of applications, from smart-home and office automation to wide-area and traffic surveillance. Dense camera networks, in which most cameras have large overlapping fields of view, are well studied; here we focus on sparse camera networks. A sparse camera network covers a large area with as few cameras as possible, so that most cameras do not overlap each other's field of view. This setting is challenging due to the lack of knowledge of the network topology, the changes in appearance and motion of targets across different views, and the difficulty of understanding complex events in the network. In this review, we present a comprehensive survey of recent results addressing topology learning, object appearance modeling and global activity understanding in sparse camera networks. In addition, some current open research issues are discussed.
Weighted modularity optimization for crisp and fuzzy community detection in large-scale networks
Cao, Jie; Bu, Zhan; Gao, Guangliang; Tao, Haicheng
2016-11-01
Community detection is a classic and very difficult task in the field of complex network analysis, principally because of its applications in domains such as social or biological network analysis. One of the most widely used techniques for community detection in networks is the maximization of the quality function known as modularity. However, existing work has shown that modularity maximization algorithms for community detection may fail to resolve communities of small size. Here we present a new community detection method, which is able to find crisp and fuzzy communities in undirected and unweighted networks by maximizing weighted modularity. The algorithm derives new edge weights using the cosine similarity in order to circumvent the resolution limit problem. Then a new local moving heuristic based on weighted modularity optimization is proposed to cluster the updated network. Finally, the set of potentially attractive clusters for each node is computed, to further uncover the crisp and fuzzy partitions of the network. We give demonstrative applications of the algorithm to a set of synthetic benchmark networks and six real-world networks and find that it outperforms current state-of-the-art proposals (even those aimed at finding overlapping communities) in terms of quality and scalability.
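The edge-reweighting step lends itself to a compact sketch. The following is a simplified, hypothetical rendering (not the authors' code): each edge is reweighted by the cosine similarity of its endpoints' adjacency rows, and standard Newman modularity is then evaluated on the reweighted graph; the local moving heuristic and the fuzzy stage are omitted.

```python
import numpy as np

def cosine_edge_weights(A):
    """Reweight each edge by the cosine similarity of its endpoints'
    adjacency rows (a simplified version of the reweighting step)."""
    norms = np.linalg.norm(A, axis=1)
    n = A.shape[0]
    W = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            if A[i, j] and norms[i] > 0 and norms[j] > 0:
                W[i, j] = A[i] @ A[j] / (norms[i] * norms[j])
    return W

def weighted_modularity(W, labels):
    """Standard Newman modularity evaluated on the weighted graph W."""
    m = W.sum() / 2.0
    k = W.sum(axis=1)
    n = W.shape[0]
    Q = 0.0
    for i in range(n):
        for j in range(n):
            if labels[i] == labels[j]:
                Q += W[i, j] - k[i] * k[j] / (2 * m)
    return Q / (2 * m)

# Toy graph: two triangles joined by a single bridge edge (2-3)
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
])
W = cosine_edge_weights(A)
good = weighted_modularity(W, [0, 0, 0, 1, 1, 1])  # natural split
bad = weighted_modularity(W, [0, 1, 0, 1, 0, 1])   # mixed split
print(good > bad)
```

On this toy graph the bridge edge gets cosine weight zero (its endpoints share no neighbors), so the natural two-community split scores strictly higher than a mixed one.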
Hydro-economic Modeling: Reducing the Gap between Large Scale Simulation and Optimization Models
Forni, L.; Medellin-Azuara, J.; Purkey, D.; Joyce, B. A.; Sieber, J.; Howitt, R.
2012-12-01
The integration of hydrological and socio-economic components into hydro-economic models has become essential for water resources policy and planning analysis. In this study we integrate the economic value of water in irrigated agricultural production using SWAP (a StateWide Agricultural Production Model for California) and WEAP (Water Evaluation and Planning System), a climate-driven hydrological model. The integration of the models is performed using a step-function approximation of the water demand curves from SWAP, and by relating the demand tranches to the priority scheme in WEAP. To do so, a modified version of SWAP called SWEAP was developed, which has the Planning Area delimitations of WEAP, a Maximum Entropy Model to estimate evenly sized steps (tranches) of the derived water demand functions, and the translation of water tranches into cropland. In addition, a modified version of WEAP called ECONWEAP was created, with minor structural changes for the incorporation of land decisions from SWEAP and a series of iterations run via an external VBA script. This paper shows the validity of this integration by comparing revenues from WEAP versus ECONWEAP, as well as an assessment of the tranche approximation. Results show a significant increase in the resulting agricultural revenues for our case study in California's Central Valley using ECONWEAP while maintaining the same hydrology and regional water flows. These results highlight the gains from allocating water based on its economic value compared to priority-based water allocation systems. Furthermore, this work shows the potential of integrating optimization and simulation-based hydrologic models like ECONWEAP.
Tsvetkov, S V; Petreev, I V; Greben'kov, S V
2011-09-01
The article contains results of studies that identified the main features of radiological safety, developed academic and research recommendations for improving radiological safety in treatment-and-prevention institutions (TPI), and created a method for calculating the authorized staffing needed by radiological safety services. It was established that the following actions are least often fulfilled: radiation control, organization of radiation safety education, authorization for work with ionizing radiation for both military and civilian staff, and maintenance of documentation. We suggest that a promising direction for optimizing radiological safety in large-scale TPI is the creation of a dedicated structure that ensures comprehensive fulfillment of regulatory requirements, e.g. a radiological safety service.
Quoc, Tran Dinh; Diehl, Moritz
2011-01-01
A new algorithm for solving large-scale separable convex optimization problems is proposed. The basic idea is to combine three techniques: Lagrangian dual decomposition, the excessive gap technique, and smoothing. The main advantage of this algorithm is that it dynamically updates the smoothness parameters, which leads to numerically stable performance. The convergence of the algorithm is proved under weak conditions imposed on the original problem. The worst-case complexity is estimated to be $O(1/k)$, where $k$ is the iteration counter. The algorithm is then coupled with a dual scheme to construct a switching variant of dual decomposition. Implementation issues are discussed and a theoretical comparison is given. Numerical results are presented to confirm the theoretical development.
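Plain Lagrangian dual decomposition, the first of the three ingredients, can be illustrated on a toy separable problem. This sketch uses ordinary dual subgradient ascent with a fixed step, not the excessive-gap/smoothing machinery of the paper; the problem data are made up.

```python
# Dual decomposition on: minimize (x1-3)^2 + (x2-1)^2  s.t.  x1 + x2 = 1.
# Stationarity gives x1 = 3 - lam/2, x2 = 1 - lam/2; feasibility forces
# lam = 3, so the optimum is (x1, x2) = (1.5, -0.5).
a = [3.0, 1.0]
lam = 0.0        # Lagrange multiplier for the coupling constraint
step = 0.5
for _ in range(200):
    # Each subproblem min_x (x - a_i)^2 + lam * x is solved independently
    x = [ai - lam / 2.0 for ai in a]
    residual = sum(x) - 1.0       # coupling-constraint violation
    lam += step * residual        # dual (sub)gradient ascent
print(x)  # converges to [1.5, -0.5]
```

The key point is that the coupled problem splits into independent subproblems once the coupling constraint is priced by the multiplier; only the scalar `lam` is exchanged between them.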
Shibasaki, Ryuichi; Watanabe, Tomihiro; Ieda, Hitoshi
This paper develops a large-scale simulation model of the international maritime container shipping industry that considers the optimal behaviors of both shippers and ocean-going carriers, in order to measure the impact of port and international logistics policies for each country, including Japan. Concretely, the authors develop a short-term model (an income-maximization model of carriers) including shippers' choice of carrier when maritime cargo shipping demand between ports is given, and a mid-term model (a Nash equilibrium model of shippers and carriers) including shippers' choice of import/export port and hinterland transport route and carriers' profit-maximization behavior when cargo shipping demand between regions is given. The developed model is applied to the actual large-scale international maritime container shipping network in Eastern Asia. In a trial calculation based on actual cargo shipping demand, the performance of the model is validated in terms of convergence and reproducibility. The sensitivity of the model output to actual port policies is also confirmed.
Pedinotti, V.; Boone, A.; Ricci, S.; Biancamaria, S.; Mognard, N.
2014-11-01
During the last few decades, satellite measurements have been widely used to study the continental water cycle, especially in regions where in situ measurements are not readily available. The future Surface Water and Ocean Topography (SWOT) satellite mission will deliver maps of water surface elevation (WSE) with an unprecedented resolution and provide observation of rivers wider than 100 m and water surface areas greater than approximately 250 x 250 m over continental surfaces between 78° S and 78° N. This study aims to investigate the potential of SWOT data for parameter optimization for large-scale river routing models. The method consists in applying a data assimilation approach, the extended Kalman filter (EKF) algorithm, to correct the Manning roughness coefficients of the ISBA (Interactions between Soil, Biosphere, and Atmosphere)-TRIP (Total Runoff Integrating Pathways) continental hydrologic system. Parameters such as the Manning coefficient, used within such models to describe water basin characteristics, are generally derived from geomorphological relationships, which leads to significant errors at reach and large scales. The current study focuses on the Niger Basin, a transboundary river. Since the SWOT observations are not available yet and also to assess the proposed assimilation method, the study is carried out under the framework of an observing system simulation experiment (OSSE). It is assumed that modeling errors are only due to uncertainties in the Manning coefficient. The true Manning coefficients are then supposed to be known and are used to generate synthetic SWOT observations over the period 2002-2003. The impact of the assimilation system on the Niger Basin hydrological cycle is then quantified. The optimization of the Manning coefficient using the EKF (extended Kalman filter) algorithm over an 18-month period led to a significant improvement of the river water levels. The relative bias of the water level is globally improved (a 30
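The parameter-correction idea above can be sketched as a toy twin experiment in the same spirit as the OSSE: a "true" Manning coefficient generates synthetic observations, and a scalar EKF-style update pulls a deliberately biased first guess toward it. The wide-channel normal-depth formula stands in for ISBA-TRIP, all numbers are illustrative, and the error covariance is held fixed for simplicity.

```python
import math

def water_level(n, q=500.0, w=200.0, s=1e-4):
    """Normal-depth water level from Manning's equation for a wide
    rectangular channel: h = (n*q / (w*sqrt(s)))**(3/5)."""
    return (n * q / (w * math.sqrt(s))) ** 0.6

n_true = 0.035      # "truth" used to generate synthetic observations (OSSE)
n_est = 0.060       # deliberately biased first guess
P, R = 1e-4, 1e-4   # parameter and observation error variances (held fixed)

z_obs = water_level(n_true)        # synthetic SWOT-like observation
for _ in range(20):
    z_sim = water_level(n_est)
    # Linearize the observation operator around the current estimate
    dn = 1e-5
    H = (water_level(n_est + dn) - z_sim) / dn
    K = P * H / (H * P * H + R)    # Kalman gain (scalar case)
    n_est += K * (z_obs - z_sim)   # analysis update
print(round(n_est, 4))  # → 0.035
```

Even this one-parameter caricature shows the mechanism: the gain weights the water-level innovation by the sensitivity of the simulated level to the Manning coefficient, so the assimilation removes the parameter bias within a few updates.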
Directory of Open Access Journals (Sweden)
Tarek H. M. Abou-El-Enien
2015-04-01
This paper extends the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method to solve Two-Level Large Scale Linear Multiobjective Optimization Problems with Stochastic Parameters in the right-hand side of the constraints (TL-LSLMOP-SPrhs) of block angular structure. In order to obtain a compromise (satisfactory) solution to the (TL-LSLMOP-SPrhs) of block angular structure using the proposed TOPSIS method, modified formulas for the distance function from the positive ideal solution (PIS) and the distance function from the negative ideal solution (NIS) are proposed and modeled to include all the objective functions of the two levels. In each level, the dp-metric is used as the measure of "closeness", and the k-dimensional objective space is reduced to a two-dimensional objective space by a first-order compromise procedure. The membership functions of fuzzy set theory are used to represent the satisfaction level for both criteria. A single-objective programming problem is obtained by using the max-min operator for the second-order compromise operation. A decomposition algorithm for generating a compromise (satisfactory) solution through the TOPSIS approach is provided, in which the first-level decision maker (FLDM) is asked to specify the relative importance of the objectives. Finally, an illustrative numerical example is given to clarify the main results developed in the paper.
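For reference, the classic single-level TOPSIS ranking that the paper extends can be written in a few lines. This is the textbook method, not the two-level stochastic extension; the decision matrix and weights below are made up.

```python
import numpy as np

def topsis(D, weights, benefit):
    """Rank alternatives (rows of D) by relative closeness to the
    positive ideal solution (PIS) and distance from the negative
    ideal solution (NIS)."""
    R = D / np.linalg.norm(D, axis=0)        # vector normalization
    V = R * weights                          # weighted normalized matrix
    pis = np.where(benefit, V.max(axis=0), V.min(axis=0))
    nis = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - pis, axis=1)  # distance to PIS
    d_neg = np.linalg.norm(V - nis, axis=1)  # distance to NIS
    return d_neg / (d_pos + d_neg)           # closeness in [0, 1]

# Three alternatives, two benefit criteria and one cost criterion
D = np.array([[7.0, 9.0, 9.0],
              [8.0, 7.0, 8.0],
              [9.0, 6.0, 8.0]])
closeness = topsis(D, weights=np.array([0.5, 0.3, 0.2]),
                   benefit=np.array([True, True, False]))
best = int(np.argmax(closeness))
print(closeness, best)
```

The two-level extension in the paper replaces these Euclidean distances with modified distance functions spanning both levels' objectives; the closeness-to-ideal ranking logic is the part carried over.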
Corbin, Charles D.
Demand management is an important component of the emerging Smart Grid, and a potential solution to the supply-demand imbalance occurring increasingly as intermittent renewable electricity is added to the generation mix. Model predictive control (MPC) has shown great promise for controlling HVAC demand in commercial buildings, making it an ideal solution to this problem. MPC is believed to hold similar promise for residential applications, yet very few examples exist in the literature despite a growing interest in residential demand management. This work explores the potential for residential buildings to shape electric demand at the distribution feeder level in order to reduce peak demand, reduce system ramping, and increase load factor using detailed sub-hourly simulations of thousands of buildings coupled to distribution power flow software. More generally, this work develops a methodology for the directed optimization of residential HVAC operation using a distributed but directed MPC scheme that can be applied to today's programmable thermostat technologies to address the increasing variability in electric supply and demand. Case studies incorporating varying levels of renewable energy generation demonstrate the approach and highlight important considerations for large-scale residential model predictive control.
DEFF Research Database (Denmark)
Bache, Anja Margrethe
2010-01-01
World famous architects today challenge the exposure of concrete in their architecture. It is my hope to be able to complement these. I try to develop new aesthetic potentials for concrete and ceramics, at large scales that have not been seen before in the ceramic area. It is expected to result...
Directory of Open Access Journals (Sweden)
V. Pedinotti
2014-04-01
During the last few decades, satellite measurements have been widely used to study the continental water cycle, especially in regions where in situ measurements are not readily available. The future Surface Water and Ocean Topography (SWOT) satellite mission will deliver maps of water surface elevation (WSE) with an unprecedented resolution and provide observation of rivers wider than 100 m and water surface areas greater than approximately 250 m × 250 m over continental surfaces between 78° S and 78° N. This study aims to investigate the potential of SWOT data for parameter optimization for large scale river routing models which are typically employed in Land Surface Models (LSM) for global scale applications. The method consists in applying a data assimilation approach, the Extended Kalman Filter (EKF) algorithm, to correct the Manning roughness coefficients of the ISBA-TRIP Continental Hydrologic System. Indeed, parameters such as the Manning coefficient, used within such models to describe water basin characteristics, are generally derived from geomorphological relationships, which might have locally significant errors. The current study focuses on the Niger basin, a trans-boundary river, which is the main source of fresh water for all the riparian countries. In addition, geopolitical issues in this region can restrict the exchange of hydrological data, so SWOT should help improve this situation by making hydrological data freely available. In a previous study, the model was first evaluated against in-situ and satellite-derived data sets within the framework of the international African Monsoon Multi-disciplinary Analysis (AMMA) project. Since the SWOT observations are not available yet, and also to assess the proposed assimilation method, the study is carried out under the framework of an Observing System Simulation Experiment (OSSE). It is assumed that modeling errors are only due to uncertainties in the Manning coefficient. The true
Sizing and Siting of Large-Scale Batteries in Transmission Grids to Optimize the Use of Renewables
Fiorini, Laura; Pagani, Giuliano; Pelacchi, P.; Poli, Davide; Aiello, Marco
2017-01-01
Power systems are a recent field of application of Complex Network research, which allows to perform large scale studies and evaluations. Based on this theory, a power grid is modeled as a weighted graph with several kinds of nodes and edges, and further analysis can help in investigating the behavi
Uritsky, V. M.; Davila, J. M.; Jones, S. I.
2014-12-01
Solar Probe Plus and Solar Orbiter will provide detailed measurements in the inner heliosphere magnetically connected with the topologically complex and eruptive solar corona. Interpretation of these measurements will require accurate reconstruction of the large-scale coronal magnetic field. In a related presentation by S. Jones et al., we argue that such reconstruction can be performed using photospheric extrapolation methods constrained by white-light coronagraph images. Here, we present the image-processing component of this project dealing with an automated segmentation of fan-like coronal loop structures. In contrast to the existing segmentation codes designed for detecting small-scale closed loops in the vicinity of active regions, we focus on the large-scale geometry of the open-field coronal features observed at significant radial distances from the solar surface. The coronagraph images used for the loop segmentation are transformed into a polar coordinate system and undergo radial detrending and initial noise reduction. The preprocessed images are subject to an adaptive second order differentiation combining radial and azimuthal directions. An adjustable thresholding technique is applied to identify candidate coronagraph features associated with the large-scale coronal field. A blob detection algorithm is used to extract valid features and discard noisy data pixels. The obtained features are interpolated using higher-order polynomials which are used to derive empirical directional constraints for magnetic field extrapolation procedures based on photospheric magnetograms.
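The detrending and adaptive-thresholding stages described above can be mimicked on a synthetic polar image. This is a hypothetical stand-in (per-radius median detrending and a MAD-based threshold); the second-order differentiation, blob detection, and polynomial fitting of the actual pipeline are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic coronagraph-like image in polar coordinates:
# rows = radius, columns = azimuth. Brightness falls off with radius,
# and a faint ray-like feature sits at a fixed azimuth (column 90).
nr, na = 100, 180
r = np.arange(1, nr + 1)[:, None]
img = 100.0 / r + 0.05 * rng.standard_normal((nr, na))
img[:, 90] += 0.5  # the "coronal ray" to be recovered

# Radial detrending: remove the per-radius median brightness so the
# large-scale radial fall-off does not swamp the faint feature.
detrended = img - np.median(img, axis=1, keepdims=True)

# Adaptive threshold: keep pixels several MADs above the per-radius spread.
mad = np.median(np.abs(detrended), axis=1, keepdims=True)
mask = detrended > 5.0 * mad

# The recovered feature should concentrate at azimuth column 90.
hits_per_column = mask.sum(axis=0)
print(int(hits_per_column.argmax()))  # → 90
```

Working per radius is what makes the threshold "adaptive" here: both the trend removed and the noise scale are estimated independently at each radial distance, mirroring the radial detrending step in the text.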
DEFF Research Database (Denmark)
Lund, Henrik
2006-01-01
This article presents the results of analyses of large-scale integration of wind power, photo voltaic (PV) and wave power into a Danish reference energy system. The possibility of integrating Renewable Energy Sources (RES) into the electricity supply is expressed in terms of the ability to avoid...... the total input is above 80% of demand, PV should cover 20% and wave power 30%. Meanwhile the combination of different sources is alone far from a solution to large-scale integration of fluctuating resources. This measure is to be seen in combination with other measures such as investment in flexible energy...... excess electricity production. The different sources are analysed in the range of an electricity production from 0 to 100% of the electricity demand. The excess production is found from detailed energy system analyses on the computer model EnergyPLAN. The analyses have taken into account that certain...
DEFF Research Database (Denmark)
Heller, Alfred
2001-01-01
The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out...... model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for application of alternative designs, including high-performance solar collector types (trough solar collectors, vaccum pipe collectors......). Simulation programs are proposed as control supporting tool for daily operation and performance prediction of central solar heating plants. Finaly the CSHP technolgy is put into persepctive with respect to alternatives and a short discussion on the barries and breakthrough of the technology are given....
Energy Technology Data Exchange (ETDEWEB)
Carlberg, Kevin Thomas [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Drohmann, Martin [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Tuminaro, Raymond S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Computational Mathematics; Boggs, Paul T. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Optimization and Uncertainty Estimation
2014-10-01
Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear reduced-order models (ROMs) to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties--such as energy conservation and symplectic time-evolution maps--are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity--defined as the number of Newton-like iterations performed over the course of the simulation--by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order
Zhang, Jingwen; Wang, Xu; Liu, Pan; Lei, Xiaohui; Li, Zejun; Gong, Wei; Duan, Qingyun; Wang, Hao
2017-01-01
The optimization of a large-scale reservoir system is time-consuming due to its intrinsic characteristics of non-commensurable objectives and high dimensionality. One way to solve the problem is to employ an efficient multi-objective optimization algorithm in the derivation of large-scale reservoir operating rules. In this study, the Weighted Multi-Objective Adaptive Surrogate Model Optimization (WMO-ASMO) algorithm is used. It consists of three steps: (1) simplifying the large-scale reservoir operating rules with an aggregation-decomposition model, (2) identifying the most sensitive parameters through multivariate adaptive regression splines (MARS) for dimension reduction, and (3) reducing computational cost and speeding up the search process with WMO-ASMO, embedded with the weighted non-dominated sorting genetic algorithm II (WNSGAII). An intercomparison of the non-dominated sorting genetic algorithm II (NSGAII), WNSGAII and WMO-ASMO is conducted on the large-scale reservoir system of the Xijiang river basin in China. Results indicate that: (1) WNSGAII surpasses NSGAII in the median annual power generation, increased by 1.03% (from 523.29 to 528.67 billion kW h), and the median ecological index, improved by 3.87% (from 1.879 to 1.809), with 500 simulations, because of the weighted crowding distance; and (2) WMO-ASMO outperforms NSGAII and WNSGAII in terms of better solutions (annual power generation of 530.032 billion kW h and ecological index of 1.675) with 1000 simulations, with computational time reduced by 25% (from 10 h to 8 h) with 500 simulations. Therefore, the proposed method is proved to be more efficient and provides a better Pareto frontier.
Rizk, Mohamed Abdo; El-Sayed, Shimaa Abd El-Salam; Terkawi, Mohamed Alaa; Youssef, Mohamed Ahmed; El Said, El Said El Shirbini; Elsayed, Gehad; El-Khodery, Sabry; El-Ashker, Maged; Elsify, Ahmed; Omar, Mosaab; Salama, Akram; Yokoyama, Naoaki; Igarashi, Ikuo
2015-01-01
A rapid and accurate assay for evaluating antibabesial drugs on a large scale is required for the discovery of novel chemotherapeutic agents against Babesia parasites. In the current study, we evaluated the usefulness of a fluorescence-based assay for determining the efficacies of antibabesial compounds against bovine and equine hemoparasites in in vitro cultures. Three different hematocrits (HCTs; 2.5%, 5%, and 10%) were used without daily replacement of the medium. The results of a high-throughput screening assay revealed that the best HCT was 2.5% for bovine Babesia parasites and 5% for equine Babesia and Theileria parasites. The IC50 values of diminazene aceturate obtained by fluorescence and microscopy did not differ significantly. Likewise, the IC50 values of luteolin, pyronaridine tetraphosphate, nimbolide, gedunin, and enoxacin did not differ between the two methods. In conclusion, our fluorescence-based assay uses low HCT and does not require daily replacement of culture medium, making it highly suitable for in vitro large-scale drug screening against Babesia and Theileria parasites that infect cattle and horses.
Large scale tracking algorithms
Energy Technology Data Exchange (ETDEWEB)
Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
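As a contrast to the combinatorial growth of multi-hypothesis tracking, the kind of cheap gated nearest-neighbour association often used as a baseline can be sketched as follows. This is an illustrative toy, not one of the algorithms evaluated in the report.

```python
import math

def associate(tracks, detections, gate=2.0):
    """Greedy gated nearest-neighbour assignment of detections to
    track predictions: a linear-cost alternative to multi-hypothesis
    tracking that avoids its combinatorial explosion."""
    # Consider all track/detection pairs in order of increasing distance
    cand = sorted(
        (math.dist(t, d), ti, di)
        for ti, t in enumerate(tracks)
        for di, d in enumerate(detections)
    )
    pairs, used_tracks = [], set()
    free = set(range(len(detections)))
    for dist, ti, di in cand:
        if dist > gate:
            break                      # everything after is outside the gate
        if ti not in used_tracks and di in free:
            pairs.append((ti, di))
            used_tracks.add(ti)
            free.discard(di)
    return pairs, sorted(free)         # matches + unassigned detections

tracks = [(0.0, 0.0), (10.0, 10.0)]        # predicted track positions
detections = [(10.5, 9.8), (0.2, -0.1), (30.0, 30.0)]
pairs, unmatched = associate(tracks, detections)
print(pairs, unmatched)  # → [(0, 1), (1, 0)] [2]
```

The gate is what keeps this tractable on large scenes: pairs farther apart than the gate are never considered, and each track commits to its single nearest detection rather than branching over hypotheses.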
Energy Technology Data Exchange (ETDEWEB)
Ramamurthy, Byravamurthy [University of Nebraska-Lincoln
2014-05-05
In this project, we developed scheduling frameworks and algorithms for dynamic bandwidth demands for large-scale science applications. Apart from theoretical approaches such as integer linear programming, tabu search and genetic algorithm heuristics, we utilized practical data from the ESnet OSCARS project (from our DOE lab partners) to conduct realistic simulations of our approaches. We disseminated our work through conference paper presentations, journal papers and a book chapter. In this project we addressed the problem of scheduling lightpaths over optical wavelength division multiplexed (WDM) networks, publishing several conference and journal papers on this topic. We also addressed the problem of joint allocation of computing, storage and networking resources in Grid/Cloud networks and proposed energy-efficient mechanisms for operating optical WDM networks.
Directory of Open Access Journals (Sweden)
Wei Tu
2015-10-01
Vehicle routing optimization (VRO) designs the best routes to reduce travel cost, energy consumption, and carbon emissions. Due to their non-deterministic polynomial-time hard (NP-hard) complexity, many VROs involved in real-world applications require too much computing effort. Shortening the computing time of VRO is a great challenge for state-of-the-art spatial optimization algorithms. From a spatial-temporal perspective, this paper presents a spatial-temporal Voronoi diagram-based heuristic approach for large-scale vehicle routing problems with time windows (VRPTW). Considering time constraints, a spatial-temporal Voronoi distance is derived from the spatial-temporal Voronoi diagram to find near neighbors in the space-time searching context. A Voronoi distance decay strategy that integrates a time warp operation is proposed to accelerate the local search procedure. A spatial-temporal feature-guided search is developed to improve unpromising micro route structures. Experiments on VRPTW benchmarks and real-world instances are conducted to verify performance. The results demonstrate that the proposed approach is competitive with state-of-the-art heuristics and achieves high-quality solutions for large-scale instances of VRPTW in a short time. This novel approach will contribute to the spatial decision support community by providing an effective vehicle routing optimization method for large transportation applications in both the public and private sectors.
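The notion of a spatial-temporal distance can be illustrated with a deliberately crude stand-in: travel time plus the waiting a time window would induce, used inside a greedy route builder. The paper's Voronoi diagram, decay strategy, and feature-guided search are not reproduced here; all instance data are made up.

```python
import math

def st_distance(pos, t, cust):
    """Spatial-temporal distance: travel time plus the wait incurred if
    we arrive before the customer's ready time (a crude stand-in for
    the paper's spatial-temporal Voronoi distance)."""
    travel = math.dist(pos, cust["xy"])
    wait = max(0.0, cust["ready"] - (t + travel))
    return travel + wait

def greedy_route(depot, customers):
    """Build one route by repeatedly visiting the feasible customer
    with the smallest spatial-temporal distance."""
    pos, t, route = depot, 0.0, []
    todo = dict(enumerate(customers))
    while todo:
        # Feasible = reachable before the customer's due time
        feas = {i: c for i, c in todo.items()
                if t + math.dist(pos, c["xy"]) <= c["due"]}
        if not feas:
            break          # remaining customers need another vehicle
        i = min(feas, key=lambda i: st_distance(pos, t, feas[i]))
        c = todo.pop(i)
        t = max(t + math.dist(pos, c["xy"]), c["ready"]) + c["service"]
        pos = c["xy"]
        route.append(i)
    return route

customers = [
    {"xy": (1.0, 0.0), "ready": 0.0, "due": 5.0, "service": 1.0},
    {"xy": (2.0, 0.0), "ready": 3.0, "due": 8.0, "service": 1.0},
    {"xy": (0.0, 2.0), "ready": 0.0, "due": 4.0, "service": 1.0},
]
print(greedy_route((0.0, 0.0), customers))  # → [0, 1]
```

Folding the expected wait into the neighbor metric is the essential idea: a spatially near customer whose window forces a long wait can rank worse than a slightly farther one that is ready on arrival.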
Meda, Shashwath A.; Giuliani, Nicole R.; Calhoun, Vince D.; Jagannathan, Kanchana; Schretlen, David J.; Pulver, Anne; Cascella, Nicola; Keshavan, Matcheri; Kates, Wendy; Buchanan, Robert; Sharma, Tonmoy; Pearlson, Godfrey D.
2008-01-01
Background Many studies have employed voxel-based morphometry (VBM) of MRI images as an automated method of investigating cortical gray matter differences in schizophrenia. However, results from these studies vary widely, likely due to different methodological or statistical approaches. Objective To use VBM to investigate gray matter differences in schizophrenia in a sample significantly larger than any published to date, and to increase statistical power sufficiently to reveal differences missed in smaller analyses. Methods Magnetic resonance whole brain images were acquired from four geographic sites, all using the same model 1.5T scanner and software version, and combined to form a sample of 200 patients with both first episode and chronic schizophrenia and 200 healthy controls, matched for age, gender and scanner location. Gray matter concentration was assessed and compared using optimized VBM. Results Compared to the healthy controls, schizophrenia patients showed significantly less gray matter concentration in multiple cortical and subcortical regions, some previously unreported. Overall, we found lower concentrations of gray matter in regions identified in prior studies, most of which reported only subsets of the affected areas. Conclusions Gray matter differences in schizophrenia are most comprehensively elucidated using a large, diverse and representative sample. PMID:18378428
Bousserez, Nicolas
2016-01-01
This paper provides a detailed theoretical analysis of methods to approximate the solutions of high-dimensional (>10^6) linear Bayesian problems. An optimal low-rank projection that maximizes the information content of the Bayesian inversion is proposed and efficiently constructed using a scalable randomized SVD algorithm. Useful optimality results are established for the associated posterior error covariance matrix and posterior mean approximations, which are further investigated in a numerical experiment consisting of a large-scale atmospheric tracer transport source-inversion problem. This method proves to be a robust and efficient approach to dimension reduction, as well as a natural framework to analyze the information content of the inversion. Possible extensions of this approach to the non-linear framework in the context of operational numerical weather forecast data assimilation systems based on the incremental 4D-Var technique are also discussed, and a detailed implementation of a new Randomized Incr...
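The scalable randomized SVD mentioned above rests on a randomized range finder: multiply the matrix by a Gaussian test matrix and orthonormalize the result, which captures the dominant column space with high probability. A minimal pure-Python sketch on a small, exactly rank-2 matrix (illustrative only, not the paper's implementation):

```python
import random

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)]
            for row in A]

def orthonormalize(vectors, tol=1e-10):
    """Classical Gram-Schmidt, dropping numerically dependent vectors."""
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:
            d = sum(x * y for x, y in zip(q, w))
            w = [x - d * y for x, y in zip(w, q)]
        norm = sum(x * x for x in w) ** 0.5
        if norm > tol:
            basis.append([x / norm for x in w])
    return basis

def randomized_range(A, k, oversample=2):
    """Randomized range finder, the core of a randomized SVD: sample the
    range of A with a Gaussian test matrix, then orthonormalize A @ Omega."""
    n = len(A[0])
    omega = [[random.gauss(0, 1) for _ in range(k + oversample)]
             for _ in range(n)]
    Y = matmul(A, omega)                   # columns sample range(A)
    return orthonormalize(list(zip(*Y)))   # rows form an orthonormal basis

# Exactly rank-2 symmetric 6x6 test matrix A = u0 u0^T + u1 u1^T
random.seed(0)
u = [[1, 0, 2, 0, 1, 0], [0, 1, 0, 3, 0, 1]]
A = [[u[0][i] * u[0][j] + u[1][i] * u[1][j] for j in range(6)]
     for i in range(6)]
Q = randomized_range(A, k=2)
QQT_A = matmul([list(c) for c in zip(*Q)], matmul(Q, A))  # project A onto span(Q)
err = max(abs(A[i][j] - QQT_A[i][j]) for i in range(6) for j in range(6))
print(err < 1e-8)
```

For a truly low-rank operator the projection error vanishes; in the high-dimensional inverse problems the paper targets, the same construction yields a near-optimal low-rank approximation at the cost of a few matrix-vector products.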
A hybrid-optimization method for large-scale non-negative full regularization in image restoration
Guerrero, J.; Raydan, M.; Rojas, M.
2011-01-01
We describe a new hybrid-optimization method for solving the full-regularization problem of computing both the regularization parameter and the corresponding regularized solution in 1-norm and 2-norm Tikhonov regularization with additional non-negativity constraints. The approach combines the simu
Large deviations and portfolio optimization
Sornette, Didier
Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends by a general functional integral formulation. A major item is that risk, usually thought of as one-dimensional in the conventional mean-variance approach, has to be addressed by the full distribution of losses. Furthermore, the time-horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use the theory of Cramér for large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return and the role of large deviations in multiplicative processes, and the different optimal strategies for the investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.
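For reference, Cramér's theorem, which the abstract invokes, controls the tail of the empirical mean of i.i.d. returns \(X_i\) through a Legendre transform of the cumulant generating function:

```latex
P\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i \ge x\right) \asymp e^{-n I(x)},
\qquad
I(x) \;=\; \sup_{\theta \in \mathbb{R}} \bigl\{\theta x - \Lambda(\theta)\bigr\},
\qquad
\Lambda(\theta) \;=\; \log \mathbb{E}\!\left[e^{\theta X_1}\right].
```

The rate function \(I(x)\) encodes the full distribution of losses, which is why large-deviation control of a portfolio goes beyond the one-dimensional mean-variance picture.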
Energy Technology Data Exchange (ETDEWEB)
Heuer, Volker; Loeser, Klaus [ALD Vacuum Technologies GmbH, Hanau (Germany)
2011-09-15
The energy optimization of thermoprocessing equipment is of great ecological and economic importance. Thermoprocessing equipment consumes up to 40% of the energy used in industrial applications. It is therefore necessary to increase the energy efficiency of thermoprocessing equipment in order to meet the EU's targets for reducing greenhouse gas emissions. To exploit the potential for energy savings, it is essential to analyze and optimize the processes, plants and operating methods of electrically heated vacuum plants used in large-scale production. The process can be improved by accelerated heating through the application of 'convective heating'. In addition, higher process temperatures can be applied in diffusion-controlled thermochemical processes to accelerate the process significantly. Modular vacuum systems prove to be very energy-efficient because they adapt to changing production requirements step by step. An optimized insulation structure reduces thermal losses considerably. Energy management systems installed in the plant control optimally manage the energy used for start-up and shutdown of the plants while preventing peak energy loads. The use of new CFC fixtures also contributes to reducing the energy demand. (orig.)
Gkoulalas-Divanis, Aris
2014-01-01
Provides cutting-edge research in large-scale data analytics from diverse scientific areas Surveys varied subject areas and reports on individual results of research in the field Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field
Institute of Scientific and Technical Information of China (English)
唐功友; 孙亮
2005-01-01
The optimal control problem for nonlinear interconnected large-scale dynamic systems is considered. A successive approximation approach for designing the optimal controller is proposed with respect to quadratic performance indexes. Using this approach, the high-order, coupled, nonlinear two-point boundary value (TPBV) problem is transformed into a sequence of linear, decoupled TPBV problems. It is proven that the TPBV problem sequence uniformly converges to the optimal control for nonlinear interconnected large-scale systems. A suboptimal control law is obtained by using a finite iteration of the optimal control sequence.
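The replace-one-nonlinear-problem-by-a-sequence-of-linear-problems idea behind the successive approximation can be sketched on a scalar toy problem: freeze the nonlinearity along the previous trajectory, solve the resulting linear problem, and repeat. This is an illustrative initial-value analogue, not the paper's TPBV formulation.

```python
def solve_linear(a, forcing, x0, dt):
    """Forward-Euler solve of the decoupled linear problem x' = a*x + u(t)."""
    xs = [x0]
    for u in forcing:
        xs.append(xs[-1] + dt * (a * xs[-1] + u))
    return xs

def successive_approximation(a, f, x0, dt, n, sweeps=20):
    """Iterate: evaluate the nonlinearity f on the previous trajectory,
    treat it as a known forcing term, and re-solve the linear problem.
    For mild nonlinearities the trajectory sequence converges uniformly."""
    xs = [x0] * (n + 1)
    for _ in range(sweeps):
        xs = solve_linear(a, [f(x) for x in xs[:-1]], x0, dt)
    return xs

# x' = -x + 0.1*x**3, x(0) = 1, integrated to t = 1
traj = successive_approximation(-1.0, lambda x: 0.1 * x ** 3, 1.0, 0.01, 100)
print(round(traj[-1], 3))
```

Each sweep only requires a linear solve, which is the computational point of the approach: the coupling and nonlinearity are pushed into the forcing term.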
Prabakaran, G; Hoti, S L
2008-05-01
Reduction of water activity in formulations of the mosquito biocontrol agent Bacillus thuringiensis var. israelensis is very important for successful long-term storage. A protocol for spray drying of B. thuringiensis var. israelensis was developed by optimizing parameters such as inlet temperature and atomization type. An indigenous isolate of B. thuringiensis var. israelensis (VCRC B-17) was dried by freeze- and spray-drying methods, and the moisture content and mosquito larvicidal activity of the materials produced by the two methods were compared. Larvicidal activity was tested against early fourth-instar Aedes aegypti larvae. Results showed that the freeze-dried powders retained larvicidal activity fairly well, whereas the spray-dried powder moderately lost its larvicidal activity at different inlet temperatures. Between the two types of atomization, centrifugal atomization retained more activity than nozzle atomization. The optimum inlet temperature for both centrifugal and nozzle atomization was 160 °C. Keeping the outlet temperature constant at 70 °C, the moisture contents of the spray-dried powders produced by centrifugal atomization and of the freeze-dried powders were 10.23% and 11.80%, respectively. The LC50 values for the spray-dried and freeze-dried powders were 17.42 and 16.18 ng/mL, respectively. The spore count of the material before drying was 3 × 10^10 cfu/mL; after spray drying through nozzle and centrifugal atomization at inlet/outlet temperatures of 160 °C/70 °C it was 2.6 × 10^9 and 5.0 × 10^9 cfu/mL, respectively.
Colombo, Tommaso; Garcìa, Pedro Javier; Vandelli, Wainer
2016-01-01
The ATLAS detector at CERN records particle collision “events” delivered by the Large Hadron Collider. Its data-acquisition system identifies, selects, and stores interesting events in near real-time, with an aggregate throughput of several tens of GB/s. It is a distributed software system executed on a farm of roughly 2000 commodity worker nodes communicating via TCP/IP on an Ethernet network. Event data fragments are received from the many detector readout channels and are buffered, collected together, analyzed and either stored permanently or discarded. This system, and data-acquisition systems in general, are sensitive to the latency of the data transfer from the readout buffers to the worker nodes. Challenges affecting this transfer include the many-to-one communication pattern and the inherently bursty nature of the traffic. The main performance issues brought about by this workload are addressed in this paper, focusing in particular on the so-called TCP incast pathology. Since performing systematic stud...
Very Large Scale Integration (VLSI).
Yeaman, Andrew R. J.
Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…
Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan
2017-08-04
This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.
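The resource-planning step behind a blocks-and-threads configuration can be sketched as a small occupancy search: maximize resident threads per streaming multiprocessor (SM) subject to register and shared-memory limits. This is a toy stand-in for the paper's TLPOM; the default limits are illustrative values for a hypothetical GPU, not any specific device.

```python
def best_block_config(regs_per_thread, smem_per_block,
                      regs_per_sm=65536, smem_per_sm=49152,
                      max_threads_per_sm=2048, warp=32):
    """Pick threads-per-block maximizing resident threads per SM under
    register, shared-memory, and thread-count limits (illustrative values)."""
    best = (0, 0)  # (resident threads per SM, threads per block)
    for threads in range(warp, 1025, warp):
        blocks = min(
            regs_per_sm // (regs_per_thread * threads),
            smem_per_sm // smem_per_block if smem_per_block else 10 ** 9,
            max_threads_per_sm // threads,
        )
        best = max(best, (blocks * threads, threads))
    return best

print(best_block_config(regs_per_thread=32, smem_per_block=4096))
```

Coupling such a static configuration with instruction-level parallelism inside each thread is what the TLPOM abstract describes as raising performance without additional energy cost.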
Meullemiestre, A; Petitcolas, E; Maache-Rezzoug, Z; Chemat, F; Rezzoug, S A
2016-01-01
Maritime pine sawdust, a by-product of the wood transformation industry, has been investigated as a potential source of polyphenols, which were extracted by ultrasound-assisted maceration (UAM). UAM was optimized to enhance the extraction efficiency of polyphenols and reduce processing time. First, a preliminary study was carried out to optimize the solid/liquid ratio (6 g of dry material per mL) and the particle size (0.26 cm²) by conventional maceration (CVM). Under these conditions, the optimum conditions for polyphenol extraction by UAM, obtained by response surface methodology, were 0.67 W/cm² for the ultrasonic intensity (UI), 40 °C for the processing temperature (T) and 43 min for the sonication time (t). UAM was compared with CVM; the results showed that the quantity of polyphenols was improved by 40% (342.4 and 233.5 mg of catechin equivalent per 100 g of dry basis for UAM and CVM, respectively). A multistage cross-current extraction procedure allowed evaluating the real impact of UAM on solid-liquid extraction enhancement. The potential industrialization of this procedure was explored through a transition from a lab-scale sonicated reactor (3 L) to a large-scale one with a 30 L volume.
Vishniac, Ethan T.
2015-01-01
We show that a differentially rotating conducting fluid automatically creates a magnetic helicity flux with components along the rotation axis and in the direction of the local vorticity. This drives a rapid growth in the local density of current helicity, which in turn drives a large scale dynamo. The dynamo growth rate derived from this process is not constant, but depends inversely on the large scale magnetic field strength. This dynamo saturates when buoyant losses of magnetic flux compete with the large scale dynamo, providing a simple prediction for magnetic field strength as a function of Rossby number in stars. Increasing anisotropy in the turbulence produces a decreasing magnetic helicity flux, which explains the flattening of the B/Rossby number relation at low Rossby numbers. We also show that the kinetic helicity is always a subdominant effect. There is no kinematic dynamo in real stars.
Large-scale circuit simulation
Wei, Y. P.
1982-12-01
The simulation of VLSI (Very Large Scale Integration) circuits falls beyond the capabilities of conventional circuit simulators like SPICE. On the other hand, conventional logic simulators can only give the results of logic levels 1 and 0, with the attendant loss of detail in the waveforms. The aim of developing large-scale circuit simulation is to bridge the gap between conventional circuit simulation and logic simulation. This research is to investigate new approaches for fast and relatively accurate time-domain simulation of MOS (Metal Oxide Semiconductor), LSI (Large Scale Integration) and VLSI circuits. New techniques and new algorithms are studied in the following areas: (1) analysis sequencing (2) nonlinear iteration (3) modified Gauss-Seidel method (4) latency criteria and timestep control scheme. The developed methods have been implemented into a simulation program PREMOS, which could be used as a design verification tool for MOS circuits.
Japanese large-scale interferometers
Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K
2002-01-01
The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.
Energy Technology Data Exchange (ETDEWEB)
Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics
1998-12-31
In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. Objectives were to improve the performance and reduce costs of a large-scale solar heating system. As a result of the project the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the designing stage. (orig.)
Optimization of Ventilation System in Large-scale Mechanized Metal Mine
Institute of Scientific and Technical Information of China (English)
龚开福; 李夕兵; 李国元; 时增辉
2015-01-01
In order to improve the underground ventilation effect in a large-scale trackless mechanized mine, the air volume required by trackless equipment in operation was analyzed. Compared with the minimum dust-removal air speed at the working face and the air volume required by the maximum number of personnel working at the same time, the minimum underground air-supply volume was determined. A ventilation network graph for the underground mine was constructed based on Vensim software, and the airflow paths, fan parameters and ventilation structures were dynamically adjusted so that areas where trackless equipment is concentrated receive more air, thereby optimizing the mine ventilation system. A study of the ventilation effect in a large-scale trackless mechanized mine in Guizhou showed that the trackless equipment requires the largest air volume, which is therefore taken as the minimum underground air-supply volume; in combination with the Vensim ventilation network graph, the positions of the fans and ventilation structures were determined, and the air-door openings and fan speeds were dynamically adjusted to optimize the airflow paths and air volumes; rigid ducts can greatly reduce ventilation resistance. Optimization from the above aspects can improve the ventilation effect of a large mechanized underground metal mine.
Directory of Open Access Journals (Sweden)
EMAN A. TORA
2016-07-01
Full Text Available In this paper, a multi-objective optimization approach is introduced to define a hybrid power supply system for a large-scale RO desalination plant. The target is to integrate a number of locally available energy resources to generate the electricity demand of the RO desalination plant while minimizing both the electricity generation cost and greenhouse gas emissions, whereby carbon dioxide sequestration may be an option. The considered energy resources and technologies are wind turbines, solar PV, combined cycles with natural gas turbines, combined cycles with coal gasification, pulverized coal with flue gas desulfurization, and biomass combined heat and power (CHP). These variable energy resources are investigated under different constraints on the renewable energy contribution. Likewise, the effect of carbon dioxide sequestration is included. Accordingly, five scenarios have been analyzed. Trade-offs between the minimum electricity generation cost and the minimum greenhouse gas emissions have been determined and represented in Pareto curves using the constraint method. The results highlight that, among the studied fossil fuel technologies, integrated combined cycle natural gas turbines can provide a considerable fraction of the needed power supply. Likewise, wind turbines are the most effective technology among renewable energy options. When CO2 sequestration is applied, costs increase and significant changes in the optimum combination of renewable energy resources are observed; in that case, solar PV starts to compete appreciably. The optimum mix of energy resources then extends to include biomass CHP as well.
Strings and large scale magnetohydrodynamics
Olesen, P
1995-01-01
From computer simulations of magnetohydrodynamics one knows that a turbulent plasma becomes very intermittent, with the magnetic fields concentrated in thin flux tubes. This situation looks very "string-like", so we investigate whether strings could be solutions of the magnetohydrodynamics equations in the limit of infinite conductivity. We find that the induction equation is satisfied, and we discuss the Navier-Stokes equation (without viscosity) with the Lorentz force included. We argue that the string equations (with non-universal maximum velocity) should describe the large scale motion of narrow magnetic flux tubes, because of a large reparametrization (gauge) invariance of the magnetic and electric string fields.
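For reference, the ideal-MHD (infinite-conductivity) induction equation that the string ansatz is checked against is

```latex
\frac{\partial \mathbf{B}}{\partial t} \;=\; \nabla \times \left( \mathbf{v} \times \mathbf{B} \right),
```

which expresses flux freezing: magnetic field lines are advected with the fluid, so a thin flux tube moving with the plasma is a natural candidate for a string-like solution.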
Testing gravity on Large Scales
Raccanelli Alvise
2013-01-01
We show how it is possible to test general relativity and different models of gravity via Redshift-Space Distortions using forthcoming cosmological galaxy surveys. However, the theoretical models currently used to interpret the data often rely on simplifications that make them not accurate enough for precise measurements. We will discuss improvements to the theoretical modeling at very large scales, including wide-angle and general relativistic corrections; we then show that for wide and deep...
Optimal scaling in ductile fracture
Fokoua Djodom, Landry
This work is concerned with the derivation of optimal scaling laws, in the sense of matching lower and upper bounds on the energy, for a solid undergoing ductile fracture. The specific problem considered concerns a material sample in the form of an infinite slab of finite thickness subjected to prescribed opening displacements on its two surfaces. The solid is assumed to obey deformation-theory of plasticity and, in order to further simplify the analysis, we assume isotropic rigid-plastic deformations with zero plastic spin. When hardening exponents are given values consistent with observation, the energy is found to exhibit sublinear growth. We regularize the energy through the addition of nonlocal energy terms of the strain-gradient plasticity type. This nonlocal regularization has the effect of introducing an intrinsic length scale into the energy. We also put forth a physical argument that identifies the intrinsic length and suggests a linear growth of the nonlocal energy. Under these assumptions, ductile fracture emerges as the net result of two competing effects: whereas the sublinear growth of the local energy promotes localization of deformation to failure planes, the nonlocal regularization stabilizes this process, thus resulting in an orderly progression towards failure and a well-defined specific fracture energy. The optimal scaling laws derived here show that ductile fracture results from localization of deformations to void sheets, and that it requires a well-defined energy per unit fracture area. In particular, fractal modes of fracture are ruled out under the assumptions of the analysis. The optimal scaling laws additionally show that ductile fracture is cohesive in nature, i.e., it obeys a well-defined relation between tractions and opening displacements. Finally, the scaling laws supply a link between micromechanical properties and macroscopic fracture properties. In particular, they reveal the relative roles that surface energy and microplasticity
Models of large scale structure
Energy Technology Data Exchange (ETDEWEB)
Frenk, C.S. (Physics Dept., Univ. of Durham (UK))
1991-01-01
The ingredients required to construct models of the cosmic large scale structure are discussed. Input from particle physics leads to a considerable simplification by offering concrete proposals for the geometry of the universe, the nature of the dark matter and the primordial fluctuations that seed the growth of structure. The remaining ingredient is the physical interaction that governs dynamical evolution. Empirical evidence provided by an analysis of a redshift survey of IRAS galaxies suggests that gravity is the main agent shaping the large-scale structure. In addition, this survey implies large values of the mean cosmic density, Ω ≳ 0.5, and is consistent with a flat geometry if IRAS galaxies are somewhat more clustered than the underlying mass. Together with current limits on the density of baryons from Big Bang nucleosynthesis, this lends support to the idea of a universe dominated by non-baryonic dark matter. Results from cosmological N-body simulations evolved from a variety of initial conditions are reviewed. In particular, neutrino dominated and cold dark matter dominated universes are discussed in detail. Finally, it is shown that apparent periodicities in the redshift distributions in pencil-beam surveys arise frequently from distributions which have no intrinsic periodicity but are clustered on small scales. (orig.).
Optimal scales in weighted networks
Garlaschelli, Diego; Fink, Thomas M A; Caldarelli, Guido
2013-01-01
The analysis of networks characterized by links with heterogeneous intensity or weight suffers from two long-standing problems of arbitrariness. On one hand, the definitions of topological properties introduced for binary graphs can be generalized in non-unique ways to weighted networks. On the other hand, even when a definition is given, there is no natural choice of the (optimal) scale of link intensities (e.g. the money unit in economic networks). Here we show that these two seemingly independent problems can be regarded as intimately related, and propose a common solution to both. Using a formalism that we recently proposed in order to map a weighted network to an ensemble of binary graphs, we introduce an information-theoretic approach leading to the least biased generalization of binary properties to weighted networks, and at the same time fixing the optimal scale of link intensities. We illustrate our method on various social and economic networks.
Large scale cluster computing workshop
Energy Technology Data Exchange (ETDEWEB)
Dane Skow; Alan Silverman
2002-12-23
Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near term projects within High Energy Physics and other computing communities will deploy clusters of scale 1000s of processors and be used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and by implication to identify areas where some investment of money or effort is likely to be needed. (2) To compare and record experiences gained with such tools. (3) To produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP. (4) To identify and connect groups with similar interest within HENP and the larger clustering community.
Institute of Scientific and Technical Information of China (English)
汪靖; 吴志健
2011-01-01
Based on an analysis of the traditional particle swarm optimization (PSO) algorithm, this paper presents a PSO algorithm with a generalized opposition-based learning (GOBL) strategy, implemented on the GPU (Graphics Processing Unit), and applies it to large-scale optimization problems. The main idea is to transform the current solution space using the generalized opposition-based learning strategy, increasing the probability of finding better solutions, while the massive thread parallelism of the GPU accelerates convergence. Comparative numerical experiments show that, for large-scale, high-dimensional optimization problems, the proposed algorithm achieves better accuracy and faster convergence than other intelligent algorithms.
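The solution-space transformation at the heart of GOBL maps each coordinate x to k*(low + high) - x, its "general opposite" in the current search interval; evaluating both a candidate and its opposite raises the chance that one of them lies near the optimum. A minimal sketch (the population layout is an illustrative assumption):

```python
import random

def gobl(population, low, high, k=None):
    """Generalized opposition-based learning: map each coordinate x to
    k*(low+high) - x in the search interval [low, high]. With k = 1 this
    reduces to plain opposition-based learning; random k in [0, 1]
    generalizes it, sampling a wider family of transformed points."""
    if k is None:
        k = random.random()
    return [[k * (low + high) - x for x in ind] for ind in population]

pop = [[1.0, 2.0], [3.0, -1.0]]
print(gobl(pop, low=-5.0, high=5.0, k=1.0))  # [[-1.0, -2.0], [-3.0, 1.0]]
```

On a GPU, each coordinate transform is independent, so the whole opposite population can be produced by one thread per element, which is exactly the kind of data parallelism the abstract exploits.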
Desjacques, Vincent; Schmidt, Fabian
2016-01-01
This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a pedagogical proof of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which includes the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in i...
Directory of Open Access Journals (Sweden)
Steinhaus Thomas
2007-01-01
Full Text Available A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction; whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly in the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities, which lead to the formation of large eddies, is also key to understanding the behavior of large-scale fires. Here modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well-suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively-parallel computational resources are likely to be necessary in order to be able to adequately address the complex coupled phenomena to the level of detail that is necessary.
DEFF Research Database (Denmark)
Li, Ning; Kubis, Peter; Forberich, Karen
2014-01-01
We report on a novel approach including: 1. the design of an efficient intermediate layer, which facilitates the use of most high performance active materials in tandem structure and the compatibility of the tandem concept with large-scale production; 2. the concept of ternary composites based on...
Large scale biomimetic membrane arrays
DEFF Research Database (Denmark)
Hansen, Jesper Søndergaard; Perry, Mark; Vogel, Jörg
2009-01-01
To establish planar biomimetic membranes across large-scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2 laser micro-structured 8 x 8 aperture partition arrays with average aperture diameters of 301 +/- 5 mu m. We addressed the electro-physical properties of the lipid bilayers established across the micro-structured scaffold arrays by controllable reconstitution of biotechnologically and physiologically relevant membrane peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further development of sensitive biosensor assays...
Testing gravity on Large Scales
Directory of Open Access Journals (Sweden)
Raccanelli Alvise
2013-09-01
We show how it is possible to test general relativity and different models of gravity via Redshift-Space Distortions using forthcoming cosmological galaxy surveys. However, the theoretical models currently used to interpret the data often rely on simplifications that make them not accurate enough for precise measurements. We will discuss improvements to the theoretical modeling at very large scales, including wide-angle and general relativistic corrections; we then show that for wide and deep surveys those corrections need to be taken into account if we want to measure the growth of structures at a few percent level, and so perform tests on gravity, without introducing systematic errors. Finally, we report the results of some recent cosmological model tests carried out using those precise models.
Colloquium: Large scale simulations on GPU clusters
Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano
2015-06-01
Graphics processing units (GPUs) are currently used as a cost-effective platform for computer simulations and big-data processing. Large-scale applications require that multiple GPUs work together, but the efficiency obtained with clusters of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve excellent efficiency for applications in statistical mechanics, particle dynamics and network analysis by using suitable memory access patterns and mechanisms such as CUDA streams and profiling tools. Similar concepts and techniques may also be applied to other problems, such as the solution of partial differential equations.
Li, Ji-Qing; Zhang, Yu-Shan; Ji, Chang-Ming; Wang, Ai-Jing; Lund, Jay R
2013-01-01
This paper examines long-term optimal operation using dynamic programming for a large hydropower system of 10 reservoirs in Northeast China. Besides considering flow and hydraulic head, the optimization explicitly includes time-varying electricity market prices to maximize benefit. Two techniques are used to reduce the 'curse of dimensionality' of dynamic programming with many reservoirs. Discrete differential dynamic programming (DDDP) reduces the search space and computer memory needed. Object-oriented programming (OOP) and the ability to dynamically allocate and release memory with the C++ language greatly reduces the cumulative effect of computer memory for solving multi-dimensional dynamic programming models. The case study shows that the model can reduce the 'curse of dimensionality' and achieve satisfactory results.
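The corridor idea behind DDDP can be sketched for a single reservoir. Everything below (the inflows, a linear price-times-release benefit, and the three-point corridor) is an illustrative assumption for exposition, not the paper's ten-reservoir model:

```python
def dddp(trial, inflows, benefit, delta, sweeps=4):
    """Single-reservoir DDDP sketch: refine a trial storage trajectory
    (length T+1) by dynamic programming over a narrow 3-state corridor
    around the trial, then shrink the corridor and repeat."""
    T = len(inflows)
    traj = list(trial)
    for _ in range(sweeps):
        grid = [[traj[t] + d for d in (-delta, 0.0, delta)] for t in range(T + 1)]
        grid[0], grid[T] = [traj[0]], [traj[T]]  # boundary storages fixed
        value = {s: 0.0 for s in grid[T]}        # terminal value
        choice = [dict() for _ in range(T)]
        for t in reversed(range(T)):             # backward recursion
            new_value = {}
            for s in grid[t]:
                # release = storage + inflow - next storage (must be >= 0)
                options = [(benefit(t, s + inflows[t] - s2) + value[s2], s2)
                           for s2 in grid[t + 1] if s + inflows[t] - s2 >= 0]
                best, s2 = max(options)          # corridor assumed feasible
                new_value[s] = best
                choice[t][s] = s2
            value = new_value
        s = traj[0]                              # forward trace of best path
        traj = [s]
        for t in range(T):
            s = choice[t][s]
            traj.append(s)
        delta /= 2                               # narrow the corridor
    return traj
```

Starting from a constant trial trajectory with a price peak in one period, the refinement shifts releases toward the high-price period, while the halved corridor width keeps the state grid at three points per stage, which is how DDDP tames the search space and memory that a full discretization would require.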
DEFF Research Database (Denmark)
Soleimani, Hamed; Kannan, Govindan
2015-01-01
...meta-heuristic algorithms are considered to develop a new elevated hybrid algorithm: the genetic algorithm (GA) and particle swarm optimization (PSO). Analyzing the above-mentioned algorithms' strengths and weaknesses leads us to attempt to improve the GA using some aspects of PSO. Therefore, a new hybrid algorithm is proposed and a complete validation process is undertaken using CPLEX and MATLAB software. In small instances, the global optimum points found by CPLEX are compared to those of the proposed hybrid algorithm, the genetic algorithm, and particle swarm optimization. Then, in small-, mid-, and large-size instances, the performances of the proposed meta-heuristics are analyzed and evaluated. Finally, a case study involving an Iranian hospital furniture manufacturer is used to evaluate the proposed solution approach. The results reveal the superiority of the proposed hybrid algorithm when compared to the GA and PSO.
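The flavor of such a GA/PSO hybrid can be sketched with generic textbook operators. The velocity update, uniform crossover, parameter values and test objective below are illustrative assumptions, not the specific operators validated in the paper:

```python
import random

def hybrid_ga_pso(f, dim, pop_size=30, iters=200, w=0.5, c1=1.2, c2=1.2, seed=1):
    """Minimize f over R^dim: PSO velocity updates for exploitation,
    GA-style mutation and crossover for diversity."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop_size)]
    vel = [[0.0] * dim for _ in range(pop_size)]
    pbest = [list(x) for x in pop]              # personal bests
    gbest = list(min(pop, key=f))               # global best (copied)
    for _ in range(iters):
        for i, x in enumerate(pop):
            for d in range(dim):                # PSO velocity update
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - x[d])
                             + c2 * rng.random() * (gbest[d] - x[d]))
                x[d] += vel[i][d]
            if rng.random() < 0.1:              # GA mutation for diversity
                x[rng.randrange(dim)] += rng.gauss(0.0, 0.5)
            if f(x) < f(pbest[i]):
                pbest[i] = list(x)
        # GA crossover: uniform recombination of two personal bests
        a, b = rng.sample(pbest, 2)
        child = [a[d] if rng.random() < 0.5 else b[d] for d in range(dim)]
        worst = max(range(pop_size), key=lambda j: f(pop[j]))
        pop[worst], vel[worst] = child, [0.0] * dim
        gbest = min(pbest + [gbest], key=f)
    return gbest, f(gbest)
```

On a smooth objective such as the sphere function, the PSO pull drives convergence while mutation and crossover preserve the population diversity that a plain swarm can lose, which is the complementarity the abstract's GA-plus-PSO argument rests on.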
Grid sensitivity capability for large scale structures
Nagendra, Gopal K.; Wallerstein, David V.
1989-01-01
The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.
Large-Scale Information Systems
Energy Technology Data Exchange (ETDEWEB)
D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura
2000-12-01
Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.
Economically viable large-scale hydrogen liquefaction
Cardella, U.; Decker, L.; Klein, H.
2017-02-01
The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large-scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today's typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs optimized for capital expenditure. New concepts must ensure manageable plant complexity and flexible operability. In the phase of process development and selection, dimensioning of key equipment for large-scale liquefiers, such as turbines, compressors and heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion and coldbox configuration, must be analysed in detail. This paper provides an overview of the approach, challenges and preliminary results in the development of efficient and economically viable concepts for large-scale hydrogen liquefaction.
Large Scale Magnetostrictive Valve Actuator
Richard, James A.; Holleman, Elizabeth; Eddleman, David
2008-01-01
Marshall Space Flight Center's Valves, Actuators and Ducts Design and Development Branch developed a large scale magnetostrictive valve actuator. The potential advantages of this technology are faster, more efficient valve actuators that consume less power, provide precise position control and deliver higher flow rates than conventional solenoid valves. Magnetostrictive materials change dimensions when a magnetic field is applied; this property is referred to as magnetostriction. Magnetostriction is caused by the alignment of the magnetic domains in the material's crystalline structure with the applied magnetic field lines. Typically, the material changes shape by elongating in the axial direction and constricting in the radial direction, resulting in no net change in volume. All hardware and testing are complete. This paper discusses the potential applications of the technology, gives an overview of the as-built actuator design, describes problems uncovered during development testing, reviews test data, evaluates weaknesses of the design, and discusses areas for improvement for future work. This actuator holds promise as a low-power, high-load, proportionally controlled actuator for valves requiring 440 to 1500 newtons load.
The Cosmology Large Angular Scale Surveyor
Marriage, Tobias; Ali, A.; Amiri, M.; Appel, J. W.; Araujo, D.; Bennett, C. L.; Boone, F.; Chan, M.; Cho, H.; Chuss, D. T.; Colazo, F.; Crowe, E.; Denis, K.; Dünner, R.; Eimer, J.; Essinger-Hileman, T.; Gothe, D.; Halpern, M.; Harrington, K.; Hilton, G.; Hinshaw, G. F.; Huang, C.; Irwin, K.; Jones, G.; Karakla, J.; Kogut, A. J.; Larson, D.; Limon, M.; Lowry, L.; Mehrle, N.; Miller, A. D.; Miller, N.; Moseley, S. H.; Novak, G.; Reintsema, C.; Rostem, K.; Stevenson, T.; Towner, D.; U-Yen, K.; Wagner, E.; Watts, D.; Wollack, E.; Xu, Z.; Zeng, L.
2014-01-01
Some of the most compelling inflation models predict a background of primordial gravitational waves (PGW) detectable by their imprint of a curl-like "B-mode" pattern in the polarization of the Cosmic Microwave Background (CMB). The Cosmology Large Angular Scale Surveyor (CLASS) is a novel array of telescopes to measure the B-mode signature of the PGW. By targeting the largest angular scales (>2°) with a multifrequency array, novel polarization modulation and detectors optimized for both control of systematics and sensitivity, CLASS sets itself apart in the field of CMB polarization surveys and opens an exciting new discovery space for the PGW and inflation. This poster presents an overview of the CLASS project.
Handbook of Large-Scale Random Networks
Bollobas, Bela; Miklos, Dezso
2008-01-01
Covers various aspects of large-scale networks, including mathematical foundations and rigorous results of random graph theory, modeling and computational aspects of large-scale networks, as well as areas in physics, biology, neuroscience, sociology and technical areas
Conundrum of the Large Scale Streaming
Malm, T M
1999-01-01
The etiology of the large-scale peculiar velocity (large-scale streaming motion) of clusters seems increasingly tenuous within the context of the gravitational instability hypothesis. Are there any alternative testable models that could possibly account for such large-scale streaming of clusters?
Directory of Open Access Journals (Sweden)
Richard Schuster
2013-10-01
Roads are a major cause of habitat fragmentation that can negatively affect many mammal populations. Mitigation measures such as crossing structures are a proposed method to reduce the negative effects of roads on wildlife, but the best methods for determining where such structures should be implemented, and how their effects might differ between species in mammal communities, are largely unknown. We investigated the effects of a major highway through south-eastern British Columbia, Canada on several mammal species to determine how the highway may act as a barrier to animal movement, and how species may differ in their crossing-area preferences. We collected track data on eight mammal species across two winters, along both the highway and pre-marked transects, and used a multi-scale modeling approach to determine the scale at which habitat characteristics best predicted preferred crossing sites for each species. We found evidence for a severe barrier effect on all investigated species. Freely available remotely sensed habitat landscape data were better than more costly, manually digitized microhabitat maps in supporting models that identified preferred crossing sites; however, models using both types of data were better yet. Further, in 6 of 8 cases, models which incorporated multiple spatial scales were better at predicting preferred crossing sites than models utilizing any single scale. While each species differed in terms of the landscape variables associated with preferred or avoided crossing sites, we used a multi-model inference approach to identify locations along the highway where crossing structures may benefit all of the species considered. By specifically incorporating both highway and off-highway data and predictions, we were able to show that landscape context plays an important role in maximizing mitigation measure efficiency. Our results further highlight the need for mitigation measures along major highways to improve connectivity.
Tensor methods for large, sparse unconstrained optimization
Energy Technology Data Exchange (ETDEWEB)
Bouaricha, A.
1996-11-01
Tensor methods for unconstrained optimization were first introduced by Schnabel and Chow [SIAM J. Optimization, 1 (1991), pp. 293-315], who describe these methods for small to moderate size problems. This paper extends these methods to large, sparse unconstrained optimization problems. This requires an entirely new way of solving the tensor model that makes the methods suitable for solving large, sparse optimization problems efficiently. We present test results for sets of problems where the Hessian at the minimizer is nonsingular and where it is singular. These results show that tensor methods are significantly more efficient and more reliable than standard methods based on Newton's method.
The Large-Scale Polarization Explorer (LSPE)
Aiola, S; Battaglia, P; Battistelli, E; Baù, A; de Bernardis, P; Bersanelli, M; Boscaleri, A; Cavaliere, F; Coppolecchia, A; Cruciani, A; Cuttaia, F; Addabbo, A D'; D'Alessandro, G; De Gregori, S; Del Torto, F; De Petris, M; Fiorineschi, L; Franceschet, C; Franceschi, E; Gervasi, M; Goldie, D; Gregorio, A; Haynes, V; Krachmalnicoff, N; Lamagna, L; Maffei, B; Maino, D; Masi, S; Mennella, A; Wah, Ng Ming; Morgante, G; Nati, F; Pagano, L; Passerini, A; Peverini, O; Piacentini, F; Piccirillo, L; Pisano, G; Ricciardi, S; Rissone, P; Romeo, G; Salatino, M; Sandri, M; Schillaci, A; Stringhetti, L; Tartari, A; Tascone, R; Terenzi, L; Tomasi, M; Tommasi, E; Villa, F; Virone, G; Withington, S; Zacchei, A; Zannoni, M
2012-01-01
The LSPE is a balloon-borne mission aimed at measuring the polarization of the Cosmic Microwave Background (CMB) at large angular scales, and in particular to constrain the curl component of CMB polarization (B-modes) produced by tensor perturbations generated during cosmic inflation, in the very early universe. Its primary target is to improve the limit on the ratio of tensor to scalar perturbations amplitudes down to r = 0.03, at 99.7% confidence. A second target is to produce wide maps of foreground polarization generated in our Galaxy by synchrotron emission and interstellar dust emission. These will be important to map Galactic magnetic fields and to study the properties of ionized gas and of diffuse interstellar dust in our Galaxy. The mission is optimized for large angular scales, with coarse angular resolution (around 1.5 degrees FWHM), and wide sky coverage (25% of the sky). The payload will fly in a circumpolar long duration balloon mission during the polar night. Using the Earth as a giant solar sh...
Optimal scaling of paired comparison data
van de Velden, M.
2004-01-01
In this paper we consider the analysis of paired comparisons using optimal scaling techniques. In particular, we will, inspired by Guttman's approach for quantifying paired comparisons, formulate a new method to obtain optimal scaling values for the subjects. We will compare our results with those o
Sensitivity technologies for large scale simulation.
Energy Technology Data Exchange (ETDEWEB)
Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias (Rice University, Houston, TX); Wilcox, Lucas C. (Brown University, Providence, RI); Hill, Judith C. (Carnegie Mellon University, Pittsburgh, PA); Ghattas, Omar (Carnegie Mellon University, Pittsburgh, PA); Berggren, Martin Olof (University of UppSala, Sweden); Akcelik, Volkan (Carnegie Mellon University, Pittsburgh, PA); Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard
2005-01-01
Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification,reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity type analysis into existing code and equally important, the work was focused on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first
Large-Scale Damage Control Facility
Federal Laboratory Consortium — FUNCTION: Performs large-scale fire protection experiments that simulate actual Navy platform conditions. Remote control firefighting systems are also tested.
Vazquez-Anderson, Jorge; Mihailovic, Mia K.; Baldridge, Kevin C.; Reyes, Kristofer G.; Haning, Katie; Cho, Seung Hee; Amador, Paul; Powell, Warren B.
2017-01-01
Current approaches to design efficient antisense RNAs (asRNAs) rely primarily on a thermodynamic understanding of RNA–RNA interactions. However, these approaches depend on structure predictions and have limited accuracy, arguably due to overlooking important cellular environment factors. In this work, we develop a biophysical model to describe asRNA–RNA hybridization that incorporates in vivo factors using large-scale experimental hybridization data for three model RNAs: a group I intron, CsrB and a tRNA. A unique element of our model is the estimation of the availability of the target region to interact with a given asRNA using a differential entropic consideration of suboptimal structures. We showcase the utility of this model by evaluating its prediction capabilities in four additional RNAs: a group II intron, Spinach II, 2-MS2 binding domain and glgC 5′ UTR. Additionally, we demonstrate the applicability of this approach to other bacterial species by predicting sRNA–mRNA binding regions in two newly discovered, though uncharacterized, regulatory RNAs. PMID:28334800
Institute of Scientific and Technical Information of China (English)
秦小玉; 杨乔
2013-01-01
In this paper, in accordance with the general characteristics of logistics operations in large-scale civil engineering projects, we analyze the material purchasing, warehousing management and distribution of such projects, and propose optimization measures: refining material purchasing management and control, improving the level of warehousing operations, and enhancing the material distribution system.
Energy Technology Data Exchange (ETDEWEB)
Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Advanced Energy Systems
1998-10-01
The solar heating market is growing in many European countries, and the annually installed collector area has exceeded one million square meters. There are dozens of collector manufacturers and hundreds of firms making solar heating installations in Europe. One tendency in solar heating is towards larger systems. These can be roof-integrated, consisting of some tens or hundreds of square meters of collectors, or they can be larger centralized solar district heating plants consisting of a few thousand square meters of collectors. The increase in size can reduce the specific investment of solar heating systems, because the costs of some components (controllers, pumps and pipes), planning and installation can be lower in larger systems. The solar heat output can also be higher in large systems, because more advanced techniques become economically viable.
Large Scale Glazed Concrete Panels
DEFF Research Database (Denmark)
Bache, Anja Margrethe
2010-01-01
Today, there is a lot of focus on the aesthetic potential of concrete surfaces, both globally and locally. World-famous architects such as Herzog & de Meuron, Zaha Hadid, Richard Meier and David Chipperfield challenge the exposure of concrete in their architecture. At home, this trend can be seen... existing buildings in and around Copenhagen that are covered with mosaic tiles or glazed tiles; buildings such as Nanna Ditzel's House in Klareboderne, Arne Jacobsen's gas station, Erik Møller's Industriens Hus, Bent Helweg Møller's Berlingske Hus, Arne Jacobsen's Stellings Hus and Toms Chocolate Factories... and finally Lene Tranberg and Bøje Lungård's Elsinore water purification plant. These buildings have qualities that I would like applied, perhaps transformed or, most preferably, if possible, interpreted anew, for the large glazed concrete panels I shall develop. The article is ended and concluded...
Large scale mechanical metamaterials as seismic shields
Miniaci, Marco; Krushynska, Anastasiia; Bosia, Federico; Pugno, Nicola M.
2016-08-01
Earthquakes represent one of the most catastrophic natural events affecting mankind. At present, a universally accepted risk mitigation strategy for seismic events remains to be proposed. Most approaches are based on vibration isolation of structures rather than on the remote shielding of incoming waves. In this work, we propose a novel approach to the problem and discuss the feasibility of a passive isolation strategy for seismic waves based on large-scale mechanical metamaterials, including, for the first time, numerical analysis of both surface and guided waves and of soil dissipation effects, adopting full 3D simulations. The study focuses on realistic structures that can be effective in frequency ranges of interest for seismic waves, and optimal design criteria are provided by exploring different metamaterial configurations, combining phononic crystals and locally resonant structures and different ranges of mechanical properties. Dispersion analysis and full-scale 3D transient wave transmission simulations are carried out on finite-size systems to assess seismic wave amplitude attenuation in realistic conditions. Results reveal that both surface and bulk seismic waves can be considerably attenuated, making this strategy viable for protecting civil structures against seismic risk. The proposed remote shielding approach could open up new perspectives in the field of seismology and in related areas of low-frequency vibration damping or blast protection.
Gravitational redshifts from large-scale structure
Croft, Rupert A C
2013-01-01
The recent measurement of the gravitational redshifts of galaxies in galaxy clusters by Wojtak et al. has opened a new observational window on dark matter and modified gravity. By stacking clusters this determination effectively used the line of sight distortion of the cross-correlation function of massive galaxies and lower mass galaxies to estimate the gravitational redshift profile of clusters out to 4 Mpc/h. Here we use a halo model of clustering to predict the distortion due to gravitational redshifts of the cross-correlation function on scales from 1 - 100 Mpc/h. We compare our predictions to simulations and use the simulations to make mock catalogues relevant to current and future galaxy redshift surveys. Without formulating an optimal estimator, we find that the full BOSS survey should be able to detect gravitational redshifts from large-scale structure at the ~4 sigma level. Upcoming redshift surveys will greatly increase the number of galaxies useable in such studies and the BigBOSS and Euclid exper...
Large Scale Computations in Air Pollution Modelling
DEFF Research Database (Denmark)
Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.
Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.
Large-scale perspective as a challenge
Plomp, M.G.A.
2012-01-01
1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that cont
Inflation, large scale structure and particle physics
Indian Academy of Sciences (India)
S F King
2004-02-01
We review experimental and theoretical developments in inflation and its application to structure formation, including the curvaton idea. We then discuss a particle physics model of supersymmetric hybrid inflation at the intermediate scale in which the Higgs scalar field is responsible for large-scale structure, and show how such a theory is completely natural in the framework of extra dimensions with an intermediate string scale.
Large Scale Metal Additive Techniques Review
Energy Technology Data Exchange (ETDEWEB)
Nycz, Andrzej [ORNL; Adediran, Adeola I [ORNL; Noakes, Mark W [ORNL; Love, Lonnie J [ORNL
2016-01-01
In recent years, additive manufacturing has made long strides toward becoming a mainstream production technology. Particularly strong progress has been made in large-scale polymer deposition. However, large-scale metal additive manufacturing has not yet reached parity with large-scale polymer deposition. This paper is a review study of metal additive techniques in the context of building large structures. Current commercial devices are capable of printing metal parts on the order of several cubic feet, compared to hundreds of cubic feet on the polymer side. To follow the polymer progress path, several factors are considered: potential to scale, economy, environmental friendliness, material properties, feedstock availability, robustness of the process, quality and accuracy, potential for defects, and post-processing, as well as potential applications. This paper focuses on the current state of the art of large-scale metal additive technology, with a focus on expanding the geometric limits.
Nigro, G.; Pongkitiwanichakul, P.; Cattaneo, F.; Tobias, S. M.
2017-01-01
We consider kinematic dynamo action in a sheared helical flow at moderate to high values of the magnetic Reynolds number (Rm). We find exponentially growing solutions which, for large enough shear, take the form of a coherent part embedded in incoherent fluctuations. We argue that at large Rm large-scale dynamo action should be identified by the presence of structures coherent in time, rather than those at large spatial scales. We further argue that although the growth rate is determined by small-scale processes, the period of the coherent structures is set by mean-field considerations.
Large-scale sequential quadratic programming algorithms
Energy Technology Data Exchange (ETDEWEB)
Eldersveld, S.K.
1992-09-01
The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
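The paper's reduced-Hessian machinery is specialized; as a hedged illustration of the basic SQP idea it builds on, the following minimal sketch solves the KKT system of each local QP directly for an equality-constrained problem, using the exact Hessian in place of the quasi-Newton reduced-Hessian approximation (the function names and the toy problem are our own, not the paper's):

```python
import numpy as np

def sqp_equality(grad_f, hess_f, c, jac_c, x0, iters=20, tol=1e-10):
    """Minimal SQP sketch for: min f(x) s.t. c(x) = 0.

    Each iteration solves the KKT system of the local QP
        min  g^T p + 0.5 p^T B p   s.t.  A p + c = 0
    with B the exact Hessian (a stand-in for the reduced
    quasi-Newton approximation discussed in the abstract)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g, B = grad_f(x), hess_f(x)
        cv = np.atleast_1d(c(x))
        A = np.atleast_2d(jac_c(x))
        n, m = len(x), len(cv)
        # KKT matrix [[B, A^T], [A, 0]] against rhs [-g, -c]
        K = np.block([[B, A.T], [A, np.zeros((m, m))]])
        step = np.linalg.solve(K, -np.concatenate([g, cv]))
        p = step[:n]
        x = x + p
        if np.linalg.norm(p) < tol:
            break
    return x

# toy problem: min x1^2 + x2^2  s.t.  x1 + x2 = 1
x_star = sqp_equality(
    grad_f=lambda x: 2 * x,
    hess_f=lambda x: 2 * np.eye(2),
    c=lambda x: np.array([x[0] + x[1] - 1.0]),
    jac_c=lambda x: np.array([[1.0, 1.0]]),
    x0=[3.0, -1.0],
)
print(x_star)  # -> approximately [0.5 0.5]
```

For this quadratic objective with a linear constraint the iteration converges in one step; the point of the sketch is only the shape of the QP subproblem that SQP methods solve repeatedly.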
Large scale network-centric distributed systems
Sarbazi-Azad, Hamid
2014-01-01
A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems. Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continue to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issues...
Network robustness under large-scale attacks
Zhou, Qing; Liu, Ruifang; Cui, Shuguang
2014-01-01
Network Robustness under Large-Scale Attacks provides the analysis of network robustness under attacks, with a focus on large-scale correlated physical attacks. The book begins with a thorough overview of the latest research and techniques to analyze the network responses to different types of attacks over various network topologies and connection models. It then introduces a new large-scale physical attack model coined as area attack, under which a new network robustness measure is introduced and applied to study the network responses. With this book, readers will learn the necessary tools to evaluate how a complex network responds to random and possibly correlated attacks.
DEFF Research Database (Denmark)
Pappalardo, F.; Halling-Brown, M. D.; Rapin, Nicolas;
2009-01-01
Vaccine research is a combinatorial science requiring computational analysis of vaccine components, formulations and optimization. We have developed a framework that combines computational tools for the study of immune function and vaccine development. This framework, named ImmunoGrid, combines conceptual models of the immune system, models of antigen processing and presentation, system-level models of the immune system, Grid computing, and database technology to facilitate discovery, formulation and optimization of vaccines. ImmunoGrid modules share common conceptual models and ontologies. The ImmunoGrid portal offers access to educational simulators where previously defined cases can be displayed, and to research simulators that allow the development of new, or tuning of existing, computational models. The portal is accessible at http://www.w3.org...
Large-scale dynamics of magnetic helicity
Linkmann, Moritz; Dallas, Vassilios
2016-11-01
In this paper we investigate the dynamics of magnetic helicity in magnetohydrodynamic (MHD) turbulent flows, focusing on scales larger than the forcing scale. Our results show a nonlocal inverse cascade of magnetic helicity, which occurs directly from the forcing scale into the largest scales of the magnetic field. We also observe that no magnetic helicity and no energy are transferred to an intermediate range of scales sufficiently smaller than the container size and larger than the forcing scale. Thus, the statistical properties of this range of scales, which increases with scale separation, are shown to be described to a large extent by the zero-flux solutions of the absolute statistical equilibrium theory exhibited by the truncated ideal MHD equations.
Large-scale project assignment model based on multi-objective optimization
Institute of Scientific and Technical Information of China (English)
刘建生; 孙彦武
2013-01-01
The multi-objective, multi-variable, multi-constraint and highly combinatorial nature of large-scale project assignment makes task allocation time-consuming and laborious. In this article, based on multi-objective optimization, two task allocation models for large-scale projects are established using time-series analysis and a combination of a greedy algorithm with multi-objective optimization. One model sets maximum profit with shorter delay as its goal; the other sets no delay with maximum profit as its goal. A task allocation for a large-scale project is determined using these two models. The simulation results show that the models and the algorithm reduce the time complexity of large-scale project assignment and effectively address the problem of task allocation being time-consuming and laborious.
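The abstract does not spell out the models, so as a hedged illustration of combining a greedy heuristic with a profit/delay objective, here is a toy allocator (task data, field names and scoring weights are all our own assumptions, not the paper's): tasks are sorted by profit per unit time and each is placed on the earliest-free worker, with a score trading total profit against total delay.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration: float
    profit: float
    deadline: float

def greedy_assign(tasks, n_workers, w_profit=1.0, w_delay=1.0):
    """Greedy heuristic: sort tasks by profit per unit time, place each
    on the worker that becomes free earliest, and score the plan as
    (weighted profit) minus (weighted delay past the deadlines)."""
    finish = [0.0] * n_workers        # current finish time per worker
    plan, profit, delay = {}, 0.0, 0.0
    for t in sorted(tasks, key=lambda t: t.profit / t.duration, reverse=True):
        k = min(range(n_workers), key=lambda i: finish[i])
        finish[k] += t.duration
        plan[t.name] = k
        profit += t.profit
        delay += max(0.0, finish[k] - t.deadline)
    score = w_profit * profit - w_delay * delay
    return plan, profit, delay, score

tasks = [Task("A", 2.0, 10.0, 3.0),
         Task("B", 1.0, 8.0, 2.0),
         Task("C", 3.0, 6.0, 5.0)]
plan, profit, delay, score = greedy_assign(tasks, n_workers=2)
print(plan, profit, delay, score)
```

A greedy pass like this runs in O(n log n) time in the number of tasks, which is the kind of complexity reduction the abstract claims over exhaustive multi-objective search.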
Newton Methods for Large Scale Problems in Machine Learning
Hansen, Samantha Leigh
2014-01-01
The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…
Large scale-small scale duality and cosmological constant
Darabi, F
1999-01-01
We study a model of quantum cosmology originating from a classical model of gravitation where a self-interacting scalar field is coupled to gravity with the metric undergoing a signature transition. We show that there are dual classical signature-changing solutions, one at large scales and the other at small scales. It is possible to fine-tune the physics at both scales with an infinitesimal effective cosmological constant.
Ultra-Large-Scale Systems: Scale Changes Everything
2008-03-06
[Fragmentary briefing-slide text: statistical mechanics and complexity; networks are everywhere, with recurring "scale-free" structure (e.g. the Internet and yeast protein networks); analogous dynamics; design representation and analysis; assimilation; determining and managing requirements. From "Ultra-Large-Scale Systems", Linda Northrop, March 2008.]
Large-scale Complex IT Systems
Sommerville, Ian; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard
2011-01-01
This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challenges and issues in the development of large-scale complex, software-intensive systems. Central to this is the notion that we cannot separate software from the socio-technical environment in which it is used.
The Cosmology Large Angular Scale Surveyor (CLASS)
Eimer, Joseph; Ali, A.; Amiri, M.; Appel, J. W.; Araujo, D.; Bennett, C. L.; Boone, F.; Chan, M.; Cho, H.; Chuss, D. T.; Colazo, F.; Crowe, E.; Denis, K.; Dünner, R.; Essinger-Hileman, T.; Gothe, D.; Halpern, M.; Harrington, K.; Hilton, G.; Hinshaw, G. F.; Huang, C.; Irwin, K.; Jones, G.; Karakla, J.; Kogut, A. J.; Larson, D.; Limon, M.; Lowry, L.; Marriage, T.; Mehrle, N.; Miller, A. D.; Miller, N.; Moseley, S. H.; Novak, G.; Reintsema, C.; Rostem, K.; Stevenson, T.; Towner, D.; U-Yen, K.; Wagner, E.; Watts, D.; Wollack, E.; Xu, Z.; Zeng, L.
2014-01-01
The Cosmology Large Angular Scale Surveyor (CLASS) is an array of telescopes designed to search for the signature of inflation in the polarization of the Cosmic Microwave Background (CMB). By combining the strategy of targeting large scales (>2 deg) with novel front-end polarization modulation and novel detectors at multiple frequencies, CLASS will pioneer a new frontier in ground-based CMB polarization surveys. In this talk, I give an overview of the CLASS instrument, survey, and outlook on setting important new limits on the energy scale of inflation.
Evaluating Large-Scale Interactive Radio Programmes
Potter, Charles; Naidoo, Gordon
2009-01-01
This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…
Computing in Large-Scale Dynamic Systems
Pruteanu, A.S.
2013-01-01
Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data dissem...
Topological Routing in Large-Scale Networks
DEFF Research Database (Denmark)
Pedersen, Jens Myrup; Knudsen, Thomas Phillip; Madsen, Ole Brun
2004-01-01
A new routing scheme, Topological Routing, for large-scale networks is proposed. It allows for efficient routing without large routing tables as known from traditional routing schemes. It presupposes a certain level of order in the networks, known from Structural QoS. The main issues in applying Topological Routing to large-scale networks are discussed. Hierarchical extensions are presented along with schemes for shortest path routing, fault handling and path restoration. Further research in the area is discussed, along with perspectives on the prerequisites for practical deployment of Topological Routing...
Institute of Scientific and Technical Information of China (English)
倪爱晶; 郑联语
2011-01-01
For large-scale measurement system configuration, a solution for configuring measurement systems based on the form-error uncertainty of the measurement task is presented. Point simulation and data fusion from multiple instruments based on Monte Carlo methods are studied. Based on a mathematical model of the form error, a particle swarm optimization method is adopted to solve for the form error, and the Monte Carlo method is used to simulate and evaluate the form-error uncertainty. Finally, a simulated measurement test of a large frame between satellite cabins has been carried out. The test results indicate that the proposed method for optimal measurement system configuration based on form-error uncertainty is effective. This method can provide guidance for rapid shop-floor deployment of large-scale measurement systems.
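The Monte Carlo part of such a procedure can be sketched roughly as follows; this uses an ordinary least-squares plane fit in place of the paper's particle-swarm minimum-zone evaluation, and the grid geometry and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def flatness(points):
    """Least-squares plane z = a*x + b*y + c; the form (flatness) error
    is taken as the peak-to-valley spread of the residuals."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    r = points[:, 2] - A @ coef
    return r.max() - r.min()

def mc_form_error_uncertainty(nominal, sigma, n_trials=500):
    """Monte Carlo: perturb every measured point with instrument noise
    of standard deviation `sigma`, re-evaluate the flatness each time,
    and report the mean and spread of the resulting estimates."""
    vals = np.empty(n_trials)
    for i in range(n_trials):
        noisy = nominal + rng.normal(0.0, sigma, nominal.shape)
        vals[i] = flatness(noisy)
    return vals.mean(), vals.std()

# nominal 5x5 grid on a perfectly flat face, 0.01 (unit) measurement noise
x, y = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
nominal = np.c_[x.ravel(), y.ravel(), np.zeros(x.size)]
mean_err, u_err = mc_form_error_uncertainty(nominal, sigma=0.01)
print(mean_err, u_err)
```

Comparing `u_err` across candidate instrument placements is the kind of criterion the abstract proposes for choosing a measurement system configuration.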
Selection and optimization of push plate in large-scale immersed tube
Institute of Scientific and Technical Information of China (English)
王李; 刘然; 范卓凡
2015-01-01
Pushing construction technology is mature and widely used in bridge engineering. However, the pushing technique for prefabricated immersed tubes differs from the traditional pushing construction method: it is a multi-point-support synchronous pushing technique, with higher requirements on synchronization and equipment, and especially on the selection of the push plate, which directly affects the safety and quality of pushing the immersed tube during construction. Through repeated selection and structural optimization of the push plate, the safety and efficiency of the pushing construction for the Hong Kong-Zhuhai-Macao Bridge were ensured.
Neutrino footprint in Large Scale Structure
Jimenez, Raul; Verde, Licia
2016-01-01
Recent constraints on the sum of neutrino masses inferred by analyzing cosmological data show that detecting a non-zero neutrino mass is within reach of forthcoming cosmological surveys, implying a direct determination of the absolute neutrino mass scale. The measurement relies on constraining the shape of the matter power spectrum below the neutrino free-streaming scale: massive neutrinos erase power at these scales. Detection of a lack of small-scale power, however, could also be due to a host of other effects. It is therefore of paramount importance to validate neutrinos as the source of power suppression at small scales. We show that, independently of the hierarchy, neutrinos always leave a footprint on large, linear scales; the exact location and properties can be related to the measured power suppression (an astrophysical measurement) and the atmospheric neutrino mass splitting (a neutrino oscillation experiment measurement). This feature cannot be easily mimicked by systematic uncertainties or modifications in ...
Large-scale instabilities of helical flows
Cameron, Alexandre; Brachet, Marc-Étienne
2016-01-01
Large-scale hydrodynamic instabilities of periodic helical flows are investigated using 3D Floquet numerical computations. A minimal three-mode analytical model that reproduces and explains some of the full Floquet results is derived. The growth rate $\sigma$ of the most unstable modes (at small scale, low Reynolds number $Re$ and small wavenumber $q$) is found to scale differently in the presence or absence of the anisotropic kinetic alpha (AKA) effect. When an AKA effect is present, the scaling $\sigma \propto q\, Re$ predicted by the AKA effect theory [U. Frisch, Z. S. She, and P. L. Sulem, Physica D: Nonlinear Phenomena 28, 382 (1987)] is recovered for $Re \ll 1$ as expected (with most of the energy of the unstable mode concentrated in the large scales). However, as $Re$ increases, the growth rate is found to saturate and most of the energy is found at small scales. In the absence of the AKA effect, it is found that flows can still have large-scale instabilities, but with a negative eddy-viscosity sca...
Transition from large-scale to small-scale dynamo.
Ponty, Y; Plunian, F
2011-04-15
The dynamo equations are solved numerically with a helical forcing corresponding to the Roberts flow. In the fully turbulent regime the flow behaves as a Roberts flow on long time scales, plus turbulent fluctuations at short time scales. The dynamo onset is controlled by the long time scales of the flow, in agreement with the former Karlsruhe experimental results. The dynamo mechanism is governed by a generalized α effect, which includes both the usual α effect and turbulent diffusion, plus all higher order effects. Beyond the onset we find that this generalized α effect scales as $O(Rm^{-1})$, suggesting the takeover of small-scale dynamo action. This is confirmed by simulations in which dynamo occurs even if the large-scale field is artificially suppressed.
Analysis on the Optimized Layout Arrangement of Large-scale Pithead Power Plant
Institute of Scientific and Technical Information of China (English)
王炯
2015-01-01
Based on an analysis of the optimized layout arrangement of a large-scale pithead power plant, it is proposed that factors including the coal mine field location, the direction of incoming coal, the direction of outgoing transmission lines, the planned capacity, the site topography and the geological conditions should be considered comprehensively, so as to achieve economical, reasonable, safe and clean production.
OPTIMIZATION OF LIQUID NITROGEN WASH PROCESS IN LARGE-SCALE AMMONIA PLANT
Institute of Scientific and Technical Information of China (English)
任多胜
2011-01-01
Based on the main schemes for gas purification used in current domestic large-scale ammonia plants, and in combination with several issues that were given particular consideration during the optimization of the enterprise's liquid nitrogen wash process, this paper mainly clarifies the issues to note when selecting the process flow.
Large-scale simulations of reionization
Energy Technology Data Exchange (ETDEWEB)
Kohler, Katharina; /JILA, Boulder /Fermilab; Gnedin, Nickolay Y.; /Fermilab; Hamilton, Andrew J.S.; /JILA, Boulder
2005-11-01
We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range--from galaxies to rare bright quasars--we need to be able to cover a significant volume of the universe in our simulation without losing the important small-scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small-scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280 h^{-1} Mpc with 10 h^{-1} Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-α forest.
Accelerating sustainability in large-scale facilities
Marina Giampietro
2011-01-01
Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN's commitment to providing tangible answers to these questions was sealed in the first workshop on energy management for large-scale scientific infrastructures, held in Lund, Sweden, on 13-14 October. The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need to address energy issues in relation to science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. "Without compromising our scientific projects, we can ...
Large-scale structure of the Universe
Energy Technology Data Exchange (ETDEWEB)
Shandarin, S.F.; Doroshkevich, A.G.; Zel'dovich, Ya.B. (Inst. Prikladnoj Matematiki, Moscow, USSR)
1983-01-01
A review of the theory of the large-scale structure of the Universe is given, including the formation of clusters and superclusters of galaxies as well as large voids. Particular attention is paid to the theory of the neutrino-dominated Universe - the cosmological model in which neutrinos with a rest mass of several tens of eV dominate the mean density. The evolution of small perturbations is discussed, and estimates of microwave background radiation fluctuations are given for different angular scales. The adiabatic theory of structure formation, known as the 'pancake' scenario, and the successive fragmentation of the pancakes are described. This scenario is based on an approximate nonlinear theory of gravitational instability. Results of numerical experiments modeling the processes of large-scale structure formation are discussed.
Large-scale networks in engineering and life sciences
Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai
2014-01-01
This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines. The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...
Image-based Exploration of Large-Scale Pathline Fields
Nagoor, Omniah H.
2014-05-27
While real-time applications are nowadays routinely used in visualizing large numerical simulations and volumes, handling these large-scale datasets requires high-end graphics clusters or supercomputers to process and visualize them. However, not all users have access to powerful clusters. Therefore, it is challenging to come up with a visualization approach that provides insight into large-scale datasets on a single computer. Explorable images (EI) is one of the methods that allows users to handle large data on a single workstation. Although it is a view-dependent method, it combines both exploration and modification of visual aspects without re-accessing the original huge data. In this thesis, we propose a novel image-based method that applies the concept of EI to visualizing large flow-field pathline data. The goal of our work is to provide an optimized image-based method which scales well with the dataset size. Our approach is based on constructing a per-pixel linked list data structure in which each pixel contains a list of pathline segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, which further improves the performance and scalability of our approach.
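The per-pixel linked list at the core of this approach can be sketched in a few lines; this is a CPU toy with invented field names standing in for the GPU implementation: a head index per pixel plus parallel arrays holding each segment and the index of the next segment for the same pixel.

```python
import numpy as np

class PixelLinkedLists:
    """Per-pixel linked lists of pathline segments (A-buffer style):
    `head[y, x]` indexes the most recently inserted segment for that
    pixel; `next[i]` chains to the previous one; -1 terminates."""
    def __init__(self, width, height):
        self.head = -np.ones((height, width), dtype=np.int64)
        self.segments = []   # payload per node (e.g. pathline id, depth)
        self.next = []       # index of the next node for the same pixel

    def insert(self, x, y, segment):
        idx = len(self.segments)
        self.segments.append(segment)
        self.next.append(self.head[y, x])   # prepend: old head becomes next
        self.head[y, x] = idx
        return idx

    def pixel_segments(self, x, y):
        """Walk one pixel's list, e.g. to filter or color-code the
        pathlines covering it without touching the original data."""
        idx = self.head[y, x]
        while idx != -1:
            yield self.segments[idx]
            idx = self.next[idx]

buf = PixelLinkedLists(4, 4)
buf.insert(1, 2, {"pathline": 7, "depth": 0.3})
buf.insert(1, 2, {"pathline": 9, "depth": 0.1})
print([s["pathline"] for s in buf.pixel_segments(1, 2)])  # most recent first
```

Because each pixel owns its own list, filtering and re-coloring touch only the per-pixel chains, which is what makes the method view-dependent yet interactive on a single workstation.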
Linux software for large topology optimization problems
DEFF Research Database (Denmark)
... evolving product, which allows a parallel solution of the PDE, it lacks the important feature that the matrix-generation part of the computations is localized to each processor. This is well known to be critical for obtaining a useful speedup on a Linux cluster, and it motivates the search for a COMSOL-like package for large topology optimization problems. One candidate for such software, developed for Linux by Sandia National Laboratories in the USA, is the Sundance system. Sundance also uses a symbolic representation of the PDE, and a scalable numerical solution is achieved by employing the underlying Trilinos
Large-Scale Analysis of Art Proportions
DEFF Research Database (Denmark)
Jensen, Karl Kristoffer
2014-01-01
While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200,000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square) and with majo...
Large scale topic modeling made practical
DEFF Research Database (Denmark)
Wahlgreen, Bjarne Ørum; Hansen, Lars Kai
2011-01-01
Topic models are of broad interest. They can be used for query expansion and result structuring in information retrieval and as an important component in services such as recommender systems and user adaptive advertising. In large scale applications both the size of the database (number of documents) and ... topics at par with a much larger case-specific vocabulary.
Large-scale multimedia modeling applications
Energy Technology Data Exchange (ETDEWEB)
Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.
1995-08-01
Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications.
[Issues of large scale tissue culture of medicinal plant].
Lv, Dong-Mei; Yuan, Yuan; Zhan, Zhi-Lai
2014-09-01
In order to increase the yield and quality of medicinal plants and enhance the competitiveness of the medicinal plant industry in our country, this paper analyzes the status, problems and countermeasures of large-scale tissue culture of medicinal plants. Although biotechnology is one of the most efficient and promising means of producing medicinal plants, it still faces problems such as the stability of the material, the safety of transgenic medicinal plants and the optimization of culture conditions. Establishing a perfect evaluation system according to the characteristics of each medicinal plant is the key measure to ensure the sustainable development of large-scale tissue culture of medicinal plants.
Generation Expansion Planning Considering Integrating Large-scale Wind Generation
DEFF Research Database (Denmark)
Zhang, Chunyu; Ding, Yi; Østergaard, Jacob
2013-01-01
Generation expansion planning (GEP) is the problem of finding the optimal strategy to plan the construction of new generation while satisfying technical and economical constraints. In the deregulated and competitive environment, large-scale integration of wind generation (WG) in power systems has necessitated the inclusion of more innovative and sophisticated approaches in power system investment planning. A bi-level generation expansion planning approach considering large-scale wind generation is proposed in this paper. The first phase is the investment decision, while the second phase is production...
Large-scale neuromorphic computing systems
Furber, Steve
2016-10-01
Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.
Optimal aggregation of noisy observations: A large deviations approach
Energy Technology Data Exchange (ETDEWEB)
Murayama, Tatsuto; Davis, Peter, E-mail: murayama@cslab.kecl.ntt.co.j, E-mail: davis@cslab.kecl.ntt.co.j [NTT Communication Science Laboratories, NTT Corporation, 2-4, Hikaridai, Seika-cho, Keihanna, Kyoto 619-0237 (Japan)
2010-06-01
Sensing and data aggregation tasks in distributed systems should not be considered as separate issues. The quality of collective estimation involves a fundamental tradeoff between sensing quality, which can be increased by increasing the number of sensors, and aggregation quality under a given capacity of the network, which decreases if the number of sensors is too large. In this paper, we examine a system level strategy for optimal aggregation of data from an ensemble of independent sensors. In particular, we consider large scale aggregation from very many sensors, in which case the network capacity diverges to infinity. Then, by applying the large deviations techniques, we conclude the following significant result: larger scale aggregation always outperforms smaller scale aggregation at higher noise levels, while below a critical value of noise, there exist moderate scale aggregation levels at which optimal estimation is realized. At a critical value of noise, there is an abrupt change in the behavior of a parameter characterizing the aggregation strategy, similar to a phase transition in statistical physics.
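A toy numerical model of this tradeoff already reproduces the qualitative conclusion; the formulas below are our own simplification (averaging noise that falls as 1/n against Gaussian rate-distortion quantization error under a fixed total capacity), not the paper's large deviations analysis:

```python
import numpy as np

def total_mse(n, sigma2, capacity, signal_var=1.0):
    """Toy tradeoff: averaging n sensors cuts observation noise as
    sigma2/n (sensing quality), while a fixed total capacity gives
    each sensor rate capacity/n, with quantization error modeled by
    the Gaussian rate-distortion curve signal_var * 2**(-2*rate)."""
    sensing = sigma2 / n
    quantization = signal_var * 2.0 ** (-2.0 * capacity / n)
    return sensing + quantization

ns = np.arange(1, 201)
capacity = 40.0          # total bits available to the network
n_opt = {}
for sigma2 in (0.1, 4.0):   # low-noise and high-noise sensors
    errs = total_mse(ns, sigma2, capacity)
    n_opt[sigma2] = int(ns[np.argmin(errs)])
print(n_opt)
```

In this sketch the optimal number of sensors at high noise exceeds the optimum at low noise, mirroring the abstract's conclusion that larger scale aggregation wins at higher noise levels while moderate scales are optimal below a critical noise value.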
Configuration management in large scale infrastructure development
Rijn, T.P.J. van; Belt, H. van de; Los, R.H.
2000-01-01
Large Scale Infrastructure (LSI) development projects such as the construction of roads, rail-ways and other civil engineering (water)works is tendered differently today than a decade ago. Traditional workflow requested quotes from construction companies for construction works where the works to be
Sensitivity analysis for large-scale problems
Noor, Ahmed K.; Whitworth, Sandra L.
1987-01-01
The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
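For the linear static case mentioned above, sensitivity derivatives follow a standard form (a generic sketch; the paper's exact computational procedure is not reproduced here). Differentiating the equilibrium equations $K(p)\,u = f(p)$ with respect to a design parameter $p$ gives

```latex
K \,\frac{\partial u}{\partial p}
  = \frac{\partial f}{\partial p} - \frac{\partial K}{\partial p}\, u ,
```

so the factorization of $K$ from the original analysis can be reused, which is the key to efficient reanalysis. For the free-vibration problem $K\phi = \lambda M \phi$ with mass-normalized modes, the corresponding eigenvalue derivative is $\partial\lambda/\partial p = \phi^{T}\!\left(\partial K/\partial p - \lambda\,\partial M/\partial p\right)\phi$.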
Ensemble methods for large scale inverse problems
Heemink, A.W.; Umer Altaf, M.; Barbu, A.L.; Verlaan, M.
2013-01-01
Variational data assimilation, also sometimes simply called the ‘adjoint method’, is used very often for large scale model calibration problems. Using the available data, the uncertain parameters in the model are identified by minimizing a certain cost function that measures the difference between t
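The cost function referred to (the abstract is cut off above) typically takes the standard variational form (a generic sketch, not necessarily the exact functional used by the authors):

```latex
J(\theta) = \tfrac{1}{2}\,(\theta - \theta_b)^{T} B^{-1} (\theta - \theta_b)
  + \tfrac{1}{2}\sum_{i}\bigl(y_i - H_i(x_i(\theta))\bigr)^{T} R_i^{-1}
    \bigl(y_i - H_i(x_i(\theta))\bigr),
```

where $\theta_b$ is the background (prior) parameter estimate, $B$ and $R_i$ are background and observation error covariances, $y_i$ are the observations, and $H_i$ maps the model state $x_i(\theta)$ to observation space; the adjoint model supplies the gradient $\nabla_\theta J$ efficiently.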
DEFF Research Database (Denmark)
Arler, Finn
2006-01-01
, which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...
Quantum Signature of Cosmological Large Scale Structures
Capozziello, S; De Siena, S; Illuminati, F; Capozziello, Salvatore; Martino, Salvatore De; Siena, Silvio De; Illuminati, Fabrizio
1998-01-01
We demonstrate that to all large scale cosmological structures where gravitation is the only overall relevant interaction assembling the system (e.g. galaxies), there is associated a characteristic unit of action per particle whose order of magnitude coincides with the Planck action constant $h$. This result extends the class of physical systems for which quantum coherence can act on macroscopic scales (as e.g. in superconductivity) and agrees with the absence of screening mechanisms for the gravitational forces, as predicted by some renormalizable quantum field theories of gravity. It also seems to support those lines of thought invoking that large scale structures in the Universe should be connected to quantum primordial perturbations as requested by inflation, that the Newton constant should vary with time and distance and, finally, that gravity should be considered as an effective interaction induced by quantization.
Large-scale structure of the universe
Energy Technology Data Exchange (ETDEWEB)
Shandarin, S.F.; Doroshkevich, A.G.; Zel'dovich, Y.B.
1983-01-01
A survey is given of theories for the origin of large-scale structure in the universe: clusters and superclusters of galaxies, and vast black regions practically devoid of galaxies. Special attention is paid to the theory of a neutrino-dominated universe: a cosmology in which electron neutrinos with a rest mass of a few tens of electron volts would contribute the bulk of the mean density. The evolution of small perturbations is discussed, and estimates are made for the temperature anisotropy of the microwave background radiation on various angular scales. The nonlinear stage in the evolution of smooth irrotational perturbations in a low-pressure medium is described in detail. Numerical experiments simulating large-scale structure formation processes are discussed, as well as their interpretation in the context of catastrophe theory.
Neutrino footprint in large scale structure
Garay, Carlos Peña; Verde, Licia; Jimenez, Raul
2017-03-01
Recent constraints on the sum of neutrino masses inferred from cosmological data show that detecting a non-zero neutrino mass is within reach of forthcoming cosmological surveys. Such a measurement will imply a direct determination of the absolute neutrino mass scale. Physically, the measurement relies on constraining the shape of the matter power spectrum below the neutrino free streaming scale: massive neutrinos erase power at these scales. However, detection of a lack of small-scale power from cosmological data could also be due to a host of other effects. It is therefore of paramount importance to validate neutrinos as the source of power suppression at small scales. We show that, independently of the hierarchy, neutrinos always leave a footprint on large, linear scales; the exact location and properties are fully specified by the measured power suppression (an astrophysical measurement) and the atmospheric neutrino mass splitting (a neutrino oscillation experiment measurement). This feature cannot be easily mimicked by systematic uncertainties in the cosmological data analysis or modifications in the cosmological model. Therefore the measurement of such a feature, up to a 1% relative change in the power spectrum for extreme differences in the mass eigenstates mass ratios, is a smoking gun for confirming the determination of the absolute neutrino mass scale from cosmological observations. It also demonstrates the synergy between astrophysics and particle physics experiments.
Galaxy alignment on large and small scales
Kang, X.; Lin, W. P.; Dong, X.; Wang, Y. O.; Dutton, A.; Macciò, A.
2016-10-01
Galaxies are not randomly distributed across the universe but show different kinds of alignment on different scales. On small scales, satellite galaxies tend to be distributed along the major axis of the central galaxy, with a dependence on galaxy properties: both red satellites and red centrals show stronger alignment than their blue counterparts. On large scales, it is found that the major axes of Luminous Red Galaxies (LRGs) have correlations up to 30 Mpc/h. Using a hydrodynamical simulation with star formation, we investigate the origin of galaxy alignment on different scales. It is found that most red satellite galaxies stay in the inner region of the dark matter halo, inside which the shape of the central galaxy is well aligned with the dark matter distribution. Red centrals have stronger alignment than blue ones because they live in massive haloes, and the central galaxy-halo alignment increases with halo mass. On large scales, the alignment of LRGs also arises from the galaxy-halo shape correlation, but with some degree of misalignment. Massive haloes have stronger alignment than the haloes in the filaments that connect them. This is contrary to the naive expectation that cosmic filaments are the cause of halo alignment.
Large-Scale PV Integration Study
Energy Technology Data Exchange (ETDEWEB)
Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris
2011-07-29
This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.
Large-Scale Collective Entity Matching
Rastogi, Vibhor; Garofalakis, Minos
2011-01-01
There have been several recent advances in the Machine Learning community on the Entity Matching (EM) problem. However, their lack of scalability has prevented them from being applied in practical settings on large real-life datasets. To this end, we propose a principled framework to scale any generic EM algorithm. Our technique consists of running multiple instances of the EM algorithm on small neighborhoods of the data and passing messages across neighborhoods to construct a global solution. We prove formal properties of our framework and experimentally demonstrate the effectiveness of our approach in scaling EM algorithms.
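The neighborhood-plus-merge idea can be sketched in a few lines (a toy sketch with assumed details: string records, first-character blocking as the "neighborhood", difflib similarity standing in for the base EM algorithm, and union-find merging the local decisions into a global clustering):

```python
from difflib import SequenceMatcher

def find(parent, x):
    """Union-find root lookup with path compression."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def scaled_match(records, block_key, sim_threshold=0.8):
    """Run a simple pairwise matcher inside each small neighborhood (block),
    then merge the local match decisions into one global clustering."""
    parent = list(range(len(records)))
    blocks = {}
    for i, r in enumerate(records):
        blocks.setdefault(block_key(r), []).append(i)
    for idxs in blocks.values():      # local matching stands in for any base EM
        for a in range(len(idxs)):
            for b in range(a + 1, len(idxs)):
                i, j = idxs[a], idxs[b]
                if SequenceMatcher(None, records[i], records[j]).ratio() >= sim_threshold:
                    parent[find(parent, i)] = find(parent, j)
    return [find(parent, i) for i in range(len(records))]
```

Records in the same cluster share a label; the real framework additionally passes messages between overlapping neighborhoods, which this sketch omits.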
Large Scale Bacterial Colony Screening of Diversified FRET Biosensors.
Directory of Open Access Journals (Sweden)
Julia Litzlbauer
Biosensors based on Förster Resonance Energy Transfer (FRET) between fluorescent protein mutants have started to revolutionize physiology and biochemistry. However, many types of FRET biosensors show relatively small FRET changes, making measurements with these probes challenging when used under sub-optimal experimental conditions. Thus, a major effort in the field currently lies in designing new optimization strategies for these types of sensors. Here we describe procedures for optimizing FRET changes by large scale screening of mutant biosensor libraries in bacterial colonies. We describe optimization of biosensor expression, permeabilization of bacteria, software tools for analysis, and screening conditions. The procedures reported here may help in improving FRET changes in multiple suitable classes of biosensors.
Stabilization Algorithms for Large-Scale Problems
DEFF Research Database (Denmark)
Jensen, Toke Koldborg
2006-01-01
The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some......-curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...
The large-scale structure of vacuum
Albareti, F D; Maroto, A L
2014-01-01
The vacuum state in quantum field theory is known to exhibit a number of fundamental physical features. In this work we explore the possibility that this state could also present a non-trivial space-time structure on large scales. In particular, we will show that by imposing the renormalized vacuum energy-momentum tensor to be conserved and compatible with cosmological observations, the vacuum energy of sufficiently heavy fields behaves at late times as non-relativistic matter rather than as a cosmological constant. In this limit, the vacuum state supports perturbations whose speed of sound is negligible and accordingly allows the growth of structures in the vacuum energy itself. This large-scale structure of vacuum could seed the formation of galaxies and clusters very much in the same way as cold dark matter does.
Growth Limits in Large Scale Networks
DEFF Research Database (Denmark)
Knudsen, Thomas Phillip
the fundamental technological resources in network technologies are analysed for scalability. Here several technological limits to continued growth are presented. The third step involves a survey of major problems in managing large scale networks given the growth of user requirements and the technological...... limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic for both automatic management feasibility and for manpower resource management. In the fourth step the scope is extended to include the present society with the DDN project as its...... main focus. Here the general perception of the nature and role in society of large scale networks as a fundamental infrastructure is analysed. This analysis focuses on the effects of the technical DDN projects and on the perception of network infrastructure as expressed by key decision makers...
Process Principles for Large-Scale Nanomanufacturing.
Behrens, Sven H; Breedveld, Victor; Mujica, Maritza; Filler, Michael A
2017-06-07
Nanomanufacturing-the fabrication of macroscopic products from well-defined nanoscale building blocks-in a truly scalable and versatile manner is still far from our current reality. Here, we describe the barriers to large-scale nanomanufacturing and identify routes to overcome them. We argue for nanomanufacturing systems consisting of an iterative sequence of synthesis/assembly and separation/sorting unit operations, analogous to those used in chemicals manufacturing. In addition to performance and economic considerations, phenomena unique to the nanoscale must guide the design of each unit operation and the overall process flow. We identify and discuss four key nanomanufacturing process design needs: (a) appropriately selected process break points, (b) synthesis techniques appropriate for large-scale manufacturing, (c) new structure- and property-based separations, and (d) advances in stabilization and packaging.
Condition Monitoring of Large-Scale Facilities
Hall, David L.
1999-01-01
This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.
Wireless Secrecy in Large-Scale Networks
Pinto, Pedro C; Win, Moe Z
2011-01-01
The ability to exchange secret information is critical to many commercial, governmental, and military networks. The intrinsically secure communications graph (iS-graph) is a random graph which describes the connections that can be securely established over a large-scale network, by exploiting the physical properties of the wireless medium. This paper provides an overview of the main properties of this new class of random graphs. We first analyze the local properties of the iS-graph, namely the degree distributions and their dependence on fading, target secrecy rate, and eavesdropper collusion. To mitigate the effect of the eavesdroppers, we propose two techniques that improve secure connectivity. Then, we analyze the global properties of the iS-graph, namely percolation on the infinite plane, and full connectivity on a finite region. These results help clarify how the presence of eavesdroppers can compromise secure communication in a large-scale network.
ELASTIC: A Large Scale Dynamic Tuning Environment
Directory of Open Access Journals (Sweden)
Andrea Martínez
2014-01-01
The spectacular growth in the number of cores in current supercomputers poses design challenges for the development of performance analysis and tuning tools. To be effective, such analysis and tuning tools must be scalable and be able to manage the dynamic behaviour of parallel applications. In this work, we present ELASTIC, an environment for dynamic tuning of large-scale parallel applications. To be scalable, the architecture of ELASTIC takes the form of a hierarchical tuning network of nodes that perform a distributed analysis and tuning process. Moreover, the tuning network topology can be configured to adapt itself to the size of the parallel application. To guide the dynamic tuning process, ELASTIC supports a plugin architecture. These plugins, called ELASTIC packages, allow the integration of different tuning strategies into ELASTIC. We also present experimental tests conducted using ELASTIC, showing its effectiveness to improve the performance of large-scale parallel applications.
Institute of Scientific and Technical Information of China (English)
涂伟; 李清泉; 方志祥
2014-01-01
Due to multiple constraints and multiple objectives, optimization of the large-scale multi-depot logistics routing problem is very difficult. A spatial heuristic algorithm is proposed based on the network Voronoi diagram. From the spatial perspective, the two spatial issues involved in the multi-depot logistics routing problem are service-area partition and routing optimization. Using the depots' network Voronoi diagram, the service area is coarsely partitioned and then refined according to the goods storage in each depot. For the routing optimization, the local search space is limited to the spatial neighbors of customers (the network K nearest neighbors), and the solution quality is iteratively improved. The proposed heuristic minimizes the number of vehicles used and the total route length. An experiment on several large-scale logistics distribution instances in Shenzhen, China was carried out to validate the performance of the proposed heuristic algorithm. Results indicated that it provided high-quality solutions for large-scale instances with 6400 customers in no more than 15 minutes, with solution quality better than that of ArcGIS (by up to 10.8%) at about 21.2% of its computation time. The proposed heuristic algorithm could be widely used in e-commerce, express delivery, and public utilities in cities to promote logistics efficiency.
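A drastically simplified sketch of the two-stage idea (assumed details: Euclidean distance standing in for the network Voronoi diagram, no capacity-based refinement, and a plain nearest-neighbor route builder instead of the paper's neighborhood-limited local search):

```python
import math

def partition_and_route(depots, customers):
    """Stage 1: assign each customer to its nearest depot (a Voronoi partition).
    Stage 2: build one route per depot with a nearest-neighbor heuristic."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    # Service-area partition: each customer goes to the closest depot.
    areas = {i: [] for i in range(len(depots))}
    for c in customers:
        areas[min(range(len(depots)), key=lambda i: dist(depots[i], c))].append(c)

    # Routing: greedily visit the nearest unvisited customer in each area.
    routes = {}
    for i, custs in areas.items():
        route, pos, todo = [], depots[i], custs[:]
        while todo:
            nxt = min(todo, key=lambda c: dist(pos, c))
            todo.remove(nxt)
            route.append(nxt)
            pos = nxt
        routes[i] = route
    return routes
```

A real implementation would use shortest-path distances on the road network and refine the partition by depot capacity, as the abstract describes.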
Measuring Bulk Flows in Large Scale Surveys
Feldman, H A; Feldman, Hume A.; Watkins, Richard
1993-01-01
We follow a formalism presented by Kaiser to calculate the variance of bulk flows in large scale surveys. We apply the formalism to a mock survey of Abell clusters à la Lauer & Postman and find the variance in the expected bulk velocities in a universe with CDM, MDM and IRAS-QDOT power spectra. We calculate the velocity variance as a function of the 1-D velocity dispersion of the clusters and the size of the survey.
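The Kaiser-type calculation referred to rests on the standard expression for the bulk-flow variance within a survey window of scale $R$ (a generic sketch of the formalism, not the authors' exact notation):

```latex
\langle v^2(R) \rangle
  = \frac{H_0^2\, f^2(\Omega)}{2\pi^2}
    \int_0^{\infty} P(k)\,\bigl|\tilde{W}(kR)\bigr|^2 \, dk ,
```

where $P(k)$ is the matter power spectrum (CDM, MDM or IRAS-QDOT above), $f(\Omega) \approx \Omega^{0.6}$ is the linear growth rate, and $\tilde{W}$ is the Fourier transform of the survey window function.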
Statistical characteristics of Large Scale Structure
Demianski; Doroshkevich
2002-01-01
We investigate the mass functions of different elements of the Large Scale Structure -- walls, pancakes, filaments and clouds -- and the impact of transverse motions -- expansion and/or compression -- on their statistical characteristics. Using the Zel'dovich theory of gravitational instability we show that the mass functions of all structure elements are approximately the same and the mass of all elements is found to be concentrated near the corresponding mean mass. At high redshifts, both t...
Topologies for large scale photovoltaic power plants
Cabrera Tobar, Ana; Bullich Massagué, Eduard; Aragüés Peñalba, Mònica; Gomis Bellmunt, Oriol
2016-01-01
The concern of increasing renewable energy penetration into the grid, together with the reduction in prices of photovoltaic solar panels during the last decade, has enabled the development of large scale solar power plants connected to the medium and high voltage grid. Photovoltaic generation components, the internal layout and the ac collection grid are being investigated for ensuring the best design, operation and control of these power plants. This ...
Optimal access to large databases via networks
Energy Technology Data Exchange (ETDEWEB)
Munro, J.K.; Fellows, R.L.; Phifer, D.; Carrick, M.R.; Tarlton, N.
1997-10-01
A CRADA with Stephens Engineering was undertaken in order to transfer knowledge and experience about access to information in large text databases, with results of queries and searches provided using the multimedia capabilities of the World Wide Web. Data access is optimized by the use of intelligent agents. Technology Logic Diagram documents published for the DOE facilities in Oak Ridge (K-25, X-10, Y-12) were chosen for this effort because of the large number of technologies identified, described, evaluated, and ranked for possible use in the environmental remediation of these facilities. Fast, convenient access to this information is difficult because of the volume and complexity of the data. WAIS software used to provide full-text, field-based search capability can also be used, through the development of an appropriate hierarchy of menus, to provide tabular summaries of technologies satisfying a wide range of criteria. The menu hierarchy can also be used to regenerate dynamically many of the tables that appeared in the original hardcopy publications, all from a single text database of the technology descriptions. Use of the Web environment permits linking many of the Technology Logic Diagram references to on-line versions of these publications, particularly the DOE Orders and related directives providing the legal requirements that were the basis for undertaking the Technology Logic Diagram studies in the first place.
Large-Scale Visual Data Analysis
Johnson, Chris
2014-04-01
Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high performance visualization research challenges and opportunities.
Non-Scaling Fixed Field Gradient Optimization
Energy Technology Data Exchange (ETDEWEB)
TRBOJEVIC, D.
2004-10-13
Optimization of the non-scaling FFAG lattice for the specific application of muon acceleration, with respect to minimum orbit offsets, minimum path length and smallest circumference, is described. The short muon lifetime requires fast acceleration. In this work the acceleration is assumed to use superconducting cavities. This sets up a condition of acceleration at the top of the sinusoidal RF wave.
Dark Matter on small scales; Telescopes on large scales
Gilmore, G
2007-01-01
This article reviews recent progress in observational determination of the properties of dark matter on small astrophysical scales, and progress towards the European Extremely Large Telescope. Current results suggest some surprises: the central DM density profile is typically cored, not cusped, with scale sizes never less than a few hundred pc; the central densities are typically 10-20 GeV/cc; no galaxy is found with a dark mass halo less massive than $\sim 5\times10^{7}M_{\odot}$. We are discovering many more dSphs, which we are analysing to test the generality of these results. The European Extremely Large Telescope Design Study is going forward well, supported by an outstanding scientific case, and founded on detailed industrial studies of the technological requirements.
Less is more: regularization perspectives on large scale machine learning
CERN. Geneva
2017-01-01
Deep learning based techniques provide a possible solution at the expense of theoretical guidance and, especially, of computational requirements. It is then a key challenge for large scale machine learning to devise approaches guaranteed to be accurate and yet computationally efficient. In this talk, we will consider a regularization perspective on machine learning, appealing to classical ideas in linear algebra and inverse problems to scale up dramatically nonparametric methods such as kernel methods, often dismissed because of prohibitive costs. Our analysis derives optimal theoretical guarantees while providing experimental results at par with or out-performing state of the art approaches.
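One classical route to the kind of kernel-method scale-up described, sketched here under assumptions (the abstract does not name a specific method; the Nystrom approximation with an RBF kernel is used purely as an illustration):

```python
import numpy as np

def nystroem_features(X, landmarks, gamma):
    """Map X to finite-dimensional features whose inner products approximate
    the RBF kernel, using a small set of landmark points (Nystrom method)."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    W = rbf(landmarks, landmarks)          # small m x m kernel on the landmarks
    U, s, _ = np.linalg.svd(W)             # W is symmetric PSD: SVD = eigendecomp
    mapping = U / np.sqrt(np.maximum(s, 1e-12))
    return rbf(X, landmarks) @ mapping     # n x m features; n x n kernel avoided
```

With m landmarks the cost drops from O(n^2) kernel entries to O(nm), and any linear learner trained on these features approximates the full kernel machine; when the landmarks are the data themselves the approximation is exact.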
Institute of Scientific and Technical Information of China (English)
侯婷婷; 娄素华; 张滋华; 吴耀武
2012-01-01
Since wind power has been developed on a large scale and in a highly centralized way in recent years, and wind bases are usually geographically inconsistent with load centers, transmitting wind power through high-voltage transmission lines is an inevitable trend. In this situation, this paper presents an optimization method for the thermal generation capacity that accompanies transmitted wind power, addressing wind power's variability and low energy density. For the random nature of wind power, the duration curve of the spare transmission capacity (STC) of the line is introduced to characterize the capacity available to thermal power after wind power has been transmitted. Based on the STC duration curve, a model for optimizing the capacity of the accompanying thermal sources is established, which takes into account transmission line costs, thermal source costs and the revenue from transmitted electricity; with wind power given transmission priority, the transmission channel is fully utilized so that the total economic benefit is maximized. The model is solved with a two-stage optimization strategy. Case studies are carried out for an example system, where the effects of coal price and electricity price on the optimal schemes are also analyzed, and the results verify the correctness and effectiveness of the presented method.
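The STC duration curve defined above can be computed directly from a time series of transmitted wind power (a minimal sketch; the function and variable names and the hourly resolution are assumptions, not the paper's notation):

```python
def spare_capacity_duration_curve(line_capacity, wind_series):
    """Spare transmission capacity left for thermal units in each period after
    wind power (which has priority) is transmitted, sorted in descending order
    to form a duration curve."""
    spare = [max(line_capacity - w, 0.0) for w in wind_series]
    return sorted(spare, reverse=True)
```

The resulting curve shows, for any capacity level, how many periods that much room remains on the line, which is what the capacity-optimization model consumes.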
The Large Scale Organization of Turbulent Channels
del Alamo, Juan C
2013-01-01
We have investigated the organization and dynamics of the large turbulent structures that develop in the logarithmic and outer layers of high-Reynolds-number wall flows. These structures have sizes comparable to the flow thickness and contain most of the turbulent kinetic energy. They produce a substantial fraction of the skin friction and play a key role in turbulent transport. In spite of their significance, there is much less information about the large structures far from the wall than about the small ones of the near-wall region. The main reason for this is the joint requirements of large measurement records and high Reynolds numbers for their experimental analysis. Their theoretical analysis has been hampered by the lack of successful models for their interaction with the background small-scale turbulence.
RESTRUCTURING OF THE LARGE-SCALE SPRINKLERS
Directory of Open Access Journals (Sweden)
Paweł Kozaczyk
2016-09-01
One of the best ways for agriculture to become independent of precipitation shortages is irrigation. In the seventies and eighties of the last century a number of large-scale sprinklers were built in Wielkopolska. At the end of the 1970s, 67 sprinklers with a total area of 6400 ha were installed in the Poznan province. The average size of a sprinkler reached 95 ha. In 1989 there were 98 sprinklers, covering an area of more than 10 130 ha. The study was conducted on 7 large sprinklers with areas ranging from 230 to 520 hectares, over the period 1986-1998. After the introduction of the market economy in the early 1990s and the ownership changes in agriculture, the large-scale sprinklers underwent significant or total devastation. Land of the State Farms of the State Agricultural Property Agency was leased or sold, and the new owners used the existing sprinklers to a very small extent. This involved a change in crop structure and demand structure and an increase in operating costs. There was also a threefold increase in electricity prices. In practice, the operation of large-scale irrigation encountered all kinds of barriers: limitations of system solutions, supply difficulties and high levels of equipment failure, none of which encouraged rational use of the available sprinklers. A field survey of the local area was carried out to document the current status of the remaining irrigation infrastructure. The scheme adopted for the restructuring of Polish agriculture was not the best solution, causing massive destruction of assets previously invested in the sprinkler system.
Supporting large-scale computational science
Energy Technology Data Exchange (ETDEWEB)
Musick, R
1998-10-01
A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: 1. Several commercial DBMS systems have demonstrated storage and ad-hoc query access to Terabyte data sets. 2. Several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme. 3. Several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data. 4. In some cases, performance is a moot issue; this is true in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.
The Cosmology Large Angular Scale Surveyor
Ali, Aamir; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; Dahal, Sumit; Denis, Kevin; Dünner, Rolando; Eimer, Joseph; Essinger-Hileman, Thomas; Fluxa, Pedro; Halpern, Mark; Hilton, Gene; Hinshaw, Gary F.; Hubmayr, Johannes; Iuliano, Jeffrey; Karakla, John; Marriage, Tobias; McMahon, Jeff; Miller, Nathan; Moseley, Samuel H.; Palma, Gonzalo; Parker, Lucas; Petroff, Matthew; Pradenas, Bastián; Rostem, Karwan; Sagliocca, Marco; Valle, Deniz; Watts, Duncan; Wollack, Edward; Xu, Zhilei; Zeng, Lingzhen
2017-01-01
The Cosmology Large Angular Scale Surveyor (CLASS) is a ground-based telescope array designed to measure the large-angular scale polarization signal of the Cosmic Microwave Background (CMB). The large-angular scale CMB polarization measurement is essential for a precise determination of the optical depth to reionization (from the E-mode polarization) and a characterization of inflation from the predicted polarization pattern imprinted on the CMB by gravitational waves in the early universe (from the B-mode polarization). CLASS will characterize the primordial tensor-to-scalar ratio, r, to 0.01 (95% CL). CLASS is uniquely designed to be sensitive to the primordial B-mode signal across the entire range of angular scales where it could possibly dominate over the lensing signal that converts E-modes to B-modes, while also making multi-frequency observations both above and below the frequency where the CMB-to-foreground signal ratio is at its maximum. The design enables CLASS to make a definitive cosmic-variance-limited measurement of the optical depth to scattering from reionization. CLASS is an array of 4 telescopes operating at approximately 40, 90, 150, and 220 GHz. CLASS is located high in the Andes mountains in the Atacama Desert of northern Chile. The location of the CLASS site at high altitude near the equator minimizes atmospheric emission while allowing for daily mapping of ~70% of the sky. A rapid front-end Variable-delay Polarization Modulator (VPM) and low-noise Transition Edge Sensor (TES) detectors allow for high-sensitivity, low-systematic-error mapping of the CMB polarization at large angular scales. The VPM, detectors and their coupling structures were all uniquely designed and built for CLASS. We present here an overview of the CLASS scientific strategy, instrument design, and current progress. Particular attention is given to the development and status of the Q-band receiver currently surveying the sky from the Atacama Desert and the development of
Spacecraft Component Adaptive Layout Environment (SCALE): An efficient optimization tool
Fakoor, Mahdi; Ghoreishi, Seyed Mohammad Navid; Sabaghzadeh, Hossein
2016-11-01
For finding the optimum layout of spacecraft subsystems, important factors such as the center of gravity, moments of inertia, thermal distribution, natural frequencies, etc. should be taken into account. This large number of effective parameters makes the optimum layout process of spacecraft subsystems complex and time consuming. In this paper, an automatic tool, based on multi-objective optimization methods, is proposed for the three dimensional layout of spacecraft subsystems. In this regard, an efficient Spacecraft Component Adaptive Layout Environment (SCALE) is produced by integrating modeling, FEM, and optimization software. SCALE automatically provides optimal solutions for a three dimensional layout of spacecraft subsystems while considering important constraints such as center of gravity, moment of inertia, thermal distribution, natural frequencies and structural strength. In order to show the superiority and efficiency of SCALE, layouts of a telecommunication spacecraft and a remote sensing spacecraft are performed. The results show that the objective-function values for the layouts obtained by SCALE are much better than those of the traditional approach, i.e. the Reference Baseline Solution (RBS) proposed by the engineering system team. This indicates the good performance and ability of SCALE for finding the optimal layout of spacecraft subsystems.
The Cosmology Large Angular Scale Surveyor (CLASS)
Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; Denis, Kevin; Moseley, Samuel H.; Rostem, Karwan; Wollack, Edward
2016-01-01
The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).
Cold flows and large scale tides
van de Weygaert, R.; Hoffman, Y.
1999-01-01
Within the context of the general cosmological setting it has remained puzzling that the local Universe is a relatively cold environment, in the sense that small-scale peculiar velocities are relatively small. Indeed, this has long figured as an important argument for the Universe having a low Ω or, if the Universe were to have a high Ω, for the existence of a substantial bias between the galaxy and the matter distribution. Here we investigate the dynamical impact of neighbouring matter concentrations on local small-scale characteristics of cosmic flows. While regions in which huge nearby matter clumps dominate the local dynamics and kinematics may experience a faster collapse on account of the corresponding tidal influence, that influence will also slow down or even prevent a thorough mixing and virialization of the collapsing region. By means of N-body simulations starting from constrained realizations of regions of modest density surrounded by more pronounced massive structures, we have explored the extent to which large scale tidal fields may indeed suppress the `heating' of the small-scale cosmic velocities. Among other diagnostics, we quantify the resulting cosmic flows through the cosmic Mach number. This allows us to draw conclusions about the validity of estimates of global cosmological parameters from local cosmic phenomena and the necessity of taking into account the structure and distribution of mass in the local Universe.
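The cosmic Mach number used above as a "coldness" diagnostic is conventionally defined (following Ostriker & Suto) as the ratio of a region's bulk flow to its internal velocity dispersion; the paper's precise estimator may differ in detail:

```latex
% M(R): coldness of the flow within a sphere of radius R
\mathcal{M}(R) \;=\; \frac{V(R)}{S(R)},
\qquad
V(R) = \bigl|\langle \mathbf{v} \rangle_R\bigr|,
\qquad
S^2(R) = \bigl\langle \,\bigl|\mathbf{v} - \langle \mathbf{v} \rangle_R\bigr|^2 \,\bigr\rangle_R ,
```

so a cold flow is one with coherent bulk motion large compared to its small-scale dispersion, i.e. M(R) > 1.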
Large-Scale Quasi-geostrophic Magnetohydrodynamics
Balk, Alexander M.
2014-12-01
We consider the ideal magnetohydrodynamics (MHD) of a shallow fluid layer on a rapidly rotating planet or star. The presence of a background toroidal magnetic field is assumed, and the "shallow water" beta-plane approximation is used. We derive a single equation for the slow large length scale dynamics. The range of validity of this equation fits the MHD of the lighter fluid at the top of Earth's outer core. The form of this equation is similar to the quasi-geostrophic (Q-G) equation (for usual ocean or atmosphere), but the parameters are essentially different. Our equation also implies the inverse cascade; but contrary to the usual Q-G situation, the energy cascades to smaller length scales, while the enstrophy cascades to the larger scales. We find the Kolmogorov-type spectrum for the inverse cascade. The spectrum indicates the energy accumulation in larger scales. In addition to the energy and enstrophy, the obtained equation possesses an extra (adiabatic-type) invariant. Its presence implies energy accumulation in the 30° sector around zonal direction. With some special energy input, the extra invariant can lead to the accumulation of energy in zonal magnetic field; this happens if the input of the extra invariant is small, while the energy input is considerable.
Clumps in large scale relativistic jets
Tavecchio, F; Celotti, A
2003-01-01
The relatively intense X-ray emission from large scale (tens to hundreds of kpc) jets discovered with Chandra likely implies that jets (at least in powerful quasars) are still relativistic at those distances from the active nucleus. In this case the emission is due to Compton scattering off seed photons provided by the Cosmic Microwave Background; on one hand this permits magnetic fields close to equipartition with the emitting particles, and on the other hand it minimizes the requirements on the total power carried by the jet. The emission comes from compact (kpc scale) knots, and here we investigate what can be predicted about the possible emission between the bright knots. This is motivated by the fact that bulk relativistic motion makes Compton scattering off the CMB photons efficient even when electrons are cold or mildly relativistic in the comoving frame. This implies relatively long cooling times, dominated by adiabatic losses. Therefore the relativistically moving plasma can emit, by Compton sc...
Conformal Anomaly and Large Scale Gravitational Coupling
Salehi, H
2000-01-01
We present a model in which the breakdown of conformal symmetry of a quantum stress-tensor due to the trace anomaly is related to a cosmological effect in a gravitational model. This is done by characterizing the traceless part of the quantum stress-tensor in terms of the stress-tensor of a conformally invariant classical scalar field. We introduce a conformal frame in which the anomalous trace is identified with a cosmological constant. In this conformal frame we establish the Einstein field equations by connecting the quantum stress-tensor with the large scale distribution of matter in the universe.
Large Scale Quantum Simulations of Nuclear Pasta
Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian
2016-03-01
Complex and exotic nuclear geometries collectively referred to as ``nuclear pasta'' are expected to naturally exist in the crust of neutron stars and in supernovae matter. Using a set of self-consistent microscopic nuclear energy density functionals we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 pasta configurations. This work is supported in part by DOE Grants DE-FG02-87ER40365 (Indiana University) and DE-SC0008808 (NUCLEI SciDAC Collaboration).
Large scale wind power penetration in Denmark
DEFF Research Database (Denmark)
Karnøe, Peter
2013-01-01
The Danish electricity generating system prepared to adopt nuclear power in the 1970s, yet has become the world's front runner in wind power with a national plan for 50% wind power penetration by 2020. This paper deploys a sociotechnical perspective to explain the historical transformation of "net...... expertise evolves and contributes to the normalization and large-scale penetration of wind power in the electricity generating system. The analysis teaches us how technological paths become locked-in, but also indicates keys for locking them out....
Large scale phononic metamaterials for seismic isolation
Energy Technology Data Exchange (ETDEWEB)
Aravantinos-Zafiris, N. [Department of Sound and Musical Instruments Technology, Ionian Islands Technological Educational Institute, Stylianou Typaldou ave., Lixouri 28200 (Greece); Sigalas, M. M. [Department of Materials Science, University of Patras, Patras 26504 (Greece)
2015-08-14
In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, suggesting that they could be serious candidates for seismic isolation structures. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite difference time domain method is used in our calculations in order to compute the band structures of the proposed metamaterials.
Hierarchical Engine for Large Scale Infrastructure Simulation
Energy Technology Data Exchange (ETDEWEB)
2017-03-15
HELICS is a new open-source, cyber-physical-energy co-simulation framework for electric power systems. HELICS is designed to support very-large-scale (100,000+ federates) co-simulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross-platform operating system support, the integration of both event-driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.
Accelerated large-scale multiple sequence alignment
Directory of Open Access Journals (Sweden)
Lloyd Scott
2011-12-01
Background: Multiple sequence alignment (MSA) is a fundamental analysis method used in bioinformatics and many comparative genomic applications. Prior MSA acceleration attempts with reconfigurable computing have only addressed the first stage of progressive alignment and consequently exhibit performance limitations according to Amdahl's Law. This work is the first known to accelerate the third stage of progressive alignment on reconfigurable hardware. Results: We reduce subgroups of aligned sequences into discrete profiles before they are pairwise aligned on the accelerator. Using an FPGA accelerator, an overall speedup of up to 150 has been demonstrated on a large data set when compared to a 2.4 GHz Core2 processor. Conclusions: Our parallel algorithm and architecture accelerate large-scale MSA with reconfigurable computing and allow researchers to solve the larger problems that confront biologists today. Program source is available from http://dna.cs.byu.edu/msa/.
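The profile reduction described in the Results can be illustrated with a minimal sketch (function and variable names are illustrative, not from the paper's code): a subgroup of already-aligned sequences collapses into per-column residue frequencies, and profile-profile alignment then scores pairs of columns instead of all pairs of raw sequences.

```python
from collections import Counter

def profile(aligned_seqs):
    """Collapse a subgroup of already-aligned (equal-length) sequences
    into a per-column residue-frequency profile. Gaps ('-') are counted
    like any residue, so the profile retains gap information."""
    length = len(aligned_seqs[0])
    assert all(len(s) == length for s in aligned_seqs)
    n = len(aligned_seqs)
    cols = []
    for i in range(length):
        counts = Counter(s[i] for s in aligned_seqs)
        cols.append({res: c / n for res, c in counts.items()})
    return cols

# Two aligned sequences reduce to one profile of 5 columns.
p = profile(["AC-GT", "ACCGT"])
```

Reducing a subgroup to a single profile is what keeps the accelerator's workload independent of subgroup size, which is the point of moving stage three onto the FPGA.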
Large-scale ATLAS production on EGEE
Espinal, X.; Campana, S.; Walker, R.
2008-07-01
In preparation for first data at the LHC, a series of Data Challenges, of increasing scale and complexity, have been performed. Large quantities of simulated data have been produced on three different Grids, integrated into the ATLAS production system. During 2006, the emphasis moved towards providing stable continuous production, as is required in the immediate run-up to first data, and thereafter. Here, we discuss the experience of the production done on EGEE resources, using submission based on the gLite WMS, CondorG and a system using Condor Glide-ins. The overall wall time efficiency of around 90% is largely independent of the submission method, and the dominant source of wasted CPU comes from data handling issues. The efficiency of grid job submission is significantly worse than this, and the glide-in method benefits greatly from factorising this out.
Large-scale ATLAS production on EGEE
Energy Technology Data Exchange (ETDEWEB)
Espinal, X [PIC - Port d' Informacio cientifica, Universitat Autonoma de Barcelona, Edifici D 08193 Bellaterra, Barcelona (Spain); Campana, S [CERN, European Laboratory for Particle Physics, Rue de Geneve 23 CH 1211 Geneva (Switzerland); Walker, R [TRIUMF, Tri - University Meson Facility, 4004 Wesbrook Mall Vancouver, BC (Canada)], E-mail: espinal@ifae.es
2008-07-15
In preparation for first data at the LHC, a series of Data Challenges, of increasing scale and complexity, have been performed. Large quantities of simulated data have been produced on three different Grids, integrated into the ATLAS production system. During 2006, the emphasis moved towards providing stable continuous production, as is required in the immediate run-up to first data, and thereafter. Here, we discuss the experience of the production done on EGEE resources, using submission based on the gLite WMS, CondorG and a system using Condor Glide-ins. The overall wall time efficiency of around 90% is largely independent of the submission method, and the dominant source of wasted CPU comes from data handling issues. The efficiency of grid job submission is significantly worse than this, and the glide-in method benefits greatly from factorising this out.
Analysis using large-scale ringing data
Directory of Open Access Journals (Sweden)
Baillie, S. R.
2004-06-01
Birds are highly mobile organisms and there is increasing evidence that studies at large spatial scales are needed if we are to properly understand their population dynamics. While classical metapopulation models have rarely proved useful for birds, more general metapopulation ideas involving collections of populations interacting within spatially structured landscapes are highly relevant (Harrison, 1994). There is increasing interest in understanding patterns of synchrony, or lack of synchrony, between populations and the environmental and dispersal mechanisms that bring about these patterns (Paradis et al., 2000). To investigate these processes we need to measure abundance, demographic rates and dispersal at large spatial scales, in addition to gathering data on relevant environmental variables. There is an increasing realisation that conservation needs to address rapid declines of common and widespread species (they will not remain so if such trends continue) as well as the management of small populations that are at risk of extinction. While the knowledge needed to support the management of small populations can often be obtained from intensive studies in a few restricted areas, conservation of widespread species often requires information on population trends and processes measured at regional, national and continental scales (Baillie, 2001). While management prescriptions for widespread populations may initially be developed from a small number of local studies or experiments, there is an increasing need to understand how such results will scale up when applied across wider areas. There is also a vital role for monitoring at large spatial scales, both in identifying such population declines and in assessing population recovery. Gathering data on avian abundance and demography at large spatial scales usually relies on the efforts of large numbers of skilled volunteers. Volunteer studies based on ringing (for example Constant Effort Sites [CES
Local Law of Addition of Random Matrices on Optimal Scale
Bao, Zhigang; Erdős, László; Schnelli, Kevin
2016-11-01
The eigenvalue distribution of the sum of two large Hermitian matrices, when one of them is conjugated by a Haar distributed unitary matrix, is asymptotically given by the free convolution of their spectral distributions. We prove that this convergence also holds locally in the bulk of the spectrum, down to the optimal scales larger than the eigenvalue spacing. The corresponding eigenvectors are fully delocalized. Similar results hold for the sum of two real symmetric matrices, when one is conjugated by a Haar orthogonal matrix.
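As a reminder, the global statement that this paper localizes can be written as follows (standard free-probability notation, not taken from the abstract): for N×N Hermitian A and B with empirical spectral distributions μ_A and μ_B,

```latex
% Global law; the paper proves the local analogue, holding down to
% spectral scales just above the eigenvalue spacing.
\mu_{A + U B U^{*}} \;\longrightarrow\; \mu_{A} \boxplus \mu_{B}
\qquad \text{weakly as } N \to \infty, \quad U \sim \text{Haar on } U(N),
```

where ⊞ denotes the free additive convolution of measures.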
Internationalization Measures in Large Scale Research Projects
Soeding, Emanuel; Smith, Nancy
2017-04-01
Large scale research projects (LSRP) often serve as flagships used by universities or research institutions to demonstrate their performance and capability to stakeholders and other interested parties. As the global competition among universities for the recruitment of the brightest brains has increased, effective internationalization measures have become hot topics for universities and LSRP alike. Nevertheless, most projects and universities are challenged with little experience on how to conduct these measures and make internationalization a cost-efficient and useful activity. Furthermore, those undertakings permanently have to be justified to the project PIs as important, valuable tools to improve the capacity of the project and the research location. There is a variety of measures suited to support universities in international recruitment. These include e.g. institutional partnerships, research marketing, a welcome culture, support for science mobility and an effective alumni strategy. These activities, although often conducted by different university entities, are interlocked and can be very powerful measures if interfaced in an effective way. On this poster we display a number of internationalization measures for various target groups and identify interfaces between project management, university administration, researchers and international partners to work together, exchange information and improve processes, in order to be able to recruit, support and keep the brightest heads for your project.
Large-scale Globally Propagating Coronal Waves
Directory of Open Access Journals (Sweden)
Alexander Warmuth
2015-09-01
Large-scale, globally propagating wave-like disturbances have been observed in the solar chromosphere and by inference in the corona since the 1960s. However, detailed analysis of these phenomena has only been conducted since the late 1990s. This was prompted by the availability of high-cadence coronal imaging data from numerous space-based instruments, which routinely show spectacular globally propagating bright fronts. Coronal waves, as these perturbations are usually referred to, have now been observed in a wide range of spectral channels, yielding a wealth of information. Many findings have supported the “classical” interpretation of the disturbances: fast-mode MHD waves or shocks that are propagating in the solar corona. However, observations that seemed inconsistent with this picture have stimulated the development of alternative models in which “pseudo waves” are generated by magnetic reconfiguration in the framework of an expanding coronal mass ejection. This has resulted in a vigorous debate on the physical nature of these disturbances. This review focuses on demonstrating how the numerous observational findings of the last one and a half decades can be used to constrain our models of large-scale coronal waves, and how a coherent physical understanding of these disturbances is finally emerging.
The Cosmology Large Angular Scale Surveyor
Harrington, Kathleen; Ali, Aamir; Appel, John W; Bennett, Charles L; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T; Colazo, Felipe; Dahal, Sumit; Denis, Kevin; Dünner, Rolando; Eimer, Joseph; Essinger-Hileman, Thomas; Fluxa, Pedro; Halpern, Mark; Hilton, Gene; Hinshaw, Gary F; Hubmayr, Johannes; Iuliano, Jeffery; Karakla, John; McMahon, Jeff; Miller, Nathan T; Moseley, Samuel H; Palma, Gonzalo; Parker, Lucas; Petroff, Matthew; Pradenas, Bastián; Rostem, Karwan; Sagliocca, Marco; Valle, Deniz; Watts, Duncan; Wollack, Edward; Xu, Zhilei; Zeng, Lingzhen
2016-01-01
The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad f...
Introducing Large-Scale Innovation in Schools
Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.
2016-08-01
Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.
Fast large-scale reionization simulations
Thomas, Rajat M.; Zaroubi, Saleem; Ciardi, Benedetta; Pawlik, Andreas H.; Labropoulos, Panagiotis; Jelić, Vibor; Bernardi, Gianni; Brentjens, Michiel A.; de Bruyn, A. G.; Harker, Geraint J. A.; Koopmans, Leon V. E.; Mellema, Garrelt; Pandey, V. N.; Schaye, Joop; Yatawatta, Sarod
2009-02-01
We present an efficient method to generate large simulations of the epoch of reionization without the need for a full three-dimensional radiative transfer code. Large dark-matter-only simulations are post-processed to produce maps of the redshifted 21-cm emission from neutral hydrogen. Dark matter haloes are embedded with sources of radiation whose properties are either based on semi-analytical prescriptions or derived from hydrodynamical simulations. These sources could either be stars or power-law sources with varying spectral indices. Assuming spherical symmetry, ionized bubbles are created around these sources, whose radial ionized fraction and temperature profiles are derived from a catalogue of one-dimensional radiative transfer experiments. In case of overlap of these spheres, photons are conserved by redistributing them around the connected ionized regions corresponding to the spheres. The efficiency with which these maps are created allows us to span the large parameter space typically encountered in reionization simulations. We compare our results with other, more accurate, three-dimensional radiative transfer simulations and find excellent agreement for the redshifts and the spatial scales of interest to upcoming 21-cm experiments. We generate a contiguous observational cube spanning redshift 6 to 12 and use these simulations to study the differences in the reionization histories between stars and quasars. Finally, the signal is convolved with the Low Frequency Array (LOFAR) beam response and its effects are analysed and quantified. Statistics performed on this mock data set shed light on possible observational strategies for LOFAR.
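The photon-conserving treatment of overlapping ionized spheres can be sketched in a few lines; the function name, the data layout, and the single-equivalent-sphere geometry are illustrative assumptions, not the paper's actual scheme (which redistributes photons around the connected ionized regions themselves):

```python
import math

def merge_bubbles(bubbles):
    """Photon-conserving merge of overlapping ionized spheres.
    Each bubble is (centre, radius). Overlapping bubbles form a
    connected group; the group's photons are conserved by replacing
    it with a single sphere of equal total ionized volume, placed at
    the volume-weighted mean position."""
    parent = list(range(len(bubbles)))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # connect every overlapping pair of spheres
    for i, (ci, ri) in enumerate(bubbles):
        for j, (cj, rj) in enumerate(bubbles[i + 1:], i + 1):
            if math.dist(ci, cj) < ri + rj:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(bubbles)):
        groups.setdefault(find(i), []).append(i)

    merged = []
    for idx in groups.values():
        w = [bubbles[i][1] ** 3 for i in idx]  # volume ~ photon budget
        vol = sum(w)
        centre = tuple(
            sum(bubbles[i][0][k] * w[n] for n, i in enumerate(idx)) / vol
            for k in range(3)
        )
        merged.append((centre, vol ** (1 / 3)))  # equal-volume radius
    return merged

# Two overlapping unit bubbles merge; a distant one is untouched.
out = merge_bubbles([((0.0, 0.0, 0.0), 1.0),
                     ((1.0, 0.0, 0.0), 1.0),
                     ((10.0, 0.0, 0.0), 1.0)])
```

Because only pairwise overlaps and cube-law volumes are involved, this bookkeeping is cheap compared to full radiative transfer, which is what lets the method span large parameter spaces.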
Optimal Length Scale for a Turbulent Dynamo.
Sadek, Mira; Alexakis, Alexandros; Fauve, Stephan
2016-02-19
We demonstrate that there is an optimal forcing length scale for low Prandtl number dynamo flows that can significantly reduce the required energy injection rate. The investigation is based on simulations of the induction equation in a periodic box of size 2πL. The flows considered are the laminar and turbulent ABC flows forced at different forcing wave numbers k_f, where the turbulent case is simulated using a subgrid turbulence model. At the smallest allowed forcing wave number k_f = k_min = 1/L the laminar critical magnetic Reynolds number Rm_c^lam is more than an order of magnitude smaller than the turbulent critical magnetic Reynolds number Rm_c^turb due to the hindering effect of turbulent fluctuations. We show that this hindering effect is almost suppressed when the forcing wave number k_f is increased above an optimum wave number k_f L ≃ 4 for which Rm_c^turb is minimum. At this optimal wave number, Rm_c^turb is smaller by more than a factor of 10 than in the case forced at k_f = 1. This leads to a reduction of the energy injection rate by 3 orders of magnitude when compared to the case where the system is forced at the largest scales, and thus provides a new strategy for the design of a fully turbulent experimental dynamo.
Series Design of Large-Scale NC Machine Tool
Institute of Scientific and Technical Information of China (English)
TANG Zhi
2007-01-01
Product system design is a mature concept in western developed countries, applied in the defense industry over the last century. Up until now, however, functional combination has remained the main method of product system design in China, so in terms of product generations and product interaction we are in a weak position relative to the requirements of global markets. Today, the idea of serial product design has attracted much attention in the design field, and the definition of a product generation and its parameters has become the standard in serial product design. Although the design of a large-scale NC machine tool is complicated, it can be further optimized by disciplined object design, placing the concept of platform establishment firmly within serial product design. The essence of serial product design is demonstrated by the design process of a large-scale NC machine tool.
In the fast lane: large-scale bacterial genome engineering.
Fehér, Tamás; Burland, Valerie; Pósfai, György
2012-07-31
The last few years have witnessed rapid progress in bacterial genome engineering. The long-established, standard ways of DNA synthesis, modification, transfer into living cells, and incorporation into genomes have given way to more effective, large-scale, robust genome modification protocols. Expansion of these engineering capabilities is due to several factors. Key advances include: (i) progress in oligonucleotide synthesis and in vitro and in vivo assembly methods, (ii) optimization of recombineering techniques, (iii) introduction of parallel, large-scale, combinatorial, and automated genome modification procedures, and (iv) rapid identification of the modifications by barcode-based analysis and sequencing. Combination of the brute force of these techniques with sophisticated bioinformatic design and modeling opens up new avenues not only for the analysis of gene functions and cellular network interactions, but also for engineering more effective producer strains. This review presents a summary of recent technological advances in bacterial genome engineering.
Large-Scale Astrophysical Visualization on Smartphones
Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.
2011-07-01
Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.
Large-scale parametric survival analysis.
Mittal, Sushil; Madigan, David; Cheng, Jerry Q; Burd, Randall S
2013-10-15
Survival analysis has been a topic of active statistical research in the past few decades, with applications spread across several areas. Traditional applications usually consider data with only a small number of predictors and a few hundred or thousand observations. Recent advances in data acquisition techniques and computational power have led to considerable interest in analyzing very-high-dimensional data, where the number of predictor variables and the number of observations range between 10^4 and 10^6. In this paper, we present a tool for performing large-scale regularized parametric survival analysis using a variant of the cyclic coordinate descent method. Through our experiments on two real data sets, we show that application of regularized models to high-dimensional data avoids overfitting and can provide improved predictive performance and calibration over corresponding low-dimensional models.
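The cyclic coordinate descent idea can be illustrated on the simplest parametric case, an exponential survival model with a ridge penalty. This is a minimal sketch under assumed names and penalty choices, not the paper's actual tool:

```python
import numpy as np

def exp_survival_cd(X, t, delta, alpha=1.0, n_sweeps=200, tol=1e-8):
    """Cyclic coordinate descent for an L2-regularized exponential
    survival model with hazard_i = exp(x_i . beta).
    Penalized log-likelihood (delta_i = 1 if event observed):
        sum_i [delta_i * x_i.beta - t_i * exp(x_i.beta)] - (alpha/2)*||beta||^2
    Each coordinate gets a 1-D Newton step per sweep."""
    n, p = X.shape
    beta = np.zeros(p)
    eta = X @ beta                          # linear predictor
    for _ in range(n_sweeps):
        max_step = 0.0
        for j in range(p):
            lam = np.exp(eta)               # current hazards
            g = X[:, j] @ (delta - t * lam) - alpha * beta[j]
            h = -(X[:, j] ** 2) @ (t * lam) - alpha   # strictly negative
            step = -g / h
            beta[j] += step
            eta += X[:, j] * step           # keep predictor in sync
            max_step = max(max_step, abs(step))
        if max_step < tol:
            break
    return beta
```

Updating the linear predictor incrementally after each coordinate step is what makes the sweep cost linear in the data size, which is the property that lets this style of solver scale to high dimensions.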
The Large Scale Structure: Polarization Aspects
Indian Academy of Sciences (India)
R. F. Pizzo
2011-12-01
Polarized radio emission is detected at various scales in the Universe. In this document, I will briefly review our knowledge of polarized radio sources in galaxy clusters and at their outskirts, emphasizing the crucial information provided by the polarized signal on the origin and evolution of such sources. Subsequently, I will focus on Abell 2255, which is known in the literature as the first cluster for which filamentary polarized emission associated with the radio halo has been detected. By using RM synthesis on our multi-wavelength WSRT observations, we studied the 3-dimensional geometry of the cluster, unveiling the nature of the polarized filaments at the borders of the central radio halo. Our analysis shows that these structures are relics lying at large distance from the cluster center.
Curvature constraints from Large Scale Structure
Di Dio, Enea; Raccanelli, Alvise; Durrer, Ruth; Kamionkowski, Marc; Lesgourgues, Julien
2016-01-01
We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter $\\Omega_K$ with future galaxy surveys are affected by relativistic effects, which influence observations of the large scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms and terms involving the gravitational potential. As an application, we consider angle and redshift dependent power spectra, which are especially well suited for model independent cosmological constraints. We compute our results for a representative deep, wide and spectroscopic survey, and our results show the impact of relativistic corrections on the spatial curvature parameter estimation. We show that constraints on the curvature para...
Large-Scale Tides in General Relativity
Ip, Hiu Yan
2016-01-01
Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lema\\^itre-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the "separate universe" paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation ...
Large scale water lens for solar concentration.
Mondol, A S; Vogel, B; Bastian, G
2015-06-01
Properties of large scale water lenses for solar concentration were investigated. These lenses were built from readily available materials, normal tap water and hyper-elastic linear low density polyethylene foil. Exposed to sunlight, the focal lengths and light intensities in the focal spot were measured and calculated. Their optical properties were modeled with a raytracing software based on the lens shape. We have achieved a good match of experimental and theoretical data by considering wavelength dependent concentration factor, absorption and focal length. The change in light concentration as a function of water volume was examined via the resulting load on the foil and the corresponding change of shape. The latter was extracted from images and modeled by a finite element simulation.
Constructing sites on a large scale
DEFF Research Database (Denmark)
Braae, Ellen Marie; Tietjen, Anne
2011-01-01
for setting the design brief in a large scale urban landscape in Norway, the Jaeren region around the city of Stavanger. In this paper, we first outline the methodological challenges and then present and discuss the proposed method based on our teaching experiences. On this basis, we discuss aspects...... within the development of our urban landscapes. At the same time, urban and landscape designers are confronted with new methodological problems. Within a strategic transformation perspective, the formulation of the design problem or brief becomes an integrated part of the design process. This paper...... discusses new design (education) methods based on a relational concept of urban sites and design processes. Within this logic site survey is not simply a pre-design activity nor is it a question of comprehensive analysis. Site survey is an integrated part of the design process. By means of active site...
Local and Regional Impacts of Large Scale Wind Energy Deployment
Michalakes, J.; Hammond, S.; Lundquist, J. K.; Moriarty, P.; Robinson, M.
2010-12-01
resources and upscaling large scale wind farm impact on local and regional climate. It will bridge localized and larger scale interactions of renewable energy generation with energy resource and grid management system control. By 2030, when 20 percent wind energy penetration is planned and exascale computing resources have become commonplace, we envision such a system spanning the entire mesoscale to sub-millimeter range of scales to provide a real-time computational and systems control capability to optimize renewable based generating and grid distribution for efficiency and with minimizing environmental impact.
Large scale probabilistic available bandwidth estimation
Thouin, Frederic; Rabbat, Michael
2010-01-01
The common utilization-based definition of available bandwidth and many of the existing tools to estimate it suffer from several important weaknesses: i) most tools report a point estimate of average available bandwidth over a measurement interval and do not provide a confidence interval; ii) the commonly adopted models used to relate the available bandwidth metric to the measured data are invalid in almost all practical scenarios; iii) existing tools do not scale well and are not suited to the task of multi-path estimation in large-scale networks; iv) almost all tools use ad-hoc techniques to address measurement noise; and v) tools do not provide enough flexibility in terms of accuracy, overhead, latency and reliability to adapt to the requirements of various applications. In this paper we propose a new definition for available bandwidth and a novel framework that addresses these issues. We define probabilistic available bandwidth (PAB) as the largest input rate at which we can send a traffic flow along a pa...
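The probabilistic definition can be illustrated with a toy simulation: treat each probe as a Bernoulli observation of whether a candidate rate is sustainable, and binary-search for the largest rate sustainable with probability at least p. The probe model, distribution, and names below are illustrative assumptions, not the paper's estimator:

```python
import random

def probe(rate, capacity_dist, n_probes=200, seed=0):
    """Toy probe: each probe draws an instantaneous available bandwidth
    from capacity_dist (a callable taking a Random instance) and reports
    the fraction of probes for which `rate` could be sustained. A fixed
    seed gives common random numbers across rates, so the fraction is
    monotone in `rate`."""
    rnd = random.Random(seed)
    ok = sum(1 for _ in range(n_probes) if rate <= capacity_dist(rnd))
    return ok / n_probes

def probabilistic_avail_bw(capacity_dist, p=0.9, lo=0.0, hi=100.0, iters=30):
    """Binary-search the largest rate sustainable with probability >= p,
    i.e. the probabilistic available bandwidth (PAB) of this toy model."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if probe(mid, capacity_dist) >= p:
            lo = mid        # rate still sustainable often enough: go up
        else:
            hi = mid        # too aggressive: back off
    return lo
```

For capacity drawn uniformly from [40, 60] Mbps, the PAB at p = 0.9 comes out near the 10th percentile, about 42 Mbps, while a point estimate of the mean would report 50.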
A visualization framework for large-scale virtual astronomy
Fu, Chi-Wing
Motivated by advances in modern positional astronomy, this research attempts to digitally model the entire Universe through computer graphics technology. Our first challenge is space itself. The gigantic size of the Universe makes it impossible to put everything into a typical graphics system at its own scale, and the graphics rendering process can easily fail because of limited computational precision. The second challenge is that the enormous amount of data could slow down the graphics; we need clever techniques to speed up the rendering. Third, since the Universe is dominated by empty space, objects are widely separated; this makes navigation difficult. We attempt to tackle these problems through various techniques designed to extend and optimize the conventional graphics framework, including the following: power homogeneous coordinates for large-scale spatial representations, generalized large-scale spatial transformations, and rendering acceleration via environment caching and object disappearance criteria. Moreover, we implemented an assortment of techniques for modeling and rendering a variety of astronomical bodies, ranging from the Earth up to faraway galaxies, and attempted to visualize cosmological time; a method we call the Lightcone representation was introduced to visualize the whole space-time of the Universe at a single glance. In addition, several navigation models were developed to handle the large-scale navigation problem. Our final results include a collection of visualization tools, two educational animations appropriate for planetarium audiences, and rendering techniques that advance the state of the art and can be transferred to practice in digital planetarium systems.
Institute of Scientific and Technical Information of China (English)
田建伟; 胡兆光; 吴俊勇; 周景宏
2011-01-01
In order to deal effectively with the energy crisis and climate change, power systems should integrate as much renewable energy as possible. The uncertainties of wind power and hydro power are investigated, and a long-distance, large-scale grid-connected wind-hydro complementation system is developed based on the complementary characteristics of wind and water energy resources. Under given wind and water energy resource conditions, an optimal wind-hydro complementation dispatching model is developed, and the chaos differential evolution algorithm is used to solve it. Simulation on a modified IEEE 30-bus system demonstrates that the long-distance wind-hydro complementation system can mitigate the output fluctuation of wind power and greatly reduce the fuel expense of thermal units, with significant energy-saving and emission-reduction benefits.
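The chaos differential evolution idea can be sketched as a logistic-map initialization feeding a standard DE loop. The objective, bounds, and parameter choices below are illustrative, not the paper's dispatch model:

```python
import random

def chaotic_init(n_pop, dim, lo, hi, x0=0.7):
    """Initialize a DE population with the logistic map x -> 4x(1-x),
    the 'chaos' ingredient: iterates cover (0, 1) ergodically, giving a
    well-spread starting population."""
    pop, x = [], x0
    for _ in range(n_pop):
        vec = []
        for _ in range(dim):
            x = 4.0 * x * (1.0 - x)
            vec.append(lo + (hi - lo) * x)
        pop.append(vec)
    return pop

def differential_evolution(f, dim, lo, hi, n_pop=30, F=0.6, CR=0.9,
                           iters=300, seed=1):
    """Minimize f over [lo, hi]^dim with DE/rand/1/bin mutation and
    crossover, starting from a chaotically initialized population."""
    rnd = random.Random(seed)
    pop = chaotic_init(n_pop, dim, lo, hi)
    cost = [f(p) for p in pop]
    for _ in range(iters):
        for i in range(n_pop):
            a, b, c = rnd.sample([j for j in range(n_pop) if j != i], 3)
            trial = [pop[i][d] if rnd.random() > CR
                     else min(hi, max(lo, pop[a][d] + F * (pop[b][d] - pop[c][d])))
                     for d in range(dim)]
            ct = f(trial)
            if ct < cost[i]:            # greedy selection
                pop[i], cost[i] = trial, ct
    best = min(range(n_pop), key=cost.__getitem__)
    return pop[best], cost[best]
```

In the dispatch application the decision vector would hold hourly wind and hydro outputs and f the fuel cost plus penalty terms; here a simple sphere function stands in for it.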
Using Large Scale Structure to test Multifield Inflation
Ferraro, Simone
2014-01-01
Primordial non-Gaussianity of local type is known to produce a scale-dependent contribution to the galaxy bias. Several classes of multi-field inflationary models predict non-Gaussian bias which is stochastic, in the sense that dark matter and halos don't trace each other perfectly on large scales. In this work, we forecast the ability of next-generation Large Scale Structure surveys to constrain common types of primordial non-Gaussianity like $f_{NL}$, $g_{NL}$ and $\\tau_{NL}$ using halo bias, including stochastic contributions. We provide fitting functions for statistical errors on these parameters which can be used for rapid forecasting or survey optimization. A next-generation survey with volume $V = 25 h^{-3}$Mpc$^3$, median redshift $z = 0.7$ and mean bias $b_g = 2.5$, can achieve $\\sigma(f_{NL}) = 6$, $\\sigma(g_{NL}) = 10^5$ and $\\sigma(\\tau_{NL}) = 10^3$ if no mass information is available. If halo masses are available, we show that optimally weighting the halo field in order to reduce sample variance...
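The kind of Fisher forecast described can be sketched for a single tracer with local-type scale-dependent bias. The fiducial values, the amplitude A, and the mode-counting variance below are illustrative assumptions, not the paper's survey configuration:

```python
import numpy as np

def fisher_forecast(k, P_m, n_modes, b=2.5, A=1e-4):
    """Toy Fisher forecast for (b, f_NL) from a galaxy power spectrum
    with scale-dependent bias,
        P_g(k) = (b + f_NL * A / k^2)^2 * P_m(k),
    evaluated at the fiducial f_NL = 0. Gaussian mode counting sets
    the band-power error: sigma_k = P_g(k) * sqrt(2 / n_modes)."""
    P_g = b ** 2 * P_m
    sigma = P_g * np.sqrt(2.0 / n_modes)
    # partial derivatives of P_g at the fiducial point
    dP_db = 2.0 * b * P_m
    dP_dfnl = 2.0 * b * (A / k ** 2) * P_m
    D = np.vstack([dP_db, dP_dfnl]) / sigma
    F = D @ D.T                        # 2x2 Fisher matrix
    cov = np.linalg.inv(F)             # marginalized covariance
    return np.sqrt(np.diag(cov))       # sigma(b), sigma(f_NL)
```

Because the A/k^2 term only matters on the largest scales, dropping the smallest-k band powers degrades sigma(f_NL) much more than sigma(b), which is why survey volume drives these constraints.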
CLASS: The Cosmology Large Angular Scale Surveyor
Essinger-Hileman, Thomas; Amiri, Mandana; Appel, John W; Araujo, Derek; Bennett, Charles L; Boone, Fletcher; Chan, Manwei; Cho, Hsiao-Mei; Chuss, David T; Colazo, Felipe; Crowe, Erik; Denis, Kevin; Dünner, Rolando; Eimer, Joseph; Gothe, Dominik; Halpern, Mark; Harrington, Kathleen; Hilton, Gene; Hinshaw, Gary F; Huang, Caroline; Irwin, Kent; Jones, Glenn; Karakla, John; Kogut, Alan J; Larson, David; Limon, Michele; Lowry, Lindsay; Marriage, Tobias; Mehrle, Nicholas; Miller, Amber D; Miller, Nathan; Moseley, Samuel H; Novak, Giles; Reintsema, Carl; Rostem, Karwan; Stevenson, Thomas; Towner, Deborah; U-Yen, Kongpop; Wagner, Emily; Watts, Duncan; Wollack, Edward; Xu, Zhilei; Zeng, Lingzhen
2014-01-01
The Cosmology Large Angular Scale Surveyor (CLASS) is an experiment to measure the signature of a gravitational-wave background from inflation in the polarization of the cosmic microwave background (CMB). CLASS is a multi-frequency array of four telescopes operating from a high-altitude site in the Atacama Desert in Chile. CLASS will survey 70\\% of the sky in four frequency bands centered at 38, 93, 148, and 217 GHz, which are chosen to straddle the Galactic-foreground minimum while avoiding strong atmospheric emission lines. This broad frequency coverage ensures that CLASS can distinguish Galactic emission from the CMB. The sky fraction of the CLASS survey will allow the full shape of the primordial B-mode power spectrum to be characterized, including the signal from reionization at low $\\ell$. Its unique combination of large sky coverage, control of systematic errors, and high sensitivity will allow CLASS to measure or place upper limits on the tensor-to-scalar ratio at a level of $r=0.01$ and make a cosmi...
CLASS: The Cosmology Large Angular Scale Surveyor
Essinger-Hileman, Thomas; Ali, Aamir; Amiri, Mandana; Appel, John W.; Araujo, Derek; Bennett, Charles L.; Boone, Fletcher; Chan, Manwei; Cho, Hsiao-Mei; Chuss, David T.; Colazo, Felipe; Crowe, Erik; Denis, Kevin; Dunner, Rolando; Eimer, Joseph; Gothe, Dominik; Halpern, Mark; Kogut, Alan J.; Miller, Nathan; Moseley, Samuel; Rostem, Karwan; Stevenson, Thomas; Towner, Deborah; U-Yen, Kongpop; Wollack, Edward
2014-01-01
The Cosmology Large Angular Scale Surveyor (CLASS) is an experiment to measure the signature of a gravitational wave background from inflation in the polarization of the cosmic microwave background (CMB). CLASS is a multi-frequency array of four telescopes operating from a high-altitude site in the Atacama Desert in Chile. CLASS will survey 70% of the sky in four frequency bands centered at 38, 93, 148, and 217 GHz, which are chosen to straddle the Galactic-foreground minimum while avoiding strong atmospheric emission lines. This broad frequency coverage ensures that CLASS can distinguish Galactic emission from the CMB. The sky fraction of the CLASS survey will allow the full shape of the primordial B-mode power spectrum to be characterized, including the signal from reionization at low $\\ell$. Its unique combination of large sky coverage, control of systematic errors, and high sensitivity will allow CLASS to measure or place upper limits on the tensor-to-scalar ratio at a level of r = 0.01 and make a cosmic-variance-limited measurement of the optical depth to the surface of last scattering, tau. (c) 2014 Society of Photo-Optical Instrumentation Engineers (SPIE).
Large-scale wind turbine structures
Spera, David A.
1988-01-01
The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and design of innovative structures. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.
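The fracture-mechanics fatigue calculation mentioned above can be sketched as a Paris-law crack-growth integration. The material constants C and m and the geometry factor Y below are hypothetical illustration values, not Mod-5B data:

```python
import math

def cycles_to_failure(delta_sigma, a0, ac, C=1e-12, m=3.0, Y=1.0,
                      steps=10000):
    """Numerically integrate the Paris crack-growth law
        da/dN = C * (dK)^m,   dK = Y * delta_sigma * sqrt(pi * a),
    to estimate the number of stress cycles for a crack to grow from
    initial size a0 to critical size ac (sizes in meters, stress range
    delta_sigma in Pa). Simple fixed-step rectangle rule."""
    N, a = 0.0, a0
    da = (ac - a0) / steps
    for _ in range(steps):
        dK = Y * delta_sigma * math.sqrt(math.pi * a)
        N += da / (C * dK ** m)     # dN = da / (da/dN)
        a += da
    return N
```

Since delta_sigma factors out of the integral, the allowable cycle count scales as delta_sigma^(-m); for m = 3, doubling the stress range cuts fatigue life by a factor of eight, which is why the fatigue-life requirements dominate the design.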
Large-scale screens of metagenomic libraries.
Pham, Vinh D; Palden, Tsultrim; DeLong, Edward F
2007-01-01
Metagenomic libraries archive large fragments of contiguous genomic sequences from microorganisms without requiring prior cultivation. Generating a streamlined procedure for creating and screening metagenomic libraries is therefore useful for efficient high-throughput investigations into the genetic and metabolic properties of uncultured microbial assemblages. Here, key protocols are presented on video, which we propose is the most useful format for accurately describing a long process that alternately depends on robotic instrumentation and (human) manual interventions. First, we employed robotics to spot library clones onto high-density macroarray membranes, each of which can contain duplicate colonies from twenty-four 384-well library plates. Automation is essential for this procedure not only for accuracy and speed, but also due to the miniaturization of scale required to fit the large number of library clones into highly dense spatial arrangements. Once generated, we next demonstrated how the macroarray membranes can be screened for genes of interest using modified versions of standard protocols for probe labeling, membrane hybridization, and signal detection. We complemented the visual demonstration of these procedures with detailed written descriptions of the steps involved and the materials required, all of which are available online alongside the video.
Large-Scale Spacecraft Fire Safety Tests
Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; Toth, Balazs; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Jomaas, Grunde
2014-01-01
An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests.
GPU-based large-scale visualization
Hadwiger, Markus
2013-11-19
Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous
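The working-set idea behind the out-of-core and virtual-texturing techniques described above can be sketched as a tiny LRU tile cache keyed by (level, x, y). This is a schematic stand-in with hypothetical names, not the data structure of any real renderer:

```python
from collections import OrderedDict

class TileCache:
    """Minimal LRU cache of multi-resolution image tiles. Only tiles
    actually requested by the renderer are resident, which keeps GPU
    memory proportional to the visible working set rather than the
    full data size."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._tiles = OrderedDict()

    def get(self, key, loader):
        if key in self._tiles:
            self._tiles.move_to_end(key)    # mark as recently used
            return self._tiles[key]
        tile = loader(key)                  # out-of-core fetch on miss
        self._tiles[key] = tile
        if len(self._tiles) > self.capacity:
            self._tiles.popitem(last=False) # evict least recently used
        return tile
```

In a real system the loader would stream tile data asynchronously from disk or a server and upload it into a GPU page table; here it is just a callable, so the eviction policy can be tested in isolation.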
Dynamic Reactive Power Compensation of Large Scale Wind Integrated Power System
DEFF Research Database (Denmark)
Rather, Zakir Hussain; Chen, Zhe; Thøgersen, Paul
2015-01-01
Due to progressive displacement of conventional power plants by wind turbines, dynamic security of large scale wind integrated power systems gets significantly compromised. In this paper we first highlight the importance of dynamic reactive power support/voltage security in large scale wind...... integrated power systems with least presence of conventional power plants. Then we propose a mixed integer dynamic optimization based method for optimal dynamic reactive power allocation in large scale wind integrated power systems. One of the important aspects of the proposed methodology is that unlike...... static optimal power flow based approaches, the proposed method considers detailed system dynamics and wind turbine grid code fulfilment while optimizing the allocation of dynamic reactive power sources. We also argue that in large scale wind integrated power systems, i) better utilization of existing...
Order reduction of large-scale linear oscillatory system models
Energy Technology Data Exchange (ETDEWEB)
Trudnowski, D.J. (Pacific Northwest Lab., Richland, WA (United States))
1994-02-01
Eigen analysis and signal analysis techniques of deriving representations of power system oscillatory dynamics result in very high-order linear models. In order to apply many modern control design methods, the models must be reduced to a more manageable order while preserving essential characteristics. Presented in this paper is a model reduction method well suited for large-scale power systems. The method searches for the optimal subset of the high-order model that best represents the system. An Akaike information criterion is used to define the optimal reduced model. The method is first presented, and then examples of applying it to Prony analysis and eigenanalysis models of power systems are given.
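The use of an Akaike information criterion to pick a reduced model can be illustrated on a toy autoregressive signal. This is a schematic of the criterion only, under assumed names, not the paper's Prony/eigenanalysis procedure:

```python
import numpy as np

def ar_fit(x, order):
    """Least-squares fit of an AR(order) model to signal x; returns the
    coefficients and the residual variance."""
    n = len(x)
    # column i holds lag i+1: x[t-1], x[t-2], ... for t = order..n-1
    X = np.column_stack([x[order - i - 1: n - i - 1] for i in range(order)])
    y = x[order:]
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coefs
    return coefs, resid.var()

def select_order_aic(x, max_order=12):
    """Pick the AR order minimizing AIC = n * ln(sigma^2) + 2 * k,
    trading goodness of fit against model complexity."""
    n = len(x)
    best = None
    for k in range(1, max_order + 1):
        _, s2 = ar_fit(x, k)
        aic = n * np.log(s2) + 2 * k
        if best is None or aic < best[1]:
            best = (k, aic)
    return best[0]
```

The same trade-off, applied to subsets of a high-order power-system model rather than AR orders, is what defines the optimal reduced model in the paper.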
Bonus algorithm for large scale stochastic nonlinear programming problems
Diwekar, Urmila
2015-01-01
This book presents the details of the BONUS algorithm and its real world applications in areas like sensor placement in large scale drinking water networks, sensor placement in advanced power systems, water management in power systems, and capacity expansion of energy systems. A generalized method for stochastic nonlinear programming based on a sampling based approach for uncertainty analysis and statistical reweighting to obtain probability information is demonstrated in this book. Stochastic optimization problems are difficult to solve since they involve dealing with optimization and uncertainty loops. There are two fundamental approaches to solving such problems: the first uses decomposition techniques, and the second identifies problem-specific structures and transforms the problem into a deterministic nonlinear programming problem. These techniques have significant limitations on either the objective function type or the underlying distributions for the uncertain variables. Moreover, these ...
Striping and Scheduling for Large Scale Multimedia Servers
Institute of Scientific and Technical Information of China (English)
Kyung-Oh Lee; Jun-Ho Park; Yoon-Young Park
2004-01-01
When designing a multimedia server, several things must be decided: which scheduling scheme to adopt, how to allocate multimedia objects on storage devices, and the round length with which the streams will be serviced. Several problems in the designing of large-scale multimedia servers are addressed, with the following contributions: (1) a striping scheme is proposed that minimizes the number of seeks and hence maximizes the performance; (2) a simple and efficient mechanism is presented to find the optimal striping unit size as well as the optimal round length, which exploits both the characteristics of VBR streams and the situation of resources in the system; and (3) the characteristics and resource requirements of several scheduling schemes are investigated in order to obtain a clear indication as to which scheme shows the best performance in real-time multimedia service. Based on our analysis and experimental results, the CSCAN scheme outperforms the other schemes.
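The CSCAN round that the comparison favors can be sketched in a few lines. This shows only the circular-SCAN service order; real servers additionally batch requests by round length:

```python
def cscan_order(head, requests):
    """Circular SCAN: service requests at or ahead of the head position
    in ascending order, then wrap around to the lowest-positioned
    requests. Sweeping in one direction only bounds the seek distance
    per round, which is why CSCAN suits periodic multimedia rounds."""
    ahead = sorted(r for r in requests if r >= head)
    behind = sorted(r for r in requests if r < head)
    return ahead + behind
```

For example, with the head at track 50 and pending requests at 10, 70, 30, 90, and 55, the round services 55, 70, 90, then wraps to 10 and 30.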
Institute of Scientific and Technical Information of China (English)
任铭
2015-01-01
Traditional process-control and job-scheduling methods use task scheduling based on multi-threaded cluster clustering, which yields poor scheduling performance for multi-user, multi-task large-scale automated process control. An optimized scheduling model for large-scale automated process-control flows is therefore proposed, based on cluster extraction of dominant principal-feature sets. A large-scale automated process-control model is constructed, an optimal-control objective function is formulated, and the optimized scheduling model is refined accordingly; performance is then verified through simulation experiments. The simulation results show that the algorithm optimizes the automated process-control flow and has significant application value in improving production efficiency and industrial process-control automation.
Costing Generated Runtime Execution Plans for Large-Scale Machine Learning Programs
Boehm, Matthias
2015-01-01
Declarative large-scale machine learning (ML) aims at the specification of ML algorithms in a high-level language and automatic generation of hybrid runtime execution plans ranging from single node, in-memory computations to distributed computations on MapReduce (MR) or similar frameworks like Spark. The compilation of large-scale ML programs exhibits many opportunities for automatic optimization. Advanced cost-based optimization techniques require---as a fundamental precondition---an accurat...
Reliability assessment for components of large scale photovoltaic systems
Ahadi, Amir; Ghadimi, Noradin; Mirabbasi, Davar
2014-10-01
Photovoltaic (PV) systems have significantly shifted from independent power generation systems to a large-scale grid-connected generation systems in recent years. The power output of PV systems is affected by the reliability of various components in the system. This study proposes an analytical approach to evaluate the reliability of large-scale, grid-connected PV systems. The fault tree method with an exponential probability distribution function is used to analyze the components of large-scale PV systems. The system is considered in the various sequential and parallel fault combinations in order to find all realistic ways in which the top or undesired events can occur. Additionally, it can identify areas that the planned maintenance should focus on. By monitoring the critical components of a PV system, it is possible not only to improve the reliability of the system, but also to optimize the maintenance costs. The latter is achieved by informing the operators about the system component's status. This approach can be used to ensure secure operation of the system by its flexibility in monitoring system applications. The implementation demonstrates that the proposed method is effective and efficient and can conveniently incorporate more system maintenance plans and diagnostic strategies.
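The fault-tree evaluation with exponential component lifetimes can be sketched as follows. The tree shape and the failure rates are hypothetical illustration values, not the paper's PV-system data:

```python
import math

def component(lam):
    """Exponential component: probability of failure by time t,
    P(t) = 1 - exp(-lam * t), for constant failure rate lam."""
    return lambda t: 1.0 - math.exp(-lam * t)

def AND(*children):
    """AND gate: the event occurs only if all children fail
    (independent components assumed)."""
    return lambda t: math.prod(c(t) for c in children)

def OR(*children):
    """OR gate: the event occurs if any child fails."""
    return lambda t: 1.0 - math.prod(1.0 - c(t) for c in children)

# Hypothetical PV subsystem (rates per hour): the top event occurs if
# the inverter fails OR both redundant PV strings fail.
tree = OR(component(1e-5),                        # inverter
          AND(component(5e-6), component(5e-6)))  # two PV strings
```

Evaluating the top-event probability over time, and re-evaluating it with individual components forced to fail, is how such a model flags the critical components that planned maintenance should focus on.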
Large-scale autostereoscopic outdoor display
Reitterer, Jörg; Fidler, Franz; Saint Julien-Wallsee, Ferdinand; Schmid, Gerhard; Gartner, Wolfgang; Leeb, Walter; Schmid, Ulrich
2013-03-01
State-of-the-art autostereoscopic displays are often limited in size, effective brightness, number of 3D viewing zones, and maximum 3D viewing distances, all of which are mandatory requirements for large-scale outdoor displays. Conventional autostereoscopic indoor concepts like lenticular lenses or parallax barriers cannot simply be adapted for these screens due to the inherent loss of effective resolution and brightness, which would reduce both image quality and sunlight readability. We have developed a modular autostereoscopic multi-view laser display concept with sunlight readable effective brightness, theoretically up to several thousand 3D viewing zones, and maximum 3D viewing distances of up to 60 meters. For proof-of-concept purposes a prototype display with two pixels was realized. Due to various manufacturing tolerances each individual pixel has slightly different optical properties, and hence the 3D image quality of the display has to be calculated stochastically. In this paper we present the corresponding stochastic model, we evaluate the simulation and measurement results of the prototype display, and we calculate the achievable autostereoscopic image quality to be expected for our concept.
Management of large-scale multimedia conferencing
Cidon, Israel; Nachum, Youval
1998-12-01
The goal of this work is to explore management strategies and algorithms for large-scale multimedia conferencing over a communication network. Since the use of multimedia conferencing is still limited, the management of such systems has not yet been studied in depth. Well-organized and human-friendly multimedia conference management should use its limited resources efficiently and fairly, and take into account the requirements of the conference participants. The ability of the management to enforce fair policies and to quickly take into account the participants' preferences may even lead to a conference environment that is more pleasant and more effective than a similar face-to-face meeting. We suggest several principles for defining and solving resource sharing problems in this context. The conference resources addressed in this paper are bandwidth (conference network capacity), time (participants' scheduling) and the limitations of audio and visual equipment. The participants' requirements for these resources are defined and translated into Quality of Service requirements and fairness criteria.
Large-scale tides in general relativity
Ip, Hiu Yan; Schmidt, Fabian
2017-02-01
Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the "separate universe" paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation of Hui and Bertschinger [1]. We also show that this very simple set of equations matches the exact evolution of the density field at second order, but fails at third and higher order. This provides a useful, easy-to-use framework for computing the fully relativistic growth of structure at second order.
Food appropriation through large scale land acquisitions
Rulli, Maria Cristina; D'Odorico, Paolo
2014-05-01
The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show that up to 300-550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security to the local populations.
Large-scale clustering of cosmic voids
Chan, Kwan Chuen; Hamaus, Nico; Desjacques, Vincent
2014-11-01
We study the clustering of voids using N-body simulations and simple theoretical models. The excursion-set formalism describes fairly well the abundance of voids identified with the watershed algorithm, although the void formation threshold required is quite different from the spherical collapse value. The void cross-bias b_c is measured and its large-scale value is found to be consistent with the peak-background split results. A simple fitting formula for b_c is found. We model the void auto-power spectrum taking into account the void biasing and exclusion effect. A good fit to the simulation data is obtained for voids with radii ≳ 30 Mpc h^{-1}, especially when the void biasing model is extended to 1-loop order. However, the best-fit bias parameters do not agree well with the peak-background results. Being able to fit the void auto-power spectrum is particularly important not only because it is the direct observable in galaxy surveys, but also because our method enables us to treat the bias parameters as nuisance parameters, which are sensitive to the techniques used to identify voids.
Large scale digital atlases in neuroscience
Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.
2014-03-01
Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining this data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and in addition to atlases of the human includes high quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project of a genome wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.
Large-scale assembly of colloidal particles
Yang, Hongta
This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes the invention of large-area, low-cost color reflective displays. This invention is inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. Capillary condensation of a condensable vapor in the interconnected macropores leads to the
Topology Optimized Architectures with Programmable Poisson's Ratio over Large Deformations
DEFF Research Database (Denmark)
Clausen, Anders; Wang, Fengwen; Jensen, Jakob Søndergaard
2015-01-01
Topology optimized architectures are designed and printed with programmable Poisson's ratios ranging from -0.8 to 0.8 over large deformations of 20% or more.
Developing Large-Scale Bayesian Networks by Composition
National Aeronautics and Space Administration — In this paper, we investigate the use of Bayesian networks to construct large-scale diagnostic systems. In particular, we consider the development of large-scale...
Distributed large-scale dimensional metrology new insights
Franceschini, Fiorenzo; Maisano, Domenico
2011-01-01
Focuses on the latest insights into and challenges of distributed large-scale dimensional metrology. Enables practitioners to study distributed large-scale dimensional metrology independently. Includes specific examples of the development of new system prototypes.
Institute of Scientific and Technical Information of China (English)
娄素华; 胡斌; 吴耀武; 卢斯煜
2014-01-01
Photovoltaic generation does not consume fossil fuels and is free from carbon emissions. Developing photovoltaic generation is an effective way of building a low-carbon electricity system. However, with its considerable randomness and intermittence, it brings new uncertainty to power system dispatch. This paper introduces the carbon trading mechanism to power system dispatch based on the concept of the low-carbon economy. A probability distribution model of solar irradiance and photovoltaic generation output is developed, and scenario reduction technology based on the Kantorovich distance is used to reduce the photovoltaic generation output scenarios. On this basis, a model for optimal power system dispatch integrating large-scale photovoltaic generation and carbon trading cost is proposed. It has the advantage of balancing the economy, low carbon emission, and reliability of power system operation. A ten-unit test system is simulated via the proposed model to prove its rationality and effectiveness.
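The Kantorovich-distance scenario-reduction step described above can be sketched as a backward-reduction heuristic: repeatedly delete the scenario whose removal costs least (probability times distance to its nearest neighbour) and move its probability to that neighbour. The profiles and probabilities below are simulated illustrations, not the paper's data or exact algorithm:

```python
import numpy as np

def reduce_scenarios(scenarios, probs, n_keep):
    """Backward scenario reduction (a simplified Kantorovich-distance heuristic).

    scenarios: (n, d) array of PV output scenarios; probs: (n,) probabilities.
    """
    scenarios = np.asarray(scenarios, float)
    probs = np.asarray(probs, float).copy()
    keep = list(range(len(probs)))
    while len(keep) > n_keep:
        pts = scenarios[keep]
        # Pairwise distances among the kept scenarios.
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        np.fill_diagonal(d, np.inf)
        nearest = d.argmin(axis=1)
        cost = probs[keep] * d[np.arange(len(keep)), nearest]
        i = int(cost.argmin())   # cheapest scenario to delete
        j = int(nearest[i])      # its nearest neighbour absorbs the probability
        probs[keep[j]] += probs[keep[i]]
        del keep[i]
    return scenarios[keep], probs[keep]

rng = np.random.default_rng(0)
scen = rng.uniform(0, 1, size=(20, 24))   # 20 hypothetical daily PV output profiles
p = np.full(20, 1.0 / 20)
scen_red, p_red = reduce_scenarios(scen, p, 5)
```

The reduced set keeps total probability 1 while approximating the original distribution, which is what makes stochastic dispatch over the reduced scenarios tractable.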
Large Scale, High Resolution, Mantle Dynamics Modeling
Geenen, T.; Berg, A. V.; Spakman, W.
2007-12-01
To model the geodynamic evolution of plate convergence, subduction and collision and to allow for a connection to various types of observational data, geophysical, geodetical and geological, we developed a 4D (space-time) numerical mantle convection code. The model is based on a spherical 3D Eulerian FEM model, with quadratic elements, on top of which we constructed a 3D Lagrangian particle-in-cell (PIC) method. We use the PIC method to transport material properties and to incorporate a viscoelastic rheology. Since capturing small-scale processes associated with localization phenomena requires high resolution, we spent considerable effort implementing solvers suitable for models with over 100 million degrees of freedom. We implemented additive Schwarz-type ILU-based methods in combination with a Krylov solver, GMRES. However, we found that for problems with over 500 thousand degrees of freedom the convergence of the solver degraded severely. This observation is known from the literature [Saad, 2003] and results from the local character of the ILU preconditioner, resulting in a poor approximation of the inverse of A for large A. The size of A for which ILU is no longer usable depends on the condition of A and on the amount of fill-in allowed for the ILU preconditioner. We found that for our problems with over 5×10^5 degrees of freedom convergence became too slow to solve the system within an acceptable amount of walltime, one minute, even when allowing for a considerable amount of fill-in. We also implemented MUMPS and found good scaling results for problems up to 10^7 degrees of freedom for up to 32 CPUs. For problems with over 100 million degrees of freedom we implemented algebraic multigrid (AMG) type methods from the ML library [Sala, 2006]. Since multigrid methods are most effective for single-parameter problems, we rebuilt our model to use the SIMPLE method in the Stokes solver [Patankar, 1980]. We present scaling results from these solvers for 3D
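The ILU-preconditioned GMRES setup the authors describe can be reproduced in miniature with SciPy; the Poisson-like test matrix and the fill factor below are assumptions standing in for the actual FEM system:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small sparse test system: a 2-D Poisson-like operator (a stand-in for the FEM matrix A).
n = 20  # grid size per dimension (tiny compared to the paper's 1e8 degrees of freedom)
I = sp.identity(n)
T = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(sp.diags([-1, -1], [-1, 1], shape=(n, n)), I)).tocsc()
b = np.ones(A.shape[0])

# Incomplete-LU preconditioner; fill_factor controls how much fill-in is allowed,
# the same knob the abstract mentions as limiting ILU's usefulness for large A.
ilu = spla.spilu(A, fill_factor=10)
M = spla.LinearOperator(A.shape, ilu.solve)

x, info = spla.gmres(A, b, M=M)
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

For this small, well-conditioned system ILU+GMRES converges quickly; the abstract's point is that the same recipe degrades once the matrix grows, because the local ILU factors approximate A^{-1} poorly at scale.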
Large Scale Flame Spread Environmental Characterization Testing
Clayman, Lauren K.; Olson, Sandra L.; Gokoghi, Suleyman A.; Brooker, John E.; Ferkul, Paul V.; Kacher, Henry F.
2013-01-01
Under the Advanced Exploration Systems (AES) Spacecraft Fire Safety Demonstration Project (SFSDP), as a risk mitigation activity in support of the development of a large-scale fire demonstration experiment in microgravity, flame-spread tests were conducted in normal gravity on thin, cellulose-based fuels in a sealed chamber. The primary objective of the tests was to measure pressure rise in a chamber as sample material, burning direction (upward/downward), total heat release, heat release rate, and heat loss mechanisms were varied between tests. A Design of Experiments (DOE) method was imposed to produce an array of tests from a fixed set of constraints and a coupled response model was developed. Supplementary tests were run without experimental design to additionally vary select parameters such as initial chamber pressure. The starting chamber pressure for each test was set below atmospheric to prevent chamber overpressure. Bottom ignition, or upward propagating burns, produced rapid acceleratory turbulent flame spread. Pressure rise in the chamber increases as the amount of fuel burned increases mainly because of the larger amount of heat generation and, to a much smaller extent, due to the increase in gaseous number of moles. Top ignition, or downward propagating burns, produced a steady flame spread with a very small flat flame across the burning edge. Steady-state pressure is achieved during downward flame spread as the pressure rises and plateaus. This indicates that the heat generation by the flame matches the heat loss to surroundings during the longer, slower downward burns. One heat loss mechanism included mounting a heat exchanger directly above the burning sample in the path of the plume to act as a heat sink and more efficiently dissipate the heat due to the combustion event. This proved an effective means for chamber overpressure mitigation for those tests producing the most total heat release and thus was determined to be a feasible mitigation
The predictability of large-scale wind-driven flows
Directory of Open Access Journals (Sweden)
A. Mahadevan
2001-01-01
The singular values associated with optimally growing perturbations to stationary and time-dependent solutions for the general circulation in an ocean basin provide a measure of the rate at which solutions with nearby initial conditions begin to diverge, and hence, a measure of the predictability of the flow. In this paper, the singular vectors and singular values of stationary and evolving examples of wind-driven, double-gyre circulations in different flow regimes are explored. By changing the Reynolds number in simple quasi-geostrophic models of the wind-driven circulation, steady, weakly aperiodic and chaotic states may be examined. The singular vectors of the steady state reveal some of the physical mechanisms responsible for optimally growing perturbations. In time-dependent cases, the dominant singular values show significant variability in time, indicating strong variations in the predictability of the flow. When the underlying flow is weakly aperiodic, the dominant singular values co-vary with integral measures of the large-scale flow, such as the basin-integrated upper ocean kinetic energy and the transport in the western boundary current extension. Furthermore, in a reduced gravity quasi-geostrophic model of a weakly aperiodic, double-gyre flow, the behaviour of the dominant singular values may be used to predict a change in the large-scale flow, a feature not shared by an analogous two-layer model. When the circulation is in a strongly aperiodic state, the dominant singular values no longer vary coherently with integral measures of the flow. Instead, they fluctuate in a very aperiodic fashion on mesoscale time scales. The dominant singular vectors then depend strongly on the arrangement of mesoscale features in the flow and the evolved forms of the associated singular vectors have relatively short spatial scales. These results have several implications. In weakly aperiodic, periodic, and stationary regimes, the mesoscale energy
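The singular-value machinery behind optimally growing perturbations can be illustrated on a toy non-normal propagator; the matrix below is a hypothetical stand-in for the tangent-linear model of the flow, chosen only to show transient growth:

```python
import numpy as np

# Toy linear propagator mapping an initial perturbation to its evolved state.
# Non-normal (upper-triangular with shear coupling), as in shear-dominated flows;
# in practice this operator would come from linearizing the circulation model.
M = np.array([[0.9, 5.0],
              [0.0, 0.8]])

# Singular values of M measure the growth of optimally chosen initial perturbations:
# the leading right singular vector is the initial condition that amplifies most.
U, s, Vt = np.linalg.svd(M)
optimal_growth = s[0]          # largest amplification factor over the interval
optimal_perturbation = Vt[0]   # initial condition achieving that growth

# Both eigenvalues are below 1 (asymptotic decay), yet the leading singular value
# exceeds 1: transient growth, hence reduced short-term predictability.
eigvals = np.linalg.eigvals(M)
```

This is the basic reason singular values, rather than eigenvalues, quantify predictability: for non-normal propagators the two can disagree strongly over finite intervals.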
Self-optimizing Power Curve Control Strategy for Large Scale Wind Turbine
Institute of Scientific and Technical Information of China (English)
夏安俊; 徐浩; 胡书举; 许洪华
2012-01-01
Power signal feedback is usually used to track the maximum power point in large-scale wind turbine control when the wind speed is below the rated speed. However, the maximum power curve of a wind turbine is generally obtained from field experiments, which makes its accuracy hard to assure. Consequently, we propose a self-optimizing power curve control strategy based on a tracking-differentiator to improve the efficiency of wind turbine operation in low wind. The differential signals of rotor speed and mechanical power are extracted by the tracking-differentiators and used to determine the position of the actual power curve relative to the maximum power point. A three-dimensional fuzzy controller was designed for real-time regulation of the power curve coefficient. Based on a 2 MW doubly-fed wind power generation system, the self-optimizing power curve control strategy was simulated in the Bladed simulation environment. Simulation results show that the proposed method has comparatively small dependence on wind turbine parameters such as the power characteristic or torque characteristic. The method can regulate the power curve coefficient in real time and adjust the actual power curve to the position of the maximum power curve effectively when the wind speed changes, verifying the correctness and feasibility of the control strategy.
Synchronization of coupled large-scale Boolean networks
Energy Technology Data Exchange (ETDEWEB)
Li, Fangfei, E-mail: li-fangfei@163.com [Department of Mathematics, East China University of Science and Technology, No. 130, Meilong Road, Shanghai, Shanghai 200237 (China)
2014-03-15
This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm for large-scale Boolean networks is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
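A minimal illustration of complete synchronization of two Boolean networks under unidirectional coupling; the three-node update rules below are invented for illustration and are unrelated to the paper's aggregation algorithm:

```python
def step(state, rules):
    """Advance a Boolean network one step; rules[i] maps the full state tuple to node i's next value."""
    return tuple(rule(state) for rule in rules)

# Drive network: three nodes with simple Boolean update rules (a toy example).
drive_rules = [
    lambda s: s[1] and s[2],
    lambda s: s[0] or s[2],
    lambda s: not s[0],
]

# Response network: identical rules, but node 0 is overwritten by the drive's node 0
# (unidirectional coupling). Complete synchronization means the state tuples coincide.
def coupled_step(drive, response):
    new_drive = step(drive, drive_rules)
    new_response = step(response, drive_rules)
    new_response = (new_drive[0],) + new_response[1:]
    return new_drive, new_response

drive, response = (True, False, True), (False, True, False)
history = []
for _ in range(10):
    drive, response = coupled_step(drive, response)
    history.append(drive == response)
synchronized = all(history[-3:])   # states coincide over the last few steps
```

Once the two states coincide, identical rules plus the coupling keep them identical, so complete synchronization is an absorbing condition in this toy model.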
Optimal Scale Edge Detection Utilizing Noise within Images
Directory of Open Access Journals (Sweden)
Adnan Khashman
2003-04-01
Edge detection techniques have common problems that include poor edge detection in low contrast images, speed of recognition and high computational cost. An efficient solution to the edge detection of objects in low to high contrast images is scale space analysis. However, this approach is time consuming and computationally expensive. These expenses can be marginally reduced if an optimal scale is found in scale space edge detection. This paper presents a new approach to detecting objects within images using the noise within the images. The novel idea is based on selecting one optimal scale for the entire image at which scale space edge detection can be applied. The selection of an ideal scale is based on the hypothesis that "the optimal edge detection scale (the ideal scale) depends on the noise within an image". This paper aims at providing experimental evidence on the relationship between the optimal scale and the noise within images.
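The hypothesis that the optimal scale depends on image noise can be prototyped as follows; the noise estimator and the proportionality constant `k` are assumptions for illustration, not the paper's calibration:

```python
import numpy as np
from scipy import ndimage

def estimate_noise(img):
    """Robust noise estimate from the median absolute deviation of horizontal differences."""
    d = np.diff(img, axis=1)
    return np.median(np.abs(d - np.median(d))) / 0.6745 / np.sqrt(2)

def edges_at_optimal_scale(img, k=2.0):
    """Edge detection at a single scale chosen from the image noise.

    The Gaussian scale is set proportional to the estimated noise level
    (the constant k and the 0.5 floor are illustrative assumptions).
    """
    sigma = max(k * estimate_noise(img), 0.5)
    smoothed = ndimage.gaussian_filter(img, sigma)
    gx = ndimage.sobel(smoothed, axis=1)
    gy = ndimage.sobel(smoothed, axis=0)
    mag = np.hypot(gx, gy)
    return mag > mag.mean() + 2 * mag.std(), sigma

# Synthetic test image: a bright square on a dark background plus Gaussian noise.
rng = np.random.default_rng(1)
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
noisy = img + rng.normal(0, 0.1, img.shape)
edges, chosen_scale = edges_at_optimal_scale(noisy)
```

Choosing one scale for the whole image is exactly what saves the cost of a full scale-space sweep: only a single smoothing and gradient pass is needed.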
Multitree Algorithms for Large-Scale Astrostatistics
March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.
2012-03-01
Common astrostatistical operations. A number of common "subroutines" occur over and over again in the statistical analysis of astronomical data. Some of the most powerful, and computationally expensive, of these additionally share the common trait that they involve distance comparisons between all pairs of data points—or in some cases, all triplets or worse. These include: * All Nearest Neighbors (AllNN): For each query point in a dataset, find the k-nearest neighbors among the points in another dataset—naively O(N^2) to compute, for O(N) data points. * n-Point Correlation Functions: The main spatial statistic used for comparing two datasets in various ways—naively O(N^2) for the 2-point correlation, O(N^3) for the 3-point correlation, etc. * Euclidean Minimum Spanning Tree (EMST): The basis for "single-linkage hierarchical clustering," the main procedure for generating a hierarchical grouping of the data points at all scales, aka "friends-of-friends"—naively O(N^2). * Kernel Density Estimation (KDE): The main method for estimating the probability density function of the data, nonparametrically (i.e., with virtually no assumptions on the functional form of the pdf)—naively O(N^2). * Kernel Regression: A powerful nonparametric method for regression, or predicting a continuous target value—naively O(N^2). * Kernel Discriminant Analysis (KDA): A powerful nonparametric method for classification, or predicting a discrete class label—naively O(N^2). (Note that the "two datasets" may in fact be the same dataset, as in two-point autocorrelations, or the so-called monochromatic AllNN problem, or the leave-one-out cross-validation needed in kernel estimation.) The need for fast algorithms for such analysis subroutines is particularly acute in the modern age of exploding dataset sizes in astronomy. The Sloan Digital Sky Survey yielded hundreds of millions of objects, and the next generation of instruments such as the Large Synoptic Survey Telescope will yield roughly
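The speedup of tree-based AllNN over the naive O(N^2) approach is easy to demonstrate with a k-d tree, here using SciPy's `cKDTree` as a stand-in for the paper's multitree machinery (the mock catalogue is random data, not a survey):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)
data = rng.uniform(0, 1, size=(2000, 3))   # mock catalogue of 3-D positions

# Tree-based AllNN: build once, query all points. This runs in roughly
# O(N log N) for low-dimensional data, versus O(N^2) for all-pairs comparison.
tree = cKDTree(data)
dist, idx = tree.query(data, k=2)          # k=2: each point's nearest neighbor is itself
nn_dist, nn_idx = dist[:, 1], idx[:, 1]    # column 1 is the true nearest neighbor

# Spot-check the tree result against the naive O(N) scan for one point.
i = 0
d2 = np.linalg.norm(data - data[i], axis=1)
d2[i] = np.inf
naive_nn = int(d2.argmin())
```

The same tree traversal idea generalizes to the paper's other kernels (correlation functions, KDE, EMST), which is the premise of multitree methods.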
Thermodynamics constrains allometric scaling of optimal development time in insects.
Directory of Open Access Journals (Sweden)
Michael E Dillon
Development time is a critical life-history trait that has profound effects on organism fitness and on population growth rates. For ectotherms, development time is strongly influenced by temperature and is predicted to scale with body mass to the quarter power based on (1) the ontogenetic growth model of the metabolic theory of ecology, which describes a bioenergetic balance between tissue maintenance and growth given the scaling relationship between metabolism and body size, and (2) numerous studies, primarily of vertebrate endotherms, that largely support this prediction. However, few studies have investigated the allometry of development time among invertebrates, including insects. Abundant data on development of diverse insects provides an ideal opportunity to better understand the scaling of development time in this ecologically and economically important group. Insects develop more quickly at warmer temperatures until reaching a minimum development time at some optimal temperature, after which development slows. We evaluated the allometry of insect development time by compiling estimates of minimum development time and optimal developmental temperature for 361 insect species from 16 orders with body mass varying over nearly 6 orders of magnitude. Allometric scaling exponents varied with the statistical approach: standardized major axis regression supported the predicted quarter-power scaling relationship, but ordinary and phylogenetic generalized least squares did not. Regardless of the statistical approach, body size alone explained less than 28% of the variation in development time. Models that also included optimal temperature explained over 50% of the variation in development time. Warm-adapted insects developed more quickly, regardless of body size, supporting the "hotter is better" hypothesis that posits that ectotherms have a limited ability to evolutionarily compensate for the depressing effects of low temperatures on rates of
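The gap between OLS and standardized major axis (SMA) slopes noted in the abstract can be reproduced on synthetic data; the quarter-power dataset below is simulated for illustration, not the study's compilation:

```python
import numpy as np

def scaling_exponents(mass, dev_time):
    """Fit dev_time ~ mass^b on log-log axes with OLS and standardized major axis (SMA).

    SMA slope = sign(r) * sd(y)/sd(x); it treats x and y symmetrically, which is
    why it can yield a different exponent than OLS on the same data.
    """
    x, y = np.log10(mass), np.log10(dev_time)
    r = np.corrcoef(x, y)[0, 1]
    b_ols = r * y.std() / x.std()
    b_sma = np.sign(r) * y.std() / x.std()
    return b_ols, b_sma, r ** 2

# Synthetic data built around quarter-power scaling with scatter (hypothetical values).
rng = np.random.default_rng(7)
mass = 10 ** rng.uniform(-3, 3, 300)                        # mass over 6 orders of magnitude
dev_time = 5.0 * mass ** 0.25 * 10 ** rng.normal(0, 0.2, 300)
b_ols, b_sma, r2 = scaling_exponents(mass, dev_time)
```

Because SMA divides the OLS slope by |r|, the two estimators diverge exactly when scatter is large, which is why the abstract's conclusion depends on the statistical approach.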
Institute of Scientific and Technical Information of China (English)
Qin Ni; Ch. Zillober; K. Schittkowski
2005-01-01
In this paper, we describe a method to solve large-scale structural optimization problems by sequential convex programming (SCP). A predictor-corrector interior point method is applied to solve the strictly convex subproblems. The SCP algorithm and the topology optimization approach are introduced. Especially, different strategies to solve certain linear systems of equations are analyzed. Numerical results are presented to show the efficiency of the proposed method for solving topology optimization problems and to compare different variants.
Large scale dynamics of protoplanetary discs
Béthune, William
2017-08-01
Planets form in the gaseous and dusty disks orbiting young stars. These protoplanetary disks are dispersed in a few million years, being accreted onto the central star or evaporated into the interstellar medium. To explain the observed accretion rates, it is commonly assumed that matter is transported through the disk by turbulence, although the mechanism sustaining turbulence is uncertain. On the other hand, irradiation by the central star could heat up the disk surface and trigger a photoevaporative wind, but thermal effects cannot account for the observed acceleration and collimation of the wind into a narrow jet perpendicular to the disk plane. Both issues can be solved if the disk is sensitive to magnetic fields. Weak fields lead to the magnetorotational instability, whose outcome is a state of sustained turbulence. Strong fields can slow down the disk, causing it to accrete while launching a collimated wind. However, the coupling between the disk and the neutral gas is done via electric charges, each of which is outnumbered by several billion neutral molecules. The imperfect coupling between the magnetic field and the neutral gas is described in terms of "non-ideal" effects, introducing new dynamical behaviors. This thesis is devoted to the transport processes happening inside weakly ionized and weakly magnetized accretion disks; the role of microphysical effects on the large-scale dynamics of the disk is of primary importance. As a first step, I exclude the wind and examine the impact of non-ideal effects on the turbulent properties near the disk midplane. I show that the flow can spontaneously organize itself if the ionization fraction is low enough; in this case, accretion is halted and the disk exhibits axisymmetric structures, with possible consequences on planetary formation. As a second step, I study the launching of disk winds via a global model of stratified disk embedded in a warm atmosphere. This model is the first to compute non-ideal effects from
Fonseca, Ricardo A; Fiúza, Frederico; Davidson, Asher; Tsung, Frank S; Mori, Warren B; Silva, Luís O
2013-01-01
A new generation of laser wakefield accelerators, supported by the extreme accelerating fields generated in the interaction of PW-Class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modeling for further understanding of the underlying physics and identification of optimal regimes, but large scale modeling of these scenarios is computationally heavy and requires efficient use of state-of-the-art Petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed / shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modeling of LWFA, demonstrating speedups of over 1 order of magni...
Maestro: an orchestration framework for large-scale WSN simulations.
Riliskis, Laurynas; Osipov, Evgeny
2014-03-18
Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.
Large scale stochastic spatio-temporal modelling with PCRaster
Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.
2013-04-01
PCRaster is a software framework for building spatio-temporal models of land surface processes (http://www.pcraster.eu). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations are available to model builders as Python functions. The software comes with Python framework classes providing control flow for spatio-temporal modelling, Monte Carlo simulation, and data assimilation (Ensemble Kalman Filter and Particle Filter). Models are built by combining the spatial operations in these framework classes. This approach enables modellers without specialist programming experience to construct large, rather complicated models, as many technical details of modelling (e.g., data storage, solving spatial operations, data assimilation algorithms) are taken care of by the PCRaster toolbox. Exploratory modelling is supported by routines for prompt, interactive visualisation of stochastic spatio-temporal data generated by the models. The high computational requirements for stochastic spatio-temporal modelling, and an increasing demand to run models over large areas at high resolution, e.g. in global hydrological modelling, require an optimal use of available, heterogeneous computing resources by the modelling framework. Current work in the context of the eWaterCycle project is on a parallel implementation of the modelling engine, capable of running on a high-performance computing infrastructure such as clusters and supercomputers. Model runs will be distributed over multiple compute nodes and multiple processors (GPUs and CPUs). Parallelization will be done by parallel execution of Monte Carlo realizations and subregions of the modelling domain. In our approach we use multiple levels of parallelism, improving scalability considerably. On the node level we will use OpenCL, the industry standard for low-level high performance computing kernels. To combine multiple nodes we will use
Large scale structure from viscous dark matter
Blas, Diego; Floerchinger, Stefan; Garny, Mathias; Tetradis, Nikolaos; Wiedemann, Urs Achim
2015-11-01
Cosmological perturbations of sufficiently long wavelength admit a fluid dynamic description. We consider modes with wavevectors below a scale km for which the dynamics is only mildly non-linear. The leading effect of modes above that scale can be accounted for by effective non-equilibrium viscosity and pressure terms. For mildly non-linear scales, these mainly arise from momentum transport within the ideal and cold but inhomogeneous fluid, while momentum transport due to more microscopic degrees of freedom is suppressed. As a consequence, concrete expressions with no free parameters, except the matching scale km, can be derived from matching evolution equations to standard cosmological perturbation theory. Two-loop calculations of the matter power spectrum in the viscous theory lead to excellent agreement with N-body simulations up to scales k=0.2 h/Mpc. The convergence properties in the ultraviolet are better than for standard perturbation theory and the results are robust with respect to variations of the matching scale.
Optimized Design and Discussion on Middle and Large CANDLE Reactors
Directory of Open Access Journals (Sweden)
Xiaoming Chai
2012-08-01
CANDLE (Constant Axial shape of Neutron flux, nuclide number densities and power shape During Life of Energy producing reactor) reactors have been intensively researched in the last decades [1–6]. Research shows that this kind of reactor is highly economical, safe and efficiently saves resources, thus extending large scale fission nuclear energy utilization for thousands of years, benefiting the whole of society. For many developing countries with a large population and high energy demands, such as China and India, middle (1000 MWth) and large (2000 MWth) CANDLE fast reactors are obviously more suitable than small reactors [2]. In this paper, the middle and large CANDLE reactors are investigated with U-Pu and combined ThU-UPu fuel cycles, aiming to utilize the abundant thorium resources and optimize the radial power distribution. To achieve these design purposes, the present designs were utilized, simply dividing the core into two fuel regions in the radial direction. The less active fuel, such as thorium or natural uranium, was loaded in the inner core region and the fuel with low-level enrichment, e.g. 2.0% enriched uranium, was loaded in the outer core region. By this simple core configuration and fuel setting, rather than using a complicated method, we can obtain the desired middle and large CANDLE fast cores with reasonable core geometry and thermal hydraulic parameters that perform safely and economically, as is to be expected from CANDLE. To assist in understanding the CANDLE reactor's attributes, analysis and discussion of the calculation results are provided.
Robust large-scale parallel nonlinear solvers for simulations.
Energy Technology Data Exchange (ETDEWEB)
Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)
2005-11-01
This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any
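The Broyden approach summarized above replaces the Jacobian with a secant approximation that is updated from successive residuals, so no Jacobian evaluations are needed. A minimal sketch of the classical ("good") Broyden iteration, assuming a dense update rather than the limited-memory variant developed in the report, and using an illustrative 2x2 test system not taken from the report:

```python
import numpy as np

def broyden(f, x0, tol=1e-10, max_iter=100):
    """Solve f(x) = 0 with Broyden's 'good' rank-one update,
    starting from an identity approximation to the Jacobian."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                 # Jacobian approximation
    fx = f(x)
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        s = np.linalg.solve(B, -fx)    # quasi-Newton step
        x = x + s
        fx_new = f(x)
        y = fx_new - fx
        # secant condition B_{k+1} s = y, enforced by a rank-one update
        B += np.outer(y - B @ s, s) / (s @ s)
        fx = fx_new
    return x

# illustrative system: x^2 + y^2 = 4, x*y = 1
g = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
root = broyden(g, [2.0, 0.0])
```

The limited-memory variant studied in the report stores only the recent update vectors instead of the dense matrix B, which is what makes the method viable at large scale.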
Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo
2012-12-01
A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data, without requiring a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X(1)) and blending times (X(2)) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability for the compression rate of the powder mixture (Y(1)), tablet hardness (Y(2)), and dissolution rate (Y(3)) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. A constant Froude number was applied as the scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on the large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in the pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
Comparison Between Overtopping Discharge in Small and Large Scale Models
DEFF Research Database (Denmark)
Helgason, Einar; Burcharth, Hans F.
2006-01-01
Small and large scale model tests show no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5 m, as the water sank into the voids between the stones on the crest. For low overtopping, scale effects are present, as the small-scale model underpredicts the overtopping discharge.
SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS
Fiscaletti, Daniele
2015-08-23
The interaction between scales is investigated in a turbulent mixing layer. The large-scale amplitude modulation of the small scales, already observed in other works, depends on the crosswise location. Large-scale positive fluctuations correlate with a stronger activity of the small scales on the low-speed side of the mixing layer, and with a reduced activity on the high-speed side. However, from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the modulation of the small scales by the large-scale gradients has been additionally investigated.
On the scaling of small-scale jet noise to large scale
Soderman, Paul T.; Allen, Christopher S.
1992-01-01
An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm in diameter, could be used to correctly simulate the overall or perceived noise level (PNL) of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 × 10^6 based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.
Fast large-scale reionization simulations
Thomas, Rajat M.; Zaroubi, Saleem; Ciardi, Benedetta; Pawlik, Andreas H.; Labropoulos, Panagiotis; Jelic, Vibor; Bernardi, Gianni; Brentjens, Michiel A.; de Bruyn, A. G.; Harker, Geraint J. A.; Koopmans, Leon V. E.; Pandey, V. N.; Schaye, Joop; Yatawatta, Sarod; Mellema, G.
2009-01-01
We present an efficient method to generate large simulations of the epoch of reionization without the need for a full three-dimensional radiative transfer code. Large dark-matter-only simulations are post-processed to produce maps of the redshifted 21-cm emission from neutral hydrogen. Dark matter h
Large scale parallel document image processing
van der Zant, Tijn; Schomaker, Lambert; Valentijn, Edwin; Yanikoglu, BA; Berkner, K
2008-01-01
Building a system that allows searching a very large database of document images requires professionalization of hardware and software, e-science and web access. In astrophysics there is ample experience dealing with large data sets due to an increasing number of measurement instruments. The probl
Implementation of efficient sensitivity analysis for optimization of large structures
Umaretiya, J. R.; Kamil, H.
1990-01-01
The paper presents the theoretical bases and implementation techniques of sensitivity analyses for efficient structural optimization of large structures, based on finite element static and dynamic analysis methods. The sensitivity analyses have been implemented in conjunction with two methods for optimization, namely, the Mathematical Programming and Optimality Criteria methods. The paper discusses the implementation of the sensitivity analysis method into our in-house software package, AutoDesign.
Parallel cluster labeling for large-scale Monte Carlo simulations
Flanigan, M; Flanigan, M; Tamayo, P
1995-01-01
We present an optimized version of a cluster labeling algorithm previously introduced by the authors. This algorithm is well suited for large-scale Monte Carlo simulations of spin models using cluster dynamics on parallel computers with large numbers of processors. The algorithm divides physical space into rectangular cells which are assigned to processors and combines a serial local labeling procedure with a relaxation process across nearest-neighbor processors. By controlling overhead and reducing inter-processor communication, this method attains good computational speed-up and efficiency. Large systems of up to 65536 × 65536 spins have been simulated at updating speeds of 11 ns/site (90.7 million spin updates/sec) using state-of-the-art supercomputers. In the second part of the article we use the cluster algorithm to study the relaxation of magnetization and energy on large Ising models using Swendsen-Wang dynamics. We found evidence that exponential and power law factors are present in the relaxatio...
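The serial local labeling step at the heart of such cluster algorithms is typically a union-find pass over the lattice bonds. A minimal single-processor sketch, assuming nearest-neighbor bonds and open boundaries (the parallel cell decomposition and the cross-processor relaxation described in the abstract are omitted; the 3x3 spin configuration is purely illustrative):

```python
import numpy as np

def label_clusters(spins):
    """Label connected clusters of equal spins on a 2D lattice
    (nearest-neighbor bonds, open boundaries) with union-find."""
    ny, nx = spins.shape
    parent = list(range(ny * nx))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri

    # merge labels across every same-spin bond
    for y in range(ny):
        for x in range(nx):
            i = y * nx + x
            if x + 1 < nx and spins[y, x] == spins[y, x + 1]:
                union(i, i + 1)
            if y + 1 < ny and spins[y, x] == spins[y + 1, x]:
                union(i, i + nx)

    return np.array([find(i) for i in range(ny * nx)]).reshape(ny, nx)

spins = np.array([[ 1,  1, -1],
                  [-1,  1, -1],
                  [-1, -1,  1]])
labels = label_clusters(spins)
n_clusters = len(np.unique(labels))   # 4 clusters in this configuration
```

In the parallel version each processor would run this pass on its own rectangular cell and then iteratively reconcile labels along cell boundaries with its neighbors.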
Statistical equilibria of large scales in dissipative hydrodynamic turbulence
Dallas, Vassilios; Alexakis, Alexandros
2015-01-01
We present a numerical study of the statistical properties of three-dimensional dissipative turbulent flows at scales larger than the forcing scale. Our results indicate that the large scale flow can be described to a large degree by the truncated Euler equations with the predictions of the zero flux solutions given by absolute equilibrium theory, both for helical and non-helical flows. Thus, the functional shape of the large scale spectra can be predicted provided that scales sufficiently larger than the forcing length scale but also sufficiently smaller than the box size are examined. Deviations from the predictions of absolute equilibrium are discussed.
The fractal octahedron network of the large scale structure
Battaner, E
1998-01-01
In a previous article, we proposed that the large scale structure network generated by large scale magnetic fields could consist of a network of octahedra contacting only at their vertices. Assuming such a network could arise at different scales, producing a fractal geometry, we study here its properties, and in particular how a sub-octahedron network can be inserted within an octahedron of the large network. We deduce that the scale of the fractal structure would range from ≈100 Mpc, i.e. the scale of the deepest surveys, down to about 10 Mpc, as other smaller scale magnetic fields were probably destroyed in the radiation dominated Universe.
Optimizing snake locomotion in the plane. II. Large transverse friction
Alben, Silas
2013-01-01
We determine analytically the form of optimal snake locomotion when the coefficient of transverse friction is large, the typical regime for biological and robotic snakes. We find that the optimal snake motion is a retrograde traveling wave, with a wave amplitude that decays as the -1/4 power of the coefficient of transverse friction. This result agrees well with our numerical computations.
Electrodialysis system for large-scale enantiomer separation
Ent, van der E.M.; Thielen, T.P.H.; Cohen Stuart, M.A.; Padt, van der A.; Keurentjes, J.T.F.
2001-01-01
In contrast to analytical methods, the range of technologies currently applied for large-scale enantiomer separations is not very extensive. Therefore, a new system has been developed for large-scale enantiomer separations that can be regarded as the scale-up of a capillary electrophoresis system.
Large Scale Experiments on Spacecraft Fire Safety
DEFF Research Database (Denmark)
Urban, David L.; Ruff, Gary A.; Minster, Olivier
2012-01-01
Full scale fire testing complemented by computer modelling has provided significant knowhow about the risk, prevention and suppression of fire in terrestrial systems (cars, ships, planes, buildings, mines, and tunnels). In comparison, no such testing has been carried out for manned spacecraft due...
A mini review: photobioreactors for large scale algal cultivation.
Gupta, Prabuddha L; Lee, Seung-Mok; Choi, Hee-Jeong
2015-09-01
Microalgae cultivation has gained much interest in terms of the production of foods, biofuels, and bioactive compounds and offers a promising option for cleaning the environment through CO2 sequestration and wastewater treatment. Although open pond cultivation is the most affordable option, it tends to offer insufficient control of growth conditions and carries a risk of contamination. In contrast, while posing minimal risk of contamination, closed photobioreactors offer better control of culture conditions, such as CO2 supply, water supply, optimal temperature, efficient exposure to light, culture density, pH level, and mixing rate. For large scale production of biomass, efficient photobioreactors are required. This review paper describes general design considerations pertaining to photobioreactor systems for cultivating microalgae for biomass production. It also discusses the current challenges in the design of photobioreactors for the production of low-cost biomass.
Including investment risk in large-scale power market models
DEFF Research Database (Denmark)
Lemming, Jørgen Kjærgaard; Meibom, P.
2003-01-01
Long-term energy market models can be used to examine investments in production technologies, however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice...
Large-scale structure of time evolving citation networks
Leicht, E. A.; Clarkson, G.; Shedden, K.; Newman, M. E. J.
2007-09-01
In this paper we examine a number of methods for probing and understanding the large-scale structure of networks that evolve over time. We focus in particular on citation networks, networks of references between documents such as papers, patents, or court cases. We describe three different methods of analysis, one based on an expectation-maximization algorithm, one based on modularity optimization, and one based on eigenvector centrality. Using the network of citations between opinions of the United States Supreme Court as an example, we demonstrate how each of these methods can reveal significant structural divisions in the network and how, ultimately, the combination of all three can help us develop a coherent overall picture of the network's shape.
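Of the three methods above, eigenvector centrality is the simplest to sketch: it is the leading eigenvector of the adjacency matrix, obtainable by power iteration. A minimal illustration on a hypothetical four-node undirected network (plain centrality only; the paper's actual variant is adapted to the acyclic, time-ordered structure of citation graphs, where the naive version degenerates):

```python
import numpy as np

def eigenvector_centrality(A, n_iter=200):
    """Leading-eigenvector centrality via power iteration
    on a symmetric adjacency matrix A."""
    x = np.ones(A.shape[0])
    for _ in range(n_iter):
        x = A @ x                  # spread score along edges
        x = x / np.linalg.norm(x)  # renormalize each sweep
    return x

# toy undirected network: node 0 is linked to all others (a 'hub')
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
c = eigenvector_centrality(A)
```

Power iteration converges here because the graph is connected and non-bipartite; the hub node ends up with the largest centrality, as expected.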
Large-area landslide susceptibility with optimized slope-units
Alvioli, Massimiliano; Marchesini, Ivan; Reichenbach, Paola; Rossi, Mauro; Ardizzone, Francesca; Fiorucci, Federica; Guzzetti, Fausto
2017-04-01
A Slope-Unit (SU) is a type of morphological terrain unit bounded by drainage and divide lines that maximize the within-unit homogeneity and the between-unit heterogeneity across distinct physical and geographical boundaries [1]. Compared to other terrain subdivisions, SU are morphological terrain units well related to the natural (i.e., geological, geomorphological, hydrological) processes that shape and characterize natural slopes. This makes SU easily recognizable in the field or in topographic base maps, and well suited for environmental and geomorphological analysis, in particular for landslide susceptibility (LS) modelling. An optimal subdivision of an area into a set of SU depends on multiple factors: size and complexity of the study area, quality and resolution of the available terrain elevation data, purpose of the terrain subdivision, and scale and resolution of the phenomena for which SU are delineated. We use the recently developed r.slopeunits software [2,3] for the automatic, parametric delineation of SU within the open source GRASS GIS based on terrain elevation data and a small number of user-defined parameters. The software provides subdivisions consisting of SU with different shapes and sizes, as a function of the input parameters. In this work, we describe a procedure for the optimal selection of the user parameters through the production of a large number of realizations of the LS model. We tested the software and the optimization procedure in a 2,000 km2 area in Umbria, Central Italy. For LS zonation we adopt a logistic regression model implemented in a well-known software package [4,5], using about 50 independent variables. To select the optimal SU partition for LS zonation, we want to define a metric able to quantify simultaneously: (i) slope-unit internal homogeneity, (ii) slope-unit external heterogeneity, and (iii) landslide susceptibility model performance. To this end, we define a comprehensive objective function S, as the product of three
Large scale PV plants - also in Denmark. Project report
Energy Technology Data Exchange (ETDEWEB)
Ahm, P. (PA Energy, Malling (Denmark)); Vedde, J. (SiCon. Silicon and PV consulting, Birkeroed (Denmark))
2011-04-15
Large scale PV (LPV) plants, plants with a capacity of more than 200 kW, have since 2007 constituted an increasing share of global PV installations. In 2009 large scale PV plants with a cumulative power of more than 1.3 GWp were connected to the grid. The necessary design data for LPV plants in Denmark are available or can be found, although irradiance data could be improved. There seem to be very few institutional barriers for LPV projects, but as no real LPV projects have been processed so far, these findings have to be regarded as preliminary. The fast growing number of very large scale solar thermal plants for district heating applications supports these findings. It has further been investigated how to optimize the layout of LPV plants. Under the Danish irradiance conditions, with several winter months of very low solar height, PV installations on flat surfaces will have to balance the requirements of physical space and cost against the loss of electricity production due to shadowing effects. The potential for LPV plants in Denmark is found in three main categories: PV installations on flat roofs of large commercial buildings, PV installations on other large scale infrastructure such as noise barriers, and ground mounted PV installations. The technical potential for all three categories is found to be significant, in the range of 50-250 km2. In terms of energy harvest, PV plants under Danish conditions exhibit an overall efficiency of about 10% in converting the energy content of the light, compared to about 0.3% for biomass. The theoretical ground area needed to produce the present annual electricity consumption of Denmark, 33-35 TWh, is about 300 km2. The Danish grid codes and the electricity safety regulations mention very little about PV and nothing about LPV plants. It is expected that LPV plants will be treated similarly to big wind turbines. A number of LPV plant scenarios have been investigated in detail based on real commercial offers and
Topology optimization for nano-scale heat transfer
DEFF Research Database (Denmark)
Evgrafov, Anton; Maute, Kurt; Yang, Ronggui
2009-01-01
We consider the problem of optimal design of nano-scale heat conducting systems using topology optimization techniques. At such small scales the empirical Fourier's law of heat conduction no longer captures the underlying physical phenomena because the mean-free path of the heat carriers, phonons...
Institute of Scientific and Technical Information of China (English)
何东博; 贾爱林; 冀光; 位云生; 唐海发
2013-01-01
Sulige gas field is a typical tight sand gas field in China. Well type and pattern optimization is the key technology to improve single well estimated reserves and recovery factor and to achieve effective field development. In view of the large area, low abundance and high heterogeneity of Sulige gas field, a series of techniques have been developed, including hierarchical description of the reservoir architecture of large composite sand bodies with well spacing optimization, well pattern optimization, design and optimization of horizontal well trajectories, and deliverability evaluation for different types of gas wells. These technologies provide the most important technical support for increasing the proportion of class Ⅰ and Ⅱ wells to 75%-80%, for enhancing the recovery factor to more than 35%, and for the industrial application of horizontal drilling. To further improve individual well production and recovery factor, attempts and pilot tests should be carried out throughout the development on various well types, including side tracking of deficient wells, multilateral horizontal wells, and directional wells, as well as on horizontal well patterns and combined well patterns of various well types.
Energy Technology Data Exchange (ETDEWEB)
Kuebler, R.; Fisch, M.N. [Steinbeis-Transferzentrum Energie-, Gebaeude- und Solartechnik, Stuttgart (Germany)
1998-12-31
The aim of this project is the preparation of the 'Large-Scale Solar Heating' programme for a Europe-wide development of the technology. The demonstration programme developed from it was judged favourably by the experts but was not immediately (1996) accepted for funding. In November 1997 the EU commission provided 1.5 million ECU, which allowed the realisation of an updated project proposal. By mid-1997 a smaller project had already been approved, which had been applied for under the lead of Chalmers Industriteknik (CIT) in Sweden and mainly serves technology transfer. (orig.)
Large scale processing of dielectric electroactive polymers
DEFF Research Database (Denmark)
Vudayagiri, Sindhu
of square of films’ thickness. Production of thin elastomer films with microstructures on one or both surfaces is therefore the crucial step in the manufacturing. The manufacture process is still not perfect and further optimization is required. Smart processing techniques are required at Danfoss Polypower...... is sputtered on the microstructured surface of the film. Two such films are laminated to make a single DEAP laminate with two microstructured surfaces. The lamination process introduces two problems: 1) it may entrap air bubbles and dust at the interface which will cause the films to breakdown at the operating...
Minimum length scale in topology optimization by geometric constraints
DEFF Research Database (Denmark)
Zhou, Mingdong; Lazarov, Boyan Stefanov; Wang, Fengwen
2015-01-01
A density-based topology optimization approach is proposed to design structures with strict minimum length scale. The idea is based on using a filtering-threshold topology optimization scheme and computationally cheap geometric constraints. The constraints are defined over the underlying structural...... geometry represented by the filtered and physical fields. Satisfying the constraints leads to a design that possesses user-specified minimum length scale. Conventional topology optimization problems can be augmented with the proposed constraints to achieve minimum length scale on the final design....... No additional finite element analysis is required for the constrained optimization. Several benchmark examples are presented to show the effectiveness of this approach....
Large-scale Motion of Solar Filaments
Indian Academy of Sciences (India)
Pavel Ambrož; Alfred Schroll
2000-09-01
Precise measurements of the heliographic positions of solar filaments were used to determine the proper motion of solar filaments on a time-scale of days. The filaments tend to show a shaking or waving of their external structure and a general movement of the whole filament body, coinciding with the transport of magnetic flux in the photosphere. The velocity scatter of the individual measured points is about one order of magnitude larger than the accuracy of the measurements.
Modified gravity and large scale flows, a review
Mould, Jeremy
2017-02-01
Large scale flows have been a challenging feature of cosmography ever since galaxy scaling relations came on the scene 40 years ago. The next generation of surveys will offer a serious test of the standard cosmology.
Metastrategies in large-scale bargaining settings
Hennes, D.; Jong, S. de; Tuyls, K.; Gal, Y.
2015-01-01
This article presents novel methods for representing and analyzing a special class of multiagent bargaining settings that feature multiple players, large action spaces, and a relationship among players' goals, tasks, and resources. We show how to reduce these interactions to a set of bilateral
Large-Scale Organizational Performance Improvement.
Pilotto, Rudy; Young, Jonathan O'Donnell
1999-01-01
Describes the steps involved in a performance improvement program in the context of a large multinational corporation. Highlights include a training program for managers that explained performance improvement; performance matrices; divisionwide implementation, including strategic planning; organizationwide training of all personnel; and the…
Discrete-time optimal control and games on large intervals
Zaslavski, Alexander J
2017-01-01
Devoted to the structure of approximate solutions of discrete-time optimal control problems and approximate solutions of dynamic discrete-time two-player zero-sum games, this book presents results on properties of approximate solutions in intervals whose length is independent of the choice of interval, for all sufficiently large intervals. Results concerning the so-called turnpike property of optimal control problems and zero-sum games in the regions close to the endpoints of the time intervals are the main focus of this book. The description of the structure of approximate solutions on sufficiently large intervals and its stability will interest graduate students and mathematicians in optimal control and game theory, engineering, and economics. This book begins with a brief overview and moves on to analyze the structure of approximate solutions of autonomous nonconcave discrete-time optimal control Lagrange problems. Next the structures of approximate solutions of autonomous discrete-time optimal control problems that are discret...
Large scale scientific computing - future directions
Patterson, G. S.
1982-06-01
Every new generation of scientific computers has opened up new areas of science for exploration through the use of more realistic numerical models or the ability to process ever larger amounts of data. Concomitantly, scientists, because of the success of past models and the wide range of physical phenomena left unexplored, have pressed computer designers to strive for the maximum performance that current technology will permit. This encompasses not only increased processor speed, but also substantial improvements in processor memory, I/O bandwidth, secondary storage and facilities to augment the scientist's ability both to program and to understand the results of a computation. Over the past decade, performance improvements for scientific calculations have come from algorithm development and a major change in the underlying architecture of the hardware, not from significantly faster circuitry. It appears that this trend will continue for another decade. A future architectural change for improved performance will most likely be multiple processors coupled together in some fashion. Because the demand for a significantly more powerful computer system comes from users with single large applications, it is essential that an application be efficiently partitionable over a set of processors; otherwise, a multiprocessor system will not be effective. This paper explores some of the constraints on multiple-processor architecture posed by these large applications. In particular, the trade-offs between large numbers of slow processors and small numbers of fast processors are examined. Strategies for partitioning range from partitioning at the language statement level (in-the-small) to partitioning at the program module level (in-the-large). Some examples of partitioning in-the-large are given and a strategy for efficiently executing a partitioned program is explored.
GPS for large-scale aerotriangulation
Rogowski, Jerzy B.
The application of GPS (Global Positioning System) measurements to photogrammetry is presented. The technology of establishment of a GPS network for aerotriangulation as a base for mapping at scales from 1:1000 has been worked out at the Institute of Geodesy and Geodetical Astronomy of the Warsaw University of Technology. This method consists of the design, measurement, and adjustment of this special network. The results of several pilot projects confirm the possibility of improving the aerotriangulation accuracy. A few-centimeter accuracy has been achieved.
Development of large-scale structure in the Universe
Ostriker, J P
1991-01-01
This volume grew out of the 1988 Fermi lectures given by Professor Ostriker, and is concerned with cosmological models that take into account the large scale structure of the universe. He starts with homogeneous isotropic models of the universe and then, by considering perturbations, he leads us to modern cosmological theories of the large scale, such as superconducting strings. This will be an excellent companion for all those interested in the cosmology and the large scale nature of the universe.
Optimal counterterrorism and the recruitment effect of large terrorist attacks
DEFF Research Database (Denmark)
Jensen, Thomas
2011-01-01
We analyze a simple dynamic model of the interaction between terrorists and authorities. Our primary aim is to study optimal counterterrorism and its consequences when large terrorist attacks lead to a temporary increase in terrorist recruitment. First, we show that an increase in counterterrorism...... makes it more likely that terrorist cells plan small rather than large attacks and therefore may increase the probability of a successful attack. Analyzing optimal counterterrorism we see that the recruitment effect makes authorities increase the level of counterterrorism after large attacks. Therefore...
Measurement of ionospheric large-scale irregularity
Institute of Scientific and Technical Information of China (English)
韩文焌; 郑怡嘉; 张喜镇
1996-01-01
Based on observations with a metre-wave aperture-synthesis radio telescope, and given that the scale length of ionospheric irregularities is much larger than the baseline length of the interferometer, the phase error induced in the interferometer output signal by the ionosphere is proportional to the baseline length; expressions for extracting information about the ionosphere are derived accordingly. Using ray theory, and taking into account that the antenna always tracks the radio source during astronomical observation, the wave-motion expression of a travelling ionospheric disturbance (TID) observed in the total electron content is also derived, which is consistent with that obtained from the thin-phase-screen conception; the Doppler velocity due to antenna tracking is then introduced. Finally, an inversion analysis for the horizontal phase velocity of the TID from observed data is given.
Optimization of MIMO Systems Capacity Using Large Random Matrix Methods
Directory of Open Access Journals (Sweden)
Philippe Loubaton
2012-11-01
This paper provides a comprehensive introduction to large random matrix methods for input covariance matrix optimization of the mutual information of MIMO systems. It is first recalled informally how large-system approximations of the mutual information can be derived. Then the optimization of the approximations is discussed, and important methodological points that are not necessarily covered by the existing literature are addressed, including the strict concavity of the approximation, the structure of the argument of its maximum, the accuracy of the large-system approach with regard to the number of antennas, and the justification of iterative water-filling optimization algorithms. While existing papers have developed methods adapted to a specific model, this contribution tries to provide a unified view of the large-system approximation approach.
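The water-filling idea referred to in this abstract can be illustrated, in its simplest single-user form, with a short numpy sketch. The function and variable names are ours, not from the paper, and the channel gains are invented for illustration: power is poured onto the strongest channel modes until the "water level" mu exhausts the total budget.

```python
import numpy as np

def water_filling(gains, total_power, tol=1e-10):
    """Allocate p_i = max(0, mu - 1/g_i) so that sum(p_i) == total_power.

    `gains` are the eigenvalues of H^H H (channel mode gains); the water
    level mu is found by bisection, since sum(p_i) is monotone in mu.
    """
    gains = np.asarray(gains, dtype=float)
    inv = 1.0 / gains
    lo, hi = inv.min(), inv.max() + total_power
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - inv).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - inv)

# illustrative channel mode gains and unit power budget
gains = np.array([4.0, 2.0, 0.5, 0.1])
p = water_filling(gains, total_power=1.0)
rate = np.sum(np.log2(1.0 + gains * p))  # resulting mutual information [bits]
```

With these gains only the two strongest modes receive power; the weak modes sit below the water level and get none.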
Large Scale Demand Response of Thermostatic Loads
DEFF Research Database (Denmark)
Totu, Luminita Cristiana
This study is concerned with large populations of residential thermostatic loads (e.g. refrigerators, air conditioning or heat pumps). The purpose is to gain control over the aggregate power consumption in order to provide balancing services for the electrical grid. Without affecting...... the temperature limits and other operational constraints, and by using only limited communication, it is possible to make use of the individual thermostat deadband flexibility to step-up or step-down the power consumption of the population as if it were a power plant. The individual thermostatic loads experience...
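The thermostat deadband flexibility described in this abstract can be illustrated with a minimal population simulation. All parameters and the first-order thermal model are illustrative assumptions, not taken from the thesis: each unit cools when its compressor is on, warms toward ambient when off, and switches at the deadband edges.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000                          # population size (assumed)
T_min, T_max = 2.0, 5.0           # thermostat deadband [deg C] (assumed)
T_amb = 20.0                      # ambient temperature [deg C]
a = 1.0 / 3600.0                  # thermal leakage rate [1/s] (assumed)
b = 30.0 / 3600.0                 # cooling rate when compressor is on [deg C/s]
dt, steps = 10.0, 2000            # 10 s Euler steps

T = rng.uniform(T_min, T_max, N)  # initial temperatures
on = rng.random(N) < 0.3          # initial compressor states
duty = []                         # aggregate fraction of units drawing power
for _ in range(steps):
    # first-order dynamics: warm toward ambient, cool while the compressor runs
    T += dt * (-a * (T - T_amb) - b * on)
    # deadband hysteresis: switch on at the top edge, off at the bottom edge
    on = np.where(T >= T_max, True, np.where(T <= T_min, False, on))
    duty.append(on.mean())
```

Stepping the whole population's switching thresholds up or down (not shown) shifts `duty`, i.e. the aggregate power, without violating the individual temperature limits.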
Enabling High Performance Large Scale Dense Problems through KBLAS
Abdelfattah, Ahmad
2014-05-04
KBLAS (KAUST BLAS) is a small library that provides highly optimized BLAS routines on systems accelerated with GPUs. KBLAS is entirely written in CUDA C, and targets NVIDIA GPUs with compute capability 2.0 (Fermi) or higher. The current focus is on level-2 BLAS routines, namely the general matrix-vector multiplication (GEMV) kernel and the symmetric/Hermitian matrix-vector multiplication (SYMV/HEMV) kernel. KBLAS provides these two kernels in all four precisions (s, d, c, and z), with support for multi-GPU systems. Through advanced optimization techniques that target latency hiding and pushing memory bandwidth to the limit, KBLAS outperforms state-of-the-art kernels by 20-90%. Competitors include CUBLAS-5.5, MAGMABLAS-1.4.0, and CULA-R17. The SYMV/HEMV kernel from KBLAS has been adopted by NVIDIA, and should appear in CUBLAS-6.0. KBLAS has been used in large-scale simulations of multi-object adaptive optics.
Large-scale GW software development
Kim, Minjung; Mandal, Subhasish; Mikida, Eric; Jindal, Prateek; Bohm, Eric; Jain, Nikhil; Kale, Laxmikant; Martyna, Glenn; Ismail-Beigi, Sohrab
Electronic excitations are important in understanding and designing many functional materials. In terms of ab initio methods, the GW and Bethe-Salpeter equation (GW-BSE) beyond-DFT methods have proved successful in describing excited states in many materials. However, the heavy computational loads and large memory requirements have hindered their routine applicability by the materials physics community. We summarize some of our collaborative efforts to develop a new software framework designed for GW calculations on massively parallel supercomputers. Our GW code is interfaced with the plane-wave pseudopotential ab initio molecular dynamics software ``OpenAtom'' which is based on the Charm++ parallel library. The computation of the electronic polarizability is one of the most expensive parts of any GW calculation. We describe our strategy that uses a real-space representation to avoid the large number of fast Fourier transforms (FFTs) common to most GW methods. We also describe an eigendecomposition of the plasmon modes from the resulting dielectric matrix that enhances efficiency. This work is supported by NSF through Grant ACI-1339804.
Goethite Bench-scale and Large-scale Preparation Tests
Energy Technology Data Exchange (ETDEWEB)
Josephson, Gary B.; Westsik, Joseph H.
2011-10-23
The Hanford Waste Treatment and Immobilization Plant (WTP) is the keystone for cleanup of high-level radioactive waste from our nation's nuclear defense program. The WTP will process high-level waste from the Hanford tanks and produce immobilized high-level waste glass for disposal at a national repository, low activity waste (LAW) glass, and liquid effluent from the vitrification off-gas scrubbers. The liquid effluent will be stabilized into a secondary waste form (e.g. a grout-like material) and disposed on the Hanford site in the Integrated Disposal Facility (IDF) along with the low-activity waste glass. The major long-term environmental impact at Hanford results from technetium that volatilizes from the WTP melters and finally resides in the secondary waste. Laboratory studies have indicated that pertechnetate (⁹⁹TcO₄⁻) can be reduced and captured into a solid solution of α-FeOOH, goethite (Um 2010). Goethite is a stable mineral and can significantly retard the release of technetium to the environment from the IDF. The laboratory studies were conducted using reaction times of many days, which is typical of the environmental subsurface reactions that were the genesis of this new process. This study was the first step in considering adaptation of the slow laboratory steps to a larger-scale and faster process that could be conducted either within the WTP or within the effluent treatment facility (ETF). Two levels of scale-up tests were conducted (25x and 400x). The largest scale-up produced slurries of Fe-rich precipitates that contained rhenium as a nonradioactive surrogate for ⁹⁹Tc. The slurries were used in melter tests at Vitreous State Laboratory (VSL) to determine whether captured rhenium was less volatile in the vitrification process than rhenium in an unmodified feed. A critical step in the technetium immobilization process is to chemically reduce Tc(VII) in the pertechnetate (TcO₄⁻) to Tc(IV) by reaction with the
Large Scale CW ECRH Systems: Some considerations
Directory of Open Access Journals (Sweden)
Turkin Y.
2012-09-01
Electron Cyclotron Resonance Heating (ECRH) is a key component in the heating arsenal for next-step fusion devices like W7-X and ITER. These devices are equipped with superconducting coils and are designed to operate steady state. ECRH must thus operate in CW mode with large flexibility to comply with various physics demands such as plasma start-up, heating and current drive, as well as configuration and MHD control. The request for many different sophisticated applications results in a growing complexity, which is in conflict with the request for high availability, reliability, and maintainability. 'Advanced' ECRH systems must, therefore, comply with both the complex physics demands and operational robustness and reliability. The W7-X ECRH system is the first CW facility of an ITER-relevant size and is used as a test bed for advanced components. Proposals for future developments are presented together with improvements of gyrotrons, transmission components and launchers.
Carbon dioxide recovery: large scale design trends
Energy Technology Data Exchange (ETDEWEB)
Mariz, C. L.
1998-07-01
Carbon dioxide recovery from flue gas streams for use in enhanced oil recovery was examined, focusing on key design and operating issues and trends that appear promising in reducing the plant investment and operating costs associated with this source of carbon dioxide. The emphasis was on conventional processes using chemical solvents, such as the Fluor Daniel ECONAMINE FG(SM) process. Developments in new tower packings and solvents and their potential impact on plant and operating costs were reviewed, along with the effects of the flue gas source on these costs. Sample operating and capital recovery cost data are provided for a 1,000 tonne/day plant, a size large enough to support an enhanced oil recovery project. 11 refs., 4 figs.
Python for large-scale electrophysiology
Directory of Open Access Journals (Sweden)
Martin A Spacek
2009-01-01
Electrophysiology is increasingly moving towards highly parallel recording techniques which generate large data sets. We record extracellularly in vivo in cat and rat visual cortex with 54 channel silicon polytrodes, under time-locked visual stimulation, from localized neuronal populations within a cortical column. To help deal with the complexity of generating and analyzing these data, we used the Python programming language to develop three software projects: one for temporally precise visual stimulus generation (dimstim); one for electrophysiological waveform visualization and spike sorting (spyke); and one for spike train and stimulus analysis (neuropy). All three are open source and available for download (http://swindale.ecc.ubc.ca/code). The requirements and solutions for these projects differed greatly, yet we found Python to be well suited for all three. Here we present our software as a showcase of the extensive capabilities of Python in neuroscience.
Python for large-scale electrophysiology.
Spacek, Martin; Blanche, Tim; Swindale, Nicholas
2008-01-01
Electrophysiology is increasingly moving towards highly parallel recording techniques which generate large data sets. We record extracellularly in vivo in cat and rat visual cortex with 54-channel silicon polytrodes, under time-locked visual stimulation, from localized neuronal populations within a cortical column. To help deal with the complexity of generating and analysing these data, we used the Python programming language to develop three software projects: one for temporally precise visual stimulus generation ("dimstim"); one for electrophysiological waveform visualization and spike sorting ("spyke"); and one for spike train and stimulus analysis ("neuropy"). All three are open source and available for download (http://swindale.ecc.ubc.ca/code). The requirements and solutions for these projects differed greatly, yet we found Python to be well suited for all three. Here we present our software as a showcase of the extensive capabilities of Python in neuroscience.
Galaxy Formation and Large Scale Structure
Ellis, R
1999-01-01
Galaxies represent the visible fabric of the Universe and there has been considerable progress recently in both observational and theoretical studies. The underlying goal is to understand the present-day diversity of galaxy forms, masses and luminosities in the context of theories for the growth of structure. Popular models predict the bulk of the galaxy population assembled recently, in apparent agreement with optical and near-infrared observations. However, detailed conclusions rely crucially on the choice of the cosmological parameters. Although the star formation history has been sketched to early times, uncertainties remain, particularly in connecting to the underlying mass assembly rate. I discuss the expected progress in determining the cosmological parameters and address the question of which observations would most accurately check contemporary models for the origin of the Hubble sequence. The new generation of ground-based and future space-based large telescopes, equipped with instrumentation approp...
Large-Scale Pattern Discovery in Music
Bertin-Mahieux, Thierry
This work focuses on extracting patterns in musical data from very large collections. The problem is split in two parts. First, we build such a large collection, the Million Song Dataset, to provide researchers access to commercial-size datasets. Second, we use this collection to study cover song recognition which involves finding harmonic patterns from audio features. Regarding the Million Song Dataset, we detail how we built the original collection from an online API, and how we encouraged other organizations to participate in the project. The result is the largest research dataset with heterogeneous sources of data available to music technology researchers. We demonstrate some of its potential and discuss the impact it already has on the field. On cover song recognition, we must revisit the existing literature since there are no publicly available results on a dataset of more than a few thousand entries. We present two solutions to tackle the problem, one using a hashing method, and one using a higher-level feature computed from the chromagram (dubbed the 2DFTM). We further investigate the 2DFTM since it has potential to be a relevant representation for any task involving audio harmonic content. Finally, we discuss the future of the dataset and the hope of seeing more work making use of the different sources of data that are linked in the Million Song Dataset. Regarding cover songs, we explain how this might be a first step towards defining a harmonic manifold of music, a space where harmonic similarities between songs would be more apparent.
Large-scale portfolios using realized covariance matrix: evidence from the Japanese stock market
Masato Ubukata
2010-01-01
This paper examines effects of realized covariance matrix estimators based on high-frequency data on large-scale minimum-variance equity portfolio optimization. The main results are: (i) the realized covariance matrix estimators yield a lower standard deviation of large-scale portfolio returns than Bayesian shrinkage estimators based on monthly and daily historical returns; (ii) gains to switching to strategies using the realized covariance matrix estimators are higher for an investor with hi...
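The minimum-variance optimization underlying this abstract has a standard closed form, w = Σ⁻¹1 / (1ᵀΣ⁻¹1), sketched below with a toy simulated covariance. This is illustrative only; the paper's realized covariance estimators built from high-frequency data are far more elaborate.

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance weights: w = S^{-1} 1 / (1' S^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    x = np.linalg.solve(cov, ones)   # S^{-1} 1 without forming the inverse
    return x / x.sum()

rng = np.random.default_rng(1)
# toy stand-in for a realized covariance: sample covariance of simulated
# returns for 5 assets with different volatilities (invented numbers)
r = rng.normal(size=(1000, 5)) @ np.diag([1.0, 1.2, 0.8, 1.5, 0.9])
cov = r.T @ r / len(r)

w = min_variance_weights(cov)
var_mv = w @ cov @ w                 # portfolio variance at the optimum
w_eq = np.full(5, 0.2)               # equal-weight benchmark
var_eq = w_eq @ cov @ w_eq
```

By construction the optimized portfolio's variance can never exceed that of the equal-weight benchmark; the paper's question is how much the estimator of `cov` affects this gap at large scale.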
Decentralised stabilising controllers for a class of large-scale linear systems
Indian Academy of Sciences (India)
B C Jha; K Patralekh; R Singh
2000-12-01
A simple method for computing decentralised stabilising controllers for a class of large-scale (interconnected) linear systems has been developed. Decentralised controls are optimal controls at subsystem level and are generated from the solution of algebraic Riccati equations for decoupled subsystems resulting from a new aggregation-decomposition technique. The method has been illustrated through a numerical example of a large-scale linear system consisting of three subsystems each of the fourth order.
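The subsystem-level Riccati design mentioned in this abstract can be sketched with a discrete-time stand-in solved by simple fixed-point (value) iteration. The fourth-order system matrices below are invented for illustration and the discrete-time formulation is our assumption; the paper's aggregation-decomposition step is not reproduced here.

```python
import numpy as np

def dare_gain(A, B, Q, R, iters=500):
    """Solve the discrete algebraic Riccati equation by fixed-point iteration
    and return the optimal state-feedback gain K for u = -K x."""
    P = Q.copy()
    for _ in range(iters):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)
        P = Q + A.T @ P @ (A - B @ K)   # Riccati difference equation
    return K

# one decoupled fourth-order subsystem: a discretized integrator chain
# with the input entering the last state (illustrative numbers)
A = np.array([[1.0, 0.1, 0.0, 0.0],
              [0.0, 1.0, 0.1, 0.0],
              [0.0, 0.0, 1.0, 0.1],
              [0.0, 0.0, 0.0, 1.0]])
B = np.array([[0.0], [0.0], [0.0], [0.1]])
Q, R = np.eye(4), np.array([[1.0]])

K = dare_gain(A, B, Q, R)
# the local control stabilizes the subsystem: closed-loop spectral radius < 1
rho = np.max(np.abs(np.linalg.eigvals(A - B @ K)))
```

In the decentralised scheme each subsystem solves its own small Riccati equation like this, avoiding one large coupled design problem.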
Optimal Product Variety, Scale Effects and Growth
de Groot, H.L.F.; Nahuis, R.
1997-01-01
We analyze the social optimality of growth and product variety in a model of endogenous growth. The model contains two sectors, one assembly sector producing a homogenous consumption good, and one intermediate goods sector producing a differentiated input used in the assembly sector. Growth results
Large-scale parallel genome assembler over cloud computing environment.
Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong
2017-06-01
The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of traditional HPC cluster.
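The de Bruijn graph construction at the core of assemblers like GiGA can be sketched for the error-free, single-contig toy case: k-mers become edges between (k-1)-mer nodes, and walking the resulting non-branching path rebuilds the sequence. This is a serial teaching sketch, not GiGA's distributed Hadoop/Giraph implementation; all names are ours.

```python
from collections import defaultdict

def debruijn_assemble(reads, k):
    """Rebuild a sequence from error-free reads via a de Bruijn graph:
    each k-mer is an edge from its (k-1)-prefix node to its (k-1)-suffix node."""
    kmers = set()                          # deduplicate overlapping reads
    for read in reads:
        for i in range(len(read) - k + 1):
            kmers.add(read[i:i + k])
    graph = defaultdict(list)
    indeg = defaultdict(int)
    for kmer in kmers:
        graph[kmer[:-1]].append(kmer[1:])
        indeg[kmer[1:]] += 1
    # start at the unique source node (no incoming edge) and walk the path
    start = next(n for n in list(graph) if indeg[n] == 0)
    contig, node = start, start
    while graph[node]:
        node = graph[node].pop()
        contig += node[-1]                 # each edge adds one base
    return contig

genome = "ATGGCGTGCA"
reads = [genome[i:i + 5] for i in range(len(genome) - 4)]  # overlapping 5-mers
contig = debruijn_assemble(reads, 4)
```

Real assemblers must additionally handle sequencing errors, repeats (branching nodes) and reverse complements, which is where the large-scale graph processing comes in.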
Irradiation of onions on a large scale
Energy Technology Data Exchange (ETDEWEB)
Kawashima, Koji; Hayashi, Toru; Uozumi, J.; Sugimoto, Toshio; Aoki, Shohei
1984-03-01
A large number of onions of var. Kitamiki and Ohotsuku were irradiated in September followed by storage at 0 deg C or 5 deg C. The onions were shifted from cold-storage facilities to room temperature in mid-March or in mid-April in the following year. Their sprouting, rooting, spoilage characteristics and sugar content were observed during storage at room temperature. Most of the unirradiated onions sprouted either outside or inside bulbs during storage at room temperature, and almost all of the irradiated ones showed small buds with browning inside the bulb in mid-April irrespective of the storage temperature. Rooting and/or expansion of bottom were observed in the unirradiated samples. Although the irradiated materials did not have root, they showed expansion of bottom to some extent. Both the irradiated and unirradiated onions spoiled slightly unless they sprouted, and sprouted onions were easily spoiled. There was no difference in the glucose content between the unirradiated and irradiated onions, but the irradiated ones yielded higher sucrose content when stored at room temperature. Irradiation treatment did not have an obvious effect on the quality of freeze-dried onion slices. (author).
A Large Scale Virtual Gas Sensor Array
Ziyatdinov, Andrey; Fernández-Diaz, Eduard; Chaudry, A.; Marco, Santiago; Persaud, Krishna; Perera, Alexandre
2011-09-01
This paper describes a virtual sensor array that allows the user to generate synthetic gas sensor data while controlling a wide variety of the characteristics of the sensor array response: an arbitrary number of sensors, support for multi-component gas mixtures and full control of noise in the system such as sensor drift or sensor aging. The artificial sensor array response is inspired by the response of 17 polymeric sensors to three analytes over 7 months. The main trends in the synthetic gas sensor array, such as sensitivity, diversity, drift and sensor noise, are user controlled. Sensor sensitivity is modeled by an optionally linear or nonlinear (spline-based) method. The data generation toolbox is implemented in the open-source R language for statistical computing and can be freely accessed as an educational resource or benchmarking reference. The software package permits the design of scenarios with a very large number of sensors (over 10,000 sensels), which are employed in the testing and benchmarking of neuromorphic models in the Bio-ICT European project NEUROCHEM.
Superconducting materials for large scale applications
Energy Technology Data Exchange (ETDEWEB)
Scanlan, Ronald M.; Malozemoff, Alexis P.; Larbalestier, David C.
2004-05-06
Significant improvements in the properties of superconducting materials have occurred recently. These improvements are being incorporated into the latest generation of wires, cables, and tapes that are being used in a broad range of prototype devices. These devices include new high-field accelerator and NMR magnets, magnets for fusion power experiments, motors, generators, and power transmission lines. These prototype magnets are joining a wide array of existing applications that utilize the unique capabilities of superconducting magnets: accelerators such as the Large Hadron Collider, fusion experiments such as ITER, 930 MHz NMR, and 4 Tesla MRI. In addition, promising new materials such as MgB2 have been discovered and are being studied in order to assess their potential for new applications. In this paper, we will review the key developments that are leading to these new applications for superconducting materials. In some cases, the key factor is improved understanding or development of materials with significantly improved properties. An example of the former is the development of Nb3Sn for use in high field magnets for accelerators. In other cases, the development is being driven by the application. The aggressive effort to develop HTS tapes is being driven primarily by the need for materials that can operate at temperatures of 50 K and higher. The implications of these two drivers for further developments will be discussed. Finally, we will discuss the areas where further improvements are needed in order for new applications to be realized.
Large Scale Flows from Orion-South
Henney, W J; Zapata, L A; Garcia-Diaz, M T; Rodríguez, L F; Robberto, M; Zapata, Luis A.; Garcia-Diaz, Ma. T.; Rodriguez, Luis F.; Robberto, Massimo
2007-01-01
Multiple optical outflows are known to exist in the vicinity of the active star formation region called Orion-South (Orion-S). We have mapped the velocity of low ionization features in the brightest part of the Orion Nebula, including Orion-S, and imaged the entire nebula with the Hubble Space Telescope. These new data, combined with recent high resolution radio maps of outflows from the Orion-S region, allow us to trace the origin of the optical outflows. It is confirmed that HH 625 arises from the blueshifted lobe of the CO outflow from 136-359 in Orion-S while it is likely that HH 507 arises from the blueshifted lobe of the SiO outflow from the nearby source 135-356. It is likely that redshifted lobes are deflected within the photon dominated region behind the optical nebula. This leads to a possible identification of a new large shock to the southwest from Orion-S as being driven by the redshifted CO outflow arising from 137-408. The distant object HH 400 is seen to have two even further components and th...
Safeguards instruments for Large-Scale Reprocessing Plants
Energy Technology Data Exchange (ETDEWEB)
Hakkila, E.A. [Los Alamos National Lab., NM (United States); Case, R.S.; Sonnier, C. [Sandia National Labs., Albuquerque, NM (United States)
1993-06-01
Between 1987 and 1992 a multi-national forum known as LASCAR (Large Scale Reprocessing Plant Safeguards) met to assist the IAEA in development of effective and efficient safeguards for large-scale reprocessing plants. The US provided considerable input for safeguards approaches and instrumentation. This paper reviews and updates instrumentation of importance in measuring plutonium and uranium in these facilities.
Prospects for large scale electricity storage in Denmark
DEFF Research Database (Denmark)
Krog Ekman, Claus; Jensen, Søren Højgaard
2010-01-01
In a future power system with additional wind power capacity there will be an increased need for large scale power management as well as reliable balancing and reserve capabilities. Different technologies for large scale electricity storage provide solutions to the different challenges arising w...
Analyzing large-scale proteomics projects with latent semantic indexing.
Klie, Sebastian; Martens, Lennart; Vizcaíno, Juan Antonio; Côté, Richard; Jones, Phil; Apweiler, Rolf; Hinneburg, Alexander; Hermjakob, Henning
2008-01-01
Since the advent of public data repositories for proteomics data, readily accessible results from high-throughput experiments have been accumulating steadily. Several large-scale projects in particular have contributed substantially to the amount of identifications available to the community. Despite the considerable body of information amassed, very few successful analyses have been performed and published on this data, leveling off the ultimate value of these projects far below their potential. A prominent reason published proteomics data is seldom reanalyzed lies in the heterogeneous nature of the original sample collection and the subsequent data recording and processing. To illustrate that at least part of this heterogeneity can be compensated for, we here apply a latent semantic analysis to the data contributed by the Human Proteome Organization's Plasma Proteome Project (HUPO PPP). Interestingly, despite the broad spectrum of instruments and methodologies applied in the HUPO PPP, our analysis reveals several obvious patterns that can be used to formulate concrete recommendations for optimizing proteomics project planning as well as the choice of technologies used in future experiments. It is clear from these results that the analysis of large bodies of publicly available proteomics data by noise-tolerant algorithms such as the latent semantic analysis holds great promise and is currently underexploited.
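Latent semantic analysis as used above reduces, at its core, to a truncated singular value decomposition of a (here, protein-by-experiment) identification matrix. A minimal sketch with hypothetical data follows; the matrix shape and number of latent factors are illustrative assumptions, not the HUPO PPP dimensions.

```python
import numpy as np

# Toy "identification matrix": rows = proteins, columns = experiments,
# entries = 1 if the protein was identified in that experiment.
rng = np.random.default_rng(0)
X = (rng.random((20, 8)) > 0.6).astype(float)

# Latent semantic analysis = truncated SVD of the matrix.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                          # latent factors kept
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]    # rank-k approximation

# Experiments are compared in the k-dimensional latent space, which
# smooths over instrument- and protocol-specific noise.
exp_coords = np.diag(s[:k]) @ Vt[:k, :]        # (k, n_experiments)
print(exp_coords.shape)                        # → (2, 8)
```

Discarding the small singular values is exactly what makes the method noise-tolerant: per-laboratory idiosyncrasies concentrate in the trailing components that the truncation removes.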
Linearly Scaling 3D Fragment Method for Large-Scale Electronic Structure Calculations
Energy Technology Data Exchange (ETDEWEB)
Wang, Lin-Wang; Lee, Byounghak; Shan, Hongzhang; Zhao, Zhengji; Meza, Juan; Strohmaier, Erich; Bailey, David H.
2008-07-01
We present a new linearly scaling three-dimensional fragment (LS3DF) method for large scale ab initio electronic structure calculations. LS3DF is based on a divide-and-conquer approach, which incorporates a novel patching scheme that effectively cancels out the artificial boundary effects due to the subdivision of the system. As a consequence, the LS3DF program yields essentially the same results as direct density functional theory (DFT) calculations. The fragments of the LS3DF algorithm can be calculated separately with different groups of processors. This leads to almost perfect parallelization on tens of thousands of processors. After code optimization, we were able to achieve 35.1 Tflop/s, which is 39% of the theoretical speed on 17,280 Cray XT4 processor cores. Our 13,824-atom ZnTeO alloy calculation runs 400 times faster than a direct DFT calculation, even presuming that the direct DFT calculation can scale well up to 17,280 processor cores. These results demonstrate the applicability of the LS3DF method to material simulations, the advantage of using linearly scaling algorithms over conventional O(N^3) methods, and the potential for petascale computation using the LS3DF method.
LAMMPS strong scaling performance optimization on Blue Gene/Q
Energy Technology Data Exchange (ETDEWEB)
Coffman, Paul; Jiang, Wei; Romero, Nichols A.
2014-11-12
LAMMPS "Large-scale Atomic/Molecular Massively Parallel Simulator" is an open-source molecular dynamics package from Sandia National Laboratories. Significant performance improvements in strong-scaling and time-to-solution for this application on IBM's Blue Gene/Q have been achieved through computational optimizations of the OpenMP versions of the short-range Lennard-Jones term of the CHARMM force field and the long-range Coulombic interaction implemented with the PPPM (particle-particle-particle mesh) algorithm, enhanced by runtime parameter settings controlling thread utilization. Additionally, MPI communication performance improvements were made to the PPPM calculation by re-engineering the parallel 3D FFT to use MPICH collectives instead of point-to-point. Performance testing was done using an 8.4-million atom simulation scaling up to 16 racks on the Mira system at Argonne Leadership Computing Facility (ALCF). Speedups resulting from this effort were in some cases over 2x.
Distribution probability of large-scale landslides in central Nepal
Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi
2014-12-01
Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related to such large-scale landslides are 1) difficulties in their identification and delineation, 2) sources of small-scale failures, and 3) reactivation. Only a few scientific publications have been published concerning large-scale landslides in Nepal. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation of large-scale landslide distribution is also derived. The equation is validated by applying it to another area, where the area under the receiver operating characteristic curve of the landslide distribution probability is 0.699 and the distribution probability value can explain > 65% of existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
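The distribution-probability workflow the abstract describes (logistic regression on geomorphological parameters, validated by the area under the ROC curve) can be sketched on synthetic data; the predictors, coefficients, and sample sizes below are illustrative assumptions, not the paper's dataset.

```python
import numpy as np

# Synthetic stand-in for geomorphological predictors (e.g. slope,
# local relief, distance to drainage) and a binary landslide label.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(float)

# Logistic regression fitted by plain gradient descent on the log-loss.
Xb = np.hstack([np.ones((500, 1)), X])      # add intercept column
w = np.zeros(4)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / 500         # gradient step

p = 1.0 / (1.0 + np.exp(-Xb @ w))           # distribution probability

# Validation as in the paper: area under the ROC curve, computed here as
# the fraction of (landslide, non-landslide) pairs ranked correctly.
pos, neg = p[y == 1], p[y == 0]
auc = (pos[:, None] > neg[None, :]).mean()
print(auc > 0.65)                           # → True
```

The pairwise formulation of the AUC avoids constructing the full ROC curve and makes its probabilistic meaning explicit.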
Dimensional Scaling for Optimized CMUT Operations
DEFF Research Database (Denmark)
Lei, Anders; Diederichsen, Søren Elmin; la Cour, Mette Funding;
2014-01-01
This work presents a dimensional scaling study using numerical simulations, where the gap height and plate thickness of a CMUT cell are varied, while the lateral plate dimension is adjusted to maintain a constant transmit immersion center frequency of 5 MHz. Two cell configurations have been simulated...
A Practical Optimization Method for Designing Large PV Plants
DEFF Research Database (Denmark)
Kerekes, Tamas; Koutroulis, E.; Eyigun, S.
2011-01-01
Nowadays Photovoltaic (PV) plants have multi MW sizes, the biggest plants reaching tens of MW of capacity. Such large-scale PV plants are made up of several thousands of PV panels, each panel being in the range of 150-350W. This means that the design of a Large PV power plant is a big challenge...
Robust Optimal Adaptive Control Method with Large Adaptive Gain
Nguyen, Nhan T.
2009-01-01
In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time delay margin.
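The baseline the paper modifies, standard model-reference adaptive control with a gradient-based adaptive law, can be sketched on a scalar toy plant. Everything below (plant, reference model, gains) is an illustrative assumption; the paper's optimal control modification itself is not reproduced here.

```python
import numpy as np

# Scalar MRAC sketch: plant x' = a*x + u with a unknown to the
# controller; reference model xm' = -2*xm + 2*r. The gradient-based
# adaptive law k' = -gamma * e * x drives the tracking error
# e = x - xm toward zero (ideal gain here is k* = -3).
dt, T = 1e-3, 20.0
gamma = 2.0        # adaptive gain; larger gamma adapts faster but can
                   # excite high-frequency oscillations, the problem
                   # the paper's modification addresses
a = 1.0            # true (hidden) plant parameter
x = xm = 0.0
k = 0.0            # adaptive feedback gain
r = 1.0            # constant reference command

for _ in range(int(T / dt)):
    e = x - xm
    u = k * x + 2.0 * r
    x += dt * (a * x + u)
    xm += dt * (-2.0 * xm + 2.0 * r)
    k += dt * (-gamma * e * x)     # MIT-rule style gradient update

print(abs(x - xm))
```

A Lyapunov function V = e²/2 + (k − k*)²/(2γ) gives V' = −2e² for this system, so the tracking error decays even though k itself is never told the true plant parameter.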
Balancing modern Power System with large scale of wind power
Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela; Sørensen, Poul Ejnar
2014-01-01
Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from intermittent wind in large proportion may impact the control of power system balance and thus cause deviations in the power system frequency in small or islanded power systems, or in tie-line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns the s...
Optimal counterterrorism and the recruitment effect of large terrorist attacks
DEFF Research Database (Denmark)
Jensen, Thomas
2011-01-01
We analyze a simple dynamic model of the interaction between terrorists and authorities. Our primary aim is to study optimal counterterrorism and its consequences when large terrorist attacks lead to a temporary increase in terrorist recruitment. First, we show that an increase in counterterrorism makes it more likely that terrorist cells plan small rather than large attacks and therefore may increase the probability of a successful attack. Analyzing optimal counterterrorism we see that the recruitment effect makes authorities increase the level of counterterrorism after large attacks. Therefore, in periods following large attacks a new attack is more likely to be small compared to other periods. Finally, we analyze the long-run consequences of the recruitment effect. We show that it leads to more counterterrorism, more small attacks, and a higher sum of terrorism damage and counterterrorism costs...
Scale interactions in a mixing layer – the role of the large-scale gradients
Fiscaletti, D.
2016-02-15
© 2016 Cambridge University Press. The interaction between the large and the small scales of turbulence is investigated in a mixing layer, at a Reynolds number based on the Taylor microscale of , via direct numerical simulations. The analysis is performed in physical space, and the local vorticity root-mean-square (r.m.s.) is taken as a measure of the small-scale activity. It is found that positive large-scale velocity fluctuations correspond to large vorticity r.m.s. on the low-speed side of the mixing layer, whereas, they correspond to low vorticity r.m.s. on the high-speed side. The relationship between large and small scales thus depends on position if the vorticity r.m.s. is correlated with the large-scale velocity fluctuations. On the contrary, the correlation coefficient is nearly constant throughout the mixing layer and close to unity if the vorticity r.m.s. is correlated with the large-scale velocity gradients. Therefore, the small-scale activity appears closely related to large-scale gradients, while the correlation between the small-scale activity and the large-scale velocity fluctuations is shown to reflect a property of the large scales. Furthermore, the vorticity from unfiltered (small scales) and from low pass filtered (large scales) velocity fields tend to be aligned when examined within vortical tubes. These results provide evidence for the so-called 'scale invariance' (Meneveau & Katz, Annu. Rev. Fluid Mech., vol. 32, 2000, pp. 1-32), and suggest that some of the large-scale characteristics are not lost at the small scales, at least at the Reynolds number achieved in the present simulation.
A study of MLFMA for large-scale scattering problems
Hastriter, Michael Larkin
This research is centered on computational electromagnetics with a focus on solving large-scale problems accurately in a timely fashion using first principle physics. Error control of the translation operator in 3-D is shown. A parallel implementation of the multilevel fast multipole algorithm (MLFMA) was studied with respect to parallel efficiency and scaling. The large-scale scattering program (LSSP), based on the ScaleME library, was used to solve ultra-large-scale problems including a 200λ sphere with 20 million unknowns. As these large-scale problems were solved, techniques were developed to accurately estimate the memory requirements. Careful memory management is needed in order to solve these massive problems. The study of MLFMA in large-scale problems revealed significant errors that stemmed from inconsistencies in constants used by different parts of the algorithm. These were fixed to produce the most accurate data possible for large-scale surface scattering problems. Data was calculated on a missile-like target using both high frequency methods and MLFMA. This data was compared and analyzed to determine possible strategies to increase data acquisition speed and accuracy through multiple computation method hybridization.
Large-scale-vortex dynamos in planar rotating convection
Guervilly, Céline; Jones, Chris A
2016-01-01
Several recent studies have demonstrated how large-scale vortices may arise spontaneously in rotating planar convection. Here we examine the dynamo properties of such flows in rotating Boussinesq convection. For moderate values of the magnetic Reynolds number ($100 \lesssim Rm \lesssim 550$, with $Rm$ based on the box depth and the convective velocity), a large-scale (i.e. system-size) magnetic field is generated. The amplitude of the magnetic energy oscillates in time, out of phase with the oscillating amplitude of the large-scale vortex. The dynamo mechanism relies on those components of the flow that have length scales lying between that of the large-scale vortex and the typical convective cell size; smaller-scale flows are not required. The large-scale vortex plays a crucial role in the magnetic induction despite being essentially two-dimensional. For larger magnetic Reynolds numbers, the dynamo is small scale, with a magnetic energy spectrum that peaks at the scale of the convective cells. In this case, ...
Needs, opportunities, and options for large scale systems research
Energy Technology Data Exchange (ETDEWEB)
Thompson, G.L.
1984-10-01
The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26-27, 1984 in Pittsburgh with nine panel members and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.
Organised convection embedded in a large-scale flow
Naumann, Ann Kristin; Stevens, Bjorn; Hohenegger, Cathy
2017-04-01
In idealised simulations of radiative convective equilibrium, convection aggregates spontaneously from randomly distributed convective cells into organized mesoscale convection despite homogeneous boundary conditions. Although these simulations apply very idealised setups, the process of self-aggregation is thought to be relevant for the development of tropical convective systems. One feature that idealised simulations usually neglect is the occurrence of a large-scale background flow. In the tropics, organised convection is embedded in a large-scale circulation system, which advects convection in the along-wind direction and alters near-surface convergence in the convective areas. A large-scale flow also modifies the surface fluxes, which are expected to be enhanced upwind of the convective area if a large-scale flow is applied. Convective clusters that are embedded in a large-scale flow therefore experience an asymmetric component of the surface fluxes, which influences the development and the pathway of a convective cluster. In this study, we use numerical simulations with explicit convection and add a large-scale flow to the established setup of radiative convective equilibrium. We then analyse how aggregated convection evolves when being exposed to wind forcing. The simulations suggest that convective line structures are more prevalent if a large-scale flow is present and that convective clusters move considerably slower than advection by the large-scale flow would suggest. We also study the asymmetric component of convective aggregation due to enhanced surface fluxes, and discuss the pathway and speed of convective clusters as a function of the large-scale wind speed.
Francis, Lijo
2014-04-01
The flux performance of different hydrophobic microporous flat sheet commercial membranes made of polytetrafluoroethylene (PTFE) and polypropylene (PP) was tested for Red Sea water desalination using the direct contact membrane distillation (DCMD) process, under bench scale (high δT) and large scale module (low δT) operating conditions. Membranes were characterized for their surface morphology, water contact angle, thickness, porosity, pore size and pore size distribution. The DCMD process performance was optimized using a locally designed and fabricated module aiming to maximize the flux at different levels of operating parameters, mainly feed water and coolant inlet temperatures at different temperature differences across the membrane (δT). Water vapor flux of 88.8 kg/m²h was obtained using a PTFE membrane at high δT (60°C). In addition, the flux performance was compared to the first generation of a new locally synthesized and fabricated membrane made of a different class of polymer under the same conditions. A total salt rejection of 99.99% and boron rejection of 99.41% were achieved under extreme operating conditions. On the other hand, a detailed water characterization revealed that low molecular weight non-ionic molecules (ppb level) were transported with the water vapor molecules through the membrane structure. The membrane which provided the highest flux was then tested under large scale module operating conditions. The average flux of the latter study (low δT) was found to be eight times lower than that of the bench scale (high δT) operating conditions.
Large-scale streaming motions and microwave background anisotropies
Energy Technology Data Exchange (ETDEWEB)
Martinez-Gonzalez, E.; Sanz, J.L. (Cantabria Universidad, Santander (Spain))
1989-12-01
The minimal microwave background radiation anisotropies implied by the existence of large-scale streaming motions are calculated on each angular scale. These minimal anisotropies, due to the Sachs-Wolfe effect, are obtained for different experiments, and give quite different results from those found in previous work. They are not in conflict with present theories of galaxy formation. Upper limits are imposed on the scale at which large-scale streaming motions can occur by extrapolating results from present double-beam-switching experiments. 17 refs.
Probabilistic cartography of the large-scale structure
Leclercq, Florent; Lavaux, Guilhem; Wandelt, Benjamin
2015-01-01
The BORG algorithm is an inference engine that derives the initial conditions given a cosmological model and galaxy survey data, and produces physical reconstructions of the underlying large-scale structure by assimilating the data into the model. We present the application of BORG to real galaxy catalogs and describe the primordial and late-time large-scale structure in the considered volumes. We then show how these results can be used for building various probabilistic maps of the large-scale structure, with rigorous propagation of uncertainties. In particular, we study dynamic cosmic web elements and secondary effects in the cosmic microwave background.
Large scale and big data processing and management
Sakr, Sherif
2014-01-01
Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments.The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas
Performance Engineering of the Kernel Polynomial Method on Large-Scale CPU-GPU Systems
Kreutzer, Moritz; Wellein, Gerhard; Pieper, Andreas; Alvermann, Andreas; Fehske, Holger
2014-01-01
The Kernel Polynomial Method (KPM) is a well-established scheme in quantum physics and quantum chemistry to determine the eigenvalue density and spectral properties of large sparse matrices. In this work we demonstrate the high optimization potential and feasibility of peta-scale heterogeneous CPU-GPU implementations of the KPM. At the node level we show that it is possible to decouple the sparse matrix problem posed by KPM from main memory bandwidth both on CPU and GPU. To alleviate the effects of scattered data access we combine loosely coupled outer iterations with tightly coupled block sparse matrix multiple vector operations, which enables pure data streaming. All optimizations are guided by a performance analysis and modelling process that indicates how the computational bottlenecks change with each optimization step. Finally we use the optimized node-level KPM with a hybrid-parallel framework to perform large scale heterogeneous electronic structure calculations for novel topological materials on a pet...
Maximum length scale in density based topology optimization
DEFF Research Database (Denmark)
Lazarov, Boyan Stefanov; Wang, Fengwen
2017-01-01
The focus of this work is on two new techniques for imposing maximum length scale in topology optimization. Restrictions on the maximum length scale provide designers with full control over the optimized structure and open possibilities to tailor the optimized design for a broader range of manufacturing processes by fulfilling the associated technological constraints. One of the proposed methods is based on a combination of several filters and builds on top of the classical density filtering, which can be viewed as a low pass filter applied to the design parametrization. The main idea...
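The classical density filtering the method builds on can be sketched as a neighbourhood-weighted average of the design field, which acts as a low-pass filter. The cone (linearly decaying) weight below is the common textbook variant, an assumption rather than the paper's exact filter.

```python
import numpy as np

def density_filter(x: np.ndarray, radius: int) -> np.ndarray:
    """Classical density filter on a 2-D design field x: each element
    becomes a cone-weighted average of its neighbourhood, suppressing
    features smaller than the filter radius (a low-pass filter)."""
    n, m = x.shape
    out = np.zeros_like(x, dtype=float)
    for i in range(n):
        for j in range(m):
            wsum, vsum = 0.0, 0.0
            for a in range(max(0, i - radius), min(n, i + radius + 1)):
                for b in range(max(0, j - radius), min(m, j + radius + 1)):
                    d = np.hypot(i - a, j - b)
                    if d <= radius:
                        w = radius - d        # linear (cone) weight
                        wsum += w
                        vsum += w * x[a, b]
            out[i, j] = vsum / wsum
    return out

# A single isolated element is smeared out by the filter,
# while a uniform field passes through unchanged.
spike = np.zeros((7, 7)); spike[3, 3] = 1.0
smoothed = density_filter(spike, 2)
print(smoothed.max() < 1.0)                   # → True
```

Maximum-length-scale control, as the abstract notes, requires combining several such filters rather than applying one low-pass filter alone.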
Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.
Han, Lei; Zhang, Yu; Zhang, Tong
2016-08-01
The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with the ℓ1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications including climate and financial analysis, and another is that such an assumption can reduce the computational complexity when computing its inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties including the existence of an efficient solution in each iteration and the theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets including thousands of millions of variables show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem when achieving comparable log-likelihood on test data.
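The computational payoff of the low-rank-plus-diagonal assumption comes from the Woodbury identity: inverting Ω = D + LLᵀ costs O(pk²) instead of O(p³). This is an illustration of that identity, not the COP algorithm itself; the dimensions and random factors are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
p, k = 200, 5                     # dimension, rank of the low-rank part

D = np.diag(rng.uniform(1.0, 2.0, size=p))    # diagonal part
L = rng.normal(size=(p, k))                   # low-rank factor
Omega = D + L @ L.T                           # inverse covariance model

# Woodbury identity:
# (D + L L^T)^{-1} = D^{-1} - D^{-1} L (I + L^T D^{-1} L)^{-1} L^T D^{-1}
# Only a k x k matrix is ever inverted.
Dinv = np.diag(1.0 / np.diag(D))
core = np.linalg.inv(np.eye(k) + L.T @ Dinv @ L)
Sigma = Dinv - Dinv @ L @ core @ L.T @ Dinv

print(np.allclose(Sigma, np.linalg.inv(Omega)))    # → True
```

With k fixed and p in the millions, the k × k core solve is what keeps the per-iteration cost of a greedy rank-one pursuit tractable.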
Big Data Archives: Replication and synchronizing on a large scale
King, T. A.; Walker, R. J.
2015-12-01
Modern data archives provide unique challenges to replication and synchronization because of their large size. We collect more digital information today than any time before and the volume of data collected is continuously increasing. Some of these data are from unique observations, like those from planetary missions, that should be preserved for use by future generations. In addition, data from NASA missions are considered federal records and must be retained. While the data may be stored on resilient hardware (e.g. RAID systems), they must also be protected from local or regional disasters. Meeting this challenge requires creating multiple copies. This task is complicated by the fact that new data are constantly being added, creating what are called "active archives". Having reliable, high performance tools for replicating and synchronizing active archives in a timely fashion is critical to preservation of the data. When archives were smaller, using tools like bbcp, rsync and rcp worked fairly well. While these tools are effective, they are not optimized for synchronizing big data archives, and their poor performance at scale led us to develop a new tool designed specifically for big data archives. It combines the best features of git, bbcp, rsync and rcp. We call this tool "Mimic", and we discuss the design of the tool, performance comparisons and its use at NASA's Planetary Plasma Interactions (PPI) Node of the Planetary Data System (PDS).
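Mimic's internals are not spelled out in the abstract; the core rsync/git-like idea it builds on, comparing content-hash manifests to decide what to copy, can be sketched as follows. File names and hash values here are hypothetical.

```python
import hashlib
from pathlib import Path

def manifest(root: Path) -> dict:
    """Map each file's path (relative to root) to a SHA-256 content hash."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def plan_sync(src: dict, dst: dict) -> list:
    """Files that are new or changed at the source and must be copied.
    An active archive only grows, so nothing is deleted at the replica."""
    return sorted(path for path, h in src.items() if dst.get(path) != h)

# Hypothetical manifests of an active archive and its remote replica:
src = {"2015/day001.dat": "aa11", "2015/day002.dat": "bb22"}
dst = {"2015/day001.dat": "aa11"}
print(plan_sync(src, dst))    # → ['2015/day002.dat']
```

Exchanging manifests instead of file contents is what lets a synchronizer touch only the small delta an active archive gains between runs, rather than re-scanning terabytes over the wire.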
Constraining cosmological ultra-large scale structure using numerical relativity
Braden, Jonathan; Peiris, Hiranya V; Aguirre, Anthony
2016-01-01
Cosmic inflation, a period of accelerated expansion in the early universe, can give rise to large amplitude ultra-large scale inhomogeneities on distance scales comparable to or larger than the observable universe. The cosmic microwave background (CMB) anisotropy on the largest angular scales is sensitive to such inhomogeneities and can be used to constrain the presence of ultra-large scale structure (ULSS). We numerically evolve nonlinear inhomogeneities present at the beginning of inflation in full General Relativity to assess the CMB quadrupole constraint on the amplitude of the initial fluctuations and the size of the observable universe relative to a length scale characterizing the ULSS. To obtain a statistically significant number of simulations, we adopt a toy model in which inhomogeneities are injected along a preferred direction. We compute the likelihood function for the CMB quadrupole including both ULSS and the standard quantum fluctuations produced during inflation. We compute the posterior given...
The large-scale dynamics of magnetic helicity
Linkmann, Moritz
2016-01-01
In this Letter we investigate the dynamics of magnetic helicity in magnetohydrodynamic (MHD) turbulent flows, focusing on scales larger than the forcing scale. Our results show a non-local inverse cascade of magnetic helicity, which occurs directly from the forcing scale into the largest scales of the magnetic fields. We also observe that neither magnetic helicity nor energy is transferred to an intermediate range of scales sufficiently smaller than the container size and larger than the forcing scale. Thus, the statistical properties of this range of scales, which increases with scale separation, are shown to be described to a large extent by the zero-flux solutions of the absolute statistical equilibrium theory exhibited by the truncated ideal MHD equations.
USAGE OF DISSIMILARITY MEASURES AND MULTIDIMENSIONAL SCALING FOR LARGE SCALE SOLAR DATA ANALYSIS
National Aeronautics and Space Administration — Juan M. Banda, Rafal Angryk. ABSTRACT: This work describes the...
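The record's pairing of dissimilarity measures with multidimensional scaling can be sketched with classical MDS: double-centre the squared dissimilarities and embed using the top eigenpairs. The dissimilarity values below are hypothetical stand-ins for distances between solar-image descriptors.

```python
import numpy as np

# Hypothetical pairwise dissimilarities between four solar-image
# descriptors (e.g. histogram distances between active-region patches).
D = np.array([[0.0, 1.0, 4.0, 4.2],
              [1.0, 0.0, 3.8, 4.0],
              [4.0, 3.8, 0.0, 0.9],
              [4.2, 4.0, 0.9, 0.0]])

# Classical MDS: B = -1/2 * J D^2 J with J the centering matrix,
# then coordinates from the largest eigenpairs of B.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
w, V = np.linalg.eigh(B)
idx = np.argsort(w)[::-1][:2]                 # two largest eigenvalues
coords = V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

print(coords.shape)                           # → (4, 2)
```

Because MDS only needs the dissimilarity matrix, any of the candidate dissimilarity measures can be swapped in without changing the embedding step.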
The theory of large-scale ocean circulation
National Research Council Canada - National Science Library
Samelson, R. M
2011-01-01
"This is a concise but comprehensive introduction to the basic elements of the theory of large-scale ocean circulation for advanced students and researchers"-- "Mounting evidence that human activities...
Learning networks for sustainable, large-scale improvement.
McCannon, C Joseph; Perla, Rocco J
2009-05-01
Large-scale improvement efforts known as improvement networks offer structured opportunities for exchange of information and insights into the adaptation of clinical protocols to a variety of settings.
Personalized Opportunistic Computing for CMS at Large Scale
CERN. Geneva
2015-01-01
**Douglas Thain** is an Associate Professor of Computer Science and Engineering at the University of Notre Dame, where he designs large scale distributed computing systems to power the needs of advanced science and...
An Evaluation Framework for Large-Scale Network Structures
DEFF Research Database (Denmark)
Pedersen, Jens Myrup; Knudsen, Thomas Phillip; Madsen, Ole Brun
2004-01-01
An evaluation framework for large-scale network structures is presented, which facilitates evaluations and comparisons of different physical network structures. A number of quantitative and qualitative parameters are presented, and their importance to networks discussed. Choosing a network...
Some perspective on the Large Scale Scientific Computation Research
Institute of Scientific and Technical Information of China (English)
DU Qiang
2004-01-01
@@ The "Large Scale Scientific Computation (LSSC) Research"project is one of the State Major Basic Research projects funded by the Chinese Ministry of Science and Technology in the field ofinformation science and technology.
PetroChina to Expand Dushanzi Refinery on Large Scale
Institute of Scientific and Technical Information of China (English)
无
2005-01-01
A large-scale expansion project for PetroChina Dushanzi Petrochemical Company has been given the green light, a move which will make it one of the largest refineries and petrochemical complexes in the country.
Institute of Scientific and Technical Information of China (English)
肖运启; 贺贯举
2014-01-01
In recent years, a large number of wind farms have been connected to the grid, which causes difficulty in maintaining secure grid operation; consequently, dispatch centers often require the wind farms to limit their power output strictly. As a result, the operating mode of wind turbines and wind farms needs to shift from traditional maximum wind energy tracking to power-limited operation. However, given the strong nonlinear power characteristics of the turbines, changing the operating mode requires more complex control strategies for both the wind turbines and the wind farms. A wind farm power dispatch optimization strategy, considering the power-limited operating conditions of the wind turbines, is proposed. First, a small-disturbance analysis method is adopted to analyze the stability of the nonlinear wind turbine model under power-limited operation. Then, an assessment index is presented to evaluate the power-limited operating modes. Next, a multi-objective optimization model for wind farm power dispatch is established, and a solution strategy based on genetic algorithms is designed. Finally, numerical examples based on a practical case validate the effectiveness of the proposed dispatch strategy.
Efficient algorithms for collaborative decision making for large scale settings
DEFF Research Database (Denmark)
Assent, Ira
2011-01-01
Collaborative decision making is a successful approach in settings where data analysis and querying can be done interactively. In large scale systems with huge data volumes or many users, collaboration is often hindered by impractical runtimes. Existing work on improving collaboration focuses...... to bring about more effective and more efficient retrieval systems that support the users' decision making process. We sketch promising research directions for more efficient algorithms for collaborative decision making, especially for large scale systems....
Large-scale microwave anisotropy from gravitating seeds
Veeraraghavan, Shoba; Stebbins, Albert
1992-01-01
Topological defects could have seeded primordial inhomogeneities in cosmological matter. We examine the horizon-scale matter and geometry perturbations generated by such seeds in an expanding homogeneous and isotropic universe. Evolving particle horizons generally lead to perturbations around motionless seeds, even when there are compensating initial underdensities in the matter. We describe the pattern of the resulting large angular scale microwave anisotropy.
Temporal Variation of Large Scale Flows in the Solar Interior
Indian Academy of Sciences (India)
Sarbani Basu; H. M. Antia
2000-09-01
We attempt to detect short-term temporal variations in the rotation rate and other large scale velocity fields in the outer part of the solar convection zone using the ring diagram technique applied to Michelson Doppler Imager (MDI) data. The measured velocity field shows variations by about 10 m/s on the scale of few days.
Large-scale coastal impact induced by a catastrophic storm
DEFF Research Database (Denmark)
Fruergaard, Mikkel; Andersen, Thorbjørn Joest; Johannessen, Peter N
breaching. Our results demonstrate that violent, millennial-scale storms can trigger significant large-scale and long-term changes on barrier coasts, and that coastal changes assumed to take place over centuries or even millennia may occur in association with a single extreme storm event....
BFAST: an alignment tool for large scale genome resequencing.
Directory of Open Access Journals (Sweden)
Nils Homer
Full Text Available BACKGROUND: The new generation of massively parallel DNA sequencers, combined with the challenge of whole human genome resequencing, results in the need for rapid and accurate alignment of billions of short DNA sequence reads to a large reference genome. Speed is obviously of great importance, but equally important is maintaining alignment accuracy of short reads, in the 25-100 base range, in the presence of errors and true biological variation. METHODOLOGY: We introduce a new algorithm specifically optimized for this task, as well as a freely available implementation, BFAST, which can align data produced by any of the current sequencing platforms, allows for user-customizable levels of speed and accuracy, supports paired end data, and provides for efficient parallel and multi-threaded computation on a computer cluster. The new method is based on creating flexible, efficient whole genome indexes to rapidly map reads to candidate alignment locations, with arbitrary multiple independent indexes allowed to achieve robustness against read errors and sequence variants. The final local alignment uses a Smith-Waterman method, with gaps to support the detection of small indels. CONCLUSIONS: We compare BFAST to a selection of large-scale alignment tools -- BLAT, MAQ, SHRiMP, and SOAP -- in terms of both speed and accuracy, using simulated and real-world datasets. We show BFAST can achieve substantially greater sensitivity of alignment in the context of errors and true variants, especially insertions and deletions, and minimize false mappings, while maintaining adequate speed compared to other current methods. We show BFAST can align the amount of data needed to fully resequence a human genome, one billion reads, with high sensitivity and accuracy, on a modest computer cluster in less than 24 hours. BFAST is available at http://bfast.sourceforge.net.
Vector dissipativity theory for large-scale impulsive dynamical systems
Directory of Open Access Journals (Sweden)
Haddad Wassim M.
2004-01-01
Full Text Available Modern complex large-scale impulsive systems involve multiple modes of operation placing stringent demands on controller analysis of increasing complexity. In analyzing these large-scale systems, it is often desirable to treat the overall impulsive system as a collection of interconnected impulsive subsystems. Solution properties of the large-scale impulsive system are then deduced from the solution properties of the individual impulsive subsystems and the nature of the impulsive system interconnections. In this paper, we develop vector dissipativity theory for large-scale impulsive dynamical systems. Specifically, using vector storage functions and vector hybrid supply rates, dissipativity properties of the composite large-scale impulsive systems are shown to be determined from the dissipativity properties of the impulsive subsystems and their interconnections. Furthermore, extended Kalman-Yakubovich-Popov conditions, in terms of the impulsive subsystem dynamics and interconnection constraints, characterizing vector dissipativeness via vector system storage functions, are derived. Finally, these results are used to develop feedback interconnection stability results for large-scale impulsive dynamical systems using vector Lyapunov functions.
Output regulation of large-scale hydraulic networks with minimal steady state power consumption
Jensen, Tom Nørgaard; Wisniewski, Rafał; De Persis, Claudio; Kallesøe, Carsten Skovmose
2014-01-01
An industrial case study involving a large-scale hydraulic network is examined. The hydraulic network underlies a district heating system, with an arbitrary number of end-users. The problem of output regulation is addressed along with an optimization criterion for the control. The fact that the syste
Line Capacity Expansion and Transmission Switching in Power Systems With Large-Scale Wind Power
DEFF Research Database (Denmark)
Villumsen, Jonas Christoffer; Bronmo, Geir; Philpott, Andy B.
2013-01-01
of power generation. We allow for active switching of transmission elements to reduce congestion effects caused by Kirchhoff's voltage law. Results show that actively switching transmission lines may yield a better utilization of transmission networks with large-scale wind power and increase wind power...... penetration. Furthermore, it is shown that transmission switching is likely to affect the optimal line capacity expansion plan....
Experimental and numerical friction characterization for large-scale forming simulations
Hol, J.; Meinders, Vincent T.; van den Boogaard, Antonius H.; Hora, P.
2013-01-01
A new trend in forming simulation technology is the development of friction models applicable to large scale forming simulations. In this respect, the optimization of forming processes and the success of newly developed friction models requires a complete understanding of the tribological behavior
Institute of Scientific and Technical Information of China (English)
Qin Ni
2001-01-01
An NGTN method was proposed for solving large-scale sparse nonlinear programming (NLP) problems. This is a hybrid method of a truncated Newton direction and a modified negative gradient direction, which is suitable for handling sparse data structures and possesses a Q-quadratic convergence rate. The global convergence of this new method is proved, the convergence rate is further analysed, and the detailed implementation is discussed in this paper. Some numerical tests for solving truss optimization and large sparse problems are reported. The theoretical and numerical results show that the new method is efficient for solving large-scale sparse NLP problems.
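The hybrid direction described in this abstract can be illustrated with a small sketch: a truncated Newton direction is computed by running conjugate gradients on the Newton system, falling back to the negative gradient when negative curvature is encountered. This is a generic illustration, not the paper's NGTN algorithm; the name `hybrid_direction` and all parameters are hypothetical, and the paper's modified negative gradient and sparsity handling are omitted.

```python
import numpy as np

def hybrid_direction(grad, hess_vec, max_cg=20, tol=1e-8):
    """Search direction for minimization: a truncated-Newton step
    from conjugate gradients on H d = -g, with a fallback to the
    negative gradient on negative curvature (illustrative sketch)."""
    d = np.zeros_like(grad)
    r = -grad.copy()            # residual of H d = -g at d = 0
    p = r.copy()
    for _ in range(max_cg):
        Hp = hess_vec(p)
        curv = p @ Hp
        if curv <= 0:           # negative curvature: abandon Newton step
            return -grad if np.allclose(d, 0) else d
        alpha = (r @ r) / curv
        d += alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) < tol:
            break               # truncate: system solved accurately enough
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d
```

For a convex quadratic, the returned direction reduces to the exact Newton step, which jumps straight to the minimizer.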
Reliability Evaluation considering Structures of a Large Scale Wind Farm
DEFF Research Database (Denmark)
Shin, Je-Seok; Cha, Seung-Tae; Wu, Qiuwei
2012-01-01
evaluation on wind farm is necessarily required. Also, because large scale offshore wind farm has a long repair time and a high repair cost as well as a high investment cost, it is essential to take into account the economic aspect. One of methods to efficiently build and to operate wind farm is to construct......Wind energy is one of the most widely used renewable energy resources. Wind power has been connected to the grid as large scale wind farm which is made up of dozens of wind turbines, and the scale of wind farm is more increased recently. Due to intermittent and variable wind source, reliability...
Generation of Large-Scale Magnetic Fields by Small-Scale Dynamo in Shear Flows.
Squire, J; Bhattacharjee, A
2015-10-23
We propose a new mechanism for a turbulent mean-field dynamo in which the magnetic fluctuations resulting from a small-scale dynamo drive the generation of large-scale magnetic fields. This is in stark contrast to the common idea that small-scale magnetic fields should be harmful to large-scale dynamo action. These dynamos occur in the presence of a large-scale velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the "shear-current" effect. Given the inevitable existence of nonhelical small-scale magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help explain the generation of large-scale magnetic fields across a wide range of astrophysical objects.
GroFi: Large-scale fiber placement research facility
Directory of Open Access Journals (Sweden)
Christian Krombholz
2016-03-01
and processes for large-scale composite components. Due to the use of coordinated and simultaneously working layup units, a high flexibility of the research platform is achieved. This allows the investigation of new materials, technologies and processes on both small coupons and large components such as wing covers or fuselage skins.
Large Scale Survey Data in Career Development Research
Diemer, Matthew A.
2008-01-01
Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…
Cost Overruns in Large-scale Transportation Infrastructure Projects
DEFF Research Database (Denmark)
Cantarelli, Chantal C; Flyvbjerg, Bent; Molin, Eric J. E
2010-01-01
Managing large-scale transportation infrastructure projects is difficult due to frequent misinformation about the costs which results in large cost overruns that often threaten the overall project viability. This paper investigates the explanations for cost overruns that are given in the literature...
Lessons from Large-Scale Renewable Energy Integration Studies: Preprint
Energy Technology Data Exchange (ETDEWEB)
Bird, L.; Milligan, M.
2012-06-01
In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.
How large-scale subsidence affects stratocumulus transitions (discussion paper)
Van der Dussen, J.J.; De Roode, S.R.; Siebesma, A.P.
2015-01-01
Some climate modeling results suggest that the Hadley circulation might weaken in a future climate, causing a subsequent reduction in the large-scale subsidence velocity in the subtropics. In this study we analyze the cloud liquid water path (LWP) budget from large-eddy simulation (LES) results of
Planck intermediate results XLII. Large-scale Galactic magnetic fields
DEFF Research Database (Denmark)
Adam, R.; Ade, P. A. R.; Alves, M. I. R.
2016-01-01
Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. We use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured...
Large Scale Cosmological Anomalies and Inhomogeneous Dark Energy
Directory of Open Access Journals (Sweden)
Leandros Perivolaropoulos
2014-01-01
Full Text Available A wide range of large scale observations hint towards possible modifications of the standard cosmological model, which is based on a homogeneous and isotropic universe with a small cosmological constant and matter. These observations, also known as “cosmic anomalies”, include unexpected Cosmic Microwave Background perturbations on large angular scales, large dipolar peculiar velocity flows of galaxies (“bulk flows”), the measurement of inhomogeneous values of the fine structure constant on cosmological scales (“alpha dipole”), and other effects. The presence of the observational anomalies could either be a large statistical fluctuation in the context of ΛCDM or it could indicate a non-trivial departure from the cosmological principle on Hubble scales. Such a departure is very much constrained by cosmological observations for matter. For dark energy, however, there are no significant observational constraints for Hubble scale inhomogeneities. In this brief review I discuss some of the theoretical models that can naturally lead to inhomogeneous dark energy, their observational constraints, and their potential to explain the large scale cosmic anomalies.
Magnetic Helicity and Large Scale Magnetic Fields: A Primer
Blackman, Eric G
2014-01-01
Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly by the action of their dynamical influence, such as jets. Whether large scale fields are amplified in situ or a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. H...
Magnetic fields of our Galaxy on large and small scales
Han, Jinlin
2007-01-01
Magnetic fields have been observed on all scales in our Galaxy, from AU to kpc. With pulsar dispersion measures and rotation measures, we can directly measure the magnetic fields in a very large region of the Galactic disk. The results show that the large-scale magnetic fields are aligned with the spiral arms but reverse their directions many times from the inner-most arm (Norma) to the outer arm (Perseus). The Zeeman splitting measurements of masers in HII regions or star-formation regions not only show the structured fields inside clouds, but also have a clear pattern in the global Galactic distribution of all measured clouds which indicates the possible connection of the large-scale and small-scale magnetic fields.
A relativistic signature in large-scale structure
Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David
2016-09-01
In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales-even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.
Large-Scale Inverse Problems and Quantification of Uncertainty
Biegler, Lorenz; Ghattas, Omar
2010-01-01
Large-scale inverse problems and associated uncertainty quantification has become an important area of research, central to a wide range of science and engineering applications. Written by leading experts in the field, Large-scale Inverse Problems and Quantification of Uncertainty focuses on the computational methods used to analyze and simulate inverse problems. The text provides PhD students, researchers, advanced undergraduate students, and engineering practitioners with the perspectives of researchers in areas of inverse problems and data assimilation, ranging from statistics and large-sca
Highly Scalable Trip Grouping for Large Scale Collective Transportation Systems
DEFF Research Database (Denmark)
Gidofalvi, Gyozo; Pedersen, Torben Bach; Risch, Tore
2008-01-01
Transportation-related problems, like road congestion, parking, and pollution, are increasing in most cities. In order to reduce traffic, recent work has proposed methods for vehicle sharing, for example for sharing cabs by grouping "closeby" cab requests and thus minimizing transportation cost...... and utilizing cab space. However, the methods published so far do not scale to large data volumes, which is necessary to facilitate large-scale collective transportation systems, e.g., ride-sharing systems for large cities. This paper presents highly scalable trip grouping algorithms, which generalize previous...
Global Optimization Using Diffusion Perturbations with Large Noise Intensity
Institute of Scientific and Technical Information of China (English)
G. Yin; K. Yin
2006-01-01
This work develops an algorithm for global optimization. The algorithm is of gradient ascent type and uses random perturbations. In contrast to the annealing type procedures, the perturbation noise intensity is large. We demonstrate that by properly varying the noise intensity, approximations to the global maximum can be achieved. We also show that the expected time to reach the domain of attraction of the global maximum, which can be approximated by the solution of a boundary value problem, is finite. Discrete-time algorithms are proposed; recursive algorithms with occasional perturbations involving large noise intensity are developed. Numerical examples are provided for illustration.
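The kind of recursion this abstract describes, gradient ascent punctuated by occasional large-intensity random perturbations while tracking the best point visited, can be sketched as follows. This is a minimal illustration in the spirit of the abstract, not the authors' algorithm; `perturbed_ascent` and its parameter choices are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbed_ascent(f, grad, x0, steps=3000, lr=0.01,
                     noise=2.0, burst_every=200):
    """Gradient ascent with occasional large random perturbations.
    Large noise bursts let the iterate escape local maxima; the best
    visited point is tracked and returned (illustrative sketch)."""
    x = np.asarray(x0, float)
    best, fbest = x.copy(), f(x)
    for k in range(steps):
        x = x + lr * grad(x)                    # plain ascent step
        if k % burst_every == 0:                # occasional large kick
            x = x + noise * rng.standard_normal(x.shape)
        if f(x) > fbest:
            best, fbest = x.copy(), f(x)
    return best, fbest
```

Because the best iterate is recorded, the bursts cost nothing when they land badly; only escapes that improve the objective are kept.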
Large-Scale Integrated Carbon Nanotube Gas Sensors
Kim, Joondong
2012-01-01
Carbon nanotube (CNT) is a promising one-dimensional nanostructure for various nanoscale electronics. Additionally, nanostructures would provide a significant large surface area at a fixed volume, which is an advantage for high-responsive gas sensors. However, the difficulty in fabrication processes limits the CNT gas sensors for the large-scale production. We review the viable scheme for large-area application including the CNT gas sensor fabrication and reaction mechanism with a practical d...
Acoustic Studies of the Large Scale Ocean Circulation
Menemenlis, Dimitris
1999-01-01
Detailed knowledge of ocean circulation and its transport properties is prerequisite to an understanding of the earth's climate and of important biological and chemical cycles. Results from two recent experiments, THETIS-2 in the Western Mediterranean and ATOC in the North Pacific, illustrate the use of ocean acoustic tomography for studies of the large scale circulation. The attraction of acoustic tomography is its ability to sample and average the large-scale oceanic thermal structure, synoptically, along several sections, and at regular intervals. In both studies, the acoustic data are compared to, and then combined with, general circulation models, meteorological analyses, satellite altimetry, and direct measurements from ships. Both studies provide complete regional descriptions of the time-evolving, three-dimensional, large scale circulation, albeit with large uncertainties. The studies raise serious issues about existing ocean observing capability and provide guidelines for future efforts.
Prototype Vector Machine for Large Scale Semi-Supervised Learning
Energy Technology Data Exchange (ETDEWEB)
Zhang, Kai; Kwok, James T.; Parvin, Bahram
2009-04-29
Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
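The low-rank kernel approximation role played by prototype vectors can be illustrated with a Nyström-style sketch, in which a random subset of points stands in for the paper's more refined prototype selection. `prototype_kernel_approx` and its parameters are hypothetical; the paper's prototype-selection criteria and SSL regularizer are not reproduced here.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """RBF kernel matrix between the rows of X and the rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def prototype_kernel_approx(X, n_proto=20, seed=0):
    """Nystrom-style low-rank approximation K ~ C W^+ C^T using a
    random subset of points as 'prototypes' (illustrative stand-in
    for the paper's prototype selection)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=n_proto, replace=False)
    P = X[idx]                              # prototype vectors
    C = rbf(X, P)                           # n x m cross-kernel
    W = rbf(P, P)                           # m x m prototype kernel
    # rcond truncation guards against the ill-conditioning of W
    return C @ np.linalg.pinv(W, rcond=1e-8) @ C.T
```

With m prototypes, storage and most downstream algebra drop from O(n²) to O(nm), which is the scalability argument the abstract makes.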
Balancing modern Power System with large scale of wind power
DEFF Research Database (Denmark)
Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela
2014-01-01
Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from the intermittent wind in large proportion may impact the control of power system balance, and thus deviations in the power system...... to be analysed with improved analytical tools and techniques. This paper proposes techniques for the active power balance control in future power systems with the large scale wind power integration, where a power balancing model provides the hour-ahead dispatch plan with reduced planning horizon and the real time...... frequency in small or islanded power systems or tie line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns the secure and stable grid operation. To ensure the stable power system operation, the evolving power system has...
Transient stability of large scale system using efficient network reduction technique
Energy Technology Data Exchange (ETDEWEB)
Shenoy, D.L.; Belapurkar, R.K.; Raghavan, R.; Nanda, J.; Kothari, D.P.
1981-12-01
An efficient yet very simple technique incorporating Brown's axis discarding technique and optimal ordering of nodes for reducing a large scale power system has been described and its use in obtaining rapid transient stability solutions has been explained. The technique developed can also be used in short circuit analysis of a large power system when one is interested in finding out short circuit levels at a few buses in the system. 5 refs.
Institute of Scientific and Technical Information of China (English)
彭春华; 谢鹏; 陈臣
2014-01-01
ABSTRACT: With the increase of the penetration of photovoltaic power, the randomness and volatility of photovoltaic power output have a greater impact on power system optimization scheduling. To ensure the reliability of optimal scheduling, this paper applies box-set robust optimization theory to power system optimization scheduling. To coordinate the contradiction between the reliability and economy of system scheduling, the concept of an uncertainty budget is applied to achieve robust optimization over adjustable uncertainty intervals, making up for the conservatism of the box-set robust optimization method. A robust optimization model over adjustable uncertainty intervals is established for the power system to achieve coordination between reliability and economy. Based on the constructed optimization model, this paper derives an uncertainty-budget decision-making method, which can effectively reduce the blindness in uncertainty-budget decision-making. Finally, a differential evolution algorithm is employed to solve the dynamic optimization dispatch problems. The feasibility and rationality of the constituted model are verified by a testing example.
STEADY-STATE HIERARCHICAL INTELLIGENT CONTROL OF LARGE-SCALE INDUSTRIAL PROCESSES
Institute of Scientific and Technical Information of China (English)
WAN Baiwu
2004-01-01
This paper considers the fourth stage of development of hierarchical control of industrial processes, the intelligent control and optimization stage, and reviews what the author and his Group have been investigating for the past decade in the on-line steady-state hierarchical intelligent control of large-scale industrial processes (LSIP). This paper first gives a definition of intelligent control of large-scale systems, and then reviews the use of neural networks for identification and optimization, the use of expert systems to solve some kinds of hierarchical multi-objective optimization problems by an intelligent decision unit (ID), the use of fuzzy logic control, and the use of iterative learning control. Several implementation examples are introduced. This paper also reviews other main achievements of the Group. Finally, this paper gives a perspective on future development.
Benders' Decomposition Based Heuristics for Large-Scale Dynamic Quadratic Assignment Problems
Directory of Open Access Journals (Sweden)
Sirirat Muenvanichakul
2009-01-01
Full Text Available Problem statement: The Dynamic Quadratic Assignment Problem (DQAP) is an NP-hard problem. A Benders-decomposition-based heuristic method is applied to the equivalent mixed-integer linear programming problem of the original DQAP. Approach: Approximate Benders Decomposition (ABD) generates the ensemble of a subset of feasible layouts for Approximate Dynamic Programming (ADP) to determine the sub-optimal solution. A Trust-Region Constraint (TRC) for the master problem in ABD and a Successive Adaptation Procedure (SAP) were implemented to accelerate the convergence rate of the method. Results: The sub-optimal solutions of large-scale DQAPs from the method and its variants compared well with other metaheuristic methods. Conclusion: Overall performance of the method is comparable to other metaheuristic methods for large-scale DQAPs.
VESPA: Very large-scale Evolutionary and Selective Pressure Analyses
Directory of Open Access Journals (Sweden)
Andrew E. Webb
2017-06-01
Full Text Available Background Large-scale molecular evolutionary analyses of protein coding sequences require a number of preparatory inter-related steps, from finding gene families, to generating alignments and phylogenetic trees, and assessing selective pressure variation. Each phase of these analyses can represent significant challenges, particularly when working with entire proteomes (all protein coding sequences in a genome) from a large number of species. Methods We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results We have benchmarked VESPA and our results show that the method is consistent, performs well on both large scale and smaller scale datasets, and produces results in line with previously published datasets. Discussion Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.
Information fusion based optimal control for large civil aircraft system.
Zhen, Ziyang; Jiang, Ju; Wang, Xinhua; Gao, Chen
2015-03-01
Wind disturbance has a great influence on the landing security of a Large Civil Aircraft. Through simulation research and engineering experience, it can be found that PID control is not good enough to solve the problem of restraining the wind disturbance. This paper focuses on anti-wind attitude control for a Large Civil Aircraft in the landing phase. In order to improve the riding comfort and the flight security, an information fusion based optimal control strategy is presented to restrain the wind in the landing phase for maintaining attitudes and airspeed. Data of the Boeing 707 are used to establish a nonlinear model of a Large Civil Aircraft in the full set of variables, and then two linear models are obtained, divided into longitudinal and lateral equations. Based on engineering experience, the longitudinal channel adopts PID control and C inner control to keep the longitudinal attitude constant, and applies an autothrottle system for keeping the airspeed constant, while an information fusion based optimal regulator in the lateral control channel is designed to achieve lateral attitude holding. According to information fusion estimation, by fusing the hard constraint information of the system dynamic equations and the soft constraint information of the performance index function, an optimal estimate of the control sequence is derived. Based on this, an information fusion state regulator is deduced for a discrete-time linear system with disturbance. The simulation results of the nonlinear model of the aircraft indicate that the information fusion optimal control is better than traditional PID control, LQR control, and LQR control with integral action in anti-wind disturbance performance in the landing phase.
The Phoenix series large scale LNG pool fire experiments.
Energy Technology Data Exchange (ETDEWEB)
Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.
2010-12-01
The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about potential hazards to the public and property from accidental, and even more importantly, intentional spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data are much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards of a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates, for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills, 21 and 81 m in diameter, were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate, and therefore the physics and hazards of large LNG spills and fires.
Large-scale Contextual Effects in Early Human Visual Cortex
Directory of Open Access Journals (Sweden)
Sung Jun Joo
2012-10-01
A commonly held view about neurons in early visual cortex is that they serve as localized feature detectors. Here, however, we demonstrate that the responses of neurons in early visual cortex are sensitive to global visual patterns. Using multiple methodologies (psychophysics, fMRI, and EEG), we measured neural responses to an oriented Gabor ("target") embedded in various orientation patterns. Specifically, we varied whether a central target deviated from its context by changing distant orientations while leaving the immediately neighboring flankers unchanged. The results of psychophysical contrast adaptation and fMRI experiments show that a target that deviates from its context evokes more neural activity than a target that is grouped into an alternating pattern. For example, the neural response to a vertically oriented target was greater when it deviated from the orientation of the flankers (HHVHH) than when it was grouped into an alternating pattern (VHVHV). We then found that this pattern-sensitive response manifests in the earliest sensory component of the event-related potential to the target. Finally, in a forced-choice classification task with "noise" stimuli, perception was biased to "see" an orientation that deviates from its context. Our results show that neurons in early visual cortex are sensitive to large-scale global patterns in images in a way that is more sophisticated than localized feature detection. A reduced neural response to statistical redundancies in images is not only optimal from an information-theoretic perspective but also consistent with known energy constraints in neural processing.
Ultra-large scale cosmology with next-generation experiments
Alonso, David; Ferreira, Pedro G; Maartens, Roy; Santos, Mario G
2015-01-01
Future surveys of large-scale structure will be able to measure perturbations on the scale of the cosmological horizon, and so could potentially probe a number of novel relativistic effects that are negligibly small on sub-horizon scales. These effects leave distinctive signatures in the power spectra of clustering observables and, if measurable, would open a new window on relativistic cosmology. We quantify the size and detectability of the effects for a range of future large-scale structure surveys: spectroscopic and photometric galaxy redshift surveys, intensity mapping surveys of neutral hydrogen, and continuum surveys of radio galaxies. Our forecasts show that next-generation experiments, reaching out to redshifts z ~ 4, will not be able to detect previously-undetected general-relativistic effects from the single-tracer power spectra alone, although they may be able to measure the lensing magnification in the auto-correlation. We also perform a rigorous joint forecast for the detection of primordial non-...
Cosmology Large Angular Scale Surveyor (CLASS) Focal Plane Development
Chuss, D T; Amiri, M; Appel, J; Bennett, C L; Colazo, F; Denis, K L; Dünner, R; Essinger-Hileman, T; Eimer, J; Fluxa, P; Gothe, D; Halpern, M; Harrington, K; Hilton, G; Hinshaw, G; Hubmayr, J; Iuliano, J; Marriage, T A; Miller, N; Moseley, S H; Mumby, G; Petroff, M; Reintsema, C; Rostem, K; U-Yen, K; Watts, D; Wagner, E; Wollack, E J; Xu, Z; Zeng, L
2015-01-01
The Cosmology Large Angular Scale Surveyor (CLASS) will measure the polarization of the Cosmic Microwave Background to search for and characterize the polarized signature of inflation. CLASS will operate from the Atacama Desert and observe $\\sim$70% of the sky. A variable-delay polarization modulator (VPM) modulates the polarization at $\\sim$10 Hz to suppress the 1/f noise of the atmosphere and enable the measurement of the large angular scale polarization modes. The measurement of the inflationary signal across angular scales that span both the recombination and reionization features allows a test of the predicted shape of the polarized angular power spectra in addition to a measurement of the energy scale of inflation. CLASS is an array of telescopes covering frequencies of 38, 93, 148, and 217 GHz. These frequencies straddle the foreground minimum and thus allow the extraction of foregrounds from the primordial signal. Each focal plane contains feedhorn-coupled transition-edge sensors that simultaneously d...
Observational signatures of modified gravity on ultra-large scales
Baker, Tessa
2015-01-01
Extremely large surveys with future experiments like Euclid and the SKA will soon allow us to access perturbation modes close to the Hubble scale, with wavenumbers $k \\sim {\\cal H}$. If a modified gravity theory is responsible for cosmic acceleration, the Hubble scale is a natural regime for deviations from General Relativity (GR) to become manifest. The majority of studies to date have concentrated on the consequences of alternative gravity theories for the subhorizon, quasi-static regime, however. We investigate how modifications to the gravitational field equations affect perturbations around the Hubble scale, and how this translates into deviations of ultra large-scale relativistic observables from their GR behaviour. Adopting a model-independent ethos that relies only on the broad physical properties of gravity theories, we find that the deviations of the observables are small unless modifications to GR are drastic. The angular dependence and redshift evolution of the deviations is highly parameterisatio...
Seismic safety in conducting large-scale blasts
Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.
2017-09-01
In mining enterprises, a drilling-and-blasting method is used to prepare hard rock for excavation. As mining operations approach settlements, the negative effect of large-scale blasts increases. To assess the level of seismic impact of large-scale blasts, the scientific staff of Siberian State Industrial University carried out expert assessments for coal mines and iron ore enterprises. The magnitude of surface seismic vibrations caused by mass explosions was determined using seismic receivers and an analog-digital converter with recording on a laptop. The registration results of surface seismic vibrations during more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented. The maximum velocity values of the Earth's surface vibrations are determined. The safety evaluation of the seismic effect was carried out according to the permissible value of vibration velocity. For cases exceeding permissible values, recommendations were developed to reduce the level of seismic impact.
Human pescadillo induces large-scale chromatin unfolding
Institute of Scientific and Technical Information of China (English)
ZHANG Hao; FANG Yan; HUANG Cuifen; YANG Xiao; YE Qinong
2005-01-01
The human pescadillo gene encodes a protein with a BRCT domain. Pescadillo plays an important role in DNA synthesis, cell proliferation, and transformation. Since BRCT domains have been shown to induce large-scale chromatin unfolding, we tested the role of Pescadillo in the regulation of large-scale chromatin unfolding. To this end, we isolated the coding region of Pescadillo from human mammary MCF10A cells. Compared with the reported sequence, the isolated Pescadillo contains an in-frame deletion from amino acid 580 to 582. Targeting Pescadillo to an amplified, lac operator-containing chromosome region in the mammalian genome results in large-scale chromatin decondensation. This unfolding activity maps to the BRCT domain of Pescadillo. These data provide a new clue to understanding the vital role of Pescadillo.
Transport of Large Scale Poloidal Flux in Black Hole Accretion
Beckwith, Kris; Krolik, Julian H
2009-01-01
We perform a global, three-dimensional GRMHD simulation of an accretion torus embedded in a large scale vertical magnetic field orbiting a Schwarzschild black hole. This simulation investigates how a large scale vertical field evolves within a turbulent accretion disk and whether global magnetic field configurations suitable for launching jets and winds can develop. We identify a "coronal mechanism" of magnetic flux motion, which dominates the global flux evolution. In this coronal mechanism, magnetic stresses driven by orbital shear create large-scale half-loops of magnetic field that stretch radially inward and then reconnect, leading to discontinuous jumps in the location of magnetic flux. This mechanism is supplemented by a smaller amount of flux advection in the accretion flow proper. Because the black hole in this case does not rotate, the magnetic flux on the horizon determines the mean magnetic field strength in the funnel around the disk axis; this field strength is regulated by a combination of th...
First Mile Challenges for Large-Scale IoT
Bader, Ahmed
2017-03-16
The Internet of Things is large-scale by nature. This is manifested not only by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end, cellular networks are indeed a strong first mile candidate to accommodate the data tsunami to be generated by the IoT. However, in the cellular paradigm IoT devices are required to undergo random access procedures as a precursor to resource allocation. Such procedures impose a major bottleneck that hinders cellular networks' ability to support large-scale IoT. In this article, we shed light on the random access dilemma and present a case study based on experimental data as well as system-level simulations. Accordingly, a case is built for the latent need to revisit random access procedures. A call for action is motivated by listing a few potential remedies and recommendations.
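The contention bottleneck described above can be illustrated with a slotted-ALOHA-style model (a textbook simplification, not the article's measurement-based analysis): however many devices contend, the per-slot success probability peaks near 1/e when the access probability is tuned to 1/n.

```python
def slot_success_prob(n, p):
    """Probability that exactly one of n devices transmits in a slot,
    each transmitting independently with probability p."""
    return n * p * (1 - p) ** (n - 1)

# illustrative large-scale IoT cell: 10,000 contending devices
n = 10_000
peak = slot_success_prob(n, 1.0 / n)  # close to 1/e
```

Any access probability far from 1/n drives the success rate toward zero, which is one way to see why massive device counts strain random access.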
Large Scale Anomalies of the Cosmic Microwave Background with Planck
DEFF Research Database (Denmark)
Frejsel, Anne Mette
This thesis focuses on the large scale anomalies of the Cosmic Microwave Background (CMB) and their possible origins. The investigations consist of two main parts. The first part is on statistical tests of the CMB, and the consistency of both maps and power spectrum. We find that the Planck data are very consistent, while the WMAP 9-year release appears more contaminated by non-CMB residuals than the 7-year release. The second part is concerned with the anomalies of the CMB from two approaches. One is based on an extended inflationary model as the origin of one specific large scale anomaly, namely… Here we find evidence that the Planck CMB maps contain residual radiation in the loop areas, which can be linked to some of the large scale CMB anomalies: the point-parity asymmetry, the alignment of quadrupole and octupole, and the dipole modulation.
Large Scale Magnetohydrodynamic Dynamos from Cylindrical Differentially Rotating Flows
Ebrahimi, F
2015-01-01
For cylindrical differentially rotating plasmas threaded with a uniform vertical magnetic field, we study large-scale magnetic field generation from finite amplitude perturbations using analytic theory and direct numerical simulations. Analytically, we impose helical fluctuations, a seed field, and a background flow and use quasi-linear theory for a single mode. The predicted large-scale field growth agrees with numerical simulations in which the magnetorotational instability (MRI) arises naturally. The vertically and azimuthally averaged toroidal field is generated by a fluctuation-induced EMF that depends on differential rotation. Given fluctuations, the method also predicts large-scale field growth for MRI-stable rotation profiles and flows with no rotation but shear.
Large-Scale Image Analytics Using Deep Learning
Ganguly, S.; Nemani, R. R.; Basu, S.; Mukhopadhyay, S.; Michaelis, A.; Votava, P.
2014-12-01
High resolution land cover classification maps are needed to increase the accuracy of current land ecosystem and climate model outputs. Few studies demonstrate the state of the art in deriving very high resolution (VHR) land cover products. In addition, most methods rely heavily on commercial software that is difficult to scale given the region of study (e.g., continents to globe). Complexities in present approaches relate to (a) scalability of the algorithm, (b) large image data processing (compute and memory intensive), (c) computational cost, (d) massively parallel architecture, and (e) machine learning automation. In addition, VHR satellite datasets are of the order of terabytes, and features extracted from these datasets are of the order of petabytes. In the present study, we acquired the National Agricultural Imaging Program (NAIP) dataset for the Continental United States at a spatial resolution of 1 m. These data come as image tiles (a total of a quarter million image scenes with ~60 million pixels each) and have a total size of ~100 terabytes for a single acquisition. Features extracted from the entire dataset would amount to ~8-10 petabytes. In our proposed approach, we implemented a novel semi-automated machine learning algorithm rooted in the principles of "deep learning" to delineate the percentage of tree cover. In order to perform image analytics on such a granular system, it is mandatory to devise an intelligent archiving and query system for image retrieval, file structuring, metadata processing, and filtering of all available image scenes. Using the Open NASA Earth Exchange (NEX) initiative, a partnership with Amazon Web Services (AWS), we developed an end-to-end architecture for designing the database and the deep belief network (following the DistBelief computing model) to solve the grand challenge of scaling this process across the quarter million NAIP tiles that cover the entire Continental United States.
Large-scale microwave anisotropy from gravitating seeds
Energy Technology Data Exchange (ETDEWEB)
Veeraraghavan, S.; Stebbins, A. (Massachusetts, University, Amherst (United States) NASA/Fermilab Astrophysics Center, Batavia, Il (United States))
1992-08-01
Topological defects could have seeded primordial inhomogeneities in cosmological matter. The authors examine the horizon-scale matter and geometry perturbations generated by such seeds in an expanding homogeneous and isotropic universe. Evolving particle horizons generally lead to perturbations around motionless seeds, even when there are compensating initial underdensities in the matter. The authors describe the pattern of the resulting large angular scale microwave anisotropy. 17 refs.
Information Tailoring Enhancements for Large-Scale Social Data
2016-09-26
Final report for Contract No. N00014-15-P-5138, sponsored by ONR, covering the reporting period September 22, 2015 to September 16, 2016. The project pursued the goals of (i) further enhancing the capability to analyze unstructured social media data at scale and rapidly, and (ii) improving IAI social media software.
Systematic Literature Review of Agile Scalability for Large Scale Projects
Directory of Open Access Journals (Sweden)
Hina saeeda
2015-09-01
Among new methods, "agile" has emerged as the leading approach in the software industry for software development. In its different forms, agile is applied to handle issues such as low cost, tight time-to-market schedules, continuously changing requirements, communication and coordination, team size, and distributed environments. Agile has proved successful in small and medium size projects; however, it has several limitations when applied to large projects. The purpose of this study is to examine agile techniques in detail, finding and highlighting their restrictions for large projects with the help of a systematic literature review. The systematic literature review seeks answers to the following research questions: (1) How can agile approaches be made scalable and adoptable for large projects? (2) What existing methods, approaches, frameworks, and practices support the agile process in large scale projects? (3) What are the limitations of existing agile approaches, methods, frameworks, and practices with reference to large scale projects? This study will identify the current research problems of agile scalability for large projects by giving a detailed literature review of the identified problems and the existing work that addresses them, and will find out the limitations of that work in covering the identified problems. All results gathered will be summarized statistically, and based on these findings remedial work will be planned in the future for handling the identified limitations of agile approaches for large scale projects.
The Proposal of Scaling the Roles in Scrum of Scrums for Distributed Large Projects
Directory of Open Access Journals (Sweden)
Abeer M. AlMutairi
2015-07-01
Scrum of scrums is an approach used to scale the traditional Scrum methodology to fit the development of complex and large projects. However, scaling the roles of scrum members brings new challenges, especially in distributed and large software projects. This paper describes in detail the role of each scrum member in a scrum of scrums and proposes a solution that uses a dedicated product owner for each team and includes a sub-backlog. The main goal of the proposed solution is to optimize the role of the product owner for distributed large projects. The proposed changes will increase cohesiveness among scrum teams and will also eliminate duplication of work. A survey is used as the research design to evaluate the proposed solution. The results are encouraging and support the proposed solution. It is anticipated that the proposed solution will help software companies scale the Scrum methodology effectively for large and complex software projects.
Approximation of the optimal compensator for a large space structure
Mackay, M. K.
1983-01-01
This paper considers the approximation of the optimal compensator for a Large Space Structure. The compensator is based upon a solution to the Linear Stochastic Quadratic Regulator problem. Colocation of sensors and actuators is assumed. A small gain analytical solution for the optimal compensator is obtained for a single input/single output system, i.e., certain terms in the compensator can be neglected for sufficiently small gain. The compensator is calculated in terms of the kernel to a Volterra integral operator using a Neumann series. The calculation of the compensator is based upon the C sub 0 semigroup for the infinite dimensional system. A finite dimensional approximation of the compensator is, therefore, obtained through analysis of the infinite dimensional compensator which is a compact operator.
Drawing Large Graphs by Multilevel Maxent-Stress Optimization.
Meyerhenke, Henning; Nollenburg, Martin; Schulz, Christian
2017-03-29
Drawing large graphs appropriately is an important step for the visual analysis of data from real-world networks. Here we present a novel multilevel algorithm to compute a graph layout with respect to the maxent-stress metric proposed by Gansner et al. (2013) that combines layout stress and entropy. As opposed to previous work, we do not solve the resulting linear systems of the maxent-stress metric with a typical numerical solver. Instead we use a simple local iterative scheme within a multilevel approach. To accelerate local optimization, we approximate long-range forces and use shared-memory parallelism. Our experiments validate the high potential of our approach, which is particularly appealing for dynamic graphs. In comparison to the previously best maxent-stress optimizer, which is sequential, our parallel implementation is on average 30 times faster already for static graphs (and still faster if executed on a single thread) while producing a comparable solution quality.
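The "simple local iterative scheme" can be illustrated as follows: each node is repeatedly moved to the average of the positions that satisfy its desired distances to its neighbors. This sketch uses plain stress on a toy path graph, without the entropy term, the long-range force approximation, or the multilevel hierarchy of the paper:

```python
import math
import random

def local_stress_iteration(edges, dist, pos, sweeps=200):
    """Gauss-Seidel-style local stress optimization in 2D."""
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, []).append(v)
        nbrs.setdefault(v, []).append(u)
    for _ in range(sweeps):
        for u in pos:
            sx = sy = w = 0.0
            for v in nbrs[u]:
                dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
                norm = math.hypot(dx, dy) or 1e-9
                t = dist[(u, v)] if (u, v) in dist else dist[(v, u)]
                # target for u: at the desired distance t from v,
                # along the current direction from v to u
                sx += pos[v][0] + t * dx / norm
                sy += pos[v][1] + t * dy / norm
                w += 1.0
            pos[u] = (sx / w, sy / w)
    return pos

random.seed(0)
edges = [(0, 1), (1, 2), (2, 3)]          # a 4-node path graph
dist = {e: 1.0 for e in edges}            # desired unit edge lengths
pos = {i: (random.random(), random.random()) for i in range(4)}
pos = local_stress_iteration(edges, dist, pos)
```

After a few hundred sweeps the realized edge lengths settle near the desired distances, which is the behavior the multilevel scheme accelerates on large graphs.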
Large-scale synthesis of YSZ nanopowder by Pechini method
Indian Academy of Sciences (India)
Morteza Hajizadeh-Oghaz; Reza Shoja Razavi; Mohammadreza Loghman Estarki
2014-08-01
Yttria-stabilized zirconia nanopowders were synthesized on a relatively large scale using the Pechini method. In the present paper, nearly spherical yttria-stabilized zirconia nanopowders with tetragonal structure were synthesized by the Pechini process from zirconium oxynitrate hexahydrate, yttrium nitrate, citric acid, and ethylene glycol. The phase and structural analyses were accomplished by X-ray diffraction; morphological analysis was carried out by field emission scanning electron microscopy and transmission electron microscopy. The results revealed nearly spherical yttria-stabilized zirconia powder with tetragonal crystal structure and a chemical purity of 99.1%, as determined by inductively coupled plasma optical emission spectroscopy, on a large scale.
Practical Large Scale Syntheses of New Drug Candidates
Institute of Scientific and Technical Information of China (English)
Hui-Yin Li
2001-01-01
This presentation will focus on practical large scale syntheses of lead compounds and drug candidates from three major therapeutic areas from the DuPont Pharmaceuticals Research Laboratory: (1) DMP777, a selective, non-toxic, orally active human elastase inhibitor; (2) DMP754, a potent glycoprotein IIb/IIIa antagonist; (3) R-warfarin, the pure enantiomeric form of warfarin. The key technology used for preparing these drug candidates is asymmetric hydrogenation under very mild reaction conditions, which produced very high quality final products at large scale (>99% de, >99 A%, and >99 wt%). Some practical and GMP aspects of process development will also be discussed.
Fatigue Analysis of Large-scale Wind turbine
Directory of Open Access Journals (Sweden)
Zhu Yongli
2017-01-01
This paper studies fatigue damage of the top flange of a large-scale wind turbine generator. A finite element model of the top flange connection system is established with the finite element analysis software MSC.Marc/Mentat and its fatigue strain is analyzed; load simulation of the flange fatigue working condition is implemented with the Bladed software; the flange fatigue load spectrum is obtained with the rain-flow counting method; finally, fatigue analysis of the top flange is performed with the fatigue analysis software MSC.Fatigue and the Palmgren-Miner linear cumulative damage theory. The analysis provides a new approach to flange fatigue analysis for large-scale wind turbine generators and possesses practical engineering value.
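The last step of such a workflow, Palmgren-Miner linear cumulative damage over a rain-flow-counted load spectrum, reduces to a one-line sum. The cycle counts and S-N lives below are invented for illustration, not taken from the turbine analysis:

```python
def miner_damage(spectrum):
    """Palmgren-Miner rule: damage = sum of n_i / N_i over the spectrum.
    spectrum: list of (applied_cycles, cycles_to_failure) pairs."""
    return sum(n / N for n, N in spectrum)

# hypothetical rain-flow result: (cycles seen, allowable cycles at that stress range)
spectrum = [(2.0e5, 1.0e7), (5.0e4, 2.0e6), (1.0e3, 1.0e5)]
damage = miner_damage(spectrum)
survives = damage < 1.0  # common design criterion: cumulative damage below 1
```

Each stress range identified by rain-flow counting contributes its fraction of consumed life, and failure is predicted when the total reaches unity.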
Distributed chaos tuned to large scale coherent motions in turbulence
Bershadskii, A
2016-01-01
It is shown, using direct numerical simulation and laboratory experimental data, that distributed chaos is often tuned to large scale coherent motions in anisotropic inhomogeneous turbulence. The examples considered are: a fully developed turbulent boundary layer (range of coherence: $14 < y^{+} < 80$), turbulent thermal convection (in a horizontal cylinder), and Couette-Taylor flow. Two ways of tuning are described: one via the fundamental frequency (wavenumber) and another via a subharmonic (period doubling). In the second way, the large scale coherent motions are a natural component of distributed chaos. In all considered cases, spontaneous breaking of space translational symmetry is accompanied by reflexional symmetry breaking.
Large-scale liquid scintillation detectors for solar neutrinos
Energy Technology Data Exchange (ETDEWEB)
Benziger, Jay B.; Calaprice, Frank P. [Princeton University, Princeton, NJ (United States)]
2016-04-15
Large-scale liquid scintillation detectors are capable of providing spectral yields of the low energy solar neutrinos. These detectors require > 100 tons of liquid scintillator with high optical and radiopurity. In this paper requirements for low-energy neutrino detection by liquid scintillation are specified and the procedures to achieve low backgrounds in large-scale liquid scintillation detectors for solar neutrinos are reviewed. The designs, operations and achievements of Borexino, KamLAND and SNO+ in measuring the low-energy solar neutrino fluxes are reviewed. (orig.)
Fast paths in large-scale dynamic road networks
Nannicini, Giacomo; Barbier, Gilles; Krob, Daniel; Liberti, Leo
2007-01-01
Efficiently computing fast paths in large scale dynamic road networks (where dynamic traffic information is known over a part of the network) is a practical problem faced by several traffic information service providers who wish to offer a realistic fast path computation to GPS terminal enabled vehicles. The heuristic solution method we propose is based on a highway hierarchy-based shortest path algorithm for static large-scale networks; we maintain a static highway hierarchy and perform each query on the dynamically evaluated network.
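Underneath the highway hierarchy and the dynamic re-evaluation sits an ordinary shortest-path query. A minimal Dijkstra sketch over an invented toy road graph (the hierarchy machinery itself is beyond a few lines) might look like:

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest travel time from src to dst over weighted directed edges."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# toy road graph: edge weights are travel times (hypothetical)
graph = {
    "A": [("B", 4.0), ("C", 2.0)],
    "C": [("B", 1.0), ("D", 7.0)],
    "B": [("D", 3.0)],
}
travel_time = dijkstra(graph, "A", "D")
```

In the dynamic setting, the weights of edges covered by traffic information would be updated between queries, which is what makes precomputed hierarchies hard to keep exact.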
Cinlar Subgrid Scale Model for Large Eddy Simulation
Kara, Rukiye
2016-01-01
We construct a new subgrid scale (SGS) stress model for representing the small scale effects in large eddy simulation (LES) of incompressible flows. We use the covariance tensor for representing the Reynolds stress and include Clark's model for the cross stress. The Reynolds stress is obtained analytically from Cinlar random velocity field, which is based on vortex structures observed in the ocean at the subgrid scale. The validity of the model is tested with turbulent channel flow computed in OpenFOAM. It is compared with the most frequently used Smagorinsky and one-equation eddy SGS models through DNS data.
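For context, the Smagorinsky model the authors compare against closes the SGS stress with an eddy viscosity computed from the resolved strain rate, nu_t = (Cs * delta)^2 * |S|. A sketch with illustrative values (the constant Cs and the filter width delta here are assumptions, not the paper's settings):

```python
def smagorinsky_nu_t(strain_rate_mag, delta, cs=0.17):
    """Smagorinsky eddy viscosity from resolved strain-rate magnitude |S| (1/s),
    filter width delta (m), and model constant cs."""
    return (cs * delta) ** 2 * strain_rate_mag

# illustrative resolved strain rate of 50 1/s on a 1 cm filter
nu_t = smagorinsky_nu_t(strain_rate_mag=50.0, delta=0.01)
```

The Cinlar-based model replaces this purely local closure with a Reynolds stress derived analytically from a random vortex field, which is the point of comparison in the channel-flow test.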
Frenklach, Michael; Wang, Hai; Rabinowitz, Martin J.
1992-01-01
A method of systematic optimization, solution mapping, as applied to a large-scale dynamic model is presented. The basis of the technique is parameterization of model responses in terms of model parameters by simple algebraic expressions. These expressions are obtained by computer experiments arranged in a factorial design. The developed parameterized responses are then used in a joint multiparameter multidata-set optimization. A brief review of the mathematical background of the technique is given. The concept of active parameters is discussed. The technique is applied to determine an optimum set of parameters for a methane combustion mechanism. Five independent responses - comprising ignition delay times, pre-ignition methyl radical concentration profiles, and laminar premixed flame velocities - were optimized with respect to thirteen reaction rate parameters. The numerical predictions of the optimized model are compared to those computed with several recent literature mechanisms. The utility of the solution mapping technique in situations where the optimum is not unique is also demonstrated.
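The parameterize-then-optimize idea can be sketched end to end: evaluate the expensive model at a few design points, fit a simple algebraic surrogate through the responses, and minimize the surrogate instead of the model. The stand-in "model" below is a toy function, not the methane mechanism:

```python
def expensive_model(k):            # stand-in for a full kinetics simulation
    return (k - 2.0) ** 2 + 1.0    # response, e.g. an ignition-delay error

# three-level design in a single active parameter k
design = [1.0, 2.5, 4.0]
responses = [expensive_model(k) for k in design]

# exact quadratic surrogate r(k) = a*k^2 + b*k + c through the three points
x0, x1, x2 = design
y0, y1, y2 = responses
a = ((y2 - y0) / (x2 - x0) - (y1 - y0) / (x1 - x0)) / (x2 - x1)
b = (y1 - y0) / (x1 - x0) - a * (x0 + x1)
c = y0 - a * x0 ** 2 - b * x0
k_opt = -b / (2 * a)               # minimizer of the surrogate
```

In the real method the surrogate is a joint polynomial in many rate parameters fit over a factorial design, and the optimization is over all responses at once; this sketch only shows the mechanics for one parameter.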
Visualizing large-scale uncertainty in astrophysical data.
Li, Hongwei; Fu, Chi-Wing; Li, Yinggang; Hanson, Andrew
2007-01-01
Visualization of uncertainty or error in astrophysical data is seldom available in simulations of astronomical phenomena, and yet almost all rendered attributes possess some degree of uncertainty due to observational error. Uncertainties associated with spatial location typically vary significantly with scale and thus introduce further complexity in the interpretation of a given visualization. This paper introduces effective techniques for visualizing uncertainty in large-scale virtual astrophysical environments. Building upon our previous transparently scalable visualization architecture, we develop tools that enhance the perception and comprehension of uncertainty across wide scale ranges. Our methods include a unified color-coding scheme for representing log-scale distances and percentage errors, an ellipsoid model to represent positional uncertainty, an ellipsoid envelope model to expose trajectory uncertainty, and a magic-glass design supporting the selection of ranges of log-scale distance and uncertainty parameters, as well as an overview mode and a scalable WIM tool for exposing the magnitudes of spatial context and uncertainty.
Modified Bottleneck-Based Procedure for Large-Scale Flow-Shop Scheduling Problems with a Bottleneck
Institute of Scientific and Technical Information of China (English)
ZUO Yan; GU Hanyu; XI Yugeng
2006-01-01
A new bottleneck-based heuristic for large-scale flow-shop scheduling problems with a bottleneck is proposed; it is simpler but more tailored than the shifting bottleneck (SB) procedure. In this algorithm, a schedule for the bottleneck machine is first constructed optimally, and the non-bottleneck machines are then scheduled around the bottleneck schedule by effective dispatching rules. Computational results show that, compared with the SB procedure, the modified bottleneck-based procedure achieves a tradeoff between solution quality and computational time for medium-size problems. Furthermore, it obtains a good solution in quite a short time for large-scale scheduling problems.
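The bottleneck-first idea can be sketched as: order the jobs by a rule on the bottleneck machine, then let every machine process jobs in that sequence and evaluate the permutation flow-shop makespan. The longest-processing-time rule below is an assumed placeholder, not the paper's actual bottleneck schedule.

```python
def makespan(seq, proc):
    # proc[j][m] = processing time of job j on machine m.
    # Standard permutation flow-shop recurrence with a rolling completion array.
    n_machines = len(proc[0])
    finish = [0.0] * n_machines
    for j in seq:
        for m in range(n_machines):
            start = max(finish[m], finish[m - 1] if m > 0 else 0.0)
            finish[m] = start + proc[j][m]
    return finish[-1]

def bottleneck_heuristic(proc, bottleneck):
    # Placeholder rule: longest processing time first on the bottleneck machine.
    return sorted(range(len(proc)), key=lambda j: -proc[j][bottleneck])
```

A real implementation would construct the bottleneck schedule optimally and use richer dispatching rules on the remaining machines.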
Quantization Audio Watermarking with Optimal Scaling on Wavelet Coefficients
Chen, S -T; Tu, S -Y
2011-01-01
In recent years, the discrete wavelet transform (DWT) has provided a useful platform for digital information hiding and copyright protection, and many DWT-based algorithms have been proposed for this purpose. The performance of these algorithms is measured in terms of signal-to-noise ratio (SNR) and bit-error rate (BER), which quantify the quality and the robustness of the embedded audio. However, there is a tradeoff between embedded-audio quality and robustness, and this tradeoff is a signal processing problem in the wavelet domain. To solve this problem, this study presents an optimization-based scaling scheme using optimal multi-coefficient quantization in the wavelet domain. Firstly, the multi-coefficient quantization technique is rewritten as an equation with arbitrary scaling on the DWT coefficients, and the SNR is set as a performance index. Then, a functional connecting the equation and the performance index is derived. Secondly, the Lagrange principle is used to obtain the optimal solution. Thirdly, the scal...
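The quantization step underlying such schemes can be illustrated with basic quantization-index modulation (QIM) on a single coefficient: the coefficient is snapped to a lattice whose offset encodes the watermark bit, and detection re-quantizes. This is a generic textbook sketch, not the paper's optimized multi-coefficient method; the step size is an assumption.

```python
def qim_embed(coeff, bit, step=0.5):
    # Even multiples of `step` encode bit 0, odd multiples encode bit 1.
    q = round(coeff / step)
    if q % 2 != bit:
        # Move to the nearest lattice point with the correct parity.
        q += 1 if coeff / step >= q else -1
    return q * step

def qim_detect(coeff, step=0.5):
    return int(round(coeff / step)) % 2
```

A larger `step` increases robustness (distance between the two lattices) at the cost of distortion, which is exactly the quality/robustness tradeoff the abstract describes.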
Large-Scale Agriculture and Outgrower Schemes in Ethiopia
DEFF Research Database (Denmark)
Wendimu, Mengistu Assefa
As a result of the growing demand for food, feed and industrial raw materials in the first decade of this century, and the usually welcoming policies regarding investors amongst the governments of developing countries, there has been a renewed interest in agriculture and an increase in large...... to ‘land grabbing’ for large-scale farming (i.e. outgrower schemes and contract farming could modernise agricultural production while allowing smallholders to maintain their land ownership), to integrate them into global agro-food value chains and to increase their productivity and welfare. However......, the impact of large-scale agriculture and outgrower schemes on productivity, household welfare and wages in developing countries is highly contentious. Chapter 1 of this thesis provides an introduction to the study, while also reviewing the key debate in the contemporary land ‘grabbing’ and historical large...
Interaction Analysis and Decomposition Principle for Control Structure Design of Large-scale Systems
Institute of Scientific and Technical Information of China (English)
罗雄麟; 刘雨波; 许锋
2014-01-01
Industrial processes are mostly large-scale systems of high order. They typically use a fully centralized control strategy, whose parameters are difficult to tune. In the design of large-scale systems, decomposition according to the interaction between input and output variables is the first step and the basis for the selection of the control structure. In this paper, a decomposition principle for processes in large-scale systems is proposed for the design of the control structure. A new variable pairing method is presented that considers both the steady-state information and the dynamic response of the large-scale system. By selecting threshold values, the relation matrix can be transformed into adjacency matrices, which directly measure the coupling among different loops. The optimal number of controllers can be obtained after decomposing the large-scale system. A practical example is used to demonstrate the validity and feasibility of the proposed interaction decomposition principle in large-scale process systems.
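The threshold-and-decompose step can be sketched as follows: an interaction matrix between variables is thresholded into an adjacency matrix, and the connected components of the resulting graph give candidate control loops. The interaction values and threshold below are illustrative assumptions, not from the paper.

```python
def decompose(interaction, threshold):
    # Threshold the interaction matrix into a 0/1 adjacency matrix.
    n = len(interaction)
    adj = [[1 if abs(interaction[i][j]) >= threshold else 0 for j in range(n)]
           for i in range(n)]
    # Union-find over variables linked by a significant interaction.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for i in range(n):
        for j in range(n):
            if adj[i][j]:
                parent[find(i)] = find(j)
    # Each connected component is a candidate (multi-)loop for one controller.
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```

The number of components then suggests how many (possibly multivariable) controllers the decomposed system needs.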
Complexity Measurement of Large-Scale Software System Based on Complex Network
Directory of Open Access Journals (Sweden)
Dali Li
2014-05-01
As software systems grow in complexity, traditional measurements no longer meet the requirements of developers, who need to control software quality effectively and guarantee the normal operation of the software system. How to measure the complexity of a large-scale software system has therefore become a challenging problem. To solve it, developers first need a good method for measuring the complexity of the software system; only then can the software quality and the software structure be controlled and optimized. Noting that complex network theory offers a new theoretical understanding and a new perspective on this kind of complexity problem, this work discusses the complexity phenomenon in large-scale software systems. On this basis, several complexity measurements of large-scale software systems are put forward from the perspectives of static and dynamic structure. Furthermore, we find some potential complexity characteristics in large-scale software networks through numerical simulations. The proposed measurement methods offer guidance for the development of today's large-scale software systems. In addition, this paper presents a new technique for the structural complexity measurement of large-scale software systems.
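The static-structure viewpoint treats modules as nodes and dependencies as directed edges, then computes network statistics over that graph. A minimal sketch with assumed module names (not taken from the paper):

```python
from collections import Counter

def network_stats(edges):
    # Treat each (src, dst) pair as a directed dependency edge.
    out_deg, in_deg = Counter(), Counter()
    nodes = set()
    for src, dst in edges:
        out_deg[src] += 1
        in_deg[dst] += 1
        nodes.update((src, dst))
    avg_degree = len(edges) / len(nodes)
    # The most-depended-upon module is a structural "hub".
    hub = max(nodes, key=lambda n: in_deg[n])
    return avg_degree, hub

# Hypothetical dependency edges of a small system.
deps = [("ui", "core"), ("net", "core"), ("core", "utils"),
        ("ui", "utils"), ("net", "utils")]
```

Richer measurements in the same spirit include degree distributions, clustering coefficients, and path lengths over the software network.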
A Review of Scaling Agile Methods in Large Software Development
Directory of Open Access Journals (Sweden)
Mashal Alqudah
2016-12-01
Agile methods such as the Dynamic Systems Development Method (DSDM), Extreme Programming (XP), Scrum, Agile Modeling (AM) and Crystal Clear enable small teams to execute their assigned tasks at their best. However, larger organizations also aim to adopt Agile methods, even though their application has predominantly been tailored to small teams. The scope in which large firms are interested extends the original Agile methods to cover larger teams, coordination, communication among teams and customers, and oversight. Choosing a particular software method is always challenging for software companies, whether start-ups, small-to-medium firms or large enterprises. Most large organizations develop large-scale projects with teams of teams, or teams of teams of teams. Therefore, most recognized first-generation Agile methods such as XP and Scrum need to be modified before they are employed in large organizations, which is not an easy task. Accomplishing this task requires large organizations to pick and choose from the scaling Agile methods to accommodate a single vision across large and multiple teams. Making the right choice requires a thorough understanding of each method, including its strengths and weaknesses and when and how it makes sense. The main aim of this paper is therefore to review the existing literature on the scaling Agile methods in use by defining, discussing and comparing them. In-depth reviews of the literature were performed to juxtapose the methods in an impartial manner, and content analysis was used to analyse the resulting data. The results indicate that DAD, LeSS, LeSS Huge, SAFe, Spotify, Nexus and RAGE are the scaling Agile methods adopted at large organizations. They appear similar, but there are discrepancies among them in team size, training and certification, methods and practices adopted, technical practices required and organizational
Institute of Scientific and Technical Information of China (English)
刘琛
2016-01-01
With the deepening of problems such as fossil energy depletion and air pollution, wind power generation has developed rapidly at the present stage. Based on the current state of wind power development in China, this paper conducts an in-depth study of the optimal allocation of energy storage for wind power systems.
Network synchronization: optimal and pessimal scale-free topologies
Energy Technology Data Exchange (ETDEWEB)
Donetti, Luca [Departamento de Electronica y Tecnologia de Computadores and Instituto de Fisica Teorica y Computacional Carlos I, Facultad de Ciencias, Universidad de Granada, 18071 Granada (Spain); Hurtado, Pablo I; Munoz, Miguel A [Departamento de Electromagnetismo y Fisica de la Materia and Instituto Carlos I de Fisica Teorica y Computacional Facultad de Ciencias, Universidad de Granada, 18071 Granada (Spain)], E-mail: mamunoz@onsager.ugr.es
2008-06-06
By employing a recently introduced optimization algorithm we construct optimally synchronizable (unweighted) networks for any given scale-free degree distribution. We explore how the optimization process affects degree-degree correlations and observe a generic tendency toward disassortativity. Still, we show that there is not a one-to-one correspondence between synchronizability and disassortativity. On the other hand, we study the nature of optimally un-synchronizable networks, that is, networks whose topology minimizes the range of stability of the synchronous state. The resulting 'pessimal networks' turn out to have a highly assortative string-like structure. We also derive a rigorous lower bound for the Laplacian eigenvalue ratio controlling synchronizability, which helps in understanding the impact of degree correlations on network synchronizability.
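The synchronizability measure referred to above is the eigenratio lambda_N / lambda_2 of the graph Laplacian (smaller means a wider stability range under the master-stability-function framework). A minimal sketch of computing it, with small example graphs as assumed test cases:

```python
import numpy as np

def eigenratio(adj):
    # Laplacian L = D - A for an undirected, unweighted adjacency matrix.
    A = np.asarray(adj, float)
    L = np.diag(A.sum(axis=1)) - A
    eig = np.sort(np.linalg.eigvalsh(L))
    # lambda_N / lambda_2: largest eigenvalue over the smallest nonzero one.
    return eig[-1] / eig[1]

# Complete graph on 4 nodes: all nonzero Laplacian eigenvalues equal n,
# so the eigenratio is 1 (the best possible).
K4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
```

Optimization procedures like the one in the abstract search the space of networks with a fixed degree sequence for topologies that minimize (or, for "pessimal" networks, maximize) this ratio.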