WorldWideScience

Sample records for multi-scale problem arising

  1. Multi-objective convex programming problem arising in multivariate ...

    African Journals Online (AJOL)


    Multi-objective convex programming problem arising in ... However, although the consideration of multiple objectives may seem a novel concept, virtually any nontrivial ..... Solving multiobjective programming problems by discrete optimization.

  2. OBJECT-ORIENTED CHANGE DETECTION BASED ON MULTI-SCALE APPROACH

    Directory of Open Access Journals (Sweden)

    Y. Jia

    2016-06-01

Full Text Available Change detection for remote sensing images means quantitatively analysing change information and recognizing the change types of surface-coverage data in different time phases. With the advent of high-resolution remote sensing imagery, object-oriented change detection methods have emerged. In this paper, we investigate a multi-scale approach for high-resolution images, comprising multi-scale segmentation, multi-scale feature selection and multi-scale classification. Experimental results show that this method outperforms the traditional single-scale method for high-resolution remote sensing image change detection.

  3. A Multi-Depot Two-Echelon Vehicle Routing Problem with Delivery Options Arising in the Last Mile Distribution

    NARCIS (Netherlands)

    Zhou, Lin; Baldacci, Roberto; Vigo, Daniele; Wang, Xu

    2018-01-01

    In this paper, we introduce a new city logistics problem arising in the last mile distribution of e-commerce. The problem involves two levels of routing problems. The first requires a design of the routes for a vehicle fleet located at the depots to transport the customer demands to a subset of the

  4. A decomposition heuristics based on multi-bottleneck machines for large-scale job shop scheduling problems

    Directory of Open Access Journals (Sweden)

    Yingni Zhai

    2014-10-01

Full Text Available Purpose: A decomposition heuristic based on multi-bottleneck machines for large-scale job shop scheduling problems (JSP) is proposed. Design/methodology/approach: In the algorithm, a number of sub-problems are constructed by iteratively decomposing the large-scale JSP according to the process route of each job. The solution of the large-scale JSP is then obtained by iteratively solving the sub-problems. In order to improve the efficiency of solving the sub-problems and the solution quality, a detection method for multi-bottleneck machines based on the critical path is proposed, by which the unscheduled operations can be divided into bottleneck operations and non-bottleneck operations. Following the principle that "the bottleneck leads the performance of the whole manufacturing system" from the Theory of Constraints (TOC), the bottleneck operations are scheduled by a genetic algorithm for high solution quality, while the non-bottleneck operations are scheduled by dispatching rules to improve solving efficiency. Findings: In the construction of the sub-problems, some operations from the previously scheduled sub-problem are moved into the subsequent sub-problem for re-optimization; this strategy improves the solution quality of the algorithm. When solving the sub-problems, evaluating a chromosome's fitness by predicting the global scheduling objective value also improves the solution quality. Research limitations/implications: This research makes several assumptions that reduce the complexity of the large-scale scheduling problem: the processing route of each job is predetermined, the processing time of each operation is fixed, there are no machine breakdowns, and no preemption of operations is allowed. These assumptions should be reconsidered if the algorithm is used in an actual job shop. Originality/value: The research provides an efficient scheduling method for the
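The critical-path-based bottleneck detection described in this record can be sketched as follows. The data layout (`ops`, `succ`) and the zero-slack criterion are illustrative assumptions, not the paper's exact formulation:

```python
# Hypothetical sketch: operations with zero slack lie on the critical path;
# machines hosting many such operations are bottleneck candidates.
import functools
from collections import defaultdict

def critical_ops(ops, succ):
    """ops: {op: (machine, duration)}; succ: {op: [successor ops]} (a DAG).
    Returns the set of zero-slack (critical-path) operations."""
    pred = defaultdict(list)
    for a, bs in succ.items():
        for b in bs:
            pred[b].append(a)

    @functools.lru_cache(maxsize=None)
    def est(op):  # earliest start time
        return max((est(p) + ops[p][1] for p in pred[op]), default=0)

    makespan = max(est(o) + ops[o][1] for o in ops)

    @functools.lru_cache(maxsize=None)
    def lft(op):  # latest finish time that does not delay the makespan
        return min((lft(s) - ops[s][1] for s in succ.get(op, [])),
                   default=makespan)

    # zero slack: the operation's latest start equals its earliest start
    return {o for o in ops if lft(o) - ops[o][1] == est(o)}

def bottleneck_machines(ops, succ):
    """Rank machines by how many critical-path operations they host."""
    count = defaultdict(int)
    for o in critical_ops(ops, succ):
        count[ops[o][0]] += 1
    return sorted(count, key=count.get, reverse=True)
```

Machines that accumulate the most zero-slack operations would then be scheduled with the genetic algorithm, and the rest with dispatching rules, in the spirit of the abstract.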

  5. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    Science.gov (United States)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for Los Alamos National Laboratory. How this was achieved will be described, and examples of state-of-the-art calculations in space science, in particular the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies, where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory bandwidth limited with a low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered carry over to other multi-core and accelerated (e.g., GPU-based) platforms, and we modified VPIC with flexibility in mind. These will be summarized, and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by LANS, LLC, Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.

  6. A Spectral Multi-Domain Penalty Method for Elliptic Problems Arising From a Time-Splitting Algorithm For the Incompressible Navier-Stokes Equations

    Science.gov (United States)

    Diamantopoulos, Theodore; Rowe, Kristopher; Diamessis, Peter

    2017-11-01

    The Collocation Penalty Method (CPM) solves a PDE on the interior of a domain, while weakly enforcing boundary conditions at domain edges via penalty terms, and naturally lends itself to high-order and multi-domain discretization. Such spectral multi-domain penalty methods (SMPM) have been used to solve the Navier-Stokes equations. Bounds for penalty coefficients are typically derived using the energy method to guarantee stability for time-dependent problems. The choice of collocation points and penalty parameter can greatly affect the conditioning and accuracy of a solution. Effort has been made in recent years to relate various high-order methods on multiple elements or domains under the umbrella of the Correction Procedure via Reconstruction (CPR). Most applications of CPR have focused on solving the compressible Navier-Stokes equations using explicit time-stepping procedures. A particularly important aspect which is still missing in the context of the SMPM is a study of the Helmholtz equation arising in many popular time-splitting schemes for the incompressible Navier-Stokes equations. Stability and convergence results for the SMPM for the Helmholtz equation will be presented. Emphasis will be placed on the efficiency and accuracy of high-order methods.
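As a point of reference for the kind of elliptic solve this record concerns, here is a minimal 1D Helmholtz problem (u″ − k²u = f with homogeneous Dirichlet conditions) discretized with second-order finite differences. This is only a stand-in: the paper's SMPM uses spectral multi-domain penalty discretizations, not finite differences.

```python
# Minimal 1D Helmholtz solve, a stand-in for the elliptic substep that
# time-splitting schemes for incompressible Navier-Stokes require.
import numpy as np

def helmholtz_1d(f, k, n=200):
    """Solve u'' - k^2 u = f on (0, 1) with u(0) = u(1) = 0."""
    x = np.linspace(0.0, 1.0, n + 2)
    h = x[1] - x[0]
    main = -2.0 / h**2 - k**2
    A = (np.diag(np.full(n, main))
         + np.diag(np.full(n - 1, 1.0 / h**2), 1)
         + np.diag(np.full(n - 1, 1.0 / h**2), -1))
    u = np.zeros(n + 2)               # boundary values stay zero
    u[1:-1] = np.linalg.solve(A, f(x[1:-1]))
    return x, u

# manufactured solution u = sin(pi x): u'' - k^2 u = -(pi^2 + k^2) sin(pi x)
k = 2.0
x, u = helmholtz_1d(lambda s: -(np.pi**2 + k**2) * np.sin(np.pi * s), k)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # small O(h^2) error
```

A spectral penalty discretization would replace the tridiagonal matrix with per-domain differentiation matrices plus penalty terms at the interfaces, but the surrounding solver structure is the same.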

  7. FEM × DEM: a new efficient multi-scale approach for geotechnical problems with strain localization

    Directory of Open Access Journals (Sweden)

    Nguyen Trung Kien

    2017-01-01

Full Text Available The paper presents a multi-scale Boundary Value Problem (BVP) modelling approach for cohesive-frictional granular materials in the FEM × DEM multi-scale framework. On the DEM side, a 3D model is defined based on the interactions of spherical particles. This DEM model is built through a numerical homogenization process applied to a Volume Element (VE). It is then paired with a finite element code. Using this numerical tool, which combines two scales within the same framework, we conducted simulations of biaxial and pressuremeter tests on a cohesive-frictional granular medium. In these cases, strain localization is known to occur at the macroscopic level, but since FEMs suffer from severe mesh dependency as soon as a shear band starts to develop, the second-gradient regularization technique has been used. As a consequence, the objectivity of the computation with respect to mesh dependency is restored.

  8. Understanding hydraulic fracturing: a multi-scale problem

    Science.gov (United States)

    Hyman, J. D.; Jiménez-Martínez, J.; Viswanathan, H. S.; Carey, J. W.; Porter, M. L.; Rougier, E.; Karra, S.; Kang, Q.; Frash, L.; Chen, L.; Lei, Z.; O’Malley, D.; Makedonska, N.

    2016-01-01

Despite the impact that hydraulic fracturing has had on the energy sector, the physical mechanisms that control its efficiency and environmental impacts remain poorly understood, in part because the length scales involved range from nanometres to kilometres. We characterize flow and transport in shale formations across and between these scales using integrated computational, theoretical and experimental methods. At the field scale, we use discrete fracture network modelling to simulate production of a hydraulically fractured well from a fracture network that is based on the site characterization of a shale gas reservoir. At the core scale, we use triaxial fracture experiments and a finite-discrete element model to study dynamic fracture/crack propagation in low-permeability shale. We use lattice Boltzmann pore-scale simulations and microfluidic experiments in both synthetic and shale rock micromodels to study pore-scale flow and transport phenomena, including multi-phase flow and fluid mixing. A mechanistic description and integration of these multiple scales is required for accurate predictions of production and the eventual optimization of hydrocarbon extraction from unconventional reservoirs. Finally, we discuss the potential of CO2 as an alternative working fluid, both in fracturing and re-stimulating activities, beyond its environmental advantages. This article is part of the themed issue 'Energy and the subsurface'. PMID:27597789

  9. Multi-scale simulation for homogenization of cement media

    International Nuclear Information System (INIS)

    Abballe, T.

    2011-01-01

To solve diffusion problems on cement media, two scales must be taken into account: a fine scale, which describes the micrometre-wide microstructures present in the media, and a work scale, which is usually a few metres long. Direct numerical simulations are almost impossible because of the huge computational resources (memory, CPU time) required to resolve both scales at the same time. To overcome this problem, we present in this thesis multi-scale resolution methods using both finite volumes and finite elements, along with their efficient implementations. More precisely, we developed a multi-scale simulation tool which uses the SALOME platform to mesh domains and post-process data, and the parallel calculation code MPCube to solve problems. This SALOME/MPCube tool can run multi-scale simulations automatically and efficiently. The parallel structure of computer clusters can be used to dispatch the most time-consuming tasks. We optimized most functions to account for the specificities of cement media. We present numerical experiments on various cement media samples, e.g. mortar and cement paste. From these results, we manage to compute a numerical effective diffusivity of our cement media and to reconstruct a fine-scale solution. (author) [fr
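A one-dimensional sanity check for homogenization computations like the one this record describes: for diffusion across layered media, the effective diffusivity is the width-weighted harmonic mean of the layer diffusivities. This is a classical textbook result, not a formula from the thesis:

```python
# Effective diffusivity of a 1D layered medium (diffusion across the layers):
# layers act like resistances in series, so the harmonic mean applies.
import numpy as np

def effective_diffusivity_1d(d, widths):
    """Width-weighted harmonic mean of layer diffusivities."""
    d = np.asarray(d, dtype=float)
    widths = np.asarray(widths, dtype=float)
    return widths.sum() / (widths / d).sum()

# one high- and one low-diffusivity layer of equal width
print(effective_diffusivity_1d([1.0, 0.1], [0.5, 0.5]))
```

The low-diffusivity layer dominates, which is why numerically homogenized diffusivities of cement pastes sit far below simple volume averages.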

  10. An efficient heuristic for the multi-compartment vehicle routing problem

    OpenAIRE

    Paulo Vitor Silvestrin

    2016-01-01

    We study a variant of the vehicle routing problem that allows vehicles with multiple compartments. The need for multiple compartments frequently arises in practical applications when there are several products of different quality or type, that must be kept or handled separately. The resulting problem is called the multi-compartment vehicle routing problem (MCVRP). We propose a tabu search heuristic and embed it into an iterated local search to solve the MCVRP. In several experiments we analy...

  11. Solving the problem of imaging resolution: stochastic multi-scale image fusion

    Science.gov (United States)

    Karsanina, Marina; Mallants, Dirk; Gilyazetdinova, Dina; Gerke, Kiril

    2016-04-01

Structural features of porous materials define the majority of their physical properties, including water infiltration and redistribution, multi-phase flow (e.g. simultaneous water/air flow, gas exchange between the biologically active soil root zone and the atmosphere, etc.) and solute transport. X-ray microtomography is extremely useful for characterizing soil and rock microstructure. However, like any other imaging technique, it has a significant drawback: a trade-off between sample size and resolution. The latter is a significant problem for multi-scale complex structures, especially soils and carbonates. Other imaging techniques, for example SEM/FIB-SEM or X-ray macrotomography, can help obtain higher resolution or a wider field of view. The ultimate goal is to create a single dataset containing information from all scales, or to characterize such a multi-scale structure. In this contribution we demonstrate a general solution for merging multi-scale categorical spatial data into a single dataset using stochastic reconstructions with rescaled correlation functions. The versatility of the method is demonstrated by merging three images representing macro-, micro- and nanoscale spatial information on porous media structure. Images obtained by X-ray microtomography and scanning electron microscopy were fused into a single image with predefined resolution. The methodology is sufficiently generic to accommodate other stochastic reconstruction techniques, any number of scales, any number of material phases, and any number of images for a given scale. It can further be used to assess effective properties of fused porous media images or to compress voluminous spatial datasets for efficient data storage. Potential practical applications of this method are abundant in soil science, hydrology and petroleum engineering, as well as other geosciences.
This work was partially supported by RSF grant 14-17-00658 (X-ray microtomography study of shale
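A minimal example of the kind of correlation function such reconstructions match across scales is the two-point probability function S2. The FFT-based computation below (binary image, periodic boundaries, one direction) is a simplified sketch, not the authors' implementation, which uses several directional and cluster functions:

```python
# Two-point probability function S2(r) of a binary (categorical) image:
# the probability that two points separated by r both fall in the phase.
import numpy as np

def s2(img):
    """S2(r) along axis 0 via FFT autocorrelation, periodic boundaries."""
    F = np.fft.fft2(img)
    auto = np.fft.ifft2(F * np.conj(F)).real / img.size
    return auto[:, 0]  # correlations for shifts along the first axis

# synthetic binary microstructure with ~30% phase fraction
phase = (np.random.default_rng(1).random((64, 64)) < 0.3).astype(float)
curve = s2(phase)
# S2(0) equals the phase's volume fraction; S2(r) decays toward its square.
```

Rescaling such curves to a common resolution is what lets images from different instruments be fused into one consistent reconstruction.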

  12. Solving the Selective Multi-Category Parallel-Servicing Problem

    DEFF Research Database (Denmark)

    Range, Troels Martin; Lusby, Richard Martin; Larsen, Jesper

In this paper we present a new scheduling problem and describe a shortest path based heuristic as well as a dynamic programming based exact optimization algorithm to solve it. The Selective Multi-Category Parallel-Servicing Problem (SMCPSP) arises when a set of jobs has to be scheduled on a server (machine) with limited capacity. Each job requests service in a prespecified time window and belongs to a certain category. Jobs may be serviced partially, incurring a penalty; however, only jobs of the same category can be processed simultaneously. One must identify the best subset of jobs to process...

  13. Solving the selective multi-category parallel-servicing problem

    DEFF Research Database (Denmark)

    Range, Troels Martin; Lusby, Richard Martin; Larsen, Jesper

    2015-01-01

In this paper, we present a new scheduling problem and describe a shortest path-based heuristic as well as a dynamic programming-based exact optimization algorithm to solve it. The selective multi-category parallel-servicing problem arises when a set of jobs has to be scheduled on a server (machine) with limited capacity. Each job requests service in a prespecified time window and belongs to a certain category. Jobs may be serviced partially, incurring a penalty; however, only jobs of the same category can be processed simultaneously. One must identify the best subset of jobs to process in each time...

  14. Correlations of stock price fluctuations under multi-scale and multi-threshold scenarios

    Science.gov (United States)

    Sui, Guo; Li, Huajiao; Feng, Sida; Liu, Xueyong; Jiang, Meihui

    2018-01-01

The multi-scale method is widely used in analyzing financial market time series and can provide market information for economic entities that focus on different periods. By constructing multi-scale networks of price-fluctuation correlations in the stock market, we can detect the topological relationships between the time series. Previous research has not addressed the fact that the original fluctuation correlation networks are fully connected, and that more information exists within these networks than is currently being utilized. Here we use listed coal companies as a case study. First, we decompose the original stock price fluctuation series into different time scales. Second, we construct the stock price fluctuation correlation networks at different time scales. Third, we delete the edges of the network based on thresholds and analyze the network indicators. By combining the multi-scale method with the multi-threshold method, we bring to light the implicit information of fully connected networks.
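The decompose, correlate, and threshold pipeline in this record can be illustrated with a small sketch. Window averaging stands in for the paper's decomposition method, and the series, scales, and thresholds are all illustrative:

```python
# Sketch of multi-scale, multi-threshold correlation networks.
import numpy as np

rng = np.random.default_rng(0)
prices = rng.normal(size=(6, 500)).cumsum(axis=1)  # 6 synthetic price series
returns = np.diff(prices, axis=1)                   # fluctuation series

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale`: one simple way
    to build a multi-scale representation of each series."""
    n = x.shape[1] // scale
    return x[:, :n * scale].reshape(x.shape[0], n, scale).mean(axis=2)

def threshold_network(series, threshold):
    """Adjacency matrix: keep an edge only if |correlation| >= threshold."""
    corr = np.corrcoef(series)
    adj = (np.abs(corr) >= threshold).astype(int)
    np.fill_diagonal(adj, 0)  # no self-loops
    return adj

for scale in (1, 5, 10):
    adj = threshold_network(coarse_grain(returns, scale), threshold=0.3)
    print(scale, adj.sum() // 2)  # surviving edges at each scale
```

Sweeping the threshold turns one fully connected correlation matrix into a family of sparser networks whose indicators (degree, clustering, etc.) can then be compared across scales.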

  15. Conformal-Based Surface Morphing and Multi-Scale Representation

    Directory of Open Access Journals (Sweden)

    Ka Chun Lam

    2014-05-01

Full Text Available This paper presents two algorithms, based on conformal geometry, for the multi-scale representation of geometric shapes and for surface morphing. A multi-scale surface representation aims to describe a 3D shape at different levels of geometric detail, which allows analyzing or editing surfaces at the global or local scale effectively. Surface morphing refers to the process of interpolating between two geometric shapes, which has been widely applied to estimate or analyze deformations in computer graphics, computer vision and medical imaging. In this work, we propose two geometric models for surface morphing and multi-scale representation of 3D surfaces. The basic idea is to represent a 3D surface by its mean curvature function H and conformal factor function λ, which uniquely determine the geometry of the surface according to Riemann surface theory. Once we have the (λ, H) parameterization of the surface, post-processing can be done directly on the conformal parameter domain. In particular, the problem of multi-scale representation of shapes reduces to signal filtering on the λ and H parameters. On the other hand, the surface morphing problem can be transformed into an interpolation between two sets of (λ, H) parameters. We test the proposed algorithms on 3D human face data and MRI-derived brain surfaces. Experimental results show that our proposed methods can effectively obtain multi-scale surface representations and give natural surface morphing results.

  16. Variational problems arising in classical mechanics and nonlinear elasticity

    International Nuclear Information System (INIS)

    Spencer, P.

    1999-01-01

In this thesis we consider two different classes of variational problems. First, one-dimensional problems arising from classical mechanics, where the problem is to determine whether there is a unique function η₀(x) which minimises an energy functional of the form I(η) = ∫_a^b L(x, η(x), η′(x)) dx. We will investigate uniqueness by making a change of dependent and independent variables and showing that, for a class of integrands L with a particular kind of scaling invariance, the resulting integrand is completely convex. The change of variables arises by applying results from Lie group theory as applied in the study of differential equations; this work is motivated by [60] and [68]. Second, the problem of minimising energy functionals of the form E(u) = ∫_A W(∇u(x)) dx in the case of a nonlinear elastic body occupying an annular region A ⊂ ℝ², with u : Ā → Ā. This work is motivated by [57] (in particular the example of paragraph 4). We will consider rotationally symmetric deformations satisfying prescribed boundary conditions. We will show the existence of minimisers for stored energy functions of the form W(F) = g̃(|F|, det F) in a class of general rotationally symmetric deformations of a compressible annulus, and for stored energy functions of the form W(F) = ḡ(|F|) in a class of rotationally symmetric deformations of an incompressible annulus. We will also show that in each case the minimisers are solutions of the full equilibrium equations. A model problem will be considered where the energy functional is the Dirichlet integral, and it will be shown that the rotationally symmetric solution obtained is a minimiser among admissible non-rotationally symmetric deformations. In the case of an incompressible annulus, we will consider the Dirichlet integral as the energy functional and show that the rotationally symmetric equilibrium solutions in this case are weak local minimisers in

  17. Mass-flux subgrid-scale parameterization in analogy with multi-component flows: a formulation towards scale independence

    Directory of Open Access Journals (Sweden)

    J.-I. Yano

    2012-11-01

Full Text Available A generalized mass-flux formulation is presented, which no longer takes a limit of vanishing fractional areas for subgrid-scale components. The presented formulation is applicable to a situation in which the scale separation is still satisfied, but fractional areas occupied by individual subgrid-scale components are no longer small. A self-consistent formulation is presented by generalizing the mass-flux formulation under the segmentally-constant approximation (SCA) to the grid-scale variabilities. The present formulation is expected to alleviate problems arising from increasing resolutions of operational forecast models without invoking more extensive overhaul of parameterizations.

    The present formulation leads to an analogy of the large-scale atmospheric flow with multi-component flows. This analogy allows a generality of including any subgrid-scale variability into the mass-flux parameterization under SCA. Those include stratiform clouds as well as cold pools in the boundary layer.

An important finding under the present formulation is that the subgrid-scale quantities are advected by the large-scale velocities characteristic of given subgrid-scale components (large-scale subcomponent flows), rather than by the total large-scale flow as simply defined by the grid-box average. In this manner, each subgrid-scale component behaves like a component of a multi-component flow. As a result, this formulation ensures the lateral interaction of subgrid-scale variability across grid boxes, which is missing in current parameterizations based on vertical one-dimensional models, and leads to a reduction of the grid-size dependence of its performance. It is shown that the large-scale subcomponent flows are driven by large-scale subcomponent pressure gradients. The formulation therefore also includes a self-contained description of subgrid-scale momentum transport.

    The main purpose of the present paper

  18. Coordinated Multi-layer Multi-domain Optical Network (COMMON) for Large-Scale Science Applications (COMMON)

    Energy Technology Data Exchange (ETDEWEB)

    Vokkarane, Vinod [University of Massachusetts

    2013-09-01

We intend to implement a Coordinated Multi-layer Multi-domain Optical Network (COMMON) Framework for Large-scale Science Applications. In the COMMON project, specific problems to be addressed include 1) anycast/multicast/manycast request provisioning, 2) deployable OSCARS enhancements, 3) multi-layer, multi-domain quality of service (QoS), and 4) multi-layer, multi-domain path survivability. In what follows, we outline the progress in the above categories (Year 1, 2, and 3 deliverables).

  19. 2D deblending using the multi-scale shaping scheme

    Science.gov (United States)

    Li, Qun; Ban, Xingan; Gong, Renbin; Li, Jinnuo; Ge, Qiang; Zu, Shaohuan

    2018-01-01

Deblending can be posed as an inversion problem, which is ill-posed and requires constraints to obtain a unique and stable solution. In a blended record, signal is coherent whereas interference is incoherent in some domains (e.g., the common receiver domain and the common offset domain). Owing to this difference in sparsity, the coefficients of signal and interference are located in different curvelet scale domains and have different amplitudes. Taking these two differences into account, we propose a 2D multi-scale shaping scheme that constrains sparsity to separate the blended record. In the domains where signal concentrates, the multi-scale scheme passes all the coefficients representing signal, while in the domains where interference focuses, it suppresses the coefficients representing interference. Because the interference is suppressed markedly at each iteration, the constraints of the multi-scale shaping operator in all scale domains are weak enough to guarantee the convergence of the algorithm. We evaluate the performance of the multi-scale shaping scheme against the traditional global shaping scheme using two synthetic examples and one field data example.
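The sparsity-constrained inversion described here can be sketched as an iterative shaping scheme of the form m ← S[m + Aᵀ(d − Am)]. In the sketch below, a soft threshold on Fourier coefficients stands in for the paper's scale-dependent curvelet-domain shaping, and the blending operators are placeholders:

```python
# Generic shaping-regularized iteration for deblending-style inversion.
import numpy as np

def soft_threshold_fft(x, tau):
    """Shrink Fourier coefficients: a crude stand-in for curvelet shaping."""
    X = np.fft.fft(x)
    mag = np.abs(X)
    shrink = np.maximum(1.0 - tau / np.maximum(mag, 1e-30), 0.0)
    return np.fft.ifft(shrink * X).real

def deblend(d, blend, adjoint, tau, n_iter=50):
    """Iterate m <- S[m + adjoint(d - blend(m))] (Landweber step + shaping)."""
    m = np.zeros_like(d)
    for _ in range(n_iter):
        m = soft_threshold_fft(m + adjoint(d - blend(m)), tau)
    return m
```

The multi-scale refinement in the paper amounts to making the threshold scale-dependent: pass-through in the scale bands where signal concentrates, aggressive shrinkage where interference focuses.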

  20. Coupled numerical approach combining finite volume and lattice Boltzmann methods for multi-scale multi-physicochemical processes

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Li; He, Ya-Ling [Key Laboratory of Thermo-Fluid Science and Engineering of MOE, School of Energy and Power Engineering, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China); Kang, Qinjun [Computational Earth Science Group (EES-16), Los Alamos National Laboratory, Los Alamos, NM (United States); Tao, Wen-Quan, E-mail: wqtao@mail.xjtu.edu.cn [Key Laboratory of Thermo-Fluid Science and Engineering of MOE, School of Energy and Power Engineering, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China)

    2013-12-15

    A coupled (hybrid) simulation strategy spatially combining the finite volume method (FVM) and the lattice Boltzmann method (LBM), called CFVLBM, is developed to simulate coupled multi-scale multi-physicochemical processes. In the CFVLBM, computational domain of multi-scale problems is divided into two sub-domains, i.e., an open, free fluid region and a region filled with porous materials. The FVM and LBM are used for these two regions, respectively, with information exchanged at the interface between the two sub-domains. A general reconstruction operator (RO) is proposed to derive the distribution functions in the LBM from the corresponding macro scalar, the governing equation of which obeys the convection–diffusion equation. The CFVLBM and the RO are validated in several typical physicochemical problems and then are applied to simulate complex multi-scale coupled fluid flow, heat transfer, mass transport, and chemical reaction in a wall-coated micro reactor. The maximum ratio of the grid size between the FVM and LBM regions is explored and discussed. -- Highlights: •A coupled simulation strategy for simulating multi-scale phenomena is developed. •Finite volume method and lattice Boltzmann method are coupled. •A reconstruction operator is derived to transfer information at the sub-domains interface. •Coupled multi-scale multiple physicochemical processes in micro reactor are simulated. •Techniques to save computational resources and improve the efficiency are discussed.
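The idea of a reconstruction operator (rebuilding LBM distribution functions at the sub-domain interface from the FVM's macroscopic fields) can be illustrated with the standard D2Q9 convection-diffusion equilibrium f_i = w_i φ (1 + e_i·u/c_s²). This simple equilibrium reconstruction is a sketch of the concept only, not the paper's general operator:

```python
# Reconstructing D2Q9 distribution functions from a macroscopic scalar phi
# and velocity u handed over from the FVM side of the interface.
import numpy as np

# D2Q9 lattice weights and velocities; c_s^2 = 1/3 in lattice units.
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
CS2 = 1.0 / 3.0

def reconstruct(phi, u):
    """Equilibrium distributions whose zeroth moment recovers the FVM
    scalar phi and whose first moment recovers the flux phi * u."""
    return W * phi * (1.0 + (E @ u) / CS2)

f = reconstruct(2.0, np.array([0.05, 0.0]))
```

By construction the moments are consistent with the convection-diffusion equation, which is the property any interface reconstruction must preserve.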

  1. An adaptive large neighborhood search heuristic for Two-Echelon Vehicle Routing Problems arising in city logistics

    Science.gov (United States)

    Hemmelmayr, Vera C.; Cordeau, Jean-François; Crainic, Teodor Gabriel

    2012-01-01

    In this paper, we propose an adaptive large neighborhood search heuristic for the Two-Echelon Vehicle Routing Problem (2E-VRP) and the Location Routing Problem (LRP). The 2E-VRP arises in two-level transportation systems such as those encountered in the context of city logistics. In such systems, freight arrives at a major terminal and is shipped through intermediate satellite facilities to the final customers. The LRP can be seen as a special case of the 2E-VRP in which vehicle routing is performed only at the second level. We have developed new neighborhood search operators by exploiting the structure of the two problem classes considered and have also adapted existing operators from the literature. The operators are used in a hierarchical scheme reflecting the multi-level nature of the problem. Computational experiments conducted on several sets of instances from the literature show that our algorithm outperforms existing solution methods for the 2E-VRP and achieves excellent results on the LRP. PMID:23483764

  2. An adaptive large neighborhood search heuristic for Two-Echelon Vehicle Routing Problems arising in city logistics.

    Science.gov (United States)

    Hemmelmayr, Vera C; Cordeau, Jean-François; Crainic, Teodor Gabriel

    2012-12-01

    In this paper, we propose an adaptive large neighborhood search heuristic for the Two-Echelon Vehicle Routing Problem (2E-VRP) and the Location Routing Problem (LRP). The 2E-VRP arises in two-level transportation systems such as those encountered in the context of city logistics. In such systems, freight arrives at a major terminal and is shipped through intermediate satellite facilities to the final customers. The LRP can be seen as a special case of the 2E-VRP in which vehicle routing is performed only at the second level. We have developed new neighborhood search operators by exploiting the structure of the two problem classes considered and have also adapted existing operators from the literature. The operators are used in a hierarchical scheme reflecting the multi-level nature of the problem. Computational experiments conducted on several sets of instances from the literature show that our algorithm outperforms existing solution methods for the 2E-VRP and achieves excellent results on the LRP.
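A generic ALNS skeleton conveys the adaptive mechanism these two records refer to: destroy and repair operators are drawn with probabilities proportional to weights that track their recent success. The scores, decay rate, and simulated-annealing acceptance below are a common textbook configuration, not the authors' exact parameterization:

```python
# Skeleton of adaptive large neighborhood search (ALNS).
import math
import random

def alns(initial, destroy_ops, repair_ops, cost, iters=1000, seed=0):
    """Minimize cost(solution) with roulette-wheel operator selection."""
    rng = random.Random(seed)
    wd = {op: 1.0 for op in destroy_ops}   # destroy-operator weights
    wr = {op: 1.0 for op in repair_ops}    # repair-operator weights
    current = best = initial
    temp = max(abs(cost(initial)), 1.0)    # crude initial temperature
    for _ in range(iters):
        d = rng.choices(list(wd), weights=list(wd.values()))[0]
        r = rng.choices(list(wr), weights=list(wr.values()))[0]
        candidate = r(d(current, rng), rng)
        score = 0.0
        if cost(candidate) < cost(best):
            best = current = candidate; score = 3.0   # new global best
        elif cost(candidate) < cost(current):
            current = candidate; score = 2.0          # improved current
        elif rng.random() < math.exp((cost(current) - cost(candidate)) / temp):
            current = candidate; score = 1.0          # accepted worse move
        wd[d] = 0.8 * wd[d] + 0.2 * score             # adaptive weight update
        wr[r] = 0.8 * wr[r] + 0.2 * score
        temp *= 0.999                                 # cooling schedule
    return best
```

For the 2E-VRP, the destroy operators would remove customers or satellite assignments and the repair operators would reinsert them, with the hierarchical scheme choosing which level of the problem to perturb.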

  3. Solving Large Scale Crew Scheduling Problems in Practice

    NARCIS (Netherlands)

    E.J.W. Abbink (Erwin); L. Albino; T.A.B. Dollevoet (Twan); D. Huisman (Dennis); J. Roussado; R.L. Saldanha

    2010-01-01

This paper deals with large-scale crew scheduling problems arising at the Dutch railway operator, Netherlands Railways (NS). NS operates about 30,000 trains a week. All these trains need a driver and a certain number of guards. Some labor rules restrict the duties of a certain crew base

  4. Inverse scattering problems with multi-frequencies

    International Nuclear Information System (INIS)

    Bao, Gang; Li, Peijun; Lin, Junshan; Triki, Faouzi

    2015-01-01

    This paper is concerned with computational approaches and mathematical analysis for solving inverse scattering problems in the frequency domain. The problems arise in a diverse set of scientific areas with significant industrial, medical, and military applications. In addition to nonlinearity, there are two common difficulties associated with the inverse problems: ill-posedness and limited resolution (diffraction limit). Due to the diffraction limit, for a given frequency, only a low spatial frequency part of the desired parameter can be observed from measurements in the far field. The main idea developed here is that if the reconstruction is restricted to only the observable part, then the inversion will become stable. The challenging task is how to design stable numerical methods for solving these inverse scattering problems inspired by the diffraction limit. Recently, novel recursive linearization based algorithms have been presented in an attempt to answer the above question. These methods require multi-frequency scattering data and proceed via a continuation procedure with respect to the frequency from low to high. The objective of this paper is to give a brief review of these methods, their error estimates, and the related mathematical analysis. More attention is paid to the inverse medium and inverse source problems. Numerical experiments are included to illustrate the effectiveness of these methods. (topical review)

  5. A multi scale model for small scale plasticity

    International Nuclear Information System (INIS)

    Zbib, Hussein M.

    2002-01-01

A framework for investigating size-dependent small-scale plasticity phenomena and related material instabilities at various length scales, ranging from the nano-microscale to the mesoscale, is presented. The model is based on fundamental physical laws that govern dislocation motion and their interaction with various defects and interfaces. Particularly, a multi-scale model is developed merging two scales: the nano-microscale, where plasticity is determined by explicit three-dimensional dislocation dynamics analysis providing the material length-scale, and the continuum scale, where energy transport is based on basic continuum mechanics laws. The result is a hybrid simulation model coupling discrete dislocation dynamics with finite element analyses. With this hybrid approach, one can address complex size-dependent problems, including dislocation boundaries, dislocations in heterogeneous structures, dislocation interaction with interfaces and associated shape changes and lattice rotations, as well as deformation in nano-structured materials, localized deformation and shear band

  6. IUTAM Symposium on Innovative Numerical Approaches for Materials and Structures in Multi-Field and Multi-Scale Problems : in Honor of Michael Ortiz's 60th Birthday

    CERN Document Server

    Pandolfi, Anna

    2016-01-01

    This book provides readers with a detailed insight into diverse and exciting recent developments in computational solid mechanics, documenting new perspectives and horizons. The topics addressed cover a wide range of current research, from computational materials modeling, including crystal plasticity, micro-structured materials, and biomaterials, to multi-scale simulations of multi-physics phenomena. Particular emphasis is placed on pioneering discretization methods for the solution of coupled non-linear problems at different length scales. The book, written by leading experts, reflects the remarkable advances that have been made in the field over the past decade and more, largely due to the development of a sound mathematical background and efficient computational strategies. The contents build upon the 2014 IUTAM symposium celebrating the 60th birthday of Professor Michael Ortiz, to whom this book is dedicated. His work has long been recognized as pioneering and is a continuing source of inspiration for ma...

  7. Application of the spectral Lanczos decomposition method to large-scale problems arising in geophysics

    Energy Technology Data Exchange (ETDEWEB)

    Tamarchenko, T. [Western Atlas Logging Services, Houston, TX (United States)

    1996-12-31

This paper presents an application of the Spectral Lanczos Decomposition Method (SLDM) to numerical modeling of electromagnetic diffusion and elastic wave propagation in inhomogeneous media. SLDM approximates the action of a matrix function as a linear combination of basis vectors in a Krylov subspace. I applied the method to model electromagnetic fields in three dimensions and elastic waves in two dimensions. The finite-difference approximation of the spatial part of the differential operator reduces the initial boundary-value problem to a system of ordinary differential equations with respect to time. The solution to this system requires calculating exponential and sine/cosine functions of the stiffness matrices. Large-scale numerical examples are in good agreement with the theoretical error bounds and stability estimates given by Druskin and Knizhnerman (1987).
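The core SLDM idea, approximating the action of a matrix function by a combination of Krylov-subspace basis vectors, can be illustrated for a symmetric operator. The sketch below is an assumption-laden toy, not the author's production code: it runs m Lanczos steps with full reorthogonalization and evaluates f on the small tridiagonal projection.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def lanczos_matfunc(A, v, f, m=30):
    """Approximate f(A) @ v for a symmetric matrix A using an m-step
    Lanczos process: project A onto the Krylov subspace K_m(A, v),
    evaluate f on the small tridiagonal projection T, and lift back."""
    n = len(v)
    m = min(m, n)
    Q = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        # full reorthogonalization against all previous basis vectors
        w = w - Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    # eigendecomposition of the tridiagonal projection T = S diag(theta) S^T
    theta, S = eigh_tridiagonal(alpha, beta)
    e1 = np.zeros(m)
    e1[0] = 1.0
    # f(A) v  ~=  ||v|| * Q * S * f(theta) * S^T * e1
    return np.linalg.norm(v) * (Q @ (S @ (f(theta) * (S.T @ e1))))
```

With f = exp one obtains the matrix exponential needed for the diffusion problem; with sine/cosine, the wave problem. In practice m is much smaller than n, which is the source of the method's efficiency.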

  8. Solving Large Scale Nonlinear Eigenvalue Problem in Next-Generation Accelerator Design

    Energy Technology Data Exchange (ETDEWEB)

    Liao, Ben-Shan; Bai, Zhaojun; /UC, Davis; Lee, Lie-Quan; Ko, Kwok; /SLAC

    2006-09-28

A number of numerical methods, including inverse iteration, the method of successive linear problems, and the nonlinear Arnoldi algorithm, are studied in this paper to solve a large-scale nonlinear eigenvalue problem arising from finite element analysis of the resonant frequencies and external Q_e values of a waveguide-loaded cavity in next-generation accelerator design. The authors present a nonlinear Rayleigh-Ritz iterative projection algorithm, NRRIT for short, and demonstrate that it is the most promising approach for a model-scale cavity design. The NRRIT algorithm is an extension of the nonlinear Arnoldi algorithm due to Voss. Computational challenges of solving such a nonlinear eigenvalue problem for a full-scale cavity design are outlined.

  9. Multi-resolution and multi-scale simulation of the thermal hydraulics in fast neutron reactor assemblies

    International Nuclear Information System (INIS)

    Angeli, P.-E.

    2011-01-01

The present work is devoted to the multi-scale numerical simulation of a fast neutron reactor assembly. In spite of the rapid growth of computer power, complete fine-scale CFD of such a system remains out of reach in a research and development context. After determining the thermal-hydraulic behaviour of the assembly at the macroscopic scale, we propose to carry out a local reconstruction of the fine-scale information. The complete approach requires a much lower CPU time than CFD of the entire structure. The macro-scale description is obtained using either the volume averaging formalism for porous media, or an alternative modeling approach historically developed for the study of fast neutron reactor assemblies. It provides information used to constrain a down-scaling problem, through a penalization technique applied to the local conservation equations. This problem leans on the periodic nature of the structure by integrating periodic boundary conditions for the required microscale fields or their spatial deviation. After validating the methodologies on some model applications, we apply them to 'industrial' configurations, which demonstrates the viability of this multi-scale approach. (author) [fr

  10. The Adaptive Multi-scale Simulation Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Tobin, William R. [Rensselaer Polytechnic Inst., Troy, NY (United States)

    2015-09-01

The Adaptive Multi-scale Simulation Infrastructure (AMSI) is a set of libraries and tools developed to support the development, implementation, and execution of general multimodel simulations. Using a minimal set of simulation meta-data, AMSI allows existing single-scale simulations to be adapted for use in multi-scale simulations with minimally intrusive changes. Support for dynamic runtime operations, such as single- and multi-scale adaptive properties, is a key focus of AMSI. Particular effort has been devoted to the development of scale-sensitive load-balancing operations, which allow single-scale simulations incorporated into a multi-scale simulation using AMSI to use standard load-balancing operations without affecting the integrity of the overall multi-scale simulation.

  11. Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.

    Directory of Open Access Journals (Sweden)

    Guangwei Gao

In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.
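The nuclear-norm machinery underlying such matrix-regression methods typically reduces to singular value thresholding, the proximal operator of the nuclear norm. A minimal sketch for illustration only; the paper's actual per-patch regression solver is more involved:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of
    tau * ||.||_* (nuclear norm) evaluated at the matrix X.
    Soft-thresholds the singular values, which promotes low rank."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)       # shrink each singular value by tau
    return (U * s_thr) @ Vt
```

Applied to a reconstructed error image, thresholding suppresses small singular values so that structured (low-rank) occlusion patterns dominate the residual model.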

  12. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-27

This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations at high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. The method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where we calculate stress at each material point using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress at each material point is performed on GPU using CUDA to accelerate the

  13. Multi-Scale Dissemination of Time Series Data

    DEFF Research Database (Denmark)

    Guo, Qingsong; Zhou, Yongluan; Su, Li

    2013-01-01

In this paper, we consider the problem of continuous dissemination of time series data, such as sensor measurements, to a large number of subscribers. These subscribers fall into multiple subscription levels, where each subscription level is specified by the bandwidth constraint of a subscriber, which is an abstract indicator for both the physical limits and the amount of data that the subscriber would like to handle. To handle this problem, we propose a system framework for multi-scale time series data dissemination that employs a typical tree-based dissemination network and existing time

  14. Multi-Scale Models for the Scale Interaction of Organized Tropical Convection

    Science.gov (United States)

    Yang, Qiu

    Assessing the upscale impact of organized tropical convection from small spatial and temporal scales is a research imperative, not only for having a better understanding of the multi-scale structures of dynamical and convective fields in the tropics, but also for eventually helping in the design of new parameterization strategies to improve the next-generation global climate models. Here self-consistent multi-scale models are derived systematically by following the multi-scale asymptotic methods and used to describe the hierarchical structures of tropical atmospheric flows. The advantages of using these multi-scale models lie in isolating the essential components of multi-scale interaction and providing assessment of the upscale impact of the small-scale fluctuations onto the large-scale mean flow through eddy flux divergences of momentum and temperature in a transparent fashion. Specifically, this thesis includes three research projects about multi-scale interaction of organized tropical convection, involving tropical flows at different scaling regimes and utilizing different multi-scale models correspondingly. Inspired by the observed variability of tropical convection on multiple temporal scales, including daily and intraseasonal time scales, the goal of the first project is to assess the intraseasonal impact of the diurnal cycle on the planetary-scale circulation such as the Hadley cell. As an extension of the first project, the goal of the second project is to assess the intraseasonal impact of the diurnal cycle over the Maritime Continent on the Madden-Julian Oscillation. In the third project, the goals are to simulate the baroclinic aspects of the ITCZ breakdown and assess its upscale impact on the planetary-scale circulation over the eastern Pacific. These simple multi-scale models should be useful to understand the scale interaction of organized tropical convection and help improve the parameterization of unresolved processes in global climate models.

  15. Using packaged software for solving two differential equation problems that arise in plasma physics

    International Nuclear Information System (INIS)

    Gaffney, P.W.

    1980-01-01

    Experience in using packaged numerical software for solving two related problems that arise in Plasma physics is described. These problems are (i) the solution of the reduced resistive MHD equations and (ii) the solution of the Grad-Shafranov equation

  16. Multi-Stage Transportation Problem With Capacity Limit

    Directory of Open Access Journals (Sweden)

    I. Brezina

    2010-06-01

The classical transportation problem can be generalized in several ways in practice. Related problems, such as the multi-commodity transportation problem, transportation problems with different kinds of vehicles, multi-stage transportation problems, and the transportation problem with capacity limit, extend the classical transportation problem with additional special conditions. For solving such problems, many optimization techniques (dynamic programming, linear programming, special algorithms for the transportation problem, etc.) and heuristic approaches (e.g. evolutionary techniques) have been developed. This article considers the multi-stage transportation problem with capacity limit, which reflects limits on the quantity of transported materials (commodities). Discussed issues are the theoretical base and the problem formulation, as well as a newly proposed algorithm for this problem.
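A multi-stage transportation problem with arc-capacity limits can be written directly as a linear program. The sketch below solves a small hypothetical two-stage instance (2 suppliers, 2 depots, 2 customers; all cost and capacity data invented for illustration) with `scipy.optimize.linprog`; it is not the algorithm proposed in the article.

```python
import numpy as np
from scipy.optimize import linprog

# Two-stage transportation: 2 suppliers -> 2 depots -> 2 customers,
# with a per-arc capacity limit (illustrative data only).
supply = [30, 20]
demand = [25, 25]
c_sd = np.array([[2, 3], [4, 1]])   # supplier -> depot unit costs
c_dc = np.array([[5, 2], [3, 4]])   # depot -> customer unit costs
cap = 20                            # capacity limit on every arc

# Decision vector: [x11, x12, x21, x22, y11, y12, y21, y22]
c = np.concatenate([c_sd.ravel(), c_dc.ravel()])

A_eq, b_eq = [], []
# demand satisfaction: sum_k y[k][j] = demand[j]
for j in range(2):
    row = np.zeros(8)
    for k in range(2):
        row[4 + 2 * k + j] = 1
    A_eq.append(row); b_eq.append(demand[j])
# flow conservation at each depot: inflow - outflow = 0
for k in range(2):
    row = np.zeros(8)
    for i in range(2):
        row[2 * i + k] = 1
    for j in range(2):
        row[4 + 2 * k + j] = -1
    A_eq.append(row); b_eq.append(0)

A_ub, b_ub = [], []
# supply limits: sum_k x[i][k] <= supply[i]
for i in range(2):
    row = np.zeros(8)
    for k in range(2):
        row[2 * i + k] = 1
    A_ub.append(row); b_ub.append(supply[i])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, cap)] * 8)
```

`res.x` then holds the supplier-to-depot and depot-to-customer flows; specialized algorithms such as the one proposed in the article aim to exploit the network structure that a generic LP solver ignores.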

  17. Front-end vision and multi-scale image analysis multi-scale computer vision theory and applications, written in Mathematica

    CERN Document Server

    Romeny, Bart M Haar

    2008-01-01

Front-End Vision and Multi-Scale Image Analysis is a tutorial in multi-scale methods for computer vision and image processing. It builds on the cross-fertilization between human visual perception and multi-scale computer vision ('scale-space') theory and applications. The multi-scale strategies recognized in the first stages of the human visual system are carefully examined, and taken as inspiration for the many geometric methods discussed. All chapters are written in Mathematica, a spectacular high-level language for symbolic and numerical manipulations. The book presents a new and effective

  18. Fast Decentralized Averaging via Multi-scale Gossip

    Science.gov (United States)

    Tsianos, Konstantinos I.; Rabbat, Michael G.

We are interested in the problem of computing the average consensus in a distributed fashion on random geometric graphs. We describe a new algorithm called Multi-scale Gossip, which employs a hierarchical decomposition of the graph to partition the computation into tractable sub-problems. Using only pairwise messages of fixed size that travel at most O(n^{1/3}) hops, our algorithm is robust and has a communication cost of O(n loglog n log ɛ^{-1}) transmissions, which is order-optimal up to the logarithmic factor in n. Simulated experiments verify the good expected performance on graphs of many thousands of nodes.
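For contrast with the hierarchical Multi-scale Gossip algorithm, the baseline randomized pairwise gossip scheme it improves upon is easy to sketch. Node values converge to the global average while the sum over all nodes is preserved at every step:

```python
import random

def pairwise_gossip(values, edges, rounds=20000, seed=0):
    """Baseline randomized pairwise gossip: repeatedly pick a random edge
    and replace both endpoint values by their average. Each exchange
    preserves the sum, so all values converge to the global mean."""
    x = list(values)
    rng = random.Random(seed)
    for _ in range(rounds):
        i, j = rng.choice(edges)
        avg = (x[i] + x[j]) / 2.0
        x[i] = x[j] = avg
    return x
```

On poorly connected topologies such as rings, this baseline mixes slowly; the hierarchical decomposition in Multi-scale Gossip is precisely what cuts the number of transmissions to near-linear in n.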

  19. New homotopy analysis transform method for solving the discontinued problems arising in nanotechnology

    International Nuclear Information System (INIS)

    Khader, M. M.; Kumar, Sunil; Abbasbandy, S.

    2013-01-01

We present a new reliable analytical study for solving the discontinued problems arising in nanotechnology. Such problems are presented as nonlinear differential-difference equations. The proposed method is based on the Laplace transform combined with the homotopy analysis method (HAM). This method is a powerful tool for solving a wide range of problems. The technique provides a series of functions which may converge to the exact solution of the problem. The obtained solution shows good agreement with some well-known results.

  20. Multi-Stage Transportation Problem With Capacity Limit

    OpenAIRE

    I. Brezina; Z. Čičková; J. Pekár; M. Reiff

    2010-01-01

The classical transportation problem can be generalized in several ways in practice. Related problems, such as the multi-commodity transportation problem, transportation problems with different kinds of vehicles, multi-stage transportation problems, and the transportation problem with capacity limit, extend the classical transportation problem with additional special conditions. For solving such problems many optimization techniques (dynamic programming, linear programming, special algor...

  1. Multi-scale method for the resolution of the neutronic kinetics equations

    International Nuclear Information System (INIS)

    Chauvet, St.

    2008-10-01

In this PhD thesis, in order to improve the time/precision ratio of numerical simulation calculations, we investigate multi-scale techniques for the resolution of the reactor kinetics equations. We focus on the mixed dual diffusion approximation and the quasi-static methods. We introduce a space dependency for the amplitude function, which depends only on the time variable in the standard quasi-static context. With this new factorization, we develop two mixed dual problems which can be solved with CEA's solver MINOS. An algorithm is implemented that performs the resolution of these problems defined on different scales (for time and space). We name this approach the Local Quasi-Static method. We present this new multi-scale approach and its implementation. The inherent details of the amplitude and shape treatments are discussed and justified. Results and performances, compared to MINOS, are studied. They illustrate the improvement of the time/precision ratio for kinetics calculations. Furthermore, we open some new possibilities to parallelize computations with MINOS. For the future, we also outline some avenues for improvement based on adaptive scales. (author)

  2. Energy-Efficient Scheduling Problem Using an Effective Hybrid Multi-Objective Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Lvjiang Yin

    2016-12-01

Nowadays, manufacturing enterprises face the challenge of just-in-time (JIT) production and energy saving. Therefore, the study of JIT production and energy consumption is necessary and important in manufacturing sectors. Moreover, energy saving can be attained by operational methods and by switching idle machines off and on, which also increases the complexity of problem solving. Thus, most researchers still focus on small-scale problems with one objective in a single-machine environment. However, the scheduling problem is a multi-objective optimization problem in real applications. In this paper, a single-machine scheduling model with controllable processing and sequence-dependent setup times is developed for minimizing the total earliness/tardiness (E/T), cost, and energy consumption simultaneously. An effective hybrid multi-objective evolutionary algorithm, called the local multi-objective evolutionary algorithm (LMOEA), is presented to tackle this multi-objective scheduling problem. To accommodate the characteristics of the problem, a new solution representation is proposed, which can convert discrete combinatorial problems into continuous problems. Additionally, a multiple local search strategy with a self-adaptive mechanism is introduced into the proposed algorithm to enhance the exploitation ability. The performance of the proposed algorithm is evaluated on instances with comparison to other multi-objective meta-heuristics such as the Nondominated Sorting Genetic Algorithm II (NSGA-II), Strength Pareto Evolutionary Algorithm 2 (SPEA2), Multiobjective Particle Swarm Optimization (OMOPSO), and Multiobjective Evolutionary Algorithm Based on Decomposition (MOEA/D). Experimental results demonstrate that the proposed LMOEA algorithm outperforms its counterparts on this kind of scheduling problem.
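Comparing algorithms such as LMOEA and NSGA-II ultimately rests on Pareto dominance between objective vectors (here E/T, cost, and energy, all to be minimized). A minimal dominance filter, shown for illustration only and not taken from the paper:

```python
def dominates(a, b):
    """True if objective vector a dominates b under minimization:
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Multi-objective evolutionary algorithms report exactly such a non-dominated set rather than a single optimum, which is why quality indicators over the front are used for comparison.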

  3. Interactive Approach for Multi-Level Multi-Objective Fractional Programming Problems with Fuzzy Parameters

    Directory of Open Access Journals (Sweden)

    M.S. Osman

    2018-03-01

In this paper, an interactive approach for solving multi-level multi-objective fractional programming (ML-MOFP) problems with fuzzy parameters is presented. The proposed interactive approach extends the work of Shi and Xia (1997). In the first phase, the numerical crisp model of the ML-MOFP problem is developed at a given confidence level without changing the fuzzy nature of the problem. Then, the linear model for the ML-MOFP problem is formulated. In the second phase, the interactive approach simplifies the linear multi-level multi-objective model by converting it into separate multi-objective programming problems. Each separate multi-objective programming problem of the linear model is then solved by the ε-constraint method and the concept of satisfactoriness. Finally, illustrative examples and comparisons with previous approaches are used to demonstrate the feasibility of the proposed approach.

  4. Single string planning problem arising in liner shipping industries: A heuristic approach

    DEFF Research Database (Denmark)

    Gelareh, Shahin; Neamatian Monemi, Rahimeh; Mahey, Philippe

    2013-01-01

    We propose an efficient heuristic approach for solving instances of the Single String Planning Problem (SSPP) arising in the liner shipping industry. In the SSPP a Liner Service Provider (LSP) only revises one of its many operational strings, and it is assumed that the other strings are unchangea...

  5. Scaling Phenomena in Desalination With Multi Stage Flash Distillation (MSF)

    International Nuclear Information System (INIS)

    Siti-Alimah

    2006-01-01

An assessment of scaling phenomena in MSF desalination has been carried out. Scale is one of the predominant problems in multi-stage flash (MSF) desalination installations. The main types of scale in MSF are calcium carbonate (CaCO3), magnesium hydroxide (Mg(OH)2) and calcium sulphate (CaSO4). CaCO3 and Mg(OH)2 scales result from the thermal decomposition of the bicarbonate ion, whereas calcium sulphate scale results from the reaction of calcium and sulphate ions present in seawater. The rate of scale formation in seawater depends on temperature, pH, concentration of ions, supersaturation of the solution, nucleation and diffusion. Scale in an MSF installation can occur inside heat exchanger tubes, brine heater tubes, water boxes, on the face of tube sheets and on demister pads. Scaling reduces the effectiveness (production and heat consumption) of the process. To avoid the reductions in performance caused by scale precipitation, desalination units employ scale control. To control this scaling problem, the following methods can be used: acid, additives (scale inhibitors) and mechanical cleaning. Stoichiometric amounts of acid must be added to the seawater, because excess acid increases corrosion problems. The use of scale inhibitors such as polyphosphates, phosphonates, polyacrylates and polymaleates has both advantages and disadvantages. (author)

  6. A geometrical multi-scale numerical method for coupled hygro-thermo-mechanical problems in photovoltaic laminates.

    Science.gov (United States)

    Lenarda, P; Paggi, M

A comprehensive computational framework based on the finite element method for the simulation of coupled hygro-thermo-mechanical problems in photovoltaic laminates is herein proposed. While the thermo-mechanical problem takes place in the three-dimensional space of the laminate, moisture diffusion occurs in a two-dimensional domain represented by the polymeric layers and by the vertical channel cracks in the solar cells. Therefore, a geometrical multi-scale solution strategy is pursued by solving the partial differential equations governing heat transfer and thermo-elasticity in the three-dimensional space, and the partial differential equation for moisture diffusion in the two-dimensional domains. By exploiting a staggered scheme, the thermo-mechanical problem is solved first via a fully implicit solution scheme in space and time, with a specific treatment of the polymeric layers as zero-thickness interfaces whose constitutive response is governed by a novel thermo-visco-elastic cohesive zone model based on fractional calculus. Temperature and relative displacements along the domains where moisture diffusion takes place are then projected to the finite element model of diffusion, coupled with the thermo-mechanical problem by the temperature- and crack-opening-dependent diffusion coefficient. The application of the proposed method to photovoltaic modules pinpoints two important physical aspects: (i) moisture diffusion in humidity-freeze tests with a temperature-dependent diffusivity is a much slower process than in the case of a constant diffusion coefficient; (ii) channel cracks through silicon solar cells significantly enhance moisture diffusion and electric degradation, as confirmed by experimental tests.

  7. Radioisotopes: problems of responsibility arising from medicine

    International Nuclear Information System (INIS)

    Dupon, Michel.

    1978-09-01

Radioisotopes have brought about great progress in the battle against illnesses of mainly tumoral origin, whether in diagnosis (nuclear medicine) or in treatment (medical radiotherapy). They are important enough, therefore, to warrant investigation. Such a study is attempted here, with special emphasis on the medico-legal problems arising from the medical use of radioisotopes, at a time when medical responsibility proceedings are being brought more and more often. It is hoped that this study on medical responsibility in the use of radioisotopes will have shown: that the use of radioisotopes for either diagnosis or therapy constitutes a major branch of medicine; and that this importance implies an awareness by the practitioner of a vast responsibility, especially in law, where legislation ensuring protection as strict as that in the field of ionizing radiation is lacking. The civil responsibility of doctors who use radioisotopes remains to be defined, since for want of adequate jurisprudence we are reduced to hypotheses based on general principles [fr

  8. Biology meets Physics: Reductionism and Multi-scale Modeling of Morphogenesis

    DEFF Research Database (Denmark)

    Green, Sara; Batterman, Robert

    2017-01-01

A common reductionist assumption is that macro-scale behaviors can be described "bottom-up" if only sufficient details about lower-scale processes are available. The view that an "ideal" or "fundamental" physics would be sufficient to explain all macro-scale phenomena has been met with criticism from philosophers of biology. Specifically, scholars have pointed to the impossibility of deducing biological explanations from physical ones, and to the irreducible nature of distinctively biological processes such as gene regulation and evolution. This paper takes a step back in asking whether bottom-up modeling is feasible even when modeling simple physical systems across scales. By comparing examples of multi-scale modeling in physics and biology, we argue that the "tyranny of scales" problem presents a challenge to reductive explanations in both physics and biology. The problem refers to the scale... ...modeling in developmental biology. In such contexts, the relation between models at different scales and from different disciplines is neither reductive nor completely autonomous, but interdependent.

  9. Nuclear waste management and problems arising from constitutional law

    International Nuclear Information System (INIS)

    Rauschning, D.

    1983-01-01

The author discusses the problems arising in the field of nuclear waste management from constitutional law. In particular, the difficulties emanating from the conflict between the provisions of section 9a of the Atomic Energy Act and the provisions of constitutional law are dealt with in detail, referring to the monograph of H. Hofmann, 'Legal aspects of nuclear waste management'. The author comes to the conclusion that the requirements laid down in sections 9a-9c of the Atomic Energy Act are in agreement with the Basic Law. There is, he says, no unreasonable risk for future generations, as the provisions of nuclear law provide for sufficient safety of the sites and equipment selected for the final storage of nuclear waste, ensuring that radioactive leakage is excluded over long periods of time. In the second part of his lecture, the author discusses the problem of competency and delegation of authority with regard to the reprocessing of radioactive waste. (BW) [de

  10. THE MULTIPLE CHOICE PROBLEM WITH INTERACTIONS BETWEEN CRITERIA

    Directory of Open Access Journals (Sweden)

    Luiz Flavio Autran Monteiro Gomes

    2015-12-01

An important problem in Multi-Criteria Decision Analysis arises when one must select at least two alternatives at the same time. This can be denoted as a multiple choice problem. In other words, instead of evaluating each of the alternatives separately, they must be combined into groups of n alternatives, where n ≥ 2. When the multiple choice problem must be solved under multiple criteria, the result is a multi-criteria multiple choice problem. In this paper, it is shown through examples how this problem can be tackled on a bipolar scale. The Choquet integral is used to account for interactions between criteria. A numerical application example is conducted using data from SEBRAE-RJ, a non-profit private organization that has the mission of promoting competitiveness, sustainable development and entrepreneurship in the state of Rio de Janeiro, Brazil. The paper closes with suggestions for future research.
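The discrete Choquet integral aggregates criterion scores with respect to a capacity (a monotone set function), which is how interactions between criteria are captured. A small sketch with an invented two-criterion capacity, not the SEBRAE-RJ data from the paper; for an additive capacity it reduces to a weighted mean:

```python
def choquet_integral(scores, capacity):
    """Discrete Choquet integral of criterion scores with respect to a
    capacity given as a dict mapping frozensets of criterion indices
    to values in [0, 1]. Uses the standard formula
    C = sum_k (x_(k) - x_(k-1)) * capacity(A_(k)), x_(0) = 0."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i])   # ascending scores
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        coalition = frozenset(order[k:])   # criteria scoring >= scores[i]
        total += (scores[i] - prev) * capacity[coalition]
        prev = scores[i]
    return total
```

With a superadditive capacity (coalition worth more than the sum of its parts), complementary criteria are rewarded when both score well, which a plain weighted sum cannot express.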

  11. Multi-scale diffuse interface modeling of multi-component two-phase flow with partial miscibility

    Science.gov (United States)

    Kou, Jisheng; Sun, Shuyu

    2016-08-01

    In this paper, we introduce a diffuse interface model to simulate multi-component two-phase flow with partial miscibility based on a realistic equation of state (e.g. Peng-Robinson equation of state). Because of partial miscibility, thermodynamic relations are used to model not only interfacial properties but also bulk properties, including density, composition, pressure, and realistic viscosity. As far as we know, this effort is the first time to use diffuse interface modeling based on equation of state for modeling of multi-component two-phase flow with partial miscibility. In numerical simulation, the key issue is to resolve the high contrast of scales from the microscopic interface composition to macroscale bulk fluid motion since the interface has a nanoscale thickness only. To efficiently solve this challenging problem, we develop a multi-scale simulation method. At the microscopic scale, we deduce a reduced interfacial equation under reasonable assumptions, and then we propose a formulation of capillary pressure, which is consistent with macroscale flow equations. Moreover, we show that Young-Laplace equation is an approximation of this capillarity formulation, and this formulation is also consistent with the concept of Tolman length, which is a correction of Young-Laplace equation. At the macroscopical scale, the interfaces are treated as discontinuous surfaces separating two phases of fluids. Our approach differs from conventional sharp-interface two-phase flow model in that we use the capillary pressure directly instead of a combination of surface tension and Young-Laplace equation because capillarity can be calculated from our proposed capillarity formulation. A compatible condition is also derived for the pressure in flow equations. Furthermore, based on the proposed capillarity formulation, we design an efficient numerical method for directly computing the capillary pressure between two fluids composed of multiple components. 
Finally, numerical tests

  12. Multi-scale diffuse interface modeling of multi-component two-phase flow with partial miscibility

    KAUST Repository

    Kou, Jisheng

    2016-05-10

In this paper, we introduce a diffuse interface model to simulate multi-component two-phase flow with partial miscibility based on a realistic equation of state (e.g., the Peng-Robinson equation of state). Because of partial miscibility, thermodynamic relations are used to model not only interfacial properties but also bulk properties, including density, composition, pressure, and realistic viscosity. To the best of our knowledge, this is the first use of diffuse interface modeling based on an equation of state for multi-component two-phase flow with partial miscibility. In numerical simulation, the key issue is to resolve the high contrast of scales from the microscopic interface composition to the macroscale bulk fluid motion, since the interface is only nanometers thick. To solve this challenging problem efficiently, we develop a multi-scale simulation method. At the microscopic scale, we deduce a reduced interfacial equation under reasonable assumptions and propose a formulation of capillary pressure that is consistent with the macroscale flow equations. Moreover, we show that the Young-Laplace equation is an approximation of this capillarity formulation, and that the formulation is also consistent with the concept of the Tolman length, a correction to the Young-Laplace equation. At the macroscopic scale, the interfaces are treated as discontinuous surfaces separating two phases of fluid. Our approach differs from conventional sharp-interface two-phase flow models in that we use the capillary pressure directly, rather than a combination of surface tension and the Young-Laplace equation, because capillarity can be calculated from our proposed formulation. A compatible condition is also derived for the pressure in the flow equations. Furthermore, based on the proposed capillarity formulation, we design an efficient numerical method for directly computing the capillary pressure between two fluids composed of multiple components. 
Finally, numerical tests
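The relation between the Young-Laplace equation and its Tolman-length correction, mentioned in the abstract above, can be sketched numerically. The surface tension and Tolman length below are illustrative assumptions, not values from the paper, and the paper's own capillarity formulation is not reproduced here:

```python
def young_laplace(sigma0, r):
    # Young-Laplace capillary pressure across a spherical interface of radius r
    return 2.0 * sigma0 / r

def tolman_corrected(sigma0, r, delta):
    # first-order Tolman correction: sigma(r) = sigma0 / (1 + 2*delta/r),
    # which reduces to Young-Laplace as delta/r -> 0
    return 2.0 * (sigma0 / (1.0 + 2.0 * delta / r)) / r

SIGMA0 = 0.072   # N/m, roughly water/air at room temperature (assumed)
DELTA = 1e-10    # m, a typical order of magnitude for a Tolman length (assumed)

p_yl = young_laplace(SIGMA0, 1e-8)             # 10 nm droplet
p_tc = tolman_corrected(SIGMA0, 1e-8, DELTA)
```

For a 10 nm droplet the correction lowers the capillary pressure by about 2%; for micron-sized droplets the two formulas agree to within 0.1%, which is why the correction matters only at the nanoscale interface thicknesses the abstract emphasizes.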

  13. Optimal unit sizing for small-scale integrated energy systems using multi-objective interval optimization and evidential reasoning approach

    International Nuclear Information System (INIS)

    Wei, F.; Wu, Q.H.; Jing, Z.X.; Chen, J.J.; Zhou, X.X.

    2016-01-01

This paper proposes a comprehensive framework, comprising a multi-objective interval optimization model and an evidential reasoning (ER) approach, to solve the unit sizing problem of small-scale integrated energy systems with uncertain wind and solar energies integrated. In the multi-objective interval optimization model, interval variables are introduced to tackle the uncertainties of the optimization problem. To simultaneously consider the cost and risk of a business investment, the average and deviation of the life cycle cost (LCC) of the integrated energy system are formulated. In order to solve the problem, a novel multi-objective optimization algorithm, MGSOACC (multi-objective group search optimizer with adaptive covariance matrix and chaotic search), is developed, employing an adaptive covariance matrix to make the search strategy adaptive and applying chaotic search to maintain the diversity of the group. Furthermore, the ER approach is applied to deal with the multiple interests of an investor at the business decision-making stage and to determine the final unit sizing solution from among the Pareto-optimal solutions. This paper reports simulation results obtained for a small-scale direct district heating system (DH) and a small-scale district heating and cooling system (DHC) optimized by the proposed framework. The results demonstrate the superiority of the multi-objective interval optimization model and the ER approach in tackling the unit sizing problem of integrated energy systems with uncertain wind and solar energies integrated. - Highlights: • Cost and risk of investment in small-scale integrated energy systems are considered. • A multi-objective interval optimization model is presented. • A novel multi-objective optimization algorithm (MGSOACC) is proposed. • The evidential reasoning (ER) approach is used to obtain the final optimal solution. • The MGSOACC and ER can tackle the unit sizing problem efficiently.
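The interval treatment of life cycle cost can be illustrated with a toy calculation: an uncertain quantity enters as an interval, and the two objectives become the midpoint (average) and radius (deviation) of the resulting LCC interval. The cost structure and figures below are invented for illustration; the paper's actual LCC model is far richer:

```python
def interval_lcc(capex, opex_per_kwh, energy_lo, energy_hi, years=20):
    # uncertain annual energy throughput enters as the interval
    # [energy_lo, energy_hi]; LCC then becomes an interval as well
    lcc_lo = capex + years * opex_per_kwh * energy_lo
    lcc_hi = capex + years * opex_per_kwh * energy_hi
    # the two objectives: average (interval midpoint) and deviation (radius)
    return (lcc_lo + lcc_hi) / 2.0, (lcc_hi - lcc_lo) / 2.0

avg, dev = interval_lcc(capex=1000.0, opex_per_kwh=0.02,
                        energy_lo=1000.0, energy_hi=1200.0)
```

A multi-objective optimizer such as the paper's MGSOACC would then trade off `avg` (expected cost) against `dev` (investment risk).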

  14. An augmented Lagrangian multi-scale dictionary learning algorithm

    Directory of Open Access Journals (Sweden)

    Ye Meng

    2011-01-01

Learning overcomplete dictionaries for sparse signal representation has attracted many researchers in recent years, but most existing approaches share a serious drawback: they are prone to becoming trapped in local minima. In this article, we present a novel augmented Lagrangian multi-scale dictionary learning algorithm (ALM-DL), which first recasts the constrained dictionary learning problem into an augmented Lagrangian (AL) scheme, then updates the dictionary after each inner iteration of the scheme, during which a majorization-minimization technique is employed to solve the inner subproblem. Refining the dictionary from low scale to high makes the proposed method less dependent on the initial dictionary, thus helping it avoid local optima. Numerical tests on synthetic data and denoising applications on real images demonstrate the superior performance of the proposed approach.
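The majorization-minimization technique used for the inner subproblem can be illustrated by its best-known instance, an ISTA-style step for the l1-regularized sparse coding problem. This is a generic sketch of one MM step, not the authors' ALM-DL update:

```python
def ista_step(D, y, a, lam, L):
    # one majorization-minimization (ISTA) step for
    #   min_a 0.5 * ||y - D a||^2 + lam * ||a||_1
    # where L bounds the Lipschitz constant of the smooth part
    m, n = len(D), len(D[0])
    resid = [y[i] - sum(D[i][j] * a[j] for j in range(n)) for i in range(m)]
    grad = [sum(D[i][j] * resid[i] for i in range(m)) for j in range(n)]
    v = [a[j] + grad[j] / L for j in range(n)]   # gradient step
    # soft-thresholding: the closed-form minimizer of the majorizer
    return [max(abs(x) - lam / L, 0.0) * (1.0 if x >= 0 else -1.0) for x in v]

# with D the identity, a single step lands on the soft-thresholded solution
a1 = ista_step([[1.0, 0.0], [0.0, 1.0]], [3.0, 0.5], [0.0, 0.0], 1.0, 1.0)
```

In a full dictionary learning loop, steps like this alternate with dictionary updates; the article's multi-scale refinement wraps such iterations from coarse to fine scales.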

  15. Equivalent Electromagnetic Constants for Microwave Application to Composite Materials for the Multi-Scale Problem

    Directory of Open Access Journals (Sweden)

    Keisuke Fujisaki

    2013-11-01

To connect models at different scales in the multi-scale problem of microwave applications, equivalent material constants were determined numerically from a three-dimensional electromagnetic field computation that takes both eddy currents and displacement currents into account. A volume-averaged method and a standing-wave method were used to derive the equivalent material constants; water particles and aluminum particles were used as the composite materials, and consumed electrical power was used for the evaluation. For water particles, both methods yield the same equivalent material constants, and the precise model (micro-model) and the homogeneous model (macro-model) give the same electrical power. For aluminum particles, however, the two methods yield dissimilar equivalent material constants, and the two models give different electric power. The differing electromagnetic phenomena stem from the expression of the eddy current. For a small electrical conductivity such as that of water, the macro-current flowing in the macro-model and the micro-current flowing in the micro-model express the same electromagnetic phenomena. For a large electrical conductivity such as that of aluminum, the macro-current and micro-current express different electromagnetic phenomena: the eddy current observed in the micro-model is not captured by the macro-model. Therefore, the equivalent material constants derived from the volume-averaged method and the standing-wave method are applicable to water, with its small electrical conductivity, but not to aluminum, with its large electrical conductivity.
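A minimal version of the volume-averaged method is a linear mixing rule over the complex permittivity, which is exactly the kind of homogenisation the abstract's conclusion says can fail for highly conductive inclusions such as aluminum. The material figures below are illustrative assumptions, not the paper's data:

```python
import math

def complex_permittivity(eps_r, sigma, freq):
    # relative permittivity with conductive loss folded in:
    #   eps = eps_r - j * sigma / (omega * eps_0)
    eps0 = 8.854e-12
    omega = 2.0 * math.pi * freq
    return complex(eps_r, -sigma / (omega * eps0))

def volume_averaged(f, eps_inclusion, eps_matrix):
    # simplest homogenisation rule: arithmetic mixing by volume fraction f
    return f * eps_inclusion + (1.0 - f) * eps_matrix

eps_water = complex_permittivity(78.0, 5.5e-6, 2.45e9)  # water-like (assumed)
eps_air = complex(1.0, 0.0)
eps_eff = volume_averaged(0.3, eps_water, eps_air)      # 30% water in air
```

For a highly conductive inclusion, this rule ignores the micro-scale eddy currents inside each particle, which is the failure mode the paper quantifies for aluminum.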

  16. On some nonlinear problems arising in the physics of ionized gases

    International Nuclear Information System (INIS)

    Hilhorst-Goldman, D.

    1981-01-01

The author reports results obtained by rigorous analysis of a nonlinear differential equation for the electron density n_e in a specific type of electrical discharge. The problem is essentially two-dimensional. She discusses in particular the escape of electrons to infinity above a critical temperature and the boundary layer exhibited by n_e near zero temperature. A singular boundary value problem arising in a pre-breakdown gas discharge is discussed. A Coulomb gas is considered in a special experimental situation: the pre-breakdown gas discharge between two electrodes. The equation for the negative charge density can be formulated as a nonlinear parabolic equation degenerate at the origin. The existence and uniqueness of the solution are proved, as well as the asymptotic stability of its unique steady state. Some results are also given about the rate of convergence. The variational characterisation of the limit solution of a singular perturbation problem and the variational analysis of a perturbed free boundary problem are also considered. (Auth./C.F.)

  17. Integrated multi-scale modelling and simulation of nuclear fuels

    International Nuclear Information System (INIS)

    Valot, C.; Bertolus, M.; Masson, R.; Malerba, L.; Rachid, J.; Besmann, T.; Phillpot, S.; Stan, M.

    2015-01-01

This chapter discusses the objectives, implementation, and integration of multi-scale modelling approaches applied to nuclear fuel materials. We first show why a multi-scale modelling approach is required, given the nature of the materials and the phenomena involved under irradiation. We then present the multiple facets of the multi-scale modelling approach, while giving some recommendations on its application. We also show that multi-scale modelling must be coupled with appropriate multi-scale experiments and characterisation. Finally, we demonstrate how multi-scale modelling can contribute to solving technology issues. (authors)

  18. What is at stake in multi-scale approaches

    International Nuclear Information System (INIS)

    Jamet, Didier

    2008-01-01

Full text of publication follows: Multi-scale approaches amount to analysing physical phenomena at small space and time scales in order to model their effects at larger scales. This approach is very general in physics and engineering; one of the best examples of its success is certainly statistical physics, which allows one to recover classical thermodynamics and to determine the limits of its applicability. Getting access to small-scale information aims at reducing model uncertainty, but it has a cost: fine-scale models may be more complex than larger-scale models, and their resolution may require the development of specific and possibly expensive methods, numerical simulation techniques, and experiments. For instance, in applications related to nuclear engineering, using computational fluid dynamics instead of cruder models is a formidable engineering challenge because it requires resorting to high-performance computing. Likewise, in two-phase flow modeling, the techniques of direct numerical simulation, where all interfaces are tracked individually and all turbulence scales are captured, are becoming mature enough to be considered for averaged modeling purposes. However, resolving small-scale problems is a necessary step in a multi-scale approach, but not a sufficient one. An important modeling challenge is to determine how to treat small-scale data in order to extract relevant information for larger-scale models. For some applications, such as single-phase turbulence or transfers in porous media, this up-scaling approach is well understood and is now used rather routinely. In two-phase flow modeling, however, the up-scaling approach is not as mature, and specific issues must be addressed that raise fundamental questions. This will be discussed and illustrated. (author)

  19. Multi-scale approximation of Vlasov equation

    International Nuclear Information System (INIS)

    Mouton, A.

    2009-09-01

One of the most important difficulties in the numerical simulation of magnetized plasmas is the existence of multiple time and space scales, which can be very different. In order to produce good simulations of these multi-scale phenomena, it is advisable to develop models and numerical methods adapted to these problems. Nowadays, the two-scale convergence theory introduced by G. Nguetseng and G. Allaire is one of the tools that can be used to rigorously derive multi-scale limits and to obtain new limit models that can be discretized with a usual numerical method: such a procedure is called a two-scale numerical method. The purpose of this thesis is to develop a two-scale semi-Lagrangian method and to apply it to a gyrokinetic Vlasov-like model in order to simulate a plasma subjected to a large external magnetic field. However, the physical phenomena to be simulated are quite complex, and many open questions remain about the behaviour of a two-scale numerical method, especially when such a method is applied to a nonlinear model. In a first part, we develop a two-scale finite volume method and apply it to the weakly compressible 1D isentropic Euler equations. Even if this mathematical context is far from a Vlasov-like model, it is a relatively simple framework in which to study the behaviour of a two-scale numerical method applied to a nonlinear model. In a second part, we develop a two-scale semi-Lagrangian method for the two-scale model developed by E. Frenod, F. Salvarani and E. Sonnendrucker in order to simulate axisymmetric charged particle beams. Even if the physical phenomena studied are quite different from magnetic fusion experiments, the mathematical context of the one-dimensional paraxial Vlasov-Poisson model is very simple for establishing the basis of a two-scale semi-Lagrangian method. In a third part, we use the two-scale convergence theory in order to improve M. 
Bostan's weak-* convergence results about the finite

  20. Approximating multi-objective scheduling problems

    NARCIS (Netherlands)

    Dabia, S.; Talbi, El-Ghazali; Woensel, van T.; Kok, de A.G.

    2013-01-01

    In many practical situations, decisions are multi-objective by nature. In this paper, we propose a generic approach to deal with multi-objective scheduling problems (MOSPs). The aim is to determine the set of Pareto solutions that represent the interactions between the different objectives. Due to
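Once candidate schedules have been evaluated, determining the set of Pareto solutions reduces to filtering out dominated objective vectors. A minimal, generic sketch for minimization objectives (not tied to the paper's approximation scheme):

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly
    # better in at least one (minimization convention)
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # keep exactly those points not dominated by any other point
    return [p for p in points if not any(dominates(q, p) for q in points)]

# e.g. (makespan, total tardiness) pairs for five candidate schedules
front = pareto_front([(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)])
```

This brute-force filter is quadratic in the number of points; MOSP algorithms aim to generate (an approximation of) this front without enumerating all schedules.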

  1. Multi-scale and multi-orientation medical image analysis

    NARCIS (Netherlands)

    Haar Romenij, ter B.M.; Deserno, T.M.

    2011-01-01

Inspired by the multi-scale and multi-orientation mechanisms recognized in the first stages of our visual system, this chapter gives a tutorial overview of the basic principles. Images are discrete, measured data. The optimal aperture for an observation with as few artefacts as possible is derived

  2. The Dynamic Multi-Period Vehicle Routing Problem

    DEFF Research Database (Denmark)

    Wen, Min; Cordeau, Jean-Francois; Laporte, Gilbert

This paper considers the Dynamic Multi-Period Vehicle Routing Problem, which deals with the distribution of orders from a depot to a set of customers over a multi-period time horizon. Customer orders and their feasible service periods are dynamically revealed over time. The objectives are to minimize total travel costs and customer waiting, and to balance the daily workload over the planning horizon. This problem originates from a large distributor operating in Sweden. It is modeled as a mixed integer linear program, and solved by means of a three-phase heuristic that works over a rolling planning horizon. The multi-objective aspect of the problem is handled through a scalar technique approach. Computational results show that our solutions improve upon those of the Swedish distributor.

  3. Scaling and criticality in a stochastic multi-agent model of a financial market

    Science.gov (United States)

    Lux, Thomas; Marchesi, Michele

    1999-02-01

    Financial prices have been found to exhibit some universal characteristics that resemble the scaling laws characterizing physical systems in which large numbers of units interact. This raises the question of whether scaling in finance emerges in a similar way - from the interactions of a large ensemble of market participants. However, such an explanation is in contradiction to the prevalent `efficient market hypothesis' in economics, which assumes that the movements of financial prices are an immediate and unbiased reflection of incoming news about future earning prospects. Within this hypothesis, scaling in price changes would simply reflect similar scaling in the `input' signals that influence them. Here we describe a multi-agent model of financial markets which supports the idea that scaling arises from mutual interactions of participants. Although the `news arrival process' in our model lacks both power-law scaling and any temporal dependence in volatility, we find that it generates such behaviour as a result of interactions between agents.
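The interaction mechanism described above, agents switching between trend-following and fundamentalist strategies, can be caricatured in a few lines. This is a drastically simplified sketch inspired by that setup, not the Lux-Marchesi model itself; all coefficients are arbitrary illustrative choices:

```python
import random

def simulate(n_steps=2000, n_agents=100, seed=0):
    # toy chartist/fundamentalist market with profit-driven switching
    rng = random.Random(seed)
    fundamental, price, prev = 10.0, 10.0, 10.0
    chartists = n_agents // 2
    returns = []
    for _ in range(n_steps):
        trend = price - prev
        fundamentalists = n_agents - chartists
        # excess demand: chartists chase the trend, fundamentalists revert
        demand = chartists * trend + fundamentalists * 0.1 * (fundamental - price)
        prev = price
        price = max(0.01, price + 0.01 * demand + rng.gauss(0.0, 0.01))
        returns.append(price - prev)
        # occasionally an agent switches toward the strategy that just paid off
        if rng.random() < 0.1:
            if trend * (price - prev) > 0:
                chartists = min(n_agents - 1, chartists + 1)
            else:
                chartists = max(1, chartists - 1)
    return returns

rets = simulate()
```

Checking for the paper's actual findings (power-law tails, volatility clustering despite Gaussian news) would require statistical analysis well beyond this sketch.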

  4. Improved convergence of gradient-based reconstruction using multi-scale models

    International Nuclear Information System (INIS)

    Cunningham, G.S.; Hanson, K.M.; Koyfman, I.

    1996-01-01

    Geometric models have received increasing attention in medical imaging for tasks such as segmentation, reconstruction, restoration, and registration. In order to determine the best configuration of the geometric model in the context of any of these tasks, one needs to perform a difficult global optimization of an energy function that may have many local minima. Explicit models of geometry, also called deformable models, snakes, or active contours, have been used extensively to solve image segmentation problems in a non-Bayesian framework. Researchers have seen empirically that multi-scale analysis is useful for convergence to a configuration that is near the global minimum. In this type of analysis, the image data are convolved with blur functions of increasing resolution, and an optimal configuration of the snake is found for each blurred image. The configuration obtained using the highest resolution blur is used as the solution to the global optimization problem. In this article, the authors use explicit models of geometry for a variety of Bayesian estimation problems, including image segmentation, reconstruction and restoration. The authors introduce a multi-scale approach that blurs the geometric model, rather than the image data, and show that this approach turns a global, highly nonquadratic optimization into a sequence of local, approximately quadratic problems that converge to the global minimum. The result is a deterministic, robust, and efficient optimization strategy applicable to a wide variety of Bayesian estimation problems in which geometric models of images are an important component
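The multi-scale idea of smoothing away local minima and then refining can be sketched for a one-dimensional energy. The test function and blur schedule below are invented; the article applies the idea to blurred geometric models in Bayesian estimation, not to 1-D signals:

```python
import math

def box_smooth_argmin(vals, half, lo, hi):
    # argmin of box-smoothed values, restricted to [lo, hi] and to
    # indices where the full smoothing window fits
    best_i, best_v = None, None
    for i in range(max(lo, half), min(hi, len(vals) - 1 - half) + 1):
        v = sum(vals[i - half:i + half + 1]) / (2 * half + 1)
        if best_v is None or v < best_v:
            best_i, best_v = i, v
    return best_i

def coarse_to_fine_min(f, xs, half_widths):
    vals = [f(x) for x in xs]
    lo, hi = 0, len(xs) - 1
    for half in half_widths:            # decreasing blur radii
        i = box_smooth_argmin(vals, half, lo, hi)
        lo, hi = i - half, i + half     # shrink the search window
    # final pass on the raw, unblurred values
    i_star = min(range(max(lo, 0), min(hi, len(xs) - 1) + 1),
                 key=lambda i: vals[i])
    return xs[i_star]

# quadratic bowl plus oscillation: many local minima, one global minimum
energy = lambda x: 0.05 * (x - 60.0) ** 2 + 2.0 * math.sin(x)
xs = [40.0 + 0.1 * i for i in range(401)]
x_star = coarse_to_fine_min(energy, xs, [31, 10, 3])
```

Heavy blurring leaves only the bowl, locating the right basin; successive refinements then converge to the true global minimum near x ≈ 61.2, which a purely local search started far away would miss.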

  5. A Holistic Approach with Special Reference to Heat Transfer in Multi-Component Porous Media Systems

    Directory of Open Access Journals (Sweden)

    A. K. Borah

    2010-06-01

Problems involving multiphase flow, heat transfer, and multi-component mass transport in porous media arise in a number of scientific and engineering disciplines. Important technological applications include thermally enhanced oil recovery, subsurface contamination and remediation, capillary-assisted thermal technologies, drying processes, thermal insulation materials, multiphase trickle-bed reactors, nuclear reactor safety analysis, high-level radioactive waste repositories, and geothermal energy exploitation. In this paper we demonstrate that multiphase flows in porous media are driven by gravitational, capillary, and viscous forces, with gravity causing phase migration in the direction of the gravitational field. Microscopic modelling efforts were made to accurately incorporate microscopic interfacial phenomena, and multi-scale modelling approaches were attempted in order to transmit information across length scales ranging from the micro-scale through the meso-scale and macro-scale to the field scale.

  6. Tuneable resolution as a systems biology approach for multi-scale, multi-compartment computational models.

    Science.gov (United States)

    Kirschner, Denise E; Hunt, C Anthony; Marino, Simeone; Fallahi-Sichani, Mohammad; Linderman, Jennifer J

    2014-01-01

    The use of multi-scale mathematical and computational models to study complex biological processes is becoming increasingly productive. Multi-scale models span a range of spatial and/or temporal scales and can encompass multi-compartment (e.g., multi-organ) models. Modeling advances are enabling virtual experiments to explore and answer questions that are problematic to address in the wet-lab. Wet-lab experimental technologies now allow scientists to observe, measure, record, and analyze experiments focusing on different system aspects at a variety of biological scales. We need the technical ability to mirror that same flexibility in virtual experiments using multi-scale models. Here we present a new approach, tuneable resolution, which can begin providing that flexibility. Tuneable resolution involves fine- or coarse-graining existing multi-scale models at the user's discretion, allowing adjustment of the level of resolution specific to a question, an experiment, or a scale of interest. Tuneable resolution expands options for revising and validating mechanistic multi-scale models, can extend the longevity of multi-scale models, and may increase computational efficiency. The tuneable resolution approach can be applied to many model types, including differential equation, agent-based, and hybrid models. We demonstrate our tuneable resolution ideas with examples relevant to infectious disease modeling, illustrating key principles at work. © 2014 The Authors. WIREs Systems Biology and Medicine published by Wiley Periodicals, Inc.

  7. Dynamical scales for multi-TeV top-pair production at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Czakon, Michał [Institut für Theoretische Teilchenphysik und Kosmologie, RWTH Aachen University,Aachen, D-52056 (Germany); Heymes, David; Mitov, Alexander [Cavendish Laboratory, University of Cambridge,Cambridge, CB3 0HE (United Kingdom)

    2017-04-12

    We calculate all major differential distributions with stable top-quarks at the LHC. The calculation covers the multi-TeV range that will be explored during LHC Run II and beyond. Our results are in the form of high-quality binned distributions. We offer predictions based on three different parton distribution function (pdf) sets. In the near future we will make our results available also in the more flexible fastNLO format that allows fast re-computation with any other pdf set. In order to be able to extend our calculation into the multi-TeV range we have had to derive a set of dynamic scales. Such scales are selected based on the principle of fastest perturbative convergence applied to the differential and inclusive cross-section. Many observations from our study are likely to be applicable and useful to other precision processes at the LHC. With scale uncertainty now under good control, pdfs arise as the leading source of uncertainty for TeV top production. Based on our findings, true precision in the boosted regime will likely only be possible after new and improved pdf sets appear. We expect that LHC top-quark data will play an important role in this process.

  8. Homogenization-based interval analysis for structural-acoustic problem involving periodical composites and multi-scale uncertain-but-bounded parameters.

    Science.gov (United States)

    Chen, Ning; Yu, Dejie; Xia, Baizhan; Liu, Jian; Ma, Zhengdong

    2017-04-01

    This paper presents a homogenization-based interval analysis method for the prediction of coupled structural-acoustic systems involving periodical composites and multi-scale uncertain-but-bounded parameters. In the structural-acoustic system, the macro plate structure is assumed to be composed of a periodically uniform microstructure. The equivalent macro material properties of the microstructure are computed using the homogenization method. By integrating the first-order Taylor expansion interval analysis method with the homogenization-based finite element method, a homogenization-based interval finite element method (HIFEM) is developed to solve a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters. The corresponding formulations of the HIFEM are deduced. A subinterval technique is also introduced into the HIFEM for higher accuracy. Numerical examples of a hexahedral box and an automobile passenger compartment are given to demonstrate the efficiency of the presented method for a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters.
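The first-order Taylor interval idea, bounding a response by its value at the interval midpoints plus a gradient-weighted sum of the radii, can be sketched generically. The response function here is a hypothetical stand-in, not the coupled structural-acoustic FE system:

```python
def first_order_interval(f, x0, radii, h=1e-6):
    # first-order Taylor bound: f(x0) +/- sum_i |df/dx_i(x0)| * radius_i,
    # with the gradient estimated by central differences
    y0 = f(x0)
    spread = 0.0
    for i, r in enumerate(radii):
        xp, xm = list(x0), list(x0)
        xp[i] += h
        xm[i] -= h
        spread += abs((f(xp) - f(xm)) / (2.0 * h)) * r
    return y0 - spread, y0 + spread

# stand-in response y = E * t**3 (a plate-stiffness-like expression, assumed),
# with uncertain-but-bounded inputs E in [1.9, 2.1] and t in [2.8, 3.2]
resp = lambda x: x[0] * x[1] ** 3
lo, hi = first_order_interval(resp, [2.0, 3.0], [0.1, 0.2])
```

The paper's subinterval technique refines such bounds by splitting each radius into smaller pieces and taking the union of the resulting intervals, which tightens the first-order approximation.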

  9. A simultaneous facility location and vehicle routing problem arising in health care logistics in the Netherlands

    NARCIS (Netherlands)

    Veenstra, Marjolein; Roodbergen, Kees Jan; Coelho, Leandro C.; Zhu, Stuart X.

    2018-01-01

    This paper introduces a simultaneous facility location and vehicle routing problem that arises in health care logistics in the Netherlands. In this problem, the delivery of medication from a local pharmacy can occur via lockers, from where patients that are within the coverage distance of a locker

  10. Iterative solution of a nonlinear system arising in phase change problems

    International Nuclear Information System (INIS)

    Williams, M.A.

    1987-01-01

We consider several iterative methods for solving the nonlinear system arising from an enthalpy formulation of a phase change problem. We present the formulation of the problem. Implicit discretization of the governing equations results in a mildly nonlinear system at each time step. We discuss solving this system using Jacobi, Gauss-Seidel, and SOR iterations and a new modified preconditioned conjugate gradient (MPCG) algorithm. The new MPCG algorithm and its properties are discussed in detail. Numerical results are presented comparing the performance of the SOR algorithm and the MPCG algorithm with 1-step SSOR preconditioning. The MPCG algorithm exhibits a superlinear rate of convergence, whereas the SOR algorithm exhibits only a linear rate; thus, the MPCG algorithm requires fewer iterations to converge. However, in most cases the SOR algorithm requires less total computation time than the MPCG algorithm. Hence, the SOR algorithm appears to be more appropriate for the class of problems considered. 27 refs., 11 figs
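The flavor of an SOR sweep on a mildly nonlinear system can be shown on a toy 1-D problem, the discretization 2u_i - u_{i-1} - u_{i+1} + u_i^3 = 1 with zero boundary values. This is a hypothetical stand-in for the enthalpy system, not the paper's formulation; a scalar Newton solve handles the nonlinearity at each grid point:

```python
def sor_nonlinear(n=20, omega=1.5, tol=1e-10, max_sweeps=5000):
    u = [0.0] * n
    for sweep in range(1, max_sweeps + 1):
        change = 0.0
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            # local nonlinear equation: 2*u_i + u_i**3 = 1 + left + right
            rhs = 1.0 + left + right
            ui = u[i]
            for _ in range(30):                    # scalar Newton iteration
                ui -= (2.0 * ui + ui ** 3 - rhs) / (2.0 + 3.0 * ui ** 2)
            new = (1.0 - omega) * u[i] + omega * ui  # over-relaxation
            change = max(change, abs(new - u[i]))
            u[i] = new
        if change < tol:
            return u, sweep
    return u, max_sweeps

u, sweeps = sor_nonlinear()
```

As the abstract notes, each SOR sweep is cheap but convergence is linear; Krylov methods such as the paper's MPCG trade more work per iteration for far fewer iterations.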

  11. Multi-scale salient feature extraction on mesh models

    KAUST Repository

    Yang, Yongliang; Shen, ChaoHui

    2012-01-01

We present a new method for extracting multi-scale salient features on meshes. It is based on robust estimation of curvature on multiple scales. The correspondence between salient features and the scale of interest can be established straightforwardly: detailed features appear on small scales, while features carrying more global shape information show up on large scales. We demonstrate that this multi-scale description of features accords with human perception and can further be used in several applications such as feature classification and viewpoint selection. Experiments show that our method is a very helpful multi-scale analysis tool for studying 3D shapes. © 2012 Springer-Verlag.

  12. Study on high density multi-scale calculation technique

    International Nuclear Information System (INIS)

    Sekiguchi, S.; Tanaka, Y.; Nakada, H.; Nishikawa, T.; Yamamoto, N.; Yokokawa, M.

    2004-01-01

To understand the degradation of nuclear materials under irradiation, it is essential to know as much as possible about each phenomenon from multiple scales: the atomic-level micro-scale, the structural macro-scale, and the intermediate levels between them. In this study, aimed at meso-scale materials (100 Å ∼ 2 μm), computational technology bridging the micro- and macro-scales was developed, including modeling and applications based on computational science and technology methods. A grid computing environment for multi-scale calculation was also prepared. The software and the MD (molecular dynamics) stencil for verifying the multi-scale calculation were improved, and their operation was confirmed. (A. Hishinuma)

  13. Development of porous structure simulator for multi-scale simulation of irregular porous catalysts

    International Nuclear Information System (INIS)

    Koyama, Michihisa; Suzuki, Ai; Sahnoun, Riadh; Tsuboi, Hideyuki; Hatakeyama, Nozomu; Endou, Akira; Takaba, Hiromitsu; Kubo, Momoji; Del Carpio, Carlos A.; Miyamoto, Akira

    2008-01-01

Efficient development of highly functional porous materials, used as catalysts in the automobile industry, demands a meticulous knowledge of the nano-scale interface at the electronic and atomistic scale. However, it is often difficult to correlate the microscopic interfacial interactions with macroscopic characteristics of the materials; for instance, to relate the interaction between a precious metal and its support oxide to the long-term sintering properties of the catalyst. Multi-scale computational chemistry approaches can help bridge the gap between micro- and macroscopic characteristics of these materials; however, this type of multi-scale simulation has been difficult to apply, especially to porous materials. To overcome this problem, we have developed a novel mesoscopic approach based on a porous structure simulator. This simulator can automatically construct irregular porous structures on a computer, enabling simulations with complex meso-scale structures. Moreover, in this work we have developed a new method to simulate the long-term sintering properties of metal particles on porous catalysts. Finally, we have applied the method to the simulation of the sintering properties of Pt on an alumina support. This newly developed method has enabled us to propose a multi-scale simulation approach for porous catalysts

  14. Quenching rate for a nonlocal problem arising in the micro-electro mechanical system

    Science.gov (United States)

    Guo, Jong-Shenq; Hu, Bei

    2018-03-01

    In this paper, we study the quenching rate of the solution for a nonlocal parabolic problem which arises in the study of the micro-electro mechanical system. This question is equivalent to the stabilization of the solution to the transformed problem in self-similar variables. First, some a priori estimates are provided. In order to construct a Lyapunov function, due to the lack of time monotonicity property, we then derive some very useful and challenging estimates by a delicate analysis. Finally, with this Lyapunov function, we prove that the quenching rate is self-similar which is the same as the problem without the nonlocal term, except the constant limit depends on the solution itself.

  15. Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Naveed ur Rehman

    2015-05-01

A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding the input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode-mixing and mode-misalignment issues, characterized respectively by a single intrinsic mode function (IMF) containing multiple scales, or by same-indexed IMFs from multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales across multiple channels, thus enabling their comparison at the pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including principal component analysis (PCA), the discrete wavelet transform (DWT), and the non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis-testing approach on our large image dataset to identify statistically significant performance differences.
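The per-sample, per-scale fusion rule that an aligned decomposition enables can be illustrated on 1-D signals with a single smooth/detail split and the common max-absolute rule for the detail layer. This sketch uses a box filter, not EMD/MEMD, so it shows only the fusion step, not the data-adaptive decomposition the paper proposes:

```python
def box_smooth(sig, half=2):
    # crude low-pass layer; a stand-in for the coarse scales of a decomposition
    n = len(sig)
    out = []
    for i in range(n):
        window = sig[max(0, i - half):min(n, i + half + 1)]
        out.append(sum(window) / len(window))
    return out

def fuse(sig_a, sig_b, half=2):
    base_a, base_b = box_smooth(sig_a, half), box_smooth(sig_b, half)
    det_a = [x - y for x, y in zip(sig_a, base_a)]
    det_b = [x - y for x, y in zip(sig_b, base_b)]
    # details: keep the stronger response per sample; bases: average
    det = [a if abs(a) >= abs(b) else b for a, b in zip(det_a, det_b)]
    base = [(a + b) / 2.0 for a, b in zip(base_a, base_b)]
    return [b + d for b, d in zip(base, det)]

sharp = [0.0, 0.0, 1.0, 5.0, 1.0, 0.0, 0.0]
fused = fuse(sharp, box_smooth(sharp))   # fuse a signal with a blurred copy
```

Fusing a signal with itself returns it unchanged, and fusing with a blurred copy retains the sharp detail; MEMD's contribution is that its aligned, same-indexed scales make this comparison meaningful for real multi-channel images.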

  16. Evaluation of the multi-sums for large scale problems

    International Nuclear Information System (INIS)

    Bluemlein, J.; Hasselhuhn, A.; Schneider, C.

    2012-02-01

    A big class of Feynman integrals, in particular the coefficients of their Laurent series expansion w.r.t. the dimension parameter ε, can be transformed to multi-sums over hypergeometric terms and harmonic sums. In this article, we present a general summation method based on difference fields that simplifies these multi-sums by transforming them from inside to outside to representations in terms of indefinite nested sums and products. In particular, we present techniques that assist in the task of simplifying huge expressions of such multi-sums in a completely automatic fashion. The ideas are illustrated on new calculations coming from 3-loop topologies of gluonic massive operator matrix elements containing two fermion lines, which contribute to the transition matrix elements in the variable flavor scheme. (orig.)
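    The harmonic sums mentioned above can be made concrete with a small exact-arithmetic sketch (the function names are ours, and negative-index sign conventions are omitted for simplicity):

```python
from fractions import Fraction

def S(a, n):
    """Single harmonic sum S_a(n) = sum_{k=1}^{n} 1/k^a, computed exactly."""
    return sum(Fraction(1, k ** a) for k in range(1, n + 1))

def S_nested(a, b, n):
    """Nested harmonic sum S_{a,b}(n) = sum_{k=1}^{n} S_b(k) / k^a."""
    return sum(S(b, k) / k ** a for k in range(1, n + 1))
```

As a sanity check, the classical identity S_{1,1}(n) = (S_1(n)^2 + S_2(n))/2 holds term by term in exact rational arithmetic.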

  17. Evaluation of the multi-sums for large scale problems

    Energy Technology Data Exchange (ETDEWEB)

    Bluemlein, J.; Hasselhuhn, A. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Schneider, C. [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation

    2012-02-15

    A big class of Feynman integrals, in particular the coefficients of their Laurent series expansion w.r.t. the dimension parameter ε, can be transformed to multi-sums over hypergeometric terms and harmonic sums. In this article, we present a general summation method based on difference fields that simplifies these multi-sums by transforming them from inside to outside to representations in terms of indefinite nested sums and products. In particular, we present techniques that assist in the task of simplifying huge expressions of such multi-sums in a completely automatic fashion. The ideas are illustrated on new calculations coming from 3-loop topologies of gluonic massive operator matrix elements containing two fermion lines, which contribute to the transition matrix elements in the variable flavor scheme. (orig.)

  18. Multi scales based sparse matrix spectral clustering image segmentation

    Science.gov (United States)

    Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin

    2018-04-01

    In image segmentation, spectral clustering algorithms have to adopt an appropriate scaling parameter to calculate the similarity matrix between pixels, which may have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm greatly increase. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and a sparse matrix. We first devise a new feature extraction method, then extract the features of the image at different scales, and finally use the feature information to construct a sparse similarity matrix, which improves operational efficiency. Compared with the traditional spectral clustering algorithm, image segmentation experiments show that our algorithm has better accuracy and robustness.
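    The paper's specific multi-scale feature extraction is not reproduced here, but the core idea of spectral clustering with a sparse k-nearest-neighbour similarity matrix can be sketched on point data as follows (the scaling parameter `sigma` and the neighbour count are illustrative choices, not the authors' values):

```python
import numpy as np
from scipy.sparse import csr_matrix, identity
from scipy.sparse.linalg import eigsh
from scipy.cluster.vq import kmeans2

def sparse_spectral_clustering(X, k, n_neighbors=8, sigma=1.0, seed=0):
    """Cluster points via a sparse k-NN similarity matrix: only each
    point's nearest neighbours receive a Gaussian affinity, keeping the
    matrix sparse and the eigen-solve cheap."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    rows, cols, vals = [], [], []
    for i in range(n):
        for j in np.argsort(d2[i])[1:n_neighbors + 1]:  # skip self
            rows.append(i)
            cols.append(j)
            vals.append(np.exp(-d2[i, j] / (2.0 * sigma ** 2)))
    W = csr_matrix((vals, (rows, cols)), shape=(n, n))
    W = (W + W.T) / 2.0                      # symmetrize the k-NN graph
    d = np.asarray(W.sum(axis=1)).ravel()
    idx = np.arange(n)
    Dinv = csr_matrix((1.0 / np.sqrt(d), (idx, idx)), shape=(n, n))
    L = identity(n) - Dinv @ W @ Dinv        # normalized graph Laplacian
    _, V = eigsh(L, k=k, which='SM')         # k smallest eigenvectors
    V = V / (np.linalg.norm(V, axis=1, keepdims=True) + 1e-12)
    _, labels = kmeans2(V, k, minit='++', seed=seed)
    return labels
```

Because each row of the affinity matrix has only `n_neighbors` nonzeros, both memory use and the eigen-decomposition cost drop substantially compared with a dense similarity matrix, which is the efficiency argument the abstract makes.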

  19. Transitions of the Multi-Scale Singularity Trees

    DEFF Research Database (Denmark)

    Somchaipeng, Kerawit; Sporring, Jon; Kreiborg, Sven

    2005-01-01

    Multi-Scale Singularity Trees (MSSTs) [10] are multi-scale image descriptors aimed at representing the deep structures of images. Changes in images are directly translated to changes in the deep structures and therefore to transitions in MSSTs. Because MSSTs can be used to represent the deep structure...

  20. Microphysics in Multi-scale Modeling System with Unified Physics

    Science.gov (United States)

    Tao, Wei-Kuo

    2012-01-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance within the multi-scale modeling system will be presented.

  1. Continuous Energy, Multi-Dimensional Transport Calculations for Problem Dependent Resonance Self-Shielding

    International Nuclear Information System (INIS)

    Downar, T.

    2009-01-01

    The overall objective of this work has been to eliminate the approximations used in current resonance treatments by developing continuous-energy, multi-dimensional transport calculations for problem-dependent self-shielding. The work builds on the existing resonance treatment capabilities in the ORNL SCALE code system. Specifically, the methods here utilize the existing continuous-energy SCALE5 module, CENTRM, and the multi-dimensional discrete ordinates solver, NEWT, to develop a new code coupling CENTRM and NEWT. The work addresses specific theoretical limitations in the existing CENTRM resonance treatment, and investigates advanced numerical and parallel computing algorithms for CENTRM and NEWT in order to reduce the computational burden. The result of this work will be a new computer code capable of performing problem-dependent self-shielding analysis for both existing and proposed GEN-IV fuel designs. The objective was to have an immediate impact on the safety analysis of existing reactors through improvements in the calculation of fuel temperature effects, as well as on the analysis of more sophisticated GEN-IV/NGNP systems through improvements in the depletion/transmutation of actinides for Advanced Fuel Cycle Initiatives.

  2. Minimization of Linear Functionals Defined on Solutions of Large-Scale Discrete Ill-Posed Problems

    DEFF Research Database (Denmark)

    Elden, Lars; Hansen, Per Christian; Rojas, Marielba

    2003-01-01

    The minimization of linear functionals defined on the solutions of discrete ill-posed problems arises, e.g., in the computation of confidence intervals for these solutions. In 1990, Elden proposed an algorithm for this minimization problem based on a parametric-programming reformulation involving... the solution of a sequence of trust-region problems, and using matrix factorizations. In this paper, we describe MLFIP, a large-scale version of this algorithm where a limited-memory trust-region solver is used on the subproblems. We illustrate the use of our algorithm in connection with an inverse heat...

  3. The Goddard multi-scale modeling system with unified physics

    Directory of Open Access Journals (Sweden)

    W.-K. Tao

    2009-08-01

    Full Text Available Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (CRM), (2) a regional-scale model, the NASA unified Weather Research and Forecasting Model (WRF), and (3) a coupled CRM-GCM (general circulation model), known as the Goddard Multi-scale Modeling Framework or MMF. The same cloud-microphysical processes, long- and short-wave radiative transfer and land-surface processes are applied in all of the models to study explicit cloud-radiation and cloud-surface interactive processes in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator for comparison and validation with NASA high-resolution satellite data.

    This paper reviews the development and presents some applications of the multi-scale modeling system, including results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols. In addition, use of the multi-satellite simulator to identify the strengths and weaknesses of the model-simulated precipitation processes will be discussed as well as future model developments and applications.

  4. Dust in fusion devices-a multi-faceted problem connecting high- and low-temperature plasma physics

    International Nuclear Information System (INIS)

    Winter, J

    2004-01-01

    Small particles with sizes between a few nanometers and a few tens of micrometres (dust) are formed in fusion devices by plasma-surface interaction processes. Though it is not a major problem today, dust is considered a problem that could arise in future long-pulse fusion devices, primarily because of its radioactivity and its very high chemical reactivity. Dust formation is particularly pronounced when carbonaceous wall materials are used. Dust particles can be transported in the tokamak over significant distances. Radioactivity leads to electrical charging of dust and to its interaction with plasmas and electric fields. This may cause interference with the discharge but may also offer options for particle removal. This paper discusses some of these multi-faceted problems using information both from fusion research and from low-temperature dusty-plasma work

  5. Neural Correlates of Moral Evaluation and Psychopathic Traits in Male Multi-Problem Young Adults

    Directory of Open Access Journals (Sweden)

    Josjan Zijlmans

    2018-06-01

    Full Text Available Multi-problem young adults (18–27 years) present with a plethora of problems, including varying degrees of psychopathic traits. The amygdala and ventromedial prefrontal cortex (vmPFC) have been implicated in moral dysfunction in psychopathy in adolescents and adults, but no studies have been performed in populations in the transitional period to adulthood. We tested in multi-problem young adults the hypothesis that psychopathic traits are related to amygdala and vmPFC activity during moral evaluation. Additionally, we explored the relation between psychopathic traits and other regions consistently implicated in moral evaluation. Our final sample consisted of 100 multi-problem young adults and 22 healthy controls. During fMRI scanning, participants judged whether pictures showed a moral violation on a 1–4 scale. Whole-brain analysis revealed neural correlates of moral evaluation consistent with the literature. Region-of-interest analyses revealed positive associations between the affective callous-unemotional dimension of psychopathy and activation in the left vmPFC, left superior temporal gyrus, and left cingulate. Our results are consistent with altered vmPFC function during moral evaluation in psychopathy, but we did not find evidence for amygdala involvement. Our findings indicate that the affective callous-unemotional trait of psychopathy may be related to widespread altered activation patterns during moral evaluation in multi-problem young adults.

  6. The Dynamic Multi-objective Multi-vehicle Covering Tour Problem

    Science.gov (United States)

    2013-06-01

    [No abstract available; the record's text consists of front-matter fragments from the thesis, citing Coello, Lamont and Van Veldhuizen's Evolutionary Algorithms for Solving Multi-Objective Problems (Springer) and the Dynamic Traveling Repairperson Problem (DTRP) policies proposed by Bertsimas and Van Ryzin, discussed from a queuing-theory perspective.]

  7. The trend of the multi-scale temporal variability of precipitation in Colorado River Basin

    Science.gov (United States)

    Jiang, P.; Yu, Z.

    2011-12-01

    Hydrological problems such as the estimation of flood and drought frequencies under future climate change are not well addressed, because current climate models cannot provide reliable predictions (especially for precipitation) at time scales shorter than one month. In order to assess the possible impacts that the multi-scale temporal distribution of precipitation may have on hydrological processes in the Colorado River Basin (CRB), a comparative analysis of the multi-scale temporal variability of precipitation, as well as the trend of extreme precipitation, is conducted in four regions controlled by different climate systems. Multi-scale precipitation variability, including within-storm patterns and intra-annual, inter-annual and decadal variabilities, will be analyzed to explore the possible trends of storm durations, inter-storm periods, average storm precipitation intensities and extremes under both long-term natural climate variability and human-induced warming. Furthermore, we will examine the ability of current climate models to simulate the multi-scale temporal variability and extremes of precipitation. On the basis of these analyses, a statistical downscaling method will be developed to disaggregate future precipitation scenarios, providing more reliable and finer temporal-scale precipitation time series for hydrological modeling. Analysis and downscaling results will be presented.

  8. Distributed parallel cooperative coevolutionary multi-objective large-scale immune algorithm for deployment of wireless sensor networks

    DEFF Research Database (Denmark)

    Cao, Bin; Zhao, Jianwei; Yang, Po

    2018-01-01

    Using immune algorithms is generally a time-intensive process, especially for problems with a large number of variables. In this paper, we propose a distributed parallel cooperative coevolutionary multi-objective large-scale immune algorithm that is implemented using the message passing interface (MPI). The proposed algorithm is composed of three layers: objective, group and individual layers. First, for each objective in the multi-objective problem to be addressed, a subpopulation is used for optimization, and an archive population is used to optimize all the objectives. Second, the large... Compared with state-of-the-art multi-objective evolutionary algorithms (the Cooperative Coevolutionary Generalized Differential Evolution 3, the Cooperative Multi-objective Differential Evolution and the Nondominated Sorting Genetic Algorithm III), the proposed algorithm addresses the deployment optimization problem efficiently and effectively.

  9. Relating system-to-CFD coupled code analyses to theoretical framework of a multi-scale method

    International Nuclear Information System (INIS)

    Cadinu, F.; Kozlowski, T.; Dinh, T.N.

    2007-01-01

    Over the past decades, analyses of transient processes and accidents in nuclear power plants have been performed, to a significant extent and with great success, by means of so-called system codes, e.g. the RELAP5, CATHARE and ATHLET codes. These computer codes, based on a multi-fluid model of two-phase flow, provide an effective, one-dimensional description of the coolant thermal-hydraulics in the reactor system. For some components in the system, wherever needed, the effect of multi-dimensional flow is accounted for through approximate models. The latter are derived from scaled experiments conducted for selected accident scenarios. Increasingly, however, we have to deal with newer and ever more complex accident scenarios. In some such cases the system codes fail to serve as simulation vehicles, largely due to their deficient treatment of multi-dimensional flow (in e.g. the downcomer and lower plenum). A possible way of improvement is to use the techniques of Computational Fluid Dynamics (CFD). Based on solving the Navier-Stokes equations, CFD codes have been developed and used broadly to perform analysis of multi-dimensional flow, predominantly in non-nuclear industry and for single-phase flow applications. It is clear that CFD simulations cannot substitute for system codes but only complement them. Given the intrinsic multi-scale nature of this problem, we propose to relate it to the more general field of research on multi-scale simulations. Even though multi-scale methods are developed on a case-by-case basis, the need for a unified framework led to the development of the heterogeneous multi-scale method (HMM)

  10. Multi-scale biomedical systems: measurement challenges

    International Nuclear Information System (INIS)

    Summers, R

    2016-01-01

    Multi-scale biomedical systems are those that represent interactions in materials, sensors, and systems from a holistic perspective. It is possible to view such multi-scale activity using measurement of spatial scale or time scale, though in this paper only the former is considered. The biomedical application paradigm comprises interactions that range from quantum biological phenomena at scales of 10^-12 for one individual to epidemiological studies of disease spread in populations that, in a pandemic, lead to measurement at a scale of 10^+7. It is clear that there are measurement challenges at either end of this spatial scale, but those challenges that relate to the use of new technologies that deal with big data and health service delivery at the point of care are also considered. The measurement challenges lead, in many cases, to the use of model-based measurement and the adoption of virtual engineering. It is these measurement challenges that will be uncovered in this paper. (paper)

  11. Algorithmic foundation of multi-scale spatial representation

    CERN Document Server

    Li, Zhilin

    2006-01-01

    With the widespread use of GIS, multi-scale representation has become an important issue in the realm of spatial data handling. However, no book to date has systematically tackled the different aspects of this discipline. Emphasizing map generalization, Algorithmic Foundation of Multi-Scale Spatial Representation addresses the mathematical basis of multi-scale representation, specifically, the algorithmic foundation.Using easy-to-understand language, the author focuses on geometric transformations, with each chapter surveying a particular spatial feature. After an introduction to the essential operations required for geometric transformations as well as some mathematical and theoretical background, the book describes algorithms for a class of point features/clusters. It then examines algorithms for individual line features, such as the reduction of data points, smoothing (filtering), and scale-driven generalization, followed by a discussion of algorithms for a class of line features including contours, hydrog...

  12. Multi-Scale Scattering Transform in Music Similarity Measuring

    Science.gov (United States)

    Wang, Ruobai

    Scattering transform is a Mel-frequency-spectrum-based, time-deformation-stable method which can be used in evaluating music similarity. Compared with dynamic time warping, it has better performance in detecting similar audio signals under local time-frequency deformation. Multi-scale scattering means combining scattering transforms of different window lengths. This paper argues that multi-scale scattering transform is a good alternative to dynamic time warping in music similarity measuring. We tested the performance of multi-scale scattering transform against other popular methods, with data designed to represent different conditions.

  13. Distribution-valued weak solutions to a parabolic problem arising in financial mathematics

    Directory of Open Access Journals (Sweden)

    Michael Eydenberg

    2009-07-01

    Full Text Available We study distribution-valued solutions to a parabolic problem that arises from a model of the Black-Scholes equation in option pricing. We give a minor generalization of known existence and uniqueness results for solutions in bounded domains $\Omega \subset \mathbb{R}^{n+1}$ to give existence of solutions for certain classes of distributions $f \in \mathcal{D}'(\Omega)$. We also study growth conditions for smooth solutions of certain parabolic equations on $\mathbb{R}^n \times (0,T)$ that have initial values in the space of distributions.

  14. The development of a multi-dimensional gambling accessibility scale.

    Science.gov (United States)

    Hing, Nerilee; Haw, John

    2009-12-01

    The aim of the current study was to develop a scale of gambling accessibility that would have theoretical significance to exposure theory and also serve to highlight the accessibility risk factors for problem gambling. Scale items were generated from the Productivity Commission's (Australia's Gambling Industries: Report No. 10. AusInfo, Canberra, 1999) recommendations and tested on a group with high exposure to the gambling environment. In total, 533 gaming venue employees (aged 18-70 years; 67% women) completed a questionnaire that included six 13-item scales measuring accessibility across a range of gambling forms (gaming machines, keno, casino table games, lotteries, horse and dog racing, sports betting). Also included in the questionnaire was the Problem Gambling Severity Index (PGSI) along with measures of gambling frequency and expenditure. Principal components analysis indicated that a common three factor structure existed across all forms of gambling and these were labelled social accessibility, physical accessibility and cognitive accessibility. However, convergent validity was not demonstrated with inconsistent correlations between each subscale and measures of gambling behaviour. These results are discussed in light of exposure theory and the further development of a multi-dimensional measure of gambling accessibility.

  15. Problems With Deployment of Multi-Domained, Multi-Homed Mobile Networks

    Science.gov (United States)

    Ivancic, William D.

    2008-01-01

    This document describes numerous problems associated with deployment of multi-homed mobile platforms consisting of multiple networks and traversing large geographical areas. The purpose of this document is to provide insight into real-world deployment issues and provide information to groups that are addressing many issues related to multi-homing, policy-based routing, route optimization and mobile security - particularly those groups within the Internet Engineering Task Force.

  16. Sea-land segmentation for infrared remote sensing images based on superpixels and multi-scale features

    Science.gov (United States)

    Lei, Sen; Zou, Zhengxia; Liu, Dunge; Xia, Zhenghuan; Shi, Zhenwei

    2018-06-01

    Sea-land segmentation is a key step for the information processing of ocean remote sensing images. Traditional sea-land segmentation algorithms ignore the local-similarity prior of sea and land, and thus fail in complex scenarios. In this paper, we propose a new sea-land segmentation method for infrared remote sensing images that tackles this problem using superpixels and multi-scale features. Considering the connectivity and local similarity of sea or land, we interpret the sea-land segmentation task in terms of superpixels rather than pixels, so that similar pixels are clustered and the local similarity is exploited. Moreover, the multi-scale features are elaborately designed, comprising a gray histogram and multi-scale total variation. Experimental results on infrared bands of Landsat-8 satellite images demonstrate that the proposed method obtains more accurate and more robust sea-land segmentation results than the traditional algorithms.

  17. Scrubbing up: multi-scale investigation of woody encroachment in a southern African savannah

    OpenAIRE

    Marston, Christopher G.; Aplin, Paul; Wilkinson, David M.; Field, Richard; O'Regan, Hannah J.

    2017-01-01

    Changes in the extent of woody vegetation represent a major conservation question in many savannah systems around the globe. To address the current lack of broad-scale, cost-effective tools for land cover monitoring in complex savannah environments, we use a multi-scale approach to quantify vegetation change in Kruger National Park (KNP), South Africa. We test whether medium-spatial-resolution satellite data (Landsat, available back to the 1970s), which have pixel sizes larger...

  18. A Real-time Generalization and Multi-scale Visualization Method for POI Data in Volunteered Geographic Information

    Directory of Open Access Journals (Sweden)

    YANG Min

    2015-02-01

    Full Text Available With the development of mobile and Web technologies, there has been an increasing number of map-based mashups which display different kinds of POI data in volunteered geographic information. Due to the lack of suitable mechanisms for multi-scale visualization, the display of POI data often results in an icon clustering problem, with icons touching and overlapping each other. This paper introduces a multi-scale visualization method for urban facility POI data by combining classic generalization methods with the online environment. First, we organize the POI data into a hierarchical structure by preprocessing on the server side; the POI features are then selected based on the display scale on the client side, and a displacement operation is executed to resolve local icon conflicts. Experiments show that this approach not only meets real-time online requirements, but also achieves a better multi-scale representation of POI data.
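    The scale-dependent selection step can be illustrated with a hypothetical greedy thinning rule (our own sketch, not the paper's algorithm): POIs are processed in order of importance and kept only if their icon footprint does not collide with an already-kept icon. The `min_dist` threshold is taken as given here; in practice it would be derived from icon size times map resolution at the current display scale.

```python
def thin_pois(pois, min_dist):
    """Greedy scale-dependent selection.  pois is a list of
    (x, y, importance) tuples in ground units; a POI is kept only if no
    previously kept (more important) POI lies within min_dist of it."""
    kept = []
    for x, y, rank in sorted(pois, key=lambda p: -p[2]):
        if all((x - kx) ** 2 + (y - ky) ** 2 >= min_dist ** 2
               for kx, ky, _ in kept):
            kept.append((x, y, rank))
    return kept
```

As the user zooms out, `min_dist` grows (each pixel covers more ground), so fewer POIs survive; zooming in shrinks `min_dist` and progressively reveals the hierarchy, which matches the behaviour the abstract describes.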

  19. Multi-scale symbolic transfer entropy analysis of EEG

    Science.gov (United States)

    Yao, Wenpo; Wang, Jun

    2017-10-01

    From both global and local perspectives, we symbolize two kinds of EEG and analyze their dynamic and asymmetrical information using multi-scale transfer entropy. A multi-scale process with scale factors from 1 to 199 and a step size of 2 is applied to the EEG of healthy people and epileptic patients, and then permutation with embedding dimension 3 and a global approach are used to symbolize the sequences. The forward and reverse symbol sequences are taken as the inputs of transfer entropy. The scale-factor intervals in which the two kinds of EEG show satisfactory entropy distinctions are (37, 57) for permutation and (65, 85) for the global approach. At scale factor 67, the transfer entropies of the healthy and epileptic subjects under permutation, 0.1137 and 0.1028, show the biggest difference; the corresponding values for global symbolization are 0.0641 and 0.0601, at a scale factor of 165. The results show that permutation, which takes the contribution of local information into account, gives a better distinction and is more effective for our multi-scale transfer entropy analysis of EEG.
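    The two preprocessing steps the abstract relies on, coarse-graining by a scale factor and permutation symbolization with embedding dimension 3, can be sketched as follows (the transfer entropy estimate itself, a histogram-based computation over these symbol sequences, is omitted; function names are ours):

```python
import numpy as np
from itertools import permutations

def coarse_grain(x, s):
    """Multi-scale step: average non-overlapping windows of length s."""
    x = np.asarray(x, dtype=float)
    n = len(x) // s
    return x[:n * s].reshape(n, s).mean(axis=1)

def permutation_symbols(x, m=3):
    """Map each length-m window to the index of its ordinal pattern
    (permutation symbolization with embedding dimension m)."""
    lookup = {p: i for i, p in enumerate(permutations(range(m)))}
    return np.array([lookup[tuple(np.argsort(x[i:i + m]))]
                     for i in range(len(x) - m + 1)])
```

Transfer entropy between the forward and reverse symbol sequences would then be estimated from joint symbol histograms, scale factor by scale factor.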

  20. An Evolutionary Approach for Bilevel Multi-objective Problems

    Science.gov (United States)

    Deb, Kalyanmoy; Sinha, Ankur

    Evolutionary multi-objective optimization (EMO) algorithms have been extensively applied to find multiple near-Pareto-optimal solutions over the past 15 years or so. However, EMO algorithms for solving bilevel multi-objective optimization problems have not yet received adequate attention. These problems appear in many practical applications and involve two levels, each comprising multiple conflicting objectives. They require every feasible upper-level solution to satisfy the optimality of a lower-level optimization problem, which makes them difficult to solve. In this paper, we discuss a recently proposed bilevel EMO procedure and show its working principle on a couple of test problems and on a business decision-making problem. This paper should motivate other EMO researchers to engage more in this important optimization task of practical importance.

  1. Calculation of Rayleigh type sums for zeros of the equation arising in spectral problem

    Science.gov (United States)

    Kostin, A. B.; Sherstyukov, V. B.

    2017-12-01

    For zeros of the equation (arising in the oblique derivative problem) μ J_n′(μ) cos α + i n J_n(μ) sin α = 0, μ ∈ ℂ, with parameters n ∈ ℤ, α ∈ [-π/2, π/2] and the Bessel function J_n(μ), special summation relationships are proved. The obtained results are consistent with the theory of the well-known Rayleigh sums calculated from the zeros of the Bessel function.
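    For α = ±π/2 the equation reduces to J_n(μ) = 0, and the classical Rayleigh sum over the positive Bessel zeros, Σ_k 1/j_{n,k}² = 1/(4(n+1)), can be checked numerically (an illustrative sketch of the classical identity the abstract refers to, not the paper's computation):

```python
import numpy as np
from scipy.special import jn_zeros

def rayleigh_partial_sum(n, terms):
    """Partial Rayleigh sum over the first `terms` positive zeros
    j_{n,k} of the Bessel function J_n; the full series converges to
    1 / (4 * (n + 1))."""
    z = jn_zeros(n, terms)           # first `terms` positive zeros of J_n
    return float(np.sum(1.0 / z ** 2))
```

Since j_{n,k} grows roughly like kπ, the truncation error of the partial sum decays like 1/(π²·terms), so a few hundred zeros already reproduce the closed-form value to three decimals.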

  2. Finite element analysis of multi-material models using a balancing domain decomposition method combined with the diagonal scaling preconditioner

    International Nuclear Information System (INIS)

    Ogino, Masao

    2016-01-01

    Actual problems in science and industrial applications are modeled with multiple materials and large-scale unstructured meshes, and the finite element method has been widely used to solve such problems on parallel computers. However, for large-scale problems, iterative methods for the linear finite element equations suffer from slow or no convergence. Therefore, numerical methods having both robust convergence and scalable parallel efficiency are in great demand. The domain decomposition method is well known as an iterative substructuring method and is an efficient approach for parallel finite element methods. Moreover, the balancing preconditioner achieves robust convergence. However, for problems consisting of very different materials, the convergence becomes poor. There has been some research addressing this issue, but it is not suitable for cases with complex shapes and composite materials. In this study, to improve the convergence of the balancing preconditioner for multi-material problems, a balancing preconditioner combined with the diagonal scaling preconditioner, called the Scaled-BDD method, is proposed. Some numerical results are included which indicate that the proposed method converges robustly with respect to the number of subdomains and shows high performance compared with the original balancing preconditioner. (author)
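    Scaled-BDD itself involves the full balancing domain decomposition machinery. As a much smaller illustration of why diagonal scaling helps for multi-material systems, here is plain Jacobi-preconditioned CG on a 1D two-material Laplacian (an assumed toy problem of our own, not the paper's method):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

def two_material_laplacian(n, c_left=1.0, c_right=1e6):
    """1D stiffness matrix of -d/dx(c du/dx) with Dirichlet ends, whose
    coefficient jumps by six orders of magnitude halfway, mimicking a
    composite-material model."""
    c = np.where(np.arange(n + 1) < (n + 1) // 2, c_left, c_right)
    main = c[:-1] + c[1:]
    off = -c[1:-1]
    return diags([off, main, off], [-1, 0, 1], format='csr')

def jacobi_cg(A, b):
    """CG with the diagonal-scaling (Jacobi) preconditioner
    M^{-1} = diag(A)^{-1}, which absorbs the material-coefficient jump."""
    d = A.diagonal()
    M = LinearOperator(A.shape, matvec=lambda x: x / d, dtype=float)
    return cg(A, b, M=M)
```

Diagonal scaling removes the six-orders-of-magnitude disparity between the two material blocks from the spectrum seen by CG, which is the same intuition behind combining diagonal scaling with the balancing preconditioner.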

  3. Multi-dimensional Bin Packing Problems with Guillotine Constraints

    DEFF Research Database (Denmark)

    Amossen, Rasmus Resen; Pisinger, David

    2010-01-01

    The problem addressed in this paper is the decision problem of determining if a set of multi-dimensional rectangular boxes can be orthogonally packed into a rectangular bin while satisfying the requirement that the packing should be guillotine cuttable. That is, there should exist a series of face-parallel straight cuts that can recursively cut the bin into pieces so that each piece contains a box and no box has been intersected by a cut. The unrestricted problem is known to be NP-hard. In this paper we present a generalization of a constructive algorithm for the multi-dimensional bin packing problem, with and without the guillotine constraint, based on constraint programming.
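    The decision problem can be stated directly as a recursive cut-and-partition check. The following brute-force sketch (our own, restricted to two dimensions and integer box sides, with rotation allowed) is exponential and only meant to make the guillotine definition concrete, not to reproduce the paper's constraint-programming algorithm:

```python
from itertools import combinations

def guillotine_packable(boxes, W, H):
    """Decide whether all (w, h) boxes fit in a W x H bin with a
    guillotine-cuttable packing: either a single box is placed, or some
    edge-to-edge straight cut splits the bin and the boxes are
    partitioned between the two halves, recursively."""
    def fits(bs, w, h):
        if not bs:
            return True
        if len(bs) == 1:
            bw, bh = bs[0]
            return (bw <= w and bh <= h) or (bh <= w and bw <= h)
        idx = range(len(bs))
        for r in range(1, len(bs)):
            for left in combinations(idx, r):
                ls = tuple(bs[i] for i in left)
                rs = tuple(bs[i] for i in idx if i not in left)
                if any(fits(ls, cut, h) and fits(rs, w - cut, h)
                       for cut in range(1, w)):        # vertical cuts
                    return True
                if any(fits(ls, w, cut) and fits(rs, w, h - cut)
                       for cut in range(1, h)):        # horizontal cuts
                    return True
        return False

    return fits(tuple(boxes), W, H)
```

Every `True` answer corresponds to a cut tree: the recursion depth records the sequence of straight cuts, which is exactly the witness structure the guillotine constraint demands.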

  4. Analysing and Correcting the Differences between Multi-Source and Multi-Scale Spatial Remote Sensing Observations

    Science.gov (United States)

    Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun

    2014-01-01

    Differences exist among analysis results of agriculture monitoring and crop production based on remote sensing observations, which are obtained at different spatial scales from multiple remote sensors in same time period, and processed by same algorithms, models or methods. These differences can be mainly quantitatively described from three aspects, i.e. multiple remote sensing observations, crop parameters estimation models, and spatial scale effects of surface parameters. Our research proposed a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide references for further studies in agricultural application with multiple remotely sensed observations from different sources. The new method was constructed on the basis of physical and mathematical properties of multi-source and multi-scale reflectance datasets. Theories of statistics were involved to extract statistical characteristics of multiple surface reflectance datasets, and further quantitatively analyse spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at small spatial scale as the baseline data, theories of Gaussian distribution were selected for multiple surface reflectance datasets correction based on the above obtained physical characteristics and mathematical distribution properties, and their spatial variations. This proposed method was verified by two sets of multiple satellite images, which were obtained in two experimental fields located in Inner Mongolia and Beijing, China with different degrees of homogeneity of underlying surfaces. 
Experimental results indicate that differences among surface reflectance datasets at multiple spatial scales could be effectively corrected over non-homogeneous underlying surfaces, providing a database for further multi-source and multi-scale crop growth monitoring and yield prediction, and their corresponding
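
The Gaussian-based correction step can be illustrated with a minimal sketch, assuming it reduces to matching the first two statistical moments of a coarse-scale reflectance band to the fine-scale baseline; the function name and synthetic data below are our own, not the authors' code:

```python
import numpy as np

# Hypothetical illustration: align the first two moments (Gaussian assumption)
# of a coarse-scale reflectance band to a fine-scale baseline band.
def match_gaussian_moments(coarse, baseline):
    """Rescale `coarse` so its mean and standard deviation match the baseline's."""
    mu_c, sd_c = coarse.mean(), coarse.std()
    mu_b, sd_b = baseline.mean(), baseline.std()
    return (coarse - mu_c) / sd_c * sd_b + mu_b

rng = np.random.default_rng(0)
baseline = rng.normal(0.18, 0.03, 10_000)   # fine-scale reflectance (baseline)
coarse = rng.normal(0.22, 0.05, 10_000)     # biased coarse-scale product
corrected = match_gaussian_moments(coarse, baseline)
```

The paper also accounts for the spatial variation of these statistics across scales, which this per-band sketch omits.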

  5. A novel fruit shape classification method based on multi-scale analysis

    Science.gov (United States)

    Gui, Jiangsheng; Ying, Yibin; Rao, Xiuqin

    2005-11-01

    Shape is one of the major concerns and remains a difficult problem in the automated inspection and sorting of fruits. In this research, we propose the multi-scale energy distribution (MSED) for object shape description; the relationship between an object's shape and the energy distribution of its boundary at multiple scales is explored for shape extraction. MSED captures not only the main energy, which represents the primary shape information at the lower scales, but also the subordinate energy, which represents local shape information at the higher differential scales. It thus provides a natural tool for multi-resolution representation and can be used as a feature for shape classification. We address the three main processing steps in MSED-based shape classification, namely: 1) image preprocessing and citrus shape extraction; 2) shape resampling and shape feature normalization; 3) energy decomposition by wavelets and classification by a BP neural network. Here, shape resampling draws 256 boundary pixels from a cubic-spline approximation of the original boundary in order to obtain uniform raw data. A probability function was defined and an effective method to select a start point was given through maximal expectation, which overcomes the inconvenience of traditional methods and yields rotation invariance. The experimental results distinguish relatively well between normal citrus and serious abnormality, with a classification rate above 91.2%. The global correct classification rate is 89.77%, and our method is more effective than the traditional method. The global result can meet the requirements of fruit grading.
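
The wavelet energy decomposition step can be sketched as follows, assuming a Haar wavelet and a radial boundary signature resampled at 256 points; this illustrates the multi-scale energy idea, not the authors' implementation:

```python
import numpy as np

def haar_energy_distribution(signal, levels=4):
    """Energy of Haar detail coefficients per scale -- a stand-in for the
    paper's wavelet energy decomposition (illustrative only)."""
    energies = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        detail = (even - odd) / np.sqrt(2)   # Haar detail coefficients
        approx = (even + odd) / np.sqrt(2)   # Haar approximation
        energies.append(float(np.sum(detail ** 2)))
    return energies

# Radial signature of a slightly dented circle, sampled at 256 boundary points
theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
radius = 1.0 + 0.05 * np.sin(3 * theta)      # smooth shape deviation
energies = haar_energy_distribution(radius)
```

The resulting per-level energies would be the feature vector fed to the classifier.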

  6. A Systematic Multi-Time Scale Solution for Regional Power Grid Operation

    Science.gov (United States)

    Zhu, W. J.; Liu, Z. G.; Cheng, T.; Hu, B. Q.; Liu, X. Z.; Zhou, Y. F.

    2017-10-01

    Many aspects need to be taken into consideration when making schedule plans for a regional grid. In this paper, a systematic multi-time-scale solution for regional power grid operation, considering large-scale renewable energy integration and Ultra High Voltage (UHV) power transmission, is proposed. On the time-scale axis, we discuss the problem from month, week, day-ahead and within-day to day-behind scheduling, and the system also covers multiple generator types, including thermal units, hydro plants, wind turbines and pumped storage stations. The nine subsystems of the scheduling system are described, and their functions and relationships are elaborated. The proposed system has been deployed in a provincial power grid in Central China, and the operation results further verify its effectiveness.

  7. Multi spectral scaling data acquisition system

    International Nuclear Information System (INIS)

    Behere, Anita; Patil, R.D.; Ghodgaonkar, M.D.; Gopalakrishnan, K.R.

    1997-01-01

    In nuclear spectroscopy applications, it is often desired to acquire data at a high rate with high resolution. With the availability of low-cost computers, it is possible to build a powerful data acquisition system with minimal hardware and software development by designing a PC plug-in acquisition board. But if the PC processor is used for data acquisition, the PC cannot serve as a multitasking node. Keeping this in view, PC plug-in acquisition boards with an on-board processor find tremendous application. A transputer-based data acquisition board has been designed which can be configured as a high count rate pulse height MCA or as a multi spectral scaler. Multi Spectral Scaling (MSS) is a new technique in which multiple spectra are acquired in small time frames and then analyzed. This paper describes the details of this multi spectral scaling data acquisition system. 2 figs

  8. Multi-scale porous materials: from adsorption and poro-mechanics properties to energy and environmental applications

    International Nuclear Information System (INIS)

    Pellenq, Roland J.M.

    2012-01-01

    Document available in extended abstract form only. 'Multi-scale Porous Materials under the Nano-scope'. Setting the stage, one can list important engineering problems such as hydrogen storage for transportation applications, electric energy storage in batteries, CO2 sequestration in used coal mines, earthquake mechanisms, durability of nuclear fuels, stability of soils and sediments, and the cohesive properties of cements and concrete in the context of sustainability. With the exception of health, these are basically the challenging engineering problems of the coming century that address energy, environment and natural hazards. Behind all these problems are complex multi-scale porous materials that have a confined fluid in their pore voids: water in the case of clays and cement, an electrolyte in the case of batteries and super-capacitors, and weakly interacting molecular fluids in the case of hydrogen storage devices, gas shales and nuclear fuel bars. So what do we mean by 'under the nano-scope'? The nano-scope does not exist as a single experimental technique capable of assessing the 3D texture of a complex multi-scale material. Obviously, techniques such as TEM are part of the answer but are not the 'nano-scope' in themselves. In our view, the 'nano-scope' is more than a technique producing images. It is rather a concept that links a suite of modeling techniques coupled with experiments (electron and X-ray microscopies, tomography, nano-indentation, nano-scratching...). Fig 1 gives an outline of this strategy for cement. It allows accessing material texture, chemistry, mechanical behavior and adsorption/condensation behavior at all scales, starting from the nano-scale upwards. The toolbox of the simulation aspect of the 'nano-scope' is akin to a statistical physics description of material texture and properties, including the thermodynamics and dynamics of the fluids confined in the pore voids, as a means of linking atomic scale properties to macroscopic properties

  9. Multi-Scale Validation of a Nanodiamond Drug Delivery System and Multi-Scale Engineering Education

    Science.gov (United States)

    Schwalbe, Michelle Kristin

    2010-01-01

    This dissertation has two primary concerns: (i) evaluating the uncertainty and prediction capabilities of a nanodiamond drug delivery model using Bayesian calibration and bias correction, and (ii) determining conceptual difficulties of multi-scale analysis from an engineering education perspective. A Bayesian uncertainty quantification scheme…

  10. Variational Multi-Scale method with spectral approximation of the sub-scales.

    KAUST Repository

    Dia, Ben Mansour; Chá con-Rebollo, Tomas

    2015-01-01

    A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base

  11. Multi-time, multi-scale correlation functions in turbulence and in turbulent models

    NARCIS (Netherlands)

    Biferale, L.; Boffetta, G.; Celani, A.; Toschi, F.

    1999-01-01

    A multifractal-like representation for multi-time, multi-scale velocity correlation in turbulence and dynamical turbulent models is proposed. The importance of subleading contributions to time correlations is highlighted. The fulfillment of the dynamical constraints due to the equations of motion is

  12. MINLO: Multi-scale improved NLO

    CERN Document Server

    Hamilton, Keith; Zanderighi, Giulia

    2012-01-01

    In the present work we consider the assignment of the factorization and renormalization scales in hadron collider processes with associated jet production, at next-to-leading order (NLO) in perturbation theory. We propose a simple, definite prescription to this end, including Sudakov form factors to consistently account for the distinct kinematic scales occurring in such collisions. The scheme yields results that are accurate at NLO and, for a large class of observables, it resums to all orders the large logarithms that arise from kinematic configurations involving disparate scales. In practical terms the method is most simply understood as an NLO extension of the matrix element reweighting procedure employed in tree-level matrix element-parton shower merging algorithms. By way of a proof-of-concept, we apply the method to Higgs and Z boson production in association with up to two jets.

  13. Reliability of Multi-Category Rating Scales

    Science.gov (United States)

    Parker, Richard I.; Vannest, Kimberly J.; Davis, John L.

    2013-01-01

    The use of multi-category scales is increasing for the monitoring of IEP goals, classroom and school rules, and Behavior Improvement Plans (BIPs). Although they require greater inference than traditional data counting, little is known about the inter-rater reliability of these scales. This simulation study examined the performance of nine…

  14. A Large-Scale Multi-Hop Localization Algorithm Based on Regularized Extreme Learning for Wireless Networks.

    Science.gov (United States)

    Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan

    2017-12-20

    A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms that are only applicable to isotropic networks, and therefore it has strong adaptability to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, a model relating the hop counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
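
A minimal sketch of the modeling stage, assuming "regularized extreme learning" means a single-hidden-layer network with random fixed input weights and ridge-regularized output weights; the network data here are synthetic and the hyper-parameters are guesses:

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_train(X, y, n_hidden=50, ridge=1e-3):
    """Regularized extreme learning machine: random hidden layer,
    ridge-regularized least squares for the output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))     # random, fixed input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                          # hidden-layer features
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Synthetic training data: physical distance roughly proportional to hop count
hops = rng.integers(1, 20, size=(200, 1)) / 20.0    # normalized hop counts
dist = 500.0 * hops[:, 0] + rng.normal(0, 5, 200)   # metres, noisy
model = elm_train(hops, dist)
pred = elm_predict(model, np.array([[0.5]]))        # distance for 10 hops
```

Only the output weights are trained, which is what keeps the computational cost low.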

  15. Multi-scale analysis of lung computed tomography images

    CERN Document Server

    Gori, I; Fantacci, M E; Preite Martinez, A; Retico, A; De Mitri, I; Donadio, S; Fulcheri, C

    2007-01-01

    A computer-aided detection (CAD) system for the identification of lung internal nodules in low-dose multi-detector helical Computed Tomography (CT) images was developed in the framework of the MAGIC-5 project. The three modules of our lung CAD system, a segmentation algorithm for lung internal region identification, a multi-scale dot-enhancement filter for nodule candidate selection and a multi-scale neural technique for false positive finding reduction, are described. The results obtained on a dataset of low-dose and thin-slice CT scans are shown in terms of free response receiver operating characteristic (FROC) curves and discussed.

  16. Iterative equalization for OFDM systems over wideband Multi-Scale Multi-Lag channels

    NARCIS (Netherlands)

    Xu, T.; Tang, Z.; Remis, R.; Leus, G.

    2012-01-01

    OFDM suffers from inter-carrier interference (ICI) when the channel is time varying. This article seeks to quantify the amount of interference resulting from wideband OFDM channels, which are assumed to follow the multi-scale multi-lag (MSML) model. The MSML channel model results in full channel

  17. Magnetic hysteresis at the domain scale of a multi-scale material model for magneto-elastic behaviour

    Energy Technology Data Exchange (ETDEWEB)

    Vanoost, D., E-mail: dries.vanoost@kuleuven-kulak.be [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven Kulak, Wave Propagation and Signal Processing Research Group, Kortrijk B-8500 (Belgium); Steentjes, S. [Institute of Electrical Machines, RWTH Aachen University, Aachen D-52062 (Germany); Peuteman, J. [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven, Department of Electrical Engineering, Electrical Energy and Computer Architecture, Heverlee B-3001 (Belgium); Gielen, G. [KU Leuven, Department of Electrical Engineering, Microelectronics and Sensors, Heverlee B-3001 (Belgium); De Gersem, H. [KU Leuven Kulak, Wave Propagation and Signal Processing Research Group, Kortrijk B-8500 (Belgium); TU Darmstadt, Institut für Theorie Elektromagnetischer Felder, Darmstadt D-64289 (Germany); Pissoort, D. [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven, Department of Electrical Engineering, Microelectronics and Sensors, Heverlee B-3001 (Belgium); Hameyer, K. [Institute of Electrical Machines, RWTH Aachen University, Aachen D-52062 (Germany)

    2016-09-15

    This paper proposes a multi-scale energy-based material model for poly-crystalline materials. Describing the behaviour of poly-crystalline materials at the three spatial scales of the dominating physical mechanisms allows accounting for the heterogeneity and multi-axiality of the material behaviour. The three spatial scales are the poly-crystalline, grain and domain scales. Together with appropriate scale transition rules and models for local magnetic behaviour at each scale, the model is able to describe the magneto-elastic behaviour (magnetostriction and hysteresis) at the macroscale, although the data input is merely a set of physical constants. Introducing a new energy density function that describes the demagnetisation field, the anhysteretic multi-scale energy-based material model is extended to the hysteretic case. The hysteresis behaviour is included at the domain scale according to micro-magnetic domain theory while preserving a valid description of the magneto-elastic coupling. The model is verified using existing measurement data for different mechanical stress levels. - Highlights: • A ferromagnetic hysteretic energy-based multi-scale material model is proposed. • The hysteresis is obtained by a newly proposed hysteresis energy density function. • Tedious parameter identification is avoided.

  18. Kapteyn series arising in radiation problems

    International Nuclear Information System (INIS)

    Lerche, I; Tautz, R C

    2008-01-01

    In discussing radiation from multiple point charges or magnetic dipoles, moving in circles or ellipses, a variety of Kapteyn series of the second kind arises. Some of the series have been known in closed form for a hundred years or more; others appear not to be amenable to analytic persuasion. This paper shows how 12 such generic series can be developed to produce either closed analytic expressions or integrals that are not analytically tractable. In addition, the method presented here may be of benefit when one has other Kapteyn series of the second kind to consider, thereby providing an additional reason to consider such series anew.

  19. Coarse-graining and hybrid methods for efficient simulation of stochastic multi-scale models of tumour growth

    Science.gov (United States)

    de la Cruz, Roberto; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás

    2017-12-01

    The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction-diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction-diffusion systems. Such a method is developed for a stochastic multi-scale model of tumour growth, i.e. population-dynamical models which account for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. Such approximation involves averaging out the age-structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we are neglecting noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction-diffusion systems, we need to account for the age-structure of the population when attempting to couple both descriptions. We exploit our coarse-graining model so that, within the mean-field region, the age-distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently, as upon transference of cells from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as travelling wave velocity. 
We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge of
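
The coupling step described above can be sketched as follows, under the assumption that the equilibrium age distribution is exponential (chosen here purely for illustration; the paper derives the actual form from the model):

```python
import numpy as np

# Hypothetical sketch of the domain-coupling step: when cells are transferred
# from the mean-field region to the stochastic region, their ages are drawn
# from the known equilibrium age distribution.
rng = np.random.default_rng(3)

def transfer_cells(n_cells, mean_age=1.0):
    """Sample ages for cells entering the stochastic region."""
    return rng.exponential(mean_age, size=n_cells)

ages = transfer_cells(1000)
```

Sampling from the equilibrium distribution is what keeps the two descriptions statistically consistent at the interface.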

  20. Unified Modeling Language description of the object-oriented multi-scale adaptive finite element method for Step-and-Flash Imprint Lithography Simulations

    International Nuclear Information System (INIS)

    Paszynski, Maciej; Gurgul, Piotr; Sieniek, Marcin; Pardo, David

    2010-01-01

    In the first part of the paper we present the multi-scale simulation of Step-and-Flash Imprint Lithography (SFIL), a modern patterning process. The simulation utilizes the hp adaptive Finite Element Method (hp-FEM) coupled with a Molecular Statics (MS) model. Thus, we consider a multi-scale problem, with molecular statics applied in the areas of the mesh where the highest accuracy is required, and continuous linear elasticity with a thermal expansion coefficient applied in the remaining part of the domain. The degrees of freedom from macro-scale element nodes located on the macro-scale side of the interface have been identified with particles from nano-scale elements located on the nano-scale side of the interface. In the second part of the paper we present a Unified Modeling Language (UML) description of the resulting multi-scale application (hp-FEM coupled with MS). We investigated classical, procedural codes from the point of view of the object-oriented (O-O) programming paradigm. The discovered hierarchical structure of classes and algorithms makes the UML project as independent of the spatial dimension of the problem as possible. The O-O UML project was defined at an abstract level, independent of the programming language used.

  1. Entangled time in flocking: Multi-time-scale interaction reveals emergence of inherent noise.

    Science.gov (United States)

    Niizato, Takayuki; Murakami, Hisashi

    2018-01-01

    Collective behaviors that seem highly ordered and result in collective alignment, such as schooling by fish and flocking by birds, arise from seamless shuffling (such as super-diffusion) and bustling inside groups (such as Lévy walks). However, such noisy behavior inside groups appears to preclude collective behavior: intuitively, we expect that noisy behavior would destabilize the group and break it into small subgroups, while high alignment seems to preclude the shuffling of neighbors. Although statistical modeling approaches with extrinsic noise, such as the maximum entropy approach, have provided some reasonable descriptions, they ignore the cognitive perspective of the individuals. In this paper, we try to explain how the group tendency, that is, high alignment, and highly noisy individual behavior can coexist in a single framework. The key aspect of our approach is a multi-time-scale interaction emerging from the existence of interaction radii that reflect short-term and long-term predictions. This multi-time-scale interaction is a natural extension of the attraction and alignment concepts in many flocking models. When we apply this method in a two-dimensional model, various flocking behaviors, such as swarming, milling, and schooling, emerge. The approach also explains the appearance of super-diffusion, Lévy walks in groups, and local equilibria. At the end of this paper, we discuss future developments, including extending our model to three dimensions.
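
The two-radius (short-term/long-term) interaction can be caricatured in a Vicsek-style sketch; the radii, speed and separation rule below are our assumptions for illustration, not the authors' model:

```python
import numpy as np

# Two interaction scales: a short radius driving separation and a long
# radius driving alignment with the wider group (self included).
rng = np.random.default_rng(7)
n = 30
pos = rng.uniform(0, 10, (n, 2))
ang = rng.uniform(0, 2 * np.pi, n)

def step(pos, ang, r_short=1.0, r_long=4.0, speed=0.1):
    new_ang = ang.copy()
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
    for i in range(n):
        near = (d[i] < r_short) & (d[i] > 0)
        far = d[i] < r_long
        if near.any():                        # short scale: move away
            away = pos[i] - pos[near].mean(axis=0)
            new_ang[i] = np.arctan2(away[1], away[0])
        else:                                 # long scale: align headings
            new_ang[i] = np.arctan2(np.sin(ang[far]).mean(),
                                    np.cos(ang[far]).mean())
    vel = speed * np.stack([np.cos(new_ang), np.sin(new_ang)], axis=1)
    return pos + vel, new_ang

for _ in range(50):
    pos, ang = step(pos, ang)
```

Varying the two radii is what moves such models between swarming, milling and schooling regimes.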

  2. Exploring Hardware Support For Scaling Irregular Applications on Multi-node Multi-core Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Secchi, Simone; Ceriani, Marco; Tumeo, Antonino; Villa, Oreste; Palermo, Gianluca; Raffo, Luigi

    2013-06-05

    With the recent emergence of large-scale knowledge discovery, data mining and social network analysis, irregular applications have gained renewed interest. Classic cache-based high-performance architectures do not provide optimal performance with such workloads, mainly due to the very low spatial and temporal locality of their irregular control and memory access patterns. In this paper, we present a multi-node, multi-core, fine-grained multi-threaded shared-memory system architecture specifically designed for the execution of large-scale irregular applications, built on top of three pillars that we believe are fundamental to support these workloads. First, we offer transparent hardware support for a Partitioned Global Address Space (PGAS) to provide a large globally-shared address space with no software library overhead. Second, we employ multi-threaded multi-core processing nodes to achieve the latency tolerance required for accessing global memory, which potentially resides in a remote node. Finally, we devise hardware support for inter-thread synchronization over the whole global address space. We first model the performance using an analytical model that takes into account the main architecture and application characteristics. We then describe the hardware design of the proposed custom architectural building blocks that support the above-mentioned three pillars. Finally, we present a limited-scale evaluation of the system on a multi-board FPGA prototype with typical irregular kernels and benchmarks. The experimental evaluation demonstrates the architecture's performance scalability for different configurations of the whole system.

  3. Formalizing Knowledge in Multi-Scale Agent-Based Simulations.

    Science.gov (United States)

    Somogyi, Endre; Sluka, James P; Glazier, James A

    2016-10-01

    Multi-scale, agent-based simulations of cellular and tissue biology are increasingly common. These simulations combine and integrate a range of components from different domains. Simulations continuously create, destroy and reorganize constituent elements causing their interactions to dynamically change. For example, the multi-cellular tissue development process coordinates molecular, cellular and tissue scale objects with biochemical, biomechanical, spatial and behavioral processes to form a dynamic network. Different domain specific languages can describe these components in isolation, but cannot describe their interactions. No current programming language is designed to represent in human readable and reusable form the domain specific knowledge contained in these components and interactions. We present a new hybrid programming language paradigm that naturally expresses the complex multi-scale objects and dynamic interactions in a unified way and allows domain knowledge to be captured, searched, formalized, extracted and reused.

  4. Multi-scale modeling of composites

    DEFF Research Database (Denmark)

    Azizi, Reza

    A general method to obtain the homogenized response of metal-matrix composites is developed. It is assumed that the microscopic scale is sufficiently small compared to the macroscopic scale such that the macro response does not affect the micromechanical model. Therefore, the microscopic scale......-Mandel’s energy principle is used to find macroscopic operators based on micro-mechanical analyses using the finite element method under generalized plane strain condition. A phenomenologically macroscopic model for metal matrix composites is developed based on constitutive operators describing the elastic...... to plastic deformation. The macroscopic operators found, can be used to model metal matrix composites on the macroscopic scale using a hierarchical multi-scale approach. Finally, decohesion under tension and shear loading is studied using a cohesive law for the interface between matrix and fiber....

  5. Multi-scale magnetic field intermittence in the plasma sheet

    Directory of Open Access Journals (Sweden)

    Z. Vörös

    2003-09-01

    Full Text Available This paper demonstrates that intermittent magnetic field fluctuations in the plasma sheet exhibit transitory, localized, and multi-scale features. We propose a multifractal-based algorithm, which quantifies intermittence on the basis of the statistical distribution of the "strength of burstiness", estimated within a sliding window. Interesting multi-scale phenomena observed by the Cluster spacecraft include large-scale motion of the current sheet and bursty bulk flow associated turbulence, interpreted as a cross-scale coupling (CSC) process. Key words: Magnetospheric physics (magnetotail; plasma sheet) – Space plasma physics (turbulence)

  6. Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network.

    Science.gov (United States)

    Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di

    2018-03-06

    Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited in exploiting multi-scale contextual information for image reconstruction due to the fixed convolutional kernels in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernels provide multiple contexts for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the proposed network outperforms state-of-the-art methods.
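
The competition among multi-scale filters can be illustrated in one dimension: convolve with kernels of several widths and keep, per position, the strongest response (a maxout over scales). The box kernels and the signal below are arbitrary stand-ins for learned filters:

```python
import numpy as np

def multi_scale_competition(signal, widths=(3, 5, 7)):
    """Convolve at several scales and keep the maximum response per position."""
    responses = []
    for w in widths:
        kernel = np.ones(w) / w                     # box filter at this scale
        responses.append(np.convolve(signal, kernel, mode="same"))
    return np.max(np.stack(responses), axis=0)      # competitive selection

x = np.array([0., 0., 1., 0., 0., 1., 1., 1., 0., 0.])
y = multi_scale_competition(x)
```

An isolated spike is best matched by the narrowest kernel, while a plateau is best matched by a kernel of its own width, which is the adaptive-scale selection the abstract describes.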

  7. Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G.

    1993-11-01

    The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages, and problems easily get out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of both expected future costs and the gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, where we use importance path sampling for the upper bound estimation. Initial numerical results are promising.
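
The role of importance sampling can be seen on a toy recourse function, where a shifted proposal distribution concentrates samples where the cost is non-zero; this one-dimensional example illustrates only the estimator, not Benders decomposition itself, and the cost function and distributions are invented:

```python
import numpy as np

# Toy second-stage cost: non-zero only for rare large outcomes of xi.
def recourse_cost(xi):
    return np.maximum(xi - 2.0, 0.0)

rng = np.random.default_rng(42)

# Crude Monte Carlo under the nominal N(0,1) distribution of xi
xi = rng.normal(0.0, 1.0, 100_000)
crude = recourse_cost(xi).mean()

# Importance sampling: draw from a shifted N(2,1) proposal and reweight
# each sample by the likelihood ratio of nominal to proposal densities.
z = rng.normal(2.0, 1.0, 100_000)
weights = np.exp(-0.5 * z**2) / np.exp(-0.5 * (z - 2.0)**2)
is_est = (recourse_cost(z) * weights).mean()
```

Both estimators target the same expectation (about 0.00849 here), but the importance-sampling estimate has a much smaller variance because most proposal samples land where the cost is non-zero.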

  8. Regularization of EIT reconstruction based on multi-scales wavelet transforms

    Directory of Open Access Journals (Sweden)

    Gong Bo

    2016-09-01

    Full Text Available Electrical Impedance Tomography (EIT) aims to obtain the conductivity distribution of a domain from the electrical boundary conditions. This is an ill-posed inverse problem usually solved on finite element meshes. Wavelet transforms are widely used for medical image reconstruction. However, because of the irregular form of finite element meshes, the canonical wavelet transform cannot be performed on them. In this article, we present a framework that combines multi-scale wavelet transforms and finite element meshes by viewing the meshes as undirected graphs and applying the spectral graph wavelet transform on them.
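
A tiny numpy-only sketch of the mesh-as-graph idea: build the graph Laplacian of a toy "mesh", transform a nodal signal into the Laplacian eigenbasis (the graph Fourier basis), and apply a smooth spectral kernel as a stand-in for the spectral graph wavelet kernel; the graph and signal are invented:

```python
import numpy as np

def graph_laplacian(adj):
    """Combinatorial Laplacian L = D - A of an undirected graph."""
    return np.diag(adj.sum(axis=1)) - adj

# A 4-node path graph standing in for a tiny finite element mesh
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
L = graph_laplacian(adj)
lam, U = np.linalg.eigh(L)                 # graph Fourier basis

signal = np.array([1.0, 0.0, 0.0, 1.0])   # nodal conductivity values
coeffs = U.T @ signal                      # graph Fourier transform
kernel = np.exp(-lam)                      # smooth spectral scaling kernel
filtered = U @ (kernel * coeffs)           # filtered signal on the mesh
```

Because the kernel equals 1 at the zero eigenvalue, the constant (mean) component of the signal passes through unchanged, which is the behaviour expected of a scaling-function channel.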

  9. The Soccer-Ball Problem

    Science.gov (United States)

    Hossenfelder, Sabine

    2014-07-01

    The idea that Lorentz-symmetry in momentum space could be modified but still remain observer-independent has received considerable attention in recent years. This modified Lorentz-symmetry, which has been argued to arise in Loop Quantum Gravity, is being used as a phenomenological model to test possibly observable effects of quantum gravity. The most pressing problem in these models is the treatment of multi-particle states, known as the 'soccer-ball problem'. This article briefly reviews the problem and the status of existing solution attempts.

  10. Multi-scaling of the dense plasma focus

    Science.gov (United States)

    Saw, S. H.; Lee, S.

    2015-03-01

    The dense plasma focus is a copious source of multi-radiations with many potential new applications of special interest such as in advanced SXR lithography, materials synthesizing and testing, medical isotopes and imaging. This paper reviews the series of numerical experiments conducted using the Lee model code to obtain the scaling laws of the multi-radiations.

  11. MULTI-CRITERIA PROGRAMMING METHODS AND PRODUCTION PLAN OPTIMIZATION PROBLEM SOLVING IN METAL INDUSTRY

    Directory of Open Access Journals (Sweden)

    Tunjo Perić

    2017-09-01

    Full Text Available This paper presents production plan optimization in the metal industry, considered as a multi-criteria programming problem. We first provide a definition of the multi-criteria programming problem and a classification of multi-criteria programming methods. We then apply two multi-criteria programming methods (the STEM method and the PROMETHEE method) to solve a multi-criteria production plan optimization problem in a company from the metal industry. The obtained results indicate the high efficiency of the applied methods in solving this problem.
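
The PROMETHEE ranking step can be sketched with the "usual" preference function and fixed criterion weights; the three production plans, criteria and weights below are invented for illustration and are not from the paper:

```python
import numpy as np

def promethee_net_flows(scores, weights):
    """PROMETHEE II net outranking flow for each alternative (row of scores),
    using the 'usual' preference function (1 if strictly better, else 0)."""
    n = scores.shape[0]
    pi = np.zeros((n, n))                     # aggregated preference indices
    for a in range(n):
        for b in range(n):
            if a != b:
                pi[a, b] = weights @ (scores[a] > scores[b]).astype(float)
    # net flow = (leaving flow) - (entering flow), averaged over opponents
    return (pi.sum(axis=1) - pi.sum(axis=0)) / (n - 1)

# Rows: plans A, B, C; columns: profit, capacity use, delivery reliability
scores = np.array([[9.0, 7.0, 8.0],
                   [6.0, 8.0, 5.0],
                   [4.0, 3.0, 6.0]])
weights = np.array([0.5, 0.3, 0.2])
flows = promethee_net_flows(scores, weights)  # higher flow = better plan
```

Ranking the plans by net flow gives the complete order that PROMETHEE II produces; the STEM method, by contrast, proceeds interactively and is not sketched here.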

  12. A scale-entropy diffusion equation to describe the multi-scale features of turbulent flames near a wall

    Science.gov (United States)

    Queiros-Conde, D.; Foucher, F.; Mounaïm-Rousselle, C.; Kassem, H.; Feidt, M.

    2008-12-01

    Multi-scale features of turbulent flames near a wall display two kinds of scale-dependent fractal behaviour. In scale-space, a unique fractal dimension cannot be defined, and the fractal dimension of the front is scale-dependent. Moreover, when the front approaches the wall, this dependency changes: the fractal dimension also depends on the wall-distance. Our aim here is to propose a general geometrical framework that makes it possible to integrate these two cases, in order to describe the multi-scale structure of turbulent flames interacting with a wall. Based on the scale-entropy quantity, which is simply linked to the roughness of the front, we introduce a general scale-entropy diffusion equation. We define the notion of "scale-evolutivity", which characterises the deviation of a multi-scale system from pure fractal behaviour. The specific case of a constant scale-evolutivity over the scale-range is studied. In this case, called "parabolic scaling", the fractal dimension is a linear function of the logarithm of scale. A constant scale-evolutivity in wall-distance space implies that the fractal dimension depends linearly on the logarithm of the wall-distance. We then verified experimentally that parabolic scaling represents a good approximation of the real multi-scale features of turbulent flames near a wall.
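
The "parabolic scaling" statement can be checked numerically: if the scale-entropy (identified here with the logarithm of a coverage count N(s)) is quadratic in ln s, the local fractal dimension is linear in ln s with slope equal to the scale-evolutivity. The constants below are arbitrary, not measured values:

```python
import numpy as np

# Synthetic parabolic-scaling data: ln N(s) quadratic in ln s
log_s = np.linspace(np.log(1e-3), np.log(1.0), 50)
d0, evolutivity = 1.2, 0.05                            # assumed constants
log_N = -(d0 * log_s + 0.5 * evolutivity * log_s**2)   # scale-entropy

# Local fractal dimension: D(s) = -d ln N / d ln s
D_local = -np.gradient(log_N, log_s, edge_order=2)
slope, intercept = np.polyfit(log_s, D_local, 1)       # D = d0 + evolutivity*ln s
```

The fit recovers the assumed constants, illustrating that a constant scale-evolutivity is equivalent to a fractal dimension linear in the logarithm of scale.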

  13. Coarse-graining and hybrid methods for efficient simulation of stochastic multi-scale models of tumour growth

    International Nuclear Information System (INIS)

    Cruz, Roberto de la; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás

    2017-01-01

    The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction–diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction–diffusion systems. The method is developed for a stochastic multi-scale model of tumour growth, i.e. a population-dynamical model which accounts for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. This approximation involves averaging out the age structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles into equilibrium very quickly. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we neglect noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction–diffusion systems, we need to account for the age structure of the population when attempting to couple both descriptions. We exploit our coarse-grained model so that, within the mean-field region, the age distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently: upon transfer of cells from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as travelling-wave velocity. We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge

  14. MULTI-CRITERIA PROGRAMMING METHODS AND PRODUCTION PLAN OPTIMIZATION PROBLEM SOLVING IN METAL INDUSTRY

    OpenAIRE

    Tunjo Perić; Željko Mandić

    2017-01-01

    This paper presents the production plan optimization in the metal industry considered as a multi-criteria programming problem. We first provided the definition of the multi-criteria programming problem and classification of the multicriteria programming methods. Then we applied two multi-criteria programming methods (the STEM method and the PROMETHEE method) in solving a problem of multi-criteria optimization production plan in a company from the metal industry. The obtained resul...

  15. Modelling of large-scale structures arising under developed turbulent convection in a horizontal fluid layer (with application to the problem of tropical cyclone origination)

    Directory of Open Access Journals (Sweden)

    G. V. Levina

    2000-01-01

    Full Text Available The work is concerned with the results of theoretical and laboratory modelling of the processes of large-scale structure generation under turbulent convection in a rotating plane horizontal layer of an incompressible fluid with unstable stratification. The theoretical model describes three alternative ways of creating unstable stratification: heating the layer from below, volumetric heating of a fluid with internal heat sources, and a combination of both factors. The analysis of the model equations shows that, under conditions of high intensity of the small-scale convection and a low level of heat loss through the horizontal layer boundaries, a long-wave instability may arise. The condition for the existence of the instability and the criterion identifying the threshold of its initiation have been determined. The principle of action of the discovered instability mechanism is described. Theoretical predictions have been verified by a series of experiments on a laboratory model. The horizontal dimensions of the experimentally obtained long-lived vortices are 4–6 times larger than the thickness of the fluid layer. This work presents a description of the laboratory setup and the experimental procedure. From the geophysical viewpoint, the examined long-wave instability mechanism is supposed to be adequate to describe the initial step in the evolution of such large-scale vortices as tropical cyclones: the transition from small-scale cumulus clouds to a state of the atmosphere involving cloud clusters (the stage of initial tropical perturbation).

  16. Screening wells by multi-scale grids for multi-stage Markov Chain Monte Carlo simulation

    DEFF Research Database (Denmark)

    Akbari, Hani; Engsig-Karup, Allan Peter

    2018-01-01

    /production wells, aiming at accurate breakthrough capturing as well as the above-mentioned efficiency goals. However, this short-time simulation needs the fine-scale structure of the geological model around wells, and running a fine-scale model is not as cheap as necessary for screening steps. On the other hand, applying…… it on a coarse-scale model discards important data around wells and causes inaccurate results, particularly for breakthrough capturing, which is important for prediction applications. Therefore we propose a multi-scale grid which preserves the fine-scale model around wells (as well as high-permeability regions…… and fractures) and coarsens the rest of the field, keeping the well-screening stage and the coarse-scale simulation both efficient and accurate. A discrete wavelet transform is used as a powerful tool to generate the desired unstructured multi-scale grid efficiently. Finally an accepted proposal on coarse…

  17. Implementation of Grid-computing Framework for Simulation in Multi-scale Structural Analysis

    Directory of Open Access Journals (Sweden)

    Data Iranata

    2010-05-01

    Full Text Available A new grid-computing framework for simulation in multi-scale structural analysis is presented. Two levels of parallel processing are involved in this framework: multiple local distributed computing environments connected by a local network to form a grid-based cluster-to-cluster distributed computing environment. To perform the simulation, a large-scale structural system task is decomposed into the simulation of a simplified global model and of several detailed component models at various scales. These correlated multi-scale structural tasks are distributed among clusters, connected together in a multi-level hierarchy, and then coordinated over the internet. The software framework supporting the multi-scale structural simulation approach is also presented. The program architecture allows the integration of several multi-scale models as clients and servers under a single platform. To check its feasibility, a prototype software system has been designed and implemented. The simulation results show that the software framework increases the speedup of the structural analysis. Based on this result, the proposed grid-computing framework is suitable for simulation in multi-scale structural analysis.

  18. A review on fuzzy and stochastic extensions of the multi index transportation problem

    Directory of Open Access Journals (Sweden)

    Singh Sungeeta

    2017-01-01

    Full Text Available The classical transportation problem (having source and destination as indices) deals with the objective of minimizing a single criterion, i.e. the cost of transporting a commodity. Additional indices such as commodities and modes of transport led to the Multi Index transportation problem. An additional fixed cost, independent of the units transported, led to the Multi Index Fixed Charge transportation problem. Criteria other than cost (such as time, profit, etc.) led to the Multi Index Bi-criteria transportation problem. The application of fuzzy and stochastic concepts to the above transportation problems enables researchers not only to introduce real-life uncertainties but also to obtain solutions of these transportation problems. This review article presents an organized study of the Multi Index transportation problem and its fuzzy and stochastic extensions to date, and aims to help researchers working with complex transportation problems.
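    For reference alongside the review, the deterministic three-index (solid) transportation problem that the surveyed extensions build on can be stated as a linear program; the symbols below are standard textbook notation, not taken from this article:

```latex
% x_{ijk}: units of commodity k shipped from source i to destination j,
% with per-unit cost c_{ijk}, supplies a_{ik}, demands b_{jk}, and
% route capacities/conveyance totals e_{ij}.
\min \sum_{i=1}^{m}\sum_{j=1}^{n}\sum_{k=1}^{p} c_{ijk}\, x_{ijk}
\quad \text{subject to} \quad
\sum_{j} x_{ijk} = a_{ik}\ \ \forall i,k, \qquad
\sum_{i} x_{ijk} = b_{jk}\ \ \forall j,k, \qquad
\sum_{k} x_{ijk} = e_{ij}\ \ \forall i,j, \qquad
x_{ijk} \ge 0 .
```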

  19. Multi-scale analysis of a function by neural networks elementary derivatives functions

    International Nuclear Information System (INIS)

    Chikhi, A.; Gougam, A.; Chafa, F.

    2006-01-01

    Recently, the wavelet network has been introduced as a special neural network supported by wavelet theory. Such networks constitute a tool for function approximation problems, as has already been proved in the reference. Our present work deals with this model, treating a multi-scale analysis of a function. We have used a linear expansion of a given function in wavelets, neglecting the usual translation parameters. We investigate two training operations: the first consists of optimizing the output synaptic layer; the second optimizes the output function with respect to the scale parameters. We notice a temporary merging of the scale parameters leading to some interesting results: new elementary derivative units emerge, representing a new elementary task, which is the derivative of the output task.

  20. Consensus for linear multi-agent system with intermittent information transmissions using the time-scale theory

    Science.gov (United States)

    Taousser, Fatima; Defoort, Michael; Djemai, Mohamed

    2016-01-01

    This paper investigates the consensus problem for a linear multi-agent system with a fixed communication topology in the presence of intermittent communication, using time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens on a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to the intermittent information transmissions. Time-scale theory provides a powerful tool to combine the continuous-time and discrete-time cases and study the consensus protocol under a unified framework. Using this theory, conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
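    The effect described above can be reproduced in a toy simulation: two agents run the continuous-time averaging protocol dx_i = -(x_i - x_j) dt (forward Euler) during "on" windows and hold their states during "off" windows, so the disagreement still decays exponentially. The window pattern, gain, and step size are invented for illustration, not taken from the paper.

```python
# Two-agent consensus with intermittent communication (forward Euler).
def simulate(x, t_end=8.0, dt=0.001, on=1.0, off=1.0):
    t = 0.0
    while t < t_end:
        # Communication is active on [0, on), silent on [on, on + off), etc.
        if (t % (on + off)) < on:
            x1, x2 = x
            x = [x1 - dt * (x1 - x2), x2 - dt * (x2 - x1)]
        # During silent windows the states are simply held.
        t += dt
    return x

x_final = simulate([1.0, -1.0])
disagreement = abs(x_final[0] - x_final[1])
print(disagreement)  # small: consensus is reached despite silent intervals
```

    The update conserves the average of the states, so both agents converge to the mean of the initial conditions (here 0), mirroring the average-consensus behaviour of the continuous protocol.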

  1. The Cea multi-scale and multi-physics simulation project for nuclear applications

    International Nuclear Information System (INIS)

    Ledermann, P.; Chauliac, C.; Thomas, J.B.

    2005-01-01

    Full text of publication follows. Today numerical modelling is everywhere recognized as an essential tool for the capitalization, integration and sharing of knowledge. For this reason, it has become a central tool of research. Until now, the Cea has developed a set of scientific software that makes it possible to model, in each situation, the operation of all or part of a nuclear installation, and these codes are widely used in the nuclear industry. For the future, however, it is essential to aim for better accuracy, better control of uncertainties and better computing performance. The objective is to obtain validated models allowing accurate predictive calculations for actual complex nuclear problems such as fuel behaviour in accidental situations. This demands mastering a large and interactive set of phenomena ranging from nuclear reactions to heat transfer. To this end, the Cea, with industrial partners (EDF, Framatome-ANP, ANDRA), has designed an integrated calculation platform, devoted to the study of nuclear systems and intended both for industry and for scientists. The development of this platform is under way with the start in 2005 of the integrated project NURESIM, with 18 European partners. Improvement comes not only through a multi-scale description of all phenomena but also through an innovative design approach requiring deep functional analysis upstream of the development of the simulation platform itself. In addition, the studies of future nuclear systems are increasingly multidisciplinary (simultaneous modelling of core physics, thermal-hydraulics and fuel behaviour). These multi-physics and multi-scale aspects make it mandatory to pay very careful attention to software architecture issues. A global platform is thus developed, integrating dedicated specialized platforms: DESCARTES for core physics, NEPTUNE for thermal-hydraulics, PLEIADES for fuel behaviour, SINERGY for materials behaviour under irradiation, ALLIANCES for the performance

  2. Multi scale analysis of ITER pre-compression rings

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ben, E-mail: ben.park@sener.es [SENER Ingeniería y Sistemas S.A., Barcelona (Spain); Foussat, Arnaud [ITER Organization, St. Paul-Lez-Durance (France); Rajainmaki, Hannu [Fusion for Energy, Barcelona (Spain); Knaster, Juan [IFMIF, Aomori (Japan)

    2013-10-15

    Highlights: • A multi-scale analysis approach employing ABAQUS FEM models at various scales has been used to calculate the response and performance of the rings. • We have studied the effects of various defects on the performance of the rings under the operating temperatures and loading that will be applied to the PCRs. • The multi-scale analysis results are presented here. -- Abstract: The Pre-compression Rings (PCRs) of ITER represent one of the largest and most highly stressed composite structures ever designed for long-term operation at 4 K. Six rings, each 5 m in diameter and 337 mm × 288 mm in cross-section, will be manufactured from S2 fiber-glass/epoxy composite and installed, three at the top and three at the bottom of the eighteen D-shaped toroidal field (TF) coils, to apply a total centripetal pre-load of 70 MN per TF coil. The composite rings will be fabricated with a high content (65% by volume) of S2 fiber-glass in an epoxy resin matrix. During manufacture, emphasis will be placed on obtaining a structure with a very low void content and minimal presence of critical defects, such as delaminations. This paper presents a unified framework for the multi-scale analysis of the composite structure of the PCRs. A multi-scale analysis approach employing ABAQUS FEM models at various scales and other analysis tools has been used to calculate the response and performance of the rings over the design life of the structure. We have studied the effects of various defects on the performance of the rings under the operating temperatures and loading that will be applied to the PCRs. The results are presented here.

  3. A multi-objective optimization problem for multi-state series-parallel systems: A two-stage flow-shop manufacturing system

    International Nuclear Information System (INIS)

    Azadeh, A.; Maleki Shoja, B.; Ghanei, S.; Sheikhalishahi, M.

    2015-01-01

    This research investigates a redundancy-scheduling optimization problem for a multi-state series-parallel system. The system is a flow-shop manufacturing system with multi-state machines. Each manufacturing machine may have different performance rates, including perfect performance, decreased performance and complete failure. Moreover, warm standby redundancy is considered for the redundancy allocation problem. Three objectives are considered: (1) minimizing system purchasing cost, (2) minimizing makespan, and (3) maximizing system reliability. The universal generating function is employed to evaluate system performance and the overall reliability of the system. Since the problem is in the NP-hard class of combinatorial problems, a genetic algorithm (GA) is used to find optimal/near-optimal solutions. Different test problems are generated to evaluate the effectiveness and efficiency of the proposed approach, which is compared to a simulated annealing method. The results show the proposed approach is capable of finding optimal/near-optimal solutions within a very reasonable time. - Highlights: • A redundancy-scheduling optimization problem for a multi-state series-parallel system. • A flow shop with multi-state machines and warm standby redundancy. • Objectives are to optimize system purchasing cost, makespan and reliability. • Different test problems are generated and evaluated by a unique genetic algorithm. • It locates optimal/near-optimal solutions within a very reasonable time
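    The universal generating function (UGF) evaluation mentioned above can be sketched compactly: each multi-state machine is a list of (probability, performance rate) pairs; parallel machines add their rates, and series stages pass the minimum (bottleneck) rate. The machines and the demand level below are invented for illustration, not taken from the paper.

```python
from collections import defaultdict

def combine(u, v, op):
    # UGF composition: multiply state probabilities, combine rates with `op`.
    out = defaultdict(float)
    for p1, g1 in u:
        for p2, g2 in v:
            out[op(g1, g2)] += p1 * p2
    return sorted(((p, g) for g, p in out.items()), key=lambda t: t[1])

# Stage 1: two parallel machines; stage 2: one machine in series.
m1 = [(0.1, 0), (0.3, 5), (0.6, 10)]   # failed / degraded / nominal
m2 = [(0.2, 0), (0.8, 10)]
m3 = [(0.1, 0), (0.9, 15)]

stage1 = combine(m1, m2, lambda a, b: a + b)  # parallel: rates add
system = combine(stage1, m3, min)             # series: bottleneck rate
reliability = sum(p for p, g in system if g >= 10)  # P(rate meets demand 10)
print(round(reliability, 3))  # → 0.828
```

    The same two composition operators, applied recursively over the system structure, give the full performance distribution used to evaluate the reliability objective.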

  4. Quantifying restoration effectiveness using multi-scale habitat models: Implications for sage-grouse in the Great Basin

    Science.gov (United States)

    Robert S. Arkle; David S. Pilliod; Steven E. Hanser; Matthew L. Brooks; Jeanne C. Chambers; James B. Grace; Kevin C. Knutson; David A. Pyke; Justin L. Welty; Troy A. Wirth

    2014-01-01

    A recurrent challenge in the conservation of wide-ranging, imperiled species is understanding which habitats to protect and whether we are capable of restoring degraded landscapes. For Greater Sage-grouse (Centrocercus urophasianus), a species of conservation concern in the western United States, we approached this problem by developing multi-scale empirical models of...

  5. FAST LABEL: Easy and efficient solution of joint multi-label and estimation problems

    KAUST Repository

    Sundaramoorthi, Ganesh

    2014-06-01

    We derive an easy-to-implement and efficient algorithm for solving multi-label image partitioning problems in the form of the problem addressed by Region Competition. These problems jointly determine a parameter for each of the regions in the partition. Given an estimate of the parameters, a fast approximate solution to the multi-label sub-problem is derived by a global update that uses smoothing and thresholding. The method is empirically validated to be robust to fine details of the image that plague local solutions. Further, in comparison to global methods for the multi-label problem, the method is more efficient and it is easy for a non-specialist to implement. We give sample Matlab code for the multi-label Chan-Vese problem in this paper! Experimental comparison to the state-of-the-art in multi-label solutions to Region Competition shows that our method achieves equal or better accuracy, with the main advantage being speed and ease of implementation.
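    The "smoothing and thresholding" global update described above can be illustrated in one dimension; this is a minimal sketch, not the paper's exact algorithm: per-label fitting scores are smoothed, then each pixel takes the label with the best smoothed score. The box filter, toy signal, and region means are assumptions for illustration.

```python
def box_smooth(xs, radius=1):
    # Simple box-filter smoothing as a stand-in for the paper's smoothing step.
    out = []
    for i in range(len(xs)):
        lo, hi = max(0, i - radius), min(len(xs), i + radius + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

def multilabel_update(signal, means, radius=1):
    # Chan-Vese-like data term: score of label k at pixel i is the
    # negative squared deviation from that region's mean.
    scores = [box_smooth([-(v - m) ** 2 for v in signal], radius)
              for m in means]
    # "Thresholding": each pixel takes the label with the best smoothed score.
    return [max(range(len(means)), key=lambda k: scores[k][i])
            for i in range(len(signal))]

signal = [0.0, 0.1, 0.0, 1.0, 0.9, 1.0, 2.0, 1.9, 2.1]
print(multilabel_update(signal, means=[0.0, 1.0, 2.0]))
# → [0, 0, 0, 1, 1, 1, 2, 2, 2]
```

    Smoothing the score maps before the pointwise argmax is what makes the update robust to isolated noisy pixels, at the cost of some blurring at region boundaries.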

  6. The development and validation of an urbanicity scale in a multi-country study.

    Science.gov (United States)

    Novak, Nicole L; Allender, Steven; Scarborough, Peter; West, Douglas

    2012-07-20

    Although urban residence is consistently identified as one of the primary correlates of non-communicable disease in low- and middle-income countries, it is not clear why or how urban settings predispose individuals and populations to non-communicable disease (NCD), or how this relationship could be modified to slow the spread of NCD. The urban-rural dichotomy used in most population health research lacks the nuance and specificity necessary to understand the complex relationship between urbanicity and NCD risk. Previous studies have developed and validated quantitative tools to measure urbanicity continuously along several dimensions but all have been isolated to a single country. The purposes of this study were 1) To assess the feasibility and validity of a multi-country urbanicity scale; 2) To report some of the considerations that arise in applying such a scale in different countries; and, 3) To assess how this scale compares with previously validated scales of urbanicity. Household and community-level data from the Young Lives longitudinal study of childhood poverty in 59 communities in Ethiopia, India and Peru collected in 2006/2007 were used. Household-level data include parents' occupations and education level, household possessions and access to resources. Community-level data include population size, availability of health facilities and types of roads. Variables were selected for inclusion in the urbanicity scale based on inspection of the data and a review of literature on urbanicity and health. Seven domains were constructed within the scale: Population Size, Economic Activity, Built Environment, Communication, Education, Diversity and Health Services. The scale ranged from 11 to 61 (mean 35) with significant between country differences in mean urbanicity; Ethiopia (30.7), India (33.2), Peru (39.4). Construct validity was supported by factor analysis and high corrected item-scale correlations suggest good internal consistency. High agreement was
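    The corrected item-scale correlations cited as evidence of internal consistency can be illustrated with a small computation: each domain score is correlated with the sum of the remaining domains. The domain names and toy community scores below are invented for illustration, not Young Lives data.

```python
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def corrected_item_scale(items):
    """items: dict of domain -> list of community scores.
    Correlates each item with the rest-score (total minus that item)."""
    names = list(items)
    out = {}
    for name in names:
        rest = [sum(items[o][i] for o in names if o != name)
                for i in range(len(items[name]))]
        out[name] = pearson(items[name], rest)
    return out

# Toy scores for 5 communities on three of the seven domains.
toy = {
    "population_size": [2, 4, 5, 7, 9],
    "built_environment": [1, 3, 6, 6, 8],
    "health_services": [2, 3, 5, 8, 8],
}
print(corrected_item_scale(toy))  # all correlations high → good consistency
```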

  7. Multi codes and multi-scale analysis for void fraction prediction in hot channel for VVER-1000/V392

    International Nuclear Information System (INIS)

    Hoang Minh Giang; Hoang Tan Hung; Nguyen Huu Tiep

    2015-01-01

    Recently, an approach combining multiple codes and multi-scale analysis has been widely applied to study core thermal-hydraulic behaviour such as void fraction prediction. Better results are achieved by using multiple or coupled codes such as PARCS and RELAP5. The advantage of multi-scale analysis is the ability to zoom in on the region of interest in the simulated domain for detailed investigation. Therefore, in this study, the multi-code approach combining MCNP5, RELAP5 and CTF, as well as the multi-scale analysis based on RELAP5 and CTF, is applied to investigate the void fraction in the hot channel of the VVER-1000/V392 reactor. Since the VVER-1000/V392 is a typical advanced reactor that can be considered the basis for the later VVER-1200, understanding core behaviour in transient conditions is necessary in order to investigate VVER technology. It is shown that the near-wall boiling term Γ_w in RELAP5, based on Lahey's mechanistic method, may not predict the void fraction as accurately as the smaller-scale code CTF. (author)

  8. A Multi-Scale Energy Food Systems Modeling Framework For Climate Adaptation

    Science.gov (United States)

    Siddiqui, S.; Bakker, C.; Zaitchik, B. F.; Hobbs, B. F.; Broaddus, E.; Neff, R.; Haskett, J.; Parker, C.

    2016-12-01

    Our goal is to understand coupled system dynamics across scales in a manner that allows us to quantify the sensitivity of critical human outcomes (nutritional satisfaction, household economic well-being) to development strategies and to climate or market induced shocks in sub-Saharan Africa. We adopt both bottom-up and top-down multi-scale modeling approaches focusing our efforts on food, energy, water (FEW) dynamics to define, parameterize, and evaluate modeled processes nationally as well as across climate zones and communities. Our framework comprises three complementary modeling techniques spanning local, sub-national and national scales to capture interdependencies between sectors, across time scales, and on multiple levels of geographic aggregation. At the center is a multi-player micro-economic (MME) partial equilibrium model for the production, consumption, storage, and transportation of food, energy, and fuels, which is the focus of this presentation. We show why such models can be very useful for linking and integrating across time and spatial scales, as well as a wide variety of models including an agent-based model applied to rural villages and larger population centers, an optimization-based electricity infrastructure model at a regional scale, and a computable general equilibrium model, which is applied to understand FEW resources and economic patterns at national scale. The MME is based on aggregating individual optimization problems for relevant players in an energy, electricity, or food market and captures important food supply chain components of trade and food distribution accounting for infrastructure and geography. Second, our model considers food access and utilization by modeling food waste and disaggregating consumption by income and age. Third, the model is set up to evaluate the effects of seasonality and system shocks on supply, demand, infrastructure, and transportation in both energy and food.

  9. Multi-scale graph-cut algorithm for efficient water-fat separation.

    Science.gov (United States)

    Berglund, Johan; Skorpil, Mikael

    2017-09-01

    To improve the accuracy and robustness to noise in water-fat separation by unifying the multi-scale and graph-cut based approaches to B0 correction. A previously proposed water-fat separation algorithm that corrects for B0 field inhomogeneity in 3D by a single quadratic pseudo-Boolean optimization (QPBO) graph cut was incorporated into a multi-scale framework, where field-map solutions are propagated from coarse to fine scales for voxels that are not resolved by the graph cut. The accuracy of the single-scale and multi-scale QPBO algorithms was evaluated against benchmark reference datasets. The robustness to noise was evaluated by adding noise to the input data prior to water-fat separation. Both algorithms achieved the highest accuracy when compared with seven previously published methods, while computation times were acceptable for implementation in clinical routine. The multi-scale algorithm was more robust to noise than the single-scale algorithm, while causing only a small increase (+10%) in reconstruction time. The proposed 3D multi-scale QPBO algorithm offers accurate water-fat separation, robustness to noise, and fast reconstruction. The software implementation is freely available to the research community. Magn Reson Med 78:941-949, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  10. Chinese Medical Question Answer Matching Using End-to-End Character-Level Multi-Scale CNNs

    Directory of Open Access Journals (Sweden)

    Sheng Zhang

    2017-07-01

    Full Text Available This paper focuses mainly on the problem of Chinese medical question answer matching, which is arguably more challenging than open-domain question answer matching in English due to the combination of its domain-restricted nature and the language-specific features of Chinese. We present an end-to-end character-level multi-scale convolutional neural framework in which character embeddings instead of word embeddings are used to avoid Chinese word segmentation in text preprocessing, and multi-scale convolutional neural networks (CNNs are then introduced to extract contextual information from either question or answer sentences over different scales. The proposed framework can be trained with minimal human supervision and does not require any handcrafted features, rule-based patterns, or external resources. To validate our framework, we create a new text corpus, named cMedQA, by harvesting questions and answers from an online Chinese health and wellness community. The experimental results on the cMedQA dataset show that our framework significantly outperforms several strong baselines, and achieves an improvement of top-1 accuracy by up to 19%.

  11. Optimal Multi-scale Demand-side Management for Continuous Power-Intensive Processes

    Science.gov (United States)

    Mitra, Sumit

    With the advent of deregulation in electricity markets and an increasing share of intermittent power generation sources, the profitability of industrial consumers that operate power-intensive processes has become directly linked to the variability in energy prices. Thus, for industrial consumers that are able to adjust to the fluctuations, time-sensitive electricity prices (as part of so-called Demand-Side Management (DSM) in the smart grid) offer potential economical incentives. In this thesis, we introduce optimization models and decomposition strategies for the multi-scale Demand-Side Management of continuous power-intensive processes. On an operational level, we derive a mode formulation for scheduling under time-sensitive electricity prices. The formulation is applied to air separation plants and cement plants to minimize the operating cost. We also describe how a mode formulation can be used for industrial combined heat and power plants that are co-located at integrated chemical sites to increase operating profit by adjusting their steam and electricity production according to their inherent flexibility. Furthermore, a robust optimization formulation is developed to address the uncertainty in electricity prices by accounting for correlations and multiple ranges in the realization of the random variables. On a strategic level, we introduce a multi-scale model that provides an understanding of the value of flexibility of the current plant configuration and the value of additional flexibility in terms of retrofits for Demand-Side Management under product demand uncertainty. The integration of multiple time scales leads to large-scale two-stage stochastic programming problems, for which we need to apply decomposition strategies in order to obtain a good solution within a reasonable amount of time. 
Hence, we describe two decomposition schemes that can be applied to solve two-stage stochastic programming problems: First, a hybrid bi-level decomposition scheme with

  12. The development and validation of an urbanicity scale in a multi-country study

    Directory of Open Access Journals (Sweden)

    Novak Nicole L

    2012-07-01

    Full Text Available Abstract Background Although urban residence is consistently identified as one of the primary correlates of non-communicable disease in low- and middle-income countries, it is not clear why or how urban settings predispose individuals and populations to non-communicable disease (NCD), or how this relationship could be modified to slow the spread of NCD. The urban–rural dichotomy used in most population health research lacks the nuance and specificity necessary to understand the complex relationship between urbanicity and NCD risk. Previous studies have developed and validated quantitative tools to measure urbanicity continuously along several dimensions but all have been isolated to a single country. The purposes of this study were 1) To assess the feasibility and validity of a multi-country urbanicity scale; 2) To report some of the considerations that arise in applying such a scale in different countries; and 3) To assess how this scale compares with previously validated scales of urbanicity. Methods Household and community-level data from the Young Lives longitudinal study of childhood poverty in 59 communities in Ethiopia, India and Peru collected in 2006/2007 were used. Household-level data include parents’ occupations and education level, household possessions and access to resources. Community-level data include population size, availability of health facilities and types of roads. Variables were selected for inclusion in the urbanicity scale based on inspection of the data and a review of literature on urbanicity and health. Seven domains were constructed within the scale: Population Size, Economic Activity, Built Environment, Communication, Education, Diversity and Health Services. Results The scale ranged from 11 to 61 (mean 35) with significant between-country differences in mean urbanicity: Ethiopia (30.7), India (33.2), Peru (39.4). Construct validity was supported by factor analysis, and high corrected item-scale correlations suggest

  13. MUSIC: MUlti-Scale Initial Conditions

    Science.gov (United States)

    Hahn, Oliver; Abel, Tom

    2013-11-01

    MUSIC generates multi-scale initial conditions with multiple levels of refinement for cosmological ‘zoom-in’ simulations. The code uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel, together with an adaptive multi-grid Poisson solver, to generate displacements and velocities following first-order (1LPT) or second-order (2LPT) Lagrangian perturbation theory. MUSIC achieves rms relative errors of the order of 10^-4 for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier-space-induced interference ringing.

  14. Multi-scale modeling for sustainable chemical production.

    Science.gov (United States)

    Zhuang, Kai; Bakshi, Bhavik R; Herrgård, Markus J

    2013-09-01

    With recent advances in metabolic engineering, it is now technically possible to produce a wide portfolio of existing petrochemical products from biomass feedstock. In recent years, a number of modeling approaches have been developed to support the engineering and decision-making processes associated with the development and implementation of a sustainable biochemical industry. The temporal and spatial scales of modeling approaches for sustainable chemical production vary greatly, ranging from metabolic models that aid the design of fermentative microbial strains to material and monetary flow models that explore the ecological impacts of all economic activities. Research efforts that attempt to connect the models at different scales have been limited. Here, we review a number of existing modeling approaches and their applications at the scales of metabolism, bioreactor, overall process, chemical industry, economy, and ecosystem. In addition, we propose a multi-scale approach for integrating the existing models into a cohesive framework. The major benefit of this proposed framework is that the design and decision-making at each scale can be informed, guided, and constrained by simulations and predictions at every other scale. In addition, the development of this multi-scale framework would promote cohesive collaborations across multiple traditionally disconnected modeling disciplines to achieve sustainable chemical production. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Contextual Multi-Scale Region Convolutional 3D Network for Activity Detection

    KAUST Repository

    Bai, Yancheng

    2018-01-28

    Activity detection is a fundamental problem in computer vision. Detecting activities of different temporal scales is particularly challenging. In this paper, we propose the contextual multi-scale region convolutional 3D network (CMS-RC3D) for activity detection. To deal with the inherent temporal scale variability of activity instances, the temporal feature pyramid is used to represent activities of different temporal scales. On each level of the temporal feature pyramid, an activity proposal detector and an activity classifier are learned to detect activities of specific temporal scales. Temporal contextual information is fused into activity classifiers for better recognition. More importantly, the entire model at all levels can be trained end-to-end. Our CMS-RC3D detector can deal with activities at all temporal scale ranges with only a single pass through the backbone network. We test our detector on two public activity detection benchmarks, THUMOS14 and ActivityNet. Extensive experiments show that the proposed CMS-RC3D detector outperforms state-of-the-art methods on THUMOS14 by a substantial margin and achieves comparable results on ActivityNet despite using a shallow feature extractor.

  16. Contextual Multi-Scale Region Convolutional 3D Network for Activity Detection

    KAUST Repository

    Bai, Yancheng; Xu, Huijuan; Saenko, Kate; Ghanem, Bernard

    2018-01-01

    Activity detection is a fundamental problem in computer vision. Detecting activities of different temporal scales is particularly challenging. In this paper, we propose the contextual multi-scale region convolutional 3D network (CMS-RC3D) for activity detection. To deal with the inherent temporal scale variability of activity instances, the temporal feature pyramid is used to represent activities of different temporal scales. On each level of the temporal feature pyramid, an activity proposal detector and an activity classifier are learned to detect activities of specific temporal scales. Temporal contextual information is fused into activity classifiers for better recognition. More importantly, the entire model at all levels can be trained end-to-end. Our CMS-RC3D detector can deal with activities at all temporal scale ranges with only a single pass through the backbone network. We test our detector on two public activity detection benchmarks, THUMOS14 and ActivityNet. Extensive experiments show that the proposed CMS-RC3D detector outperforms state-of-the-art methods on THUMOS14 by a substantial margin and achieves comparable results on ActivityNet despite using a shallow feature extractor.

  17. Deep multi-scale convolutional neural network for hyperspectral image classification

    Science.gov (United States)

    Zhang, Feng-zhe; Yang, Xia

    2018-04-01

    In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. First, in contrast to conventional convolutions, we utilize multi-scale convolutions, which possess larger receptive fields, to extract the spectral features of hyperspectral images. We design a deep neural network with a multi-scale convolution layer that contains 3 different convolution kernel sizes. Second, to avoid overfitting of the deep neural network, dropout is utilized, which randomly deactivates neurons and modestly improves classification accuracy. In addition, recent deep learning techniques such as the ReLU activation are employed. We conduct experiments on the University of Pavia and Salinas datasets and obtain better classification accuracy than competing methods.
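    The multi-scale layer idea — one layer applying several kernel sizes to the same input and stacking the responses — can be sketched minimally in numpy. The random kernel weights and naive loops are stand-ins for illustration; the paper's actual network is a trained deep CNN.

```python
import numpy as np

def conv2d_same(image, kernel):
    """Naive 'same'-padded 2-D cross-correlation (single channel)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    h, w = image.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def multi_scale_layer(image, kernel_sizes=(3, 5, 7), seed=0):
    """Apply one (here random) kernel per scale and stack the responses,
    mimicking a multi-scale convolution layer with three kernel sizes."""
    rng = np.random.default_rng(seed)
    maps = [conv2d_same(image, rng.standard_normal((k, k)) / k**2)
            for k in kernel_sizes]
    return np.stack(maps, axis=0)  # shape: (num_scales, H, W)
```

    Each scale sees a progressively larger receptive field over the same input, and the stacked maps would feed subsequent layers.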

  18. Efficient exact optimization of multi-objective redundancy allocation problems in series-parallel systems

    International Nuclear Information System (INIS)

    Cao, Dingzhou; Murat, Alper; Chinnam, Ratna Babu

    2013-01-01

    This paper proposes a decomposition-based approach to exactly solve the multi-objective Redundancy Allocation Problem for series-parallel systems. Redundancy allocation problem is a form of reliability optimization and has been the subject of many prior studies. The majority of these earlier studies treat redundancy allocation problem as a single objective problem maximizing the system reliability or minimizing the cost given certain constraints. The few studies that treated redundancy allocation problem as a multi-objective optimization problem relied on meta-heuristic solution approaches. However, meta-heuristic approaches have significant limitations: they do not guarantee that Pareto points are optimal and, more importantly, they may not identify all the Pareto-optimal points. In this paper, we treat redundancy allocation problem as a multi-objective problem, as is typical in practice. We decompose the original problem into several multi-objective sub-problems, efficiently and exactly solve sub-problems, and then systematically combine the solutions. The decomposition-based approach can efficiently generate all the Pareto-optimal solutions for redundancy allocation problems. Experimental results demonstrate the effectiveness and efficiency of the proposed method over meta-heuristic methods on a numerical example taken from the literature.
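    The exact, non-heuristic flavour of the problem can be illustrated on a tiny instance: enumerate all redundancy allocations of a series-parallel system and keep the non-dominated (Pareto-optimal) ones. This brute-force sketch is only feasible for small instances and is not the authors' decomposition method; the reliabilities and costs below are hypothetical.

```python
from itertools import product

def system_reliability(alloc, r):
    """Series-parallel reliability: subsystems in series, each with
    alloc[i] identical parallel components of reliability r[i]."""
    R = 1.0
    for n_i, r_i in zip(alloc, r):
        R *= 1.0 - (1.0 - r_i) ** n_i
    return R

def pareto_front(r, c, max_redundancy=3):
    """Enumerate all allocations and keep the non-dominated ones
    (maximize reliability, minimize cost)."""
    points = []
    for alloc in product(range(1, max_redundancy + 1), repeat=len(r)):
        R = system_reliability(alloc, r)
        C = sum(n_i * c_i for n_i, c_i in zip(alloc, c))
        points.append((alloc, R, C))
    # keep p unless some q is at least as good in both objectives
    # and strictly better in one
    return [p for p in points
            if not any(q[1] >= p[1] and q[2] <= p[2]
                       and (q[1] > p[1] or q[2] < p[2])
                       for q in points)]
```

    Enumeration guarantees every Pareto-optimal point is found, which is exactly the property meta-heuristics cannot promise; the paper's contribution is achieving this guarantee efficiently via decomposition.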

  19. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    Science.gov (United States)

    Kreibich, Heidi; Schröter, Kai; Merz, Bruno

    2016-05-01

    Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also reveal additional uncertainty, even more in upscaling procedures for meso-scale applications, where the parameters need to be estimated on a regional area-wide basis. To gain more knowledge about challenges associated with the up-scaling of multi-variable flood loss models the following approach is applied: Single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities, which were affected during the 2002 flood by the River Mulde in Saxony, Germany, by comparison to official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale case study based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling also on the meso-scale. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models like BT-FLEMO, used in this study, which inherently provide uncertainty information, is the way forward.

  20. Dry Port Location Problem: A Hybrid Multi-Criteria Approach

    Directory of Open Access Journals (Sweden)

    BENTALEB Fatimazahra

    2016-03-01

    Full Text Available Choosing a location for a dry port is a problem of growing importance. This study deals with the problem of locating dry ports. On this matter, a model combining multi-criteria (MACBETH) and mono-criteria (BARYCENTER) methods to find a solution to the dry port location problem has been proposed. In the first phase, a systematic literature review of the dry port location problem was carried out, and a methodological classification of this research was presented. In the second phase, a hybrid multi-criteria approach was developed in order to determine the best dry port location, taking different criteria into account. A computational study and a qualitative analysis from a case study in the Moroccan context have been provided. The results show that the optimal location is consistent with the geographical region and the government policies.

  1. Superhydrophobic multi-scale ZnO nanostructures fabricated by chemical vapor deposition method.

    Science.gov (United States)

    Zhou, Ming; Feng, Chengheng; Wu, Chunxia; Ma, Weiwei; Cai, Lan

    2009-07-01

    The ZnO nanostructures were synthesized on Si(100) substrates by the chemical vapor deposition (CVD) method. Different morphologies of ZnO nanostructures, such as nanoparticle films, micro-pillars and micro-nano multi-structures, were obtained under different conditions. The XRD and TEM results showed the good quality of the ZnO crystal growth. Selected-area electron diffraction analysis indicates that individual nano-wires are single crystals. The wettability of ZnO was studied with a contact angle measuring apparatus. We found that the wettability changes from hydrophobic to super-hydrophobic as the structure changes from a smooth particle film to single micro-pillars, nano-wires and micro-nano multi-scale structures. Compared with the particle film, whose contact angle (CA) is 90.7 degrees, the CA of the single-scale microstructure and the sparse micro-nano multi-scale structure is 130-140 degrees and 140-150 degrees, respectively. However, when the surface is a dense micro-nano multi-scale structure such as a nano-lawn, the CA can reach 168.2 degrees. The results indicate that the surface microstructure is very important to surface wettability. The wettability of the micro-nano multi-structure is better than that of the single-scale structure, and that of the dense micro-nano multi-structure is better than that of the sparse multi-structure.

  2. Implicit solvers for large-scale nonlinear problems

    International Nuclear Information System (INIS)

    Keyes, David E; Reynolds, Daniel R; Woodward, Carol S

    2006-01-01

    Computational scientists are grappling with increasingly complex, multi-rate applications that couple such physical phenomena as fluid dynamics, electromagnetics, radiation transport, chemical and nuclear reactions, and wave and material propagation in inhomogeneous media. Parallel computers with large storage capacities are paving the way for high-resolution simulations of coupled problems; however, hardware improvements alone will not prove enough to enable simulations based on brute-force algorithmic approaches. To accurately capture nonlinear couplings between dynamically relevant phenomena, often while stepping over rapid adjustments to quasi-equilibria, simulation scientists are increasingly turning to implicit formulations that require a discrete nonlinear system to be solved for each time step or steady state solution. Recent advances in iterative methods have made fully implicit formulations a viable option for solution of these large-scale problems. In this paper, we overview one of the most effective iterative methods, Newton-Krylov, for nonlinear systems and point to software packages with its implementation. We illustrate the method with an example from magnetically confined plasma fusion and briefly survey other areas in which implicit methods have bestowed important advantages, such as allowing high-order temporal integration and providing a pathway to sensitivity analyses and optimization. Lastly, we overview algorithm extensions under development motivated by current SciDAC applications

  3. Multi-scale connectivity and graph theory highlight critical areas for conservation under climate change

    Science.gov (United States)

    Dilts, Thomas E.; Weisberg, Peter J.; Leitner, Phillip; Matocq, Marjorie D.; Inman, Richard D.; Nussear, Ken E.; Esque, Todd C.

    2016-01-01

    Conservation planning and biodiversity management require information on landscape connectivity across a range of spatial scales from individual home ranges to large regions. Reduction in landscape connectivity due to changes in land-use or development is expected to act synergistically with alterations to habitat mosaic configuration arising from climate change. We illustrate a multi-scale connectivity framework to aid habitat conservation prioritization in the context of changing land use and climate. Our approach, which builds upon the strengths of multiple landscape connectivity methods including graph theory, circuit theory and least-cost path analysis, is here applied to the conservation planning requirements of the Mohave ground squirrel. The distribution of this California threatened species, as for numerous other desert species, overlaps with the proposed placement of several utility-scale renewable energy developments in the American Southwest. Our approach uses information derived at three spatial scales to forecast potential changes in habitat connectivity under various scenarios of energy development and climate change. By disentangling the potential effects of habitat loss and fragmentation across multiple scales, we identify priority conservation areas for both core habitat and critical corridor or stepping stone habitats. This approach is a first step toward applying graph theory to analyze habitat connectivity for species with continuously-distributed habitat, and should be applicable across a broad range of taxa.
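    Least-cost path analysis, one of the connectivity methods the framework combines, reduces to a shortest-path computation over a landscape resistance surface. A minimal sketch using Dijkstra's algorithm on a toy grid (the resistance values are hypothetical, not derived from any habitat model):

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra over a 2-D resistance grid: entering cell (r, c) adds
    cost[r][c]; returns the minimal accumulated cost from start to goal."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")
```

    On a grid with a high-resistance barrier, the optimal corridor routes around it rather than through it, which is the behaviour connectivity analyses exploit to identify corridor and stepping-stone habitat.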

  4. Knowledge Guided Disambiguation for Large-Scale Scene Classification With Multi-Resolution CNNs

    Science.gov (United States)

    Wang, Limin; Guo, Sheng; Huang, Weilin; Xiong, Yuanjun; Qiao, Yu

    2017-04-01

    Convolutional Neural Networks (CNNs) have made remarkable progress on scene recognition, partially due to recent large-scale scene datasets such as the Places and Places2. Scene categories are often defined by multi-level information, including local objects, global layout, and background environment, thus leading to large intra-class variations. In addition, with the increasing number of scene categories, label ambiguity has become another crucial issue in large-scale classification. This paper focuses on large-scale scene recognition and makes two major contributions to tackle these issues. First, we propose a multi-resolution CNN architecture that captures visual content and structure at multiple levels. The multi-resolution CNNs are composed of coarse resolution CNNs and fine resolution CNNs, which are complementary to each other. Second, we design two knowledge guided disambiguation techniques to deal with the problem of label ambiguity. (i) We exploit the knowledge from the confusion matrix computed on validation data to merge ambiguous classes into a super category. (ii) We utilize the knowledge of extra networks to produce a soft label for each image. Then the super categories or soft labels are employed to guide CNN training on the Places2. We conduct extensive experiments on three large-scale image datasets (ImageNet, Places, and Places2), demonstrating the effectiveness of our approach. Furthermore, our method takes part in two major scene recognition challenges, and achieves the second place at the Places2 challenge in ILSVRC 2015, and the first place at the LSUN challenge in CVPR 2016. Finally, we directly test the learned representations on other scene benchmarks, and obtain the new state-of-the-art results on the MIT Indoor67 (86.7%) and SUN397 (72.0%). We release the code and models at https://github.com/wanglimin/MRCNN-Scene-Recognition.
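    The first disambiguation technique — merging ambiguous classes into super categories based on the validation confusion matrix — can be sketched as a threshold-plus-union-find procedure. The threshold rule below is an illustrative assumption, not necessarily the paper's exact merging criterion.

```python
def merge_ambiguous_classes(confusion, threshold):
    """Greedy illustration: merge class pairs whose confusion rate
    (row-normalized) exceeds `threshold` into super categories."""
    n = len(confusion)
    parent = list(range(n))

    def find(i):
        # union-find root lookup with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        row = sum(confusion[i]) or 1
        for j in range(n):
            if i != j and confusion[i][j] / row > threshold:
                parent[find(i)] = find(j)  # merge the two classes

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```

    Classes that are frequently confused collapse into one super category, and training on super-category labels sidesteps the ambiguity between their fine-grained labels.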

  5. An adaptive framework to differentiate receiving water quality impacts on a multi-scale level.

    Science.gov (United States)

    Blumensaat, F; Tränckner, J; Helm, B; Kroll, S; Dirckx, G; Krebs, P

    2013-01-01

    The paradigm shift in recent years towards sustainable and coherent water resources management on a river basin scale has changed the subject of investigations to a multi-scale problem representing a great challenge for all actors participating in the management process. In this regard, planning engineers often face an inherent conflict to provide reliable decision support for complex questions with a minimum of effort. This trend inevitably increases the risk to base decisions upon uncertain and unverified conclusions. This paper proposes an adaptive framework for integral planning that combines several concepts (flow balancing, water quality monitoring, process modelling, multi-objective assessment) to systematically evaluate management strategies for water quality improvement. As key element, an S/P matrix is introduced to structure the differentiation of relevant 'pressures' in affected regions, i.e. 'spatial units', which helps in handling complexity. The framework is applied to a small, but typical, catchment in Flanders, Belgium. The application to the real-life case shows: (1) the proposed approach is adaptive, covers problems of different spatial and temporal scale, efficiently reduces complexity and finally leads to a transparent solution; and (2) water quality and emission-based performance evaluation must be done jointly as an emission-based performance improvement does not necessarily lead to an improved water quality status, and an assessment solely focusing on water quality criteria may mask non-compliance with emission-based standards. Recommendations derived from the theoretical analysis have been put into practice.

  6. Multi-scale Fully Convolutional Network for Face Detection in the Wild

    KAUST Repository

    Bai, Yancheng

    2017-08-24

    Face detection is a classical problem in computer vision. It is still a difficult task due to many nuisances that naturally occur in the wild. In this paper, we propose a multi-scale fully convolutional network for face detection. To reduce computation, the intermediate convolutional feature maps (conv) are shared by every scale model. We up-sample and down-sample the final conv map to approximate K levels of a feature pyramid, leading to a wide range of face scales that can be detected. At each feature pyramid level, a FCN is trained end-to-end to deal with faces in a small range of scale change. Because of the up-sampling, our method can detect very small faces (10×10 pixels). We test our MS-FCN detector on four public face detection datasets, including FDDB, WIDER FACE, AFW and PASCAL FACE. Extensive experiments show that it outperforms state-of-the-art methods. Also, MS-FCN runs at 23 FPS on a GPU for images of size 640×480 with no assumption on the minimum detectable face size.
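    The shape bookkeeping behind approximating K pyramid levels from one shared conv map can be sketched with nearest-neighbour resampling in numpy. A real detector would use learned or anti-aliased resampling, so this is only an illustration under that simplification; the scale factors are assumptions.

```python
import numpy as np

def resample(feat, scale):
    """Nearest-neighbour up/down-sampling of a 2-D feature map."""
    h, w = feat.shape
    nh = max(1, int(round(h * scale)))
    nw = max(1, int(round(w * scale)))
    ri = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    ci = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    return feat[np.ix_(ri, ci)]

def feature_pyramid(feat, scales=(0.5, 1.0, 2.0)):
    """Approximate K pyramid levels by resampling one shared conv map,
    so each per-scale detector sees an appropriately sized input."""
    return [resample(feat, s) for s in scales]
```

    Because all levels derive from a single backbone pass, the cost of covering a wide range of face scales stays close to that of a single-scale detector.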

  7. Modeling Multi-Level Systems

    CERN Document Server

    Iordache, Octavian

    2011-01-01

    This book is devoted to modeling of multi-level complex systems, a challenging domain for engineers, researchers and entrepreneurs, confronted with the transition from learning and adaptability to evolvability and autonomy for technologies, devices and problem solving methods. Chapter 1 introduces the multi-scale and multi-level systems and highlights their presence in different domains of science and technology. Methodologies as, random systems, non-Archimedean analysis, category theory and specific techniques as model categorification and integrative closure, are presented in chapter 2. Chapters 3 and 4 describe polystochastic models, PSM, and their developments. Categorical formulation of integrative closure offers the general PSM framework which serves as a flexible guideline for a large variety of multi-level modeling problems. Focusing on chemical engineering, pharmaceutical and environmental case studies, the chapters 5 to 8 analyze mixing, turbulent dispersion and entropy production for multi-scale sy...

  8. Multi-Scale Pattern Recognition for Image Classification and Segmentation

    NARCIS (Netherlands)

    Li, Y.

    2013-01-01

    Scale is an important parameter of images. Different objects or image structures (e.g. edges and corners) can appear at different scales and each is meaningful only over a limited range of scales. Multi-scale analysis has been widely used in image processing and computer vision, serving as the basis

  9. Quantitative Trait Loci Mapping Problem: An Extinction-Based Multi-Objective Evolutionary Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Nicholas S. Flann

    2013-09-01

    Full Text Available The Quantitative Trait Loci (QTL mapping problem aims to identify regions in the genome that are linked to phenotypic features of the developed organism that vary in degree. It is a principle step in determining targets for further genetic analysis and is key in decoding the role of specific genes that control quantitative traits within species. Applications include identifying genetic causes of disease, optimization of cross-breeding for desired traits and understanding trait diversity in populations. In this paper a new multi-objective evolutionary algorithm (MOEA method is introduced and is shown to increase the accuracy of QTL mapping identification for both independent and epistatic loci interactions. The MOEA method optimizes over the space of possible partial least squares (PLS regression QTL models and considers the conflicting objectives of model simplicity versus model accuracy. By optimizing for minimal model complexity, MOEA has the advantage of solving the over-fitting problem of conventional PLS models. The effectiveness of the method is confirmed by comparing the new method with Bayesian Interval Mapping approaches over a series of test cases where the optimal solutions are known. This approach can be applied to many problems that arise in analysis of genomic data sets where the number of features far exceeds the number of observations and where features can be highly correlated.

  10. State-of-the-Art Report on Multi-scale Modelling of Nuclear Fuels

    International Nuclear Information System (INIS)

    Bartel, T.J.; Dingreville, R.; Littlewood, D.; Tikare, V.; Bertolus, M.; Blanc, V.; Bouineau, V.; Carlot, G.; Desgranges, C.; Dorado, B.; Dumas, J.C.; Freyss, M.; Garcia, P.; Gatt, J.M.; Gueneau, C.; Julien, J.; Maillard, S.; Martin, G.; Masson, R.; Michel, B.; Piron, J.P.; Sabathier, C.; Skorek, R.; Toffolon, C.; Valot, C.; Van Brutzel, L.; Besmann, Theodore M.; Chernatynskiy, A.; Clarno, K.; Gorti, S.B.; Radhakrishnan, B.; Devanathan, R.; Dumont, M.; Maugis, P.; El-Azab, A.; Iglesias, F.C.; Lewis, B.J.; Krack, M.; Yun, Y.; Kurata, M.; Kurosaki, K.; Largenton, R.; Lebensohn, R.A.; Malerba, L.; Oh, J.Y.; Phillpot, S.R.; Tulenko, J. S.; Rachid, J.; Stan, M.; Sundman, B.; Tonks, M.R.; Williamson, R.; Van Uffelen, P.; Welland, M.J.; Valot, Carole; Stan, Marius; Massara, Simone; Tarsi, Reka

    2015-10-01

    The Nuclear Science Committee (NSC) of the Nuclear Energy Agency (NEA) has undertaken an ambitious programme to document state-of-the-art of modelling for nuclear fuels and structural materials. The project is being performed under the Working Party on Multi-Scale Modelling of Fuels and Structural Material for Nuclear Systems (WPMM), which has been established to assess the scientific and engineering aspects of fuels and structural materials, describing multi-scale models and simulations as validated predictive tools for the design of nuclear systems, fuel fabrication and performance. The WPMM's objective is to promote the exchange of information on models and simulations of nuclear materials, theoretical and computational methods, experimental validation and related topics. It also provides member countries with up-to-date information, shared data, models, and expertise. The goal is also to assess needs for improvement and address them by initiating joint efforts. The WPMM reviews and evaluates multi-scale modelling and simulation techniques currently employed in the selection of materials used in nuclear systems. It serves to provide advice to the nuclear community on the developments needed to meet the requirements of modelling for the design of different nuclear systems. The original WPMM mandate had three components (Figure 1), with the first component currently completed, delivering a report on the state-of-the-art of modelling of structural materials. The work on modelling was performed by three expert groups, one each on Multi-Scale Modelling Methods (M3), Multi-Scale Modelling of Fuels (M2F) and Structural Materials Modelling (SMM). WPMM is now composed of three expert groups and two task forces providing contributions on multi-scale methods, modelling of fuels and modelling of structural materials. This structure will be retained, with the addition of task forces as new topics are developed. The mandate of the Expert Group on Multi-Scale Modelling of

  11. Multi-function nuclear weight scale system

    International Nuclear Information System (INIS)

    Zheng Mingquan; Sun Jinhua; Jia Changchun; Wang Mingqian; Tang Ke

    1998-01-01

    The author introduces the hardware and software design of a multi-function nuclear weight scale system, based on an RS485-compliant communication protocol between a master (a 386 industrial control computer) and a slave (an 8098 single-chip microcontroller), and describes its main functions.

  12. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Science.gov (United States)

    Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia

    2016-06-01

    Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. It then ascertains candidate sets through merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented, and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  13. Multi-scale and multi-domain computational astrophysics.

    Science.gov (United States)

    van Elteren, Arjen; Pelupessy, Inti; Zwart, Simon Portegies

    2014-08-06

    Astronomical phenomena are governed by processes on all spatial and temporal scales, ranging from days to the age of the Universe (13.8 Gyr) as well as from kilometre size up to the size of the Universe. This enormous range in scales is contrived, but as long as there is a physical connection between the smallest and largest scales it is important to be able to resolve them all, and for the study of many astronomical phenomena this governance is present. Although covering all these scales is a challenge for numerical modellers, the most challenging aspect is the equally broad and complex range in physics, and the way in which these processes propagate through all scales. In our recent effort to cover all scales and all relevant physical processes on these scales, we have designed the Astrophysics Multipurpose Software Environment (AMUSE). AMUSE is a Python-based framework with production quality community codes and provides a specialized environment to connect this plethora of solvers to a homogeneous problem-solving environment. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  14. Variational Multi-Scale method with spectral approximation of the sub-scales.

    KAUST Repository

    Dia, Ben Mansour

    2015-01-07

    A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated basis of eigenfunctions which are orthonormal in weighted L2 spaces. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes.

  15. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    Directory of Open Access Journals (Sweden)

    H. Kreibich

    2016-05-01

    Full Text Available Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also reveal additional uncertainty, even more in upscaling procedures for meso-scale applications, where the parameters need to be estimated on a regional area-wide basis. To gain more knowledge about challenges associated with the up-scaling of multi-variable flood loss models the following approach is applied: Single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities, which were affected during the 2002 flood by the River Mulde in Saxony, Germany, by comparison to official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale case study based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling also on the meso-scale. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models like BT-FLEMO, used in this study, which inherently provide uncertainty information, is the way forward.

  16. The family mass hierarchy problem in bosonic technicolor

    International Nuclear Information System (INIS)

    Kagan, A.; Samuel, S.

    1990-01-01

    We use a multiple Higgs system to analyze the family mass hierarchy problem in bosonic technicolor. Dependence on a wide range of Yukawa couplings, λ, for quark and lepton mass generation is greatly reduced, i.e., λ ≅ 0.1 to 1. Third and second generation masses are produced at tree-level, the latter via a see-saw mechanism. We use radiative corrections as a source for many mixing angles and first generation masses. A hierarchy of family masses with small off-diagonal Kobayashi-Maskawa entries naturally arises. A higher scale of 1-10 TeV for Higgs masses and supersymmetry breaking is needed to alleviate difficulties with flavor-changing effects. Such a large scale is a feature of bosonic technicolor and no fine-tuning is required to obtain electroweak breaking at ≅ 100 GeV. Bosonic technicolor is therefore a natural framework for multi-Higgs systems. (orig.)

  17. Biointerface dynamics--Multi scale modeling considerations.

    Science.gov (United States)

    Pajic-Lijakovic, Ivana; Levic, Steva; Nedovic, Viktor; Bugarski, Branko

    2015-08-01

    The irreversible nature of matrix structural changes around immobilized cell aggregates caused by cell expansion is considered within Ca-alginate microbeads. It is related to various effects: (1) cell-bulk surface effects (cell-polymer mechanical interactions) and cell surface-polymer surface effects (cell-polymer electrostatic interactions) at the bio-interface, (2) polymer-bulk volume effects (polymer-polymer mechanical and electrostatic interactions) within the perturbed boundary layers around the cell aggregates, (3) cumulative surface and volume effects within parts of the microbead, and (4) macroscopic effects within the microbead as a whole, based on multi-scale modeling approaches. All modeling levels are discussed at two time scales, i.e. a long time scale (cell growth time) and a short time scale (cell rearrangement time). Matrix structural changes result in resistance stress generation, which has a feedback impact on: (1) single and collective cell migrations, (2) cell deformation and orientation, (3) the decrease of cell-to-cell separation distances, and (4) cell growth. Herein, an attempt is made to discuss and connect the various multi-scale modeling approaches, on a range of time and space scales, that have been proposed in the literature in order to shed further light on this complex cause-consequence phenomenon, which induces the anomalous nature of energy dissipation during the structural changes of cell aggregates and matrix, quantified by the damping coefficients (the orders of the fractional derivatives). Deeper insight into the partial disintegration of the matrix within the boundary layers is useful for understanding and minimizing polymer matrix resistance stress generation within the interface and, on that basis, optimizing cell growth. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Multi-scale simulation of droplet-droplet interactions and coalescence

    CSIR Research Space (South Africa)

    Musehane, Ndivhuwo M

    2016-10-01

    Full Text Available Conference on Computational and Applied Mechanics, Potchefstroom, 3–5 October 2016. Multi-scale simulation of droplet-droplet interactions and coalescence. Ndivhuwo M. Musehane, Oliver F. Oxtoby and Daya B. Reddy. 1. Aeronautic Systems, Council... topology changes that result when droplets interact. This work endeavours to eliminate the need to use empirical correlations based on phenomenological models by developing a multi-scale model that predicts the outcome of a collision between droplets from...

  19. Addressing the multi-scale lapsus of landscape : multi-scale landscape process modelling to support sustainable land use : a case study for the Lower Guadalhorce valley South Spain

    NARCIS (Netherlands)

    Schoorl, J.M.

    2002-01-01

    "Addressing the Multi-scale Lapsus of Landscape" with the sub-title "Multi-scale landscape process modelling to support sustainable land use: A case study for the Lower Guadalhorce valley South Spain" focuses on the role of

  20. Evaluation of convergence behavior of metamodeling techniques for bridging scales in multi-scale multimaterial simulation

    International Nuclear Information System (INIS)

    Sen, Oishik; Davis, Sean; Jacobs, Gustaaf; Udaykumar, H.S.

    2015-01-01

    The effectiveness of several metamodeling techniques, viz. the Polynomial Stochastic Collocation method, the Adaptive Stochastic Collocation method, a Radial Basis Function Neural Network, a Kriging method and a Dynamic Kriging (DKG) method, is evaluated. This is done with the express purpose of using metamodels to bridge scales between micro- and macro-scale models in a multi-scale multimaterial simulation. The rate of convergence of the error when used to reconstruct hypersurfaces of known functions is studied. For a sufficiently large number of training points, Stochastic Collocation methods generally converge faster than the other metamodeling techniques, while the DKG method converges faster when the number of input points is less than 100 in a two-dimensional parameter space. Because the input points correspond to computationally expensive micro/meso-scale computations, the DKG is favored for bridging scales in a multi-scale solver

  1. A rate-dependent multi-scale crack model for concrete

    NARCIS (Netherlands)

    Karamnejad, A.; Nguyen, V.P.; Sluys, L.J.

    2013-01-01

    A multi-scale numerical approach for modeling cracking in heterogeneous quasi-brittle materials under dynamic loading is presented. In the model, a discontinuous crack model is used at macro-scale to simulate fracture and a gradient-enhanced damage model has been used at meso-scale to simulate

  2. Multiple utility constrained multi-objective programs using Bayesian theory

    Science.gov (United States)

    Abbasian, Pooneh; Mahdavi-Amiri, Nezam; Fazlollahtabar, Hamed

    2018-03-01

    A utility function is an important tool for representing a decision maker's (DM's) preference. We adjoin utility functions to multi-objective optimization problems. In current studies, usually one utility function is used for each objective function. Situations may arise, however, in which a single goal has multiple utility functions. Here, we consider a constrained multi-objective problem with each objective having multiple utility functions. We induce the probability of the utilities for each objective function using Bayesian theory. Illustrative examples considering dependence and independence of variables are worked through to demonstrate the usefulness of the proposed model.

  3. Development of multi-dimensional body image scale for malaysian female adolescents.

    Science.gov (United States)

    Chin, Yit Siew; Taib, Mohd Nasir Mohd; Shariff, Zalilah Mohd; Khor, Geok Lin

    2008-01-01

    The present study was conducted to develop a Multi-dimensional Body Image Scale for Malaysian female adolescents. Data were collected from 328 female adolescents at a secondary school in the Kuantan district, state of Pahang, Malaysia, by using a self-administered questionnaire and anthropometric measurements. The self-administered questionnaire comprised multiple measures of body image, the Eating Attitude Test (EAT-26; Garner & Garfinkel, 1979) and the Rosenberg Self-esteem Inventory (Rosenberg, 1965). The 152 items from the selected multiple measures of body image were examined through factor analysis and for internal consistency. Correlations between the Multi-dimensional Body Image Scale and body mass index (BMI), risk of eating disorders and self-esteem were assessed for construct validity. A seven-factor model of a 62-item Multi-dimensional Body Image Scale for Malaysian female adolescents with construct validity and good internal consistency was developed. The scale encompasses 1) preoccupation with thinness and dieting behavior, 2) appearance and body satisfaction, 3) body importance, 4) muscle increasing behavior, 5) extreme dieting behavior, 6) appearance importance, and 7) perception of size and shape dimensions. In addition, a multi-dimensional body image composite score was proposed to screen for negative body image risk in female adolescents. The results showed that body image was correlated with BMI, risk of eating disorders and self-esteem in female adolescents. In short, the present study supports a multi-dimensional concept of body image and provides new insight into its multi-dimensionality in Malaysian female adolescents, with preliminary validity and reliability of the scale. The Multi-dimensional Body Image Scale can be used in future intervention programs to identify female adolescents who are potentially at risk of developing body image disturbance.

  4. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Directory of Open Access Journals (Sweden)

    H. Yue

    2016-06-01

    Full Text Available Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on Attributed Relational Graphs (ARGs) is proposed. The algorithm first divides two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. It then ascertains candidate sets through merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented, and the results indicate that the proposed algorithm is capable of handling sophisticated cases.
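    The similarity comparison at the heart of such an ARG-based matcher can be sketched generically: score a candidate node correspondence by combining node-attribute agreement with edge (relation) agreement. The attribute vectors, the exponential similarity kernel and the weighting below are illustrative assumptions, not the paper's actual measure:

    ```python
    import numpy as np

    def arg_similarity(attrs_a, adj_a, attrs_b, adj_b, mapping, w_node=0.5):
        """Similarity of two attributed relational graphs under a given node
        correspondence (hypothetical sketch, not the authors' formula).

        attrs_*: (n, d) node attribute vectors; adj_*: (n, n) relation weights;
        mapping[i] = index in graph B matched to node i of graph A.
        """
        m = np.asarray(mapping)
        # Node term: attribute distance mapped to (0, 1] via exp(-distance)
        node_sim = np.exp(-np.linalg.norm(attrs_a - attrs_b[m], axis=1)).mean()
        # Edge term: agreement of pairwise relations under the correspondence
        edge_sim = np.exp(-np.abs(adj_a - adj_b[np.ix_(m, m)])).mean()
        return w_node * node_sim + (1 - w_node) * edge_sim
    ```

    An iterative matcher would evaluate this score over candidate correspondences in each block and keep the best-scoring pairs.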

  5. A parallel algorithm for solving linear equations arising from one-dimensional network problems

    International Nuclear Information System (INIS)

    Mesina, G.L.

    1991-01-01

    One-dimensional (1-D) network problems, such as those arising from 1-D fluid simulations and electrical circuitry, produce systems of sparse linear equations which are nearly tridiagonal and contain a few non-zero entries outside the tridiagonal. Most direct solution techniques for such problems either do not take advantage of the special structure of the matrix or do not fully utilize parallel computer architectures. We describe a new parallel direct linear equation solution algorithm, called TRBR, which is especially designed to take advantage of this structure on MIMD shared memory machines. The new method belongs to a family of methods which split the coefficient matrix into the sum of a tridiagonal matrix T and a matrix R comprising the remaining coefficients. Efficient tridiagonal methods are used to algebraically simplify the linear system. A smaller auxiliary subsystem is created and solved, and its solution is used to calculate the solution of the original system. The newly devised BR method solves the subsystem. The serial and parallel operation counts are given for the new method and related earlier methods. TRBR is shown to have the smallest operation count in this class of direct methods. Numerical results are given. Although the algorithm is designed for one-dimensional networks, it has been applied successfully to three-dimensional problems as well. 20 refs., 2 figs., 4 tabs
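    The splitting idea described in the abstract — solve with the tridiagonal part T, then correct through a smaller auxiliary subsystem built from the remaining coefficients R — can be sketched with a Thomas solve plus a Woodbury-style correction. The low-rank factorization R = U Vᵀ and all names here are assumptions for illustration, not the TRBR algorithm itself:

    ```python
    import numpy as np

    def thomas(a, b, c, d):
        """Solve a tridiagonal system. a: sub-diagonal (n-1), b: diagonal (n),
        c: super-diagonal (n-1), d: right-hand side (n)."""
        n = len(b)
        cp = np.zeros(n - 1)
        dp = np.zeros(n)
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):
            m = b[i] - a[i - 1] * cp[i - 1]
            if i < n - 1:
                cp[i] = c[i] / m
            dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
        x = np.zeros(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    def solve_near_tridiagonal(T_bands, U, V, d):
        """Solve (T + U V^T) x = d via the Woodbury identity: a few tridiagonal
        solves plus a small k-by-k auxiliary subsystem."""
        a, b, c = T_bands
        # Z = T^{-1} U, one tridiagonal solve per column of U
        Z = np.column_stack([thomas(a, b, c, U[:, j]) for j in range(U.shape[1])])
        y = thomas(a, b, c, d)                      # y = T^{-1} d
        small = np.eye(U.shape[1]) + V.T @ Z        # the small auxiliary system
        return y - Z @ np.linalg.solve(small, V.T @ y)
    ```

    The payoff is that the dense solve is only k-by-k, where k is the number of off-tridiagonal couplings, while everything else is O(n) tridiagonal work.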

  6. Classification of high-resolution remote sensing images based on multi-scale superposition

    Science.gov (United States)

    Wang, Jinliang; Gao, Wenjie; Liu, Guangjie

    2017-07-01

    Landscape structures and processes show different characteristics at different scales. In the study of specific target landmarks, the most appropriate scale for the images can be attained by scale conversion, which improves the accuracy and efficiency of feature identification and classification. In this paper, the authors carried out experiments on multi-scale classification, taking the Shangri-la area in the north-western Yunnan province as the research area and images from the SPOT5 HRG and GF-1 satellites as data sources. Firstly, the authors upscaled the two images by cubic convolution and calculated, by means of variation functions, the optimal scale for the different ground objects shown in the images. Then the authors conducted multi-scale superposition classification on them by Maximum Likelihood and evaluated the classification accuracy. The results indicate that: (1) for most ground objects, the optimal scale appears at a larger scale rather than the original one. To be specific, water has the largest optimal scale, i.e. around 25-30 m; farmland, grassland, brushwood, roads, settlement places and woodland follow with 20-24 m. The optimal scale for shadows and flood land is basically the same as the original one, i.e. 8 m and 10 m respectively. (2) Regarding the classification of the multi-scale superposed images, the overall accuracy of the ones from SPOT5 HRG and GF-1 is 12.84% and 14.76% higher than that of the original multi-spectral images, respectively, and the Kappa coefficients are 0.1306 and 0.1419 higher, respectively. Hence, the multi-scale superposition classification applied in the research area can enhance the classification accuracy of remote sensing images.
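    The Maximum Likelihood decision rule named in the abstract can be sketched generically: each pixel is assigned to the class whose Gaussian model gives the highest log-likelihood. The class statistics and inputs below are hypothetical, not the paper's training data:

    ```python
    import numpy as np

    def ml_classify(pixels, class_stats):
        """Gaussian maximum-likelihood classification (generic sketch).

        pixels: (n, bands) array; class_stats: list of (mean, covariance) pairs.
        Returns the per-pixel index of the class with the highest log-likelihood.
        """
        scores = []
        for mean, cov in class_stats:
            inv = np.linalg.inv(cov)
            _, logdet = np.linalg.slogdet(cov)
            d = pixels - mean
            # Squared Mahalanobis distance of every pixel to this class mean
            maha = np.einsum('ij,jk,ik->i', d, inv, d)
            # Constant -k/2 * log(2*pi) omitted: identical for every class
            scores.append(-0.5 * (logdet + maha))
        return np.argmax(np.stack(scores, axis=1), axis=1)
    ```

    In a multi-scale superposition workflow, this rule would be applied to the stack of features assembled from the upscaled images.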

  7. Hierarchical multi-scale classification of nearshore aquatic habitats of the Great Lakes: Western Lake Erie

    Science.gov (United States)

    McKenna, J.E.; Castiglione, C.

    2010-01-01

    Classification is a valuable conservation tool for examining natural resource status and problems and is being developed for coastal aquatic habitats. We present an objective, multi-scale hydrospatial framework for nearshore areas of the Great Lakes. The hydrospatial framework consists of spatial units at eight hierarchical scales from the North American Continent to the individual 270-m spatial cell. Characterization of spatial units based on fish abundance and diversity provides a fish-guided classification of aquatic areas at each spatial scale and demonstrates how classifications may be generated from that framework. Those classification units then provide information about habitat, as well as biotic conditions, which can be compared, contrasted, and hierarchically related spatially. Examples within several representative coastal or open water zones of the Western Lake Erie pilot area highlight potential application of this classification system to management problems. This classification system can assist natural resource managers with planning and establishing priorities for aquatic habitat protection, developing rehabilitation strategies, or identifying special management actions.

  8. Multi-scale graphene patterns on arbitrary substrates via laser-assisted transfer-printing process

    KAUST Repository

    Park, J. B.

    2012-01-01

    A laser-assisted transfer-printing process is developed for multi-scale graphene patterns on arbitrary substrates using femtosecond laser scanning on a graphene/metal substrate and transfer techniques, without using multi-step patterning processes. The short-pulse nature of a femtosecond laser on a graphene/copper sheet enables fabrication of high-resolution graphene patterns. Thanks to its scalable, fast, direct-writing, multi-scale yet high-resolution, and reliable process characteristics, it can be an alternative pathway to multi-step photolithography methods for printing arbitrary graphene patterns on desired substrates. We also demonstrate transparent strain devices without expensive photomasks or multi-step patterning processes. © 2012 American Institute of Physics.

  9. Error due to unresolved scales in estimation problems for atmospheric data assimilation

    Science.gov (United States)

    Janjic, Tijana

    The error arising due to unresolved scales in data assimilation procedures is examined. The problem of estimating the projection of the state of a passive scalar undergoing advection at a sequence of times is considered. The projection belongs to a finite-dimensional function space and is defined on the continuum. Using the continuum projection of the state of a passive scalar, a mathematical definition is obtained for the error arising due to the presence, in the continuum system, of scales unresolved by the discrete dynamical model. This error affects the estimation procedure through point observations that include the unresolved scales. In this work, two approximate methods for taking into account the error due to unresolved scales and the resulting correlations are developed and employed in the estimation procedure. The resulting formulas resemble the Schmidt-Kalman filter and the usual discrete Kalman filter, respectively. For this reason, the newly developed filters are called the Schmidt-Kalman filter and the traditional filter. In order to test the assimilation methods, a two-dimensional advection model with nonstationary spectrum was developed for passive scalar transport in the atmosphere. An analytical solution on the sphere was found depicting the model dynamics evolution. Using this analytical solution the model error is avoided, and the error due to unresolved scales is the only error left in the estimation problem. It is demonstrated that the traditional and the Schmidt-Kalman filter work well provided the exact covariance function of the unresolved scales is known. However, this requirement is not satisfied in practice, and the covariance function must be modeled. The Schmidt-Kalman filter cannot be computed in practice without further approximations. Therefore, the traditional filter is better suited for practical use. Also, the traditional filter does not require modeling of the full covariance function of the unresolved scales, but only
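    A Schmidt-Kalman (consider-type) measurement update of the kind the abstract refers to can be sketched in a few lines: the unresolved component p enters the measurement but is never estimated; only its covariance inflates the innovation and updates the cross term. The block structure and names below are illustrative assumptions, not the thesis' actual formulation:

    ```python
    import numpy as np

    def schmidt_kalman_update(x, Pxx, Pxp, Ppp, z, H, G, R):
        """One Schmidt-Kalman ('consider') update for z = H x + G p + v.

        x is estimated; the nuisance component p is only 'considered':
        its covariance Ppp and the cross-covariance Pxp shape the gain,
        but p itself is never corrected.
        """
        # Innovation covariance includes the unresolved-scale contribution
        S = H @ Pxx @ H.T + H @ Pxp @ G.T + G @ Pxp.T @ H.T + G @ Ppp @ G.T + R
        K = (Pxx @ H.T + Pxp @ G.T) @ np.linalg.inv(S)
        x_new = x + K @ (z - H @ x)
        Pxx_new = Pxx - K @ (H @ Pxx + G @ Pxp.T)
        Pxp_new = Pxp - K @ (H @ Pxp + G @ Ppp)
        return x_new, Pxx_new, Pxp_new, Ppp  # Ppp is left untouched
    ```

    Dropping the cross term and keeping only the inflated S recovers a filter of the "traditional" kind mentioned in the abstract.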

  10. A Multi-scale Modeling System with Unified Physics to Study Precipitation Processes

    Science.gov (United States)

    Tao, W. K.

    2017-12-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction (NWP) models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional scale model (the NASA unified Weather Research and Forecasting model, WRF), and (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF). The same microphysical processes, long- and short-wave radiative transfer and land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study precipitation processes and their sensitivity to model resolution and microphysics schemes will be presented. The use of the multi-satellite simulator to improve the representation of precipitation processes will also be discussed.

  11. Cloud Detection by Fusing Multi-Scale Convolutional Features

    Science.gov (United States)

    Li, Zhiwei; Shen, Huanfeng; Wei, Yancong; Cheng, Qing; Yuan, Qiangqiang

    2018-04-01

    Cloud detection is an important pre-processing step for accurate application of optical satellite imagery. Recent studies indicate that deep learning achieves the best performance in image segmentation tasks. Aiming at boosting the accuracy of cloud detection for multispectral imagery, especially for imagery that contains only visible and near-infrared bands, in this paper we propose a deep-learning-based cloud detection method termed MSCN (multi-scale cloud net), which segments cloud by fusing multi-scale convolutional features. MSCN was trained on a global cloud cover validation collection and was tested on more than ten types of optical images with different resolutions. Experiment results show that MSCN has obvious advantages in accuracy over the traditional multi-feature combined cloud detection method, especially in snow and other areas covered by bright non-cloud objects. Besides, MSCN produced more detailed cloud masks than the compared deep cloud detection convolution network. The effectiveness of MSCN makes it promising for practical application to multiple kinds of optical imagery.

  12. Multi-scale image segmentation method with visual saliency constraints and its application

    Science.gov (United States)

    Chen, Yan; Yu, Jie; Sun, Kaimin

    2018-03-01

    Object-based image analysis methods have many advantages over pixel-based methods, making them one of the current research hotspots. Obtaining image objects through multi-scale image segmentation is essential for carrying out object-based image analysis. The current popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to realize and yields accurate object boundaries. However, the macro statistical characteristics of image areas are difficult to take into account, and fragmented (over-segmented) results are hard to avoid. In addition, when it comes to information extraction, target recognition and other applications, image targets are not equally important: some specific targets or target groups with particular features deserve more attention than others. To avoid over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and a typical feature extraction method are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weight, where each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but are locally different are more likely to be assigned to the same object. In addition, due to the constraint of the visual saliency model, the balance between local and macroscopic characteristics can be well controlled during the segmentation of different objects. These controls improve the completeness of visually salient areas in the segmentation results while diluting the controlling effect for non-salient background areas. Experiments show that this method works

  13. Multi-Agent System Supporting Automated Large-Scale Photometric Computations

    Directory of Open Access Journals (Sweden)

    Adam Sȩdziwy

    2016-02-01

    Full Text Available The technologies related to green energy, smart cities and similar areas, dynamically developed in recent years, frequently face problems of a computational nature rather than a technological one. An example is the ability to accurately predict the weather conditions for PV farms or wind turbines. Another group of issues is related to the complexity of the computations required to obtain an optimal setup of a solution being designed. In this article, we present a case representing the latter group of problems, namely designing large-scale power-saving lighting installations. The term “large-scale” refers to an entire city area, containing tens of thousands of luminaires. Although a simple power reduction for a single street, giving limited savings, is relatively easy, it becomes infeasible for tasks covering thousands of luminaires described by precise coordinates (instead of simplified layouts). To overcome this critical issue, we propose introducing a formal representation of the computing problem and applying a multi-agent system to perform the design-related computations in parallel. An important measure introduced in the article to indicate optimization progress is entropy. It also allows for terminating the optimization when the solution is satisfactory. The article contains the results of real-life calculations made with the help of the presented approach.
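    Entropy as a progress indicator can be illustrated with a plain Shannon entropy over the distribution of candidate solutions held by the agents: high entropy means the search is still spread out, near-zero entropy means the agents have converged and optimization can terminate. The counts input and the base-2 convention are assumptions for illustration, not the authors' exact definition:

    ```python
    import numpy as np

    def solution_entropy(counts):
        """Shannon entropy (bits) of a distribution of candidate solutions.

        counts: occurrences of each distinct candidate across the agents.
        0.0 means all agents agree; log2(k) means a uniform spread over k.
        """
        p = np.asarray(counts, dtype=float)
        p = p / p.sum()
        p = p[p > 0]                      # 0 * log(0) contributes nothing
        return float(-(p * np.log2(p)).sum())
    ```

    A termination rule might then be `solution_entropy(counts) < eps` for some small threshold `eps`.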

  14. Multi-scale habitat selection modeling: A review and outlook

    Science.gov (United States)

    Kevin McGarigal; Ho Yi Wan; Kathy A. Zeller; Brad C. Timm; Samuel A. Cushman

    2016-01-01

    Scale is the lens that focuses ecological relationships. Organisms select habitat at multiple hierarchical levels and at different spatial and/or temporal scales within each level. Failure to properly address scale dependence can result in incorrect inferences in multi-scale habitat selection modeling studies.

  15. The dynamic multi-period vehicle routing problem

    DEFF Research Database (Denmark)

    Wen, Min; Cordeau, Jean-Francois; Laporte, Gilbert

    2010-01-01

    are to minimize total travel costs and customer waiting, and to balance the daily workload over the planning horizon. This problem originates from a large distributor operating in Sweden. It is modeled as a mixed integer linear program, and solved by means of a three-phase heuristic that works over a rolling...... planning horizon. The multi-objective aspect of the problem is handled through a scalar technique approach. Computational results show that the proposed approach can yield high quality solutions within reasonable running times....

  16. Algorithms and ordering heuristics for distributed constraint satisfaction problems

    CERN Document Server

    Wahbi, Mohamed

    2013-01-01

    DisCSP (Distributed Constraint Satisfaction Problem) is a general framework for solving distributed problems arising in Distributed Artificial Intelligence. A wide variety of problems in artificial intelligence are solved using the constraint satisfaction problem paradigm. However, there are several applications in multi-agent coordination that are of a distributed nature. In this type of application, the knowledge about the problem, that is, variables and constraints, may be logically or geographically distributed among physical distributed agents. This distribution is mainly due to p

  17. Improved genome-scale multi-target virtual screening via a novel collaborative filtering approach to cold-start problem.

    Science.gov (United States)

    Lim, Hansaim; Gray, Paul; Xie, Lei; Poleksic, Aleksandar

    2016-12-13

    The conventional one-drug-one-gene approach has been of limited success in modern drug discovery. Polypharmacology, which focuses on searching for multi-targeted drugs that perturb disease-causing networks instead of designing selective ligands to target individual proteins, has emerged as a new drug discovery paradigm. Although many methods for single-target virtual screening have been developed to improve the efficiency of drug discovery, few of these algorithms are designed for polypharmacology. Here, we present a novel theoretical framework and a corresponding algorithm for genome-scale multi-target virtual screening based on the one-class collaborative filtering technique. Our method overcomes the sparseness of the protein-chemical interaction data by means of interaction matrix weighting and dual regularization from both chemicals and proteins. While the statistical foundation behind our method is general enough to encompass genome-wide drug off-target prediction, the program is specifically tailored to find protein targets for new chemicals with little to no available interaction data. We extensively evaluate our method using a number of the most widely accepted gene-specific and cross-gene-family benchmarks and demonstrate that our method outperforms other state-of-the-art algorithms for predicting the interaction of new chemicals with multiple proteins. Thus, the proposed algorithm may provide a powerful tool for multi-target drug design.
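    The interaction-matrix weighting idea — observed protein-chemical interactions weighted fully, the many unobserved pairs down-weighted as weak negatives — can be sketched with a plain gradient-descent matrix factorization. This is a generic one-class weighting sketch with made-up hyperparameters, not the authors' dual-regularized algorithm:

    ```python
    import numpy as np

    def weighted_one_class_mf(Y, rank=2, weight_pos=1.0, weight_neg=0.1,
                              lam=0.05, lr=0.05, iters=2000, seed=0):
        """Weighted one-class matrix factorization, Y ~ U V^T (sketch).

        Y: binary interaction matrix (rows: chemicals, columns: proteins).
        Observed interactions (Y == 1) get full weight; unobserved zeros get
        a small weight so they act only as weak negative evidence.
        """
        rng = np.random.default_rng(seed)
        n, m = Y.shape
        U = 0.1 * rng.standard_normal((n, rank))
        V = 0.1 * rng.standard_normal((m, rank))
        W = np.where(Y > 0, weight_pos, weight_neg)
        for _ in range(iters):
            E = W * (Y - U @ V.T)              # weighted residual
            U += lr * (E @ V - lam * U)        # gradient step with Tikhonov term
            V += lr * (E.T @ U - lam * V)
        return U, V
    ```

    Ranking the reconstructed scores `U @ V.T` along a row then gives candidate targets for that chemical.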

  18. Computational Fluid Dynamics for nuclear applications: from CFD to multi-scale CMFD

    International Nuclear Information System (INIS)

    Yadigaroglu, G.

    2005-01-01

    New trends in computational methods for nuclear reactor thermal-hydraulics are discussed; traditionally, these have been based on the two-fluid model. Although CFD computations for single phase flows are commonplace, Computational Multi-Fluid Dynamics (CMFD) is still under development. One-fluid methods coupled with interface tracking techniques provide interesting opportunities and enlarge the scope of problems that can be solved. For certain problems, one may have to conduct 'cascades' of computations at increasingly finer scales to resolve all issues. The case study of condensation of steam/air mixtures injected from a downward-facing vent into a pool of water and a proposed CMFD initiative to numerically model Critical Heat Flux (CHF) illustrate such cascades. For the venting problem, a variety of tools are used: a system code for system behaviour; an interface-tracking method (Volume of Fluid, VOF) to examine the behaviour of large bubbles; direct-contact condensation can be treated either by Direct Numerical Simulation (DNS) or by analytical methods

  19. Computational Fluid Dynamics for nuclear applications: from CFD to multi-scale CMFD

    Energy Technology Data Exchange (ETDEWEB)

    Yadigaroglu, G. [Swiss Federal Institute of Technology-Zurich (ETHZ), Nuclear Engineering Laboratory, ETH-Zentrum, CLT CH-8092 Zurich (Switzerland)]. E-mail: yadi@ethz.ch

    2005-02-01

    New trends in computational methods for nuclear reactor thermal-hydraulics are discussed; traditionally, these have been based on the two-fluid model. Although CFD computations for single phase flows are commonplace, Computational Multi-Fluid Dynamics (CMFD) is still under development. One-fluid methods coupled with interface tracking techniques provide interesting opportunities and enlarge the scope of problems that can be solved. For certain problems, one may have to conduct 'cascades' of computations at increasingly finer scales to resolve all issues. The case study of condensation of steam/air mixtures injected from a downward-facing vent into a pool of water and a proposed CMFD initiative to numerically model Critical Heat Flux (CHF) illustrate such cascades. For the venting problem, a variety of tools are used: a system code for system behaviour; an interface-tracking method (Volume of Fluid, VOF) to examine the behaviour of large bubbles; direct-contact condensation can be treated either by Direct Numerical Simulation (DNS) or by analytical methods.

  20. A multi-scale problem arising in a model of avian flu virus in a seabird colony

    International Nuclear Information System (INIS)

    Clancy, C F; O'Callaghan, M J A; Kelly, T C

    2006-01-01

    Understanding the dynamics of epidemics of novel pathogens such as the H5N1 strain of avian influenza is of crucial importance to public and veterinary health as well as wildlife ecology. We model the effect of a new virus on a seabird colony where no pre-existing herd immunity exists. The seabirds in question are so-called K-strategists, i.e. they have a relatively long life expectancy and very low reproductive output. They live in isolated colonies which typically contain tens of thousands of birds. These densely populated colonies, with so many birds competing for nesting space, would seem to provide perfect conditions for the entry and spread of an infection. Yet there are relatively few reported cases of epidemics among these seabirds. We develop an SEIR model which incorporates some of the unusual features of seabird population biology and examine the effects of introducing a pathogen into the colony
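    The basic SEIR compartment structure underlying such a model can be sketched with a forward-Euler integration; all parameter values below are illustrative placeholders, not the paper's fitted values for seabird colonies:

    ```python
    import numpy as np

    def seir_epidemic(beta=0.5, sigma=0.2, gamma=0.1, n0=20000.0,
                      e0=10.0, days=300, dt=0.1):
        """Forward-Euler sketch of a basic SEIR model for a closed colony.

        S: susceptible, E: exposed (latent), I: infectious, R: recovered.
        beta: transmission rate, sigma: 1/latent period, gamma: 1/infectious
        period. Returns the (S, E, I, R) trajectory as an array.
        """
        s, e, i, r = n0 - e0, e0, 0.0, 0.0
        history = []
        for _ in range(int(days / dt)):
            new_exposed = beta * s * i / n0    # frequency-dependent transmission
            ds = -new_exposed
            de = new_exposed - sigma * e
            di = sigma * e - gamma * i
            dr = gamma * i
            s, e, i, r = s + dt * ds, e + dt * de, i + dt * di, r + dt * dr
            history.append((s, e, i, r))
        return np.array(history)
    ```

    With these placeholder rates the basic reproduction number is beta/gamma = 5, so an introduced pathogen sweeps through most of the closed colony; demographic features of K-strategists (low birth rate, long lifespan) would enter through additional birth/death terms.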

  1. Multi-Scale Multi-physics Methods Development for the Calculation of Hot-Spots in the NGNP

    International Nuclear Information System (INIS)

    Downar, Thomas; Seker, Volkan

    2013-01-01

    Radioactive gaseous fission products are released out of the fuel element at a significantly higher rate when the fuel temperature exceeds 1600°C in high-temperature gas-cooled reactors (HTGRs). Therefore, it is of paramount importance to accurately predict the peak fuel temperature during all operational and design-basis accident conditions. The current methods used to predict the peak fuel temperature in HTGRs, such as the Next-Generation Nuclear Plant (NGNP), estimate the average fuel temperature in a computational mesh modeling hundreds of fuel pebbles or a fuel assembly in a pebble-bed reactor (PBR) or prismatic block type reactor (PMR), respectively. Experiments conducted in operating HTGRs indicate considerable uncertainty in the current methods and correlations used to predict actual temperatures. The objective of this project is to improve the accuracy in the prediction of local 'hot' spots by developing multi-scale, multi-physics methods and implementing them within the framework of established codes used for NGNP analysis. The multi-scale approach which this project will implement begins with defining suitable scales for a physical and mathematical model and then deriving and applying the appropriate boundary conditions between scales. The macro scale is the greatest length that describes the entire reactor, whereas the meso scale models only a fuel block in a prismatic reactor and ten to hundreds of pebbles in a pebble bed reactor. The smallest scale is the micro scale--the level of a fuel kernel of the pebble in a PBR and fuel compact in a PMR--which needs to be resolved in order to calculate the peak temperature in a fuel kernel.

  2. Multi-Scale Multi-physics Methods Development for the Calculation of Hot-Spots in the NGNP

    Energy Technology Data Exchange (ETDEWEB)

    Downar, Thomas [Univ. of Michigan, Ann Arbor, MI (United States); Seker, Volkan [Univ. of Michigan, Ann Arbor, MI (United States)

    2013-04-30

    Radioactive gaseous fission products are released out of the fuel element at a significantly higher rate when the fuel temperature exceeds 1600°C in high-temperature gas-cooled reactors (HTGRs). Therefore, it is of paramount importance to accurately predict the peak fuel temperature during all operational and design-basis accident conditions. The current methods used to predict the peak fuel temperature in HTGRs, such as the Next-Generation Nuclear Plant (NGNP), estimate the average fuel temperature in a computational mesh modeling hundreds of fuel pebbles or a fuel assembly in a pebble-bed reactor (PBR) or prismatic block type reactor (PMR), respectively. Experiments conducted in operating HTGRs indicate considerable uncertainty in the current methods and correlations used to predict actual temperatures. The objective of this project is to improve the accuracy in the prediction of local "hot" spots by developing multi-scale, multi-physics methods and implementing them within the framework of established codes used for NGNP analysis. The multi-scale approach which this project will implement begins with defining suitable scales for a physical and mathematical model and then deriving and applying the appropriate boundary conditions between scales. The macro scale is the greatest length that describes the entire reactor, whereas the meso scale models only a fuel block in a prismatic reactor and tens to hundreds of pebbles in a pebble bed reactor. The smallest scale is the micro scale--the level of a fuel kernel of the pebble in a PBR and fuel compact in a PMR--which needs to be resolved in order to calculate the peak temperature in a fuel kernel.

  3. On the Uniqueness of Solutions of a Nonlinear Elliptic Problem Arising in the Confinement of a Plasma in a Stellarator Device

    International Nuclear Information System (INIS)

    Diaz, J. I.; Galiano, G.; Padial, J. F.

    1999-01-01

    We study the uniqueness of solutions of a semilinear elliptic problem obtained from an inverse formulation when the nonlinear terms of the equation are prescribed in a general class of real functions. The inverse problem arises in the modeling of the magnetic confinement of a plasma in a Stellarator device. The uniqueness proof relies on an L∞-estimate on the solution of an auxiliary nonlocal problem formulated in terms of the relative rearrangement of a datum with respect to the solution.

  4. A new model and simple algorithms for multi-label Mumford-Shah problems

    KAUST Repository

    Hong, Byungwoo

    2013-06-01

    In this work, we address the multi-label Mumford-Shah problem, i.e., the problem of jointly estimating a partitioning of the domain of the image and functions defined within regions of the partition. We create algorithms that are efficient, robust to undesirable local minima, and easy to implement. Our algorithms are formulated by slightly modifying the underlying statistical model from which the multi-label Mumford-Shah functional is derived. The advantage of this statistical model is that the underlying variables (the labels and the functions) are less coupled than in the original formulation, and the labels can be computed from the functions with more global updates. The resulting algorithms can be tuned to the desired level of locality of the solution: from fully global updates to more local updates. We demonstrate our algorithm on two applications: joint multi-label segmentation and denoising, and joint multi-label motion segmentation and flow estimation. We compare to the state-of-the-art in multi-label Mumford-Shah problems and show that we achieve more promising results. © 2013 IEEE.
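As a rough illustration of the alternating label/function updates described in the abstract, here is the piecewise-constant special case with spatial regularization dropped entirely; this is an assumed simplification for exposition, not the authors' algorithm.

```python
import numpy as np

def piecewise_constant_segment(img, num_labels=3, iters=20, seed=0):
    """Alternate between a global label update and a per-region function
    (mean) update -- the piecewise-constant special case of joint
    label/function estimation, with all smoothness terms omitted."""
    rng = np.random.default_rng(seed)
    means = rng.uniform(img.min(), img.max(), num_labels)
    labels = np.zeros(img.shape, dtype=int)
    for _ in range(iters):
        # label update: each pixel takes the label whose function fits it best
        labels = np.argmin((img[..., None] - means) ** 2, axis=-1)
        # function update: the best constant for a region is its mean
        for k in range(num_labels):
            if np.any(labels == k):
                means[k] = img[labels == k].mean()
    return labels, means
```

With the smoothness terms dropped, the function update reduces to a per-region mean and the label update is fully global, which is the decoupling the abstract exploits; the full method additionally estimates smooth (not constant) functions and penalizes boundary length.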

  5. The Combinatorial Multi-Mode Resource Constrained Multi-Project Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Denis Pinha

    2016-11-01

    Full Text Available This paper presents the formulation and solution of the Combinatorial Multi-Mode Resource Constrained Multi-Project Scheduling Problem. The focus of the proposed method is not on finding a single optimal solution, but on presenting multiple feasible solutions, with cost and duration information, to the project manager. The motivation for developing such an approach is due in part to practical situations where the definition of optimal changes on a regular basis. The proposed approach empowers the project manager to determine what is optimal, on a given day, under the current constraints, such as a change of priorities or a lack of skilled workers. The proposed method utilizes a simulation approach to determine feasible solutions under the current constraints. Resources can be non-consumable, consumable, or doubly constrained. The paper also presents a real-life case study dealing with the scheduling of ship repair activities.

  6. SOLVING THE MULTI TRAVELING SALESMAN PROBLEM WITH A GENETIC ALGORITHM

    Directory of Open Access Journals (Sweden)

    NI KADEK MAYULIANA

    2017-01-01

    Full Text Available Genetic algorithms are a class of heuristic algorithms that can be applied to solve various computational problems. This work studies the performance of the genetic algorithm (GA) in solving the Multi Traveling Salesman Problem (multi-TSP). The GA is simulated to determine the shortest routes for 5 to 10 salesmen who travel 10 to 30 cities. The performance of the algorithm is studied based on the minimum distance and the processing time required over 10 repetitions for each cities-salesmen combination. The results showed that the minimum distance and the processing time of the GA increase consistently as the number of cities to visit increases. In addition, the number of salesmen visiting a given number of cities significantly affected the running time of the GA, but did not significantly affect the minimum distance.
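A minimal GA for the multi-TSP can be sketched as follows. The encoding (one city permutation split into contiguous tours from a shared depot), the order crossover, the swap mutation and all rates are assumptions for illustration; the abstract does not specify the paper's exact operators.

```python
import math
import random

def tour_length(route, pts, depot=0):
    """Length of one salesman's closed tour: depot -> cities -> depot."""
    path = [depot] + route + [depot]
    return sum(math.dist(pts[a], pts[b]) for a, b in zip(path, path[1:]))

def fitness(perm, pts, m):
    """Split one city permutation into m contiguous tours and sum their lengths."""
    chunk = math.ceil(len(perm) / m)
    tours = [perm[i:i + chunk] for i in range(0, len(perm), chunk)]
    return sum(tour_length(t, pts) for t in tours)

def order_crossover(p1, p2):
    """Classic OX: copy a slice from parent 1, fill the rest in parent 2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [c for c in p2 if c not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def ga_mtsp(pts, m=2, pop_size=30, gens=60, mut_rate=0.2, seed=1):
    """GA over city permutations; city 0 is the shared depot."""
    random.seed(seed)
    cities = list(range(1, len(pts)))
    pop = [random.sample(cities, len(cities)) for _ in range(pop_size)]
    tournament = lambda: min(random.sample(pop, 3), key=lambda p: fitness(p, pts, m))
    best = min(pop, key=lambda p: fitness(p, pts, m))
    for _ in range(gens):
        nxt = [best[:]]                               # elitism
        while len(nxt) < pop_size:
            child = order_crossover(tournament(), tournament())
            if random.random() < mut_rate:            # swap mutation
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
        best = min(pop, key=lambda p: fitness(p, pts, m))
    return best, fitness(best, pts, m)
```

Running this on larger instances reproduces the qualitative trend reported above: the runtime grows with both the number of cities and the number of salesmen, since every fitness evaluation touches all tours.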

  7. Data fusion of multi-scale representations for structural damage detection

    Science.gov (United States)

    Guo, Tian; Xu, Zili

    2018-01-01

    Despite extensive research into structural health monitoring (SHM) in the past decades, there are few methods that can detect multiple slight damage sites in noisy environments. Here, we introduce a new hybrid method that utilizes multi-scale space theory and a data fusion approach for multiple damage detection in beams and plates. A cascade filtering approach provides a multi-scale space for noisy mode shapes and filters the fluctuations caused by measurement noise. In multi-scale space, a series of amplification and data fusion algorithms are utilized to search for the damage features across all possible scales. We verify the effectiveness of the method by numerical simulation using damaged beams and plates with various types of boundary conditions. Monte Carlo simulations are conducted to illustrate the effectiveness and noise immunity of the proposed method. The applicability is further validated via laboratory case studies focusing on different damage scenarios. Both results demonstrate that the proposed method has superior noise tolerance as well as damage sensitivity, without requiring knowledge of material properties or boundary conditions.
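The multi-scale idea can be sketched numerically: smooth a noisy mode shape at several scales, take the absolute curvature (second difference) at each scale, and fuse across scales. The Gaussian kernel, the scale set and simple averaging as the fusion rule are assumptions for illustration, not the paper's cascade filter or fusion algorithm.

```python
import numpy as np

def gaussian_smooth(signal, sigma):
    """One level of the scale space: Gaussian smoothing with edge padding."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(signal, radius, mode="edge")   # avoid zero-padding artefacts
    return np.convolve(padded, kernel, mode="valid")

def fused_damage_index(mode_shape, sigmas=(1, 2, 4)):
    """Average |second difference| of the smoothed mode shape across scales:
    damage-like local stiffness changes persist across scales, while
    measurement noise is filtered out at the coarser levels."""
    layers = [np.abs(np.diff(gaussian_smooth(mode_shape, s), 2)) for s in sigmas]
    return np.mean(layers, axis=0)
```

A slope discontinuity planted in an otherwise smooth mode shape shows up as the dominant interior peak of the fused index, which is the feature the damage localization step would threshold.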

  8. Numeric treatment of nonlinear second order multi-point boundary value problems using ANN, GAs and sequential quadratic programming technique

    Directory of Open Access Journals (Sweden)

    Zulqurnain Sabir

    2014-06-01

    Full Text Available In this paper, computational intelligence techniques are presented for solving multi-point nonlinear boundary value problems based on artificial neural networks, an evolutionary computing approach, and an active-set technique. The neural network provides a convenient method for obtaining a useful model based on the unsupervised error of the differential equations. The motivation for presenting this work comes from the aim of introducing a reliable framework that combines the powerful features of ANNs, optimized with soft computing frameworks, to cope with such challenging systems. The applicability and reliability of such methods have been monitored thoroughly for various boundary value problems arising in science, engineering and biotechnology. Comprehensive numerical experimentation has been performed to validate the accuracy, convergence, and robustness of the designed scheme. Comparative studies have also been made with available standard solutions to analyze the correctness of the proposed scheme.
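The "unsupervised error" idea can be illustrated on a reduced example: a tiny tanh network embedded in a trial solution that satisfies the boundary conditions by construction, trained by minimizing the squared residual of the differential equation at collocation points. Everything here is an assumed simplification: a two-point linear BVP (y'' + y = 0, y(0) = 0, y(1) = sin 1, exact solution sin x), BFGS with random multi-start standing in for the paper's GA/SQP hybrid, and finite-difference derivatives.

```python
import numpy as np
from scipy.optimize import minimize

H = 4  # hidden neurons

def net(x, w):
    """Tiny single-hidden-layer tanh network; w packs (a, b, v)."""
    a, b, v = w[:H], w[H:2 * H], w[2 * H:]
    return np.tanh(np.outer(x, a) + b) @ v

def trial(x, w):
    """Trial solution built to satisfy y(0) = 0 and y(1) = sin(1) exactly."""
    return x * np.sin(1.0) + x * (1.0 - x) * net(x, w)

def residual_loss(w, xs, h=1e-4):
    """Unsupervised loss: mean squared residual of y'' + y = 0 at the
    collocation points, with y'' from central finite differences."""
    y = trial(xs, w)
    ypp = (trial(xs + h, w) - 2.0 * y + trial(xs - h, w)) / h**2
    return np.mean((ypp + y) ** 2)

xs = np.linspace(0.05, 0.95, 19)
best = None
for seed in range(3):   # crude multi-start, in the spirit of the evolutionary search
    w0 = np.random.default_rng(seed).normal(scale=0.5, size=3 * H)
    res = minimize(residual_loss, w0, args=(xs,), method="BFGS")
    if best is None or res.fun < best.fun:
        best = res
y_approx = trial(xs, best.x)   # should track sin(x)
```

No solution data enter the loss; only the equation residual is minimized, which is what makes the training "unsupervised" in the sense of the abstract.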

  9. A multi-scale, multi-disciplinary approach for assessing the technological, economic and environmental performance of bio-based chemicals.

    Science.gov (United States)

    Herrgård, Markus; Sukumara, Sumesh; Campodonico, Miguel; Zhuang, Kai

    2015-12-01

    In recent years, bio-based chemicals have gained interest as a renewable alternative to petrochemicals. However, there is a significant need to assess the technological, biological, economic and environmental feasibility of bio-based chemicals, particularly during the early research phase. Recently, the Multi-scale framework for Sustainable Industrial Chemicals (MuSIC) was introduced to address this issue by integrating modelling approaches at different scales ranging from cellular to ecological scales. This framework can be further extended by incorporating modelling of the petrochemical value chain and the de novo prediction of metabolic pathways connecting existing host metabolism to desirable chemical products. This multi-scale, multi-disciplinary framework for quantitative assessment of bio-based chemicals will play a vital role in supporting engineering, strategy and policy decisions as we progress towards a sustainable chemical industry. © 2015 Authors; published by Portland Press Limited.

  10. The multi-depot electric vehicle location routing problem with time windows

    Directory of Open Access Journals (Sweden)

    Juan Camilo Paz

    2018-01-01

    Full Text Available In this paper, the Multi-Depot Electric Vehicle Location Routing Problem with Time Windows (MDVLRP) is addressed. This problem is an extension of the multi-depot location routing problem in which electric vehicles are used instead of internal combustion engine vehicles. The recent development of this model is explained by the advantages of this technology, such as the reduction of carbon dioxide emissions, and the support it can provide to the design of the logistic and energy-support structure of electric vehicle fleets. There are many models that extend the classical VRP model to take electric vehicles into consideration, but the multi-depot case for location-routing models has not been worked out yet. Moreover, we consider the availability of two energy supply technologies: "plug-in" conventional charging and battery swapping stations, options in which the recharging time is a function of the amount of energy to charge and a fixed time, respectively. Three models are proposed: one for each of the technologies mentioned above, and another in which both options are taken into consideration. The models were solved for small-scale instances using C++ and CPLEX 12.5. The results show that the models can be used to design logistic and energy-support structures and to compare the performance of the different options of energy supply, as well as to measure the impact of these decisions on the overall distance traveled or other optimization objectives that could be worked on in the future.

  11. Toward a global multi-scale heliophysics observatory

    Science.gov (United States)

    Semeter, J. L.

    2017-12-01

    We live within the only known stellar-planetary system that supports life. What we learn about this system is not only relevant to human society and its expanding reach beyond Earth's surface, but also to our understanding of the origins and evolution of life in the universe. Heliophysics is focused on solar-terrestrial interactions mediated by the magnetic and plasma environment surrounding the planet. A defining feature of energy flow through this environment is interaction across physical scales. A solar disturbance aimed at Earth can excite geospace variability on scales ranging from thousands of kilometers (e.g., global convection, region 1 and 2 currents, electrojet intensifications) to tens of meters (e.g., equatorial spread-F, dispersive Alfven waves, plasma instabilities). Most "geospace observatory" concepts are focused on a single modality (e.g., HF/UHF radar, magnetometer, optical) providing a limited parameter set over a particular spatiotemporal resolution. Data assimilation methods have been developed to couple heterogeneous and distributed observations, but resolution has typically been prescribed a priori and according to physical assumptions. This paper develops a conceptual framework for the next generation multi-scale heliophysics observatory, capable of revealing and quantifying the complete spectrum of cross-scale interactions occurring globally within the geospace system. The envisioned concept leverages existing assets, enlists citizen scientists, and exploits low-cost access to the geospace environment. Examples are presented where distributed multi-scale observations have resulted in substantial new insight into the inner workings of our stellar-planetary system.

  12. A Survey of Multi-Objective Sequential Decision-Making

    OpenAIRE

    Roijers, D.M.; Vamplew, P.; Whiteson, S.; Dazeley, R.

    2013-01-01

    Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making problems with multiple objectives. Though there is a growing body of literature on this subject, little of it makes explicit under what circumstances special methods are needed to solve multi-obj...

  13. FAST LABEL: Easy and efficient solution of joint multi-label and estimation problems

    KAUST Repository

    Sundaramoorthi, Ganesh; Hong, Byungwoo

    2014-01-01

    that plague local solutions. Further, in comparison to global methods for the multi-label problem, the method is more efficient and it is easy for a non-specialist to implement. We give sample Matlab code for the multi-label Chan-Vese problem in this paper

  14. Seesaw induced electroweak scale, the hierarchy problem and sub-eV neutrino masses

    International Nuclear Information System (INIS)

    Atwood, D.; Bar-Shalom, S.; Soni, A.

    2006-01-01

    We describe a model for the scalar sector where all interactions occur either at an ultra-high scale, Λ_U ∼ 10¹⁶-10¹⁹ GeV, or at an intermediate scale, Λ_I = 10⁹-10¹¹ GeV. The interaction of physics on these two scales results in an SU(2) Higgs condensate at the electroweak (EW) scale, Λ_EW, through a seesaw-like Higgs mechanism, Λ_EW ∝ Λ_I²/Λ_U, while the breaking of the SM SU(2) × U(1) gauge symmetry occurs at the intermediate scale Λ_I. The EW scale is, therefore, not fundamental but is naturally generated in terms of ultra-high energy phenomena, and so the hierarchy problem is alleviated. We show that this class of "seesaw Higgs" models predicts the existence of sub-eV neutrino masses which are generated through a "two-step" seesaw mechanism in terms of the same two ultra-high scales: m_ν ∝ Λ_I⁴/Λ_U³ ∝ Λ_EW²/Λ_U. The neutrinos can be either Dirac or Majorana, depending on the structure of the scalar potential. We also show that our seesaw Higgs model can be naturally embedded in theories with tiny extra dimensions of size R ∝ Λ_U⁻¹ ∼ 10⁻¹⁶ fm, where the seesaw-induced EW scale arises from a violation of a symmetry at a distant brane; in particular, in the scenario presented there are seven tiny extra dimensions. (orig.)
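The quoted scale relations (Lambda_EW ~ Lambda_I^2/Lambda_U and m_nu ~ Lambda_I^4/Lambda_U^3) can be sanity-checked numerically. The specific values Lambda_U = 10^18 GeV and Lambda_I = 10^10 GeV are arbitrary picks inside the quoted ranges, chosen only to show that the orders of magnitude come out right.

```python
import math

# Representative values inside the ranges quoted in the abstract (GeV).
lam_U = 1e18   # ultra-high scale, Lambda_U
lam_I = 1e10   # intermediate scale, Lambda_I

lam_EW = lam_I**2 / lam_U          # seesaw-induced EW scale: Lambda_I^2 / Lambda_U
m_nu_gev = lam_I**4 / lam_U**3     # "two-step" seesaw neutrino mass (GeV)
m_nu_ev = m_nu_gev * 1e9           # convert GeV -> eV
```

With these inputs the induced EW scale lands at about 100 GeV and the neutrino mass at about 10⁻⁵ eV, i.e. sub-eV, as the abstract claims.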

  15. Multi-scale evaluations of submarine groundwater discharge

    Directory of Open Access Journals (Sweden)

    M. Taniguchi

    2015-03-01

    Full Text Available Multi-scale evaluations of submarine groundwater discharge (SGD) have been made in Saijo, Ehime Prefecture, Shikoku Island, Japan, by using seepage meters for point scale, ²²²Rn tracer for point and coastal scales, and a numerical groundwater model (SEAWAT) for coastal and basin scales. Daily temporal changes in SGD are evaluated by continuous seepage meter and ²²²Rn mooring measurements, and depend on sea level changes. Spatial evaluations of SGD were also made by ²²²Rn along the coast in July 2010 and November 2011. The area with larger ²²²Rn concentration during both seasons agreed well with the area with larger SGD calculated by 3D groundwater numerical simulations.

  16. Heat and mass transfer intensification and shape optimization a multi-scale approach

    CERN Document Server

    2013-01-01

    Is the heat and mass transfer intensification defined as a new paradigm of process engineering, or is it just a common and old idea, renamed and given the current taste? Where might intensification occur? How to achieve intensification? How does the shape optimization of thermal and fluidic devices lead to intensified heat and mass transfers? To answer these questions, Heat & Mass Transfer Intensification and Shape Optimization: A Multi-scale Approach clarifies the definition of the intensification by highlighting the potential role of the multi-scale structures, the specific interfacial area, the distribution of driving force, the modes of energy supply and the temporal aspects of processes.   A reflection on the methods of process intensification or heat and mass transfer enhancement in multi-scale structures is provided, including porous media, heat exchangers, fluid distributors, mixers and reactors. A multi-scale approach to achieve intensification and shape optimization is developed and clearly expla...

  17. Spatio-Temporal Super-Resolution Reconstruction of Remote-Sensing Images Based on Adaptive Multi-Scale Detail Enhancement.

    Science.gov (United States)

    Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming

    2018-02-07

    There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which is to reduce the time phase difference of image data and enhance the complementarity of information. The multi-scale image information is then decomposed using the L₀ gradient minimization model, and the non-redundant information is processed by difference calculation and expanding non-redundant layers and the redundant layer by the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptive-weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average gain in entropy of up to 0.42 dB for an up-scaling of 2 and a significant gain in enhancement measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMDE-SR method is better than existing super-resolution reconstruction methods in terms of visual and accuracy improvements.
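The first step of the pipeline, picking the maximum-entropy frame as the reference image, is easy to sketch. The 8-bit intensity range and 256-bin histogram below are assumptions; the paper does not state its binning.

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of the intensity histogram of an 8-bit image."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # 0 * log(0) terms contribute nothing
    return float(-(p * np.log2(p)).sum())

def pick_reference(images):
    """Reference-image selection: the frame with maximum entropy wins."""
    return int(np.argmax([image_entropy(im) for im in images]))
```

A flat frame has zero entropy while a textured one does not, so the selection favors the most information-rich acquisition, which is what the subsequent fusion steps build on.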

  18. Spatio-Temporal Super-Resolution Reconstruction of Remote-Sensing Images Based on Adaptive Multi-Scale Detail Enhancement

    Science.gov (United States)

    Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming

    2018-01-01

    There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which is to reduce the time phase difference of image data and enhance the complementarity of information. The multi-scale image information is then decomposed using the L₀ gradient minimization model, and the non-redundant information is processed by difference calculation and expanding non-redundant layers and the redundant layer by the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptive-weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average gain in entropy of up to 0.42 dB for an up-scaling of 2 and a significant gain in enhancement measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMDE-SR method is better than existing super-resolution reconstruction methods in terms of visual and accuracy improvements. PMID:29414893

  19. Solving implicit multi-mesh flow and conjugate heat transfer problems with RELAP-7

    International Nuclear Information System (INIS)

    Zou, L.; Peterson, J.; Zhao, H.; Zhang, H.; Andrs, D.; Martineau, R.

    2013-01-01

    The fully implicit simulation capability of RELAP-7 to solve multi-mesh flow and conjugate heat transfer problems for reactor system safety analysis is presented. Compared to general single-mesh simulations, a reactor system safety analysis code has unique challenges due to its highly simplified, interconnected, one-dimensional, and zero-dimensional flow network describing multiple physics with significantly different time and length scales. To use a Jacobian-free Newton-Krylov solver, preconditioning is generally required for the Krylov method. The uniqueness of reactor safety analysis codes in treating the interconnected flow network and conjugate heat transfer also introduces challenges in providing the preconditioning matrix. Typical flow and conjugate heat transfer problems involved in reactor safety analysis using RELAP-7, as well as the special treatment of the preconditioning matrix, are presented in detail. (authors)
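The Jacobian-free Newton-Krylov idea itself, solving a nonlinear system using only residual evaluations, with the Jacobian-vector products approximated internally, can be shown on a toy single-mesh problem via SciPy's `newton_krylov`. The conduction-like model below (-u'' + u³ = 1 with fixed ends) is an invented stand-in; RELAP-7's actual systems couple multiple meshes and physics and, as the abstract stresses, need a carefully built preconditioner, which this sketch omits.

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    """Residual of a small nonlinear conduction-like system on a uniform grid:
    -u'' + u**3 = 1 with u = 0 at both ends (central finite differences)."""
    n = u.size
    h = 1.0 / (n + 1)
    upad = np.concatenate(([0.0], u, [0.0]))        # Dirichlet boundary values
    lap = (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2
    return -lap + u**3 - 1.0

# Only residual evaluations are needed: no Jacobian is ever formed.
u = newton_krylov(residual, np.zeros(50), f_tol=1e-10)
```

The solver approximates Jacobian-vector products by finite differences of `residual`, which is exactly why stiff multi-physics networks need good preconditioning: the inner Krylov iteration count explodes without it.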

  20. FGP Approach for Solving Multi-level Multi-objective Quadratic Fractional Programming Problem with Fuzzy parameters

    Directory of Open Access Journals (Sweden)

    M. S. Osman

    2017-09-01

    Full Text Available In this paper, we consider a fuzzy goal programming (FGP) approach for solving the multi-level multi-objective quadratic fractional programming (ML-MOQFP) problem with fuzzy parameters in the constraints. Firstly, the concept of the α-cut approach is applied to transform the set of fuzzy constraints into a common deterministic one. Then, the quadratic fractional objective functions in each level are transformed into quadratic objective functions based on a proposed transformation. Secondly, the FGP approach is utilized to obtain a compromise solution for the ML-MOQFP problem by minimizing the sum of the negative deviational variables. Finally, an illustrative numerical example is given to demonstrate the applicability and performance of the proposed approach.
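The core FGP mechanic, minimizing the sum of negative deviational variables under aspiration levels, can be shown on a deliberately stripped-down toy: single level, linear goals, crisp parameters. All numbers are invented; the paper's actual problem is multi-level, quadratic fractional, and fuzzy (handled via the α-cut transformation), none of which appears here.

```python
from scipy.optimize import linprog

# Variables: x1, x2 and negative deviational variables d1, d2.
# Goals (aspiration levels): x1 + 2*x2 >= 10 and 3*x1 + x2 >= 12,
# relaxed by d1, d2; system constraint x1 + x2 <= 6; all variables >= 0.
c = [0.0, 0.0, 1.0, 1.0]            # minimise d1 + d2
A_ub = [
    [-1.0, -2.0, -1.0, 0.0],        # goal 1, rewritten as <= form
    [-3.0, -1.0, 0.0, -1.0],        # goal 2
    [1.0, 1.0, 0.0, 0.0],           # system constraint
]
b_ub = [-10.0, -12.0, 6.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, None)] * 4)
```

Here goal 2 is met exactly while goal 1 falls short by one unit (d1 = 1), which is the best achievable compromise; in the full method each level's decision maker imposes such goals and the deviations are minimized jointly.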

  1. Multi-scale and multi-physics model of the uterine smooth muscle with mechanotransduction.

    Science.gov (United States)

    Yochum, Maxime; Laforêt, Jérémy; Marque, Catherine

    2018-02-01

    Preterm labor is an important public health problem. However, the efficiency of the uterine muscle during labor is complex and still poorly understood. This work is a first step towards a model of the uterine muscle, including its electrical and mechanical components, to reach a better understanding of uterine synchronization. This model is proposed to investigate, by simulation, the possible role of mechanotransduction in the global synchronization of the uterus. The electrical diffusion indeed explains the local propagation of contractile activity, while the tissue stretching may play a role in the synchronization of distant parts of the uterine muscle. This work proposes a multi-physics (electrical, mechanical) and multi-scale (cell, tissue, whole uterus) model, which is applied to a realistic 3D uterus mesh. This model includes electrical components at different scales: generation of action potentials at the cell level and electrical diffusion at the tissue level. It then links these electrical events to the mechanical behavior at the cellular level (via the intracellular calcium concentration) by simulating the force generated by each active cell. It thus computes an estimation of the intra-uterine pressure (IUP) by integrating the forces generated by each active cell at the whole-uterine level, as well as the stretching of the tissue (by using a viscoelastic law for the behavior of the tissue). It finally includes, at the cellular level, stretch-activated channels (SACs) that create a loop between the mechanical and the electrical behavior (mechanotransduction). The simulation of different activated regions of the uterus, which in this first 'proof of concept' case are electrically isolated, permits the activation of inactive regions through the stretching (induced by the electrically active regions) computed at the whole-organ scale. This allows us to evidence the role of mechanotransduction in the global synchronization of the uterus.

  2. Multi-scale Modelling of Segmentation

    DEFF Research Database (Denmark)

    Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri

    2016-01-01

    While listening to music, people often unwittingly break down musical pieces into constituent chunks such as verses and choruses. Music segmentation studies have suggested that some consensus regarding boundary perception exists, despite individual differences. However, neither the effects ... pieces. In a second experiment on non-real-time segmentation, musicians indicated boundaries and their strength for six examples. Kernel density estimation was used to develop multi-scale segmentation models. Contrary to previous research, no relationship was found between boundary strength and boundary ...
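The kernel-density-estimation step mentioned above can be sketched directly: annotated boundary times are smoothed with a Gaussian kernel, and the bandwidth plays the role of the time scale. The boundary times and the two bandwidths below are invented for illustration; the study's actual annotations and scale choices are not given in the abstract.

```python
import numpy as np

def boundary_density(times, sigma, grid):
    """Gaussian KDE over annotated boundary times; the bandwidth sigma
    (seconds) sets the segmentation time scale."""
    t = np.asarray(times, dtype=float)[:, None]
    k = np.exp(-((grid - t) ** 2) / (2.0 * sigma**2))
    return k.sum(axis=0) / (len(times) * sigma * np.sqrt(2.0 * np.pi))

grid = np.linspace(0.0, 60.0, 601)
annotations = [9.8, 10.2, 10.5, 29.7, 30.0, 30.1, 30.4]   # invented marks (s)
fine = boundary_density(annotations, sigma=0.5, grid=grid)     # fine scale
coarse = boundary_density(annotations, sigma=3.0, grid=grid)   # coarse scale
```

Peaks of the density mark the consensus boundaries; sweeping sigma from fine to coarse yields the multi-scale segmentation profile, with minor boundaries merging into major ones as the bandwidth grows.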

  3. Multi-Time Scale Control of Demand Flexibility in Smart Distribution Networks

    Directory of Open Access Journals (Sweden)

    Bishnu P. Bhattarai

    2017-01-01

    Full Text Available This paper presents a multi-timescale control strategy to deploy electric vehicle (EV) demand flexibility for simultaneously providing power balancing, grid congestion management, and economic benefits to participating actors. First, an EV charging problem is investigated from the consumer, aggregator, and distribution system operator's perspectives. A hierarchical control architecture (HCA) comprising scheduling, coordinative, and adaptive layers is then designed to realize their coordinative goal. This is realized by integrating multi-time scale controls that work from day-ahead scheduling up to real-time adaptive control. The performance of the developed method is investigated with high EV penetration in a typical residential distribution grid. The simulation results demonstrate that HCA efficiently utilizes the demand flexibility stemming from EVs to solve grid unbalance and congestion with simultaneous maximization of economic benefits to the participating actors. This is ensured by enabling EV participation in day-ahead, balancing, and regulation markets. For the given network configuration and pricing structure, HCA enables the EV owners to get paid up to five times the cost they were paying without control.
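The day-ahead scheduling layer of such a hierarchy can be reduced to a toy intuition: given hourly prices, charge in the cheapest hours first, capped by the charger rating. This greedy sketch is an assumption for illustration only; the actual HCA layer also handles market bids, grid constraints and the real-time corrective layers.

```python
def day_ahead_plan(prices, energy_needed, max_per_hour):
    """Scheduling-layer sketch: fill the EV's energy requirement in the
    cheapest hours first, capped by the charger rating (kWh per hour)."""
    plan = [0.0] * len(prices)
    remaining = float(energy_needed)
    for hour in sorted(range(len(prices)), key=lambda h: prices[h]):
        if remaining <= 0.0:
            break
        plan[hour] = min(max_per_hour, remaining)
        remaining -= plan[hour]
    return plan
```

The coordinative and adaptive layers would then adjust this plan intra-day as congestion or balancing signals arrive, which is the multi-timescale aspect the abstract describes.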

  4. Stability, structure and scale: improvements in multi-modal vessel extraction for SEEG trajectory planning.

    Science.gov (United States)

    Zuluaga, Maria A; Rodionov, Roman; Nowell, Mark; Achhala, Sufyan; Zombori, Gergely; Mendelson, Alex F; Cardoso, M Jorge; Miserocchi, Anna; McEvoy, Andrew W; Duncan, John S; Ourselin, Sébastien

    2015-08-01

    Brain vessels are among the most critical landmarks that need to be assessed for mitigating surgical risks in stereo-electroencephalography (SEEG) implantation. Intracranial haemorrhage is the most common complication associated with implantation, carrying significant associated morbidity. SEEG planning is done pre-operatively to identify avascular trajectories for the electrodes. In current practice, neurosurgeons have no assistance in the planning of electrode trajectories. There is great interest in developing computer-assisted planning systems that can optimise the safety profile of electrode trajectories, maximising the distance to critical structures. This paper presents a method that integrates the concepts of scale, neighbourhood structure and feature stability with the aim of improving robustness and accuracy of vessel extraction within a SEEG planning system. The developed method accounts for scale and vicinity of a voxel by formulating the problem within a multi-scale tensor voting framework. Feature stability is achieved through a similarity measure that evaluates the multi-modal consistency in vesselness responses. The proposed measurement allows the combination of multiple image modalities into a single image that is used within the planning system to visualise critical vessels. Twelve paired data sets from two image modalities available within the planning system were used for evaluation. The mean Dice similarity coefficient was 0.89 ± 0.04, representing a statistically significant improvement over a semi-automated single human rater, single-modality segmentation protocol used in clinical practice (0.80 ± 0.03). Multi-modal vessel extraction is superior to semi-automated single-modality segmentation, indicating the possibility of safer SEEG planning, with reduced patient morbidity.
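The evaluation metric reported above (mean Dice 0.89 vs. 0.80) is straightforward to compute; a minimal version for binary segmentation masks:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary segmentation masks:
    2*|A intersect B| / (|A| + |B|), in [0, 1]."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total else 1.0
```

Identical masks score 1, disjoint masks score 0; the convention of returning 1 for two empty masks is a common choice, assumed here.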

  5. visPIG--a web tool for producing multi-region, multi-track, multi-scale plots of genetic data.

    Directory of Open Access Journals (Sweden)

    Matthew Scales

    Full Text Available We present VISual Plotting Interface for Genetics (visPIG; http://vispig.icr.ac.uk), a web application to produce multi-track, multi-scale, multi-region plots of genetic data. visPIG has been designed to allow users not well versed with mathematical software packages and/or programming languages such as R, Matlab®, Python, etc., to integrate data from multiple sources for interpretation and to easily create publication-ready figures. While web tools such as the UCSC Genome Browser or the WashU Epigenome Browser allow custom data uploads, such tools are primarily designed for data exploration. This is also true for the desktop-run Integrative Genomics Viewer (IGV). Other locally run data visualisation software such as Circos require significant computer skills of the user. The visPIG web application is a menu-based interface that allows users to upload custom data tracks and set track-specific parameters. Figures can be downloaded as PDF or PNG files. For sensitive data, the underlying R code can also be downloaded and run locally. visPIG is multi-track: it can display many different data types (e.g. association, functional annotation, intensity, interaction, heat map data, ...). It also allows annotation of genes and other custom features in the plotted region(s). Data tracks can be plotted individually or on a single figure. visPIG is multi-region: it supports plotting multiple regions, be they kilo- or megabases apart or even on different chromosomes. Finally, visPIG is multi-scale: a sub-region of particular interest can be 'zoomed' in. We describe the various features of visPIG and illustrate its utility with examples. visPIG is freely available through http://vispig.icr.ac.uk under a GNU General Public License (GPLv3).

  6. Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®

    Energy Technology Data Exchange (ETDEWEB)

    Stander, Nielen; Basudhar, Anirban; Basu, Ushnish; Gandikota, Imtiaz; Savic, Vesna; Sun, Xin; Choi, Kyoo Sil; Hu, Xiaohua; Pourboghrat, F.; Park, Taejoon; Mapar, Aboozar; Kumar, Shavan; Ghassemi-Armaki, Hassan; Abu-Farha, Fadi

    2015-09-14

    Ever-tightening regulations on fuel economy, and the likely future regulation of carbon emissions, demand persistent innovation in vehicle design to reduce vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials, by adding material diversity and composite materials, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing plate thickness while retaining sufficient strength and ductility required for durability and safety. A project to develop computational material models for advanced high strength steel is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP) funded by the US Department of Energy. Under this program, new Third Generation Advanced High Strength Steels (3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. The objectives of the project are to integrate atomistic, microstructural, forming and performance models to create an integrated computational materials engineering (ICME) toolkit for 3GAHSS. The mechanical properties of Advanced High Strength Steels (AHSS) are controlled by many factors, including phase composition and distribution in the overall microstructure, volume fraction, size and morphology of phase constituents as well as stability of the metastable retained austenite phase. The complex phase transformation and deformation mechanisms in these steels make the well-established traditional techniques obsolete, and a multi-scale microstructure-based modeling approach following the ICME strategy was therefore chosen in this project. Multi-scale modeling as a major area of research and development is an outgrowth of the Comprehensive

  7. Bernoulli Variational Problem and Beyond

    KAUST Repository

    Lorz, Alexander

    2013-12-17

    The question of 'cutting the tail' of the solution of an elliptic equation arises naturally in several contexts and leads to a singular perturbation problem in the form of a strong cut-off. We consider both the PDE with a drift and the symmetric case where a variational problem can be stated. It is known that, in both cases, the same critical scale arises for the size of the singular perturbation. More interesting is that in both cases another critical parameter (of order one) arises that decides when the limiting behaviour is non-degenerate. We study both theoretically and numerically the values of this critical parameter and, in the symmetric case, ask if the variational solution leads to the same value as for the maximal solution of the PDE. Finally we propose a weak formulation of the limiting Bernoulli problem which incorporates both Dirichlet and Neumann boundary conditions. © 2013 Springer-Verlag Berlin Heidelberg.

  8. PARETO OPTIMAL SOLUTIONS FOR MULTI-OBJECTIVE GENERALIZED ASSIGNMENT PROBLEM

    Directory of Open Access Journals (Sweden)

    S. Prakash

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: The Multi-Objective Generalized Assignment Problem (MGAP) with two objectives, where one objective is linear and the other non-linear, has been considered, with the constraint that a job is assigned to only one worker – though a worker may be assigned more than one job, depending upon the time available to him. An algorithm is proposed to find the set of Pareto optimal solutions of the problem, determining assignments of jobs to workers with two objectives without setting priorities for them. The two objectives are to minimise the total cost of the assignment and to reduce the time taken to complete all the jobs.

    AFRIKAANSE OPSOMMING (translated): A multi-objective generalised assignment problem (MGAP) with two objectives, where one is linear and the other non-linear, is studied, with the constraint that a job is assigned to only one worker – although more than one job may be assigned to him if the time is available. An algorithm is proposed to find the set of Pareto optimal solutions that assigns jobs to workers subject to the two objectives, without assigning priorities to them. The two objectives are to minimise the total cost of the assignment and to reduce the time taken to complete all the jobs.
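    The Pareto optimal set described in this abstract consists of the assignments that no other assignment beats in both objectives at once. A minimal sketch of the dominance filter, with hypothetical (cost, time) data:

```python
def pareto_front(solutions):
    """Return the non-dominated (cost, time) pairs, both objectives minimised.

    A solution is dominated if some other solution is no worse in both
    objectives and differs in at least one of them.
    """
    return [s for s in solutions
            if not any(o != s and o[0] <= s[0] and o[1] <= s[1]
                       for o in solutions)]

# Hypothetical (total cost, completion time) pairs for candidate assignments
assignments = [(10, 5), (8, 7), (12, 4), (9, 6), (11, 6)]
print(sorted(pareto_front(assignments)))  # (11, 6) is dominated by (9, 6)
```

    The front contains every trade-off a decision maker might still reasonably choose; setting priorities between the objectives is deliberately deferred, as in the paper.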

  9. Multi Scale Models for Flexure Deformation in Sheet Metal Forming

    Directory of Open Access Journals (Sweden)

    Di Pasquale Edmondo

    2016-01-01

    Full Text Available This paper presents the application of multi-scale techniques to the simulation of sheet metal forming using the one-step method. When a blank flows over the die radius, it undergoes a complex cycle of bending and unbending. First, we describe an original model for the prediction of residual plastic deformation and stresses in the blank section. This model, working on a scale about one hundred times smaller than the element size, has been implemented in SIMEX, a one-step sheet metal forming simulation code. The use of this multi-scale modeling technique greatly improves the accuracy of the solution. Finally, we discuss the implications of this analysis for the prediction of springback in metal forming.

  10. Projected regression method for solving Fredholm integral equations arising in the analytic continuation problem of quantum physics

    International Nuclear Information System (INIS)

    Arsenault, Louis-François; Millis, Andrew J; Neuberg, Richard; Hannah, Lauren A

    2017-01-01

    We present a supervised machine learning approach to the inversion of Fredholm integrals of the first kind as they arise, for example, in the analytic continuation problem of quantum many-body physics. The approach provides a natural regularization for the ill-conditioned inverse of the Fredholm kernel, as well as an efficient and stable treatment of constraints. The key observation is that the stability of the forward problem permits the construction of a large database of outputs for physically meaningful inputs. Applying machine learning to this database generates a regression function of controlled complexity, which returns approximate solutions for previously unseen inputs; the approximate solutions are then projected onto the subspace of functions satisfying relevant constraints. Under standard error metrics the method performs as well as or better than the Maximum Entropy method for low input noise and is substantially more robust to increased input noise. We suggest that the methodology will be similarly effective for other problems involving a formally ill-conditioned inversion of an integral operator, provided that the forward problem can be efficiently solved. (paper)
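    The strategy above can be caricatured in a few lines: use the stable forward problem to build a database of input/output pairs, then fit a regularized regression from outputs back to inputs. The Gaussian kernel, the family of bump inputs and the ridge penalty below are illustrative assumptions, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_train = 50, 500

# Forward problem: a smooth (hence ill-conditioned) Fredholm kernel, y = K x
t = np.linspace(0.0, 1.0, n_grid)
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.02)

# Database of physically plausible inputs: random smooth bumps
centers = rng.uniform(0.2, 0.8, n_train)
X = np.exp(-((t[None, :] - centers[:, None]) ** 2) / 0.005)
Y = X @ K.T + 0.01 * rng.standard_normal((n_train, n_grid))  # noisy outputs

# Supervised inverse: ridge regression from outputs back to inputs, X ~ Y W
lam = 1e-3
W = np.linalg.solve(Y.T @ Y + lam * np.eye(n_grid), Y.T @ X)

# Apply the learned inverse to a previously unseen input
x_true = np.exp(-((t - 0.5) ** 2) / 0.005)
y_obs = K @ x_true + 0.01 * rng.standard_normal(n_grid)
x_hat = y_obs @ W   # regularized approximate solution of the inverse problem
```

    Direct inversion of K would amplify the noise catastrophically; the regression learned from the database is regularized both by the ridge penalty and by the restriction to physically meaningful inputs.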

  11. Scaling of Attitudes Toward Population Problems

    Science.gov (United States)

    Watkins, George A.

    1975-01-01

    This study related population problem attitudes and socioeconomic variables. Six items concerned with number of children, birth control, family, science, economic depression, and overpopulation were selected for a Guttman scalogram. Education, occupation, and number of children were correlated with population problems scale scores; marital status,…

  12. A multi-scale approach of mechanical and transport properties of cementitious materials under rises of temperature

    International Nuclear Information System (INIS)

    Caratini, G.

    2012-01-01

    Modern industrial activities (storage of nuclear waste, geothermal wells, nuclear power plants, ...) can submit cementitious materials to some extreme conditions, for example temperatures above 200 °C. This level of temperature induces phenomena of dehydration in the cement paste, particularly impacting the CSH hydrates which provide the mechanical cohesion. The effects of these temperatures on the mechanical and transport properties have been the subject of this thesis. To understand these effects, we need to take into account the heterogeneous, porous and multi-scale aspects of these materials. To do this, micro-mechanics and homogenization tools based on the solution of the Eshelby problem were used. Moreover, to support this multi-scale modeling, mechanical tests based on the theory of porous media were conducted. Measurements of the compressibility modulus, permeability and porosity under confining pressure were used to investigate the mechanisms of degradation of these materials during thermal loads up to 400 °C. (author)

  13. Multi-scale modeling of dispersed gas-liquid two-phase flow

    NARCIS (Netherlands)

    Deen, N.G.; Sint Annaland, van M.; Kuipers, J.A.M.

    2004-01-01

    In this work the concept of multi-scale modeling is demonstrated. The idea of this approach is to use different levels of modeling, each developed to study phenomena at a certain length scale. Information obtained at the level of small length scales can be used to provide closure information at the

  14. Some Problems of Industrial Scale-Up.

    Science.gov (United States)

    Jackson, A. T.

    1985-01-01

    Scientific ideas of the biological laboratory are turned into economic realities in industry only after several problems are solved. Economics of scale, agitation, heat transfer, sterilization of medium and air, product recovery, waste disposal, and future developments are discussed using aerobic respiration as the example in the scale-up…

  15. Multi-scale modeling of diffusion-controlled reactions in polymers: renormalisation of reactivity parameters.

    Science.gov (United States)

    Everaers, Ralf; Rosa, Angelo

    2012-01-07

    The quantitative description of polymeric systems requires hierarchical modeling schemes, which bridge the gap between the atomic scale, relevant to chemical or biomolecular reactions, and the macromolecular scale, where the longest relaxation modes occur. Here, we use the formalism for diffusion-controlled reactions in polymers developed by Wilemski, Fixman, and Doi to discuss the renormalisation of the reactivity parameters in polymer models with varying spatial resolution. In particular, we show that the adjustments are independent of chain length. As a consequence, it is possible to match reaction times between descriptions with different resolutions for relatively short reference chains and to use the coarse-grained model to make quantitative predictions for longer chains. We illustrate our results by a detailed discussion of the classical problem of chain cyclization in the Rouse model, which offers the simplest example of a multi-scale description, if we consider differently discretized Rouse models for the same physical system. Moreover, we are able to explore different combinations of compact and non-compact diffusion in the local and large-scale dynamics by varying the embedding dimension.

  16. A Multi-Stage Reverse Logistics Network Problem by Using Hybrid Priority-Based Genetic Algorithm

    Science.gov (United States)

    Lee, Jeong-Eun; Gen, Mitsuo; Rhee, Kyong-Gu

    Today the remanufacturing problem is one of the most important problems regarding the environmental aspects of the recovery of used products and materials. The reverse logistics is therefore gaining power and shows great potential for winning consumers in a more competitive context in the future. This paper considers the multi-stage reverse Logistics Network Problem (m-rLNP), minimizing the total cost, which involves the reverse logistics shipping cost and the fixed cost of opening the disassembly centers and processing centers. In this study, we first formulate the m-rLNP model as a three-stage logistics network model. To solve this problem, we propose a Genetic Algorithm (GA) with a priority-based encoding method consisting of two stages, and introduce a new crossover operator called Weight Mapping Crossover (WMX). Additionally, a heuristic approach is applied in the third stage to ship materials from processing centers to manufacturers. Finally, numerical experiments with various scales of m-rLNP models demonstrate the effectiveness and efficiency of our approach by comparison with recent research.
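    In priority-based encoding, a chromosome assigns one priority value to every node of a transportation stage, and a decoder turns those priorities into shipments. A simplified single-stage decoder (the three-stage model and the WMX operator are omitted; all data are hypothetical) might look like:

```python
import numpy as np

def decode_priority(priority, supply, demand, cost):
    """Decode a priority chromosome into shipments for one logistics stage.

    priority holds one value per node (sources first, then sinks). At each
    step the highest-priority node still active picks its cheapest partner
    and the maximum possible quantity is shipped on that arc.
    """
    supply, demand = list(supply), list(demand)
    m, n = len(supply), len(demand)
    ship = np.zeros((m, n))
    prio = list(priority)
    while sum(supply) > 0 and sum(demand) > 0:
        k = max(range(m + n), key=lambda v: prio[v])
        if k < m:   # a source: send to the cheapest sink with demand left
            i = k
            j = min((c for c in range(n) if demand[c] > 0),
                    key=lambda c: cost[i][c])
        else:       # a sink: receive from the cheapest source with supply left
            j = k - m
            i = min((r for r in range(m) if supply[r] > 0),
                    key=lambda r: cost[r][j])
        q = min(supply[i], demand[j])
        ship[i, j] += q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            prio[i] = -1        # exhausted source drops out
        if demand[j] == 0:
            prio[m + j] = -1    # satisfied sink drops out
    return ship

# Hypothetical stage: 2 sources, 2 sinks; priorities favour sink 0 first
cost = [[4, 2], [3, 1]]
ship = decode_priority([3, 1, 4, 2], supply=[20, 10], demand=[15, 15], cost=cost)
print(ship.tolist())  # [[5.0, 15.0], [10.0, 0.0]]
```

    Because any permutation of priorities decodes to a feasible shipment plan, a GA can search the priority space freely while the decoder enforces the capacity constraints.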

  17. Advanced computational workflow for the multi-scale modeling of the bone metabolic processes.

    Science.gov (United States)

    Dao, Tien Tuan

    2017-06-01

    Multi-scale modeling of the musculoskeletal system plays an essential role in the deep understanding of complex mechanisms underlying biological phenomena and processes such as bone metabolic processes. Current multi-scale models suffer from the isolation of sub-models at each anatomical scale. The objective of the present work was to develop a new fully integrated computational workflow for simulating bone metabolic processes at multiple scales. The organ-level model employs multi-body dynamics to estimate body boundary and loading conditions from body kinematics. The tissue-level model uses the finite element method to estimate tissue deformation and mechanical loading under body loading conditions. Finally, the cell-level model includes a bone remodeling mechanism through agent-based simulation under tissue loading. A case study on the bone remodeling process located in the human jaw was performed and presented. The developed multi-scale model of the human jaw was validated using literature-based data at each anatomical level. Simulation outcomes fall within the literature-based ranges of values for estimated muscle force, tissue loading and cell dynamics during the bone remodeling process. This study opens perspectives for accurately simulating bone metabolic processes using a fully integrated computational workflow, leading to a better understanding of musculoskeletal system function across multiple length scales, as well as providing new informative data for clinical decision support and industrial applications.

  18. Multi-scale modeling strategies in materials science—The ...

    Indian Academy of Sciences (India)

    Unknown

    Multi-scale models; quasicontinuum method; finite elements.

  19. Plant trait detection with multi-scale spectrometry

    Science.gov (United States)

    Gamon, J. A.; Wang, R.

    2017-12-01

    Proximal and remote sensing using imaging spectrometry offers new opportunities for detecting plant traits, with benefits for phenotyping, productivity estimation, stress detection, and biodiversity studies. Using proximal and airborne spectrometry, we evaluated variation in plant optical properties at various spatial and spectral scales with the goal of identifying optimal scales for distinguishing plant traits related to photosynthetic function. Using directed approaches based on physiological vegetation indices, and statistical approaches based on spectral information content, we explored alternate ways of distinguishing plant traits with imaging spectrometry. With both leaf traits and canopy structure contributing to the signals, results exhibit a strong scale dependence. Our results demonstrate the benefits of multi-scale experimental approaches within a clear conceptual framework when applying remote sensing methods to plant trait detection for phenotyping, productivity, and biodiversity studies.

  20. Multi-scale semi-ideal magnetohydrodynamics of a tokamak plasma

    International Nuclear Information System (INIS)

    Bazdenkov, S.; Sato, Tetsuya; Watanabe, Kunihiko.

    1995-09-01

    An analytical model of fast spatial flattening of the toroidal current density and q-profile at the nonlinear stage of the (m = 1/n = 1) kink instability of a tokamak plasma is presented. The flattening is shown to be an essentially multi-scale phenomenon which is characterized by at least two magnetic Reynolds numbers. The ordinary one, R_m, is related to a characteristic radial scale-length, while the other, R_m^*, corresponds to a characteristic scale-length of plasma inhomogeneity along the magnetic field line. In a highly conducting plasma inside the q = 1 magnetic surface, where the q value does not differ much from unity, plasma evolution is governed by a multi-scale non-ideal dynamics characterized by two well-separated magnetic Reynolds numbers, R_m and R_m^* ≡ (1 - q)R_m, where R_m^* ~ O(1) and R_m >> 1. This dynamics consistently explains two seemingly contradictory features recently observed in a numerical simulation [Watanabe et al., 1995]: i) the current profile (q-profile) is flattened on the magnetohydrodynamic time scale within the q = 1 rational surface; ii) the magnetic surface keeps its initial circular shape during this evolution. (author)

  1. Multi-scale modeling and analysis of convective boiling: towards the prediction of CHF in rod bundles

    International Nuclear Information System (INIS)

    Niceno, B.; Sato, Y.; Badillo, A.; Andreani, M.

    2010-01-01

    In this paper we describe current activities in the project Multi-Scale Modeling and Analysis of convective boiling (MSMA), conducted jointly by the Paul Scherrer Institute (PSI) and the Swiss Nuclear Utilities (Swissnuclear). The long-term aim of the MSMA project is to formulate improved closure laws for Computational Fluid Dynamics (CFD) simulations for the prediction of convective boiling and, eventually, of the Critical Heat Flux (CHF). As boiling is controlled by the competition of numerous phenomena at various length and time scales, a multi-scale approach is employed to tackle the problem at different scales. In the MSMA project, the scales on which we focus range from the CFD scale (macro-scale) through the bubble-size scale (meso-scale) and the liquid micro-layer and triple interline scale (micro-scale) down to the molecular scale (nano-scale). The current focus of the project is on micro- and meso-scale modeling. The numerical framework comprises a highly efficient, parallel DNS solver, the PSI-BOIL code. The code incorporates an Immersed Boundary Method (IBM) to tackle complex geometries. For the simulation of meso-scales (bubbles), we use the Constrained Interpolation Profile method: Conservative Semi-Lagrangian 2nd order (CIP-CSL2). The phase change is described either by applying conventional jump conditions at the interface, or by using the Phase Field (PF) approach. In this work, we present selected results for flows in complex geometry using the IBM, selected bubbly flow simulations using the CIP-CSL2 method, and results for phase change using the PF approach. In the subsequent stage of the project, the importance of the effects of nano-scale processes on the global boiling heat transfer will be evaluated. To validate the models, more experimental information will be needed in the future, so it is expected that the MSMA project will become the seed for a long-term, combined theoretical and experimental program

  2. Multi-scale structural community organisation of the human genome.

    Science.gov (United States)

    Boulos, Rasha E; Tremblay, Nicolas; Arneodo, Alain; Borgnat, Pierre; Audit, Benjamin

    2017-04-11

    Structural interaction frequency matrices between all genome loci are now experimentally achievable thanks to high-throughput chromosome conformation capture technologies. This raises a new methodological challenge for computational biology, which consists in objectively extracting from these data the structural motifs characteristic of genome organisation. We deployed the fast multi-scale community mining algorithm based on spectral graph wavelets to characterise the networks of intra-chromosomal interactions in human cell lines. We observed that there exist structural domains of all sizes up to chromosome length and demonstrated that the set of structural communities forms a hierarchy of chromosome segments. Hence, at all scales, chromosome folding predominantly involves interactions between neighbouring sites rather than the formation of links between distant loci. Multi-scale structural decomposition of human chromosomes provides an original framework to question structural organisation and its relationship to functional regulation across the scales. By construction the proposed methodology is independent of the precise assembly of the reference genome and is thus directly applicable to genomes whose assembly is not fully determined.

  3. A novel multi-item joint replenishment problem considering multiple type discounts.

    Directory of Open Access Journals (Sweden)

    Ligang Cui

    Full Text Available In business replenishment, discount offers for multiple items may either provide different discount schedules with a single discount type, or provide schedules with multiple discount types. The paper investigates the joint effects of multiple discount schemes on the decisions of multi-item joint replenishment. In this paper, a joint replenishment problem (JRP) model, considering three discount offers (all-unit discount, incremental discount and total volume discount) simultaneously, is constructed to determine the basic cycle time and joint replenishment frequencies of the items. To solve the proposed problem, a heuristic algorithm is proposed to find the optimal solutions and the corresponding total cost of the JRP model. Numerical experiments are performed to test the algorithm, and the computational results of JRPs under different discount combinations show that the significance of the replenishment cost reduction differs across discount combinations.
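    The model above extends the classic JRP with discounts. As background, the standard discount-free JRP heuristic alternates between the basic cycle time T and integer frequency multipliers k_i; the sketch below shows that iteration under hypothetical cost data (discount schemes are not modelled here):

```python
import math

def jrp_basic_cycle(S, s, h, d, iters=50):
    """Iterative heuristic for the classic (discount-free) JRP.

    S: major ordering cost per joint order; s[i], h[i], d[i]: minor ordering
    cost, holding cost rate and demand rate of item i. Item i is replenished
    every k[i] basic cycles of length T.
    """
    n = len(s)
    k = [1] * n
    for _ in range(iters):
        # Optimal basic cycle length for the current multipliers
        T = math.sqrt(2 * (S + sum(s[i] / k[i] for i in range(n)))
                      / sum(h[i] * d[i] * k[i] for i in range(n)))
        # Best integer multiplier for each item given T
        for i in range(n):
            q = 2 * s[i] / (h[i] * d[i] * T * T)
            ki = 1
            while ki * (ki + 1) < q:  # integer rule: k(k-1) <= q <= k(k+1)
                ki += 1
            k[i] = ki
    cost = ((S + sum(s[i] / k[i] for i in range(n))) / T
            + T / 2 * sum(h[i] * d[i] * k[i] for i in range(n)))
    return T, k, cost

# Hypothetical data: item 2 has a high minor cost, so it joins fewer orders
T, k, cost = jrp_basic_cycle(S=100, s=[10, 5, 60], h=[1.0, 0.5, 0.2], d=[100, 200, 50])
print(k)  # [1, 1, 3]
```

    The paper's heuristic additionally has to evaluate, for each candidate solution, which combination of the three discount types applies to each order quantity.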

  4. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
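    Sensitivity derivatives of a structural response with respect to design variables can be approximated by finite differences, as in this illustrative sketch (a textbook cantilever tip-deflection example, not the paper's reanalysis procedure):

```python
def sensitivity(f, x, rel=1e-6):
    """Forward-difference sensitivity derivatives df/dx_i (illustrative only)."""
    base = f(x)
    grads = []
    for i in range(len(x)):
        step = rel * abs(x[i]) if x[i] != 0 else rel  # relative step size
        xp = list(x)
        xp[i] += step
        grads.append((f(xp) - base) / step)
    return grads

# Hypothetical response: tip deflection of a cantilever, w = P L^3 / (3 E I),
# differentiated w.r.t. the design variables L (length) and I (inertia)
def tip_deflection(v):
    L, I = v
    P, E = 1000.0, 210e9  # load [N] and Young's modulus [Pa]
    return P * L ** 3 / (3 * E * I)

g = sensitivity(tip_deflection, [2.0, 1e-6])
```

    Analytic and semi-analytic formulations, as studied in the paper, avoid the step-size trade-off inherent in finite differencing and are far cheaper when many derivatives are needed.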

  5. The 'thousand words' problem: Summarizing multi-dimensional data

    International Nuclear Information System (INIS)

    Scott, David M.

    2011-01-01

    Research highlights: → Sophisticated process sensors produce large multi-dimensional data sets. → Plant control systems cannot handle images or large amounts of data. → Various techniques reduce the dimensionality, extracting information from raw data. → Simple 1D and 2D methods can often be extended to 3D and 4D applications. - Abstract: An inherent difficulty in the application of multi-dimensional sensing to process monitoring and control is the extraction and interpretation of useful information. Ultimately the measured data must be collapsed into a relatively small number of values that capture the salient characteristics of the process. Although multiple dimensions are frequently necessary to isolate a particular physical attribute (such as the distribution of a particular chemical species in a reactor), plant control systems are not equipped to use such data directly. The production of a multi-dimensional data set (often displayed as an image) is not the final step of the measurement process, because information must still be extracted from the raw data. In the metaphor of one picture being equal to a thousand words, the problem becomes one of paraphrasing a lengthy description of the image with one or two well-chosen words. Various approaches to solving this problem are discussed using examples from the fields of particle characterization, image processing, and process tomography.
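    As a toy illustration of collapsing a multi-dimensional measurement into a few control-ready values (the particular statistics chosen here are assumptions, not the article's methods), one might reduce a 2D concentration map to its total mass, centroid and spread:

```python
import numpy as np

def summarize_field(field):
    """Collapse a 2D measurement into a few scalars a control system can use."""
    total = field.sum()
    ys, xs = np.indices(field.shape)
    cx = (xs * field).sum() / total   # intensity-weighted centroid (x)
    cy = (ys * field).sum() / total   # intensity-weighted centroid (y)
    spread = np.sqrt((((xs - cx) ** 2 + (ys - cy) ** 2) * field).sum() / total)
    return float(total), float(cx), float(cy), float(spread)

# Toy "tomogram": a small blob of a chemical species (hypothetical data)
img = np.zeros((5, 5))
img[2, 3] = 4.0
img[2, 2] = 1.0
total, cx, cy, spread = summarize_field(img)
print(total, cx, cy, round(spread, 3))  # 5.0 2.8 2.0 0.4
```

    Four numbers replace twenty-five pixels; the same idea scales to the 3D and 4D data sets the article discusses, where the reduction is even more drastic.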

  6. Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes

    Science.gov (United States)

    Tao, Wei-Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2011-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, and land processes, as well as the explicit cloud-radiation and cloud-land surface interaction processes, are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of the cloud and precipitation processes simulated by the model. In this talk, the recent developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study precipitating systems and hurricanes/typhoons will be presented. High-resolution spatial and temporal visualization will be utilized to show the evolution of precipitation processes.
Also how to

  7. Tabu search approaches for the multi-level warehouse layout problem with adjacency constraints

    Science.gov (United States)

    Zhang, G. Q.; Lai, K. K.

    2010-08-01

    A new multi-level warehouse layout problem, the multi-level warehouse layout problem with adjacency constraints (MLWLPAC), is investigated. The same item type is required to be located in adjacent cells, and horizontal and vertical unit travel costs are product dependent. An integer programming model is proposed to formulate the problem, which is NP-hard. Along with a heuristic based on the cube-per-order index policy, the standard tabu search (TS), a greedy TS, and a dynamic-neighbourhood-based TS are presented to solve the problem. The computational results show that the proposed approaches can reduce the transportation cost significantly.
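    The TS variants in the paper are specialized to warehouse layouts, but they all share the same skeleton: move to the best non-tabu neighbour, remember recent moves, keep the best solution seen. A generic sketch on a toy one-dimensional objective (illustrative only, without aspiration criteria):

```python
def tabu_search(cost, neighbours, start, tenure=5, iters=200):
    """Generic tabu search skeleton (not the paper's MLWLPAC-specific variants)."""
    current = best = start
    best_cost = cost(best)
    tabu = []                      # short-term memory of recent solutions
    for _ in range(iters):
        candidates = [n for n in neighbours(current) if n not in tabu]
        if not candidates:
            break                  # every move is tabu: stop
        current = min(candidates, key=cost)   # best admissible move, even uphill
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)            # oldest entry leaves the tabu list
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
    return best, best_cost

# Toy objective: minimise (x - 7)^2 over the integers, moving by +/-1
best, best_cost = tabu_search(cost=lambda x: (x - 7) ** 2,
                              neighbours=lambda x: [x - 1, x + 1],
                              start=0)
print(best, best_cost)  # 7 0
```

    The tabu list forces the search past local optima; the greedy and dynamic-neighbourhood variants in the paper differ mainly in how the candidate set is generated.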

  8. Multi-scale Regions from Edge Fragments

    DEFF Research Database (Denmark)

    Kazmi, Wajahat; Andersen, Hans Jørgen

    2014-01-01

    In this article we introduce a novel method for detecting multi-scale salient regions around edges using a graph-based image compression algorithm. Images are recursively decomposed into triangles arranged into a binary tree using linear interpolation. The entropy of any local region of the image […]; their performance is comparable to SIFT (Lowe, 2004). We also show that when they are used together with MSERs (Matas et al., 2002), the performance of MSERs is boosted.

  9. Multi-grid and ICCG for problems with interfaces

    International Nuclear Information System (INIS)

    Dendy, J.E.; Hyman, J.M.

    1980-01-01

    Computation times for the multi-grid (MG) algorithm, the incomplete Cholesky conjugate gradient (ICCG) algorithm [J. Comp. Phys. 26, 43-65 (1978); Math. Comp. 31, 148-162 (1977)], and the modified ICCG (MICCG) algorithm [BIT 18, 142-156 (1978)] to solve elliptic partial differential equations are compared. The MICCG and ICCG algorithms are more robust than MG for general positive definite systems. A major advantage of the MG algorithm is that the structure of the problem can be exploited to reduce the solution time significantly. Five example problems are discussed. For problems with little structure and for one-shot calculations, ICCG is recommended over MG, and MICCG over ICCG. For problems that are solved many times, it is worth investing the effort to study methods like MG.

  10. Topology Optimization of Large Scale Stokes Flow Problems

    DEFF Research Database (Denmark)

    Aage, Niels; Poulsen, Thomas Harpsøe; Gersborg-Hansen, Allan

    2008-01-01

    This note considers topology optimization of large scale 2D and 3D Stokes flow problems using parallel computations. We solve problems with up to 1.125.000 elements in 2D and 128.000 elements in 3D on a shared memory computer consisting of Sun UltraSparc IV CPUs.

  11. Measurement, modeling and perception of painted surfaces: A Multi-scale Analysis of the Touch-up Problem

    Science.gov (United States)

    Kalghatgi, Suparna Kishore

    Real-world surfaces typically have geometric features at a range of spatial scales. At the microscale, opaque surfaces are often characterized by bidirectional reflectance distribution functions (BRDFs), which describe how a surface scatters incident light. At the mesoscale, surfaces often exhibit visible texture: stochastic or patterned arrangements of geometric features that provide visual information about surface properties such as roughness, smoothness, softness, etc. These textures also affect how light is scattered by the surface, but the effects are at a different spatial scale than those captured by the BRDF. Through this research, we investigate how microscale and mesoscale surface properties interact to contribute to overall surface appearance. This behavior is also the cause of the well-known "touch-up problem" in the paint industry, where two regions coated with exactly the same paint look different in color, gloss and/or texture because of differences in application methods. First, samples were created by applying latex paint to standard wallboard surfaces. Two application methods, spraying and rolling, were used. The BRDF and texture properties of the samples were measured, which revealed differences at both the microscale and mesoscale. These data were then used as input for a physically-based image synthesis algorithm to generate realistic images of the surfaces under different viewing conditions. In order to understand the factors that govern touch-up visibility, psychophysical tests were conducted using calibrated, digital photographs of the samples as stimuli. Images were presented in pairs and a two-alternative forced choice design was used for the experiments. These judgments were then used as data for a Thurstonian scaling analysis to produce psychophysical scales of visibility, which helped determine the effect of paint formulation, application methods, and viewing and illumination conditions on the touch-up problem.
The results can be

  12. Multi-scale Mexican spotted owl (Strix occidentalis lucida) nest/roost habitat selection in Arizona and a comparison with single-scale modeling results

    Science.gov (United States)

    Brad C. Timm; Kevin McGarigal; Samuel A. Cushman; Joseph L. Ganey

    2016-01-01

    Efficacy of future habitat selection studies will benefit by taking a multi-scale approach. In addition to potentially providing increased explanatory power and predictive capacity, multi-scale habitat models enhance our understanding of the scales at which species respond to their environment, which is critical knowledge required to implement effective...

  13. A heuristic for solving the redundancy allocation problem for multi-state series-parallel systems

    International Nuclear Information System (INIS)

    Ramirez-Marquez, Jose E.; Coit, David W.

    2004-01-01

    The redundancy allocation problem is formulated with the objective of minimizing design cost, when the system exhibits a multi-state reliability behavior, given system-level performance constraints. When the multi-state nature of the system is considered, traditional solution methodologies are no longer valid. This study considers a multi-state series-parallel system (MSPS) with capacitated binary components that can provide different multi-state system performance levels. The different demand levels, which must be supplied during the system-operating period, result in the multi-state nature of the system. The new solution methodology offers several distinct benefits compared to traditional formulations of the MSPS redundancy allocation problem. For some systems, recognizing that different component versions yield different system performance is critical so that the overall system reliability estimate and the associated design model the true system reliability behavior more realistically. The MSPS design problem, solved in this study, has been previously analyzed using genetic algorithms (GAs) and the universal generating function. The specific problem being addressed is one where there are multiple component choices, but once a component selection is made, only the same component type can be used to provide redundancy. This is the first time that the MSPS design problem has been addressed without using GAs. The heuristic offers more efficient and straightforward analyses. Solutions to three different problem types are obtained, illustrating the simplicity and ease of application of the heuristic without compromising the intended optimization needs.
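    The capacity semantics behind the MSPS formulation can be sketched in a few lines. This is not the paper's heuristic; it is a minimal reliability evaluator (with illustrative component data) for a series arrangement of parallel subsystems of identical capacitated binary components, assuming independence:

```python
from math import comb, ceil

def subsystem_meets_demand(n, p, cap, demand):
    """P(cap * K >= demand) where K ~ Binomial(n, p) counts working components."""
    kmin = ceil(demand / cap)
    if kmin > n:
        return 0.0
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(kmin, n + 1))

def msps_reliability(subsystems, demand):
    """Series arrangement: the demand is met only if every subsystem meets it,
    so under independence the per-subsystem probabilities multiply."""
    r = 1.0
    for n, p, cap in subsystems:
        r *= subsystem_meets_demand(n, p, cap, demand)
    return r

# two subsystems: 3 components of capacity 5 at p=0.9, and 2 of capacity 10 at p=0.95
reliability = msps_reliability([(3, 0.9, 5), (2, 0.95, 10)], demand=10)
```

    A redundancy-allocation heuristic then searches over the component counts `n` (and component versions) to minimize cost subject to such reliability values meeting the constraint at each demand level.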

  14. Diagnosing Disaster Resilience of Communities as Multi-scale Complex Socio-ecological Systems

    Science.gov (United States)

    Liu, Wei; Mochizuki, Junko; Keating, Adriana; Mechler, Reinhard; Williges, Keith; Hochrainer, Stefan

    2014-05-01

    Global environmental change, growing anthropogenic influence, and increasing globalisation of society have made it clear that disaster vulnerability and resilience of communities cannot be understood without knowledge of the broader social-ecological system in which they are embedded. We propose a framework for diagnosing community resilience to disasters, as a form of disturbance to social-ecological systems, with feedbacks from the local to the global scale. Inspired by the iterative multi-scale analysis employed by the Resilience Alliance, the related social-ecological systems framework of Ostrom, and the sustainable livelihood framework, we developed a multi-tier framework for thinking of communities as multi-scale social-ecological systems and for analyzing communities' disaster resilience as well as their general resilience. We highlight the cross-scale influences and feedbacks on communities that exist from lower (e.g., household) to higher (e.g., regional, national) scales. The conceptual framework is then applied to a real-world resilience assessment, to illustrate how key components of social-ecological systems, including natural hazards, the natural and man-made environment, and community capacities, can be delineated and analyzed.

  15. Multi-scale structural similarity index for motion detection

    Directory of Open Access Journals (Sweden)

    M. Abdel-Salam Nasr

    2017-07-01

    Full Text Available A widely used recent approach for measuring image quality is the structural similarity index (SSI). This paper presents a novel algorithm based on the multi-scale structural similarity index for motion detection (MS-SSIM) in videos. The MS-SSIM approach is based on modeling of image luminance, contrast and structure at multiple scales. The MS-SSIM has resulted in much better performance than the single-scale SSI approach, but at the cost of relatively lower processing speed. The major advantages of the presented algorithm are both higher detection accuracy and quasi-real-time processing speed.
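    A simplified sketch of the idea (not the paper's implementation): the standard MS-SSIM of Wang et al. uses local 11x11 Gaussian windows and calibrated per-scale weights, but the multi-scale structure can be shown with global image statistics, 2x2 block-averaged downsampling, and illustrative weights:

```python
import numpy as np

def ssim_components(x, y, c1=1e-4, c2=9e-4):
    """Global luminance (l) and contrast/structure (cs) terms of SSIM."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    l = (2 * mx * my + c1) / (mx**2 + my**2 + c1)
    cs = (2 * cov + c2) / (vx + vy + c2)
    return l, cs

def downsample2(img):
    """Halve resolution by 2x2 block averaging."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    im = img[:h, :w]
    return (im[0::2, 0::2] + im[1::2, 0::2] + im[0::2, 1::2] + im[1::2, 1::2]) / 4.0

def ms_ssim(x, y, weights=(0.2, 0.3, 0.5)):
    """Multi-scale SSIM: cs at the coarser scales, full l*cs at the final scale.

    Note: assumes positively correlated inputs (cs > 0), since fractional
    powers of negative values are undefined."""
    out = 1.0
    for k, w in enumerate(weights):
        l, cs = ssim_components(x, y)
        out *= (l * cs if k == len(weights) - 1 else cs) ** w
        if k < len(weights) - 1:
            x, y = downsample2(x), downsample2(y)
    return out
```

    For motion detection, consecutive frames are compared: a static frame pair scores near 1, while motion lowers the contrast/structure terms and hence the score.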

  16. Emergence of multi-scaling in fluid turbulence

    Science.gov (United States)

    Donzis, Diego; Yakhot, Victor

    2017-11-01

    We present new theoretical and numerical results on the transition to strong turbulence in an infinite fluid stirred by a Gaussian random force. The transition is defined as a first appearance of anomalous scaling of normalized moments of velocity derivatives (or dissipation rates) emerging from the low-Reynolds-number Gaussian background. It is shown that due to multi-scaling, strongly intermittent rare events can be quantitatively described in terms of an infinite number of different ``Reynolds numbers'' reflecting a multitude of anomalous scaling exponents. We found that anomalous scaling for high order moments emerges at very low Reynolds numbers, implying that intense dissipative-range fluctuations are established at even lower Reynolds number than that required for an inertial range. Thus, our results suggest that information about inertial range dynamics can be obtained from dissipative scales even when the former does not exist. We discuss our further prediction that transition to fully anomalous turbulence disappears at Rλ < 3 . Support from NSF is acknowledged.

  17. High-resolution time-frequency representation of EEG data using multi-scale wavelets

    Science.gov (United States)

    Li, Yang; Cui, Wei-Gang; Luo, Mei-Lin; Li, Ke; Wang, Lina

    2017-09-01

    An efficient time-varying autoregressive (TVAR) modelling scheme that expands the time-varying parameters onto multi-scale wavelet basis functions is presented for modelling nonstationary signals, with application to time-frequency analysis (TFA) of electroencephalogram (EEG) signals. In the new parametric modelling framework, the time-dependent parameters of the TVAR model are locally represented using a novel multi-scale wavelet decomposition scheme, which can capture smooth trends while simultaneously tracking abrupt changes in the time-varying parameters. A forward orthogonal least squares (FOLS) algorithm, aided by mutual information criteria, is then applied for sparse model term selection and parameter estimation. Two simulation examples illustrate that the proposed multi-scale wavelet basis functions outperform single-scale wavelet basis functions and the Kalman filter algorithm for many nonstationary processes. Furthermore, an application of the proposed method to a real EEG signal demonstrates that the new approach can provide highly time-dependent spectral resolution capability.
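    The core idea, expanding a time-varying AR coefficient onto a multi-scale basis and estimating the expansion coefficients by least squares, can be sketched with numpy. This sketch uses a piecewise-constant (Haar-style) basis and plain least squares rather than the paper's wavelets and FOLS, so it is only illustrative:

```python
import numpy as np

def haar_style_basis(n, levels=2):
    """Constant column plus piecewise-constant indicators at `levels` scales."""
    cols = [np.ones(n)]
    for lev in range(1, levels + 1):
        seg = n // 2**lev
        for s in range(2**lev):
            b = np.zeros(n)
            b[s * seg:(s + 1) * seg] = 1.0
            cols.append(b)
    return np.column_stack(cols)

def fit_tvar1(y, levels=2):
    """TVAR(1): y[t] = a(t) * y[t-1] + e[t].  a(t) is expanded on the basis and
    the expansion coefficients are estimated by (rank-deficient) least squares."""
    B = haar_style_basis(len(y) - 1, levels)
    X = B * y[:-1, None]              # regressor columns B_k(t) * y[t-1]
    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return B @ coef                   # estimated a(t) for t = 1 .. len(y)-1

# simulate a coefficient that jumps from 0.3 to 0.8 halfway through
rng = np.random.default_rng(1)
n = 1025
a_true = np.where(np.arange(n - 1) < (n - 1) // 2, 0.3, 0.8)
y = np.zeros(n)
for t in range(1, n):
    y[t] = a_true[t - 1] * y[t - 1] + 0.1 * rng.standard_normal()
a_hat = fit_tvar1(y)
```

    The multi-scale basis is what lets one estimated coefficient trajectory follow both the smooth segments and the abrupt jump; a single smooth basis would blur the step.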

  18. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods and algorithms developed will, however, be of wider interest.

  19. An efficient and accurate solution methodology for bilevel multi-objective programming problems using a hybrid evolutionary-local-search algorithm.

    Science.gov (United States)

    Deb, Kalyanmoy; Sinha, Ankur

    2010-01-01

    Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem solving tasks including optimal control, process optimization, game-playing strategy developments, transportation problems, and others. However, they are commonly converted into a single level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies look at the context of multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable and hybrid evolutionary-cum-local-search based algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche of evolutionary algorithms in solving such difficult problems of practical importance compared to their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming and hopefully this study will motivate EMO and other researchers to pay more attention to this important and difficult problem solving activity.
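    The nested procedure that the paper contrasts against can be illustrated on a toy single-objective bilevel problem (not one of the paper's test problems): the lower level returns y*(x) = argmin_y (y - x)^2 = x, so the upper objective x^2 + (y*(x) - 1)^2 reduces to x^2 + (x - 1)^2 with optimum x* = 0.5. A brute-force nested grid search shows why this is computationally expensive, since every upper-level evaluation triggers a full lower-level solve:

```python
def lower_level(x):
    """Inner problem: argmin_y (y - x)^2, solved here by a fine grid search."""
    return min((i / 1000 for i in range(-1000, 2001)), key=lambda y: (y - x) ** 2)

def upper_objective(x):
    y = lower_level(x)    # every upper-level evaluation nests a full lower solve
    return x ** 2 + (y - 1) ** 2

best_x = min((i / 100 for i in range(-100, 201)), key=upper_objective)
# analytic optimum of min_x x^2 + (x - 1)^2 is x* = 0.5
```

    With multiple conflicting objectives at each level, this nesting becomes far more expensive still, which is the niche the hybrid evolutionary-cum-local-search algorithm targets.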

  20. Multi-scale semi-ideal magnetohydrodynamics of a tokamak plasma

    Energy Technology Data Exchange (ETDEWEB)

    Bazdenkov, S.; Sato, Tetsuya; Watanabe, Kunihiko

    1995-09-01

    An analytical model of fast spatial flattening of the toroidal current density and q-profile at the nonlinear stage of (m = 1/n = 1) kink instability of a tokamak plasma is presented. The flattening is shown to be an essentially multi-scale phenomenon which is characterized by, at least, two magnetic Reynolds numbers. The ordinary one, R{sub m}, is related with a characteristic radial scale-length, while the other, R{sub m}{sup *}, corresponds to a characteristic scale-length of plasma inhomogeneity along the magnetic field line. In a highly conducting plasma inside the q = 1 magnetic surface, where q value does not much differ from unity, plasma evolution is governed by a multi-scale non-ideal dynamics characterized by two well-separated magnetic Reynolds numbers, R{sub m} and R{sub m}{sup *} {identical_to} (1 - q) R{sub m}, where R{sub m}{sup *} ~ O(1) and R{sub m} >> 1. This dynamics consistently explains two seemingly contradictory features recently observed in a numerical simulation [Watanabe et al., 1995]: (i) the current profile (q-profile) is flattened in the magnetohydrodynamic time scale within the q = 1 rational surface; (ii) the magnetic surface keeps its initial circular shape during this evolution. (author).

  1. Multi-Scale Simulation of High Energy Density Ionic Liquids

    National Research Council Canada - National Science Library

    Voth, Gregory A

    2007-01-01

    The focus of this AFOSR project was the molecular dynamics (MD) simulation of ionic liquid structure, dynamics, and interfacial properties, as well as multi-scale descriptions of these novel liquids (e.g...

  2. Joint Multi-scale Convolution Neural Network for Scene Classification of High Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    ZHENG Zhuo

    2018-05-01

    Full Text Available High resolution remote sensing imagery scene classification is important for automatic complex scene recognition, which is a key technology for military and disaster-relief applications, among others. In this paper, we propose a novel joint multi-scale convolution neural network (JMCNN) method using a limited amount of image data for high resolution remote sensing imagery scene classification. Different from the traditional convolutional neural network, the proposed JMCNN is an end-to-end training model with joint enhanced high-level feature representation, which includes a multi-channel feature extractor, joint multi-scale feature fusion and a Softmax classifier. First, multi-channel and multi-scale convolutional extractors are used to extract mid-level scene features. Then, in order to achieve enhanced high-level feature representation on a limited dataset, joint multi-scale feature fusion is proposed to combine multi-channel and multi-scale features using two feature fusions. Finally, the enhanced high-level feature representation is used for classification by Softmax. Experiments were conducted using two limited public datasets, UCM and SIRI. Compared to state-of-the-art methods, the JMCNN achieved improved performance and great robustness, with average accuracies of 89.3% and 88.3% on the two datasets.

  3. Multi-Level and Multi-Scale Feature Aggregation Using Pretrained Convolutional Neural Networks for Music Auto-Tagging

    Science.gov (United States)

    Lee, Jongpil; Nam, Juhan

    2017-08-01

    Music auto-tagging is often handled in a similar manner to image classification by regarding the 2D audio spectrogram as image data. However, music auto-tagging is distinguished from image classification in that the tags are highly diverse and have different levels of abstraction. Considering this issue, we propose a convolutional neural network (CNN)-based architecture that embraces multi-level and multi-scale features. The architecture is trained in three steps. First, we conduct supervised feature learning to capture local audio features using a set of CNNs with different input sizes. Second, we extract audio features from each layer of the pre-trained convolutional networks separately and aggregate them altogether given a long audio clip. Finally, we put them into fully connected networks and make final predictions of the tags. Our experiments show that using the combination of multi-level and multi-scale features is highly effective in music auto-tagging, and the proposed method outperforms the previous state of the art on the MagnaTagATune dataset and the Million Song Dataset. We further show that the proposed architecture is useful in transfer learning.

  4. Multi-scale interactions affecting transport, storage, and processing of solutes and sediments in stream corridors (Invited)

    Science.gov (United States)

    Harvey, J. W.; Packman, A. I.

    2010-12-01

    Surface water and groundwater flow interact with the channel geomorphology and sediments in ways that determine how material is transported, stored, and transformed in stream corridors. Solute and sediment transport affect important ecological processes such as carbon and nutrient dynamics and stream metabolism, processes that are fundamental to stream health and function. Many individual mechanisms of transport and storage of solute and sediment have been studied, including surface water exchange between the main channel and side pools, hyporheic flow through shallow and deep subsurface flow paths, and sediment transport during both baseflow and floods. A significant challenge arises from non-linear and scale-dependent transport resulting from natural, fractal fluvial topography and associated broad, multi-scale hydrologic interactions. Connections between processes and linkages across scales are not well understood, imposing significant limitations on system predictability. The whole-stream tracer experimental approach is popular because of the spatial averaging of heterogeneous processes; however the tracer results, implemented alone and analyzed using typical models, cannot usually predict transport beyond the very specific conditions of the experiment. Furthermore, the results of whole stream tracer experiments tend to be biased due to unavoidable limitations associated with sampling frequency, measurement sensitivity, and experiment duration. We recommend that whole-stream tracer additions be augmented with hydraulic and topographic measurements and also with additional tracer measurements made directly in storage zones. We present examples of measurements that encompass interactions across spatial and temporal scales and models that are transferable to a wide range of flow and geomorphic conditions. These results show how the competitive effects between the different forces driving hyporheic flow, operating at different spatial scales, creates a situation

  5. A Multi-scale, Multi-disciplinary Approach for Assessing the Technological, Economic, and Environmental Performance of Bio-based Chemicals

    DEFF Research Database (Denmark)

    Herrgard, Markus; Sukumara, Sumesh; Campodonico Alt, Miguel Angel

    2015-01-01

    , the Multi-scale framework for Sustainable Industrial Chemicals (MuSIC) was introduced to address this issue by integrating modelling approaches at different scales ranging from cellular to ecological scales. This framework can be further extended by incorporating modelling of the petrochemical value chain...... towards a sustainable chemical industry....

  6. K-State Problem Identification Rating Scales for College Students

    Science.gov (United States)

    Robertson, John M.; Benton, Stephen L.; Newton, Fred B.; Downey, Ronald G.; Marsh, Patricia A.; Benton, Sheryl A.; Tseng, Wen-Chih; Shin, Kang-Hyun

    2006-01-01

    The K-State Problem Identification Rating Scales, a new screening instrument for college counseling centers, gathers information about clients' presenting symptoms, functioning levels, and readiness to change. Three studies revealed 7 scales: Mood Difficulties, Learning Problems, Food Concerns, Interpersonal Conflicts, Career Uncertainties,…

  7. Comparison of single- and multi-scale models for the prediction of the Culicoides biting midge distribution in Germany

    Directory of Open Access Journals (Sweden)

    Renke Lühken

    2016-05-01

    Full Text Available This study analysed Culicoides presence-absence data from 46 sampling sites in Germany, where monitoring was carried out from April 2007 until May 2008. Culicoides presence-absence data were analysed in relation to land cover data, in order to study whether the prevalence of biting midges is correlated with the land cover around the trapping sites. We differentiated eight scales, i.e. buffer zones with radii of 0.5, 1, 2, 3, 4, 5, 7.5 and 10 km around each site, and chose several land cover variables. For each species, we built eight single-scale models (i.e. predictor variables from one of the eight scales for each model) based on averaged, generalised linear models, and two multi-scale models (i.e. predictor variables from all of the eight scales) based on averaged, generalised linear models and on generalised linear models with random forest variable selection. There were no significant differences between the performance indicators of models built with land cover data from different buffer zones around the trapping sites. However, the overall performance of the multi-scale models was higher than the alternatives. Furthermore, these models mostly achieved the best performance for the different species as measured by the area under the receiver operating characteristic curve. However, as also presented in this study, the relevance of the different variables can differ significantly between scales, in terms of both the number of species affected and the direction (positive or negative) of the effect. This problem is even more severe for multi-scale models, in which a single model can include the same variable at different scales with different directions, i.e. negative at one scale and positive at another. Nevertheless, multi-scale modelling is a promising approach to model the distribution of Culicoides species, accounting much more for the ecology of biting midges, which use different resources (breeding sites, hosts, etc. at

  8. Multi-scale modeling with cellular automata: The complex automata approach

    NARCIS (Netherlands)

    Hoekstra, A.G.; Falcone, J.-L.; Caiazzo, A.; Chopard, B.

    2008-01-01

    Cellular Automata are commonly used to describe complex natural phenomena. In many cases it is required to capture the multi-scale nature of these phenomena. A single Cellular Automata model may not be able to efficiently simulate a wide range of spatial and temporal scales. It is our goal to

  9. The operational flight and multi-crew scheduling problem

    Directory of Open Access Journals (Sweden)

    Stojković Mirela

    2005-01-01

    Full Text Available This paper introduces a new kind of operational multi-crew scheduling problem which consists in simultaneously modifying, as necessary, the existing flight departure times and planned individual work days (duties) for the set of crew members, while respecting predefined aircraft itineraries. The splitting of a planned crew is allowed during a day of operations, where it is more important to cover a flight than to keep planned crew members together. The objective is to cover a maximum number of flights from a day of operations while minimizing changes in both the flight schedule and the next-day planned duties for the considered crew members. A new type of constraint on same-flight departure times is introduced: it ensures that a flight which belongs to several personalized duties, where the number of duties equals the number of crew members assigned to the flight, has the same departure time in each of these duties. Two variants of the problem are considered. The first variant allows covering of flights by fewer than the planned number of crew members, while the second one requires covering of flights by a complete crew. The problem is mathematically formulated as an integer nonlinear multi-commodity network flow model with time windows and supplementary constraints. The optimal solution approach is based on Dantzig-Wolfe decomposition/column generation embedded into a branch-and-bound scheme. The resulting computational times on commercial-size problems are very good. Our new simultaneous approach produces solutions whose quality is far better than that of the traditional sequential approach, where the flight schedule is changed first and then input as fixed data to the crew scheduling problem.

  10. Detecting Multi-scale Structures in Chandra Images of Centaurus A

    Science.gov (United States)

    Karovska, M.; Fabbiano, G.; Elvis, M. S.; Evans, I. N.; Kim, D. W.; Prestwich, A. H.; Schwartz, D. A.; Murray, S. S.; Forman, W.; Jones, C.; Kraft, R. P.; Isobe, T.; Cui, W.; Schreier, E. J.

    1999-12-01

    Centaurus A (NGC 5128) is a giant early-type galaxy with a merger history, containing the nearest radio-bright AGN. Recent Chandra High Resolution Camera (HRC) observations of Cen A reveal X-ray multi-scale structures in this object with unprecedented detail and clarity. We show the results of an analysis of the Chandra data with smoothing and edge enhancement techniques that allow us to enhance and quantify the multi-scale structures present in the HRC images. These techniques include an adaptive smoothing algorithm (Ebeling et al 1999) and a multi-directional gradient detection algorithm (Karovska et al 1994). The Ebeling et al adaptive smoothing algorithm, which is incorporated in the CXC analysis software package, is a powerful tool for smoothing images containing complex structures at various spatial scales. The adaptively smoothed images of Centaurus A simultaneously show the high-angular-resolution bright structures at scales as small as an arcsecond and the extended faint structures as large as several arcminutes. The large scale structures suggest complex symmetry, including a component possibly associated with the inner radio lobes (as suggested by the ROSAT HRI data, Dobereiner et al 1996), and a separate component with an orthogonal symmetry that may be associated with the galaxy as a whole. The dust lane and the X-ray ridges are very clearly visible. The adaptively smoothed images and the edge-enhanced images also suggest several filamentary features, including a large filament-like structure extending as far as about 5 arcminutes to the north-west.

  11. Multi-Scale Initial Conditions For Cosmological Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Hahn, Oliver; /KIPAC, Menlo Park; Abel, Tom; /KIPAC, Menlo Park /ZAH, Heidelberg /HITS, Heidelberg

    2011-11-04

    We discuss a new algorithm to generate multi-scale initial conditions with multiple levels of refinements for cosmological 'zoom-in' simulations. The method uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). The new algorithm achieves rms relative errors of the order of 10{sup -4} for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier-space-induced interference ringing. An optional hybrid multi-grid and Fast Fourier Transform (FFT) based scheme is introduced which has identical Fourier-space behaviour as traditional approaches. Using a suite of re-simulations of a galaxy cluster halo our real-space-based approach is found to reproduce correlation functions, density profiles, key halo properties and subhalo abundances with per cent level accuracy. Finally, we generalize our approach for two-component baryon and dark-matter simulations and demonstrate that the power spectrum evolution is in excellent agreement with linear perturbation theory. For initial baryon density fields, it is suggested to use the local Lagrangian approximation in order to generate a density field for mesh-based codes that is consistent with the Lagrangian perturbation theory instead of the current practice of using the Eulerian linearly scaled densities.
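    The conventional single-level approach that the paper improves on, convolving Gaussian white noise with a transfer-function kernel via the FFT, can be sketched with numpy; the power-law spectrum here is a toy stand-in for a cosmological transfer function:

```python
import numpy as np

def gaussian_field_2d(n, spectral_index=-2.0, seed=0):
    """Convolve white noise with a transfer-function kernel in Fourier space,
    i.e. shape the noise spectrum by sqrt(P(k)) with P(k) ~ k**spectral_index."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.hypot(kx, ky)
    amp = np.zeros_like(k)
    amp[k > 0] = k[k > 0] ** (spectral_index / 2.0)  # sqrt of the power spectrum
    return np.fft.ifft2(np.fft.fft2(noise) * amp).real

field = gaussian_field_2d(64)
```

    The multi-scale algorithm of the paper replaces this single periodic FFT convolution by an adaptive real-space convolution plus a multi-grid Poisson solve, so that nested refinement regions can be generated without Fourier-space interference ringing at the coarse-fine boundaries.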

  12. Decay rate in a multi-dimensional fission problem

    Energy Technology Data Exchange (ETDEWEB)

    Brink, D M; Canto, L F

    1986-06-01

    The multi-dimensional diffusion approach of Zhang Jing Shang and Weidenmueller (1983 Phys. Rev. C28, 2190) is used to study a simplified model for induced fission. In this model it is shown that the coupling of the fission coordinate to the intrinsic degrees of freedom is equivalent to an extra friction and a mass correction in the corresponding one-dimensional problem.

  13. On the use of break quantities in multi-echelon distribution systems

    NARCIS (Netherlands)

    R. Dekker (Rommert); J.B.G. Frenk (Hans); M.J. Kleijn (Marcel); N. Piersma (Nanda); T.G. de Kok (Ton)

    1995-01-01

    textabstractIn multi-echelon distribution systems it is usually assumed that demand is only satisfied from the lowest echelon. In this paper we will consider the case where demand can be satisfied from any level in the system. However, then the problem arises of how to allocate orders from customers

  14. Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2013-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the 'A-Train' platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (MERRA), stratify the comparisons using a classification of the 'cloud scenes' from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these are data-intensive computing problems, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically 'sharded' by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will

  15. Multi-resolution Shape Analysis via Non-Euclidean Wavelets: Applications to Mesh Segmentation and Surface Alignment Problems.

    Science.gov (United States)

    Kim, Won Hwa; Chung, Moo K; Singh, Vikas

    2013-01-01

    The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.
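    The spectral construction underlying such non-Euclidean wavelets (in the spirit of Hammond et al.'s spectral graph wavelets, on which this line of work builds) can be sketched directly: psi_{s,v} = U g(s * Lambda) U^T delta_v, where L = U Lambda U^T is the graph Laplacian and g is a band-pass kernel with g(0) = 0. The path graph and kernel below are illustrative:

```python
import numpy as np

def path_laplacian(n):
    """Combinatorial Laplacian of a path graph (a simple non-uniform domain)."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def graph_wavelet(L, scale, vertex, g=lambda x: x * np.exp(-x)):
    """Spectral graph wavelet at `vertex`: psi = U g(scale * Lambda) U^T delta."""
    lam, U = np.linalg.eigh(L)
    delta = np.zeros(L.shape[0])
    delta[vertex] = 1.0
    return U @ (g(scale * lam) * (U.T @ delta))

L = path_laplacian(32)
psi_coarse = graph_wavelet(L, scale=10.0, vertex=16)  # band-pass at low frequencies
psi_fine = graph_wavelet(L, scale=0.5, vertex=16)     # band-pass at high frequencies
```

    Large scales weight low graph frequencies and give spatially spread wavelets; small scales weight high frequencies and give localized ones. That scale-indexed family of coefficients per vertex is the multi-resolution descriptor used for interest-point detection, segmentation, and alignment.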

  16. Boundary Value Problems Arising in Kalman Filtering

    Directory of Open Access Journals (Sweden)

    Sinem Ertürk

    2009-01-01

    Full Text Available The classic Kalman filtering equations for independent and correlated white noises are ordinary differential equations (deterministic or stochastic) with the respective initial conditions. Changing the noise processes by taking them to be more realistic wide band noises or delayed white noises creates challenging partial differential equations with initial and boundary conditions. In this paper, we aim to give a survey of the connection between Kalman filtering and boundary value problems, bringing it to the attention of mathematicians as well as engineers dealing with Kalman filtering and boundary value problems.

  17. Boundary Value Problems Arising in Kalman Filtering

    Directory of Open Access Journals (Sweden)

    Bashirov Agamirza

    2008-01-01

    Full Text Available The classic Kalman filtering equations for independent and correlated white noises are ordinary differential equations (deterministic or stochastic) with the respective initial conditions. Changing the noise processes by taking them to be more realistic wide band noises or delayed white noises creates challenging partial differential equations with initial and boundary conditions. In this paper, we aim to give a survey of the connection between Kalman filtering and boundary value problems, bringing it to the attention of mathematicians as well as engineers dealing with Kalman filtering and boundary value problems.

  18. Development of Multi-Scale Finite Element Analysis Codes for High Formability Sheet Metal Generation

    International Nuclear Information System (INIS)

    Nakamachi, Eiji; Kuramae, Hiroyuki; Ngoc Tam, Nguyen; Nakamura, Yasunori; Sakamoto, Hidetoshi; Morimoto, Hideo

    2007-01-01

    In this study, dynamic- and static-explicit multi-scale finite element (F.E.) codes are developed by employing the homogenization method, the crystal plasticity constitutive equation and an SEM-EBSD-measurement-based polycrystal model. These can predict the crystal morphological change and the hardening evolution at the micro level, as well as the macroscopic plastic anisotropy evolution. The codes are applied to analyze the asymmetrical rolling process, which is introduced to control the crystal texture of the sheet metal in order to generate a high formability sheet metal. They can predict the yield surface and the sheet formability by analyzing the strain-path-dependent yield and simple sheet forming processes, such as the limit dome height test and cylindrical deep drawing problems. It is shown that a shear-dominant rolling process, such as asymmetric rolling, generates "high formability" textures and eventually a high formability sheet. The texture evolution and the high formability of the newly generated sheet metal were confirmed experimentally by SEM-EBSD measurement and the LDH test. It is concluded that these explicit-type crystallographic homogenized multi-scale F.E. codes could be a comprehensive tool to predict the plasticity-induced texture evolution, anisotropy and formability through the rolling process and limit dome height test analyses.

  19. A multi-scale correlative investigation of ductile fracture

    International Nuclear Information System (INIS)

    Daly, M.; Burnett, T.L.; Pickering, E.J.; Tuck, O.C.G.; Léonard, F.; Kelley, R.; Withers, P.J.; Sherry, A.H.

    2017-01-01

    The use of novel multi-scale correlative methods, which involve the coordinated characterisation of matter across a range of length scales, is becoming of increasing value to materials scientists. Here, we describe for the first time how a multi-scale correlative approach can be used to investigate the nature of ductile fracture in metals. Specimens of a nuclear pressure vessel steel, SA508 Grade 3, are examined following ductile fracture using medium- and high-resolution 3D X-ray computed tomography (CT) analyses, and a site-specific analysis using a dual beam plasma focused ion beam scanning electron microscope (PFIB-SEM). The methods are employed sequentially to characterise damage by void nucleation and growth in one volume of interest, allowing for the imaging of voids ranging in size from less than 100 nm to over 100 μm. This enables voids initiated at carbide particles to be detected, as well as the large voids initiated at inclusions. We demonstrate that this multi-scale correlative approach is a powerful tool, which not only enhances our understanding of ductile failure through detailed characterisation of microstructure, but also provides quantitative information about the size, volume fractions and spatial distributions of voids that can be used to inform models of failure. It is found that the vast majority of large voids nucleated at MnS inclusions, and that the volume of a void varied according to the volume of its initiating inclusion raised to the power 3/2. The most severe voiding was concentrated within 500 μm of the fracture surface, but measurable damage was found to extend to a depth of at least 3 mm. Microvoids associated with carbides (carbide-initiated voids) were found to be concentrated around larger inclusion-initiated voids at depths of at least 400 μm. Methods for quantifying X-ray CT void data are discussed, and a procedure for using this data to calibrate parameters in the Gurson-Tvergaard Needleman (GTN

  20. Solving network design problems via decomposition, aggregation and approximation

    CERN Document Server

    Bärmann, Andreas

    2016-01-01

    Andreas Bärmann develops novel approaches for the solution of network design problems as they arise in various contexts of applied optimization. Taking as an example an optimal expansion of the German railway network until 2030, the author derives a tailor-made decomposition technique for multi-period network design problems. Next, he develops a general framework for the solution of network design problems via aggregation of the underlying graph structure. This approach is shown to save much computation time as compared to standard techniques. Finally, the author devises a modelling framework for the approximation of the robust counterpart under ellipsoidal uncertainty, an often-studied case in the literature. Each of these three approaches opens up a fascinating branch of research which promises a better theoretical understanding of the problem and an increasing range of solvable application settings at the same time. Contents Decomposition for Multi-Period Network Design Solving Network Design Problems via Ag...

  1. Multi-scale characterization of surface blistering morphology of helium irradiated W thin films

    International Nuclear Information System (INIS)

    Yang, J.J.; Zhu, H.L.; Wan, Q.; Peng, M.J.; Ran, G.; Tang, J.; Yang, Y.Y.; Liao, J.L.; Liu, N.

    2015-01-01

    Highlights: • Multi-scale blistering morphology of He-irradiated W film was studied. • This complex morphology was first characterized by a wavelet transform approach. - Abstract: Surface blistering morphologies of W thin films irradiated by a 30 keV He ion beam were studied quantitatively. It was found that the blistering morphology strongly depends on He fluence. For lower He fluence, the accumulation and growth of He bubbles induce intrinsic surface blisters with a mono-modal size distribution. When the He fluence is higher, the film surface morphology exhibits a multi-scale property, including two kinds of surface blisters with different characteristic sizes. In addition to the intrinsic He blisters, film/substrate interface delamination also induces large-sized surface blisters. A strategy based on a wavelet transform approach was proposed to distinguish and extract the multi-scale surface blistering morphologies. The density, lateral size and height of these different blisters were then estimated quantitatively, and the effect of He fluence on these geometrical parameters was investigated. Our method could provide a potential tool to describe irradiation-induced surface damage morphology with a multi-scale property.

  2. Multi-scale climate modelling over Southern Africa using a variable-resolution global model

    CSIR Research Space (South Africa)

    Engelbrecht, FA

    2011-12-01

    Full Text Available Multi-scale climate modelling over Southern Africa using a variable-resolution global model. FA Engelbrecht, WA Landman, CJ Engelbrecht, S Landman, MM Bopape, B Roux, JL McGregor and M Thatcher. Keywords: multi-scale climate modelling, variable-resolution atmospheric model. Introduction: Dynamic climate models have become the primary tools for the projection of future climate change, at both the global and regional scales.

  3. Enhanced intelligent water drops algorithm for multi-depot vehicle routing problem.

    Science.gov (United States)

    Ezugwu, Absalom E; Akutsah, Francis; Olusanya, Micheal O; Adewumi, Aderemi O

    2018-01-01

    The intelligent water drop algorithm is a swarm-based metaheuristic algorithm, inspired by the characteristics of water drops in a river and the environmental changes resulting from the action of the flowing river. Since its appearance as an alternative stochastic optimization method, the algorithm has found applications in solving a wide range of combinatorial and functional optimization problems. This paper presents an improved intelligent water drop algorithm for solving multi-depot vehicle routing problems. A simulated annealing algorithm was introduced into the proposed algorithm as a local search metaheuristic to prevent the intelligent water drop algorithm from getting trapped in local minima and to improve its solution quality. In addition, some potential issues associated with using simulated annealing, including high computational runtime and the exponential calculation of the probability of acceptance criteria, are investigated. The exponential calculation of the acceptance probability in simulated-annealing-based techniques is computationally expensive. Therefore, in order to maximize the performance of the intelligent water drop algorithm using simulated annealing, a better way of calculating the acceptance probability is considered. The performance of the proposed hybrid algorithm is evaluated on 33 standard test problems, and the results obtained are compared with the solutions offered by four well-known techniques from the literature. Experimental results and statistical tests show that the new method possesses outstanding performance in terms of solution quality and runtime consumed. In addition, the proposed algorithm is suitable for solving large-scale problems.
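
    The acceptance rule the abstract refers to is the standard Metropolis criterion used in simulated annealing, whose exponential is the expensive part. A hedged sketch of that baseline rule inside a generic cooling loop follows; the cooling schedule, seed and cost values are illustrative assumptions, not taken from the paper:

    ```python
    import math
    import random

    def metropolis_accept(delta, temperature, rng=random.random):
        """Standard Metropolis acceptance rule for simulated annealing.

        Improving moves (delta <= 0) are always accepted; worsening moves
        are accepted with probability exp(-delta / temperature)."""
        if delta <= 0:
            return True
        return rng() < math.exp(-delta / temperature)

    # Hypothetical cooling loop around a neighbourhood move.
    random.seed(42)
    temperature, cooling = 100.0, 0.95
    current_cost = 1000.0
    for _ in range(50):
        candidate_cost = current_cost + random.uniform(-10, 10)
        if metropolis_accept(candidate_cost - current_cost, temperature):
            current_cost = candidate_cost
        temperature *= cooling            # geometric cooling schedule
    ```

    Since `exp` is evaluated only for worsening moves, short-circuiting on improving moves already avoids part of the cost the abstract highlights.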

  4. Multi-Scale Factor Analysis of High-Dimensional Brain Signals

    KAUST Repository

    Ting, Chee-Ming; Ombao, Hernando; Salleh, Sh-Hussain

    2017-01-01

    In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive

  5. The Multi-Scale Model Approach to Thermohydrology at Yucca Mountain

    International Nuclear Information System (INIS)

    Glascoe, L; Buscheck, T A; Gansemer, J; Sun, Y

    2002-01-01

    The Multi-Scale Thermo-Hydrologic (MSTH) process model is a modeling abstraction of the thermal hydrology (TH) of the potential Yucca Mountain repository at multiple spatial scales. The MSTH model as described herein was used for the Supplemental Science and Performance Analyses (BSC, 2001) and is documented in detail in CRWMS M and O (2000) and Glascoe et al. (2002). The model has been validated against a nested grid model in Buscheck et al. (In Review). The MSTH approach is necessary for modeling thermal hydrology at Yucca Mountain for two reasons: (1) varying levels of detail are necessary at different spatial scales to capture important TH processes, and (2) a fully-coupled TH model of the repository which includes the necessary spatial detail is computationally prohibitive. The MSTH model consists of six "submodels" which are combined in a manner that reduces the complexity of modeling where appropriate. The coupling of these models allows for appropriate consideration of mountain-scale thermal hydrology along with the thermal hydrology of drift-scale discrete waste packages of varying heat load. Two stages are involved in the MSTH approach: first, the execution of submodels, and second, the assembly of submodels using the Multi-scale Thermohydrology Abstraction Code (MSTHAC). MSTHAC assembles the submodels in a five-step process culminating in the TH model output of discrete waste packages, including the mountain-scale influence.

  6. Exploring Multi-Scale Spatiotemporal Twitter User Mobility Patterns with a Visual-Analytics Approach

    Directory of Open Access Journals (Sweden)

    Junjun Yin

    2016-10-01

    Full Text Available Understanding human mobility patterns is of great importance for urban planning, traffic management, and even marketing campaigns. However, the capability of capturing detailed human movements with fine-grained spatial and temporal granularity is still limited. In this study, we extracted high-resolution mobility data from a collection of over 1.3 billion geo-located Twitter messages. In contrast to datasets with restricted access, such as mobile phone call records, which raise concerns of infringement on individual privacy, this dataset is collected from publicly accessible Twitter data streams. In this paper, we employed a visual-analytics approach to studying multi-scale spatiotemporal Twitter user mobility patterns in the contiguous United States during the year 2014. Our approach included a scalable visual-analytics framework to deliver efficiency and scalability in filtering a large volume of geo-located tweets, modeling and extracting Twitter user movements, generating space-time user trajectories, and summarizing multi-scale spatiotemporal user mobility patterns. We performed a set of statistical analyses to understand Twitter user mobility patterns across multi-level spatial scales and temporal ranges. In particular, Twitter user mobility patterns measured by the displacements and radii of gyration of individuals revealed multi-scale or multi-modal Twitter user mobility patterns. By further studying such mobility patterns in different temporal ranges, we identified both consistency and seasonal fluctuations regarding the distance decay effects in the corresponding mobility patterns. At the same time, our approach provides a geo-visualization unit with an interactive 3D virtual globe web mapping interface for exploratory geo-visual analytics of the multi-level spatiotemporal Twitter user movements.
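
    The two mobility summaries named in the abstract, displacements and the radius of gyration, are straightforward to compute from a user's trajectory. A minimal sketch, assuming planar (projected) coordinates; the paper's exact projection and filtering steps are not specified here:

    ```python
    import numpy as np

    def radius_of_gyration(points):
        """Radius of gyration of a user's visited locations.

        points: (n, 2) array of planar coordinates (e.g. projected lon/lat).
        Returns the root-mean-square distance to the centroid, a standard
        summary of how far an individual typically travels."""
        pts = np.asarray(points, dtype=float)
        centroid = pts.mean(axis=0)
        return float(np.sqrt(((pts - centroid) ** 2).sum(axis=1).mean()))

    def displacements(points):
        """Consecutive displacement lengths along a space-time trajectory."""
        pts = np.asarray(points, dtype=float)
        return np.linalg.norm(np.diff(pts, axis=0), axis=1)

    # Toy trajectory: three visits near the origin and one long trip.
    traj = [(0, 0), (1, 0), (0, 1), (10, 10)]
    rg = radius_of_gyration(traj)
    steps = displacements(traj)
    ```

    Heavy-tailed distributions of these two quantities across a population are what typically reveal the multi-modal mobility patterns the abstract describes.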

  7. Two-stage simplified swarm optimization for the redundancy allocation problem in a multi-state bridge system

    International Nuclear Information System (INIS)

    Lai, Chyh-Ming; Yeh, Wei-Chang

    2016-01-01

    The redundancy allocation problem involves configuring an optimal system structure with high reliability and low cost, either by replacing elements with more reliable ones and/or by arranging them redundantly. The multi-state bridge system is a special redundancy allocation problem and is commonly used in various engineering systems for load balancing and control. Traditional methods for the redundancy allocation problem cannot solve multi-state bridge systems efficiently because it is impossible to reduce a multi-state bridge system to series and parallel combinations. Hence, a swarm-based approach called two-stage simplified swarm optimization is proposed in this work to effectively and efficiently solve the redundancy allocation problem in a multi-state bridge system. To validate the proposed method, two experiments are implemented. The computational results indicate the advantages of the proposed method in terms of solution quality and computational efficiency. - Highlights: • Propose two-stage SSO (SSO_T_S) to deal with RAP in multi-state bridge system. • Dynamic upper bound enhances the efficiency of searching near-optimal solution. • Vector-update stages reduce the problem dimensions. • Statistical results indicate SSO_T_S is robust both in solution quality and runtime.
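
    The point that a bridge topology admits no series-parallel reduction is why exact evaluation requires enumeration (and why metaheuristics such as SSO are attractive for the harder multi-state version). For the binary special case, exact reliability of the classic five-component bridge can be computed by state enumeration over its minimal path sets; this is a sketch of the standard construction, not the paper's multi-state method:

    ```python
    from itertools import product

    # Minimal path sets of the five-component bridge network:
    # 1-4 (top), 2-5 (bottom), plus the two paths through bridge element 3.
    PATHS = [{1, 4}, {2, 5}, {1, 3, 5}, {2, 3, 4}]

    def bridge_reliability(p):
        """Exact reliability of a binary bridge system by state enumeration.

        p: dict mapping component id (1..5) to its success probability.
        Enumerates all 2^5 component states; the system is up when at
        least one minimal path set has all its components working."""
        total = 0.0
        for state in product([0, 1], repeat=5):
            up = {i + 1 for i, s in enumerate(state) if s}
            if any(path <= up for path in PATHS):
                prob = 1.0
                for i, s in enumerate(state):
                    prob *= p[i + 1] if s else 1 - p[i + 1]
                total += prob
        return total

    # With identical components at p = 0.9, the closed form
    # 2p^2 + 2p^3 - 5p^4 + 2p^5 gives 0.97848.
    r = bridge_reliability({i: 0.9 for i in range((1), 6)})
    ```

    Enumeration scales as 2^n per state vector, which quickly becomes prohibitive for multi-state components, matching the abstract's motivation for a swarm-based method.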

  8. Existence of solutions to boundary value problems arising from the fractional advection dispersion equation

    Directory of Open Access Journals (Sweden)

    Lingju Kong

    2013-04-01

    Full Text Available We study the existence of multiple solutions to the boundary value problem $$\displaylines{ \frac{d}{dt}\Big(\frac12\,{}_0D_t^{-\eta}(u'(t)) + \frac12\,{}_tD_T^{-\eta}(u'(t))\Big) + \lambda\nabla F(t,u(t)) = 0,\quad t\in [0,T],\cr u(0)=u(T)=0, }$$ where $T>0$, $\lambda>0$ is a parameter, $0\leq\eta<1$, ${}_0D_t^{-\eta}$ and ${}_tD_T^{-\eta}$ are, respectively, the left and right Riemann-Liouville fractional integrals of order $\eta$, and $F: [0,T]\times\mathbb{R}^N\to\mathbb{R}$ is a given function. Our interest in the above system arises from studying the steady fractional advection dispersion equation. By applying variational methods, we obtain sufficient conditions under which the above equation has at least three solutions. Our results are new even for the special case when $\eta=0$. Examples are provided to illustrate the applicability of our results.

  9. Provisional-Ideal-Point-Based Multi-objective Optimization Method for Drone Delivery Problem

    Science.gov (United States)

    Omagari, Hiroki; Higashino, Shin-Ichiro

    2018-04-01

    In this paper, we proposed a new evolutionary multi-objective optimization method for solving drone delivery problems (DDP), which can be formulated as constrained multi-objective optimization problems. In our previous research, we proposed the "aspiration-point-based method" to solve multi-objective optimization problems. However, this method needs to calculate the optimal value of each objective function in advance. Moreover, it does not consider constraint conditions other than the objective functions. Therefore, it cannot be applied to the DDP, which has many constraint conditions. To solve these issues, we proposed the "provisional-ideal-point-based method." The proposed method defines a "penalty value" to search for feasible solutions. It also defines a new reference solution named the "provisional-ideal point" to search for the solution preferred by a decision maker. In this way, we can eliminate the preliminary calculations and broaden the limited application scope. The results on benchmark test problems show that the proposed method can generate the preferred solution efficiently. The usefulness of the proposed method is also demonstrated by applying it to the DDP. As a result, the delivery path combining one drone and one truck drastically reduces the traveling distance and the delivery time compared with the case of using only one truck.

  10. Multi-objective genetic algorithm for solving N-version program design problem

    Energy Technology Data Exchange (ETDEWEB)

    Yamachi, Hidemi [Department of Computer and Information Engineering, Nippon Institute of Technology, Miyashiro, Saitama 345-8501 (Japan) and Department of Production and Information Systems Engineering, Tokyo Metropolitan Institute of Technology, Hino, Tokyo 191-0065 (Japan)]. E-mail: yamachi@nit.ac.jp; Tsujimura, Yasuhiro [Department of Computer and Information Engineering, Nippon Institute of Technology, Miyashiro, Saitama 345-8501 (Japan)]. E-mail: tujimr@nit.ac.jp; Kambayashi, Yasushi [Department of Computer and Information Engineering, Nippon Institute of Technology, Miyashiro, Saitama 345-8501 (Japan)]. E-mail: yasushi@nit.ac.jp; Yamamoto, Hisashi [Department of Production and Information Systems Engineering, Tokyo Metropolitan Institute of Technology, Hino, Tokyo 191-0065 (Japan)]. E-mail: yamamoto@cc.tmit.ac.jp

    2006-09-15

    N-version programming (NVP) is a programming approach for constructing fault-tolerant software systems. Generally, an optimization model utilized in NVP selects the optimal set of versions for each module so as to maximize the system reliability while constraining the total cost to remain within a given budget. In such a model, while the number of versions included in the obtained solution is generally reduced, the budget restriction may be so rigid that the model fails to find the optimal solution. In order to ameliorate this problem, this paper proposes a novel bi-objective optimization model that maximizes the system reliability and minimizes the system total cost for designing N-version software systems. When solving a multi-objective optimization problem, it is crucial to find Pareto solutions; they are, however, not easy to obtain. In this paper, we propose a novel bi-objective optimization model that obtains many Pareto solutions efficiently. We formulate the optimal design problem of NVP as a bi-objective 0-1 nonlinear integer programming problem. In order to solve this problem, we propose a multi-objective genetic algorithm (MOGA), which is a powerful, though time-consuming, method for solving multi-objective optimization problems. When implementing a genetic algorithm (GA), the use of an appropriate genetic representation scheme is one of the most important issues for obtaining good performance. We employ random-key representation in our MOGA to find many Pareto solutions spaced as evenly as possible along the Pareto frontier. To further improve performance, we introduce elitism and the Pareto-insertion and Pareto-deletion operations, based on the distance between Pareto solutions, into the selection process. The proposed MOGA obtains many Pareto solutions evenly along the Pareto frontier. The user of the MOGA can select the best compromise solution among the candidates by controlling the balance between the system reliability and the total cost.
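
    Two of the ingredients named above, random-key decoding and Pareto dominance, can be sketched compactly. The module counts and the objective points below are illustrative assumptions, not the paper's data:

    ```python
    import numpy as np

    def decode_random_keys(keys, n_versions_per_module):
        """Decode a random-key chromosome into a version-selection vector.

        Each gene is a float in [0, 1); scaling by the number of available
        versions for its module gives the index of the chosen version, so
        any real-valued chromosome decodes to a feasible selection."""
        return [int(k * n) for k, n in zip(keys, n_versions_per_module)]

    def dominates(a, b):
        """Pareto dominance for maximization objectives, e.g.
        (reliability, -cost): a dominates b if it is no worse in every
        objective and strictly better in at least one."""
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    # Hypothetical: 3 modules with 4, 3 and 5 candidate versions.
    keys = np.random.default_rng(0).random(3)
    selection = decode_random_keys(keys, [4, 3, 5])

    # Keep only non-dominated (reliability, -cost) points.
    points = [(0.95, -120), (0.90, -80), (0.93, -130)]
    front = [p for p in points
             if not any(dominates(q, p) for q in points if q != p)]
    ```

    Random keys keep crossover simple (any blend of keys is still decodable), which is one reason the representation suits this bi-objective 0-1 problem.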

  11. Multi-objective genetic algorithm for solving N-version program design problem

    International Nuclear Information System (INIS)

    Yamachi, Hidemi; Tsujimura, Yasuhiro; Kambayashi, Yasushi; Yamamoto, Hisashi

    2006-01-01

    N-version programming (NVP) is a programming approach for constructing fault-tolerant software systems. Generally, an optimization model utilized in NVP selects the optimal set of versions for each module so as to maximize the system reliability while constraining the total cost to remain within a given budget. In such a model, while the number of versions included in the obtained solution is generally reduced, the budget restriction may be so rigid that the model fails to find the optimal solution. In order to ameliorate this problem, this paper proposes a novel bi-objective optimization model that maximizes the system reliability and minimizes the system total cost for designing N-version software systems. When solving a multi-objective optimization problem, it is crucial to find Pareto solutions; they are, however, not easy to obtain. In this paper, we propose a novel bi-objective optimization model that obtains many Pareto solutions efficiently. We formulate the optimal design problem of NVP as a bi-objective 0-1 nonlinear integer programming problem. In order to solve this problem, we propose a multi-objective genetic algorithm (MOGA), which is a powerful, though time-consuming, method for solving multi-objective optimization problems. When implementing a genetic algorithm (GA), the use of an appropriate genetic representation scheme is one of the most important issues for obtaining good performance. We employ random-key representation in our MOGA to find many Pareto solutions spaced as evenly as possible along the Pareto frontier. To further improve performance, we introduce elitism and the Pareto-insertion and Pareto-deletion operations, based on the distance between Pareto solutions, into the selection process. The proposed MOGA obtains many Pareto solutions evenly along the Pareto frontier. The user of the MOGA can select the best compromise solution among the candidates by controlling the balance between the system reliability and the total cost.

  12. QUANTITY DISCOUNTS IN SUPPLIER SELECTION PROBLEM BY USE OF FUZZY MULTI-CRITERIA PROGRAMMING

    Directory of Open Access Journals (Sweden)

    Tunjo Perić

    2011-02-01

    Full Text Available Supplier selection in a supply chain is a multi-criteria problem that involves a number of quantitative and qualitative factors. This paper deals with a concrete problem of flour purchase by a company that manufactures bakery products, where the purchasing price of flour depends on the quantity ordered. The criteria for selecting suppliers and the quantities supplied by individual suppliers are: purchase costs, product quality and reliability of suppliers. The problem is solved using a model that combines the revised weighting method and fuzzy multi-criteria linear programming (FMCLP). The paper highlights the efficiency of the proposed methodology in conditions where purchasing prices depend on order quantities.
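
    The paper's FMCLP model is not reproduced in the abstract; the toy sketch below only illustrates the core coupling of quantity-discount price tiers with weighted criteria scoring. All names, tiers and weights are hypothetical, and the scalarisation stands in for the revised weighting method rather than reproducing it:

    ```python
    def unit_price(supplier, qty):
        """Price per unit under a quantity-discount schedule.

        Tiers are (min_qty, price) pairs sorted by descending threshold;
        the last tier has threshold 0, so the loop always returns."""
        for threshold, price in supplier["tiers"]:
            if qty >= threshold:
                return price

    def weighted_score(supplier, qty, weights):
        """Crude scalarisation of cost, quality and reliability.
        Lower cost is better, so it enters with a negative sign."""
        cost = unit_price(supplier, qty) * qty
        return (-weights["cost"] * cost
                + weights["quality"] * supplier["quality"]
                + weights["reliability"] * supplier["reliability"])

    suppliers = [
        {"name": "A", "tiers": [(1000, 0.48), (500, 0.50), (0, 0.55)],
         "quality": 0.9, "reliability": 0.8},
        {"name": "B", "tiers": [(1000, 0.45), (500, 0.52), (0, 0.56)],
         "quality": 0.7, "reliability": 0.9},
    ]
    weights = {"cost": 0.001, "quality": 10, "reliability": 10}
    best = max(suppliers, key=lambda s: weighted_score(s, 1200, weights))
    ```

    Because price tiers make total cost piecewise linear in the order quantity, the full problem is non-convex, which is exactly why the paper turns to fuzzy multi-criteria linear programming rather than a single weighted score.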

  13. A hybrid metaheuristic algorithm for the multi-depot covering tour vehicle routing problem

    NARCIS (Netherlands)

    Allahyari, S.; Salari, M.; Vigo, D.

    2015-01-01

    We propose a generalization of the multi-depot capacitated vehicle routing problem where the assumption of visiting each customer does not hold. In this problem, called the Multi-Depot Covering Tour Vehicle Routing Problem (MDCTVRP), the demand of each customer could be satisfied in two different

  14. Multi-level discriminative dictionary learning with application to large scale image classification.

    Science.gov (United States)

    Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua

    2015-10-01

    The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of the task (such as discrimination for classification tasks) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computational complexity when dealing with a large number of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture the information of different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.

  15. Large-Scale Multi-Resolution Representations for Accurate Interactive Image and Volume Operations

    KAUST Repository

    Sicat, Ronell B.

    2015-11-25

    The resolutions of acquired image and volume data are ever increasing. However, the resolutions of commodity display devices remain limited. This leads to an increasing gap between data and display resolutions. To bridge this gap, the standard approach is to employ output-sensitive operations on multi-resolution data representations. Output-sensitive operations facilitate interactive applications since their required computations are proportional only to the size of the data that is visible, i.e., the output, and not the full size of the input. Multi-resolution representations, such as image mipmaps, and volume octrees, are crucial in providing these operations direct access to any subset of the data at any resolution corresponding to the output. Despite its widespread use, this standard approach has some shortcomings in three important application areas, namely non-linear image operations, multi-resolution volume rendering, and large-scale image exploration. This dissertation presents new multi-resolution representations for large-scale images and volumes that address these shortcomings. Standard multi-resolution representations require low-pass pre-filtering for anti-aliasing. However, linear pre-filters do not commute with non-linear operations. This becomes problematic when applying non-linear operations directly to any coarse resolution levels in standard representations. Particularly, this leads to inaccurate output when applying non-linear image operations, e.g., color mapping and detail-aware filters, to multi-resolution images. Similarly, in multi-resolution volume rendering, this leads to inconsistency artifacts which manifest as erroneous differences in rendering outputs across resolution levels. To address these issues, we introduce the sparse pdf maps and sparse pdf volumes representations for large-scale images and volumes, respectively. These representations sparsely encode continuous probability density functions (pdfs) of multi-resolution pixel

  16. Three formulations of the multi-type capacitated facility location problem

    DEFF Research Database (Denmark)

    Klose, Andreas

    The "multi-type" or "modular" capacitated facility location problem is a discrete location model that addresses non-convex piecewise linear production costs as, for instance, staircase cost functions. The literature basically distinguishes three different ways to formulate non-convex piecewise...

  17. Systematic approximation of multi-scale Feynman integrals arXiv

    CERN Document Server

    Borowka, Sophia; Hulme, Daniel

    An algorithm for the systematic analytical approximation of multi-scale Feynman integrals is presented. The algorithm produces algebraic expressions as functions of the kinematical parameters and mass scales appearing in the Feynman integrals, allowing for fast numerical evaluation. The results are valid in all kinematical regions, both above and below thresholds, up to, in principle, arbitrary orders in the dimensional regulator. The scope of the algorithm is demonstrated by presenting results for selected two-loop three-point and four-point integrals with an internal mass scale that appear in the two-loop amplitudes for Higgs+jet production.

  18. Multi(scale)gravity: a telescope for the micro-world

    International Nuclear Information System (INIS)

    Kogan, I.I.

    2001-01-01

    A short review of the modern status of multi-gravity, i.e. the modification of gravity at both short and large distances, is given. Usually, embedding the standard model and general relativity into any multidimensional construction gives rise to all possible sorts of new effects in the micro-world, but we can also get a very drastic modification of the laws of gravity at ultra-large scales. One reason why multi-gravity can modify the CMB (cosmic microwave background) is that it leads to a large-distance modification of the curvature. One very striking feature of multi-gravity is that it provides some sort of dark matter, whose origin is simply matter from other branes. The author shows that, in a 5-dimensional case and at large distances, multi-gravity opens a window into extra dimensions, and matter localized on other branes can be felt gravitationally. (A.C.)

  19. Multi-scale graphene patterns on arbitrary substrates via laser-assisted transfer-printing process

    KAUST Repository

    Park, J. B.; Yoo, J.-H.; Grigoropoulos, C. P.

    2012-01-01

    A laser-assisted transfer-printing process is developed for multi-scale graphene patterns on arbitrary substrates using femtosecond laser scanning on a graphene/metal substrate and transfer techniques without using multi-step patterning processes

  20. Simultaneous approximation in scales of Banach spaces

    International Nuclear Information System (INIS)

    Bramble, J.H.; Scott, R.

    1978-01-01

    The problem of verifying optimal approximation simultaneously in different norms in a Banach scale is reduced to verification of optimal approximation in the highest order norm. The basic tool used is the Banach space interpolation method developed by Lions and Peetre. Applications are given to several problems arising in the theory of finite element methods

  1. INFLUENCE OF NANOFILTRATION PRETREATMENT ON SCALE DEPOSITION IN MULTI-STAGE FLASH THERMAL DESALINATION PLANTS

    Directory of Open Access Journals (Sweden)

    Aiman E Al-Rawajfeh

    2011-01-01

    Full Text Available Scale formation represents a major operational problem encountered in thermal desalination plants. In currently installed plants, and to allow for a reasonable safety margin, sulfate scale deposition limits the top brine temperature (TBT) in multi-stage flash (MSF) distillers to 110-112°C. This has a significant effect on the unit capital, operational and water production cost. In this work, the influence of nanofiltration (NF) pretreatment on the scale deposition potential and on raising the TBT in MSF thermal desalination plants is modeled on the basis of mass transfer with chemical reaction of solutes in the brine. Full and partial NF pretreatment of the feed water were investigated. The TBT can be increased in MSF by increasing the percentage of NF-treated feed. Full NF pretreatment of the make-up allows the TBT in the MSF plant to be raised up to 175°C in the case of the di-hybrid NF-MSF and up to 165°C in the case of the tri-hybrid NF-RO-MSF. The significant scale reduction is associated with an increasing flashing range, unit recovery and unit performance, and will lead to a reduction in heat transfer surface area, pumping power and, therefore, water production cost.

  2. Intuitionistic Fuzzy Goal Programming Technique for Solving Non-Linear Multi-objective Structural Problem

    Directory of Open Access Journals (Sweden)

    Samir Dey

    2015-07-01

    Full Text Available This paper proposes a new multi-objective intuitionistic fuzzy goal programming approach to solve a multi-objective nonlinear programming problem in the context of structural design. Here we describe some basic properties of intuitionistic fuzzy optimization. We have considered a multi-objective structural optimization problem with several mutually conflicting objectives. The design objective is to minimize the weight of the structure and to minimize the vertical deflection at the loading point of a statically loaded three-bar planar truss subjected to stress constraints on each of the truss members. This approach, based on the arithmetic mean, is used to solve the above structural optimization model, and the result is compared with the solution obtained by the intuitionistic fuzzy goal programming approach. A numerical solution is given to illustrate our approach.

  3. Preparation of a large-scale and multi-layer molybdenum crystal and its characteristics

    International Nuclear Information System (INIS)

    Fujii, Tadayuki

    1989-01-01

    In the present work, the secondary recrystallization method was applied to obtain a large-scale, multi-layer crystal from a hot-rolled multi-laminated molybdenum sheet doped and stacked alternately with different amounts of dopant. It was found that the time and/or temperature at which secondary recrystallization commences in the multi-layer sheet is strongly dependent on the amount of dopant. Therefore, the potential nucleus of the secondary grain from layers with different amounts of dopant occurred first in the layer with a small amount of dopant and then grew into the layer with a large amount of dopant after an anneal at 1800°C-2000°C. Consequently, a large-scale, multi-layer molybdenum crystal can easily be obtained. 12 refs., 9 figs., 2 tabs. (Author)

  4. Toward multi-scale simulation of reconnection phenomena in space plasma

    Science.gov (United States)

    Den, M.; Horiuchi, R.; Usami, S.; Tanaka, T.; Ogawa, T.; Ohtani, H.

    2013-12-01

    Magnetic reconnection is considered to play an important role in space phenomena such as substorms in the Earth's magnetosphere. It is well known that magnetic reconnection is controlled by a microscopic kinetic mechanism. The frozen-in condition is broken due to particle kinetic effects, and collisionless reconnection is triggered when the current sheet is compressed down to ion kinetic scales under the influence of external driving flow. On the other hand, the configuration of the magnetic field leading to the formation of the diffusion region is determined on macroscopic scales, and the topological change after reconnection is also expressed on macroscopic scales. Thus magnetic reconnection is a typical multi-scale phenomenon in which microscopic and macroscopic physics are strongly coupled. Recently Horiuchi et al. developed an effective resistivity model based on particle-in-cell (PIC) simulation results obtained in the study of collisionless driven reconnection and applied it to a global magnetohydrodynamics (MHD) simulation of a substorm in the Earth's magnetosphere. They showed the reproduction of global substorm behavior, such as dipolarization and flux rope formation, by global three-dimensional MHD simulation. Usami et al. developed a multi-hierarchy simulation model in which macroscopic and microscopic physics are solved self-consistently and simultaneously. Based on the domain decomposition method, this model consists of three parts: an MHD algorithm for macroscopic global dynamics, a PIC algorithm for microscopic kinetic physics, and an interface algorithm to interlock the macro and micro hierarchies. They verified the interface algorithm by simulation of plasma injection flow. In their latest work, this model was applied to collisionless reconnection in an open system and magnetic reconnection was successfully observed. In this paper, we describe our approach to clarifying multi-scale phenomena and report the current status. Our recent study on the extension of the MHD domain to a global system is presented.

  5. The adaptive value of habitat preferences from a multi-scale spatial perspective: insights from marsh-nesting avian species

    Directory of Open Access Journals (Sweden)

    Jan Jedlikowski

    2017-03-01

    Full Text Available Background Habitat selection and its adaptive outcomes are crucial features for animal life-history strategies. Nevertheless, congruence between habitat preferences and breeding success has been rarely demonstrated, which may result from the single-scale evaluation of animal choices. As habitat selection is a complex multi-scale process in many groups of animal species, investigating the adaptiveness of habitat selection in a multi-scale framework is crucial. In this study, we explore whether habitat preferences acting at different spatial scales enhance the fitness of bird species, and check the appropriateness of single- vs. multi-scale models. We expected that variables found to be more important for habitat selection at individual scale(s) would coherently play a major role in affecting nest survival at the same scale(s). Methods We considered habitat preferences of two Rallidae species, little crake (Zapornia parva) and water rail (Rallus aquaticus), at three spatial scales (landscape, territory, and nest-site) and related them to nest survival. Single-scale versus multi-scale models (GLS and glmmPQL) were compared to check which model better described the adaptiveness of habitat preferences. Consistency between the effect of variables on habitat selection and on nest survival was checked to investigate their adaptive value. Results In both species, multi-scale models for nest survival were more supported than single-scale ones. In little crake, the multi-scale model indicated vegetation density and water depth at the territory scale, as well as vegetation height at the nest-site scale, as the most important variables. The first two variables were among the most important for nest survival and habitat selection, and the coherent effects suggested the adaptive value of habitat preferences. In water rail, the multi-scale model of nest survival showed vegetation density at territory scale and extent of emergent vegetation within landscape scale as the most

  6. Distributed consensus with visual perception in multi-robot systems

    CERN Document Server

    Montijano, Eduardo

    2015-01-01

    This monograph introduces novel responses to the different problems that arise when multiple robots need to execute a task in cooperation, each robot in the team having a monocular camera as its primary input sensor. Its central proposition is that a consistent perception of the world is crucial for the good development of any multi-robot application. The text focuses on the high-level problem of cooperative perception by a multi-robot system: the idea that, depending on what each robot sees and its current situation, it will need to communicate these things to its fellows whenever possible to share what it has found and keep updated by them in its turn. However, in any realistic scenario, distributed solutions to this problem are not trivial and need to be addressed from as many angles as possible. Distributed Consensus with Visual Perception in Multi-Robot Systems covers a variety of related topics such as: ·         distributed consensus algorithms; ·         data association and robustne...

  7. A heuristic algorithm for a multi-product four-layer capacitated location-routing problem

    Directory of Open Access Journals (Sweden)

    Mohsen Hamidi

    2014-01-01

    Full Text Available The purpose of this study is to solve a complex multi-product four-layer capacitated location-routing problem (LRP) in which two specific constraints are taken into account: (1) plants have limited production capacity, and (2) central depots have limited capacity for storing and transshipping products. The LRP represents a multi-product four-layer distribution network that consists of plants, central depots, regional depots, and customers. A heuristic algorithm is developed to solve the four-layer LRP. The heuristic uses GRASP (Greedy Randomized Adaptive Search Procedure) and two probabilistic tabu search strategies of intensification and diversification to tackle the problem. Results show that the heuristic solves the problem effectively.
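    The GRASP-plus-local-search pattern the abstract describes can be illustrated on a much smaller location problem. The sketch below (our own toy p-median example, not the authors' algorithm or data) shows the two ingredients named in the abstract: a greedy randomized construction driven by a restricted candidate list (RCL), and an improvement phase, here a plain swap-based local search standing in for the paper's probabilistic tabu strategies.

```python
import random

def grasp_p_median(dist, p, iters=50, alpha=0.3, seed=0):
    """Toy GRASP for the p-median problem: choose p facility sites
    minimizing the total distance from every node to its nearest site.
    Illustrative only -- the paper's heuristic works on a four-layer
    LRP and adds probabilistic tabu search on top."""
    rng = random.Random(seed)
    n = len(dist)

    def cost(sites):
        return sum(min(dist[c][s] for s in sites) for c in range(n))

    best, best_cost = None, float("inf")
    for _ in range(iters):
        # Greedy randomized construction: pick uniformly from the
        # restricted candidate list (RCL) of near-best sites.
        sites = []
        while len(sites) < p:
            cand = sorted((cost(sites + [s]), s)
                          for s in range(n) if s not in sites)
            cmin, cmax = cand[0][0], cand[-1][0]
            rcl = [s for c, s in cand if c <= cmin + alpha * (cmax - cmin)]
            sites.append(rng.choice(rcl))
        # Improvement phase: first-improvement swap local search.
        improved = True
        while improved:
            improved = False
            for i in range(p):
                for s_in in range(n):
                    if s_in in sites:
                        continue
                    trial = sites[:i] + [s_in] + sites[i + 1:]
                    if cost(trial) < cost(sites):
                        sites, improved = trial, True
        if cost(sites) < best_cost:
            best, best_cost = sorted(sites), cost(sites)
    return best, best_cost
```

    With `alpha = 0` the construction is purely greedy and with `alpha = 1` purely random, so `alpha` controls the diversification that the paper instead obtains through its tabu search strategies.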

  8. A practical scale for Multi-Faceted Organizational Health Climate Assessment.

    Science.gov (United States)

    Zweber, Zandra M; Henning, Robert A; Magley, Vicki J

    2016-04-01

    The current study sought to develop a practical scale to measure 3 facets of workplace health climate from the employee perspective as an important component of a healthy organization. The goal was to create a short, usable yet comprehensive scale that organizations and occupational health professionals could use to determine if workplace health interventions were needed. The proposed Multi-faceted Organizational Health Climate Assessment (MOHCA) scale assesses facets that correspond to 3 organizational levels: (a) workgroup, (b) supervisor, and (c) organization. Ten items were developed and tested on 2 distinct samples, 1 cross-organization and 1 within-organization. Exploratory and confirmatory factor analyses yielded a 9-item, hierarchical 3-factor structure. Tests confirmed MOHCA has convergent validity with related constructs, such as perceived organizational support and supervisor support, as well as discriminant validity with safety climate. Lastly, criterion-related validity was found between MOHCA and health-related outcomes. The multi-faceted nature of MOHCA provides a scale that has face validity and can be easily translated into practice, offering a means for diagnosing the shortcomings of an organization or workgroup's health climate to better plan health and well-being interventions. (c) 2016 APA, all rights reserved.

  9. Fenomena Kerak Dalam Desalinasi Dengan Multi Stage Flash Distillation (Msf)

    OpenAIRE

    Alimah, Siti

    2006-01-01

    SCALING PHENOMENA IN DESALINATION WITH MULTI STAGE FLASH DISTILLATION (MSF). An assessment of scaling phenomena in MSF desalination has been carried out. Scale is one of the predominant problems in multi-stage flash (MSF) desalination installations. The main types of scale in MSF are calcium carbonate (CaCO3), magnesium hydroxide (Mg(OH)2) and calcium sulphate (CaSO4). CaCO3 and Mg(OH)2 scales result from the thermal decomposition of the bicarbonate ion, whereas calcium sulphate scale results from reactio...

  10. 3D deblending of simultaneous source data based on 3D multi-scale shaping operator

    Science.gov (United States)

    Zu, Shaohuan; Zhou, Hui; Mao, Weijian; Gong, Fei; Huang, Weilin

    2018-04-01

    We propose an iterative three-dimensional (3D) deblending scheme using 3D multi-scale shaping operator to separate 3D simultaneous source data. The proposed scheme is based on the property that signal is coherent, whereas interference is incoherent in some domains, e.g., common receiver domain and common midpoint domain. In two-dimensional (2D) blended record, the coherency difference of signal and interference is in only one spatial direction. Compared with 2D deblending, the 3D deblending can take more sparse constraints into consideration to obtain better performance, e.g., in 3D common receiver gather, the coherency difference is in two spatial directions. Furthermore, with different levels of coherency, signal and interference distribute in different scale curvelet domains. In both 2D and 3D blended records, most coherent signal locates in coarse scale curvelet domain, while most incoherent interference distributes in fine scale curvelet domain. The scale difference is larger in 3D deblending, thus, we apply the multi-scale shaping scheme to further improve the 3D deblending performance. We evaluate the performance of 3D and 2D deblending with the multi-scale and global shaping operators, respectively. One synthetic and one field data examples demonstrate the advantage of the 3D deblending with 3D multi-scale shaping operator.
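    The shaping-regularization iteration underlying such deblending schemes can be sketched in a few lines. In the toy version below (our construction, not the authors' code), the multi-scale curvelet shaping operator is replaced by hard thresholding of FFT coefficients, and the blending operator `gamma` defaults to the identity; with a realistic blending operator the repeated residual updates are what progressively separate the interfering sources.

```python
import numpy as np

def deblend_shaping(blended, gamma=lambda d: d, n_iter=30, frac=0.1):
    """Iterative deblending sketch based on shaping regularization:
        est <- T( est + lam * (blended - gamma(est)) )
    where T is a sparsity-promoting 'shaping' operator. The paper uses
    a 3D multi-scale curvelet-domain operator; as a stand-in (our
    assumption, for brevity) T here keeps the strongest `frac` fraction
    of FFT coefficients, and gamma (the blending operator) defaults to
    the identity, in which case the iteration converges in one step."""
    est = np.zeros_like(blended, dtype=float)
    lam = 1.0
    for _ in range(n_iter):
        est = est + lam * (blended - gamma(est))      # data-consistency step
        coef = np.fft.fft(est)                        # to the sparse domain
        keep = int(np.ceil(frac * coef.size))
        coef[np.argsort(np.abs(coef))[:-keep]] = 0.0  # hard threshold
        est = np.fft.ifft(coef).real                  # back to data domain
    return est
```

    A coherent signal concentrated in a few transform coefficients survives the threshold, while incoherent interference spread across many coefficients is suppressed, which is the coherency argument made in the abstract.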

  11. Comparative Study of Evolutionary Multi-objective Optimization Algorithms for a Non-linear Greenhouse Climate Control Problem

    DEFF Research Database (Denmark)

    Ghoreishi, Newsha; Sørensen, Jan Corfixen; Jørgensen, Bo Nørregaard

    2015-01-01

    Non-trivial real-world decision-making processes usually involve multiple parties having potentially conflicting interests over a set of issues. State-of-the-art multi-objective evolutionary algorithms (MOEA) are well known to solve this class of complex real-world problems. In this paper, we compare the performance of state-of-the-art multi-objective evolutionary algorithms in solving a non-linear multi-objective multi-issue optimisation problem found in greenhouse climate control. The chosen algorithms in the study include NSGAII, eNSGAII, eMOEA, PAES, PESAII and SPEAII. The performance of all aforementioned algorithms is assessed and compared using performance indicators to evaluate proximity, diversity and consistency. The insights from this comparative study enhanced our understanding of MOEA performance in solving a non-linear complex climate control problem. The empirical...

  12. Nonsolvent-assisted fabrication of multi-scaled polylactide as superhydrophobic surfaces.

    Science.gov (United States)

    Chang, Yafang; Liu, Xuying; Yang, Huige; Zhang, Li; Cui, Zhe; Niu, Mingjun; Liu, Hongzhi; Chen, Jinzhou

    2016-03-14

    The solution-processed fabrication of superhydrophobic surfaces is currently intriguing owing to its high efficiency, low cost, and low energy consumption. Here, a facile nonsolvent-assisted process was proposed for the fabrication of multi-scaled surface roughness in polylactide (PLA) films, resulting in a significant transformation of the surface wettability from intrinsic hydrophilicity to superhydrophobicity. Moreover, it was found that the surface topographical structure of the PLA films can be manipulated by varying the compositions of the PLA solutions, and the samples showed superhydrophobic surfaces as well as high melting enthalpy and crystallinity. In particular, a high contact angle of 155.8° together with a high adhesive force of 184 μN was obtained with the assistance of a multi-nonsolvent system, which contributed to the co-existence of micro-/nano-scale hierarchical structures.

  13. Multi-period multi-objective electricity generation expansion planning problem with Monte-Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Tekiner, Hatice [Industrial Engineering, College of Engineering and Natural Sciences, Istanbul Sehir University, 2 Ahmet Bayman Rd, Istanbul (Turkey); Coit, David W. [Department of Industrial and Systems Engineering, Rutgers University, 96 Frelinghuysen Rd., Piscataway, NJ (United States); Felder, Frank A. [Edward J. Bloustein School of Planning and Public Policy, Rutgers University, Piscataway, NJ (United States)

    2010-12-15

    A new approach to the electricity generation expansion problem is proposed to minimize simultaneously multiple objectives, such as cost and air emissions, including CO2 and NOx, over a long term planning horizon. In this problem, system expansion decisions are made to select the type of power generation, such as coal, nuclear, wind, etc., where the new generation asset should be located, and at which time period expansion should take place. We are able to find a Pareto front for the multi-objective generation expansion planning problem that explicitly considers availability of the system components over the planning horizon and operational dispatching decisions. Monte-Carlo simulation is used to generate numerous scenarios based on the component availabilities and anticipated demand for energy. The problem is then formulated as a mixed integer linear program, and optimal solutions are found based on the simulated scenarios with a combined objective function considering the multiple problem objectives. The different objectives are combined using dimensionless weights and a Pareto front can be determined by varying these weights. The mathematical model is demonstrated on an example problem with interesting results indicating how expansion decisions vary depending on whether minimizing cost or minimizing greenhouse gas emissions or pollutants is given higher priority. (author)
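    The cost-versus-emissions trade-off described above can be reproduced on a toy expansion model. The sketch below uses illustrative unit data (not the paper's model; scenario sampling and dispatch are omitted), enumerates small build plans, and keeps the non-dominated (cost, emissions) pairs directly; a weighted-sum sweep, as used in the paper, would recover the convex part of this same front.

```python
import itertools

# Illustrative candidate units (not the paper's data):
# name -> (capacity, total cost, CO2 emissions) per unit built.
UNITS = {"coal": (50, 100, 80), "wind": (30, 90, 0), "nuclear": (60, 160, 5)}
DEMAND = 120

def feasible_plans(max_units=3):
    """Yield every build plan (unit counts per technology) that meets
    demand, together with its (total cost, total emissions) pair."""
    names = list(UNITS)
    for counts in itertools.product(range(max_units + 1), repeat=len(names)):
        plan = dict(zip(names, counts))
        cap = sum(UNITS[n][0] * k for n, k in plan.items())
        if cap >= DEMAND:
            cost = sum(UNITS[n][1] * k for n, k in plan.items())
            emis = sum(UNITS[n][2] * k for n, k in plan.items())
            yield plan, (cost, emis)

def pareto(points):
    """Non-dominated (cost, emissions) pairs: keep a pair only if no
    other pair is at least as good in both objectives and better in one."""
    pts = sorted(set(points))
    return [p for p in pts
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in pts)]

front = pareto(obj for _, obj in feasible_plans())
```

    Each point on `front` corresponds to a different priority between minimizing cost and minimizing emissions, mirroring how the paper's expansion decisions shift as the dimensionless weights change.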

  14. Multi-scales modeling of reactive transport mechanisms. Impact on petrophysical properties during CO2 storage

    International Nuclear Information System (INIS)

    Varloteaux, C.

    2012-01-01

    The geo-sequestration of carbon dioxide (CO2) is an attractive option to reduce the emission of greenhouse gases. Within carbonate reservoirs, acidification of the brine in place can occur during CO2 injection. This acidification leads to mineral dissolution, which can modify the transport properties of a solute in porous media. The aim of this study is to quantify the impact of reactive transport on solute distribution and on the structural modifications induced by the reaction, from the pore scale to the reservoir scale. This study focuses on the reactive transport problem in the case of single-phase flow in the long-time limit. To do so, we used a multi-scale up-scaling method that takes into account (i) the local scale, where flow, reaction and transport are known; (ii) the pore scale, where reactive transport is addressed by using an averaged formulation of the local equations; (iii) the Darcy scale (also called the core scale), where the structure of the rock is taken into account by using a three-dimensional network of pore-bodies connected by pore-throats; and (iv) the reservoir scale, where the physical phenomena within each cell of the reservoir model are taken into account by introducing macroscopic coefficients deduced from the study of these phenomena at the Darcy scale, such as the permeability, the apparent reaction rate, and the solute's apparent velocity and dispersion. (author)

  15. Multi-focal lobular carcinoma in situ arising in benign phyllodes tumor: A case report

    International Nuclear Information System (INIS)

    Lee, Taeg Ki; Choi, Chang Hwan; Kim, Youn Jeong; Kim, Mi Young; Lee, Kyung Hee; Cho, Soon Gu

    2015-01-01

    Coexistent breast malignancy arising in phyllodes tumor is extremely rare, and most of them are incidental reports after surgical excision. Coexistent malignancy in phyllodes tumor can vary from in-situ to invasive carcinoma. Lobular neoplasia is separated into atypical lobular hyperplasia and lobular carcinoma in situ (LCIS). LCIS is known to have a higher risk of developing invasive cancer. We reported imaging findings of multifocal LCIS arising in benign phyllodes tumor

  16. Multi-focal lobular carcinoma in situ arising in benign phyllodes tumor: A case report

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Taeg Ki; Choi, Chang Hwan; Kim, Youn Jeong; Kim, Mi Young; Lee, Kyung Hee; Cho, Soon Gu [Inha University Hospital, Incheon (Korea, Republic of)

    2015-08-15

    Coexistent breast malignancy arising in phyllodes tumor is extremely rare, and most of them are incidental reports after surgical excision. Coexistent malignancy in phyllodes tumor can vary from in-situ to invasive carcinoma. Lobular neoplasia is separated into atypical lobular hyperplasia and lobular carcinoma in situ (LCIS). LCIS is known to have a higher risk of developing invasive cancer. We reported imaging findings of multifocal LCIS arising in benign phyllodes tumor.

  17. Multi-party arbitration in international trade: problems and solutions

    DEFF Research Database (Denmark)

    Siig, Kristina

    2007-01-01

    Legal disputes regarding international trade frequently involve more than two parties. This leads to problems, as the preferred means of dispute resolution within international trade - arbitration - tends to be ill-equipped to handle such disputes. The topic of the paper is arbitration as a means of dispute resolution in a multi-party set-up. Both the possible legal bases and the problems encountered are considered. It is concluded that arbitration is still the only real option for the parties in international business disputes and that many of its shortcomings may be countered by skilful drafting...

  18. Do experiments suggest a hierarchy problem?

    International Nuclear Information System (INIS)

    Vissani, F.

    1997-09-01

    The hierarchy problem of the scalar sector of the standard model is reformulated, emphasizing the role of experimental facts that may suggest the existence of a large new-physics mass scale, for instance indications of the instability of matter, or indications in favor of massive neutrinos. In the see-saw model for the neutrino masses, a hierarchy problem arises if the mass of the right-handed neutrinos is larger than approximately 10^7 GeV; this problem, and its possible solutions, are discussed. (author)
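    For reference, the see-saw suppression alluded to here is the standard type-I relation (textbook form, not specific to this paper): block-diagonalising the neutrino mass matrix with Dirac mass $m_D$ and heavy right-handed Majorana mass $M_R \gg m_D$ leaves a light eigenvalue of order $m_D^2/M_R$.

```latex
\[
\mathcal{M}_\nu =
\begin{pmatrix} 0 & m_D \\ m_D^{T} & M_R \end{pmatrix}
\quad\Longrightarrow\quad
m_\nu \simeq - m_D\, M_R^{-1}\, m_D^{T} \;\sim\; \frac{m_D^{2}}{M_R}.
\]
% Order of magnitude: m_D ~ 100 GeV and M_R ~ 10^{14} GeV
% give m_nu ~ 10^{-10} GeV ~ 0.1 eV.
```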

  19. Multivariate Multi-Scale Permutation Entropy for Complexity Analysis of Alzheimer’s Disease EEG

    Directory of Open Access Journals (Sweden)

    Isabella Palamara

    2012-07-01

    Full Text Available An original multivariate multi-scale methodology for assessing the complexity of physiological signals is proposed. The technique is able to incorporate the simultaneous analysis of multi-channel data as a unique block within a multi-scale framework. The basic complexity measure is computed using Permutation Entropy, a methodology for time series processing based on ordinal analysis. Permutation Entropy is conceptually simple, structurally robust to noise and artifacts, and computationally very fast, which is relevant for designing portable diagnostics. Since time series derived from biological systems show structures on multiple spatial-temporal scales, the proposed technique can be useful for other types of biomedical signal analysis. In this work, the possibility of distinguishing the brain states of Alzheimer's disease patients and Mild Cognitive Impairment subjects from those of normal healthy elderly subjects is checked on a real, although quite limited, experimental database.
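    The single-channel, single-scale building block of the method, the permutation entropy of Bandt and Pompe, is compact enough to state in full. The sketch below (a generic implementation, not the authors' multivariate multi-scale code) counts ordinal patterns of embedded windows and normalizes the Shannon entropy by log(order!).

```python
import math

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D sequence: the Shannon
    entropy of its ordinal-pattern distribution divided by log(order!),
    so 0 means perfectly regular and 1 maximally irregular. A generic
    single-scale sketch; the paper applies such a measure per channel
    and per coarse-grained scale."""
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        window = tuple(x[i + j * delay] for j in range(order))
        # ordinal pattern = argsort of the window (ties broken by index)
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return h / math.log(math.factorial(order))
```

    A strictly increasing series produces a single pattern and entropy 0, while an irregular series spreads probability over many patterns and approaches 1.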

  20. Defect detection in industrial radiography: a multi-scale approach; Detection de defauts en radiographie industrielle: approches multiechelles

    Energy Technology Data Exchange (ETDEWEB)

    Lefevre, M

    1995-10-01

    Radiography is used by Electricite de France for pipe inspection in nuclear power plants in order to detect defects. For several years, the R&D Division of EDF has undertaken research to define image processing methods well adapted to radiographic images. The main issues raised by these images are their low contrast, their high level of noise, the presence of a trend and the variable size of the defects. A database of digitized radiographs of pipes has been gathered and the statistical, topological and geometrical properties of all of these images have been analyzed. From this study, a global indicator of the presence of defects and local features, leading to a classification of images into areas with or without defects, have been extracted. The defect localisation problem has been considered in a multi-scale framework based on the creation of a family of images with increasing regularity, defined as the solution of a partial differential equation. From a choice of axioms, a set of equations may be deduced which define various multi-scale analyses. The survey of the properties of such analyses, when applied to images altered with different types of noise, has led to the selection of the multi-scale analysis best adapted to the digitized radiographs. The segmentation process uses the geodesic information attached to defects via the connection cost concept. The final decision is based on a summary of the information extracted at several scales; a fuzzy logic approach has been proposed to solve this part. We then developed methods and tools for expertise guidance and validated them on a complete database of images. Some global indicators have been extracted and a detection and localisation process has been achieved for large defects. (author). 117 refs., 73 figs.

  1. A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.

    Science.gov (United States)

    Budinich, Marko; Bourdon, Jérémie; Larhlimi, Abdelhalim; Eveillard, Damien

    2017-01-01

    Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at the community level, were emphasized. We expect this approach to be used for integrating genomic information in microbial ecosystems. The resulting models will provide insights about behaviors (including diversity) that take place at the ecosystem scale.

  2. Multi-grid Particle-in-cell Simulations of Plasma Microturbulence

    International Nuclear Information System (INIS)

    Lewandowski, J.L.V.

    2003-01-01

    A new scheme to accurately retain kinetic electron effects in particle-in-cell (PIC) simulations for the case of electrostatic drift waves is presented. The splitting scheme, which is based on an exact separation between adiabatic and non-adiabatic electron responses, is shown to yield more accurate linear growth rates than the standard δf scheme. The linear and nonlinear elliptic problems that arise in the splitting scheme are solved using a multi-grid solver. The multi-grid particle-in-cell approach offers an attractive path, from both the physics and numerical points of view, to simulate kinetic electron dynamics in global toroidal plasmas
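    Elliptic solves like the ones mentioned are the classic use case for multi-grid. The following one-dimensional Poisson V-cycle (a textbook sketch, unrelated to the gyrokinetic field equations actually solved in the paper) shows the three ingredients: smoothing, coarse-grid correction, and interpolation.

```python
import numpy as np

def v_cycle(u, f, h, n_smooth=3):
    """One multigrid V-cycle for the 1-D Poisson problem -u'' = f with
    Dirichlet boundary values held fixed in u[0] and u[-1]:
    weighted-Jacobi smoothing, full-weighting restriction and linear
    interpolation. The grid must have 2**k + 1 points."""

    def smooth(v, rhs, hh, iters, omega=2.0 / 3.0):
        for _ in range(iters):
            vn = v.copy()
            vn[1:-1] = ((1 - omega) * v[1:-1]
                        + omega * 0.5 * (v[:-2] + v[2:] + hh * hh * rhs[1:-1]))
            v = vn
        return v

    u = smooth(u, f, h, n_smooth)
    if len(u) <= 3:
        if len(u) == 3:                       # coarsest grid: solve exactly
            u = u.copy()
            u[1] = 0.5 * (u[0] + u[2] + h * h * f[1])
        return u

    # residual of -u'' = f on the fine grid
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)

    # full-weighting restriction to the coarse grid (every other point)
    rc = np.zeros((len(u) + 1) // 2)
    rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]

    # coarse-grid correction (zero initial guess, zero boundary error)
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h, n_smooth)

    # linear interpolation of the correction back to the fine grid
    e = np.zeros_like(u)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])

    return smooth(u + e, f, h, n_smooth)
```

    A few V-cycles reduce the algebraic residual by orders of magnitude at O(N) cost per cycle, which is why multi-grid is attractive inside a PIC time loop where the field equation is solved every step.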

  3. Multi-scale multi-physics computational chemistry simulation based on ultra-accelerated quantum chemical molecular dynamics method for structural materials in boiling water reactor

    International Nuclear Information System (INIS)

    Miyamoto, Akira; Sato, Etsuko; Sato, Ryo; Inaba, Kenji; Hatakeyama, Nozomu

    2014-01-01

    In collaboration with experimental experts we have reported in the present conference (Hatakeyama, N. et al., “Experiment-integrated multi-scale, multi-physics computational chemistry simulation applied to corrosion behaviour of BWR structural materials”) the results of multi-scale, multi-physics computational chemistry simulations applied to the corrosion behaviour of BWR structural materials. At the macro scale, a macroscopic simulator of the anode polarization curve was developed to solve the spatially one-dimensional electrochemical equations on the material surface at the continuum level in order to understand the corrosion behaviour of a typical BWR structural material, SUS304. The experimental anode polarization behaviours of each pure metal were reproduced by fitting all the rates of the electrochemical reactions, and then the anode polarization curve of SUS304 was calculated by using the same parameters and found to reproduce the experimental behaviour successfully. At the meso scale, a kinetic Monte Carlo (KMC) simulator was applied to an actual-time simulation of the morphological corrosion behaviour under the influence of an applied voltage. At the micro scale, an ultra-accelerated quantum chemical molecular dynamics (UA-QCMD) code was applied to various metallic oxide surfaces of Fe2O3, Fe3O4 and Cr2O3, modelled together with water molecules and dissolved metallic ions on the surfaces; the dissolution and segregation behaviours were then successfully simulated dynamically by using UA-QCMD. In this paper we describe details of the multi-scale, multi-physics computational chemistry method, especially the UA-QCMD method. This method is approximately 10,000,000 times faster than conventional first-principles molecular dynamics methods based on density-functional theory (DFT), and its accuracy was also validated for various metals and metal oxides compared with DFT results. To assure multi-scale multi-physics computational chemistry simulation based on the UA-QCMD method for

  4. Mathematical modelling and numerical resolution of multi-phase compressible fluid flows problems

    International Nuclear Information System (INIS)

    Lagoutiere, Frederic

    2000-01-01

    This work deals with Eulerian compressible multi-species fluid dynamics, the species being either mixed or separated (with interfaces). The document is composed of three parts. The first part is devoted to the numerical resolution of model problems: the advection equation, the Burgers equation, and the Euler equations, in dimensions one and two. The goal is to find a precise method, especially for discontinuous initial conditions, and we develop non-dissipative algorithms. They are based on a downwind finite-volume discretization under some stability constraints. The second part treats the mathematical modelling of fluid mixtures. We construct and analyse a set of multi-temperature and multi-pressure models that are entropic, symmetrizable, hyperbolic, and not always conservative. In the third part, we apply the ideas developed in the first part (downwind discretization) to the numerical resolution of the partial differential problems constructed for fluid mixtures in the second part. We present some numerical results in dimensions one and two. (author) [fr

  5. Approximate series solution of multi-dimensional, time fractional-order (heat-like) diffusion equations using FRDTM.

    Science.gov (United States)

    Singh, Brajesh K; Srivastava, Vineet K

    2015-04-01

    The main goal of this paper is to present a new approximate series solution of the multi-dimensional (heat-like) diffusion equation with time-fractional derivative in Caputo form using a semi-analytical approach: fractional-order reduced differential transform method (FRDTM). The efficiency of FRDTM is confirmed by considering four test problems of the multi-dimensional time fractional-order diffusion equation. FRDTM is a very efficient, effective and powerful mathematical tool which provides exact or very close approximate solutions for a wide range of real-world problems arising in engineering and natural sciences, modelled in terms of differential equations.

  6. Bi-objective optimization for multi-modal transportation routing planning problem based on Pareto optimality

    Directory of Open Access Journals (Sweden)

    Yan Sun

    2015-09-01

    Full Text Available Purpose: The purpose of this study is to solve the multi-modal transportation routing planning problem, which aims to select an optimal route to move a consignment of goods from its origin to its destination through the multi-modal transportation network, with the optimization carried out from two viewpoints: cost and time. Design/methodology/approach: In this study, a bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem. Minimizing the total transportation cost and the total transportation time are set as the optimization objectives of the model. In order to balance the benefit between the two objectives, Pareto optimality is utilized to solve the model by obtaining its Pareto frontier, which can provide the multi-modal transportation operator (MTO) and customers with better decision support; the frontier is obtained by the normalized normal constraint method. Then, an experimental case study is designed to verify the feasibility of the model and of Pareto optimality by using the mathematical programming software Lingo. Finally, a sensitivity analysis of demand and supply in the multi-modal transportation organization is performed based on the designed case. Findings: The calculation results indicate that the proposed model and Pareto optimality perform well in dealing with the bi-objective optimization. The sensitivity analysis also clearly shows the influence of variations in demand and supply on the multi-modal transportation organization. Therefore, this method can be extended to practice. Originality/value: A bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem. The Pareto frontier based sensitivity analysis of demand and supply in the multi-modal transportation organization is performed based on the designed case.
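The Pareto-optimality idea in this abstract can be illustrated with a minimal sketch: given candidate multi-modal routes scored on (cost, time), keep only the non-dominated ones. The route names and values below are hypothetical, and the study itself obtains the frontier with the normalized normal constraint method rather than simple enumeration.

```python
# Non-dominated filtering for bi-objective (cost, time) minimisation.
def pareto_front(routes):
    """Return the non-dominated routes from a list of (name, cost, time)."""
    front = []
    for name, cost, time in routes:
        dominated = any(c2 <= cost and t2 <= time and (c2 < cost or t2 < time)
                        for _, c2, t2 in routes)
        if not dominated:
            front.append((name, cost, time))
    return front

candidates = [
    ("rail-road", 120.0, 48.0),
    ("road-only", 150.0, 30.0),
    ("rail-water", 100.0, 72.0),
    ("air-road", 300.0, 30.0),   # dominated by road-only (same time, higher cost)
]

print(pareto_front(candidates))
```

The surviving routes form the trade-off curve presented to the MTO and customers: none can be improved in one objective without worsening the other.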

  7. Scrubbing Up: Multi-Scale Investigation of Woody Encroachment in a Southern African Savannah

    Directory of Open Access Journals (Sweden)

    Christopher G. Marston

    2017-04-01

    Full Text Available Changes in the extent of woody vegetation represent a major conservation question in many savannah systems around the globe. To address the problem of the current lack of broad-scale cost-effective tools for land cover monitoring in complex savannah environments, we use a multi-scale approach to quantifying vegetation change in Kruger National Park (KNP), South Africa. We test whether medium spatial resolution satellite data (Landsat, existing back to the 1970s), which have pixel sizes larger than typical vegetation patches, can nevertheless capture the thematic detail required to detect woody encroachment in savannahs. We quantify vegetation change over a 13-year period in KNP, examine the changes that have occurred, assess the drivers of these changes, and compare appropriate remote sensing data sources for monitoring change. We generate land cover maps for three areas of southern KNP using very high resolution (VHR) and medium resolution satellite sensor imagery from February 2001 to 2014. Considerable land cover change has occurred, with large increases in shrubs replacing both trees and grassland. Examination of exclosure areas and potential environmental driver data suggests two mechanisms: elephant herbivory removing trees, and at least one separate mechanism responsible for conversion of grassland to shrubs, theorised to be increasing atmospheric CO2. Thus, the combination of these mechanisms causes the novel two-directional shrub encroachment that we observe (tree loss and grassland conversion). Multi-scale comparison of classifications indicates that although spatial detail is lost when using medium resolution rather than VHR imagery for land cover classification (e.g., Landsat imagery cannot readily distinguish between tree and shrub classes, while VHR imagery can), the thematic detail contained within both VHR and medium resolution classifications is remarkably congruent. This suggests that medium resolution imagery contains sufficient

  8. Technology of solving multi-objective problems of control of systems with distributed parameters

    Science.gov (United States)

    Rapoport, E. Ya.; Pleshivtseva, Yu. E.

    2017-07-01

    A constructive technology of multi-objective optimization of control of distributed parameter plants is proposed. The technology is based on a single-criterion version in the form of the minimax convolution of normalized performance criteria. The approach under development is based on the transition to an equivalent form of the variational problem with constraints, with the problem solution being a priori Pareto-effective. Further procedures of preliminary parameterization of control actions and subsequent reduction to a special problem of semi-infinite programming make it possible to find the sought extremals with the use of their Chebyshev properties and fundamental laws of the subject domain. An example of multi-objective optimization of operation modes of an engineering thermophysics object is presented, which is of independent interest.

  9. Information and Intertemporal Choices in Multi-Agent Decision Problems

    OpenAIRE

    Mariagrazia Olivieri; Massimo Squillante; Viviana Ventre

    2016-01-01

    Psychological evidence of impulsivity and the false consensus effect leads to results far from rationality. It is shown that impulsivity modifies the discount function of each individual, and that the false consensus effect increases the degree of consensus in a multi-agent decision problem. Analyzing them together, we note that in strategic interactions these two human factors induce choices that change the equilibria expected by rational individuals.

  10. Application of the multi-objective cross-entropy method to the vehicle routing problem with soft time windows

    Directory of Open Access Journals (Sweden)

    C Hauman

    2014-06-01

    Full Text Available The vehicle routing problem with time windows is a widely studied problem with many real-world applications. The problem considered here entails the construction of routes along which a number of identical vehicles travel to service different nodes, each within a certain time window. New benchmark problems with multi-objective features were recently suggested in the literature, and the multi-objective optimisation cross-entropy method is applied to these problems to investigate the feasibility of the method and to determine and propose reference solutions for the benchmark problems. The application of the cross-entropy method to the multi-objective vehicle routing problem with soft time windows is investigated. The objectives that are evaluated include the minimisation of the total distance travelled, the number of vehicles and/or routes, the total waiting time and delay time of the vehicles, and the makespan of a route.
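The objectives listed in this abstract (distance, waiting time, delay) can be made concrete with a small evaluation sketch for one route under soft time windows: waiting accrues when a vehicle arrives before a node's earliest time, delay when it arrives after the latest time. The node data and travel times below are invented for illustration; the benchmark instances used in the paper differ.

```python
# Evaluate one route: returns (total distance, total waiting, total delay).
def evaluate_route(route, travel, windows, service=0.0):
    t, dist, wait, delay = 0.0, 0.0, 0.0, 0.0
    for prev, node in zip(route, route[1:]):
        leg = travel[(prev, node)]
        dist += leg
        t += leg
        earliest, latest = windows[node]
        if t < earliest:           # soft window: wait until it opens
            wait += earliest - t
            t = earliest
        elif t > latest:           # soft window: late arrival is penalised
            delay += t - latest
        t += service
    return dist, wait, delay

# Hypothetical depot "d" and customers "a", "b".
travel = {("d", "a"): 10.0, ("a", "b"): 5.0, ("b", "d"): 12.0}
windows = {"a": (15.0, 40.0), "b": (0.0, 12.0), "d": (0.0, 1e9)}
print(evaluate_route(("d", "a", "b", "d"), travel, windows))
```

A multi-objective solver such as the cross-entropy method would score candidate route sets with exactly these kinds of component objectives.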

  11. A Method of Vector Map Multi-scale Representation Considering User Interest on Subdivision Gird

    Directory of Open Access Journals (Sweden)

    YU Tong

    2016-12-01

    Full Text Available Compared with traditional spatial data models and methods, the global subdivision grid shows great advantages in the organization and expression of massive spatial data. In view of this, a method of vector map multi-scale representation considering user interest on a subdivision grid is proposed. First, a spatial interest field is built using a large amount of POI data to describe the spatial distribution of user interest in geographic information. Second, spatial features are classified and graded, and their representation scale ranges are determined. Finally, different levels of subdivision surfaces are divided based on GeoSOT subdivision theory, and the corresponding relation between subdivision level and scale is established. According to the user interest of the subdivision surfaces, spatial features can be expressed at different degrees of detail, realizing multi-scale representation of spatial data based on user interest. The experimental results show that this method can not only satisfy users' general-to-detailed and important-to-secondary spatial cognition demands, but also achieve a better multi-scale representation effect.

  12. A Framework for Parallel Numerical Simulations on Multi-Scale Geometries

    KAUST Repository

    Varduhn, Vasco

    2012-06-01

    In this paper, an approach to performing numerical multi-scale simulations on finely detailed geometries is presented. In particular, the focus lies on the generation of sufficiently fine mesh representations, where a resolution of dozens of millions of voxels is inevitable in order to represent the geometry sufficiently. Furthermore, the propagation of boundary conditions is investigated by using simulation results on the coarser simulation scale as input boundary conditions on the next finer scale. Finally, the applicability of our approach is shown on a two-phase simulation for flooding scenarios in urban structures, running from a city-wide scale to a finely detailed indoor scale on feature-rich building geometries. © 2012 IEEE.

  13. Multi-scale carbon micro/nanofibers-based adsorbents for protein immobilization

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Shiv; Singh, Abhinav [Department of Chemical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016 (India); Bais, Vaibhav Sushil Singh; Prakash, Balaji [Department of Biological Science and Bioengineering, Indian Institute of Technology Kanpur, Kanpur 208016 (India); Verma, Nishith, E-mail: nishith@iitk.ac.in [Department of Chemical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016 (India); Center for Environmental Science and Engineering, Indian Institute of Technology Kanpur, Kanpur 208016 (India)

    2014-05-01

    In the present study, different proteins, namely bovine serum albumin (BSA), glucose oxidase (GOx) and the laboratory-purified YqeH, were immobilized in the phenolic resin precursor-based multi-scale web of activated carbon microfibers (ACFs) and carbon nanofibers (CNFs). These biomolecules are characteristically different from each other, having different structures, numbers of parent amino acid molecules and isoelectric points. CNFs were grown on the ACF substrate by chemical vapor deposition, using Ni nanoparticles (Nps) as the catalyst. Ultra-sonication of the CNFs was carried out in acidic medium to remove Ni Nps from the tips of the CNFs and thereby provide additional active sites for adsorption. The prepared material was directly used as an adsorbent for proteins, without requiring any additional treatment. Several analytical techniques were used to characterize the prepared materials, including scanning electron microscopy, Fourier transform infrared spectroscopy, BET surface area, pore-size distribution, and UV–vis spectroscopy. The adsorption capacities of the prepared ACFs/CNFs were determined to be approximately 191, 39 and 70 mg/g for BSA, GOx and YqeH, respectively, revealing that the synthesized multi-scale web of carbon micro-nanofibers is an efficient material for the immobilization of protein molecules. - Highlights: • Ni metal Np-dispersed carbon micro-nanofibers (ACFs/CNFs) are prepared. • ACFs/CNFs are mesoporous. • Significant adsorption of BSA, GOx and YqeH is observed on ACFs/CNFs. • Multi-scale web of ACFs/CNFs is effective for protein immobilization.

  14. Risk-aware multi-armed bandit problem with application to portfolio selection.

    Science.gov (United States)

    Huo, Xiaoguang; Fu, Feng

    2017-11-01

    Sequential portfolio selection has attracted increasing interest in the machine learning and quantitative finance communities in recent years. As a mathematical framework for reinforcement learning policies, the stochastic multi-armed bandit problem addresses the primary difficulty in sequential decision-making under uncertainty, namely the exploration versus exploitation dilemma, and therefore provides a natural connection to portfolio selection. In this paper, we incorporate risk awareness into the classic multi-armed bandit setting and introduce an algorithm to construct portfolio. Through filtering assets based on the topological structure of the financial market and combining the optimal multi-armed bandit policy with the minimization of a coherent risk measure, we achieve a balance between risk and return.
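The exploration-versus-exploitation rule this abstract refers to can be sketched with the classic UCB1 index policy on a toy bandit. For simplicity the rewards here are deterministic; the paper's actual algorithm additionally filters assets by the market's topological structure and minimises a coherent risk measure, none of which is reproduced in this sketch.

```python
import math

def ucb1(means, horizon):
    """Play a k-armed bandit for `horizon` rounds with the UCB1 index."""
    k = len(means)
    counts, sums = [0] * k, [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:                       # initialise: play each arm once
            arm = t - 1
        else:                            # empirical mean + confidence bonus
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        counts[arm] += 1
        sums[arm] += means[arm]          # deterministic reward for illustration
    return counts

counts = ucb1([0.3, 0.7, 0.5], horizon=2000)
print(counts)  # the 0.7 arm accumulates the most pulls
```

The shrinking confidence bonus is what forces occasional re-sampling of apparently inferior arms (assets) while concentrating play on the best one.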

  15. On the problems of PPS sampling in multi-character surveys ...

    African Journals Online (AJOL)

    This paper, which is on the problems of PPS sampling in multi-character surveys, compares the efficiency of some estimators used in PPSWR sampling for multiple characteristics. From a superpopulation model, we computed the expected variances of the different estimators for each of the first two finite populations ...

  16. Consensus of Multi-Agent Systems with Prestissimo Scale-Free Networks

    International Nuclear Information System (INIS)

    Yang Hongyong; Lu Lan; Cao Kecai; Zhang Siying

    2010-01-01

    In this paper, the relations between network topology and the moving consensus of multi-agent systems are studied. A consensus-prestissimo scale-free network model with static preferential-consensus attachment is presented on the rewired links of the regular network. The effects of the static preferential-consensus BA network on the algebraic connectivity of the topology graph are compared with the regular network. The robustness gain to delay is analyzed for variable network topologies of the same scale. The time to reach consensus is studied for the dynamic network with and without communication delays. By applying computer simulations, it is validated that the speed of convergence of multi-agent systems can be greatly improved in the preferential-consensus BA network model with different configurations. (interdisciplinary physics and related areas of science and technology)

  17. Discussion of several problems in nuclear instrument scale

    International Nuclear Information System (INIS)

    Li Xuezhen; Zhou Sichun; Xiao Caijin

    2005-01-01

    Instrument calibration is the first problem in measurement, nuclear instruments included, and since different calibration methods exist, how best to obtain the calibration equation is the focus of this study. The article discusses several methods for obtaining the calibration equation from the standpoint of error propagation and compares their merits, identifying the most precise method, the Deming method; in addition, a simpler practical method, the method of means, is presented. Finally, the theory is validated through the calibration of an X-ray fluorescence instrument. (authors)

  18. Accelerating solving the dynamic multi-objective network design problem using response surface methods

    NARCIS (Netherlands)

    Wismans, Luc Johannes Josephus; van Berkum, Eric C.; Bliemer, Michiel C.J.; Viti, F.; Immers, B.; Tampere, C.

    2011-01-01

    Multi-objective optimization of traffic externalities by solving a network design problem with Dynamic Traffic Management measures is time consuming, because heuristics are needed and solving the lower level requires solving the dynamic user equilibrium problem. Use of response surface

  19. Upper estimates of complexity of algorithms for multi-peg Tower of Hanoi problem

    Directory of Open Access Journals (Sweden)

    Sergey Novikov

    2007-06-01

    Full Text Available Explicit upper estimates are proved for the complexity of algorithms for the multi-peg Tower of Hanoi problem with a limited number of disks, for Reve's puzzle, and for the 5-peg Tower of Hanoi problem with an unrestricted number of disks.
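The quantities being bounded are the Frame-Stewart move counts, the standard recursive upper bound for the multi-peg Tower of Hanoi (Reve's puzzle is the 4-peg case). The paper's contribution is explicit closed-form estimates of such values; the recursion itself can be computed directly:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def frame_stewart(n, pegs):
    """Frame-Stewart upper bound on moves for n disks on `pegs` pegs."""
    if n == 0:
        return 0
    if n == 1:
        return 1
    if pegs == 3:
        return 2 ** n - 1              # classical 3-peg solution
    # Move k disks aside using all pegs, the remaining n-k disks with one
    # peg fewer, then the k disks back on top.
    return min(2 * frame_stewart(k, pegs) + frame_stewart(n - k, pegs - 1)
               for k in range(1, n))

print([frame_stewart(n, 4) for n in range(1, 7)])  # → [1, 3, 5, 9, 13, 17]
```

The optimal split point k is found by exhaustive minimisation here; closed-form choices of k are exactly what explicit estimates like those in the paper provide.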

  20. Chondrocyte deformations as a function of tibiofemoral joint loading predicted by a generalized high-throughput pipeline of multi-scale simulations.

    Directory of Open Access Journals (Sweden)

    Scott C Sibole

    and/or micro-scale model providing application for other multi-scale continuum mechanics problems.

  1. Chondrocyte Deformations as a Function of Tibiofemoral Joint Loading Predicted by a Generalized High-Throughput Pipeline of Multi-Scale Simulations

    Science.gov (United States)

    Sibole, Scott C.; Erdemir, Ahmet

    2012-01-01

    model providing application for other multi-scale continuum mechanics problems. PMID:22649535

  2. Analytical Solutions for Multi-Time Scale Fractional Stochastic Differential Equations Driven by Fractional Brownian Motion and Their Applications

    OpenAIRE

    Xiao-Li Ding; Juan J. Nieto

    2018-01-01

    In this paper, we investigate analytical solutions of multi-time scale fractional stochastic differential equations driven by fractional Brownian motions. We firstly decompose homogeneous multi-time scale fractional stochastic differential equations driven by fractional Brownian motions into independent differential subequations, and give their analytical solutions. Then, we use the variation of constant parameters to obtain the solutions of nonhomogeneous multi-time scale fractional stochast...

  3. Land-Atmosphere Coupling in the Multi-Scale Modelling Framework

    Science.gov (United States)

    Kraus, P. M.; Denning, S.

    2015-12-01

    The Multi-Scale Modeling Framework (MMF), in which cloud-resolving models (CRMs) are embedded within general circulation model (GCM) gridcells to serve as the model's cloud parameterization, has offered a number of benefits to GCM simulations. The coupling of these cloud-resolving models directly to land surface model instances, rather than passing averaged atmospheric variables to a single instance of a land surface model, is the logical next step in model development and has recently been accomplished. This new configuration offers conspicuous improvements to estimates of precipitation and canopy through-fall, but overall the model exhibits warm surface temperature biases and low productivity. This work presents modifications to a land-surface model that take advantage of the new multi-scale modeling framework and accommodate the change in spatial scale from a typical GCM range of ~200 km to the CRM grid-scale of 4 km. A parameterization is introduced to apportion modeled surface radiation into direct-beam and diffuse components. The diffuse component is then distributed among the land-surface model instances within each GCM cell domain. This substantially reduces the number of excessively low light values provided to the land-surface model when cloudy conditions are modeled in the CRM, which are associated with its 1-D radiation scheme. The small spatial scale of the CRM, ~4 km, as compared with the typical ~200 km GCM scale, provides much more realistic estimates of precipitation intensity, permitting the elimination of a model parameterization of canopy through-fall. However, runoff at such scales can no longer be considered as an immediate flow to the ocean. Allowing sub-surface water flow between land-surface instances within the GCM domain affords better realism and also reduces temperature and productivity biases. The MMF affords a number of opportunities to land-surface modelers, providing both the advantages of direct simulation at the 4 km scale and a much reduced

  4. Multi-scale modelling and numerical simulation of electronic kinetic transport

    International Nuclear Information System (INIS)

    Duclous, R.

    2009-11-01

    This research thesis, which is at the interface between numerical analysis, plasma physics and applied mathematics, deals with the kinetic modelling and numerical simulation of electron energy transport and deposition in laser-produced plasmas, with a view to the processes of fuel assembly to the temperature and density conditions necessary to ignite fusion reactions. After a brief review of the processes at play in the collisional kinetic theory of plasmas, with a focus on basic models and methods to implement, couple and validate them, the author focuses on the collective aspects related to the free-streaming electron transport equation in the non-relativistic limit as well as in the relativistic regime. He discusses the numerical development and analysis of the scheme for the Vlasov-Maxwell system, and the selection of a validation procedure and numerical tests. Then, he investigates more specific aspects of collective transport: multi-species transport subject to phase-space discontinuities. Dealing with the multi-scale physics of electron transport with collision source terms, he validates the accuracy of a fast Monte Carlo multi-grid solver for the Fokker-Planck-Landau electron-electron collision operator. He reports realistic simulations of kinetic electron transport in the frame of the shock ignition scheme, and the development and validation of a reduced electron transport angular model. He finally explores the relative importance of the processes involving electron-electron collisions at high energy by means of a multi-scale reduced model with relativistic Boltzmann terms

  5. Efficient solution of a multi objective fuzzy transportation problem

    Science.gov (United States)

    Vidhya, V.; Ganesan, K.

    2018-04-01

    In this paper we present a methodology for the solution of the multi-objective fuzzy transportation problem in which all the cost and time coefficients are trapezoidal fuzzy numbers and the supply and demand are crisp numbers. Using a new fuzzy arithmetic on the parametric form of trapezoidal fuzzy numbers and a new ranking method, all efficient solutions are obtained. The proposed method is illustrated with an example.
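The ingredients named in this abstract (trapezoidal fuzzy numbers, fuzzy arithmetic, a ranking method) can be sketched generically. Note the paper defines its own parametric arithmetic and ranking; the graded-mean ranking below is a common textbook choice used here only for illustration, and the cost values are made up.

```python
# A trapezoidal fuzzy number is a 4-tuple (a, b, c, d), a <= b <= c <= d.
def add(x, y):
    """Componentwise addition of trapezoidal fuzzy numbers."""
    return tuple(xi + yi for xi, yi in zip(x, y))

def graded_mean(x):
    """Graded-mean defuzzification, a simple scalar ranking value."""
    a, b, c, d = x
    return (a + 2 * b + 2 * c + d) / 6.0

cost1 = (2.0, 3.0, 4.0, 5.0)   # hypothetical fuzzy cell costs
cost2 = (1.0, 2.0, 2.0, 3.0)
total = add(cost1, cost2)
print(total, graded_mean(cost1) > graded_mean(cost2))
```

With a scalar ranking like this, fuzzy route costs can be accumulated and compared, which is the mechanism that lets a transportation algorithm select efficient solutions over fuzzy coefficients.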

  6. Solving dynamic multi-objective problems with vector evaluated particle swarm optimisation

    CSIR Research Space (South Africa)

    Greeff, M

    2008-06-01

    Full Text Available Many optimisation problems are multi-objective and change dynamically. Many methods use a weighted average approach to the multiple objectives. This paper introduces the usage of the vector evaluated particle swarm optimiser (VEPSO) to solve dynamic...

  7. A multi-scale network method for two-phase flow in porous media

    Energy Technology Data Exchange (ETDEWEB)

    Khayrat, Karim, E-mail: khayratk@ifd.mavt.ethz.ch; Jenny, Patrick

    2017-08-01

    Pore-network models of porous media are useful in the study of pore-scale flow in porous media. In order to extract macroscopic properties from flow simulations in pore-networks, it is crucial that the networks are large enough to be considered representative elementary volumes. However, existing two-phase network flow solvers are limited to relatively small domains. For this purpose, a multi-scale pore-network (MSPN) method, which takes into account flow-rate effects and can simulate larger domains compared to existing methods, was developed. In our solution algorithm, a large pore network is partitioned into several smaller sub-networks. The algorithm to advance the fluid interfaces within each sub-network consists of three steps. First, a global pressure problem on the network is solved approximately using the multiscale finite volume (MSFV) method. Next, the fluxes across the sub-networks are computed. Lastly, using the fluxes as boundary conditions, a dynamic two-phase flow solver is used to advance the solution in time. Simulation results of drainage scenarios at different capillary numbers and unfavourable viscosity ratios are presented and used to validate the MSPN method against solutions obtained by an existing dynamic network flow solver.
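The coarse step of the partition-then-solve strategy described here can be illustrated on the simplest possible network: a 1D chain of pore throats split into sub-networks, each replaced by its aggregate conductance, with the coarse pressure solve yielding the flux that would be passed to the sub-network solvers as boundary conditions. The values are illustrative, and the MSPN method itself uses the multiscale finite volume formulation rather than this series-network shortcut.

```python
def aggregate_conductance(conductances):
    """Effective conductance of throats in series (resistances add)."""
    return 1.0 / sum(1.0 / g for g in conductances)

def coarse_flux(subnetworks, p_in, p_out):
    """Coarse solve: replace each sub-network by its aggregate conductance,
    then compute the single through-flux of the chain."""
    g_blocks = [aggregate_conductance(sub) for sub in subnetworks]
    g_total = 1.0 / sum(1.0 / g for g in g_blocks)
    return g_total * (p_in - p_out)

# Two sub-networks of three throats each, driven by a pressure drop of 10.
subs = [[2.0, 4.0, 4.0], [1.0, 2.0, 2.0]]
print(coarse_flux(subs, p_in=10.0, p_out=0.0))
```

For a series chain the coarse flux is exact, which is why the interface fluxes can serve directly as boundary conditions for the dynamic two-phase solvers inside each sub-network; on general networks the MSFV step provides the analogous approximation.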

  8. A multi-scale network method for two-phase flow in porous media

    International Nuclear Information System (INIS)

    Khayrat, Karim; Jenny, Patrick

    2017-01-01

    Pore-network models of porous media are useful in the study of pore-scale flow in porous media. In order to extract macroscopic properties from flow simulations in pore-networks, it is crucial that the networks are large enough to be considered representative elementary volumes. However, existing two-phase network flow solvers are limited to relatively small domains. For this purpose, a multi-scale pore-network (MSPN) method, which takes into account flow-rate effects and can simulate larger domains compared to existing methods, was developed. In our solution algorithm, a large pore network is partitioned into several smaller sub-networks. The algorithm to advance the fluid interfaces within each sub-network consists of three steps. First, a global pressure problem on the network is solved approximately using the multiscale finite volume (MSFV) method. Next, the fluxes across the sub-networks are computed. Lastly, using the fluxes as boundary conditions, a dynamic two-phase flow solver is used to advance the solution in time. Simulation results of drainage scenarios at different capillary numbers and unfavourable viscosity ratios are presented and used to validate the MSPN method against solutions obtained by an existing dynamic network flow solver.

  9. Multi-scale path planning for reduced environmental impact of aviation

    Science.gov (United States)

    Campbell, Scot Edward

    A future air traffic management system capable of rerouting aircraft trajectories in real-time in response to transient and evolving events would result in increased aircraft efficiency, better utilization of the airspace, and decreased environmental impact. Mixed-integer linear programming (MILP) is used within a receding horizon framework to form aircraft trajectories which mitigate persistent contrail formation, avoid areas of convective weather, and seek a minimum fuel solution. Areas conducive to persistent contrail formation and areas of convective weather occur at disparate temporal and spatial scales, and thereby require the receding horizon controller to be adaptable to multi-scale events. In response, a novel adaptable receding horizon controller was developed to account for multi-scale disturbances, as well as generate trajectories using both a penalty function approach for obstacle penetration and hard obstacle avoidance constraints. A realistic aircraft fuel burn model based on aircraft data and engine performance simulations is used to form the cost function in the MILP optimization. The performance of the receding horizon algorithm is tested through simulation. A scalability analysis of the algorithm is conducted to ensure the tractability of the path planner. The adaptable receding horizon algorithm is shown to successfully negotiate multi-scale environments with performance exceeding static receding horizon solutions. The path planner is applied to realistic scenarios involving real atmospheric data. A single flight example for persistent contrail mitigation shows that fuel burn increases 1.48% when approximately 50% of persistent contrails are avoided, but 6.19% when 100% of persistent contrails are avoided. Persistent contrail mitigating trajectories are generated for multiple days of data, and the research shows that 58% of persistent contrails are avoided with a 0.48% increase in fuel consumption when averaged over a year.

  10. A multi-objective multi-memetic algorithm for network-wide conflict-free 4D flight trajectories planning

    Institute of Scientific and Technical Information of China (English)

    Su YAN; Kaiquan CAI

    2017-01-01

    Under the demand of strategic air traffic flow management and the concept of trajectory based operations (TBO), the network-wide 4D flight trajectories planning (N4DFTP) problem has been investigated with the purpose of safely and efficiently allocating 4D trajectories (4DTs) (3D position and time) for all the flights in the whole airway network. Considering that the introduction of large-scale 4DTs inevitably increases the problem complexity, an efficient model for strategic-level conflict management is developed in this paper. Specifically, a bi-objective N4DFTP problem that aims to minimize both potential conflicts and the trajectory cost is formulated. In consideration of the large-scale, high-complexity, and multi-objective characteristics of the N4DFTP problem, a multi-objective multi-memetic algorithm (MOMMA) that incorporates an evolutionary global search framework together with three problem-specific local search operators is implemented. It is capable of rapidly and effectively allocating 4DTs via rerouting, target time controlling, and flight level changing. Additionally, to balance the ability of exploitation and exploration of the algorithm, a special hybridization scheme is adopted for the integration of local and global search. Empirical studies using real air traffic data in China with different network complexities show that the proposed MOMMA is effective to solve the N4DFTP problem. The solutions achieved are competitive for elaborate decision support under a TBO environment.

  11. A multi-objective multi-memetic algorithm for network-wide conflict-free 4D flight trajectories planning

    Directory of Open Access Journals (Sweden)

    Su YAN

    2017-06-01

    Full Text Available Under the demand of strategic air traffic flow management and the concept of trajectory based operations (TBO), the network-wide 4D flight trajectories planning (N4DFTP) problem has been investigated with the purpose of safely and efficiently allocating 4D trajectories (4DTs) (3D position and time) for all the flights in the whole airway network. Considering that the introduction of large-scale 4DTs inevitably increases the problem complexity, an efficient model for strategic-level conflict management is developed in this paper. Specifically, a bi-objective N4DFTP problem that aims to minimize both potential conflicts and the trajectory cost is formulated. In consideration of the large-scale, high-complexity, and multi-objective characteristics of the N4DFTP problem, a multi-objective multi-memetic algorithm (MOMMA) that incorporates an evolutionary global search framework together with three problem-specific local search operators is implemented. It is capable of rapidly and effectively allocating 4DTs via rerouting, target time controlling, and flight level changing. Additionally, to balance the ability of exploitation and exploration of the algorithm, a special hybridization scheme is adopted for the integration of local and global search. Empirical studies using real air traffic data in China with different network complexities show that the proposed MOMMA is effective to solve the N4DFTP problem. The solutions achieved are competitive for elaborate decision support under a TBO environment.

  12. Multi-scale spatial modeling of human exposure from local sources to global intake

    DEFF Research Database (Denmark)

    Wannaz, Cedric; Fantke, Peter; Jolliet, Olivier

    2018-01-01

    Exposure studies, used in human health risk and impact assessments of chemicals, are largely performed locally or regionally. It is usually not known how global impacts resulting from exposure to point-source emissions compare to local impacts. To address this problem, we introduce Pangea..., an innovative multi-scale, spatial multimedia fate and exposure assessment model. We study local to global population exposure associated with emissions from 126 point sources matching locations of waste-to-energy plants across France. Results for three chemicals with distinct physicochemical properties... occur within a 100 km radius from the source. This suggests that, by neglecting distant low-level exposure, local assessments might only account for fractions of global cumulative intakes. We also study ~10,000 emission locations covering France more densely to determine per chemical and exposure route

  13. High-order multi-implicit spectral deferred correction methods for problems of reactive flow

    International Nuclear Information System (INIS)

    Bourlioux, Anne; Layton, Anita T.; Minion, Michael L.

    2003-01-01

    Models for reacting flow are typically based on advection-diffusion-reaction (A-D-R) partial differential equations. Many practical cases correspond to situations where the relevant time scales associated with each of the three sub-processes can be widely different, leading to disparate time-step requirements for robust and accurate time-integration. In particular, interesting regimes in combustion correspond to systems in which diffusion and reaction are much faster processes than advection. The numerical strategy introduced in this paper is a general procedure to account for this time-scale disparity. The proposed methods are high-order multi-implicit generalizations of spectral deferred correction methods (MISDC methods), constructed for the temporal integration of A-D-R equations. Spectral deferred correction methods compute a high-order approximation to the solution of a differential equation by using a simple, low-order numerical method to solve a series of correction equations, each of which increases the order of accuracy of the approximation. The key feature of MISDC methods is their flexibility in handling several sub-processes implicitly but independently, while avoiding the splitting errors present in traditional operator-splitting methods and also allowing for different time steps for each process. The stability, accuracy, and efficiency of MISDC methods are first analyzed using a linear model problem and the results are compared to semi-implicit spectral deferred correction methods. Furthermore, numerical tests on simplified reacting flows demonstrate the expected convergence rates for MISDC methods of orders three, four, and five. The gain in efficiency by independently controlling the sub-process time steps is illustrated for nonlinear problems, where reaction and diffusion are much stiffer than advection. Although the paper focuses on this specific time-scale ordering, the generalization to any ordering combination is straightforward.
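    The correction-sweep idea can be illustrated with a minimal single-rate explicit SDC step. This is an assumption-laden sketch: uniform substep nodes and a forward-Euler base method, whereas the paper's MISDC treats the A-D-R sub-processes implicitly and independently:

    ```python
    import numpy as np

    def lagrange_integration_matrix(tau):
        """S[m, j] = integral over [tau_m, tau_{m+1}] of the j-th Lagrange basis."""
        M = len(tau) - 1
        S = np.zeros((M, M + 1))
        for j in range(M + 1):
            e = np.zeros(M + 1); e[j] = 1.0
            coeffs = np.polynomial.polynomial.polyfit(tau, e, M)  # interpolating basis l_j
            P = np.polynomial.Polynomial(coeffs).integ()
            for m in range(M):
                S[m, j] = P(tau[m + 1]) - P(tau[m])
        return S

    def sdc_step(f, t0, y0, dt, M=4, sweeps=6):
        tau = t0 + np.linspace(0.0, dt, M + 1)   # substep nodes
        S = lagrange_integration_matrix(tau)
        # Provisional solution: forward Euler across the substeps.
        y = np.empty(M + 1); y[0] = y0
        for m in range(M):
            y[m + 1] = y[m] + (tau[m + 1] - tau[m]) * f(tau[m], y[m])
        # Correction sweeps: each sweep formally raises the order by one.
        for _ in range(sweeps):
            F = np.array([f(tm, ym) for tm, ym in zip(tau, y)])
            ynew = np.empty(M + 1); ynew[0] = y0
            for m in range(M):
                h = tau[m + 1] - tau[m]
                ynew[m + 1] = (ynew[m]
                               + h * (f(tau[m], ynew[m]) - F[m])  # low-order update of the correction
                               + S[m] @ F)                        # high-order quadrature of the residual
            y = ynew
        return y[-1]

    # y' = -y, y(0) = 1 over one step of size 0.1; exact value is exp(-0.1).
    approx = sdc_step(lambda t, y: -y, 0.0, 1.0, dt=0.1)
    ```

    Sweeping improves the low-order provisional solution up to the accuracy of the underlying quadrature, which is the mechanism MISDC generalizes to split A-D-R operators.
    
    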

  14. Nonlinear dynamics of the complex multi-scale network

    Science.gov (United States)

    Makarov, Vladimir V.; Kirsanov, Daniil; Goremyko, Mikhail; Andreev, Andrey; Hramov, Alexander E.

    2018-04-01

    In this paper, we study the complex multi-scale network of nonlocally coupled oscillators for the appearance of chimera states. A chimera is a special state in which, in addition to an asynchronous cluster, completely synchronous parts also exist in the system. We show that increasing the number of nodes in subgroups leads to the destruction of the synchronous interaction within the common ring and to the narrowing of the chimera region.
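    A standard setting in which chimera states are studied is a ring of phase oscillators with nonlocal coupling. The sketch below (illustrative parameters, not the paper's multi-scale network; no chimera is guaranteed for these values) integrates such a ring with Euler stepping and computes the local order parameter used to tell synchronous regions from asynchronous ones:

    ```python
    import numpy as np

    def simulate_ring(N=64, R=16, K=0.1, alpha=1.45, T=500, dt=0.05, seed=0):
        """Ring of N identical phase oscillators, each coupled to R neighbours per side."""
        rng = np.random.default_rng(seed)
        theta = rng.uniform(0, 2 * np.pi, N)
        for _ in range(T):
            dtheta = np.zeros(N)
            for k in range(-R, R + 1):
                if k == 0:
                    continue
                # phase-lagged sine coupling to the k-th neighbour
                dtheta -= (K / (2 * R)) * np.sin(theta - np.roll(theta, k) + alpha)
            theta = (theta + dt * dtheta) % (2 * np.pi)
        return theta

    def local_order(theta, R=16):
        """Local Kuramoto order parameter r_i in [0, 1] over each neighbourhood."""
        N = len(theta)
        z = np.zeros(N, dtype=complex)
        for k in range(-R, R + 1):
            z += np.exp(1j * np.roll(theta, k))
        return np.abs(z) / (2 * R + 1)

    theta = simulate_ring()
    r = local_order(theta)
    ```

    Neighbourhoods where r_i stays near 1 are locally synchronized; a chimera corresponds to the coexistence of such regions with incoherent ones along the same ring.
    
    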

  15. The benefits of global scaling in multi-criteria decision analysis

    Directory of Open Access Journals (Sweden)

    Jamie P. Monat

    2009-10-01

    Full Text Available When there are multiple competing objectives in a decision-making process, Multi-Attribute Choice scoring models are excellent tools, permitting the incorporation of both subjective and objective attributes. However, their accuracy depends upon the subjective techniques used to construct the attribute scales and their concomitant weights. Conventional techniques using local scales tend to overemphasize small differences in attribute measures, which may yield erroneous conclusions. The Range Sensitivity Principle (RSP) is often invoked to adjust attribute weights when local scales are used. In practice, however, decision makers often do not follow the prescriptions of the Range Sensitivity Principle and under-adjust the weights, resulting in potentially poor decisions. Examples are discussed, as is a proposed solution: the use of global scales instead of local scales.
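    The core pitfall can be reproduced in a few lines: under a local min-max scale, a negligible 1-unit price difference is stretched to the full [0, 1] range and can flip the ranking, while a global scale keeps it proportionate. The alternatives, ranges, and weights below are hypothetical:

    ```python
    import numpy as np

    # Two laptops differing trivially in price (100 vs. 101) but substantially
    # in quality (60 vs. 90 on a 0-100 scale).
    price = np.array([100.0, 101.0])     # lower is better
    quality = np.array([60.0, 90.0])     # higher is better
    w_price, w_quality = 0.6, 0.4

    # Local scales: min-max over only the alternatives under consideration.
    # The 1-unit price gap is stretched to the full [0, 1] range.
    local_price = (price.max() - price) / (price.max() - price.min())
    local_quality = (quality - quality.min()) / (quality.max() - quality.min())
    local_score = w_price * local_price + w_quality * local_quality

    # Global scales: normalise against the plausible attribute ranges
    # (price 0-1000, quality 0-100), so small differences stay small.
    global_price = (1000.0 - price) / 1000.0
    global_quality = quality / 100.0
    global_score = w_price * global_price + w_quality * global_quality
    ```

    With these numbers the local scale picks the marginally cheaper, much lower-quality laptop, while the global scale reverses the ranking: exactly the distortion the article attributes to under-adjusted weights on local scales.
    
    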

  16. A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.

    Directory of Open Access Journals (Sweden)

    Marko Budinich

    Full Text Available Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at the community level, were emphasized. We expect this approach to be used for integrating genomic information in microbial ecosystems. Subsequent models will provide insights into behaviors (including diversity) that take place at the ecosystem scale.
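    A CBM reduces to a linear program: maximize an objective flux subject to steady-state mass balance S v = 0 and flux bounds. The toy network below (hypothetical reactions, not the hot spring mat model) solves single-objective FBA with scipy and hints at the multi-objective treatment via an epsilon-style constraint on uptake:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy three-reaction network:
    #   v1: -> A (uptake, bounded by 10),  v2: A -> B,  v3: B -> (biomass)
    # Rows of S are metabolites A and B; steady state requires S v = 0.
    S = np.array([[1.0, -1.0, 0.0],   # metabolite A
                  [0.0, 1.0, -1.0]])  # metabolite B
    bounds = [(0, 10), (0, None), (0, None)]

    # Maximise biomass flux v3 (linprog minimises, hence the sign flip).
    res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)

    # A crude multi-objective twist: tighten a second objective (uptake) via an
    # epsilon-constraint and re-maximise biomass, tracing one trade-off point.
    res_eps = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2),
                      bounds=[(0, 6), (0, None), (0, None)])
    ```

    Sweeping the epsilon bound traces the Pareto front between the two objectives; MO-FBA/MO-FVA formalize this for whole communities of exchanging networks.
    
    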

  17. Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy

    Directory of Open Access Journals (Sweden)

    Changsheng Zhu

    2018-03-01

    Full Text Available In the process of dendritic growth simulation, computational efficiency and attainable problem scale strongly influence the simulation efficiency of the three-dimensional phase-field model. Seeking a high-performance calculation method that improves computational efficiency and expands the problem scale is therefore of great significance for research on the microstructure of materials. A high-performance calculation method based on the MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to implement quantitative numerical simulations of the three-dimensional phase-field model in a binary alloy under the condition of coupled multi-physical processes. The acceleration effect of different GPU nodes on different calculation scales is explored. On the foundation of the multi-GPU calculation model introduced, two optimization schemes, non-blocking communication and the overlap of MPI and GPU computing, are proposed. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the multi-GPU calculation model improves the computational efficiency of the three-dimensional phase-field model considerably, a 13-fold speed-up over a single GPU, and the problem scale has been expanded to 8193. The feasibility of both optimization schemes is shown, and the overlap of MPI and GPU computing performs better, a 1.7-fold improvement over the basic multi-GPU model when 21 GPUs are used.

  18. Simulation of reaction diffusion processes over biologically relevant size and time scales using multi-GPU workstations.

    Science.gov (United States)

    Hallock, Michael J; Stone, John E; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida

    2014-05-01

    Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems.
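    The spatial decomposition idea can be mimicked on the CPU: split the lattice into slabs, exchange one-cell halos between neighbours, and check that the stitched update matches the monolithic one. This numpy sketch is illustrative only, with a simple periodic diffusion stencil standing in for the RDME operator and in-process slabs standing in for the peer-to-peer GPU transfers:

    ```python
    import numpy as np

    def diffuse_global(u, nu=0.2):
        """One explicit diffusion step on the whole periodic 1-D lattice."""
        return u + nu * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

    def diffuse_decomposed(u, parts=4, nu=0.2):
        """Same step, but each 'device' owns a slab plus one-cell halos."""
        n = len(u)
        chunk = n // parts
        slabs = [u[i * chunk:(i + 1) * chunk].copy() for i in range(parts)]
        new = []
        for i, s in enumerate(slabs):
            left = slabs[(i - 1) % parts][-1]    # halo from left neighbour
            right = slabs[(i + 1) % parts][0]    # halo from right neighbour
            ext = np.concatenate(([left], s, [right]))
            new.append(ext[1:-1] + nu * (ext[:-2] - 2 * ext[1:-1] + ext[2:]))
        return np.concatenate(new)

    rng = np.random.default_rng(1)
    u0 = rng.random(64)
    ```

    Because only the halo cells cross slab boundaries, communication volume per step is independent of slab size, which is what makes the decomposition scale and what load balancing redistributes across unequal devices.
    
    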

  19. The multi-objective decision making methods based on MULTIMOORA and MOOSRA for the laptop selection problem

    Science.gov (United States)

    Aytaç Adalı, Esra; Tuş Işık, Ayşegül

    2017-06-01

    A decision making process requires the values of conflicting objectives for alternatives and the selection of the best alternative according to the needs of decision makers. Multi-objective optimization methods may provide a solution for this selection. This paper presents the laptop selection problem based on MOORA plus the full multiplicative form (MULTIMOORA) and multi-objective optimization on the basis of simple ratio analysis (MOOSRA), which are relatively new multi-objective optimization methods. The novelty of this paper is solving this problem with the MULTIMOORA and MOOSRA methods for the first time.
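    The ratio-system half of MULTIMOORA and the MOOSRA ratio can be written down directly. The decision matrix, criterion types, and weights below are hypothetical laptop data, not the paper's case study:

    ```python
    import numpy as np

    # Rows: alternatives; columns: RAM (GB), CPU score, price, weight (kg).
    X = np.array([[16, 9.0,  800, 1.2],
                  [ 8, 7.5,  900, 1.8],
                  [ 8, 6.0, 1100, 2.0]], dtype=float)
    benefit = np.array([True, True, False, False])   # higher-is-better columns
    w = np.array([0.3, 0.3, 0.25, 0.15])

    # Vector normalisation shared by both methods.
    N = X / np.sqrt((X ** 2).sum(axis=0))
    W = w * N

    # MOORA ratio system: weighted benefit sum minus weighted cost sum.
    moora = W[:, benefit].sum(axis=1) - W[:, ~benefit].sum(axis=1)

    # MOOSRA: ratio of weighted benefit sum to weighted cost sum.
    moosra = W[:, benefit].sum(axis=1) / W[:, ~benefit].sum(axis=1)
    ```

    Because alternative 0 dominates the others on every criterion here, both methods must rank it first; the methods only disagree on non-dominated alternatives.
    
    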

  20. Multi-scale window specification over streaming trajectories

    Directory of Open Access Journals (Sweden)

    Kostas Patroumpas

    2013-12-01

    Full Text Available Enormous amounts of positional information are collected by monitoring applications in domains such as fleet management, cargo transport, wildlife protection, etc. With the advent of modern location-based services, processing such data mostly focuses on providing real-time response to a variety of user requests in a continuous and scalable fashion. An important class of such queries concerns evolving trajectories that continuously trace the streaming locations of moving objects, like GPS-equipped vehicles, commodities with RFIDs, people with smartphones, etc. In this work, we propose an advanced windowing operator that enables online, incremental examination of recent motion paths at multiple resolutions for numerous point entities. When applied against incoming positions, this window can abstract trajectories at coarser representations towards the past, while retaining progressively finer features closer to the present. We explain the semantics of such multi-scale sliding windows through parameterized functions that reflect the sequential nature of trajectories and can effectively capture their spatiotemporal properties. Such a window specification goes beyond its usual role for non-blocking processing of multiple concurrent queries. Actually, it can offer concrete subsequences from each trajectory, thus preserving continuity in time and contiguity in space along the respective segments. Further, we suggest language extensions in order to express characteristic spatiotemporal queries using windows. Finally, we discuss algorithms for nested maintenance of multi-scale windows and evaluate their efficiency against streaming positional data, offering empirical evidence of their benefits to online trajectory processing.
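    The windowing semantics, full resolution near the present and progressively coarser resolution toward the past, can be sketched as an age-banded filter. The level boundaries and strides below are illustrative, not the operator's actual parameterization:

    ```python
    # Each level is (max_age_seconds, keep_every_nth_point): the last minute is
    # kept whole, minutes 1-5 are thinned 10x, and the last hour 60x.
    LEVELS = [(60, 1), (300, 10), (3600, 60)]

    def multiscale_window(trajectory, now, levels=LEVELS):
        """trajectory: list of (timestamp, x, y) points, oldest first."""
        kept = []
        prev_age = 0
        for max_age, stride in levels:
            # points whose age falls in this band, oldest first
            band = [p for p in trajectory if prev_age < now - p[0] <= max_age]
            kept.extend(band[::stride])   # older bands are thinned more aggressively
            prev_age = max_age
        return sorted(kept)               # oldest first, like the source trajectory

    # one position per second over the last hour
    trajectory = [(t, 0.1 * t, 0.0) for t in range(3600)]
    kept = multiscale_window(trajectory, now=3600)
    ```

    Because each band returns a contiguous subsequence of the original samples, the result preserves continuity in time along each retained segment, which is the property the abstract emphasizes.
    
    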

  1. Modeling Macroscopic Shape Distortions during Sintering of Multi-layers

    DEFF Research Database (Denmark)

    Tadesse Molla, Tesfaye

    as to help achieve defect free multi-layer components. The initial thickness ratio between the layers making the multi-layer has also significant effect on the extent of camber evolution depending on the material systems. During sintering of tubular bi-layer structures, tangential (hoop) stresses are very...... large compared to radial stresses. The maximum value of hoop stress, which can generate processing defects such as cracks and coating peel-offs, occurs at the beginning of the sintering cycle. Unlike most of the models defining material properties based on porosity and grain size only, the multi...... (firing). However, unintended features like shape instabilities of samples, cracks or delamination of layers may arise during sintering of multi-layer composites. Among these defects, macroscopic shape distortions in the samples can cause problems in the assembly or performance of the final component...

  2. Multi-GNSS PPP-RTK: From Large- to Small-Scale Networks

    Directory of Open Access Journals (Sweden)

    Nandakumaran Nadarajah

    2018-04-01

    Full Text Available Precise point positioning (PPP) and its integer ambiguity resolution-enabled variant, PPP-RTK (real-time kinematic), can benefit enormously from the integration of multiple global navigation satellite systems (GNSS). In such a multi-GNSS landscape, the positioning convergence time is expected to be reduced considerably as compared to the one obtained by a single-GNSS setup. It is therefore the goal of the present contribution to provide numerical insights into the role taken by the multi-GNSS integration in delivering fast and high-precision positioning solutions (sub-decimeter and centimeter levels) using PPP-RTK. To that end, we employ the Curtin PPP-RTK platform and process data-sets of GPS, the BeiDou Navigation Satellite System (BDS) and Galileo in stand-alone and combined forms. The data-sets are collected by various receiver types, ranging from high-end multi-frequency geodetic receivers to low-cost single-frequency mass-market receivers. The corresponding stations form a large-scale (Australia-wide) network as well as a small-scale network with inter-station distances less than 30 km. In the case of the Australia-wide GPS-only ambiguity-float setup, 90% of the horizontal positioning errors (kinematic mode) are shown to become less than five centimeters after 103 min. The stated required time is reduced to 66 min for the corresponding GPS + BDS + Galileo setup. The time is further reduced to 15 min by applying single-receiver ambiguity resolution. The outcomes are supported by the positioning results of the small-scale network.

  3. Facing the scaling problem: A multi-methodical approach to simulate soil erosion at hillslope and catchment scale

    Science.gov (United States)

    Schmengler, A. C.; Vlek, P. L. G.

    2012-04-01

    Modelling soil erosion requires a holistic understanding of the sediment dynamics in a complex environment. As most erosion models are scale-dependent and their parameterization is spatially limited, their application often requires special care, particularly in data-scarce environments. This study presents a hierarchical approach to overcome the limitations of a single model by using various quantitative methods and soil erosion models to cope with the issues of scale. At hillslope scale, the physically-based Water Erosion Prediction Project (WEPP)-model is used to simulate soil loss and deposition processes. Model simulations of soil loss vary between 5 and 50 t ha-1 yr-1 depending on the spatial location on the hillslope and have only limited correspondence with the results of the 137Cs technique. These differences in absolute soil loss values could be either due to internal shortcomings of each approach or to external scale-related uncertainties. Pedo-geomorphological soil investigations along a catena confirm that estimations by the 137Cs technique are more appropriate in reflecting both the spatial extent and magnitude of soil erosion at hillslope scale. In order to account for sediment dynamics at a larger scale, the spatially-distributed WaTEM/SEDEM model is used to simulate soil erosion at catchment scale and to predict sediment delivery rates into a small water reservoir. Predicted sediment yield rates are compared with results gained from a bathymetric survey and sediment core analysis. Results show that specific sediment rates of 0.6 t ha-1 yr-1 by the model are in close agreement with observed sediment yield calculated from stratigraphical changes and downcore variations in 137Cs concentrations. Sediment erosion rates averaged over the entire catchment of 1 to 2 t ha-1 yr-1 are significantly lower than results obtained at hillslope scale confirming an inverse correlation between the magnitude of erosion rates and the spatial scale of the model. The

  4. Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements

    Science.gov (United States)

    Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.

    2012-12-01

    The land surface evapotranspiration plays an important role in the surface energy balance and the water cycle. There have been significant technical and theoretical advances in our knowledge of evapotranspiration over the past two decades. Acquisition of the temporally and spatially continuous distribution of evapotranspiration using remote sensing technology has attracted the widespread attention of researchers and managers. However, remote sensing technology still has many uncertainties coming from model mechanism, model inputs, parameterization schemes, and scaling issue in the regional estimation. Achieving remotely sensed evapotranspiration (RS_ET) with confident certainty is required but difficult. As a result, it is indispensable to develop the validation methods to quantitatively assess the accuracy and error sources of the regional RS_ET estimations. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including the accuracy assessment, error source analysis, and uncertainty analysis of the validation process. It is a potentially useful approach to evaluate the accuracy and analyze the spatio-temporal properties of RS_ET at both the basin and local scales, and is appropriate to validate RS_ET in diverse resolutions at different time-scales. An independent RS_ET validation using this method was presented over the Hai River Basin, China in 2002-2009 as a case study. Validation at the basin scale showed good agreements between the 1 km annual RS_ET and the validation data such as the water balanced evapotranspiration, MODIS evapotranspiration products, precipitation, and landuse types. Validation at the local scale also had good results for monthly and daily RS_ET at 30 m and 1 km resolutions, compared with the multi-scale evapotranspiration measurements from the EC and LAS, respectively, with the footprint model over three typical landscapes. Although some

  5. A Bayesian solution to multi-target tracking problems with mixed labelling

    NARCIS (Netherlands)

    Aoki, E.H.; Boers, Y.; Svensson, Lennart; Mandal, Pranab K.; Bagchi, Arunabha

    In Multi-Target Tracking (MTT), the problem of assigning labels to tracks (track labelling) is vastly covered in literature and has been previously formulated using Bayesian recursion. However, the existing literature lacks an appropriate measure of uncertainty related to the assigned labels which

  6. Barriers and Facilitators for Health Behavior Change among Adults from Multi-Problem Households: A Qualitative Study

    NARCIS (Netherlands)

    Nagelhout, Gera; Hogeling, Lette; Spruijt, Renate; Postma, Nathalie; Vries, de Hein

    2017-01-01

    Multi-problem households are households with problems on more than one of the following core problem areas: socio-economic problems, psycho-social problems, and problems related to child care. The aim of this study was to examine barriers and facilitators for health behavior change among adults from

  7. Multi-Domain Modeling Based on Modelica

    Directory of Open Access Journals (Sweden)

    Liu Jun

    2016-01-01

    Full Text Available With the application of simulation technology to large-scale, multi-field problems, multi-domain unified modeling has become an effective way to solve them. This paper introduces several basic methods and advantages of multidisciplinary modeling, and focuses on simulation based on the Modelica language. Modelica/MWorks is a newly developed simulation platform featuring an object-oriented, non-causal language for modeling large, multi-domain systems, which makes models easier to grasp, develop and maintain. This article demonstrates a single-degree-of-freedom mechanical vibration system built in MWorks using Modelica's special connection mechanism. The example shows that multi-domain modeling is simple and feasible, offers high reusability, stays closer to the physical system, and has many other advantages.

  8. Generalist solutions to complex problems: generating practice-based evidence--the example of managing multi-morbidity.

    Science.gov (United States)

    Reeve, Joanne; Blakeman, Tom; Freeman, George K; Green, Larry A; James, Paul A; Lucassen, Peter; Martin, Carmel M; Sturmberg, Joachim P; van Weel, Chris

    2013-08-07

    A growing proportion of people are living with long term conditions. The majority have more than one. Dealing with multi-morbidity is a complex problem for health systems: for those designing and implementing healthcare as well as for those providing the evidence informing practice. Yet the concept of multi-morbidity (the presence of two or more diseases) is a product of the design of health care systems which define health care need on the basis of disease status. So does the solution lie in an alternative model of healthcare? Strengthening generalist practice has been proposed as part of the solution to tackling multi-morbidity. Generalism is a professional philosophy of practice, deeply known to many practitioners, and described as expertise in whole person medicine. But generalism lacks the evidence base needed by policy makers and planners to support service redesign. The challenge is to fill this practice-research gap in order to critically explore if and when generalist care offers a robust alternative to management of this complex problem. We need practice-based evidence to fill this gap. By recognising generalist practice as a 'complex intervention' (intervening in a complex system), we outline an approach to evaluate impact using action-research principles. We highlight the implications for those who both commission and undertake research in order to tackle this problem. Answers to the complex problem of multi-morbidity won't come from doing more of the same. We need to change systems of care, and so the systems for generating evidence to support that care. This paper contributes to that work through outlining a process for generating practice-based evidence of generalist solutions to the complex problem of person-centred care for people with multi-morbidity.

  9. MULTI-SCALE SEGMENTATION OF HIGH RESOLUTION REMOTE SENSING IMAGES BY INTEGRATING MULTIPLE FEATURES

    Directory of Open Access Journals (Sweden)

    Y. Di

    2017-05-01

    Full Text Available Most multi-scale segmentation algorithms are not aimed at high resolution remote sensing images and have difficulty communicating and using information across layers. In view of this, we propose a method for multi-scale segmentation of high resolution remote sensing images that integrates multiple features. First, the Canny operator is used to extract edge information, and then a band-weighted distance function is built to obtain the edge weight. According to the criterion, the initial segmentation objects of color images are obtained by the Kruskal minimum spanning tree algorithm. Finally, segmented images are obtained by the adaptive Mumford–Shah region merging rule combined with spectral and texture information. The proposed method is evaluated precisely using analog images and ZY-3 satellite images through quantitative and qualitative analysis. The experimental results show that the proposed multi-scale segmentation method outperforms the fractal net evolution approach (FNEA) of the eCognition software in accuracy, while being slightly inferior to FNEA in efficiency.

  10. Prediction of Coal Face Gas Concentration by Multi-Scale Selective Ensemble Hybrid Modeling

    Directory of Open Access Journals (Sweden)

    WU Xiang

    2014-06-01

    Full Text Available A selective ensemble hybrid modeling prediction method based on the wavelet transform is proposed to improve the fitting and generalization capability of existing prediction models for the coal face gas concentration, which exhibits strong stochastic volatility. The Mallat algorithm is employed for the multi-scale decomposition and single-scale reconstruction of the gas concentration time series. Each subsequence is then predicted by sparsely weighted, multiple unstable ELM (extreme learning machine) predictors within the SERELM (sparse ensemble regressors of ELM) method. Finally, the predicted values of these models are superimposed to obtain the predicted values of the original sequence. The proposed method takes advantage of the multi-scale analysis of the wavelet transform, the accuracy and speed of ELM prediction, and the generalization ability of the L1-regularized selective ensemble learning method. The results show that forecast accuracy increases considerably with the proposed method: the average relative error is 0.65%, the maximum relative error is 4.16%, and the probability of a relative error below 1% reaches 0.785.
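    The decompose-predict-recombine pipeline can be sketched with a single-level Haar transform standing in for the Mallat decomposition and least-squares AR(2) sub-predictors standing in for the sparsely weighted ELM ensemble (all of these substitutions are illustrative):

    ```python
    import numpy as np

    def haar_decompose(x):
        """Single-level Haar split into smooth and fluctuation subsequences."""
        x = np.asarray(x, dtype=float)
        approx = (x[0::2] + x[1::2]) / 2.0
        detail = (x[0::2] - x[1::2]) / 2.0
        return approx, detail

    def ar2_forecast(series):
        """One-step-ahead forecast from a least-squares AR(2) fit."""
        A = np.column_stack([series[:-2], series[1:-1]])
        coeffs, *_ = np.linalg.lstsq(A, series[2:], rcond=None)
        return coeffs @ series[-2:]

    def predict_next_pair(x):
        approx, detail = haar_decompose(x)
        a_next = ar2_forecast(approx)   # forecast each subsequence separately
        d_next = ar2_forecast(detail)
        # invert the Haar transform to recombine into the next two samples
        return a_next + d_next, a_next - d_next

    # sanity signal: a pure linear trend, which this scheme extrapolates exactly
    pred = predict_next_pair(np.arange(8.0))
    ```

    On the trend 0..7 the smooth band is itself linear and the fluctuation band constant, so the recombined forecast for the next pair is exactly (8, 9), which makes the pipeline easy to sanity-check before swapping in real predictors.
    
    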

  11. Operator approximant problems arising from quantum theory

    CERN Document Server

    Maher, Philip J

    2017-01-01

    This book offers an account of a number of aspects of operator theory, mainly developed since the 1980s, whose problems have their roots in quantum theory. The research presented is in non-commutative operator approximation theory or, to use Halmos' terminology, in operator approximants. Focusing on the concept of approximants, this self-contained book is suitable for graduate courses.

  12. Short Term Strategies for a Dynamic Multi-Period Routing Problem

    NARCIS (Netherlands)

    Angelelli, E.; Bianchessi, N.; Mansini, R.; Speranza, M. G.

    2009-01-01

    We consider a Dynamic Multi-Period Routing Problem (DMPRP) faced by a company which deals with on-line pick-up requests and has to serve them by a fleet of uncapacitated vehicles over a finite time horizon. When a request is issued, a deadline of a given number of days d ≤ 2 is associated to it: if

  13. Multi-scale modeling of inter-granular fracture in UO2

    Energy Technology Data Exchange (ETDEWEB)

    Chakraborty, Pritam [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhang, Yongfeng [Idaho National Lab. (INL), Idaho Falls, ID (United States); Tonks, Michael R. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Biner, S. Bulent [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-03-01

    A hierarchical multi-scale approach is pursued in this work to investigate the influence of porosity, pore and grain size on the intergranular brittle fracture in UO2. In this approach, molecular dynamics simulations are performed to obtain the fracture properties for different grain boundary types. A phase-field model is then utilized to perform intergranular fracture simulations of representative microstructures with different porosities, pore and grain sizes. In these simulations the grain boundary fracture properties obtained from molecular dynamics simulations are used. The responses from the phase-field fracture simulations are then fitted with a stress-based brittle fracture model usable at the engineering scale. This approach encapsulates three different length and time scales, and allows the development of microstructurally informed engineering scale model from properties evaluated at the atomistic scale.

  14. Occupancy statistics arising from weighted particle rearrangements

    International Nuclear Information System (INIS)

    Huillet, Thierry

    2007-01-01

    The box-occupancy distributions arising from weighted rearrangements of a particle system are investigated. In the grand-canonical ensemble, they are characterized by determinantal joint probability generating functions. For doubly non-negative weight matrices, fractional occupancy statistics, generalizing Fermi-Dirac and Bose-Einstein statistics, can be defined. A spatially extended version of these balls-in-boxes problems is investigated

  15. Intersection signal control multi-objective optimization based on genetic algorithm

    OpenAIRE

    Zhanhong Zhou; Ming Cai

    2014-01-01

    A signalized intersection increases not only vehicle delay, but also vehicle emissions and fuel consumption in its area. Because fuel and air pollution problems have been growing recently, an intersection signal control optimization method that reduces vehicle emissions, fuel consumption and vehicle delay is needed urgently. This paper proposes a multi-objective signal control optimization method to reduce vehicle emissions, fuel consumption and vehicle delay simultaneously at ...
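    The flavour of the approach can be sketched with a toy GA that tunes a single green-split fraction against a weighted cost standing in for the delay, emission, and fuel terms. The cost model a/g + b/(1-g) and every parameter below are hypothetical; its analytic optimum g* = sqrt(a)/(sqrt(a)+sqrt(b)) makes the result checkable:

    ```python
    import random

    # Stand-in demands on the two phases; optimum split is 2/3 for (4, 1).
    A_DEMAND, B_DEMAND = 4.0, 1.0

    def cost(g):
        """Toy weighted cost (delay/emissions/fuel proxy) of green split g."""
        return A_DEMAND / g + B_DEMAND / (1.0 - g)

    def ga_optimize(pop_size=40, generations=60, seed=3):
        rng = random.Random(seed)
        pop = [rng.uniform(0.1, 0.9) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=cost)
            elite = pop[:pop_size // 4]            # elitism: keep the best quarter
            children = []
            while len(elite) + len(children) < pop_size:
                p1, p2 = rng.sample(elite, 2)
                child = 0.5 * (p1 + p2)            # blend crossover
                child += rng.gauss(0.0, 0.02)      # mutation
                children.append(min(0.9, max(0.1, child)))
            pop = elite + children
        return min(pop, key=cost)

    best = ga_optimize()
    ```

    A real multi-objective version would keep the objectives separate (e.g. with Pareto ranking) instead of collapsing them into one weighted cost as done here.
    
    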

  16. Nature Inspired Computational Technique for the Numerical Solution of Nonlinear Singular Boundary Value Problems Arising in Physiology

    Directory of Open Access Journals (Sweden)

    Suheel Abdullah Malik

    2014-01-01

    Full Text Available We present a hybrid heuristic computing method for the numerical solution of nonlinear singular boundary value problems arising in physiology. The approximate solution is expressed as a linear combination of log-sigmoid basis functions. A fitness function representing the sum of the mean square error of the given nonlinear ordinary differential equation (ODE) and its boundary conditions is formulated. The optimization of the unknown adjustable parameters contained in the fitness function is performed by a hybrid heuristic computation algorithm based on the genetic algorithm (GA), interior point algorithm (IPA), and active set algorithm (ASA). The efficiency and viability of the proposed method are confirmed by solving three examples from physiology. The obtained approximate solutions are found in excellent agreement with the exact solutions as well as with some conventional numerical solutions.
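    The method's ingredients, a log-sigmoid series solution, a residual-plus-boundary fitness, and a numerical optimizer, can be sketched as follows. The test problem (chosen for its known solution y = x^2) and the use of scipy's BFGS in place of the GA-IPA-ASA hybrid are assumptions for illustration:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical singular BVP:  y'' + (2/x) y' = 6,  y'(0) = 0,  y(1) = 1.
    sigma = lambda z: 1.0 / (1.0 + np.exp(-z))

    def model(params, x):
        """y(x) = sum_i a_i * sigma(b_i x + c_i), with analytic derivatives."""
        a, b, c = params.reshape(3, -1)
        s = sigma(np.outer(x, b) + c)
        y = s @ a
        dy = (s * (1 - s) * b) @ a
        d2y = (s * (1 - s) * (1 - 2 * s) * b ** 2) @ a
        return y, dy, d2y

    x = np.linspace(0.05, 1.0, 30)    # collocation points (avoid the x = 0 singularity)

    def fitness(params):
        y, dy, d2y = model(params, x)
        residual = d2y + (2.0 / x) * dy - 6.0
        _, dy0, _ = model(params, np.array([1e-6]))   # y'(0) ~ 0 penalty
        y1, _, _ = model(params, np.array([1.0]))     # y(1) = 1 penalty
        return np.mean(residual ** 2) + dy0[0] ** 2 + (y1[0] - 1.0) ** 2

    rng = np.random.default_rng(0)
    p0 = rng.normal(scale=0.5, size=9)    # 3 basis functions, 9 parameters
    f0 = fitness(p0)
    res = minimize(fitness, p0, method="BFGS")
    ```

    Minimizing the residual plus boundary penalties is the same objective the paper hands to its GA-IPA-ASA hybrid; any optimizer that drives the fitness toward zero yields an approximate solution of the BVP.
    
    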

  17. Beyond harvests in the commons: multi-scale governance and turbulence in indigenous/community conserved areas in Oaxaca, Mexico

    Directory of Open Access Journals (Sweden)

    David Barton Bray

    2012-08-01

    Full Text Available Some important elements of common property theory include a focus on individual communities or user groups, local-level adjudication of conflicts, local autonomy in rule making, physical harvests, and low levels of articulation with markets. We present a case study of multi-scale collective action around indigenous/community conserved areas (ICCAs) in Oaxaca, Mexico that suggests a modification of these components of common property theory. A multi-community ICCA in Oaxaca demonstrates the importance of inter-community collective action as a key link in multi-scale governance, that conflicts are often negotiated in multiple arenas, that rules emerge at multiple scales, and that management for conservation and environmental services implies no physical harvests. Realizing economic gains from ICCAs for strict conservation may require something very different than traditional natural resource management. It requires intense engagement with extensive networks of government and civil society actors and new forms of community and inter-community collective action, or multi-scale governance. Multi-scale governance is built on trust and social capital at multiple scales and also constitutes collective action at multiple scales. However, processes of multi-scale governance are also necessarily “turbulent”, with actors frequently having conflicting values and goals to be negotiated. We present an analytic history of the emergence of community and inter-community collective action around strict conservation, together with examples of internal and external turbulence. We argue that this case study and the wider literature require an extension of the constitutive elements of common property theory.

  18. PKI security in large-scale healthcare networks.

    Science.gov (United States)

    Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos

    2012-06-01

    During the past few years, many PKIs (Public Key Infrastructures) have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these healthcare PKIs face a plethora of challenges, especially when deployed over large-scale healthcare networks. In this paper, we propose a PKI infrastructure to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI infrastructure addresses the trust issues that arise in a large-scale healthcare network, including those of multi-domain PKI infrastructures.

  19. Fuzzy bicriteria multi-index transportation problems for coal allocation planning of Taipower

    International Nuclear Information System (INIS)

    Tzeng, G.-H.; Teodorvic, D.; Hwang, M.-J.

    1996-01-01

    Taipower, the official electricity authority of Taiwan, encounters several difficulties in planning its annual coal purchase and allocation schedule, e.g., multiple sources, multiple destinations, multiple coal types, different shipping vessels, and uncertain demand and supply. In this study, these concerns are formulated as a fuzzy bicriteria multi-index transportation problem. Furthermore, an effective and interactive algorithm is proposed which combines an index-reduction method with an interactive fuzzy multi-objective linear programming technique to cope with a complicated problem that may also be prevalent in other industries. The results clearly demonstrate that this model can not only satisfy more of the actual requirements of the integrated system but also offer the decision makers (DMs) more information for reference, thereby improving decision-making quality. 34 refs., 4 figs., 4 tabs
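Stripped of the fuzzy and bicriteria machinery, the underlying multi-index transportation structure can be illustrated with a crisp, single-objective miniature: variables x[i][j][k] ship coal of type k from source i to destination j, with per-type supply and demand constraints. The instance data below are hypothetical, not Taipower's.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical miniature instance: 2 sources, 2 destinations, 2 coal types.
nS, nD, nK = 2, 2, 2
cost = np.array([[[4., 6.], [5., 3.]],       # cost[i][j][k]
                 [[2., 5.], [7., 4.]]])
supply = np.array([[30., 20.], [40., 10.]])  # supply[i][k]
demand = np.array([[50., 15.], [20., 15.]])  # demand[j][k]

def idx(i, j, k):
    return (i * nD + j) * nK + k             # flatten the 3-index variable

n = nS * nD * nK
A_eq, b_eq = [], []
for i in range(nS):                          # ship out each source's supply
    for k in range(nK):
        row = np.zeros(n)
        for j in range(nD):
            row[idx(i, j, k)] = 1.0
        A_eq.append(row); b_eq.append(supply[i, k])
for j in range(nD):                          # meet each destination's demand
    for k in range(nK):
        row = np.zeros(n)
        for i in range(nS):
            row[idx(i, j, k)] = 1.0
        A_eq.append(row); b_eq.append(demand[j, k])

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * n, method='highs')
print(res.status, res.fun)                   # status 0: optimal plan found
```

The fuzzy interactive method of the paper would replace the single crisp cost vector with membership functions over several criteria; the constraint structure stays the same.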

  20. Non-smooth optimization methods for large-scale problems: applications to mid-term power generation planning

    International Nuclear Information System (INIS)

    Emiel, G.

    2008-01-01

    This manuscript deals with large-scale non-smooth optimization that may typically arise when performing Lagrangian relaxation of difficult problems. This technique is commonly used to tackle mixed-integer linear programs or large-scale convex problems. For example, a classical approach when dealing with power generation planning problems in a stochastic environment is to perform a Lagrangian relaxation of the coupling constraints of demand. In this approach, a master problem coordinates local subproblems, specific to each generation unit. The master problem deals with a separable non-smooth dual function which can be maximized with, for example, bundle algorithms. In chapter 2, we introduce basic tools of non-smooth analysis and some recent results regarding incremental or inexact instances of non-smooth algorithms. However, in some situations, the dual problem may still be very hard to solve. For instance, when the number of dualized constraints is very large (exponential in the dimension of the primal problem), explicit dualization may no longer be possible or the update of dual variables may fail. In order to reduce the dual dimension, different heuristics have been proposed. They involve a separation procedure to dynamically select a restricted set of constraints to be dualized along the iterations. This relax-and-cut type approach has shown its numerical efficiency in many combinatorial problems. In chapter 3, we show primal-dual convergence of such a strategy when using an adapted subgradient method for the dual step and under minimal assumptions on the separation procedure. Another limit of Lagrangian relaxation may appear when the dual function is separable into highly numerous or complex sub-functions. In such a situation, the computational burden of solving all local subproblems may dominate the whole iterative process. A natural strategy would be here to take full advantage of the dual separable structure, performing a dual iteration after having
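The master/subproblem interplay can be seen on a toy example. Below, the single coupling constraint of a small continuous knapsack-type problem is dualized; each local subproblem then reduces to a sign test on a reduced cost, and the dual function is maximized by projected subgradient ascent with diminishing steps. The data are hypothetical and the method is the plain textbook subgradient scheme, not the thesis's incremental or inexact variants.

```python
import numpy as np

# Toy separable problem:
#   minimize  sum c_i x_i   s.t.  sum a_i x_i >= b,  0 <= x_i <= 1.
# Dualizing the single coupling constraint gives a concave dual function
# theta(lam) that splits into one trivial subproblem per variable.
c = np.array([4.0, 3.0, 5.0])
a = np.array([2.0, 3.0, 4.0])
b = 6.0

def dual_value_and_subgradient(lam):
    reduced = c - lam * a                 # reduced cost of each subproblem
    x = (reduced < 0).astype(float)       # each subproblem solved locally
    theta = lam * b + np.minimum(reduced, 0.0).sum()
    g = b - a @ x                         # subgradient of theta at lam
    return theta, g

lam, best = 0.0, -np.inf
for k in range(500):
    theta, g = dual_value_and_subgradient(lam)
    best = max(best, theta)
    lam = max(0.0, lam + g / (k + 1))     # projected subgradient ascent
print(best)   # approaches the LP optimum 6.75 (attained at lambda* = 1.25)
```

By weak duality the dual values never exceed the LP optimum, and with the divergent-series step sizes the best dual value converges to it.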

  1. NSGA-II algorithm for multi-objective generation expansion planning problem

    Energy Technology Data Exchange (ETDEWEB)

    Murugan, P.; Kannan, S. [Electronics and Communication Engineering Department, Arulmigu Kalasalingam College of Engineering, Krishnankoil 626190, Tamilnadu (India); Baskar, S. [Electrical Engineering Department, Thiagarajar College of Engineering, Madurai 625015, Tamilnadu (India)

    2009-04-15

    This paper presents an application of the Elitist Non-dominated Sorting Genetic Algorithm version II (NSGA-II) to the multi-objective generation expansion planning (GEP) problem. The GEP problem is considered as a two-objective problem: the first objective is the minimization of investment cost and the second is the minimization of outage cost (or maximization of reliability). To improve the performance of NSGA-II, two modifications are proposed: one is the incorporation of the Virtual Mapping Procedure (VMP), and the other is the introduction of controlled elitism in NSGA-II. A synthetic test system having 5 types of candidate units is considered for GEP over a 6-year planning horizon. The effectiveness of the proposed modifications is illustrated in detail. (author)
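NSGA-II's core operation, sorting candidate plans into non-dominated fronts, can be illustrated for the two objectives above (both minimized). The plan data are hypothetical, and only the first front is extracted; the full algorithm adds crowding distance and recursive front peeling.

```python
# Each plan is scored on two minimized objectives:
# (investment cost, outage cost).
def pareto_front(points):
    """Return indices of non-dominated points (the first NSGA-II front)."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[m] <= p[m] for m in range(len(p))) and
            any(q[m] <  p[m] for m in range(len(p)))
            for j, q in enumerate(points) if j != i)
        if not dominated:
            front.append(i)
    return front

# Hypothetical expansion plans scored on the two objectives.
plans = [(100, 9.0), (120, 5.0), (110, 5.5), (115, 6.0), (130, 4.8)]
print(pareto_front(plans))   # plan (115, 6.0) is dominated by (110, 5.5)
```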

  2. The role of zonal flows in the saturation of multi-scale gyrokinetic turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Staebler, G. M.; Candy, J. [General Atomics, San Diego, California 92186 (United States); Howard, N. T. [Oak Ridge Institute for Science Education (ORISE), Oak Ridge, Tennessee 37831 (United States); Holland, C. [University of California San Diego, San Diego, California 92093 (United States)

    2016-06-15

    The 2D spectrum of the saturated electric potential from gyrokinetic turbulence simulations that include both ion and electron scales (multi-scale) in axisymmetric tokamak geometry is analyzed. The paradigm that the turbulence is saturated when the zonal (axisymmetric) ExB flow shearing rate competes with linear growth is shown not to apply to the electron-scale turbulence. Instead, it is the mixing rate of the turbulent distribution function by the zonal ExB velocity spectrum that competes with linear growth. A model of this mechanism is shown to be able to capture the suppression of electron-scale turbulence by ion-scale turbulence and the threshold for the increase in electron-scale turbulence when the ion-scale turbulence is reduced. The model computes the strength of the zonal flow velocity and the saturated potential spectrum from the linear growth rate spectrum. The model for the saturated electric potential spectrum is applied to a quasilinear transport model and shown to accurately reproduce the electron and ion energy fluxes of the non-linear gyrokinetic multi-scale simulations. The zonal flow mixing saturation model is also shown to reproduce the non-linear upshift in the critical temperature gradient caused by zonal flows in ion-scale gyrokinetic simulations.

  3. Enhanced thermoelectric properties in p-type Bi{sub 0.4}Sb{sub 1.6}Te{sub 3} alloy by combining incorporation and doping using multi-scale CuAlO{sub 2} particles

    Energy Technology Data Exchange (ETDEWEB)

    Song, Zijun; Liu, Yuan; Zhou, Zhenxing; Lu, Xiaofang; Wang, Lianjun [State Key Laboratory for Modification of Chemical Fibers and Polymer Materials, College of Materials Science and Engineering, Donghua University, Shanghai (China); Institute of Functional Materials, Donghua University, Shanghai (China); Zhang, Qihao [State Key Laboratory of High Performance Ceramics and Superfine Microstructure, Shanghai Institute of Ceramics, Chinese Academy of Sciences, Shanghai (China); University of Chinese Academy of Sciences, Beijing (China); Jiang, Wan [State Key Laboratory for Modification of Chemical Fibers and Polymer Materials, College of Materials Science and Engineering, Donghua University, Shanghai (China); Institute of Functional Materials, Donghua University, Shanghai (China); School of Material Science and Engineering, Jingdezhen Ceramic Institute, Jingdezhen (China); Chen, Lidong [State Key Laboratory of High Performance Ceramics and Superfine Microstructure, Shanghai Institute of Ceramics, Chinese Academy of Sciences, Shanghai (China)

    2017-01-15

    Multi-scale CuAlO{sub 2} particles are introduced into the Bi{sub 0.4}Sb{sub 1.6}Te{sub 3} matrix to synergistically optimize the electrical conductivity, Seebeck coefficient, and lattice thermal conductivity. Cu originating from fine CuAlO{sub 2} grains diffuses into the Bi{sub 0.4}Sb{sub 1.6}Te{sub 3} matrix and tunes the carrier concentration, while the coarse CuAlO{sub 2} particles survive as a second phase within the matrix. The power factor is improved over the whole temperature range due to the low-energy electron filtering effect on the Seebeck coefficient and the electrical transport enhancement from mild Cu doping. Meanwhile, the remaining CuAlO{sub 2} inclusions give rise to more boundaries and newly built interfaces that scatter heat-carrying phonons, resulting in a reduced lattice thermal conductivity. Consequently, the maximum ZT is enhanced by 150% by the multi-scale microstructure regulation when the CuAlO{sub 2} content reaches 0.6 vol.%. Moreover, the ZT curves flatten over the whole temperature range after introducing the multi-scale CuAlO{sub 2} particles, which leads to a remarkable increase in the average ZT. (copyright 2016 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  4. JEMMRLA - Electron Model of a Muon RLA with Multi-pass Arcs

    Energy Technology Data Exchange (ETDEWEB)

    Bogacz, Slawomir Alex; Krafft, Geoffrey A.; Morozov, Vasiliy S.; Roblin, Yves R.

    2013-06-01

    We propose a demonstration experiment for a new concept of a 'dogbone' RLA with multi-pass return arcs -- JEMMRLA (Jlab Electron Model of Muon RLA). Such an RLA with linear-field multi-pass arcs was introduced for rapid acceleration of muons for the next generation of muon facilities. It allows for efficient use of expensive RF, while the multi-pass arc design based on linear combined-function magnets exhibits a number of advantages over separate-arc or pulsed-arc designs. Here we describe a test of this concept by scaling a GeV-scale muon design to electrons. Scaling muon momenta by the muon-to-electron mass ratio leads to a scheme in which a 4.5 MeV electron beam is injected in the middle of a 3 MeV/pass linac with two double-pass return arcs and is accelerated to 18 MeV in 4.5 passes. All spatial dimensions including the orbit distortion are scaled by a factor of 7.5, which arises from scaling the 200 MHz muon RF to a readily available 1.5 GHz. The hardware requirements are not very demanding, making the scheme straightforward to implement. Such an RLA may have applications beyond muon acceleration: in medical isotope production, radiation cancer therapy, and homeland security.
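The two scalings quoted in the abstract (the spatial factor from the RF frequencies and the final energy from the per-pass gain) can be checked directly:

```python
# Check the scaling arithmetic quoted in the abstract.
muon_rf_mhz, electron_rf_mhz = 200.0, 1500.0
scale = electron_rf_mhz / muon_rf_mhz
assert scale == 7.5                    # spatial scaling factor

injection_mev, gain_per_pass_mev, passes = 4.5, 3.0, 4.5
final_mev = injection_mev + gain_per_pass_mev * passes
assert final_mev == 18.0               # matches the quoted final energy
print(scale, final_mev)
```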

  5. Subgrid-scale stresses and scalar fluxes constructed by the multi-scale turnover Lagrangian map

    Science.gov (United States)

    AL-Bairmani, Sukaina; Li, Yi; Rosales, Carlos; Xie, Zheng-tong

    2017-04-01

    The multi-scale turnover Lagrangian map (MTLM) [C. Rosales and C. Meneveau, "Anomalous scaling and intermittency in three-dimensional synthetic turbulence," Phys. Rev. E 78, 016313 (2008)] uses nested multi-scale Lagrangian advection of fluid particles to distort a Gaussian velocity field and, as a result, generate non-Gaussian synthetic velocity fields. Passive scalar fields can be generated with the procedure when the fluid particles carry a scalar property [C. Rosales, "Synthetic three-dimensional turbulent passive scalar fields via the minimal Lagrangian map," Phys. Fluids 23, 075106 (2011)]. The synthetic fields have been shown to possess highly realistic statistics characterizing small scale intermittency, geometrical structures, and vortex dynamics. In this paper, we present a study of the synthetic fields using the filtering approach. This approach, which has not been pursued so far, provides insights on the potential applications of the synthetic fields in large eddy simulations and subgrid-scale (SGS) modelling. The MTLM method is first generalized to model scalar fields produced by an imposed linear mean profile. We then calculate the subgrid-scale stress, SGS scalar flux, SGS scalar variance, as well as related quantities from the synthetic fields. Comparison with direct numerical simulations (DNSs) shows that the synthetic fields reproduce the probability distributions of the SGS energy and scalar dissipation rather well. Related geometrical statistics also display close agreement with DNS results. The synthetic fields slightly under-estimate the mean SGS energy dissipation and slightly over-predict the mean SGS scalar variance dissipation. In general, the synthetic fields tend to slightly under-estimate the probability of large fluctuations for most quantities we have examined. Small scale anisotropy in the scalar field originated from the imposed mean gradient is captured. The sensitivity of the synthetic fields to the input spectra is assessed by

  6. Based on a multi-agent system for multi-scale simulation and application of household's LUCC: a case study for Mengcha village, Mizhi county, Shaanxi province.

    Science.gov (United States)

    Chen, Hai; Liang, Xiaoying; Li, Rui

    2013-01-01

    Multi-Agent Systems (MAS) offer a conceptual approach to include multi-actor decision making into models of land use change. Through MAS-based simulation, this paper aims to show the application of MAS to micro-scale LUCC and to reveal the transformation mechanism between different scales. The paper starts with a description of the context of MAS research. It then adopts the Nested Spatial Choice (NSC) method to construct a multi-scale LUCC decision-making model, and a case study for Mengcha village, Mizhi County, Shaanxi Province is reported. Finally, the potential and drawbacks of the approach are discussed. From our design and implementation of the MAS in a multi-scale model, a number of observations and conclusions can be drawn on the implementation and future research directions. (1) The use of the LUCC decision-making and multi-scale transformation framework provides, in our view, a more realistic modeling of the multi-scale decision-making process. (2) Using a continuous function, rather than a discrete one, to construct the decision-making of the households reflects its effects more realistically. (3) Attempts have been made to analyze household interaction quantitatively, which provides the premise and foundation for researching communication and learning among the households. (4) The scale transformation architecture constructed in this paper helps accumulate theory and experience for researching the interaction between micro land use decision-making and the macro land use landscape pattern. Our future research work will focus on: (1) how to rationally apply the risk-aversion principle and incorporate the rule of rotation between household parcels into the model; (2) exploring methods for researching household decision-making over a long period, allowing us to bridge long-term LUCC data and short-term household decision-making; (3) researching the

  7. Spectral collocation for multiparameter eigenvalue problems arising from separable boundary value problems

    Science.gov (United States)

    Plestenjak, Bor; Gheorghiu, Călin I.; Hochstenbach, Michiel E.

    2015-10-01

    In numerous science and engineering applications a partial differential equation has to be solved on some fairly regular domain that allows the use of the method of separation of variables. In several orthogonal coordinate systems separation of variables applied to the Helmholtz, Laplace, or Schrödinger equation leads to a multiparameter eigenvalue problem (MEP); important cases include Mathieu's system, Lamé's system, and a system of spheroidal wave functions. Although multiparameter approaches are exploited occasionally to solve such equations numerically, MEPs remain less well known, and the variety of available numerical methods is not wide. The classical approach of discretizing the equations using standard finite differences leads to algebraic MEPs with large matrices, which are difficult to solve efficiently. The aim of this paper is to change this perspective. We show that by combining spectral collocation methods and new efficient numerical methods for algebraic MEPs it is possible to solve such problems both very efficiently and accurately. We improve on several previous results available in the literature, and also present a MATLAB toolbox for solving a wide range of problems.
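The spectral collocation ingredient can be illustrated on a one-parameter special case. The sketch below builds the standard Chebyshev differentiation matrix (Trefethen-style) and solves the Dirichlet eigenvalue problem -u'' = λu on (-1, 1), whose exact eigenvalues are (kπ/2)²; it is a minimal stand-in, not the paper's algebraic MEP solver or its MATLAB toolbox.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on N+1 Gauss-Lobatto points
    (after Trefethen, Spectral Methods in MATLAB)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))        # negative-sum trick for the diagonal
    return D, x

# Solve -u'' = lambda * u on (-1, 1) with u(-1) = u(1) = 0; the exact
# eigenvalues are (k*pi/2)**2 for k = 1, 2, ...
N = 32
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]               # impose Dirichlet BCs by deletion
lam = np.sort(np.linalg.eigvals(-D2).real)
print(lam[:3])                         # ~ 2.4674, 9.8696, 22.2066
```

The low eigenvalues converge spectrally fast in N; an algebraic MEP couples several such discretized operators through shared spectral parameters.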

  8. HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.

    Science.gov (United States)

    Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye

    2017-02-09

    In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to accomplish the coarse-to-fine tasks of hierarchical visual recognition more effectively. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which provides a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our HD-MTL algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it provides an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to new training images and new object classes. Our experimental results demonstrate that the HD-MTL algorithm achieves very competitive accuracy rates for large-scale visual recognition.

  9. Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®

    Energy Technology Data Exchange (ETDEWEB)

    Stander, Nielen [Livermore Software Technology Corporation, CA (United States); Basudhar, Anirban [Livermore Software Technology Corporation, CA (United States); Basu, Ushnish [Livermore Software Technology Corporation, CA (United States); Gandikota, Imtiaz [Livermore Software Technology Corporation, CA (United States); Savic, Vesna [General Motors, Flint, MI (United States); Sun, Xin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hu, XiaoHua [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Pourboghrat, Farhang [The Ohio State Univ., Columbus, OH (United States); Park, Taejoon [The Ohio State Univ., Columbus, OH (United States); Mapar, Aboozar [Michigan State Univ., East Lansing, MI (United States); Kumar, Sharvan [Brown Univ., Providence, RI (United States); Ghassemi-Armaki, Hassan [Brown Univ., Providence, RI (United States); Abu-Farha, Fadi [Clemson Univ., SC (United States)

    2015-06-15

    Ever-tightening regulations on fuel economy and carbon emissions demand continual innovation in finding ways for reducing vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials by adding material diversity, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing thickness while retaining sufficient strength and ductility required for durability and safety. Such a project was proposed and is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP) funded by the Department of Energy. Under this program, new steel alloys (Third Generation Advanced High Strength Steel or 3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. In this project the principal phases identified are (i) material identification, (ii) formability optimization and (iii) multi-disciplinary vehicle optimization. This paper serves as an introduction to the LS-OPT methodology and therefore mainly focuses on the first phase, namely an approach to integrate material identification using material models of different length scales. For this purpose, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a Homogenized State Variable (SV) model, is discussed and demonstrated. The paper concludes with proposals for integrating the multi-scale methodology into the overall vehicle design.

  10. Analytical Solutions for Multi-Time Scale Fractional Stochastic Differential Equations Driven by Fractional Brownian Motion and Their Applications

    Directory of Open Access Journals (Sweden)

    Xiao-Li Ding

    2018-01-01

    Full Text Available In this paper, we investigate analytical solutions of multi-time scale fractional stochastic differential equations driven by fractional Brownian motions. We first decompose homogeneous multi-time scale fractional stochastic differential equations driven by fractional Brownian motions into independent differential subequations and give their analytical solutions. Then, we use the variation-of-constants method to obtain the solutions of the corresponding nonhomogeneous equations. Finally, we give three examples to demonstrate the applicability of our results.

  11. Nonlinear diffusion problem arising in plasma physics

    International Nuclear Information System (INIS)

    Berryman, J.G.; Holland, C.J.

    1978-01-01

    In earlier studies of plasma diffusion with Okuda-Dawson scaling (D ≈ n^{-1/2}), perturbation theory indicated that arbitrary initial data should evolve rapidly toward the separable solution of the relevant nonlinear diffusion equation. Now a Lyapunov functional has been found which is strictly decreasing in time and bounded below. The rigorous proof that arbitrary initial data evolve toward the separable solution is summarized. Rigorous bounds on the decay time are also presented.

  12. Multi-scale Dynamical Processes in Space and Astrophysical Plasmas

    CERN Document Server

    Vörös, Zoltán; IAFA 2011 - International Astrophysics Forum 2011 : Frontiers in Space Environment Research

    2012-01-01

    Magnetized plasmas in the universe exhibit complex dynamical behavior over a huge range of scales. The fundamental mechanisms of energy transport, redistribution and conversion occur at multiple scales. The driving mechanisms often include energy accumulation, free-energy-excited relaxation processes, dissipation and self-organization. The plasma processes associated with energy conversion, transport and self-organization, such as magnetic reconnection, instabilities, linear and nonlinear waves, wave-particle interactions, dynamo processes, turbulence, heating, diffusion and convection represent fundamental physical effects. They demonstrate similar dynamical behavior in near-Earth space, on the Sun, in the heliosphere and in astrophysical environments. 'Multi-scale Dynamical Processes in Space and Astrophysical Plasmas' presents the proceedings of the International Astrophysics Forum Alpbach 2011. The contributions discuss the latest advances in the exploration of dynamical behavior in space plasmas environm...

  13. Convex solutions of systems arising from Monge-Ampere equations

    Directory of Open Access Journals (Sweden)

    Haiyan Wang

    2009-10-01

    Full Text Available We establish two criteria for the existence of convex solutions to a boundary value problem for weakly coupled systems arising from the Monge-Ampère equations. We shall use fixed point theorems in a cone.

  14. Study on Multi-Depot Collaborative Transportation Problem of Milk-Run Pattern

    Directory of Open Access Journals (Sweden)

    Lou Zhenkai

    2016-01-01

    Full Text Available We analyze the relevance between the Milk-Run mode and the collaborative transportation problem, and put forward a multi-depot collaborative transportation problem in Milk-Run mode with separate supply and demand nodes. Considering both the value of transport and transport costs, we introduce the concept of node-arc flow, determine the node collection by comparing the sizes of traffic flows, and then construct a multi-transport model of the problem. Considering one-way pickup and closed delivery, we construct a two-stage algorithm model: a dynamic programming recursion determines the best pickup route, and the delivery routing problem with different start and return points is then solved by a geometric method based on the cosine. Finally, a numerical example illustrates the effectiveness of the algorithm and the reasonableness of the model.
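The abstract does not spell out its dynamic programming recursion; the standard Held-Karp recursion for a shortest pickup tour from a single depot gives the flavor of such a pickup-route DP. The distance matrix below is hypothetical.

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp dynamic programming for a shortest tour that starts at
    depot 0, visits every pickup node exactly once, and returns to 0."""
    n = len(dist)
    # best[(S, j)]: shortest path from 0 through node set S, ending at j.
    best = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            fs = frozenset(S)
            for j in S:
                best[(fs, j)] = min(best[(fs - {j}, k)] + dist[k][j]
                                    for k in S if k != j)
    full = frozenset(range(1, n))
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))

# Hypothetical symmetric distances between a depot (0) and 3 pickup nodes.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(held_karp(dist))   # length of the optimal pickup tour
```

For this instance the optimal tour is 0-1-3-2-0 (or its reverse) with length 18. The exponential state space limits exact DP to small node sets, which is consistent with Milk-Run pickup loops.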

  15. Nonlinear triple-point problems on time scales

    Directory of Open Access Journals (Sweden)

    Douglas R. Anderson

    2004-04-01

    Full Text Available We establish the existence of multiple positive solutions to the nonlinear second-order triple-point boundary-value problem on time scales, $$\displaylines{ u^{\Delta\nabla}(t) + h(t)f(t,u(t)) = 0, \cr u(a) = \alpha u(b) + \delta u^\Delta(a), \quad \beta u(c) + \gamma u^\Delta(c) = 0 }$$ for $t \in [a,c] \subset \mathbb{T}$, where $\mathbb{T}$ is a time scale, $\beta, \gamma, \delta \ge 0$ with $\beta + \gamma > 0$, $0

  16. High-Temperature Tolerance in Multi-Scale Cermet Solar-Selective Absorbing Coatings Prepared by Laser Cladding.

    Science.gov (United States)

    Pang, Xuming; Wei, Qian; Zhou, Jianxin; Ma, Huiyang

    2018-06-19

    In order to achieve cermet-based solar absorber coatings with long-term thermal stability at high temperatures, a novel single-layer, multi-scale TiC-Ni/Mo cermet coating was first prepared using laser cladding technology in atmosphere. The results show that the optical properties of the cermet coatings produced by laser cladding were much better than those of the preplaced coating. In addition, the thermal stability of the optical properties of the laser cladding coating was excellent after annealing at 650 °C for 200 h. The solar absorptance and thermal emittance of the multi-scale cermet coating were 85% and 4.7% at 650 °C. The results show that multi-scale cermet materials are more suitable for solar-selective absorbing coatings. In addition, laser cladding is a new technology that can be used for the preparation of spectrally-selective coatings.

  17. High-Temperature Tolerance in Multi-Scale Cermet Solar-Selective Absorbing Coatings Prepared by Laser Cladding

    Directory of Open Access Journals (Sweden)

    Xuming Pang

    2018-06-01

    Full Text Available In order to achieve cermet-based solar absorber coatings with long-term thermal stability at high temperatures, a novel single-layer, multi-scale TiC-Ni/Mo cermet coating was first prepared using laser cladding technology in atmosphere. The results show that the optical properties of the cermet coatings produced by laser cladding were much better than those of the preplaced coating. In addition, the thermal stability of the optical properties of the laser cladding coating was excellent after annealing at 650 °C for 200 h. The solar absorptance and thermal emittance of the multi-scale cermet coating were 85% and 4.7% at 650 °C. The results show that multi-scale cermet materials are more suitable for solar-selective absorbing coatings. In addition, laser cladding is a new technology that can be used for the preparation of spectrally-selective coatings.

  18. Small Scale Problems of the ΛCDM Model: A Short Review

    Directory of Open Access Journals (Sweden)

    Antonino Del Popolo

    2017-02-01

    Full Text Available The ΛCDM model, or concordance cosmology, as it is often called, is a paradigm at its maturity. It is clearly able to describe the universe at large scale, even if some issues remain open, such as the cosmological constant problem, the small-scale problems in galaxy formation, or the unexplained anomalies in the CMB. ΛCDM clearly shows difficulty at small scales, which could be related to our scant understanding, from the nature of dark matter to that of gravity; or to the role of baryon physics, which is not well understood and implemented in simulation codes or in semi-analytic models. At this stage, it is of fundamental importance to understand whether the problems encountered by the ΛCDM model are a sign of its limits or a sign of our failures in getting the finer details right. In the present paper, we will review the small-scale problems of the ΛCDM model, and we will discuss the proposed solutions and to what extent they are able to give us a theory accurately describing the phenomena across the complete range of scales of the observed universe.

  19. Time-Varying, Multi-Scale Adaptive System Reliability Analysis of Lifeline Infrastructure Networks

    Energy Technology Data Exchange (ETDEWEB)

    Gearhart, Jared Lee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kurtz, Nolan Scot [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-09-01

    The majority of current societal and economic needs world-wide are met by the existing networked, civil infrastructure. Because the cost of managing such infrastructure is high and increases with time, risk-informed decision making is essential for those with management responsibilities for these systems. To address such concerns, a methodology that accounts for new information, deterioration, component models, component importance, group importance, network reliability, hierarchical structure organization, and efficiency concerns has been developed. This methodology analyzes the use of new information through the lens of adaptive Importance Sampling for structural reliability problems. Deterioration, multi-scale bridge models, and time-variant component importance are investigated for a specific network. Furthermore, both bridge and pipeline networks are studied for group and component importance, as well as for hierarchical structures in the context of specific networks. Efficiency is the primary driver throughout this study. With this risk-informed approach, those responsible for management can address deteriorating infrastructure networks in an organized manner.
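The adaptive Importance Sampling mentioned above targets small failure probabilities P[g(X) ≤ 0]. A minimal non-adaptive sketch, with a scalar limit state g(x) = β - x and a proposal shifted to the design point, conveys the idea; this toy setup is an assumption for illustration, far simpler than the report's network models.

```python
import numpy as np
from math import erf, sqrt

# Importance-sampling estimate of a failure probability P[g(X) <= 0]
# with g(x) = beta - x and X ~ N(0, 1); the exact answer is Phi(-beta).
# The proposal density N(beta, 1) is centered on the failure boundary.
rng = np.random.default_rng(42)
beta = 3.0
n = 100_000

x = rng.normal(loc=beta, scale=1.0, size=n)    # sample near failure region
log_w = -0.5 * x**2 + 0.5 * (x - beta) ** 2    # log of phi(x) / phi(x-beta)
fail = x >= beta                               # indicator of g(x) <= 0
p_is = np.mean(fail * np.exp(log_w))

p_exact = 0.5 * (1.0 - erf(beta / sqrt(2.0)))  # Phi(-beta) ~ 1.35e-3
print(p_is, p_exact)
```

Crude Monte Carlo would need millions of samples for a stable estimate here; the shifted proposal concentrates samples where failures occur, and an adaptive scheme would additionally update the proposal as information accumulates.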

  20. Fuzzy Multi Objective Linear Programming Problem with Imprecise Aspiration Level and Parameters

    Directory of Open Access Journals (Sweden)

    Zahra Shahraki

    2015-07-01

    Full Text Available This paper considers multi-objective linear programming problems with a fuzzy goal for each of the objective functions and constraints. Most existing works deal with linear membership functions for fuzzy goals. In this paper, an exponential membership function is used.
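The contrast between linear and exponential membership functions can be sketched numerically. The exponential form below (with shape parameter s) is one common choice from the fuzzy programming literature, offered as an assumption since the abstract does not give the paper's exact form.

```python
import math

# Membership of a minimization goal with aspiration window [z_min, z_max]:
# full satisfaction at z_min, none at z_max. The exponential form decays
# faster than the linear one, penalizing deviations from the aspiration
# level more strictly.
def mu_linear(z, z_min, z_max):
    return min(1.0, max(0.0, (z_max - z) / (z_max - z_min)))

def mu_exponential(z, z_min, z_max, s=3.0):
    t = min(1.0, max(0.0, (z - z_min) / (z_max - z_min)))
    return (math.exp(-s * t) - math.exp(-s)) / (1.0 - math.exp(-s))

z_min, z_max = 100.0, 200.0
for z in (100.0, 150.0, 200.0):
    print(z, mu_linear(z, z_min, z_max), mu_exponential(z, z_min, z_max))
```

Both forms agree at the endpoints (membership 1 at z_min, 0 at z_max) but the exponential one assigns lower satisfaction in between, which changes the compromise solution found by the max-min fuzzy programming step.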

  1. Bio-stimuli-responsive multi-scale hyaluronic acid nanoparticles for deepened tumor penetration and enhanced therapy.

    Science.gov (United States)

    Huo, Mengmeng; Li, Wenyan; Chaudhuri, Arka Sen; Fan, Yuchao; Han, Xiu; Yang, Chen; Wu, Zhenghong; Qi, Xiaole

    2017-09-01

    In this study, we developed bio-stimuli-responsive multi-scale hyaluronic acid (HA) nanoparticles encapsulating polyamidoamine (PAMAM) dendrimers as subunits. These HA/PAMAM nanoparticles of large scale (197.10±3.00nm) were stable during systemic circulation and then enriched at the tumor sites; however, they were prone to degradation by highly expressed hyaluronidase (HAase), releasing the inner PAMAM dendrimers and regaining a small scale (5.77±0.25nm) with positive charge. In an 8-h tumor spheroid penetration assay on A549 3D tumor spheroids, the fluorescein isothiocyanate (FITC) labeled multi-scale HA/PAMAM-FITC nanoparticles penetrated deeply into these tumor spheroids upon degradation by HAase. Moreover, small-animal imaging in male nude mice bearing H22 tumors showed that HA/PAMAM-FITC nanoparticles possess more prolonged systemic circulation than both PAMAM-FITC nanoparticles and free FITC. In addition, after intravenous administration in mice bearing H22 tumors, methotrexate (MTX) loaded multi-scale HA/PAMAM-MTX nanoparticles exhibited a 2.68-fold greater antitumor activity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Multi-scale modeling strategies in materials science

    Indian Academy of Sciences (India)

    The problem of prediction of finite temperature properties of materials poses great computational challenges. The computational treatment of the multitude of length and time scales involved in determining macroscopic properties has been attempted by several workers with varying degrees of success. This paper will review ...

  3. Determining the multi-scale hedge ratios of stock index futures using the lower partial moments method

    Science.gov (United States)

    Dai, Jun; Zhou, Haigang; Zhao, Shaoquan

    2017-01-01

    This paper considers a multi-scale futures hedging strategy that minimizes lower partial moments (LPM). To do this, wavelet analysis is adopted to decompose time series data into different components. Next, different parametric estimation methods with known distributions are applied to calculate the LPM of hedged portfolios, which is the key to determining multi-scale hedge ratios over different time scales. These parametric methods are then compared with the prevailing nonparametric kernel metric method. Empirical results indicate that in the China Securities Index 300 (CSI 300) index futures and spot markets, hedge ratios and hedging efficiency estimated by the nonparametric kernel metric method are inferior to those estimated by parametric hedging models based on the features of the sequence distributions. In addition, if minimum LPM is selected as the hedging target, the hedging period, degree of risk aversion, and target return each affect the multi-scale hedge ratios and hedging efficiency.
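As a rough illustration of the minimum-LPM idea (not the paper's wavelet-based, parametric estimator), the empirical lower partial moment of a hedged portfolio can be computed directly and the hedge ratio found by grid search; the function names and the grid are illustrative assumptions:

```python
import numpy as np

def lpm(returns, target=0.0, order=2):
    """Empirical lower partial moment E[max(target - R, 0)^order]."""
    shortfall = np.maximum(target - returns, 0.0)
    return np.mean(shortfall ** order)

def min_lpm_hedge_ratio(spot, futures, target=0.0, order=2,
                        grid=np.linspace(0.0, 2.0, 201)):
    """Grid-search the hedge ratio h minimizing the LPM of spot - h * futures.

    'spot' and 'futures' are return series of equal length; in a multi-scale
    setting this would be applied to each wavelet component separately.
    """
    risks = [lpm(spot - h * futures, target, order) for h in grid]
    return grid[int(np.argmin(risks))]
```

Applying this to each wavelet-decomposed component of the return series would yield one hedge ratio per time scale, which is the multi-scale aspect described in the abstract.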

  4. Multi-scale Analysis of MEMS Sensors Subject to Drop Impacts

    Directory of Open Access Journals (Sweden)

    Sarah Zerbini

    2007-09-01

    Full Text Available The effects of accidental drops on MEMS sensors are examined within the framework of a multi-scale finite element approach. With specific reference to a polysilicon MEMS accelerometer supported by a naked die, the analysis is decoupled into macro-scale (at die length-scale) and meso-scale (at MEMS length-scale) simulations, accounting for the very small inertial contribution of the sensor to the overall dynamics of the device. Macro-scale analyses are adopted to get insights into the link between shock waves caused by the impact against a target surface and propagating inside the die, and the displacement/acceleration histories at the MEMS anchor points. Meso-scale analyses are adopted to detect the most stressed details of the sensor and to assess whether the impact can lead to possible localized failures. Numerical results show that the acceleration at sensor anchors cannot be considered an objective indicator for drop severity. Instead, accurate analyses at sensor level are necessary to establish how MEMS can fail because of drops.

  5. Indoor radon problem in energy efficient multi-storey buildings.

    Science.gov (United States)

    Yarmoshenko, I V; Vasilyev, A V; Onishchenko, A D; Kiselev, S M; Zhukovsky, M V

    2014-07-01

    Modern energy-efficient architectural solutions and building construction technologies, such as monolithic concrete structures in combination with effective insulation, reduce the air permeability of the building envelope. As a result, the air exchange rate is significantly reduced and conditions for increased radon accumulation in indoor air are created. Based on a radon survey in Ekaterinburg, Russia, a remarkable increase in indoor radon concentration levels in energy-efficient multi-storey buildings was found in comparison with similar buildings constructed before the energy-saving era. To investigate the problem of indoor radon in energy-efficient multi-storey buildings, measurements of radon concentration were performed in seven modern buildings using a radon monitoring method. Values of the air exchange rate and other indoor climate parameters in energy-efficient buildings were estimated. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Multi-GPU implementation of a VMAT treatment plan optimization algorithm

    International Nuclear Information System (INIS)

    Tian, Zhen; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B.; Peng, Fei

    2015-01-01

    Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, GPU’s relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix in cases of, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors’ group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors’ method, the sparse DDC matrix is first stored on a CPU in coordinate list format (COO). On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multi-GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of PP and MP problems are implemented on CPU or a single GPU due to their modest problem scale and computational loads. Barzilai and Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H and N) cancer case is
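The COO-to-per-GPU-CSR partitioning step described in the Methods can be sketched on a single machine with SciPy; the angle-group bookkeeping is a hypothetical stand-in for the four-GPU split (the peer-to-peer transfer itself is not shown):

```python
import numpy as np
from scipy.sparse import coo_matrix

def split_ddc_by_angle(ddc_coo, beamlet_angle, n_groups=4):
    """Split a sparse DDC matrix (voxels x beamlets, COO format) into
    per-angle-group CSR submatrices, mimicking the per-GPU partitioning
    described in the abstract.

    beamlet_angle[j] gives the angle-group index (0..n_groups-1) of
    beamlet j; the grouping criterion here is an illustrative assumption.
    """
    ddc_csc = ddc_coo.tocsc()               # column slicing is cheap in CSC
    subs = []
    for g in range(n_groups):
        cols = np.where(beamlet_angle == g)[0]
        # CSR gives fast sparse row operations (e.g., dose = D @ x per group)
        subs.append(ddc_csc[:, cols].tocsr())
    return subs
```

On the actual multi-GPU platform each submatrix would live on its own device, and per-group partial dose sums would be combined via peer-to-peer access.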

  7. Multi-GPU implementation of a VMAT treatment plan optimization algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Zhen, E-mail: Zhen.Tian@UTSouthwestern.edu, E-mail: Xun.Jia@UTSouthwestern.edu, E-mail: Steve.Jiang@UTSouthwestern.edu; Folkerts, Michael; Tan, Jun; Jia, Xun, E-mail: Zhen.Tian@UTSouthwestern.edu, E-mail: Xun.Jia@UTSouthwestern.edu, E-mail: Steve.Jiang@UTSouthwestern.edu; Jiang, Steve B., E-mail: Zhen.Tian@UTSouthwestern.edu, E-mail: Xun.Jia@UTSouthwestern.edu, E-mail: Steve.Jiang@UTSouthwestern.edu [Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas 75390 (United States); Peng, Fei [Computer Science Department, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213 (United States)

    2015-06-15

    Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, GPU’s relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix in cases of, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors’ group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors’ method, the sparse DDC matrix is first stored on a CPU in coordinate list format (COO). On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multi-GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of PP and MP problems are implemented on CPU or a single GPU due to their modest problem scale and computational loads. Barzilai and Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H and N) cancer case is

  8. Estimating Vegetation Primary Production in the Heihe River Basin of China with Multi-Source and Multi-Scale Data.

    Directory of Open Access Journals (Sweden)

    Tianxiang Cui

    Full Text Available Estimating gross primary production (GPP) and net primary production (NPP) is of significant importance in studying carbon cycles. Using models driven by multi-source and multi-scale data is a promising approach to estimate GPP and NPP at regional and global scales. With a focus on data that are openly accessible, this paper presents a GPP and NPP model driven by remotely sensed data and meteorological data with spatial resolutions varying from 30 m to 0.25 degree and temporal resolutions ranging from 3 hours to 1 month, by integrating remote sensing techniques and eco-physiological process theories. Our model is also designed as part of the Multi-source data Synergized Quantitative (MuSyQ) Remote Sensing Production System. In the presented MuSyQ-NPP algorithm, daily GPP for a 10-day period was calculated as a product of incident photosynthetically active radiation (PAR) and its fraction absorbed by vegetation (FPAR) using a light use efficiency (LUE) model. The autotrophic respiration (Ra) was determined using eco-physiological process theories, and the daily NPP was obtained as the balance between GPP and Ra. To test its feasibility at regional scales, our model was run over an arid and semi-arid region of the Heihe River Basin, China to generate daily GPP and NPP during the growing season of 2012. The results indicated that both GPP and NPP exhibit clear spatial and temporal patterns in their distribution over the Heihe River Basin during the growing season due to the temperature, water and solar influx conditions. 
    After validation against ground-based measurements, the MODIS GPP product (MOD17A2H) and results reported in recent literature, we found the MuSyQ-NPP algorithm could yield an RMSE of 2.973 gC m(-2) d(-1) and an R of 0.842 when compared with ground-based GPP, while an RMSE of 8.010 gC m(-2) d(-1) and an R of 0.682 were achieved for MODIS GPP; the estimated NPP values were also well within the range of previous literature, which proved the reliability of
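The LUE core of such an algorithm, GPP = PAR × FPAR × ε with NPP = GPP − Ra, can be sketched as follows; the environmental down-regulation scalars are generic placeholders, since the abstract does not give the MuSyQ-NPP formulation of ε:

```python
def daily_gpp(par, fpar, lue_max, temp_scalar=1.0, water_scalar=1.0):
    """Light-use-efficiency GPP (gC m-2 d-1): GPP = PAR * FPAR * eps,
    where eps = lue_max down-regulated by temperature and water scalars
    in [0, 1]. This generic LUE form is an assumption; the MuSyQ-NPP
    scalars themselves are not specified in the abstract.
    """
    eps = lue_max * temp_scalar * water_scalar
    return par * fpar * eps

def daily_npp(gpp, ra):
    """NPP as the balance between GPP and autotrophic respiration Ra."""
    return gpp - ra
```

In the described system these quantities would be evaluated per pixel and per day, with PAR and FPAR coming from the multi-source remote sensing and meteorological inputs.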

  9. Study of multi-functional precision optical measuring system for large scale equipment

    Science.gov (United States)

    Jiang, Wei; Lao, Dabao; Zhou, Weihu; Zhang, Wenying; Jiang, Xingjian; Wang, Yongxi

    2017-10-01

    The effective application of high-performance measurement technology can greatly improve large-scale equipment manufacturing ability. Therefore, the measurement of geometric parameters such as size, attitude and position requires a measurement system with high precision, multiple functions, portability and other characteristics. However, existing measuring instruments, such as the laser tracker, total station and photogrammetry system, mostly have a single function, require station moving and have other shortcomings. The laser tracker needs to work with a cooperative target, and it can hardly meet the requirements of measurement in extreme environments. The total station is mainly used for outdoor surveying and mapping, and it can hardly achieve the accuracy demanded in industrial measurement. The photogrammetry system can achieve a wide range of multi-point measurement, but its measuring range is limited and the station needs to be moved repeatedly. This paper presents a non-contact opto-electronic measuring instrument that can not only work by scanning the measurement path but also measure a cooperative target by tracking. The system is based on several key technologies, such as absolute distance measurement, two-dimensional angle measurement, automatic target recognition and accurate aiming, precision control, assembly of complex mechanical systems and multi-functional 3D visualization software. Among them, the absolute distance measurement module ensures measurement with high accuracy, and the two-dimensional angle measuring module provides precision angle measurement. The system is suitable for non-contact measurement of large-scale equipment; it can ensure the quality and performance of large-scale equipment throughout the manufacturing process and improve the manufacturing ability of large-scale and high-end equipment.

  10. Multi-Scale Modeling of an Integrated 3D Braided Composite with Applications to Helicopter Arm

    Science.gov (United States)

    Zhang, Diantang; Chen, Li; Sun, Ying; Zhang, Yifan; Qian, Kun

    2017-10-01

    A study is conducted with the aim of developing a multi-scale analytical method for designing a composite helicopter arm with a three-dimensional (3D) five-directional braided structure. Based on the analysis of the 3D braided microstructure, multi-scale finite element modeling is developed. Finite element analysis of the load capacity of the 3D five-directional braided composite helicopter arm is carried out using the software ABAQUS/Standard. The influences of the braiding angle and loading condition on the stress and strain distribution of the helicopter arm are simulated. The results show that the proposed multi-scale method is capable of accurately predicting the mechanical properties of 3D braided composites, as validated by comparison with the stress-strain curves of meso-scale RVCs. Furthermore, it is found that the braiding angle is an important factor affecting the mechanical properties of the 3D five-directional braided composite helicopter arm. Based on the optimized structure parameters, the nearly net-shaped composite helicopter arm is fabricated using a novel resin transfer moulding (RTM) process.

  11. Comparison of Single and Multi-Scale Method for Leaf and Wood Points Classification from Terrestrial Laser Scanning Data

    Science.gov (United States)

    Wei, Hongqiang; Zhou, Guiyun; Zhou, Junjie

    2018-04-01

    The classification of leaf and wood points is an essential preprocessing step for extracting inventory measurements and canopy characterization of trees from terrestrial laser scanning (TLS) data. The geometry-based approach is one of the widely used classification methods. In the geometry-based method, it is common practice to extract salient features at one single scale before the features are used for classification. It remains unclear how the scale(s) used affect the classification accuracy and efficiency. To assess the scale effect on classification accuracy and efficiency, we extracted single-scale and multi-scale salient features from the point clouds of two oak trees of different sizes and classified the points into leaf and wood. Our experimental results show that the balanced accuracy of the multi-scale method is higher than the average balanced accuracy of the single-scale method by about 10 % for both trees. The average speed-up ratio of the single-scale classifiers over the multi-scale classifier for each tree is higher than 30.

  12. Analytic hierarchy process-based approach for selecting a Pareto-optimal solution of a multi-objective, multi-site supply-chain planning problem

    Science.gov (United States)

    Ayadi, Omar; Felfel, Houssem; Masmoudi, Faouzi

    2017-07-01

    The current manufacturing environment has changed from traditional single-plant to multi-site supply chain where multiple plants are serving customer demands. In this article, a tactical multi-objective, multi-period, multi-product, multi-site supply-chain planning problem is proposed. A corresponding optimization model aiming to simultaneously minimize the total cost, maximize product quality and maximize the customer satisfaction demand level is developed. The proposed solution approach yields to a front of Pareto-optimal solutions that represents the trade-offs among the different objectives. Subsequently, the analytic hierarchy process method is applied to select the best Pareto-optimal solution according to the preferences of the decision maker. The robustness of the solutions and the proposed approach are discussed based on a sensitivity analysis and an application to a real case from the textile and apparel industry.
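The analytic hierarchy process step, selecting among the Pareto-optimal solutions, conventionally derives priority weights from a pairwise comparison matrix via its principal eigenvector; a minimal sketch of standard AHP (not the authors' specific implementation) is:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a reciprocal pairwise comparison matrix,
    via the principal eigenvector (standard AHP, Saaty scale assumed)."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

def consistency_ratio(pairwise, ri={3: 0.58, 4: 0.90, 5: 1.12}):
    """Saaty consistency ratio CR = CI / RI, with CI = (lmax - n)/(n - 1);
    CR < 0.1 is conventionally deemed acceptable."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    lmax = np.max(np.linalg.eigvals(A).real)
    ci = (lmax - n) / (n - 1)
    return ci / ri[n]
```

Given weights for the objectives (cost, quality, customer satisfaction), the Pareto-optimal solution maximizing the weighted score would be selected.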

  13. Multi-frequency direct sampling method in inverse scattering problem

    Science.gov (United States)

    Kang, Sangwoo; Lambert, Marc; Park, Won-Kwang

    2017-10-01

    We consider the direct sampling method (DSM) for the two-dimensional inverse scattering problem. Although DSM is fast, stable, and effective, some phenomena remain unexplained by the existing results. We show that the imaging function of the direct sampling method can be expressed by a Bessel function of order zero. We also clarify the previously unexplained imaging phenomena and suggest multi-frequency DSM to overcome the limitations of traditional DSM. Our method is evaluated in simulation studies using both single and multiple frequencies.
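A toy version of the sampling indicator can illustrate the order-zero Bessel behaviour and the multi-frequency averaging suggested above; the synthetic far-field data for a single point scatterer is an assumption for demonstration only:

```python
import numpy as np

def dsm_indicator(z, directions, data, k):
    """Direct sampling indicator at sampling point z (2D):
    normalized |<data, exp(i k theta . z)>| over incident/observation
    directions theta. For a point scatterer at z*, its profile follows
    |J0(k |z - z*|)|, peaking at the scatterer location."""
    phases = np.exp(1j * k * (directions @ z))
    return np.abs(np.vdot(phases, data)) / len(data)

def multi_freq_indicator(z, directions, data_by_k, wavenumbers):
    """Multi-frequency DSM: average the single-frequency indicators,
    which sharpens the main peak and damps the J0 side lobes."""
    return np.mean([dsm_indicator(z, directions, d, k)
                    for d, k in zip(data_by_k, wavenumbers)])
```

Scanning `dsm_indicator` over a grid of sampling points z produces an image whose maximum locates the scatterer; averaging over several wavenumbers suppresses the oscillatory Bessel tails.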

  14. Cooperative multi-robot observation of multiple moving targets

    International Nuclear Information System (INIS)

    Parker, L.E.; Emmons, B.A.

    1997-01-01

    An important issue that arises in the automation of many security, surveillance, and reconnaissance tasks is that of monitoring, or observing, the movements of targets navigating in a bounded area of interest. A key research issue in these problems is that of sensor placement: determining where sensors should be located to maintain the targets in view. In complex applications of this type, the use of multiple sensors dynamically moving over time is required. In this paper, the authors investigate the use of a cooperative team of autonomous sensor-based robots for multi-robot observation of multiple moving targets. They focus primarily on developing the distributed control strategies that allow the robot team to maximize the collective time during which each object is being observed by at least one robot in the area of interest. The initial efforts on this problem address the aspects of distributed control in homogeneous robot teams with equivalent sensing and movement capabilities working in an uncluttered, bounded area. This paper first formalizes the problem, discusses related work, and then shows that this problem is NP-hard. The authors then present a distributed approximate approach to solving this problem that combines low-level multi-robot control with higher-level control

  15. Analytic Approximations to the Free Boundary and Multi-dimensional Problems in Financial Derivatives Pricing

    Science.gov (United States)

    Lau, Chun Sing

    This thesis studies two types of problems in financial derivatives pricing. The first type is the free boundary problem, which can be formulated as a partial differential equation (PDE) subject to a set of free boundary conditions. Although the functional form of the free boundary condition is given explicitly, the location of the free boundary is unknown and can only be determined implicitly by imposing continuity conditions on the solution. Two specific problems are studied in detail, namely the valuation of fixed-rate mortgages and CEV American options. The second type is the multi-dimensional problem, which involves multiple correlated stochastic variables and their governing PDE. One typical problem we focus on is the valuation of basket-spread options, whose underlying asset prices are driven by correlated geometric Brownian motions (GBMs). Analytic approximate solutions are derived for each of these three problems. For each of the two free boundary problems, we propose a parametric moving boundary to approximate the unknown free boundary, so that the original problem transforms into a moving boundary problem which can be solved analytically. The governing parameter of the moving boundary is determined by imposing the first derivative continuity condition on the solution. The analytic form of the solution allows the price and the hedging parameters to be computed very efficiently. When compared against the benchmark finite-difference method, the computational time is significantly reduced without compromising the accuracy. The multi-stage scheme further allows the approximate results to systematically converge to the benchmark results as one recasts the moving boundary into a piecewise smooth continuous function. For the multi-dimensional problem, we generalize the Kirk (1995) approximate two-asset spread option formula to the case of multi-asset basket-spread option. 
Since the final formula is in closed form, all the hedging parameters can also be derived in

  16. Vehicle Routing Problem Using Genetic Algorithm with Multi Compartment on Vegetable Distribution

    Science.gov (United States)

    Kurnia, Hari; Gustri Wahyuni, Elyza; Cergas Pembrani, Elang; Gardini, Syifa Tri; Kurnia Aditya, Silfa

    2018-03-01

    The problem often faced by industries managing and distributing vegetables is how to distribute them so that their quality is properly maintained. The problems encountered include optimal route selection and short travel time, the so-called TSP (Traveling Salesman Problem). These problems can be modeled as a Vehicle Routing Problem (VRP) solved with a genetic algorithm using rank-based selection, order-based crossover, and order-based mutation on selected chromosomes. This study limits itself to 20 market points, 2 warehouses (multi compartment) and 5 vehicles. For one distribution run, a vehicle can deliver to at most 4 market points from 1 particular warehouse, and can accommodate a capacity of at most 100 kg.

  17. Manifold regularized matrix completion for multi-label learning with ADMM.

    Science.gov (United States)

    Liu, Bin; Li, Yingming; Xu, Zenglin

    2018-05-01

    Multi-label learning is a common machine learning problem arising from numerous real-world applications in diverse fields, e.g., natural language processing, bioinformatics, information retrieval and so on. Among various multi-label learning methods, the matrix completion approach has been regarded as a promising approach to transductive multi-label learning. By constructing a joint matrix comprising the feature matrix and the label matrix, the missing labels of test samples are regarded as missing values of the joint matrix. With the low-rank assumption on the constructed joint matrix, the missing labels can be recovered by minimizing its rank. Despite its success, most matrix completion based approaches ignore the smoothness assumption on unlabeled data, i.e., that neighboring instances should also share a similar set of labels. Thus they may underexploit the intrinsic structures of the data. In addition, solving the matrix completion problem can be inefficient. To this end, we propose to efficiently solve the multi-label learning problem with an enhanced matrix completion model with manifold regularization, where the graph Laplacian is used to ensure label smoothness over the data graph. To speed up the convergence of our model, we develop an efficient iterative algorithm, which solves the resulting nuclear norm minimization problem with the alternating direction method of multipliers (ADMM). Experiments on both synthetic and real-world data have shown the promising results of the proposed approach. Copyright © 2018 Elsevier Ltd. All rights reserved.
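One simplified reading of such an ADMM scheme, with singular value thresholding for the nuclear norm and a graph-Laplacian quadratic in one of the updates, can be sketched as follows; the variable splitting and parameter values are assumptions for illustration, not the authors' exact algorithm:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the prox operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def mc_admm(M, mask, L, gamma=0.1, rho=1.0, iters=200):
    """Inexact ADMM sketch for manifold-regularized matrix completion:
        min ||W||_*  +  (gamma/2) tr(Z^T L Z)
        s.t. Z = W  and  Z agrees with M on observed entries (mask).
    L is a graph Laplacian over rows (instances). The observed-entry
    projection after the Z-update makes this an inexact heuristic.
    """
    n, _ = M.shape
    Z = mask * M
    W = Z.copy()
    Y = np.zeros_like(M)
    A = gamma * L + rho * np.eye(n)         # normal matrix of the Z-update
    for _ in range(iters):
        Z = np.linalg.solve(A, rho * (W - Y))
        Z[mask] = M[mask]                   # enforce observed entries
        W = svt(Z + Y, 1.0 / rho)           # nuclear-norm prox step
        Y = Y + Z - W                       # dual ascent
    return W
```

The returned matrix fills the unobserved (label) entries with values that are simultaneously low-rank and smooth over the instance graph.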

  18. Triple solutions for multi-point boundary-value problem with p-Laplace operator

    Directory of Open Access Journals (Sweden)

    Yansheng Liu

    2009-11-01

    Full Text Available Using a fixed point theorem due to Avery and Peterson, this article shows the existence of solutions for multi-point boundary-value problem with p-Laplace operator and parameters. Also, we present an example to illustrate the results obtained.

  19. An Integer Programming Model for Multi-Echelon Supply Chain Decision Problem Considering Inventories

    Science.gov (United States)

    Harahap, Amin; Mawengkang, Herman; Siswadi; Effendi, Syahril

    2018-01-01

    In this paper we address a problem that is of significance to industry, namely the optimal decision-making for a multi-echelon supply chain and its associated inventory systems. By using the guaranteed service approach to model the multi-echelon inventory system, we develop a mixed integer programming model to simultaneously optimize the transportation, inventory and network structure of a multi-echelon supply chain. To solve the model we develop a direct search approach using a strategy of releasing nonbasic variables from their bounds, combined with the “active constraint” method. This strategy is used to force the appropriate non-integer basic variables to move to their neighbouring integer points.

  20. High performance multi-scale and multi-physics computation of nuclear power plant subjected to strong earthquake. An Overview

    International Nuclear Information System (INIS)

    Yoshimura, Shinobu; Kawai, Hiroshi; Sugimoto, Shin'ichiro; Hori, Muneo; Nakajima, Norihiro; Kobayashi, Kei

    2010-01-01

    Recently, the importance of nuclear energy has been recognized again due to serious concerns over global warming and energy security. In parallel, it is a critical issue to verify the safety capability of ageing nuclear power plants (NPPs) subjected to strong earthquakes. Since 2007, we have been developing a multi-scale and multi-physics based numerical simulator for quantitatively predicting the actual quake-proof capability of ageing NPPs, under operation or just after plant trip, when subjected to a strong earthquake. In this paper, we describe an overview of the simulator with some preliminary results. (author)

  1. Performance impact of mutation operators of a subpopulation-based genetic algorithm for multi-robot task allocation problems.

    Science.gov (United States)

    Liu, Chun; Kroll, Andreas

    2016-01-01

    Multi-robot task allocation determines the task sequence and distribution for a group of robots in multi-robot systems; it is a constrained combinatorial optimization problem that becomes more complex in the case of cooperative tasks because they introduce additional spatial and temporal constraints. To solve multi-robot task allocation problems with cooperative tasks efficiently, a subpopulation-based genetic algorithm, i.e., a crossover-free genetic algorithm employing mutation operators and elitism selection in each subpopulation, is developed in this paper. Moreover, the impact of mutation operators (swap, insertion, inversion, displacement, and their various combinations) is analyzed when solving several industrial plant inspection problems. The experimental results show that: (1) the proposed genetic algorithm can obtain better solutions than the tested binary tournament genetic algorithm with partially mapped crossover; (2) inversion mutation performs better than other tested mutation operators when solving problems without cooperative tasks, and the swap-inversion combination performs better than other tested mutation operators/combinations when solving problems with cooperative tasks. As it is difficult to produce all desired effects with a single mutation operator, using multiple mutation operators (including both inversion and swap) is suggested when solving similar combinatorial optimization problems.
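The four mutation operators studied (swap, insertion, inversion, displacement) are standard permutation operators acting on task-sequence chromosomes; a minimal sketch, with the operator combination in `mutate` chosen here purely for illustration:

```python
import random

def swap(seq, i, j):
    """Exchange the genes at positions i and j."""
    s = list(seq)
    s[i], s[j] = s[j], s[i]
    return s

def insertion(seq, i, j):
    """Remove the gene at position i and re-insert it at position j."""
    s = list(seq)
    s.insert(j, s.pop(i))
    return s

def inversion(seq, i, j):
    """Reverse the sub-sequence between positions i and j (inclusive)."""
    s = list(seq)
    s[i:j + 1] = reversed(s[i:j + 1])
    return s

def displacement(seq, i, j, k):
    """Cut the sub-sequence seq[i..j] and re-insert it at position k
    of the remainder."""
    s = list(seq)
    block, rest = s[i:j + 1], s[:i] + s[j + 1:]
    return rest[:k] + block + rest[k:]

def mutate(seq, rng=random):
    """Apply one randomly chosen operator; the swap-inversion pairing
    mirrors the combination the abstract reports working well for
    cooperative tasks."""
    i, j = sorted(rng.sample(range(len(seq)), 2))
    op = rng.choice(["swap", "inversion"])
    return swap(seq, i, j) if op == "swap" else inversion(seq, i, j)
```

In a crossover-free GA as described, each subpopulation would evolve solely through such mutations plus elitism selection.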

  2. A performance comparison of multi-objective optimization algorithms for solving nearly-zero-energy-building design problems

    NARCIS (Netherlands)

    Hamdy, M.; Nguyen, A.T. (Anh Tuan); Hensen, J.L.M.

    2016-01-01

    Integrated building design is inherently a multi-objective optimization problem where two or more conflicting objectives must be minimized and/or maximized concurrently. Many multi-objective optimization algorithms have been developed; however few of them are tested in solving building design

  3. Multi-Scale Modelling of Fatigue of Wind Turbine Rotor Blade Composites

    NARCIS (Netherlands)

    Qian, C.

    2013-01-01

    In this research, extensive fatigue tests were performed on single glass fibres and composite coupons. Comparison of the test results shows that there is a significant difference between the fibre and composite fatigue behaviour. In order to clarify this difference, a multi-scale micro-mechanical

  4. Characterization of two-scale gradient Young measures and application to homogenization

    OpenAIRE

    Babadjian, Jean-Francois; Baia, Margarida; Santos, Pedro M.

    2006-01-01

    This work is devoted to the study of two-scale gradient Young measures naturally arising in nonlinear elasticity homogenization problems. Precisely, a characterization of this class of measures is derived and an integral representation formula for homogenized energies, whose integrands satisfy very weak regularity assumptions, is obtained in terms of two-scale gradient Young measures.

  5. Landslide mapping with multi-scale object-based image analysis – a case study in the Baichi watershed, Taiwan

    Directory of Open Access Journals (Sweden)

    T. Lahousse

    2011-10-01

    Full Text Available We developed a multi-scale OBIA (object-based image analysis) landslide detection technique to map shallow landslides in the Baichi watershed, Taiwan, after the 2004 Typhoon Aere event. Our semi-automated detection method selected multiple scales through landslide size statistics analysis for successive classification rounds. The detection performance achieved a modified success rate (MSR) of 86.5% with the training dataset and 86% with the validation dataset. This performance level was due to the multi-scale aspect of our methodology, as the MSR for single-scale classification was substantially lower, even after spectral difference segmentation, with a maximum of 74%. Our multi-scale technique was capable of detecting landslides of varying sizes, including very small landslides down to 95 m2. The method presents certain limitations: the thresholds we established for classification were specific to the study area, to the landslide type in the study area, and to the spectral characteristics of the satellite image. Because updating site-specific and image-specific classification thresholds is easy with OBIA software, our multi-scale technique is expected to be useful for mapping shallow landslides at the watershed level.

  6. Multi-level nonlinear diffusion acceleration method for multigroup transport k-Eigenvalue problems

    International Nuclear Information System (INIS)

    Anistratov, Dmitriy Y.

    2011-01-01

    The nonlinear diffusion acceleration (NDA) method is an efficient and flexible transport iterative scheme for solving reactor-physics problems. This paper presents a fast iterative algorithm for solving multigroup neutron transport eigenvalue problems in 1D slab geometry. The proposed method is defined by a multi-level system of equations that includes multigroup and effective one-group low-order NDA equations. The eigenvalue is evaluated in the exact projected solution space of smallest dimensionality, namely, by solving the effective one-group eigenvalue transport problem. Numerical results illustrating the performance of the new algorithm are presented. (author)
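
    As a baseline for what acceleration schemes such as NDA compete against, a plain power iteration for the k-eigenvalue problem L·φ = (1/k)·F·φ can be sketched in a few lines. This is a generic matrix-level illustration under the assumption of dense loss and fission operators `L` and `F` with positive flux, not the author's multi-level NDA algorithm:

```python
import numpy as np

def power_iteration_k(L, F, tol=1e-10, max_iter=500):
    """Unaccelerated power iteration for L @ phi = (1/k) * F @ phi.

    Returns (k, phi). Assumes a positive dominant flux mode; this
    slow-converging baseline is what schemes like NDA aim to speed up.
    """
    phi = np.ones(L.shape[0])
    k = 1.0
    for _ in range(max_iter):
        # Solve the loss equation with the lagged fission source F*phi/k
        phi_new = np.linalg.solve(L, F @ phi / k)
        # Update k from the ratio of successive fission rates
        k_new = k * np.sum(F @ phi_new) / np.sum(F @ phi)
        if abs(k_new - k) < tol:
            phi, k = phi_new, k_new
            break
        phi, k = phi_new / np.linalg.norm(phi_new), k_new
    return k, phi / np.linalg.norm(phi)
```

    For example, with L = 2I and F = I the exact eigenvalue is k = 0.5, which the iteration reaches in two sweeps.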

  7. Solving a bi-objective mathematical model for location-routing problem with time windows in multi-echelon reverse logistics using metaheuristic procedure

    Science.gov (United States)

    Ghezavati, V. R.; Beigi, M.

    2016-12-01

    During the last decade, stringent pressures from environmental and social requirements have spurred interest in designing reverse logistics (RL) networks. The success of a logistics system may depend on the decisions of facility locations and vehicle routings. The location-routing problem (LRP) simultaneously locates the facilities and designs the travel routes for vehicles among established facilities and existing demand points. In this paper, the location-routing problem with time windows (LRPTW) with a homogeneous fleet is considered in the design of a multi-echelon, capacitated reverse logistics network, a setting that arises in many real-life logistics management situations. Our proposed RL network consists of hybrid collection/inspection centers, recovery centers and disposal centers. Here, we present a new bi-objective mathematical programming (BOMP) model for the LRPTW in reverse logistics. Since this type of problem is NP-hard, the non-dominated sorting genetic algorithm II (NSGA-II) is proposed to obtain the Pareto frontier for the given problem. Several numerical examples are presented to illustrate the effectiveness of the proposed model and algorithm. The present work is also an effort to effectively implement the ɛ-constraint method in GAMS software for producing the Pareto-optimal solutions of a BOMP. The results of the proposed algorithm have been compared with the ɛ-constraint method. The computational results show that the ɛ-constraint method is able to solve small-size instances to optimality within reasonable computing times, and that for medium-to-large-sized problems the proposed NSGA-II works better than the ɛ-constraint method.
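
    The ɛ-constraint idea can be illustrated on a finite candidate set: minimize the first objective subject to a bound ɛ on the second, and sweep ɛ to trace Pareto-optimal points. A minimal brute-force sketch, not the authors' GAMS model; the candidate list and ɛ grid are hypothetical inputs:

```python
def epsilon_constraint(solutions, epsilons):
    """Brute-force epsilon-constraint over a finite candidate set.

    solutions: list of (f1, f2) objective pairs, both minimized.
    For each epsilon, minimize f1 subject to f2 <= epsilon; collecting
    the minimizers traces out Pareto-optimal points.
    """
    frontier = set()
    for eps in epsilons:
        feasible = [s for s in solutions if s[1] <= eps]
        if feasible:
            frontier.add(min(feasible, key=lambda s: s[0]))
    return sorted(frontier)
```

    Dominated candidates are never selected for any ɛ, so only Pareto-optimal pairs appear in the result.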

  8. Solving multi-objective job shop problem using nature-based algorithms: new Pareto approximation features

    Directory of Open Access Journals (Sweden)

    Jarosław Rudy

    2015-01-01

    Full Text Available In this paper the job shop scheduling problem (JSP) with two criteria minimized simultaneously is considered. The JSP is a frequently used model in real-world applications of combinatorial optimization, yet multi-objective job shop problems (MOJSP) have rarely been studied. We implement and compare two multi-agent nature-based methods, namely ant colony optimization (ACO) and a genetic algorithm (GA), for the MOJSP. Both methods employ a technique taken from multi-criteria decision analysis to establish a ranking of solutions. ACO and GA differ in how they keep information about previously found solutions and their quality, which affects the course of the search. As a result, new features of the Pareto approximations provided by these algorithms are observed: aside from the slight superiority of the ACO method, the Pareto frontier approximations provided by the two methods are disjoint sets. Thus, both methods can be used to search mutually exclusive areas of the Pareto frontier.
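
    Both ACO and GA ultimately report a set of mutually non-dominated schedules. A generic non-dominated filter for minimized objective vectors, which is all that is needed to extract a Pareto-front approximation from a population, might look like this (a sketch, not the paper's implementation):

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of objective tuples,
    all objectives minimized. Quadratic scan; adequate for the modest
    population sizes typical of GA/ACO runs."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

    Comparing the fronts returned for two algorithms' populations immediately shows whether their approximations overlap or, as reported here, form disjoint sets.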

  9. Multi-Scale Factor Analysis of High-Dimensional Brain Signals

    KAUST Repository

    Ting, Chee-Ming

    2017-05-18

    In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive spatio-temporal data defined over the complex networks into a finite set of regional clusters. To achieve further dimension reduction, we represent the signals in each cluster by a small number of latent factors. The correlation matrix for all nodes in the network is approximated by lower-dimensional sub-structures derived from the cluster-specific factors. To estimate regional connectivity between numerous nodes (within each cluster), we apply principal components analysis (PCA) to produce factors which are derived as the optimal reconstruction of the observed signals under the squared loss. Then, we estimate global connectivity (between clusters or sub-networks) based on the factors across regions, using the RV-coefficient as the cross-dependence measure. This gives a reliable and computationally efficient multi-scale analysis of both regional and global dependencies of the large networks. The proposed approach is applied to estimate brain connectivity networks using functional magnetic resonance imaging (fMRI) data. Results on resting-state fMRI reveal interesting modular and hierarchical organization of human brain networks during rest.
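
    The two building blocks of the regional/global split can be sketched with plain numpy: PCA factor scores within one cluster, and the RV coefficient between two clusters' signals. This is a simplified stand-in for the MSFA model, with hypothetical data shapes (time points x nodes):

```python
import numpy as np

def pca_factors(X, k):
    """Leading-k principal-component factor scores of data X (T x p):
    the optimal rank-k reconstruction basis under squared loss."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T  # T x k factor time series

def rv_coefficient(X, Y):
    """RV coefficient in [0, 1]: a matrix-level correlation between two
    multivariate time series sharing the same time axis."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Sx, Sy = Xc @ Xc.T, Yc @ Yc.T
    return np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))
```

    The RV coefficient is invariant to recentering and common rescaling, so RV(X, X) = 1 and RV(X, 2X + 1) = 1, which is why it serves as a cross-dependence measure between whole sub-networks rather than individual node pairs.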

  10. Fast optimal wavefront reconstruction for multi-conjugate adaptive optics using the Fourier domain preconditioned conjugate gradient algorithm.

    Science.gov (United States)

    Vogel, Curtis R; Yang, Qiang

    2006-08-21

    We present two different implementations of the Fourier domain preconditioned conjugate gradient algorithm (FD-PCG) to efficiently solve the large structured linear systems that arise in optimal volume turbulence estimation, or tomography, for multi-conjugate adaptive optics (MCAO). We describe how to deal with several critical technical issues, including the cone coordinate transformation problem and sensor subaperture grid spacing. We also extend the FD-PCG approach to handle the deformable mirror fitting problem for MCAO.
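
    The core of any PCG variant, including FD-PCG, is the standard preconditioned conjugate gradient loop; what changes is only the preconditioner applied to the residual (for FD-PCG, a diagonal operator in the Fourier domain). A generic numpy sketch for a symmetric positive-definite system, where `M_inv` is any callable approximating the inverse of A:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for SPD A x = b.

    M_inv applies the preconditioner inverse; passing a cheap
    approximation of A^{-1} reduces the iteration count.
    """
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    z = M_inv(r)                  # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # update search direction
        rz = rz_new
    return x
```

    A Jacobi preconditioner, `lambda r: r / np.diag(A)`, is the simplest choice; the FD-PCG approach of the paper instead applies its scaling after transforming the residual to the Fourier domain.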

  11. Response of Moist Convection to Multi-scale Surface Flux Heterogeneity

    Science.gov (United States)

    Kang, S. L.; Ryu, J. H.

    2015-12-01

    We investigate the response of moist convection to the multi-scale structure of the spatial variation of surface sensible heat flux (SHF) during the afternoon evolution of the convective boundary layer (CBL), utilizing a mesoscale-domain large eddy simulation (LES) model. The multi-scale surface heterogeneity is created analytically as a function of the spectral slope, over wavelengths from a few tens of km down to a few hundreds of m, of the surface SHF spectrum on a log-log scale. The response of moist convection to a κ⁻³-slope (where κ is wavenumber) surface SHF field is compared with that to a κ⁻²-slope surface, which has a relatively weak mesoscale feature, and to a homogeneous κ⁰-slope surface. Given the surface energy balance with a spatially uniform available energy, the prescribed SHF has a 180° phase lag with the latent heat flux (LHF) over a horizontal domain of (several tens of km)². Thus, warmer (cooler) surface is relatively dry (moist). The same observation-based sounding is prescribed as the initial condition for all cases. In all the κ⁻³-slope surface heterogeneity cases, early non-precipitating shallow clouds develop further into precipitating deep thunderstorms, whereas in all the κ⁻²-slope cases only shallow clouds develop. We compare the vertical profiles of domain-averaged fluxes and variances, and the mesoscale and turbulence contributions to them, between the κ⁻³- and κ⁻²-slope cases. The cross-scale processes are also investigated.

  12. Applying DLM and DCM concepts in a multi-scale data environment

    NARCIS (Netherlands)

    Stoter, Jantien; Meijers, Martijn; van Oosterom, Peter J.M.; Grünreich, Dietmar; Kraak, Menno-Jan

    2010-01-01

    This extended abstract presents work in progress in which we explore the DLM and DCM concepts in a multi-scale topographic data environment. The abstract is prepared as input for the Symposium on Generalisation and Data Integration (GDI), University of Colorado, Boulder, 20-22 June 2010.

  13. Modeling Impact-induced Failure of Polysilicon MEMS: A Multi-scale Approach.

    Science.gov (United States)

    Mariani, Stefano; Ghisi, Aldo; Corigliano, Alberto; Zerbini, Sarah

    2009-01-01

    Failure of packaged polysilicon micro-electro-mechanical systems (MEMS) subjected to impacts involves phenomena occurring at several length-scales. In this paper we present a multi-scale finite element approach to properly allow for: (i) the propagation of stress waves inside the package; (ii) the dynamics of the whole MEMS; (iii) the spreading of micro-cracking in the failing part(s) of the sensor. Through Monte Carlo simulations, some effects of polysilicon micro-structure on the failure mode are elucidated.

  14. Small-scale multi-axial hybrid simulation of a shear-critical reinforced concrete frame

    Science.gov (United States)

    Sadeghian, Vahid; Kwon, Oh-Sung; Vecchio, Frank

    2017-10-01

    This study presents a numerical multi-scale simulation framework which is extended to accommodate hybrid simulation (numerical-experimental integration). The framework is enhanced with a standardized data exchange format and connected to a generalized controller interface program which facilitates communication with various types of laboratory equipment and testing configurations. A small-scale experimental program was conducted using six degree-of-freedom hydraulic testing equipment to verify the proposed framework and provide additional data for small-scale testing of shear-critical reinforced concrete structures. The specimens were tested in a multi-axial hybrid simulation manner under a reversed cyclic loading condition simulating earthquake forces. The physical models were 1/3.23-scale representations of a beam and two columns. A mixed-type modelling technique was employed to analyze the remainder of the structures. The hybrid simulation results were compared against those obtained from a large-scale test and finite element analyses. The study found that if precautions are taken in preparing model materials and if the shear-related mechanisms are accurately considered in the numerical model, small-scale hybrid simulations can adequately simulate the behaviour of shear-critical structures. Although the findings of the study are promising, additional test data are required to draw general conclusions.

  15. An Innovative Heuristic in Multi-Item Replenishment Problem for One Warehouse and N Retailers

    Directory of Open Access Journals (Sweden)

    Yugowati Praharsi

    2014-01-01

    Full Text Available The joint replenishment problem (JRP) is a type of inventory model which aims to minimize the total inventory cost, consisting of major ordering cost, minor ordering cost and inventory holding cost. Unlike previous papers, this study considers one warehouse, multiple items and N retailers. An innovative heuristic approach is developed to solve the problem. In this paper, we consider a multi-echelon inventory system and seek a balance between the ordering cost and the inventory holding costs at each installation. The computational results show that the innovative heuristic provides solutions close to the exact optimum, and is more efficient in terms of computational time and number of iterations.
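
    The cost structure a JRP heuristic minimizes can be written down directly. In the classic formulation with a base cycle T and integer frequency multipliers k_i (item i is ordered every k_i cycles), the cost per unit time and the optimal T for fixed multipliers follow from elementary calculus. A textbook sketch, not the paper's innovative heuristic; S, s_i, h_i, d_i denote the major ordering cost, minor ordering costs, holding costs and demand rates:

```python
def jrp_cost(T, k, S, s, h, d):
    """Total cost per unit time for the classic joint replenishment model:
    major cost S every base cycle T, item i ordered every k[i] cycles."""
    order = (S + sum(si / ki for si, ki in zip(s, k))) / T
    hold = (T / 2.0) * sum(ki * hi * di for ki, hi, di in zip(k, h, d))
    return order + hold

def best_base_cycle(k, S, s, h, d):
    """Optimal base cycle T for fixed multipliers k (from dTC/dT = 0)."""
    num = 2.0 * (S + sum(si / ki for si, ki in zip(s, k)))
    den = sum(ki * hi * di for ki, hi, di in zip(k, h, d))
    return (num / den) ** 0.5
```

    Heuristics for the JRP typically alternate between this closed-form T update and integer adjustments of the multipliers k_i; with a single item and k = [1] the model collapses to the standard EOQ trade-off.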

  16. A parallel multi-domain solution methodology applied to nonlinear thermal transport problems in nuclear fuel pins

    Energy Technology Data Exchange (ETDEWEB)

    Philip, Bobby, E-mail: philipb@ornl.gov [Oak Ridge National Laboratory, One Bethel Valley Road, Oak Ridge, TN 37831 (United States); Berrill, Mark A.; Allu, Srikanth; Hamilton, Steven P.; Sampath, Rahul S.; Clarno, Kevin T. [Oak Ridge National Laboratory, One Bethel Valley Road, Oak Ridge, TN 37831 (United States); Dilts, Gary A. [Los Alamos National Laboratory, PO Box 1663, Los Alamos, NM 87545 (United States)

    2015-04-01

    This paper describes an efficient and nonlinearly consistent parallel solution methodology for solving coupled nonlinear thermal transport problems that occur in nuclear reactor applications over hundreds of individual 3D physical subdomains. Efficiency is obtained by leveraging knowledge of the physical domains, the physics on individual domains, and the couplings between them for preconditioning within a Jacobian-Free Newton Krylov method. The computational infrastructure that enabled this work, namely the open-source Advanced Multi-Physics (AMP) package developed by the authors, is described. Verification and validation experiments, and parallel performance analyses in weak and strong scaling studies demonstrating the achieved efficiency of the algorithm, are presented. Furthermore, numerical experiments demonstrate that the preconditioner developed is independent of the number of fuel subdomains in a fuel rod, which is particularly important when simulating different types of fuel rods. Finally, we demonstrate the power of the coupling methodology by considering problems with couplings between surface and volume physics and coupling of nonlinear thermal transport in fuel rods to an external radiation transport code.

  17. Multi-physics and multi-scale characterization of shale anisotropy

    Science.gov (United States)

    Sarout, J.; Nadri, D.; Delle Piane, C.; Esteban, L.; Dewhurst, D.; Clennell, M. B.

    2012-12-01

    Shales are the most abundant sedimentary rock type in the Earth's shallow crust. In the past decade or so, they have attracted increased attention from the petroleum industry as reservoirs, as well as more traditionally for their sealing capacity for hydrocarbon/CO2 traps or underground waste repositories. The effectiveness of both fundamental and applied shale research is currently limited by (i) the extreme variability of physical, mechanical and chemical properties observed for these rocks, and by (ii) the scarce data currently available. The variability in observed properties is poorly understood due to many factors that are often irrelevant for other sedimentary rocks. The relationships between these properties and the petrophysical measurements performed at the field and laboratory scales are not straightforward, translating to a scale dependency typical of shale behaviour. In addition, the complex and often anisotropic micro-/meso-structures of shales give rise to a directional dependency of some of the measured physical properties that are tensorial by nature such as permeability or elastic stiffness. Currently, fundamental understanding of the parameters controlling the directional and scale dependency of shale properties is far from complete. Selected results of a multi-physics laboratory investigation of the directional and scale dependency of some critical shale properties are reported. In particular, anisotropic features of shale micro-/meso-structures are related to the directional-dependency of elastic and fluid transport properties: - Micro-/meso-structure (μm to cm scale) characterization by electron microscopy and X-ray tomography; - Estimation of elastic anisotropy parameters on a single specimen using elastic wave propagation (cm scale); - Estimation of the permeability tensor using the steady-state method on orthogonal specimens (cm scale); - Estimation of the low-frequency diffusivity tensor using NMR method on orthogonal specimens (example

  18. Development of an Efficient Meso- scale Multi-phase Flow Solver in Nuclear Applications

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Taehun [City Univ. (CUNY), NY (United States)

    2015-10-20

    The proposed research aims at formulating a predictive high-order Lattice Boltzmann Equation for multi-phase flows relevant to nuclear energy applications - namely, saturated and sub-cooled boiling in reactors, and liquid-liquid mixing and extraction for fuel cycle separation. An efficient flow solver will be developed based on the Finite Element based Lattice Boltzmann Method (FE-LBM), accounting for phase-change heat transfer and capable of treating multiple phases over length scales from the submicron to the meter. A thermal LBM will be developed in order to handle adjustable Prandtl number, arbitrary specific heat ratio, a wide range of temperature variations, better numerical stability during liquid-vapor phase change, and full thermo-hydrodynamic consistency. Two-phase FE-LBM will be extended to liquid-liquid-gas multi-phase flows for application to high-fidelity simulations building up from the meso-scale to the equipment sub-component scale. While several relevant applications exist, the initial applications for demonstration of the efficient methods to be developed as part of this project include numerical investigations of Critical Heat Flux (CHF) phenomena in nuclear reactor fuel bundles, and liquid-liquid mixing and interfacial area generation for liquid-liquid separations. In addition, targeted experiments will be conducted for validation of this advanced multi-phase model.

  19. Multi-scale approach in numerical reservoir simulation; Uma abordagem multiescala na simulacao numerica de reservatorios

    Energy Technology Data Exchange (ETDEWEB)

    Guedes, Solange da Silva

    1998-07-01

    Advances in petroleum reservoir descriptions have provided an amount of data that cannot be handled directly during numerical simulations. This detailed geological information must be incorporated into a coarser model during multiphase fluid flow simulations by means of some upscaling technique. The most common approach is pseudo relative permeability functions, of which the Kyte and Berry method (1975) is the most widely used. In this work, a multi-scale computational model for multiphase flow is proposed that treats the upscaling implicitly, without using pseudo functions. By solving a sequence of local problems on subdomains of the refined scale it is possible to achieve results with a coarser grid without expensive computations on a fine-grid model. The main advantage of this new procedure is that it treats the upscaling step implicitly in the solution process, overcoming some practical difficulties related to the use of traditional pseudo functions. Results of two-dimensional two-phase flow simulations considering homogeneous porous media are presented. Some examples compare the results of this approach with those of the commercial upscaling program PSEUDO, a module of the reservoir simulation software ECLIPSE. (author)

  20. Applying the global RCP-SSP-SPA scenario framework at sub-national scale: A multi-scale and participatory scenario approach.

    Science.gov (United States)

    Kebede, Abiy S; Nicholls, Robert J; Allan, Andrew; Arto, Iñaki; Cazcarro, Ignacio; Fernandes, Jose A; Hill, Chris T; Hutton, Craig W; Kay, Susan; Lázár, Attila N; Macadam, Ian; Palmer, Matthew; Suckall, Natalie; Tompkins, Emma L; Vincent, Katharine; Whitehead, Paul W

    2018-09-01

    To better anticipate potential impacts of climate change, diverse information about the future is required, including climate, society and economy, and adaptation and mitigation. To address this need, a global RCP (Representative Concentration Pathways), SSP (Shared Socio-economic Pathways), and SPA (Shared climate Policy Assumptions) (RCP-SSP-SPA) scenario framework has been developed in connection with the Intergovernmental Panel on Climate Change Fifth Assessment Report (IPCC-AR5). Application of this full global framework at sub-national scales introduces two key challenges: added complexity in capturing the multiple dimensions of change, and issues of scale. Perhaps for this reason, there are few such applications of this new framework. Here, we present an integrated multi-scale hybrid scenario approach that combines both expert-based and participatory methods. The framework has been developed and applied within the DECCMA project with the purpose of exploring migration and adaptation in three deltas across West Africa and South Asia: (i) the Volta delta (Ghana), (ii) the Mahanadi delta (India), and (iii) the Ganges-Brahmaputra-Meghna (GBM) delta (Bangladesh/India). Using a climate scenario that encompasses a wide range of impacts (RCP8.5) combined with three SSP-based socio-economic scenarios (SSP2, SSP3, SSP5), we generate highly divergent and challenging scenario contexts across multiple scales against which the robustness of the human and natural systems within the deltas is tested. In addition, we consider four distinct adaptation policy trajectories: Minimum intervention, Economic capacity expansion, System efficiency enhancement, and System restructuring, which describe alternative future bundles of adaptation actions/measures under different socio-economic trajectories. The paper highlights the importance of multi-scale (combined top-down and bottom-up) and participatory (joint expert-stakeholder) scenario methods for addressing uncertainty in adaptation decision

  1. Effects of multi-stakeholder platforms on multi-stakeholder innovation networks: Implications for research for development interventions targeting innovations at scale

    Science.gov (United States)

    Schut, Marc; Hermans, Frans; van Asten, Piet; Leeuwis, Cees

    2018-01-01

    Multi-stakeholder platforms (MSPs) have been playing an increasing role in interventions aiming to generate and scale innovations in agricultural systems. However, the contribution of MSPs in achieving innovations and scaling has been varied, and many factors have been reported to be important for their performance. This paper aims to provide evidence on the contribution of MSPs to innovation and scaling by focusing on three developing country cases in Burundi, Democratic Republic of Congo, and Rwanda. Through social network analysis and logistic models, the paper studies the changes in the characteristics of multi-stakeholder innovation networks targeted by MSPs and identifies factors that play significant roles in triggering these changes. The results demonstrate that MSPs do not necessarily expand and decentralize innovation networks but can lead to contraction and centralization in the initial years of implementation. They show that some of the intended next users of interventions with MSPs (local-level actors) left the innovation networks, whereas the lead organization controlling resource allocation in the MSPs substantially increased its centrality. They also indicate that not all the factors of change in innovation networks are country specific. Initial conditions of innovation networks and funding provided by the MSPs are common factors explaining changes in innovation networks across countries and across different network functions. The study argues that investigating multi-stakeholder innovation network characteristics targeted by the MSP using a network approach in early implementation can contribute to better performance in generating and scaling innovations, and that funding can be an effective implementation tool in developing country contexts. PMID:29870559

  3. Stability of multi-objective bi-level linear programming problems under fuzziness

    Directory of Open Access Journals (Sweden)

    Abo-Sinna Mahmoud A.

    2013-01-01

    Full Text Available This paper deals with multi-objective bi-level linear programming problems under a fuzzy environment. In the proposed method, tentative solutions are obtained and evaluated using the partial information on the preferences of the decision-makers at each level. The existing results concerning the qualitative analysis of some basic notions in parametric linear programming problems are reformulated to study the stability of multi-objective bi-level linear programming problems. An algorithm for obtaining any subset of the parametric space which has the same corresponding Pareto optimal solution is presented. This paper also establishes a model for the supply-demand interaction in the age of electronic commerce (EC). First, the study uses the individual objectives of both parties as the foundation of the supply-demand interaction. It then divides the interaction, in the age of electronic commerce, into the following two classifications: (i) market transactions, with the primary focus on the supply-demand relationship in the marketplace; and (ii) information service, with the primary focus on the provider and the user of the information service. By applying the bi-level programming technique to the interaction process, the study develops an analytical process to explain how the supply-demand interaction achieves a compromise or why the process fails. Finally, a numerical example of information service is provided for the sake of illustration.

  4. Geoelectrical Measurement of Multi-Scale Mass Transfer Parameters

    Energy Technology Data Exchange (ETDEWEB)

    Day-Lewis, Frederick; Singha, Kamini; Haggerty, Roy; Johnson, Tim; Binley, Andrew; Lane, John

    2014-01-16

    The project pursued a three-part research plan involving (1) development of computer codes and techniques to estimate mass-transfer parameters from time-lapse electrical data; (2) bench-scale experiments on synthetic materials and materials from cores from the Hanford 300 Area; and (3) field demonstration experiments at the DOE's Hanford 300 Area. In a synergistic add-on to our work plan, we analyzed data from field experiments performed at the DOE Naturita Site under a separate DOE SBR grant, on which PI Day-Lewis served as co-PI. Techniques developed for application to Hanford datasets were also applied to data from Naturita. The Department of Energy (DOE) faces enormous scientific and engineering challenges associated with the remediation of legacy contamination at former nuclear weapons production facilities. Selection, design and optimization of appropriate site remedies (e.g., pump-and-treat, biostimulation, or monitored natural attenuation) require reliable predictive models of radionuclide fate and transport; however, current modeling capabilities are limited by an incomplete understanding of multi-scale mass transfer: its rates, scales, and the heterogeneity of controlling parameters. At many DOE sites, long "tailing" behavior, concentration rebound, and slower-than-expected cleanup are observed; these observations are all consistent with multi-scale mass transfer [Haggerty and Gorelick, 1995; Haggerty et al., 2000; 2004], which renders pump-and-treat remediation and biotransformation inefficient and slow [Haggerty and Gorelick, 1994; Harvey et al., 1994; Wilson, 1997]. Despite the importance of mass transfer, there are significant uncertainties associated with controlling parameters, and the prevalence of mass transfer remains a point of debate [e.g., Hill et al., 2006; Molz et al., 2006] for lack of experimental methods to verify and measure it in situ or independently of tracer breakthrough. There is a critical need for new field-experimental techniques to

  5. Comparison of Multi-Scale Digital Elevation Models for Defining Waterways and Catchments Over Large Areas

    Science.gov (United States)

    Harris, B.; McDougall, K.; Barry, M.

    2012-07-01

    Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for analysis of larger data sets and facilitate a consistent tool for the creation and analysis of waterways over extensive areas. However, such analyses are rarely developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines the definition of waterways and catchments over an area of approximately 25,000 km2 to establish the optimal DEM scale required for waterway delineation over large regional projects. The comparative study analysed multi-scale DEMs over two test areas (the Wivenhoe catchment, 543 km2, and a detailed 13 km2 area within it), including various data types, scales, quality, and variable catchment input parameters. Historic and available DEM data were compared to high-resolution Lidar-based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad-scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.
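
    Waterway delineation from a DEM typically starts with a flow-direction pass. A minimal D8 sketch, in which each interior cell drains to its steepest-descent neighbor of the eight; real GIS toolchains add pit filling and flow accumulation, omitted here:

```python
import numpy as np

def d8_flow_directions(dem):
    """D8 flow routing on a DEM grid.

    Returns an array of (dr, dc) offsets per cell: the steepest-descent
    neighbor, or (0, 0) for pits and border cells. Slopes to diagonal
    neighbors are distance-weighted by sqrt(2).
    """
    rows, cols = dem.shape
    out = np.zeros((rows, cols, 2), dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            best, best_off = 0.0, (0, 0)
            for dr, dc in offsets:
                dist = (dr * dr + dc * dc) ** 0.5
                slope = (dem[r, c] - dem[r + dr, c + dc]) / dist
                if slope > best:
                    best, best_off = slope, (dr, dc)
            out[r, c] = best_off
    return out
```

    Stream networks are then traced by accumulating flow along these directions; the sensitivity of the result to DEM cell size is exactly what the comparative study above evaluates.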

  6. Characterizing Co-movements between Indian and Emerging Asian Equity Markets through Wavelet Multi-Scale Analysis

    Directory of Open Access Journals (Sweden)

    Aasif Shah

    2015-06-01

    Full Text Available Multi-scale representations are effective in characterising the time-frequency characteristics of financial return series, revealing properties not evident in typical time-domain analysis. Given this, the study derives crucial insights from multi-scale analysis to investigate the co-movements between Indian and emerging Asian equity markets using wavelet correlation and wavelet coherence measures. It is reported that the Indian equity market is strongly integrated with Asian equity markets at lower frequency scales and relatively less blended at higher frequencies. The results from cross-correlations suggest that the lead-lag relationship becomes substantial as we turn to lower frequency scales, and finally, wavelet coherence demonstrates that this correlation eventually grows strong during the crisis period at lower frequency scales. Overall the findings are relevant and have strong policy and practical implications.
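
    Scale-wise co-movement of two return series can be sketched by correlating wavelet detail coefficients scale by scale. The following is a numpy-only stand-in, using a non-decimated Haar decomposition with circular boundaries, for the MODWT-based wavelet correlation used in such studies; the inputs are hypothetical return series:

```python
import numpy as np

def haar_details(x, levels):
    """Non-decimated Haar detail coefficients of x at scales 2**0..2**(levels-1).
    A minimal stand-in for the MODWT used in wavelet correlation studies."""
    approx = np.asarray(x, dtype=float)
    details = []
    for j in range(levels):
        step = 2 ** j
        smooth = 0.5 * (approx + np.roll(approx, -step))  # circular boundary
        details.append(approx - smooth)                    # detail at scale 2**j
        approx = smooth
    return details

def wavelet_correlation(x, y, levels):
    """Pearson correlation between the detail series of x and y, scale by
    scale: a frequency-resolved measure of co-movement."""
    return [float(np.corrcoef(dx, dy)[0, 1])
            for dx, dy in zip(haar_details(x, levels), haar_details(y, levels))]
```

    The returned list reads from the highest-frequency scale to the lowest, so stronger integration at lower frequencies shows up as correlations that grow toward the end of the list.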

  7. Dressed skeleton expansion and the coupling scale ambiguity problem

    International Nuclear Information System (INIS)

    Lu, Hung Jung.

    1992-09-01

    Perturbative expansions in quantum field theories are usually expressed in powers of a coupling constant. In principle, the infinite sum of the expansion series is independent of the renormalization scale of the coupling constant. In practice, there is a remnant dependence of the truncated series on the renormalization scale. This scale ambiguity can severely restrict the predictive power of theoretical calculations. The dressed skeleton expansion is developed as a calculational method which avoids the coupling scale ambiguity problem. In this method, physical quantities are expressed as functional expansions in terms of a coupling vertex function. The arguments of the vertex function are given by the physical momenta of each process. These physical momenta effectively replace the unspecified renormalization scale and eliminate the ambiguity problem. This method is applied to various field theoretical models and its main features and limitations are explored. For quantum chromodynamics, an expression for the running coupling constant of the three-gluon vertex is obtained. The effective coupling scale of this vertex is shown to be essentially given by μ² ∼ Q²min Q²med / Q²max, where Q²min, Q²med and Q²max are respectively the smallest, the next-to-smallest and the largest of the three gluon virtualities. This functional form suggests that the three-gluon vertex becomes non-perturbative at asymmetric momentum configurations. Implications for four-jet physics are discussed

  8. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    Science.gov (United States)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modeling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. 
    To address these issues, we propose an effective and efficient modeling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial
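The per-time-step decomposition in (b) can be illustrated with a toy allocation routine. The sketch below substitutes a priority-ordered greedy rule for the fast linear network programming solve the authors describe; the function name, demand names and figures are hypothetical.

```python
def allocate_step(supply, demands):
    """Greedy allocation of a single time step's water supply.

    `demands` is a list of (name, amount, priority) with lower priority
    numbers served first -- a simplified stand-in for the per-step
    linearized allocation solve described in the framework.
    """
    remaining = supply
    served = {}
    for name, amount, _prio in sorted(demands, key=lambda d: d[2]):
        take = min(amount, remaining)
        served[name] = take
        remaining -= take
    return served, remaining

served, spill = allocate_step(
    supply=100.0,
    demands=[("irrigation", 60.0, 2), ("urban", 30.0, 1), ("hydropower", 40.0, 3)],
)
print(served, spill)  # urban fully served first, hydropower curtailed
```

Solving each time step independently like this is what keeps the overall simulation cheap enough to run over long synthetic input series.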

  9. Improvement and Extension of Shape Evaluation Criteria in Multi-Scale Image Segmentation

    Science.gov (United States)

    Sakamoto, M.; Honda, Y.; Kondo, A.

    2016-06-01

    Over the last decade, multi-scale image segmentation has attracted particular interest and is practically used for object-based image analysis. In this study, we have addressed issues in multi-scale image segmentation, especially improving the performance in terms of the validity of merging and the variety of derived regions' shapes. Firstly, we have introduced constraints on the application of the spectral criterion which can suppress excessive merging between dissimilar regions. Secondly, we have extended the evaluation of the smoothness criterion by modifying the definition of the extent of the object, introduced to control shape diversity. Thirdly, we have developed a new shape criterion, called aspect ratio. This criterion helps to improve the reproducibility of object shapes to be matched to the actual objects of interest. It constrains the aspect ratio of the object's bounding box while keeping the properties controlled by conventional shape criteria. These improvements and extensions lead to more accurate, flexible, and diverse segmentation results according to the shape characteristics of the target of interest. Furthermore, we also investigated a technique for quantitative and automatic parameterization in multi-scale image segmentation. This approach is achieved by comparing the segmentation result with a training area specified in advance, considering the maximization of the average area of derived objects or satisfying the evaluation index called F-measure. Thus, it has been possible to automate a parameterization that suits the objectives, especially from the viewpoint of shape reproducibility.
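The proposed aspect-ratio criterion is essentially a bounding-box measure. A minimal sketch, with the 0.25 merge threshold chosen arbitrarily for illustration (the paper does not publish a specific value here):

```python
def aspect_ratio(pixels):
    """Aspect ratio of a segment's axis-aligned bounding box.

    `pixels` is an iterable of (row, col) coordinates; the ratio is
    short side / long side, so values near 1 mean compact, square-ish
    objects and values near 0 mean elongated ones.
    """
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    h = max(rows) - min(rows) + 1
    w = max(cols) - min(cols) + 1
    return min(h, w) / max(h, w)

def allow_merge(pixels_a, pixels_b, min_ratio=0.25):
    """Reject a merge when the merged bounding box becomes too elongated."""
    merged = list(pixels_a) + list(pixels_b)
    return aspect_ratio(merged) >= min_ratio

square = [(r, c) for r in range(4) for c in range(4)]   # compact segment
strip = [(0, c) for c in range(4, 20)]                  # thin segment
print(aspect_ratio(square), allow_merge(square, strip))
```

Merging the compact square with the long strip would produce a 4 x 20 bounding box (ratio 0.2), so the merge is rejected, which is the kind of shape constraint the criterion imposes.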

  10. Mitigating and adapting to climate change: multi-functional and multi-scale assessment of green urban infrastructure.

    Science.gov (United States)

    Demuzere, M; Orru, K; Heidrich, O; Olazabal, E; Geneletti, D; Orru, H; Bhave, A G; Mittal, N; Feliu, E; Faehnle, M

    2014-12-15

    In order to develop climate resilient urban areas and reduce emissions, several opportunities exist starting from conscious planning and design of green (and blue) spaces in these landscapes. Green urban infrastructure has been regarded as beneficial, e.g. by balancing water flows, providing thermal comfort. This article explores the existing evidence on the contribution of green spaces to climate change mitigation and adaptation services. We suggest a framework of ecosystem services for systematizing the evidence on the provision of bio-physical benefits (e.g. CO2 sequestration) as well as social and psychological benefits (e.g. improved health) that enable coping with (adaptation) or reducing the adverse effects (mitigation) of climate change. The multi-functional and multi-scale nature of green urban infrastructure complicates the categorization of services and benefits, since in reality the interactions between various benefits are manifold and appear on different scales. We will show the relevance of the benefits from green urban infrastructures on three spatial scales (i.e. city, neighborhood and site specific scales). We will further report on co-benefits and trade-offs between the various services indicating that a benefit could in turn be detrimental in relation to other functions. The manuscript identifies avenues for further research on the role of green urban infrastructure, in different types of cities, climates and social contexts. Our systematic understanding of the bio-physical and social processes defining various services allows targeting stressors that may hamper the provision of green urban infrastructure services in individual behavior as well as in wider planning and environmental management in urban areas. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Patterns of Change in Collaboration Are Associated with Baseline Characteristics and Predict Outcome and Dropout Rates in Treatment of Multi-Problem Families. A Validation Study

    Directory of Open Access Journals (Sweden)

    Egon Bachler

    2017-07-01

    Full Text Available Objective: The present study validates the Multi-Problem Family (MPF) Collaboration Scale, which measures the progress of goal-directed collaboration of patients in the treatment of families with MPF and its relation to drop-out rates and treatment outcome. Method: Naturalistic study of symptom- and competence-related changes in children of ages 4–18 and their caregivers. Setting: Integrative, structural outreach family therapy. Measures: The data of five different groups of goal-directed collaboration (deteriorating collaboration, stable low collaboration, stable medium collaboration, stable high collaboration, improving collaboration) were analyzed in their relation to treatment expectation, individual therapeutic goals (ITG), family adversity index, severity of problems and global assessment of a caregiver's functioning, child, and relational aspects. Results: Of N = 810 families, 20% displayed stable high collaboration (n = 162) and 21% had a pattern of improving collaboration. The families with stable high or improving collaboration achieved significantly more progress throughout therapy in terms of treatment outcome expectancy (d = 0.96; r = 0.43), reaching ITG (d = 1.17; r = 0.50), family adversities (d = 0.55; r = 0.26), and severity of psychiatric symptoms (d = 0.31; r = 0.15). Furthermore, families with stable high or improving collaboration remained in treatment longer and were more likely to finish therapy as planned. The odds of having a stable low or deteriorating collaboration throughout treatment were significantly higher for subjects who started treatment with low treatment expectation or high family-related adversities. Conclusion: The positive outcomes of home-based interventions for multi-problem families are closely related to "stable high" and "improving" collaboration as measured with the MPF Collaboration Scale. Patients who fall into these groups have a high treatment outcome expectancy and reduce

  12. Multi-Scale Modelling of Deformation and Fracture in a Biomimetic Apatite-Protein Composite: Molecular-Scale Processes Lead to Resilience at the μm-Scale.

    Directory of Open Access Journals (Sweden)

    Dirk Zahn

    Full Text Available Fracture mechanisms of an enamel-like hydroxyapatite-collagen composite model are elaborated by means of molecular and coarse-grained dynamics simulation. Using fully atomistic models, we uncover molecular-scale plastic deformation and fracture processes initiated at the organic-inorganic interface. Furthermore, coarse-grained models are developed to investigate fracture patterns at the μm-scale. At the meso-scale, micro-fractures are shown to reduce local stress and thus prevent material failure after loading beyond the elastic limit. On the basis of our multi-scale simulation approach, we provide a molecular scale rationalization of this phenomenon, which seems key to the resilience of hierarchical biominerals, including teeth and bone.

  13. Artificial immune algorithm for multi-depot vehicle scheduling problems

    Science.gov (United States)

    Wu, Zhongyi; Wang, Donggen; Xia, Linyuan; Chen, Xiaoling

    2008-10-01

    In the fast-developing logistics and supply chain management fields, one of the key problems in decision support systems is how to arrange, for many customers and suppliers, the supplier-to-customer assignment and produce a detailed supply schedule under a set of constraints. Solutions to the multi-depot vehicle scheduling problem (MDVSP) help in solving this problem in transportation applications. The objective of the MDVSP is to minimize the total distance covered by all vehicles, which can be considered as delivery cost or time consumption. The MDVSP is a nondeterministic polynomial-time hard (NP-hard) problem which cannot be solved to optimality within polynomially bounded computational time. Many different approaches have been developed to tackle the MDVSP, such as exact algorithms (EA), one-stage approaches (OSA), two-phase heuristic methods (TPHM), tabu search algorithms (TSA), genetic algorithms (GA) and hierarchical multiplex structures (HIMS). Most of the methods mentioned above are time consuming and carry a high risk of converging to a local optimum. In this paper, a new search algorithm is proposed to solve the MDVSP based on Artificial Immune Systems (AIS), which are inspired by vertebrate immune systems. The proposed AIS algorithm is tested with 30 customers and 6 vehicles located in 3 depots. Experimental results show that the artificial immune system algorithm is an effective and efficient method for solving MDVSP instances.
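A clonal-selection sketch in the spirit of the proposed AIS, heavily reduced for illustration (customers are only assigned to depots, with no route construction, and all parameters are invented):

```python
import math
import random

def total_distance(assign, customers, depots):
    """Sum of depot-to-customer distances for an assignment vector."""
    return sum(math.dist(customers[i], depots[d]) for i, d in enumerate(assign))

def clonal_selection(customers, depots, pop=20, clones=5, gens=50, seed=1):
    """Minimal clonal-selection loop: antibodies are assignment vectors;
    the fittest are cloned and hypermutated (reassign a random customer)."""
    rng = random.Random(seed)
    n, m = len(customers), len(depots)
    ab = [[rng.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        ab.sort(key=lambda a: total_distance(a, customers, depots))
        new = ab[: pop // 2]                       # keep the best half
        for a in list(new):
            for _ in range(clones):
                c = a[:]
                c[rng.randrange(n)] = rng.randrange(m)  # hypermutation
                new.append(c)
        ab = sorted(new, key=lambda a: total_distance(a, customers, depots))[:pop]
    return ab[0]

customers = [(0, 1), (1, 0), (9, 9), (10, 8)]
depots = [(0, 0), (10, 10)]
best = clonal_selection(customers, depots)
print(best, round(total_distance(best, customers, depots), 2))
```

On this toy instance the expected result assigns the first two customers to the first depot and the last two to the second, which is the intuitively optimal clustering.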

  14. Multi Scale Finite Element Analyses By Using SEM-EBSD Crystallographic Modeling and Parallel Computing

    International Nuclear Information System (INIS)

    Nakamachi, Eiji

    2005-01-01

    A crystallographic homogenization procedure is introduced into the conventional static-explicit and dynamic-explicit finite element formulations to develop a multi-scale - double scale - analysis code to predict the plastic strain induced texture evolution, yield loci and formability of sheet metal. The double-scale structure consists of a crystal aggregation - the micro-structure - and a macroscopic elastic-plastic continuum. First, we measure crystal morphologies by using an SEM-EBSD apparatus, and define a unit cell of the micro-structure which satisfies the periodicity condition at the real scale of the polycrystal. Next, this crystallographic homogenization FE code is applied to 3N pure-iron and 'Benchmark' aluminum A6022 polycrystal sheets. It reveals that the initial crystal orientation distribution - the texture - strongly affects the plastic strain induced texture and anisotropic hardening evolutions and the sheet deformation. Since the multi-scale finite element analysis requires a large computation time, a parallel computing technique using a PC cluster is developed for quick calculation. In this parallelization scheme, a dynamic workload balancing technique is introduced for quick and efficient calculations

  15. Experiments on the Impact of language Problems in the Multi-cultural Operation of NPPs' Emergency Operation

    International Nuclear Information System (INIS)

    Kang, Seongkeun; Kim, Taehoon; Seong, Poong Hyun; Ha, Jun Su

    2016-01-01

    In 2010, the Korea Electric Power Corporation (KEPCO) was awarded a multi-billion dollar bid to construct the first nuclear power plant in Barakah, UAE. One must keep in mind, however, that with technology transfer and international cooperation comes a host of potential problems arising from cultural differences such as language, everyday habits and workplace expectations. As of now, how problematic these potential issues may become is unknown. Of the aforementioned factors, communication is perhaps of foremost importance. We investigated UAE culture-related issues through analysis of operating experience reviews (OERs) and came to the conclusion that the language barrier needed utmost attention. Korean nuclear power plant operators will work in the UAE and will operate the NPPs with operators and managers of other nationalities as well. The purpose of this paper is firstly to confirm that operators are put under mental stress, and secondly to demonstrate the decline in accuracy when they must work in English. Reducing human error is quite important for making nuclear power plants safer. As the mental workload of a human operator increases, the probability of a human error occurring also increases, which has a negative influence on the plant's safety. There are many factors which can potentially increase mental workload. We focused on the communication problem, which is a key factor in increasing mental workload, because many Korean operators will work in UAE nuclear power plants and may work together with UAE operators. From these experiments we compared how the performance of both Korean and UAE subjects decreased when they used English. We designed experimental methods to check this problem qualitatively and quantitatively. We analyzed four factors from the experiments to find the communication problems: accuracy, efficiency, NASA-TLX, and brain waves. Accuracy, efficiency and brain waves are quantitative factors, while NASA-TLX is qualitative. To

  16. Philosophy of river problems: local to regional, static to mobile

    International Nuclear Information System (INIS)

    Jansky, L.

    1997-01-01

    According to the statistics, thirteen of the twenty-five major river basins in Europe are basins of transboundary rivers. The Danube river basin is the largest transboundary river basin in Europe. In almost every case, local and regional problems arise, such as the division of fishing rights (or rights to river beds), the right to claim tolls on navigation, how to adjust boundaries if the channel moves, or rights to claim duty on crossing the river, or to build bridges, weirs, etc. On a larger scale, the above problems also include the rights of non-contiguous lands (i.e. not fronting on the river) to use the river for navigation, for the passage of migrating fish, or to exploit the river (e.g. bed sediments) without damage by one country or society to another below. Similarly, pollution and large-scale removal of water are problems at regional or national levels. Disputes usually arise from the above, more or less exacerbated by their superimposition or by other non-river problems, e.g. religion, politics, historical issues, recent aggression, relative prosperity, an expanding economy vs. a contracting economy, which may be cause or consequence of many of these. And somewhere here, likely, lies the case of Gabcikovo on the Danube between Slovakia and Hungary as well. (author)

  17. Evaluation of scheduling problems for the project planning of large-scale projects using the example of nuclear facility dismantling; Evaluation von Schedulingproblemen fuer die Projektplanung von Grossprojekten am Beispiel des kerntechnischen Rueckbaus

    Energy Technology Data Exchange (ETDEWEB)

    Huebner, Felix; Schellenbaum, Uli; Stuerck, Christian; Gerhards, Patrick; Schultmann, Frank

    2017-05-15

    The magnitude of widespread nuclear decommissioning and dismantling, regarding deconstruction costs and project duration, exceeds that of even the most prominent large-scale projects. The deconstruction costs of one reactor are estimated at several hundred million Euros and the dismantling period at more than a decade. The nuclear power plants built in the 1970s are coming closer to the end of their planned operating lifespan. Therefore, the decommissioning and dismantling of nuclear facilities, which is posing a multitude of challenges to planning and implementation, is becoming more and more relevant. This study describes planning methods for large-scale projects. The goal of this paper is to formulate a project planning problem that appropriately copes with the specific challenges of nuclear deconstruction projects. For this purpose, the requirements for appropriate scheduling methods are presented. Furthermore, a variety of possible scheduling problems are introduced and compared by their specifications and their behaviour. A set of particular scheduling problems including possible extensions and generalisations is assessed in detail. Based on the introduced problems and extensions, a Multi-mode Resource Investment Problem with Tardiness Penalty is chosen to fit the requirements of nuclear facility dismantling. This scheduling problem is then customised and adjusted according to the specific challenges of nuclear deconstruction projects. It can be called a Multi-mode Resource Investment Problem under the consideration of generalized precedence constraints and post-operational costs.

  18. Modeling and Simulation of Multi-scale Environmental Systems with Generalized Hybrid Petri Nets

    Directory of Open Access Journals (Sweden)

    Mostafa eHerajy

    2015-07-01

    Full Text Available Predicting and studying the dynamics and properties of environmental systems necessitates the construction and simulation of mathematical models entailing different levels of complexity. Such computational experiments often require the combination of discrete and continuous variables as well as processes operating at different time scales. Furthermore, the iterative steps of constructing and analyzing environmental models might involve researchers with different backgrounds. Hybrid Petri nets may contribute to overcoming such challenges as they facilitate the implementation of systems integrating discrete and continuous dynamics. Additionally, the visual depiction of model components will inevitably help to bridge the gap between scientists with distinct expertise working on the same problem. Thus, modeling environmental systems with hybrid Petri nets enables the construction of complex processes while keeping the models comprehensible for researchers working on the same project with significantly divergent educational paths. In this paper we propose the utilization of a special class of hybrid Petri nets, Generalized Hybrid Petri Nets (GHPN), to model and simulate environmental systems exhibiting processes interacting at different time scales. GHPN integrate stochastic and deterministic semantics as well as other types of special basic events. Moreover, a case study is presented to illustrate the use of GHPN in constructing and simulating multi-timescale environmental scenarios.
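The discrete/continuous coupling that hybrid Petri nets provide can be caricatured in a few lines: a continuous place integrated with an Euler step, plus a discrete transition that fires on a threshold. This is an illustrative toy, not the GHPN formalism itself, and all parameters are invented.

```python
def simulate_hybrid(rate, threshold, batch, t_end, dt=0.01):
    """Tiny hybrid simulation in the spirit of hybrid Petri nets.

    A continuous place `level` grows deterministically at `rate`
    (e.g. pollutant inflow); whenever it crosses `threshold`, a discrete
    transition fires and removes `batch` units (e.g. a treatment event).
    Returns the final level and the number of discrete firings.
    """
    level, firings, t = 0.0, 0, 0.0
    while t < t_end:
        level += rate * dt          # continuous dynamics (Euler step)
        if level >= threshold:      # discrete transition fires
            level -= batch
            firings += 1
        t += dt
    return level, firings

level, firings = simulate_hybrid(rate=1.0, threshold=5.0, batch=5.0, t_end=20.0)
print(round(level, 2), firings)
```

In a real GHPN model the continuous part would be a system of ODEs and the discrete transitions could also fire stochastically; the structure of interleaving the two kinds of update is the point here.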

  19. Discrete particle swarm optimization to solve multi-objective limited-wait hybrid flow shop scheduling problem

    Science.gov (United States)

    Santosa, B.; Siswanto, N.; Fiqihesa

    2018-04-01

    This paper proposes a discrete Particle Swam Optimization (PSO) to solve limited-wait hybrid flowshop scheduing problem with multi objectives. Flow shop schedulimg represents the condition when several machines are arranged in series and each job must be processed at each machine with same sequence. The objective functions are minimizing completion time (makespan), total tardiness time, and total machine idle time. Flow shop scheduling model always grows to cope with the real production system accurately. Since flow shop scheduling is a NP-Hard problem then the most suitable method to solve is metaheuristics. One of metaheuristics algorithm is Particle Swarm Optimization (PSO), an algorithm which is based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems. Since flow shop scheduling is a discrete optimization problem, then, we need to modify PSO to fit the problem. The modification is done by using probability transition matrix mechanism. While to handle multi objectives problem, we use Pareto Optimal (MPSO). The results of MPSO is better than the PSO because the MPSO solution set produced higher probability to find the optimal solution. Besides the MPSO solution set is closer to the optimal solution

  20. New Resolution Strategy for Multi-scale Reaction Waves using Time Operator Splitting and Space Adaptive Multiresolution: Application to Human Ischemic Stroke*

    Directory of Open Access Journals (Sweden)

    Louvet Violaine

    2011-12-01

    Full Text Available We tackle the numerical simulation of reaction-diffusion equations modeling multi-scale reaction waves. This type of problem induces peculiar difficulties and potentially large stiffness which stem from the broad spectrum of temporal scales in the nonlinear chemical source term as well as from the presence of large spatial gradients in the reactive fronts, which are spatially very localized. A new resolution strategy was recently introduced that combines a well-performing time operator splitting with high order dedicated time integration methods and space adaptive multiresolution. Based on recent theoretical studies of numerical analysis, such a strategy leads to a splitting time step which is restricted neither by the fastest scales in the source term nor by stability limits related to the diffusion problem, but only by the physics of the phenomenon. In this paper, the efficiency of the method is evaluated through 2D and 3D numerical simulations of a human ischemic stroke model, conducted on a simplified brain geometry, for which a simple parallelization strategy for shared memory architectures was implemented in order to reduce computing costs related to the "detailed chemistry" features of the model.

  1. A multi-scale energy demand model suggests sharing market risks with intelligent energy cooperatives

    NARCIS (Netherlands)

    G. Methenitis (Georgios); M. Kaisers (Michael); J.A. La Poutré (Han)

    2015-01-01

    In this paper, we propose a multi-scale model of energy demand that is consistent with observations at a macro scale, in our use case standard load profiles for (residential) electric loads. We employ the model to study incentives to assume the risk of volatile market prices for

  2. EDDYMULT: a computing system for solving eddy current problems in a multi-torus system

    International Nuclear Information System (INIS)

    Nakamura, Yukiharu; Ozeki, Takahisa

    1989-03-01

    A new computing system EDDYMULT based on the finite element circuit method has been developed to solve actual eddy current problems in a multi-torus system, which consists of many torus-conductors and various kinds of axisymmetric poloidal field coils. The EDDYMULT computing system can deal three-dimensionally with the modal decomposition of eddy current in a multi-torus system, the transient phenomena of eddy current distributions and the resultant magnetic field. Therefore, users can apply the computing system to the solution of the eddy current problems in a tokamak fusion device, such as the design of poloidal field coil power supplies, the mechanical stress design of the intensive electromagnetic loading on device components and the control analysis of plasma position. The present report gives a detailed description of the EDDYMULT system as a user's manual: 1) theory, 2) structure of the code system, 3) input description, 4) problem restrictions, 5) description of the subroutines, etc. (author)

  3. A fuzzy approach to the generation expansion planning problem in a multi-objective environment

    International Nuclear Information System (INIS)

    Abass, S. A.; Massoud, E. M. A.

    2007-01-01

    In many power system problems, the use of optimization techniques has proved conducive to reducing the costs and losses of the system. A fuzzy multi-objective decision approach is used for solving power system problems. One of the most important issues in the field of power system engineering is the generation expansion planning problem. In this paper, we use the concept of membership functions to define a fuzzy decision model for generating an optimal solution to this problem. Solutions obtained by fuzzy decision theory are always efficient and constitute the best compromise. (author)

  4. A natural solution to the μ-problem in supergravity theories

    International Nuclear Information System (INIS)

    Giudice, G.F.; Masiero, A.

    1988-01-01

    We propose a 'natural' way to avoid the introduction by hand of a small mass scale μ in the observable sector of N=1 supergravity theories. In our approach, μ automatically arises from the general couplings of broken supergravity. In this way, all low energy mass parameters arise only from supergravity breaking and, in particular, SU(2)xU(1) is left unbroken in the limit of exact supersymmetry. Our solution of the μ-problem presents interesting connections with the strong CP puzzle through the implementation of symmetries a la Peccei and Quinn. (orig.)

  5. Barriers and Facilitators for Health Behavior Change among Adults from Multi-Problem Households: A Qualitative Study

    Directory of Open Access Journals (Sweden)

    Gera E. Nagelhout

    2017-10-01

    Full Text Available Multi-problem households are households with problems in more than one of the following core problem areas: socio-economic problems, psycho-social problems, and problems related to child care. The aim of this study was to examine barriers and facilitators for health behavior change among adults from multi-problem households, as well as to identify ideas for a health promotion program. A qualitative study involving 25 semi-structured interviews was conducted among Dutch adults who received intensive family home care for multi-problem households. Results were discussed with eight social workers in a focus group interview. Data were analyzed using the Framework Method. The results revealed that the main reason for not engaging in sports was the cost. Physical activity was facilitated by physically active (transport to) work and by dog ownership. Respondents who received a food bank package reported this as a barrier to healthy eating. Those with medical conditions such as diabetes indicated that this motivated them to eat healthily. Smokers and former smokers reported that stress was a major barrier to quitting smoking but that medical conditions could motivate them to quit. A reported reason for not using alcohol was having difficult past experiences such as violence and abuse by alcoholics. Mentioned intervention ideas were: something social, an outdoor sports event, cooking classes, a walking group, and children's activities in nature. Free or cheap activities that include social interaction and reduce stress are in line with the identified barriers and facilitators. Besides these activities, it may be important to influence the target group's environment by educating social workers and ensuring healthier food bank packages.

  6. Multi-Scale Modelling of the Gamma Radiolysis of Nitrate Solutions

    OpenAIRE

    Horne, Gregory; Donoclift, Thomas; Sims, Howard E.; M. Orr, Robin; Pimblott, Simon

    2016-01-01

    A multi-scale modelling approach has been developed for the extended timescale long-term radiolysis of aqueous systems. The approach uses a combination of stochastic track structure and track chemistry as well as deterministic homogeneous chemistry techniques and involves four key stages: radiation track structure simulation, the subsequent physicochemical processes, nonhomogeneous diffusion-reaction kinetic evolution, and homogeneous bulk chemistry modelling. The first three components model...

  7. A population-based algorithm for the multi travelling salesman problem

    Directory of Open Access Journals (Sweden)

    Rubén Iván Bolaños

    2016-04-01

    Full Text Available This paper presents the implementation of an efficient modified genetic algorithm for solving the multi-travelling salesman problem (mTSP). The main characteristics of the method are the construction of a high-quality initial population and the implementation of several local search operators, which are important for the efficient and effective exploration of promising regions of the solution space. Due to the combinatorial complexity of the mTSP, the proposed methodology is especially applicable to real-world problems. The proposed algorithm was tested on a set of six benchmark instances, which have between 76 and 1002 cities to be visited. In all cases, the best known solution was improved. The results are also compared with other existing solution procedures in the literature.
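Two ingredients the abstract highlights, high-quality initial solutions and local search operators, can be sketched for a single tour with nearest-neighbor construction plus 2-opt; the mTSP-specific splitting of cities across salesmen is omitted, and the instance is invented.

```python
import math

def tour_length(tour, pts):
    """Length of a closed tour over points `pts`."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbor(pts, start=0):
    """Greedy construction, used to seed a high-quality initial population."""
    unvisited = set(range(len(pts))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(pts[tour[-1]], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour, pts):
    """Local search operator: reverse segments while that shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for k in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:k + 1][::-1] + tour[k + 1:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour

pts = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0), (2, 1)]
tour = two_opt(nearest_neighbor(pts), pts)
print(round(tour_length(tour, pts), 2))  # perimeter tour of length 6.0
```

In a GA of the kind described, such constructed-and-improved tours would form chromosomes of the initial population, with 2-opt applied again as a mutation-like operator.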

  8. A location-scale model for non-crossing expectile curves

    NARCIS (Netherlands)

    Schnabel, S.K.; Eilers, P.H.C.

    2013-01-01

    In quantile smoothing, crossing of the estimated curves is a common nuisance, in particular with small data sets and dense sets of quantiles. Similar problems arise in expectile smoothing. We propose a novel method to avoid crossings. It is based on a location-scale model for expectiles and
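For readers unfamiliar with expectiles: the tau-expectile minimizes an asymmetrically weighted squared loss and can be computed by iterative reweighting. A stdlib sketch of the point estimate only (the smoothing and non-crossing machinery of the paper is not reproduced):

```python
def expectile(values, tau, iters=50):
    """Asymmetric least squares estimate of the tau-expectile.

    The tau-expectile mu is a weighted-mean fixed point: residuals
    above mu get weight tau, those at or below mu get weight 1 - tau.
    tau = 0.5 gives the ordinary mean.
    """
    mu = sum(values) / len(values)
    for _ in range(iters):
        w = [tau if v > mu else 1.0 - tau for v in values]
        mu = sum(wi * vi for wi, vi in zip(w, values)) / sum(w)
    return mu

data = [1.0, 2.0, 3.0, 4.0, 100.0]
print(expectile(data, 0.5), expectile(data, 0.9) > expectile(data, 0.5))
```

For a fixed sample these point estimates are monotone in tau; the crossing problem the paper addresses arises when separate smooth curves are fitted per tau and the fits disagree locally.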

  9. A neural-network approach to the problem of photon-pair combinatorics

    International Nuclear Information System (INIS)

    Awes, T.C.

    1990-06-01

A recursive neural-network algorithm is applied to the problem of correctly pairing photons from π⁰, η, and higher resonance decays in the presence of a large background of photons resulting from many simultaneous decays. The method uses the full information of the multi-photon final state to suppress the selection of false photon pairs which arise from the many combinatorial possibilities. The method is demonstrated for simulated photon events under semirealistic experimental conditions. 3 refs., 3 figs

  10. Probabilistic Simulation of Multi-Scale Composite Behavior

    Science.gov (United States)

    Chamis, Christos C.

    2012-01-01

A methodology is developed to computationally assess the non-deterministic composite response at all composite scales (from micro to structural) due to the uncertainties in the constituent (fiber and matrix) properties, in the fabrication process and in structural variables (primitive variables). The methodology is computationally efficient for simulating the probability distributions of composite behavior, such as material properties, laminate and structural responses. By-products of the methodology are probabilistic sensitivities of the composite primitive variables. The methodology has been implemented into the computer codes PICAN (Probabilistic Integrated Composite ANalyzer) and IPACS (Integrated Probabilistic Assessment of Composite Structures). The accuracy and efficiency of this methodology are demonstrated by simulating the uncertainties in typical composite laminates and comparing the results with the Monte Carlo simulation method. Available experimental data of composite laminate behavior at all scales fall within the scatters predicted by PICAN. Multi-scaling is extended to simulate probabilistic thermo-mechanical fatigue and to simulate the probabilistic design of a composite radome in order to illustrate its versatility. Results show that probabilistic fatigue can be simulated for different temperature amplitudes and for different cyclic stress magnitudes. Results also show that laminate configurations can be selected to increase the radome reliability by several orders of magnitude without increasing the laminate thickness--a unique feature of structural composites. The old reference denotes that nothing fundamental has been done since that time.
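
    As a toy illustration of the Monte Carlo baseline mentioned above (not the PICAN/IPACS implementation), one can propagate assumed scatter in constituent properties through a rule-of-mixtures micromechanics model; all distributions and numeric values below are invented for illustration only.

    ```python
    import random
    import statistics

    def ply_modulus(e_fiber, e_matrix, v_fiber):
        """Longitudinal ply modulus by the rule of mixtures (first-order micro model)."""
        return v_fiber * e_fiber + (1.0 - v_fiber) * e_matrix

    def monte_carlo_modulus(n=10000, seed=1):
        """Propagate assumed scatter in constituent properties to the ply modulus."""
        rng = random.Random(seed)
        samples = [
            ply_modulus(
                rng.gauss(230.0, 11.5),  # fiber modulus, GPa (assumed 5% scatter)
                rng.gauss(3.5, 0.35),    # matrix modulus, GPa (assumed 10% scatter)
                min(max(rng.gauss(0.60, 0.03), 0.0), 1.0),  # fiber volume fraction
            )
            for _ in range(n)
        ]
        return statistics.mean(samples), statistics.stdev(samples)
    ```

    The nominal value is 0.60 × 230 + 0.40 × 3.5 ≈ 139.4 GPa; the simulated scatter around it is what a probabilistic method such as PICAN aims to reproduce far more cheaply than brute-force sampling.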

  11. Anisotropic multi-scale fluid registration: evaluation in magnetic resonance breast imaging

    International Nuclear Information System (INIS)

    Crum, W R; Tanner, C; Hawkes, D J

    2005-01-01

    Registration using models of compressible viscous fluids has not found the general application of some other techniques (e.g., free-form-deformation (FFD)) despite its ability to model large diffeomorphic deformations. We report on a multi-resolution fluid registration algorithm which improves on previous work by (a) directly solving the Navier-Stokes equation at the resolution of the images (b) accommodating image sampling anisotropy using semi-coarsening and implicit smoothing in a full multi-grid (FMG) solver and (c) exploiting the inherent multi-resolution nature of FMG to implement a multi-scale approach. Evaluation is on five magnetic resonance (MR) breast images subject to six biomechanical deformation fields over 11 multi-resolution schemes. Quantitative assessment is by tissue overlaps and target registration errors and by registering using the known correspondences rather than image features to validate the fluid model. Context is given by comparison with a validated FFD algorithm and by application to images of volunteers subjected to large applied deformation. The results show that fluid registration of 3D breast MR images to sub-voxel accuracy is possible in minutes on a 1.6 GHz Linux-based Athlon processor with coarse solutions obtainable in a few tens of seconds. Accuracy and computation time are comparable to FFD techniques validated for this application

  12. Multi-Scale Residual Convolutional Neural Network for Haze Removal of Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Hou Jiang

    2018-06-01

Full Text Available Haze removal is a pre-processing step that operates on at-sensor radiance data prior to the physically based image correction step to enhance hazy imagery visually. Most current haze removal methods focus on point-to-point operations and utilize information in the spectral domain, without taking into account the multi-scale spatial information of haze. In this paper, we propose a multi-scale residual convolutional neural network (MRCNN for haze removal of remote sensing images. MRCNN utilizes 3D convolutional kernels to extract spatial–spectral correlation information and abstract features from surrounding neighborhoods for haze transmission estimation. It takes advantage of dilated convolution to aggregate multi-scale contextual information for the purpose of improving its prediction accuracy. Meanwhile, residual learning is utilized to avoid the loss of weak information while deepening the network. Our experiments indicate that MRCNN performs accurately, achieving an extremely low validation error and testing error. The haze removal results of several scenes of Landsat 8 Operational Land Imager (OLI data show that the visibility of the dehazed images is significantly improved, and the color of the recovered surface is consistent with the actual scene. Quantitative analysis proves that the dehazed results of MRCNN are superior to those of traditional methods and other networks. Additionally, a comparison to haze-free data illustrates the spectral consistency after haze removal and reveals the changes in the vegetation index.
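
    The dilated convolution the paper relies on can be illustrated with a minimal pure-Python sketch (not the MRCNN code): spacing the kernel taps `dilation` pixels apart enlarges the receptive field without adding any parameters.

    ```python
    def dilated_conv2d(image, kernel, dilation=1):
        """Valid 2D convolution where kernel taps are `dilation` pixels apart,
        so the receptive field grows without adding weights (naive sketch)."""
        kh, kw = len(kernel), len(kernel[0])
        span_h = (kh - 1) * dilation + 1   # effective kernel extent in rows
        span_w = (kw - 1) * dilation + 1   # effective kernel extent in columns
        out_h = len(image) - span_h + 1
        out_w = len(image[0]) - span_w + 1
        out = [[0.0] * out_w for _ in range(out_h)]
        for r in range(out_h):
            for c in range(out_w):
                out[r][c] = sum(
                    kernel[i][j] * image[r + i * dilation][c + j * dilation]
                    for i in range(kh) for j in range(kw)
                )
        return out
    ```

    With a 3×3 kernel, dilation 2 covers a 5×5 neighborhood while still using only nine weights, which is how stacked dilated layers aggregate multi-scale context cheaply.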

  13. New multi-objective decision support methodology to solve problems of reconfiguration in the electric distribution systems

    NARCIS (Netherlands)

    Santos, S.F.; Paterakis, N.G.; Catalao, J.P.S.; Camarinha-Matos, L.M.; Baldissera, T.A.; Di Orio, G.; Marques, F.

    2015-01-01

    The distribution systems (DS) reconfiguration problem is formulated in this paper as a multi-objective mixed-integer linear programming (MILP) multiperiod problem, enforcing that the obtained topology is radial in order to exploit several advantages those configurations offer. The effects of

  14. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    International Nuclear Information System (INIS)

    Fonseca, R A; Vieira, J; Silva, L O; Fiuza, F; Davidson, A; Tsung, F S; Mori, W B

    2013-01-01

A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-Class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ∼10⁶ cores and sustained performance over ∼2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios. (paper)

  15. Multi-element neutron activation analysis and solution of classification problems using multidimensional statistics

    International Nuclear Information System (INIS)

    Vaganov, P.A.; Kol'tsov, A.A.; Kulikov, V.D.; Mejer, V.A.

    1983-01-01

The multi-element instrumental neutron activation analysis of samples of mountain rocks (sandstones, aleurolites and shales of one of the gold deposits) is performed. The spectra of the irradiated samples are measured with a Ge(Li) detector with a volume of 35 mm³. The content of 22 chemical elements is determined in each sample. The results of the analysis provide a reliable basis for multi-dimensional statistical processing; they form the basis for generalized characteristics of the rocks, which enables the solution of the classification problem for rocks from different deposits

  16. Dynamics of the middle atmosphere as observed by the ARISE project

    Science.gov (United States)

    Blanc, E.

    2015-12-01

It has been strongly demonstrated that variations in the circulation of the middle atmosphere influence weather and climate all the way to the Earth's surface. A key part of this coupling occurs through the propagation and breaking of planetary and gravity waves. However, limited observations prevent numerical weather prediction and climate models from faithfully reproducing the dynamics of the middle atmosphere. The main challenge of the ARISE (Atmospheric dynamics InfraStructure in Europe) project is to combine existing national and international observation networks, including: the international infrasound monitoring system developed for CTBT (Comprehensive nuclear-Test-Ban Treaty) verification, the NDACC (Network for the Detection of Atmospheric Composition Changes) lidar network, European observation infrastructures at mid latitudes (OHP observatory), in the tropics (Maïdo observatory) and at high latitudes (ALOMAR and EISCAT), infrasound stations which form a dense European network, and satellites. The ARISE network is unique in its coverage (polar to equatorial regions in the European longitude sector), its altitude range (from the troposphere to the mesosphere and ionosphere) and the scales involved, both in time (from seconds to tens of years) and space (from tens of meters to thousands of kilometers). Advanced data products are produced with the aim of assimilating data into weather prediction models to improve future forecasts over weekly and seasonal time scales. ARISE observations are especially relevant for the monitoring of extreme events such as thunderstorms, volcanoes, meteors and, at larger scales, deep convection and stratospheric warming events, for the description of physical processes and the study of long-term evolution with climate change. Among the applications, ARISE fosters the integration of innovative methods for remote detection of non-instrumented volcanoes, including distant eruption characterization, to provide notifications with reliable confidence indices to the

  17. IMPROVEMENT AND EXTENSION OF SHAPE EVALUATION CRITERIA IN MULTI-SCALE IMAGE SEGMENTATION

    Directory of Open Access Journals (Sweden)

    M. Sakamoto

    2016-06-01

Full Text Available Over the last decade, multi-scale image segmentation has attracted particular interest and is practically used for object-based image analysis. In this study, we address issues in multi-scale image segmentation, especially improving the validity of merging and the variety of derived region shapes. Firstly, we introduce constraints on the application of the spectral criterion that suppress excessive merging between dissimilar regions. Secondly, we extend the evaluation of the smoothness criterion by modifying the definition of the extent of the object, introduced to control shape diversity. Thirdly, we develop a new shape criterion called aspect ratio, which improves how well the shape of a derived object reproduces the actual object of interest. It constrains the aspect ratio of the object's bounding box while keeping the properties controlled by conventional shape criteria. These improvements and extensions lead to more accurate, flexible, and diverse segmentation results according to the shape characteristics of the target of interest. Furthermore, we investigate a technique for quantitative, automatic parameterization of multi-scale image segmentation. This is achieved by comparing the segmentation result with a training area specified in advance, either maximizing the average area of the derived objects or satisfying the F-measure evaluation index. This makes it possible to automate parameterization suited to the objectives, especially with respect to shape reproducibility.
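
    The abstract does not spell out a formula for the aspect-ratio criterion; one plausible, hedged definition over a segment's axis-aligned bounding box is sketched below (an illustration, not the paper's exact criterion).

    ```python
    def aspect_ratio(pixels):
        """Aspect ratio of a segment's axis-aligned bounding box:
        short side / long side, in (0, 1]; 1.0 means a square box.
        `pixels` is an iterable of (row, col) coordinates."""
        rows = [r for r, _ in pixels]
        cols = [c for _, c in pixels]
        h = max(rows) - min(rows) + 1
        w = max(cols) - min(cols) + 1
        return min(h, w) / max(h, w)
    ```

    A segmentation step could then penalize merges that push this value far from the ratio expected for the target objects (e.g. near 1.0 for buildings, small for roads).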

  18. Multi-Branch Fully Convolutional Network for Face Detection

    KAUST Repository

    Bai, Yancheng; Ghanem, Bernard

    2017-01-01

    Face detection is a fundamental problem in computer vision. It is still a challenging task in unconstrained conditions due to significant variations in scale, pose, expressions, and occlusion. In this paper, we propose a multi-branch fully

  19. Change in Urban Albedo in London: A Multi-scale Perspective

    Science.gov (United States)

    Susca, T.; Kotthaus, S.; Grimmond, S.

    2013-12-01

Urbanization-induced change in land use has considerable implications for climate, air quality, resources and ecosystems. Urban-induced warming is one of the best-known impacts, and its effects, both direct and indirect, can extend beyond the city. One way to reduce this warming is to modify surface-atmosphere exchanges by changing the urban albedo. Because the increased rugosity of urban morphology results in lower albedo even with constant material characteristics, changing the albedo has impacts across a range of scales. Here a multi-scale assessment of the potential effects of an increase in albedo in London is presented, including modeling at the global and meso-scale informed by local and micro-scale measurements. In this study, first-order calculations are conducted for the impact of changing the albedo (e.g. a 0.01 increase) on the radiative exchange. For example, when incoming solar radiation and cloud cover are considered, based on data retrieved from NASA (http://power.larc.nasa.gov/) for an ~1600 km² area of London, a 0.01 albedo increase would produce a mean decrease in the instantaneous solar radiative forcing over the same surface of 0.40 W m⁻². The nature of the surface is critical when considering the impact of changes in albedo: in the Central Activity Zone in London, for example, pavement and buildings can vary from 10 to 100% of the plan area. Observations show that albedo changes dramatically with building materials; glass surfaces, which are used increasingly in the central business district, produce dramatic changes in albedo. Using the documented albedo variations determined across different scales, the impacts are considered. For example, the effect of the increase in urban albedo is translated into the corresponding amount of avoided carbon dioxide emissions that would produce the same effect on climate. At local scale, the effect that the increase in urban albedo can potentially have on local
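
    The first-order calculation described above can be sketched as ΔRF ≈ −Δα × S × T, where S is the mean incoming solar radiation at the surface and T an effective transmittance for the reflected beam escaping to space. The numeric values below are assumptions chosen only to illustrate that a 0.01 albedo increase can plausibly yield a forcing change near −0.40 W m⁻²; they are not the study's inputs.

    ```python
    def albedo_forcing(delta_albedo, mean_insolation, transmittance):
        """First-order change in instantaneous solar radiative forcing (W m^-2):
        the extra reflected shortwave that escapes back through the atmosphere."""
        return -delta_albedo * mean_insolation * transmittance

    # Assumed illustrative values: ~110 W m^-2 annual-mean surface insolation
    # for London and an effective upward transmittance of ~0.36.
    drf = albedo_forcing(0.01, 110.0, 0.36)
    ```

    With these assumed inputs the sketch gives roughly −0.4 W m⁻², the same order as the figure quoted in the abstract.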

  20. An Improved Algorithm Based on Minimum Spanning Tree for Multi-scale Segmentation of Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    LI Hui

    2015-07-01

Full Text Available As the basis of object-oriented information extraction from remote sensing imagery, image segmentation using multiple image features, exploiting spatial context information, and following a multi-scale approach is a current research focus. Using an optimization approach from graph theory, an improved multi-scale image segmentation method is proposed. In this method, the image is first processed with a coherence-enhancing anisotropic diffusion filter, followed by a minimum spanning tree segmentation approach, and the resulting segments are merged with reference to a minimum heterogeneity criterion. The heterogeneity criterion is defined as a function of the spectral characteristics and shape parameters of the segments. The purpose of the merging step is to realize multi-scale image segmentation. Tested on two images, the proposed method was visually and quantitatively compared with the segmentation method employed in the eCognition software. The results show that the proposed method is effective and outperforms the latter on areas with subtle spectral differences.
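
    The minimum-spanning-tree step can be sketched with Kruskal's algorithm over a pixel-adjacency graph whose edge weights encode spectral difference. This is a generic sketch, not the paper's implementation, and the subsequent merging by heterogeneity criterion is omitted.

    ```python
    def kruskal_mst(n, edges):
        """Minimum spanning tree by Kruskal's algorithm with union-find.
        `edges` is a list of (weight, u, v) with node ids in 0..n-1;
        in segmentation, weight would be the spectral difference between
        neighboring pixels. Returns the chosen tree edges."""
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        tree = []
        for w, u, v in sorted(edges):       # consider cheapest edges first
            ru, rv = find(u), find(v)
            if ru != rv:                     # accept only if it joins two components
                parent[ru] = rv
                tree.append((w, u, v))
        return tree
    ```

    Cutting the heaviest MST edges (or merging subtrees under a heterogeneity threshold, as in the paper) then yields segments at coarser or finer scales.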

  1. Collaborative Multi-Scale 3d City and Infrastructure Modeling and Simulation

    Science.gov (United States)

    Breunig, M.; Borrmann, A.; Rank, E.; Hinz, S.; Kolbe, T.; Schilcher, M.; Mundani, R.-P.; Jubierre, J. R.; Flurl, M.; Thomsen, A.; Donaubauer, A.; Ji, Y.; Urban, S.; Laun, S.; Vilgertshofer, S.; Willenborg, B.; Menninghaus, M.; Steuer, H.; Wursthorn, S.; Leitloff, J.; Al-Doori, M.; Mazroobsemnani, N.

    2017-09-01

Computer-aided collaborative and multi-scale 3D planning are challenges for complex railway and subway track infrastructure projects in the built environment. Many legal, economic, environmental, and structural requirements have to be taken into account. The stringent use of 3D models in the different phases of the planning process facilitates communication and collaboration between stakeholders such as civil engineers, geological engineers, and decision makers. This paper presents concepts, developments, and experiences gained by an interdisciplinary research group from civil engineering informatics and geo-informatics, combining skills of both the Building Information Modeling and the 3D GIS worlds. New approaches, including the development of a collaborative platform and 3D multi-scale modelling, are proposed for collaborative planning and simulation to improve the digital 3D planning of subway tracks and other infrastructures. Experiences during this research and lessons learned are presented, as well as an outlook on future research focusing on Building Information Modeling and 3D GIS applications for cities of the future.

  2. COLLABORATIVE MULTI-SCALE 3D CITY AND INFRASTRUCTURE MODELING AND SIMULATION

    Directory of Open Access Journals (Sweden)

    M. Breunig

    2017-09-01

Full Text Available Computer-aided collaborative and multi-scale 3D planning are challenges for complex railway and subway track infrastructure projects in the built environment. Many legal, economic, environmental, and structural requirements have to be taken into account. The stringent use of 3D models in the different phases of the planning process facilitates communication and collaboration between stakeholders such as civil engineers, geological engineers, and decision makers. This paper presents concepts, developments, and experiences gained by an interdisciplinary research group from civil engineering informatics and geo-informatics, combining skills of both the Building Information Modeling and the 3D GIS worlds. New approaches, including the development of a collaborative platform and 3D multi-scale modelling, are proposed for collaborative planning and simulation to improve the digital 3D planning of subway tracks and other infrastructures. Experiences during this research and lessons learned are presented, as well as an outlook on future research focusing on Building Information Modeling and 3D GIS applications for cities of the future.

  3. Mixing in 3D Sparse Multi-Scale Grid Generated Turbulence

    Science.gov (United States)

    Usama, Syed; Kopec, Jacek; Tellez, Jackson; Kwiatkowski, Kamil; Redondo, Jose; Malik, Nadeem

    2017-04-01

Flat 2D fractal grids are known to alter turbulence characteristics downstream of the grid as compared to regular grids with the same blockage ratio and the same mass inflow rates [1]. This has excited interest in the turbulence community for possible exploitation for enhanced mixing and related applications. Recently, a new 3D multi-scale grid design has been proposed [2] such that each generation of length scale of turbulence grid elements is held in its own frame, the overall effect being a 3D co-planar arrangement of grid elements. This produces a 'sparse' grid system whereby each generation of grid elements produces a turbulent wake pattern that interacts with the other wake patterns downstream. A critical motivation here is that the effective blockage ratio in the 3D Sparse Grid Turbulence (3DSGT) design is significantly lower than in the flat 2D counterpart - typically the blockage ratio could be reduced from say 20% in 2D down to 4% in the 3DSGT. If this idea can be realized in practice, it could greatly enhance the efficiency of turbulent mixing and transfer processes, with many possible applications. Work has begun on the 3DSGT experimentally using Surface Flow Image Velocimetry (SFIV) [3] at the European facility in the Max Planck Institute for Dynamics and Self-Organization located in Göttingen, Germany, and also at the Technical University of Catalonia (UPC) in Spain, and numerically using Direct Numerical Simulation (DNS) at King Fahd University of Petroleum & Minerals (KFUPM) in Saudi Arabia and at the University of Warsaw in Poland. DNS is the most useful method to compare the experimental results with, and we are studying different types of codes such as Incompact3d and OpenFOAM. Many variables will eventually be investigated for optimal mixing conditions, for example the number of scale generations, the spacing between frames, the size ratio of grid elements, and inflow conditions. We will report upon the first set of findings

  4. Simulation of left atrial function using a multi-scale model of the cardiovascular system.

    Directory of Open Access Journals (Sweden)

    Antoine Pironet

Full Text Available During a full cardiac cycle, the left atrium successively behaves as a reservoir, a conduit and a pump. This complex behavior makes it unrealistic to apply the time-varying elastance theory to characterize the left atrium, first, because this theory has known limitations, and second, because it is still uncertain whether the load independence hypothesis holds. In this study, we aim to bypass this uncertainty by relying on another kind of mathematical model of the cardiac chambers. In the present work, we describe both the left atrium and the left ventricle with a multi-scale model. The multi-scale property of this model comes from the fact that pressure inside a cardiac chamber is derived from a model of the sarcomere behavior. Macroscopic model parameters are identified from reference dog hemodynamic data. The multi-scale model of the cardiovascular system including the left atrium is then simulated to show that the physiological roles of the left atrium are correctly reproduced. These include a biphasic pressure wave and a figure-of-eight-shaped pressure-volume loop. We also test the validity of our model in non-basal conditions by reproducing, with the model, a preload reduction experiment by inferior vena cava occlusion. We compute the variation of eight indices before and after this experiment and obtain the same variation as experimentally observed for seven out of the eight indices. In summary, the multi-scale mathematical model presented in this work is able to correctly account for the three roles of the left atrium and also exhibits a realistic left atrial pressure-volume loop. Furthermore, the model has been previously presented and validated for the left ventricle. This makes it a proper alternative to the time-varying elastance theory if the focus is set on precisely representing the left atrial and left ventricular behaviors.

  5. SegAN: Adversarial Network with Multi-scale L1 Loss for Medical Image Segmentation.

    Science.gov (United States)

    Xue, Yuan; Xu, Tao; Zhang, Han; Long, L Rodney; Huang, Xiaolei

    2018-05-03

Inspired by classic Generative Adversarial Networks (GANs), we propose a novel end-to-end adversarial neural network, called SegAN, for the task of medical image segmentation. Since image segmentation requires dense, pixel-level labeling, the single scalar real/fake output of a classic GAN's discriminator may be ineffective in producing stable and sufficient gradient feedback to the networks. Instead, we use a fully convolutional neural network as the segmentor to generate segmentation label maps, and propose a novel adversarial critic network with a multi-scale L 1 loss function to force the critic and segmentor to learn both global and local features that capture long- and short-range spatial relationships between pixels. In our SegAN framework, the segmentor and critic networks are trained in an alternating fashion in a min-max game: The critic is trained by maximizing a multi-scale loss function, while the segmentor is trained with only gradients passed along by the critic, with the aim to minimize the multi-scale loss function. We show that such a SegAN framework is more effective and stable for the segmentation task, and it leads to better performance than the state-of-the-art U-net segmentation method. We tested our SegAN method using datasets from the MICCAI BRATS brain tumor segmentation challenge. Extensive experimental results demonstrate the effectiveness of the proposed SegAN with multi-scale loss: on BRATS 2013 SegAN gives performance comparable to the state-of-the-art for whole tumor and tumor core segmentation while achieving better precision and sensitivity for Gd-enhanced tumor core segmentation; on BRATS 2015 SegAN achieves better performance than the state-of-the-art in both Dice score and precision.
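
    A simplified, hedged sketch of a multi-scale L1 loss is given below. For tractability it is computed on average-pooled pyramids of the label maps themselves, whereas SegAN extracts multi-scale features from the critic's layers; the pooling scheme and scale choices are assumptions for illustration.

    ```python
    def avg_pool(img, k):
        """Non-overlapping k x k average pooling (dims assumed divisible by k)."""
        h, w = len(img) // k, len(img[0]) // k
        return [[sum(img[i * k + a][j * k + b] for a in range(k) for b in range(k)) / (k * k)
                 for j in range(w)] for i in range(h)]

    def multi_scale_l1(x, y, scales=(1, 2, 4)):
        """Mean absolute difference between two 2D maps, summed over pooled scales,
        so both fine (per-pixel) and coarse (regional) disagreements contribute."""
        total = 0.0
        for k in scales:
            px, py = avg_pool(x, k), avg_pool(y, k)
            n = len(px) * len(px[0])
            total += sum(abs(a - b) for ra, rb in zip(px, py)
                         for a, b in zip(ra, rb)) / n
        return total
    ```

    In the adversarial game, the critic would seek parameters that maximize this quantity between predicted and ground-truth maps while the segmentor minimizes it.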

  6. Application of multi-scale wavelet entropy and multi-resolution Volterra models for climatic downscaling

    Science.gov (United States)

    Sehgal, V.; Lakhanpal, A.; Maheswaran, R.; Khosa, R.; Sridhar, Venkataramana

    2018-01-01

    This study proposes a wavelet-based multi-resolution modeling approach for statistical downscaling of GCM variables to mean monthly precipitation for five locations at Krishna Basin, India. Climatic dataset from NCEP is used for training the proposed models (Jan.'69 to Dec.'94) and are applied to corresponding CanCM4 GCM variables to simulate precipitation for the validation (Jan.'95-Dec.'05) and forecast (Jan.'06-Dec.'35) periods. The observed precipitation data is obtained from the India Meteorological Department (IMD) gridded precipitation product at 0.25 degree spatial resolution. This paper proposes a novel Multi-Scale Wavelet Entropy (MWE) based approach for clustering climatic variables into suitable clusters using k-means methodology. Principal Component Analysis (PCA) is used to obtain the representative Principal Components (PC) explaining 90-95% variance for each cluster. A multi-resolution non-linear approach combining Discrete Wavelet Transform (DWT) and Second Order Volterra (SoV) is used to model the representative PCs to obtain the downscaled precipitation for each downscaling location (W-P-SoV model). The results establish that wavelet-based multi-resolution SoV models perform significantly better compared to the traditional Multiple Linear Regression (MLR) and Artificial Neural Networks (ANN) based frameworks. It is observed that the proposed MWE-based clustering and subsequent PCA, helps reduce the dimensionality of the input climatic variables, while capturing more variability compared to stand-alone k-means (no MWE). The proposed models perform better in estimating the number of precipitation events during the non-monsoon periods whereas the models with clustering without MWE over-estimate the rainfall during the dry season.
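
    The k-means step of the proposed MWE-based clustering can be sketched generically (a plain k-means, not the authors' code; the first-k initialization is chosen only to keep this sketch deterministic, and real use would prefer random or k-means++ seeding).

    ```python
    import math

    def kmeans(points, k, iters=50):
        """Plain k-means: assign each point to its nearest centroid, then
        recompute centroids as cluster means. Points are coordinate tuples;
        in the paper's setting they would be MWE feature vectors."""
        centroids = list(points[:k])  # naive deterministic init (illustration only)
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for p in points:
                i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
                clusters[i].append(p)
            centroids = [
                tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centroids[i]
                for i, cl in enumerate(clusters)
            ]
        return centroids, clusters
    ```

    Each resulting cluster of climate variables would then be reduced by PCA to a few representative components, as the abstract describes.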

  7. Multi-scale Modeling of Power Plant Plume Emissions and Comparisons with Observations

    Science.gov (United States)

    Costigan, K. R.; Lee, S.; Reisner, J.; Dubey, M. K.; Love, S. P.; Henderson, B. G.; Chylek, P.

    2011-12-01

    The Remote Sensing Verification Project (RSVP) test-bed located in the Four Corners region of Arizona, Utah, Colorado, and New Mexico offers a unique opportunity to develop new approaches for estimating emissions of CO2. Two major power plants located in this area produce very large signals of co-emitted CO2 and NO2 in this rural region. In addition to the Environmental Protection Agency (EPA) maintaining Continuous Emissions Monitoring Systems (CEMS) on each of the power plant stacks, the RSVP program has deployed an array of in-situ and remote sensing instruments, which provide both point and integrated measurements. To aid in the synthesis and interpretation of the measurements, a multi-scale atmospheric modeling approach is implemented, using two atmospheric numerical models: the Weather Research and Forecasting Model with chemistry (WRF-Chem; Grell et al., 2005) and the HIGRAD model (Reisner et al., 2003). The high fidelity HIGRAD model incorporates a multi-phase Lagrangian particle based approach to track individual chemical species of stack plumes at ultra-high resolution, using an adaptive mesh. It is particularly suited to model buoyancy effects and entrainment processes at the edges of the power plant plumes. WRF-Chem is a community model that has been applied to a number of air quality problems and offers several physical and chemical schemes that can be used to model the transport and chemical transformation of the anthropogenic plumes out of the local region. Multiple nested grids employed in this study allow the model to incorporate atmospheric variability ranging from synoptic scales to micro-scales (~200 m), while including locally developed flows influenced by the nearby complex terrain of the San Juan Mountains. The simulated local atmospheric dynamics are provided to force the HIGRAD model, which links mesoscale atmospheric variability to the small-scale simulation of the power plant plumes. We will discuss how these two models are applied and

  8. Large scale and cloud-based multi-model analytics experiments on climate change data in the Earth System Grid Federation

    Science.gov (United States)

    Fiore, Sandro; Płóciennik, Marcin; Doutriaux, Charles; Blanquer, Ignacio; Barbera, Roberto; Donvito, Giacinto; Williams, Dean N.; Anantharaj, Valentine; Salomoni, Davide D.; Aloisio, Giovanni

    2017-04-01

In many scientific domains such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and ecosystems where petabytes (PB) of data can be available and data can be distributed and/or replicated, such as the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5). A case study on climate model intercomparison data analysis addressing several classes of multi-model experiments is being implemented in the context of the EU H2020 INDIGO-DataCloud project. Such experiments require the availability of large amounts of data (multi-terabyte order) related to the output of several climate model simulations as well as the exploitation of scientific data management tools for large-scale data analytics. More specifically, the talk discusses in detail a use case on precipitation trend analysis in terms of requirements, architectural design solution, and infrastructural implementation. The experiment has been tested and validated on CMIP5 datasets, in the context of a large scale distributed testbed across EU and US involving three ESGF sites (LLNL, ORNL, and CMCC) and one central orchestrator site (PSNC). The general "environment" of the case study relates to: (i) multi-model data analysis inter-comparison challenges; (ii) addressed on CMIP5 data; and (iii) which are made available through the IS-ENES/ESGF infrastructure. The added value of the solution proposed in the INDIGO-DataCloud project is summarized in the following: (i) it implements a different paradigm (from client- to server-side); (ii) it intrinsically reduces data movement; (iii) it makes the end-user setup lightweight; (iv) it fosters re-usability (of data, final

  9. Multi-scale damage modelling in a ceramic matrix composite using a finite-element microstructure meshfree methodology

    Science.gov (United States)

    2016-01-01

    The problem of multi-scale modelling of damage development in a SiC ceramic fibre-reinforced SiC matrix ceramic composite tube is addressed, with the objective of demonstrating the ability of the finite-element microstructure meshfree (FEMME) model to introduce important aspects of the microstructure into a larger scale model of the component. These are particularly the location, orientation and geometry of significant porosity and the load-carrying capability and quasi-brittle failure behaviour of the fibre tows. The FEMME model uses finite-element and cellular automata layers, connected by a meshfree layer, to efficiently couple the damage in the microstructure with the strain field at the component level. Comparison is made with experimental observations of damage development in an axially loaded composite tube, studied by X-ray computed tomography and digital volume correlation. Recommendations are made for further development of the model to achieve greater fidelity to the microstructure. This article is part of the themed issue ‘Multiscale modelling of the structural integrity of composite materials’. PMID:27242308

  10. Global forward-predicting dynamic routing for traffic concurrency space stereo multi-layer scale-free network

    International Nuclear Information System (INIS)

    Xie Wei-Hao; Zhou Bin; Liu En-Xiao; Lu Wei-Dang; Zhou Ting

    2015-01-01

    Many real communication networks, such as oceanic monitoring networks and land environment observation networks, can be described as space stereo multi-layer structures, and the traffic in these networks is concurrent. It is therefore necessary to understand how traffic dynamics depend on such networks and to find an effective routing strategy that fits concurrent traffic and enhances network performance. In this light, we propose a traffic model for space stereo multi-layer complex networks and introduce two kinds of global forward-predicting dynamic routing strategies, the global forward-predicting hybrid minimum queue (HMQ) routing strategy and the global forward-predicting hybrid minimum degree and queue (HMDQ) routing strategy, for traffic concurrency space stereo multi-layer scale-free networks. By applying the forward-predicting strategy, the proposed routing strategies achieve better performance in traffic concurrency space stereo multi-layer scale-free networks. Compared with the efficient routing strategy and the global dynamic routing strategy, the HMDQ and HMQ routing strategies can optimize the traffic distribution, effectively reduce the number of congested packets, and reach much higher network capacity. (paper)
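
A queue-aware next-hop rule in the spirit of the HMQ/HMDQ strategies above can be sketched as follows. This is a hypothetical minimal illustration, not the paper's algorithm: the function name, the `beta` blending weight, and the toy network are all assumptions.

```python
# Hypothetical sketch of a queue-aware next-hop rule in the spirit of
# the HMQ/HMDQ strategies (names and the beta parameter are assumed).

def next_hop(graph, queues, current, destination, beta=0.5):
    """Pick the neighbour minimising a blend of predicted queue length
    and node degree (smaller score = less congested forwarding choice)."""
    best, best_score = None, float("inf")
    for nb in graph[current]:
        if nb == destination:          # deliver directly if possible
            return nb
        degree = len(graph[nb])
        score = beta * queues[nb] + (1 - beta) * degree
        if score < best_score:
            best, best_score = nb, score
    return best

# Toy 5-node network: node 2 is heavily queued, so traffic from 0 to 4
# is steered through the lightly loaded node 3 instead.
graph = {0: [1, 2, 3], 1: [0, 4], 2: [0, 4], 3: [0, 4], 4: [1, 2, 3]}
queues = {0: 0, 1: 5, 2: 9, 3: 1, 4: 0}
print(next_hop(graph, queues, 0, 4))
```

A full forward-predicting scheme would additionally estimate future queue lengths from in-flight packets rather than using current queue snapshots.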

  11. COMPARISON OF MULTI-SCALE DIGITAL ELEVATION MODELS FOR DEFINING WATERWAYS AND CATCHMENTS OVER LARGE AREAS

    Directory of Open Access Journals (Sweden)

    B. Harris

    2012-07-01

    Full Text Available Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single-catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for the analysis of larger data sets and also provide a consistent tool for the creation and analysis of waterways over extensive areas. However, such analyses are rarely developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines the definition of waterways and catchments over an area of approximately 25,000 km2 to establish the optimal DEM scale required for waterway delineation over large regional projects. The comparative study analysed multi-scale DEMs over two test areas (the Wivenhoe catchment, 543 km2, and a detailed 13 km2 area within the Wivenhoe catchment), including various data types, scales, quality, and variable catchment input parameters. Historic and available DEM data were compared to high-resolution Lidar-based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad-scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.
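
Waterway delineation from a DEM typically starts from a flow-direction grid. The study above used GIS tooling; as a minimal stand-in, the standard D8 step (each cell drains to its steepest downhill neighbour) can be sketched like this, with pit filling, flow accumulation and stream thresholding left out:

```python
import numpy as np

# Minimal D8 flow-direction sketch: each cell drains to its steepest
# downhill neighbour. Real delineation adds pit filling, flow
# accumulation and stream thresholding on top of this step.

def d8_direction(dem):
    """Return, per cell, the (dr, dc) offset of the steepest-descent
    neighbour, or (0, 0) for pits and flats."""
    rows, cols = dem.shape
    out = np.zeros((rows, cols, 2), dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for r in range(rows):
        for c in range(cols):
            best_drop, best_off = 0.0, (0, 0)
            for dr, dc in offsets:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    dist = (dr * dr + dc * dc) ** 0.5
                    drop = (dem[r, c] - dem[rr, cc]) / dist
                    if drop > best_drop:
                        best_drop, best_off = drop, (dr, dc)
            out[r, c] = best_off
    return out

# Eastward-tilted plane: every interior cell should drain due east.
dem = np.array([[3.0, 2.0, 1.0],
                [3.0, 2.0, 1.0],
                [3.0, 2.0, 1.0]])
flow = d8_direction(dem)
print(flow[1, 0])
```

The sensitivity to cell size discussed in the paper shows up directly here: coarsening `dem` changes which neighbour wins the steepest-descent test, and hence the derived stream network.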

  12. Multi-Spacecraft Study of Kinetic scale Turbulence Using MMS Observations in the Solar Wind

    Science.gov (United States)

    Chasapis, A.; Matthaeus, W. H.; Parashar, T.; Fuselier, S. A.; Maruca, B.; Burch, J.; Moore, T. E.; Phan, T.; Pollock, C. J.; Gershman, D. J.; Torbert, R. B.; Russell, C. T.; Strangeway, R. J.

    2017-12-01

    We present a study investigating kinetic-scale turbulence in the solar wind. Most previous studies relied on single-spacecraft measurements, employing the Taylor hypothesis in order to probe different scales. The small separation of the MMS spacecraft, well below the ion inertial scale, allows us for the first time to directly probe turbulent fluctuations in the kinetic range. Using multi-spacecraft measurements, we are able to measure the spatial characteristics of turbulent fluctuations and compare them with the traditional Taylor-based single-spacecraft approach. Combining observations from Cluster and MMS, we were able to cover a wide range of scales, from the inertial range where the turbulent cascade takes place down to the kinetic range where the energy is eventually dissipated. These observations are an important step towards understanding the nature of solar wind turbulence and the processes through which turbulent energy is dissipated into particle heating and acceleration. We compute statistical quantities such as the second-order structure function and the scale-dependent kurtosis, along with their dependence on parameters such as the mean magnetic field direction. Overall, we observe agreement between the single-spacecraft and the multi-spacecraft approaches. However, a small but significant deviation is observed at the smaller scales near the electron inertial scale. The high values of the scale-dependent kurtosis at very small scales, observed via two-point measurements, open up a compelling avenue of investigation for theory and numerical modelling.

  13. Transport synthetic acceleration scheme for multi-dimensional neutron transport problems

    Energy Technology Data Exchange (ETDEWEB)

    Modak, R.S.; Kumar, Vinod; Menon, S.V.G. [Theoretical Physics Div., Bhabha Atomic Research Centre, Mumbai (India); Gupta, Anurag [Reactor Physics Design Div., Bhabha Atomic Research Centre, Mumbai (India)

    2005-09-15

    The numerical solution of the linear multi-energy-group neutron transport equation is required in several analyses in nuclear reactor physics and allied areas. Computer codes based on the discrete ordinates (Sn) method are commonly used for this purpose. These codes solve the external source problem and the K-eigenvalue problem. The overall solution technique involves the solution of a source problem in each energy group as an intermediate procedure. Such a single-group source problem is solved by the so-called Source Iteration (SI) method. As is well known, the SI method converges very slowly for optically thick and highly scattering regions, leading to large CPU times. Over the last three decades, many schemes have been tried to accelerate the SI, the most prominent being the Diffusion Synthetic Acceleration (DSA) scheme. The DSA scheme, however, often fails and is also rather difficult to implement. In view of this, in 1997, Ramone and others developed a new acceleration scheme called Transport Synthetic Acceleration (TSA), which is much more robust and easy to implement. This scheme has recently been incorporated into 2-D and 3-D in-house codes at BARC. This report presents studies on the utility of the TSA scheme for fairly general test problems involving many energy groups and anisotropic scattering. The scheme is found to be useful for problems in Cartesian as well as cylindrical geometry. (author)
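
The slow convergence of Source Iteration can be illustrated with a deliberately simplified model (a textbook reduction, not the report's actual Sn solver): in an infinite homogeneous medium the SI update collapses to the scalar fixed-point iteration phi ← c·phi + q, whose error contracts by the scattering ratio c per sweep, so c → 1 means very slow decay.

```python
# Toy illustration of why Source Iteration stalls in highly scattering
# media: the error shrinks by the scattering ratio c each sweep, and the
# converged flux tends to the analytic limit q / (1 - c).

def si_sweeps(c, q=1.0, tol=1e-6):
    """Count sweeps until the scalar-flux iterate converges."""
    phi_old, sweeps = 0.0, 0
    while True:
        phi_new = c * phi_old + q
        sweeps += 1
        if abs(phi_new - phi_old) < tol * abs(phi_new):
            return sweeps, phi_new
        phi_old = phi_new

for c in (0.5, 0.9, 0.99):
    n, phi = si_sweeps(c)
    print(c, n, phi)   # sweep count blows up as c approaches 1
```

Acceleration schemes such as DSA and TSA work by solving a cheap low-order problem (diffusion, or a coarse transport sweep) for the error that this slow fixed-point iteration leaves behind.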

  14. Transport synthetic acceleration scheme for multi-dimensional neutron transport problems

    International Nuclear Information System (INIS)

    Modak, R.S.; Vinod Kumar; Menon, S.V.G.; Gupta, Anurag

    2005-09-01

    The numerical solution of the linear multi-energy-group neutron transport equation is required in several analyses in nuclear reactor physics and allied areas. Computer codes based on the discrete ordinates (Sn) method are commonly used for this purpose. These codes solve the external source problem and the K-eigenvalue problem. The overall solution technique involves the solution of a source problem in each energy group as an intermediate procedure. Such a single-group source problem is solved by the so-called Source Iteration (SI) method. As is well known, the SI method converges very slowly for optically thick and highly scattering regions, leading to large CPU times. Over the last three decades, many schemes have been tried to accelerate the SI, the most prominent being the Diffusion Synthetic Acceleration (DSA) scheme. The DSA scheme, however, often fails and is also rather difficult to implement. In view of this, in 1997, Ramone and others developed a new acceleration scheme called Transport Synthetic Acceleration (TSA), which is much more robust and easy to implement. This scheme has recently been incorporated into 2-D and 3-D in-house codes at BARC. This report presents studies on the utility of the TSA scheme for fairly general test problems involving many energy groups and anisotropic scattering. The scheme is found to be useful for problems in Cartesian as well as cylindrical geometry. (author)

  15. Scaling Professional Problems of Teachers in Turkey with Paired Comparison Method

    Directory of Open Access Journals (Sweden)

    Yasemin Duygu ESEN

    2017-03-01

    Full Text Available In this study, teachers' professional problems were investigated and their significance levels were measured with the paired comparison method. The study was carried out in the survey model. The study group consisted of 484 teachers working in public schools accredited by the Ministry of National Education (MEB) in Turkey. "The Teacher Professional Problems Survey", developed by the researchers, was used as the data collection tool. In the data analysis, the scaling method based on the third conditional equation of Thurstone's law of comparative judgement was used. According to the results of the study, the teachers' professional problems comprise teacher training and teacher quality, employee rights and financial problems, decreasing professional reputation, problems with MEB policies, problems with union activities, workload, problems with school administration, physical conditions and lack of infrastructure, problems with parents, and problems with students. According to the teachers, the most significant problem is MEB educational policies. This is followed by the decrease of professional reputation, physical conditions and lack of infrastructure, problems with students, employee rights and financial problems, problems with school administration, teacher training and teacher quality, problems with parents, workload, and problems with union activities. When teachers' professional problems were analyzed by the seniority variable, there was little difference in scale values. While teachers with 0-10 years of experience consider the decrease of professional reputation the most important problem, teachers with 11-45 years of experience put the problems with MEB policies in first place.
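
Thurstone-style scaling of paired comparisons can be made concrete with a miniature example. This sketch uses the common Case V simplification (z-scores of the preference proportions, averaged by column); the toy proportion matrix is invented, not the study's data:

```python
import numpy as np
from statistics import NormalDist

# Miniature Thurstone Case V scaling: start from a matrix of pairwise
# preference proportions, map them to z-scores, and take column means
# as interval-scale values (anchored so the lowest item sits at zero).

def thurstone_case5(P):
    """P[i, j] = proportion of judges rating item j above item i."""
    nd = NormalDist()
    Z = np.vectorize(nd.inv_cdf)(P)
    scale = Z.mean(axis=0)
    return scale - scale.min()

# Toy data for three 'problems' compared pairwise (diagonal fixed at .5):
# the last item is preferred (judged more severe) in every pairing.
P = np.array([[0.50, 0.70, 0.90],
              [0.30, 0.50, 0.80],
              [0.10, 0.20, 0.50]])
print(thurstone_case5(P))
```

The resulting values are on an interval scale, which is what lets the study rank problems by severity rather than merely by vote counts.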

  16. A multi-scale and multi-field coupling nonlinear constitutive theory for the layered magnetoelectric composites

    Science.gov (United States)

    Xu, Hao; Pei, Yongmao; Li, Faxin; Fang, Daining

    2018-05-01

    The magnetic, electric and mechanical behaviors are strongly coupled in magnetoelectric (ME) materials, making them highly promising for functional-device applications. In this paper, the magneto-electro-mechanical fully coupled constitutive behaviors of ME laminates are systematically studied both theoretically and experimentally. A new probabilistic domain switching function considering the surface ferromagnetic anisotropy and the interface charge-mediated effect is proposed. Then a multi-scale, multi-field coupling nonlinear constitutive model for layered ME composites is developed with physically measurable parameters. Experiments were performed to compare the theoretical predictions with the measured data, and the two are in good agreement. The proposed constitutive relation can be used to describe the nonlinear multi-field coupling properties of both ME laminates and thin films. Several novel coupling phenomena, such as electric-field control of magnetization and magnetic-field tuning of polarization, are observed and analyzed. Furthermore, the size effect of the electric tuning behavior of magnetization is predicted, which demonstrates a competition mechanism between the interface strain-mediated effect and the charge-driven effect. Our study offers deep insight into the microscopic coupling mechanism and macroscopic properties of ME layered composites, which benefits the design of electromagnetic functional devices.

  17. Enabling High Performance Large Scale Dense Problems through KBLAS

    KAUST Repository

    Abdelfattah, Ahmad

    2014-05-04

    KBLAS (KAUST BLAS) is a small library that provides highly optimized BLAS routines on systems accelerated with GPUs. KBLAS is entirely written in CUDA C and targets NVIDIA GPUs with compute capability 2.0 (Fermi) or higher. The current focus is on level-2 BLAS routines, namely the general matrix-vector multiplication (GEMV) kernel and the symmetric/Hermitian matrix-vector multiplication (SYMV/HEMV) kernel. KBLAS provides these two kernels in all four precisions (s, d, c, and z), with support for multi-GPU systems. Through advanced optimization techniques that target latency hiding and push memory bandwidth to the limit, KBLAS outperforms state-of-the-art kernels by 20-90%. Competitors include CUBLAS-5.5, MAGMABLAS-1.4.0, and CULA-R17. The SYMV/HEMV kernel from KBLAS has been adopted by NVIDIA and should appear in CUBLAS-6.0. KBLAS has been used in large-scale simulations of multi-object adaptive optics.

  18. Generalized modeling of multi-component vaporization/condensation phenomena for multi-phase-flow analysis

    International Nuclear Information System (INIS)

    Morita, K.; Fukuda, K.; Tobita, Y.; Kondo, Sa.; Suzuki, T.; Maschek, W.

    2003-01-01

    A new multi-component vaporization/condensation (V/C) model was developed to provide a generalized model for safety analysis codes of liquid metal cooled reactors (LMRs). These codes simulate thermal-hydraulic phenomena of multi-phase, multi-component flows, which is essential to investigate core disruptive accidents of LMRs such as fast breeder reactors and accelerator driven systems. The developed model characterizes the V/C processes associated with phase transition by employing heat transfer and mass-diffusion limited models for analyses of relatively short-time-scale multi-phase, multi-component hydraulic problems, among which vaporization and condensation, or simultaneous heat and mass transfer, play an important role. The heat transfer limited model describes the non-equilibrium phase transition processes occurring at interfaces, while the mass-diffusion limited model is employed to represent effects of non-condensable gases and multi-component mixture on V/C processes. Verification of the model and method employed in the multi-component V/C model of a multi-phase flow code was performed successfully by analyzing a series of multi-bubble condensation experiments. The applicability of the model to the accident analysis of LMRs is also discussed by comparison between steam and metallic vapor systems. (orig.)

  19. PSI-BOIL, a building block towards the multi-scale modeling of flow boiling phenomena

    International Nuclear Information System (INIS)

    Niceno, Bojan; Andreani, Michele; Prasser, Horst-Michael

    2008-01-01

    Full text of publication follows: In this work we report the current status of the Swiss project Multi-scale Modeling Analysis (MSMA), jointly financed by PSI and Swissnuclear. The project aims at the multi-scale (down to nano-scale) modelling of convective boiling phenomena and the development of physically-based closure laws for the physical scales appropriate to the problem considered, to be used within Computational Fluid Dynamics (CFD) codes. The final goal is to construct a new computational tool, called Parallel Simulator of Boiling phenomena (PSI-BOIL), for the direct simulation of processes all the way down to the small scales of interest, and an improved CFD code for the mechanistic prediction of two-phase flow and heat transfer in the fuel rod bundle of a nuclear reactor. An improved understanding of the physics of boiling will be gained from the theoretical work as well as from novel small- and medium-scale experiments targeted to assist the development of closure laws. PSI-BOIL is a computer program designed for efficient simulation of turbulent fluid flow and heat transfer phenomena in simple geometries. Turbulence is simulated directly (DNS), so solver efficiency plays a vital role in a successful simulation. With high performance as one of the main prerequisites, PSI-BOIL is tailored to be as efficient as possible, relying on well-established numerical techniques and sacrificing all features which are not essential for the success of this project and which might slow down the solution procedure. The governing equations are discretized in space with an orthogonal staggered finite-volume method. Time discretization is performed with the projection method, the most obvious and most widely used choice for DNS. Systems of linearized equations, stemming from the discretization of the governing equations, are solved with the Additive Correction Multigrid (ACM) method. Two distinguishing features of PSI-BOIL are the possibility to
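
The projection idea mentioned above can be sketched in a few lines. This is only illustrative, on a periodic collocated grid with a spectral Poisson solve; PSI-BOIL itself uses staggered finite volumes and the ACM solver:

```python
import numpy as np

# Minimal sketch of the projection (fractional-step) step: subtract
# grad(phi), where lap(phi) = div(u), leaving a divergence-free field.
# Periodic collocated grid, spectral Poisson solve (illustrative only).

def _wavenumbers(n, L=2 * np.pi):
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    return np.meshgrid(k, k, indexing="ij")

def divergence(u, v):
    kx, ky = _wavenumbers(u.shape[0])
    return np.fft.ifft2(1j * kx * np.fft.fft2(u)
                        + 1j * ky * np.fft.fft2(v)).real

def project(u, v):
    kx, ky = _wavenumbers(u.shape[0])
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    divh = 1j * kx * uh + 1j * ky * vh
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                 # leave the mean mode untouched
    phih = -divh / k2              # solve  lap(phi) = div(u)
    uh -= 1j * kx * phih
    vh -= 1j * ky * phih
    return np.fft.ifft2(uh).real, np.fft.ifft2(vh).real

# A deliberately divergent field: div = cos(x)cos(y) before projection.
n = 64
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u, v = np.sin(X) * np.cos(Y), np.zeros_like(X)
up, vp = project(u, v)
print(np.max(np.abs(divergence(u, v))), np.max(np.abs(divergence(up, vp))))
```

In a full time step, the provisional velocity comes from advection, diffusion and surface-tension forces; the projection then enforces mass conservation, and the Poisson solve is where a fast solver such as ACM earns its keep.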

  20. Development of a multi-objective PBIL evolutionary algorithm applied to a nuclear reactor core reload optimization problem

    International Nuclear Information System (INIS)

    Machado, Marcelo D.; Schirru, Roberto

    2005-01-01

    The nuclear reactor core reload optimization problem consists in finding a pattern of partially burned-up and fresh fuels that optimizes the plant's next operation cycle. This optimization problem has been traditionally solved using an expert's knowledge, but recently artificial intelligence techniques have also been applied successfully. The artificial intelligence optimization techniques generally have a single objective. However, most real-world engineering problems, including nuclear core reload optimization, have more than one objective (multi-objective) and these objectives are usually conflicting. The aim of this work is to develop a tool to solve multi-objective problems based on the Population-Based Incremental Learning (PBIL) algorithm. The new tool is applied to solve the Angra 1 PWR core reload optimization problem with the purpose of creating a Pareto surface, so that a pattern selected from this surface can be applied for the plant's next operation cycle. (author)
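
The core of PBIL is a probability vector that is sampled and then nudged towards good solutions. A bare-bones single-objective sketch (on OneMax for brevity; the learning rate, population size and bit count here are assumed toy values, and the multi-objective version described above would instead update towards non-dominated loading patterns kept in a Pareto archive):

```python
import random

# Bare-bones PBIL: sample bitstrings from a probability vector, then
# move the vector a fraction lr towards the best sample per generation.

def pbil(n_bits=20, pop=30, lr=0.1, generations=60, seed=1):
    random.seed(seed)
    p = [0.5] * n_bits                       # probability vector
    best = None
    for _ in range(generations):
        samples = [[1 if random.random() < pi else 0 for pi in p]
                   for _ in range(pop)]
        samples.sort(key=sum, reverse=True)  # fitness = number of ones
        elite = samples[0]
        if best is None or sum(elite) > sum(best):
            best = elite
        # nudge the probability vector towards the elite sample
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, elite)]
    return best, p

best, p = pbil()
print(sum(best))   # approaches n_bits as the vector converges
```

For core reload, each bitstring would instead encode a loading pattern, and the conflicting objectives (e.g. cycle length versus power peaking) would be handled by ranking samples with Pareto dominance rather than a single fitness sum.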