Multiple Scale Reaction-Diffusion-Advection Problems with Moving Fronts
Nefedov, Nikolay
2016-06-01
In this work we discuss the further development of the general scheme of the asymptotic method of differential inequalities for investigating the stability and motion of sharp internal layers (fronts) in nonlinear singularly perturbed parabolic equations, which in applications are called reaction-diffusion-advection equations. Our approach is illustrated for some new important cases of initial boundary value problems. We present results on stability and on the motion of the fronts.
Regularization methods for ill-posed problems in multiple Hilbert scales
International Nuclear Information System (INIS)
Mazzieri, Gisela L; Spies, Ruben D
2012-01-01
Several convergence results in Hilbert scales under different source conditions are proved and orders of convergence and optimal orders of convergence are derived. Also, relations between those source conditions are proved. The concept of a multiple Hilbert scale on a product space is introduced, and regularization methods on these scales are defined, both for the case of a single observation and for the case of multiple observations. In the latter case, it is shown how vector-valued regularization functions in these multiple Hilbert scales can be used. In all cases, convergence is proved and orders and optimal orders of convergence are shown. Finally, some potential applications and open problems are discussed. (paper)
Accurate scaling on multiplicity
International Nuclear Information System (INIS)
Golokhvastov, A.I.
1989-01-01
The commonly used formula of KNO scaling, ⟨n⟩P_n = Ψ(n/⟨n⟩), for discrete distributions (multiplicity distributions) is shown to contradict mathematically the normalization condition ΣP_n = 1. The effect is essential even at ISR energies. A consistent generalization of the concept of similarity for multiplicity distributions is obtained. The multiplicity distributions of negative particles in PP and also e⁺e⁻ inelastic interactions are similar over the whole studied energy range. Collider data are discussed. 14 refs.; 8 figs
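The inconsistency noted in this abstract can be checked numerically. The sketch below sums the discrete KNO form P_n = Ψ(n/⟨n⟩)/⟨n⟩ over integer n and shows that the total probability approaches 1 only for large ⟨n⟩; the gamma-type scaling function is an illustrative choice, not the paper's fit.

```python
import math

def psi(z):
    # Illustrative KNO scaling function, normalized so that the
    # continuum integral of psi over z in (0, inf) equals 1.
    k = 3.0
    return (k ** k / math.gamma(k)) * z ** (k - 1) * math.exp(-k * z)

def total_probability(mean_n):
    # Discrete sum of P_n = psi(n/<n>)/<n> over n = 1, 2, ...
    return sum(psi(n / mean_n) / mean_n for n in range(1, 500))

# The continuum normalization holds only approximately for discrete n:
for mean in (2.0, 5.0, 50.0):
    print(mean, total_probability(mean))
```

At ⟨n⟩ = 2 the total visibly deviates from unity, which is the discreteness effect the abstract refers to.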
Kuehn, Christian
2015-01-01
This book provides an introduction to dynamical systems with multiple time scales. Its approach is to give an overview of key areas, particularly topics that are less available in introductory form. The broad range of topics included makes it accessible for students and researchers new to the field to gain a quick and thorough overview. The first of its kind, this book merges a wide variety of different mathematical techniques into a more unified framework. It is richly illustrated, with many examples and exercises and an extensive bibliography. The target audience is senior undergraduates, graduate students and researchers interested in using multiple time scale dynamics in nonlinear science, from either a theoretical or a mathematical modelling perspective.
Directory of Open Access Journals (Sweden)
Shihuang Hong
2009-01-01
We present sufficient conditions for the existence of at least twin or triple positive solutions of a nonlinear four-point singular boundary value problem with a p-Laplacian dynamic equation on a time scale. Our results are obtained via some new multiple fixed point theorems.
MULTIPLE SCALES FOR SUSTAINABLE RESULTS
This session will highlight recent research that incorporates the use of multiple scales and innovative environmental accounting to better inform decisions that affect sustainability, resilience, and vulnerability at all scales. Effective decision-making involves assessment at mu...
International Nuclear Information System (INIS)
Harrison, L.
1991-01-01
Small scale wind energy conversion is finding it even more difficult to realise its huge potential market than grid-connected wind power. One of the main reasons is that its technical development is carried out in isolated parts of the world with little opportunity for technology transfer: small scale wind energy converters (SWECS) are not born of one technology, but have evolved for different purposes. As a result, the SWECS community has no powerful lobbying force speaking with one voice to promote the technology. There are three distinct areas of application for SWECS: water pumping for domestic and livestock water supplies, irrigation, drainage etc., where no other mechanical source of power is available or viable; battery charging for lighting, TV, radio and telecommunications in areas far from a grid or road system; and wind-diesel systems, mainly for use on islands where supply of diesel oil is possible but costly. An attempt is being made to found an association to support the widespread implementation of SWECS and to promote their use. Wind Energy for Rural Areas is intended to have a permanent secretariat, based in Holland. (AB)
Complex multiplication and lifting problems
Chai, Ching-Li; Oort, Frans
2013-01-01
Abelian varieties with complex multiplication lie at the origins of class field theory, and they play a central role in the contemporary theory of Shimura varieties. They are special in characteristic 0 and ubiquitous over finite fields. This book explores the relationship between such abelian varieties over finite fields and over arithmetically interesting fields of characteristic 0 via the study of several natural CM lifting problems which had previously been solved only in special cases. In addition to giving complete solutions to such questions, the authors provide numerous examples to illustrate the general theory and present a detailed treatment of many fundamental results and concepts in the arithmetic of abelian varieties, such as the Main Theorem of Complex Multiplication and its generalizations, the finer aspects of Tate's work on abelian varieties over finite fields, and deformation theory. This book provides an ideal illustration of how modern techniques in arithmetic geometry (such as descent the...
Xu, Jiuping
2014-01-01
This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method. PMID:24550708
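The priority-based scheduling half of such an algorithm can be illustrated in miniature. The sketch below is a hypothetical serial schedule-generation step that decodes a particle's priority vector into activity start times; it ignores resources, modes and the fuzzy random environment that the paper's full method handles.

```python
def serial_schedule(priorities, durations, predecessors):
    """Serial schedule generation: repeatedly start the highest-priority
    activity whose predecessors have all finished (resources ignored)."""
    start, done = {}, set()
    while len(done) < len(durations):
        # Activities whose predecessors are all scheduled
        ready = [a for a in durations
                 if a not in done
                 and all(p in done for p in predecessors.get(a, []))]
        a = max(ready, key=lambda j: priorities[j])
        # Earliest start: after the latest-finishing predecessor
        start[a] = max((start[p] + durations[p]
                        for p in predecessors.get(a, [])), default=0)
        done.add(a)
    return start
```

In a priority-based PSO, each particle's position would supply the `priorities` vector and the resulting makespan would serve as (part of) its fitness.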
The problem of multiple carcinomas
International Nuclear Information System (INIS)
Kegel, W.; Schmieder, A.
1982-01-01
This retrospective study reports on the occurrence of multiple carcinomas among the patients of our Department of Radiotherapy. Examination of 1290 patients during 1978 to 1980 revealed simultaneous or successive secondary or tertiary tumours in 76 cases (5.8%). These multiple tumours were most frequent in the mammary gland, the female genital organs and the respiratory system. The incidence in women was double that in men. Diagnosis and therapy of malignant tumours must always consider the possibility of multiple carcinomas, appearing simultaneously or successively, spontaneously or as a result of iatrogenic influences. This applies in particular to the multicentric and bilateral occurrence of the early types of cancer of the female breast. (orig.)
DEFF Research Database (Denmark)
Frankel, Christian
2015-01-01
Only a few studies in the field of new new economic sociology deal with the simultaneity of multiple markets in their analysis. One central explanation of this situation is the limitations inherent in the new new economic sociology. In this review essay I address such limitations as a way to develop research...
THE MULTIPLE CHOICE PROBLEM WITH INTERACTIONS BETWEEN CRITERIA
Directory of Open Access Journals (Sweden)
Luiz Flavio Autran Monteiro Gomes
2015-12-01
An important problem in Multi-Criteria Decision Analysis arises when one must select at least two alternatives at the same time. This can be denoted as a multiple choice problem. In other words, instead of evaluating each of the alternatives separately, they must be combined into groups of n alternatives, where n ≥ 2. When the multiple choice problem must be solved under multiple criteria, the result is a multi-criteria, multiple choice problem. In this paper, it is shown through examples how this problem can be tackled on a bipolar scale. The Choquet integral is used to take care of interactions between criteria. A numerical application example is conducted using data from SEBRAE-RJ, a non-profit private organization whose mission is to promote competitiveness, sustainable development and entrepreneurship in the state of Rio de Janeiro, Brazil. The paper closes with suggestions for future research.
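The Choquet integral used here to capture criteria interaction can be sketched compactly. The aggregation formula below is the standard discrete Choquet integral; the capacity values in the usage example are invented for illustration and are not from the paper's SEBRAE-RJ data.

```python
def choquet_integral(scores, capacity):
    """Discrete Choquet integral of criterion scores w.r.t. a capacity.

    scores:   dict criterion -> score
    capacity: dict frozenset of criteria -> weight in [0, 1], with
              capacity[frozenset()] = 0 and capacity[all criteria] = 1.
    A non-additive capacity encodes interaction between criteria.
    """
    ordered = sorted(scores, key=scores.get)      # ascending scores
    total, prev = 0.0, 0.0
    for i, c in enumerate(ordered):
        coalition = frozenset(ordered[i:])        # criteria scoring >= current
        total += (scores[c] - prev) * capacity[coalition]
        prev = scores[c]
    return total

# Hypothetical capacity with positive interaction between criteria a and b:
cap = {frozenset(): 0.0, frozenset({'a'}): 0.3,
       frozenset({'b'}): 0.3, frozenset({'a', 'b'}): 1.0}
print(choquet_integral({'a': 0.6, 'b': 0.4}, cap))
```

With an additive capacity the integral reduces to a weighted mean; the non-additive capacity above rewards alternatives that do well on both criteria simultaneously.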
Beyond KNO multiplicative cascades and novel multiplicity scaling laws
Hegyi, S
1999-01-01
The collapse of multiplicity distributions P_n onto a universal scaling curve arises when P_n is expressed as a function of the standardized multiplicity (n−c)/λ, with c and λ being location and scale parameters governed by leading particle effects and the growth of the average multiplicity. It is demonstrated that self-similar multiplicative cascade processes such as QCD parton branching naturally lead to a novel type of scaling behavior of P_n which manifests itself in Mellin space through a location change controlled by the degree of multifractality and a scale change governed by the depth of the cascade. Applying the new scaling rule it is shown how to restore the data-collapsing behavior of P_n measured in hh collisions at ISR and SPS energies. (21 refs).
Resolvent-Techniques for Multiple Exercise Problems
International Nuclear Information System (INIS)
Christensen, Sören; Lempa, Jukka
2015-01-01
We study optimal multiple stopping of strong Markov processes with random refraction periods. The refraction periods are assumed to be exponentially distributed with a common rate and independent of the underlying dynamics. Our main tool is using the resolvent operator. In the first part, we reduce infinite stopping problems to ordinary ones in a general strong Markov setting. This leads to explicit solutions for wide classes of such problems. Starting from this result, we analyze problems with finitely many exercise rights and explain solution methods for some classes of problems with underlying Lévy and diffusion processes, where the optimal characteristics of the problems can be identified more explicitly. We illustrate the main results with explicit examples
Genetic Algorithms for Multiple-Choice Problems
Aickelin, Uwe
2010-04-01
This thesis investigates the use of problem-specific knowledge to enhance a genetic algorithm approach to multiple-choice optimisation problems. It shows that such information can significantly enhance performance, but that the choice of information and the way it is included are important factors for success. Two multiple-choice problems are considered. The first is constructing a feasible nurse roster that considers as many requests as possible. In the second problem, shops are allocated to locations in a mall subject to constraints and maximising the overall income. Genetic algorithms are chosen for their well-known robustness and ability to solve large and complex discrete optimisation problems. However, a survey of the literature reveals room for further research into generic ways to include constraints into a genetic algorithm framework. Hence, the main theme of this work is to balance feasibility and cost of solutions. In particular, co-operative co-evolution with hierarchical sub-populations, problem structure exploiting repair schemes and indirect genetic algorithms with self-adjusting decoder functions are identified as promising approaches. The research starts by applying standard genetic algorithms to the problems and explaining the failure of such approaches due to epistasis. To overcome this, problem-specific information is added in a variety of ways, some of which are designed to increase the number of feasible solutions found whilst others are intended to improve the quality of such solutions. As well as a theoretical discussion as to the underlying reasons for using each operator, extensive computational experiments are carried out on a variety of data. These show that the indirect approach relies less on problem structure and hence is easier to implement and superior in solution quality.
[Supporting parenting in families with multiple problems].
Le Foll, Julie
2015-01-01
Supporting parenthood in families with multiple problems is a major early prevention challenge. Indeed, the factors of vulnerability, especially if they mount up, expose the child to an increased risk of a somatic pathology, developmental delays, learning difficulties and maltreatment. In order to limit the impact of these vulnerabilities on the health of mothers and infants, it is essential to act early, to adapt the working framework and to collaborate within a network. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
International Nuclear Information System (INIS)
Stopford, J.
2001-01-01
The social impact of the closure of ChNPP is indeed a significant and complex one. There is no single simple solution to the problem, but if the effects of closure are to be mitigated effectively, all who are involved, be they part of the local community, donor agencies or project staff, need to work together towards the common goal, putting aside personal agendas and ensuring that every resource, financial and human, is used in a productive and constructive way, complementing the activities of others and not competing with them.
Accurate multiplicity scaling in isotopically conjugate reactions
International Nuclear Information System (INIS)
Golokhvastov, A.I.
1989-01-01
An accurate scaling of multiplicity distributions is presented. The distributions of π⁻ mesons (negative particles) and π⁺ mesons in different nucleon-nucleon interactions (PP, NP and NN) are described by the same universal function Ψ(z) and the same energy dependence of the scale parameter, which determines the stretching factor applied to the unit function Ψ(z) to obtain the desired multiplicity distribution. 29 refs.; 6 figs
Some Problems of Industrial Scale-Up.
Jackson, A. T.
1985-01-01
Scientific ideas of the biological laboratory are turned into economic realities in industry only after several problems are solved. Economics of scale, agitation, heat transfer, sterilization of medium and air, product recovery, waste disposal, and future developments are discussed using aerobic respiration as the example in the scale-up…
Scaling of Attitudes Toward Population Problems
Watkins, George A.
1975-01-01
This study related population problem attitudes and socioeconomic variables. Six items concerned with number of children, birth control, family, science, economic depression, and overpopulation were selected for a Guttman scalogram. Education, occupation, and number of children were correlated with population problems scale scores; marital status,…
Sensitivity analysis for large-scale problems
Noor, Ahmed K.; Whitworth, Sandra L.
1987-01-01
The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
Estimating scaled treatment effects with multiple outcomes.
Kennedy, Edward H; Kangovi, Shreya; Mitra, Nandita
2017-01-01
In classical study designs, the aim is often to learn about the effects of a treatment or intervention on a single outcome; in many modern studies, however, data on multiple outcomes are collected and it is of interest to explore effects on multiple outcomes simultaneously. Such designs can be particularly useful in patient-centered research, where different outcomes might be more or less important to different patients. In this paper, we propose scaled effect measures (via potential outcomes) that translate effects on multiple outcomes to a common scale, using mean-variance and median-interquartile range based standardizations. We present efficient, nonparametric, doubly robust methods for estimating these scaled effects (and weighted average summary measures), and for testing the null hypothesis that treatment affects all outcomes equally. We also discuss methods for exploring how treatment effects depend on covariates (i.e., effect modification). In addition to describing efficiency theory for our estimands and the asymptotic behavior of our estimators, we illustrate the methods in a simulation study and a data analysis. Importantly, and in contrast to much of the literature concerning effects on multiple outcomes, our methods are nonparametric and can be used not only in randomized trials to yield increased efficiency, but also in observational studies with high-dimensional covariates to reduce confounding bias.
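The mean-variance standardization described above can be illustrated with a naive plug-in sample version. The paper's estimators are nonparametric and doubly robust; this sketch only shows the common-scale idea of dividing each mean difference by a pooled standard deviation, with made-up outcome names.

```python
import statistics

def scaled_effects(treated, control):
    """Standardized effect per outcome: (mean_T - mean_C) / pooled SD.

    treated, control: dict outcome -> list of observed values.
    Puts outcomes measured in different units on one common (SD) scale.
    """
    effects = {}
    for outcome in treated:
        t, c = treated[outcome], control[outcome]
        # Pooled variance with the usual (n_t + n_c - 2) denominator
        pooled_var = ((len(t) - 1) * statistics.variance(t)
                      + (len(c) - 1) * statistics.variance(c)) / (len(t) + len(c) - 2)
        effects[outcome] = (statistics.mean(t) - statistics.mean(c)) / pooled_var ** 0.5
    return effects
```

Once every outcome is on the SD scale, the effects can be compared or averaged directly, which is what makes tests of "treatment affects all outcomes equally" meaningful.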
Nonlinear triple-point problems on time scales
Directory of Open Access Journals (Sweden)
Douglas R. Anderson
2004-04-01
We establish the existence of multiple positive solutions to the nonlinear second-order triple-point boundary-value problem on time scales, $$\displaylines{ u^{\Delta\nabla}(t)+h(t)f(t,u(t))=0, \cr u(a)=\alpha u(b)+\delta u^{\Delta}(a),\quad \beta u(c)+\gamma u^{\Delta}(c)=0 }$$ for $t\in[a,c]\subset\mathbb{T}$, where $\mathbb{T}$ is a time scale, $\beta, \gamma, \delta\ge 0$ with $\beta+\gamma>0$, $0
Stabilization Algorithms for Large-Scale Problems
DEFF Research Database (Denmark)
Jensen, Toke Koldborg
2006-01-01
The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Part of the work concerns a parameter-choice heuristic related to the L-curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...
The Multiple-Minima Problem in Protein Folding
Scheraga, Harold A.
1991-10-01
The conformational energy surface of a polypeptide or protein has many local minima, and conventional energy minimization procedures reach only a local minimum (near the starting point of the optimization algorithm) instead of the global minimum (the multiple-minima problem). Several procedures have been developed to surmount this problem, the most promising of which are: (a) build-up procedure, (b) optimization of electrostatics, (c) Monte Carlo-plus-energy minimization, (d) electrostatically driven Monte Carlo, (e) inclusion of distance restraints, (f) adaptive importance-sampling Monte Carlo, (g) relaxation of dimensionality, (h) pattern recognition, and (i) diffusion equation method. These procedures have been applied to a variety of polypeptide structural problems, and the results of such computations are presented. These include the computation of the structures of open-chain and cyclic peptides, fibrous proteins and globular proteins. Present efforts are being devoted to scaling up these procedures from small polypeptides to proteins, to try to compute the three-dimensional structure of a protein from its amino acid sequence.
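The Monte Carlo-plus-energy-minimization strategy (item c above, the idea behind basin hopping) can be sketched in one dimension. The toy "energy surface" and all parameters below are illustrative, not a protein force field: perturb the current conformation, locally minimize, then accept or reject by the Metropolis rule.

```python
import math
import random

def mc_plus_minimization(f, x0, steps=300, hop=1.5, temp=1.0, seed=0):
    """Monte Carlo-plus-minimization sketch: hop, minimize, Metropolis-accept."""
    rng = random.Random(seed)

    def local_min(x, lr=0.01, iters=300, h=1e-5):
        # Crude gradient descent using a numerical derivative
        for _ in range(iters):
            x -= lr * (f(x + h) - f(x - h)) / (2 * h)
        return x

    x = local_min(x0)
    best = x
    for _ in range(steps):
        cand = local_min(x + rng.uniform(-hop, hop))
        # Accept downhill moves always, uphill moves with Boltzmann probability
        if f(cand) < f(x) or rng.random() < math.exp((f(x) - f(cand)) / temp):
            x = cand
            if f(x) < f(best):
                best = x
    return best

# Toy surface: local minimum near x = +1, global minimum near x = -1
energy = lambda x: (x * x - 1) ** 2 + 0.2 * x
```

Started near the wrong basin (x0 = 3), plain minimization stops at the local minimum near +1; the hop-and-minimize loop escapes to the global one near -1.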
Modelling of rate effects at multiple scales
DEFF Research Database (Denmark)
Pedersen, R.R.; Simone, A.; Sluys, L. J.
2008-01-01
At the macro- and meso-scales a rate dependent constitutive model is used in which visco-elasticity is coupled to visco-plasticity and damage. A viscous length scale effect is introduced to control the size of the fracture process zone. By comparison of the widths of the fracture process zone, the length scales in the meso-model and the macro-model can be coupled; in this fashion, a bridging of length scales can be established. A computational analysis of a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale, including information from the micro-scale.
Multiple time scale methods in tokamak magnetohydrodynamics
International Nuclear Information System (INIS)
Jardin, S.C.
1984-01-01
Several methods are discussed for integrating the magnetohydrodynamic (MHD) equations in tokamak systems on other than the fastest time scale. The dynamical grid method for simulating ideal MHD instabilities utilizes a natural nonorthogonal time-dependent coordinate transformation based on the magnetic field lines. The coordinate transformation is chosen to be free of the fast time scale motion itself, and to yield a relatively simple scalar equation for the total pressure, P = p + B²/2μ₀, which can be integrated implicitly to average over the fast time scale oscillations. Two methods are described for the resistive time scale. The zero-mass method uses a reduced set of two-fluid transport equations obtained by expanding in the inverse magnetic Reynolds number, and in the small ratio of perpendicular to parallel mobilities and thermal conductivities. The momentum equation becomes a constraint equation that forces the pressure and magnetic fields and currents to remain in force balance equilibrium as they evolve. The large mass method artificially scales up the ion mass and viscosity, thereby reducing the severe time scale disparity between wavelike and diffusionlike phenomena, but not changing the resistive time scale behavior. Other methods addressing the intermediate time scales are discussed.
The Multiple Pendulum Problem via Maple[R]
Salisbury, K. L.; Knight, D. G.
2002-01-01
The way in which computer algebra systems, such as Maple, have made the study of physical problems of some considerable complexity accessible to mathematicians and scientists with modest computational skills is illustrated by solving the multiple pendulum problem. A solution is obtained for four pendulums with no restriction on the size of the…
On multiple level-set regularization methods for inverse problems
International Nuclear Information System (INIS)
DeCezaro, A; Leitão, A; Tai, X-C
2009-01-01
We analyze a multiple level-set method for solving inverse problems with piecewise constant solutions. This method corresponds to an iterated Tikhonov method for a particular Tikhonov functional G_α based on TV–H¹ penalization. We define generalized minimizers for our Tikhonov functional and establish an existence result. Moreover, we prove convergence and stability results for the proposed Tikhonov method. A multiple level-set algorithm is derived from the first-order optimality conditions for the Tikhonov functional G_α, similarly to the iterated Tikhonov method. The proposed multiple level-set method is tested on an inverse potential problem. Numerical experiments show that the method is able to recover multiple objects as well as multiple contrast levels.
Problem-Solving: Scaling the "Brick Wall"
Benson, Dave
2011-01-01
Across the primary and secondary phases, pupils are encouraged to use and apply their knowledge, skills, and understanding of mathematics to solve problems in a variety of forms, ranging from single-stage word problems to the challenge of extended rich tasks. Amongst many others, Cockcroft (1982) emphasised the importance and relevance of…
A Hybrid Genetic Algorithm for the Multiple Crossdocks Problem
Directory of Open Access Journals (Sweden)
Zhaowei Miao
2012-01-01
We study a multiple crossdocks problem with supplier and customer time windows, where any violation of time windows will incur a penalty cost and the flows through the crossdock are constrained by fixed transportation schedules and crossdock capacities. We prove this problem to be NP-hard in the strong sense and therefore focus on developing efficient heuristics. Based on the problem structure, we propose a hybrid genetic algorithm (HGA) integrating a greedy technique and a variable neighborhood search method to solve the problem. Extensive experiments under different scenarios were conducted, and the results show that HGA outperforms the CPLEX solver, providing solutions in realistic timescales.
A Multiple-Scale Analysis of Evaporation Induced Marangoni Convection
Hennessy, Matthew G.
2013-04-23
This paper considers the stability of thin liquid layers of binary mixtures of a volatile (solvent) species and a nonvolatile (polymer) species. Evaporation leads to a depletion of the solvent near the liquid surface. If surface tension increases for lower solvent concentrations, sufficiently strong compositional gradients can lead to Bénard-Marangoni-type convection that is similar to the kind which is observed in films that are heated from below. The onset of the instability is investigated by a linear stability analysis. Due to evaporation, the base state is time dependent, thus leading to a nonautonomous linearized system which impedes the use of normal modes. However, the time scale for the solvent loss due to evaporation is typically long compared to the diffusive time scale, so a systematic multiple scales expansion can be sought for a finite-dimensional approximation of the linearized problem. This is determined to leading and to next order. The corrections indicate that the validity of the expansion does not depend on the magnitude of the individual eigenvalues of the linear operator, but it requires these eigenvalues to be well separated. The approximations are applied to analyze experiments by Bassou and Rharbi with polystyrene/toluene mixtures [Langmuir, 25 (2009), pp. 624-632]. © 2013 Society for Industrial and Applied Mathematics.
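The systematic multiple scales expansion invoked here follows the standard two-timing pattern, which can be sketched generically (this is the textbook ansatz, not the paper's specific linearized system):

```latex
% Generic two-timing ansatz (illustration only)
u(t;\varepsilon) = u_0(t,T) + \varepsilon\, u_1(t,T) + O(\varepsilon^2),
\qquad T = \varepsilon t, \qquad
\frac{\mathrm{d}}{\mathrm{d}t} \;\longmapsto\; \partial_t + \varepsilon\, \partial_T .
```

Here the slow time T tracks the slow evaporative drift of the base state; requiring that u_1 remain bounded in t (no secular growth) fixes the evolution of the leading-order amplitude on the slow scale.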
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem which is equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence is proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
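The bound-and-prune structure of such an algorithm can be sketched on a toy instance. The code below minimizes a product of two linear forms over a box; interval bounds on each factor serve as a crude stand-in for the paper's two-phase linear relaxation, and boxes whose bound cannot beat the incumbent are pruned.

```python
import heapq
import itertools

def lin_range(c, box):
    # Tight interval bounds of the linear form c.x over an axis-aligned box
    lo = sum(ci * (l if ci >= 0 else u) for ci, (l, u) in zip(c, box))
    hi = sum(ci * (u if ci >= 0 else l) for ci, (l, u) in zip(c, box))
    return lo, hi

def minimize_product(c1, c2, box, tol=1e-4):
    """Toy branch and bound for min (c1.x)(c2.x) over a box."""
    value = lambda x: (sum(a * b for a, b in zip(c1, x))
                       * sum(a * b for a, b in zip(c2, x)))
    def lower(b):
        a, A = lin_range(c1, b)
        d, D = lin_range(c2, b)
        return min(a * d, a * D, A * d, A * D)   # interval product bound
    tie = itertools.count()                       # heap tie-breaker
    best_x = [(l + u) / 2 for l, u in box]
    best = value(best_x)
    heap = [(lower(box), next(tie), box)]
    while heap:
        lb, _, b = heapq.heappop(heap)
        if lb >= best - tol:
            continue                              # prune: cannot improve
        mid = [(l + u) / 2 for l, u in b]
        if value(mid) < best:                     # midpoint as upper bound
            best, best_x = value(mid), mid
        i = max(range(len(b)), key=lambda j: b[j][1] - b[j][0])  # widest edge
        l, u = b[i]
        for half in ((l, mid[i]), (mid[i], u)):
            child = list(b)
            child[i] = half
            clb = lower(child)
            if clb < best - tol:
                heapq.heappush(heap, (clb, next(tie), child))
    return best, best_x
```

On min x·y with x in [-1, 2] and y in [1, 3], the search converges to the corner (-1, 3) with optimal value -3, pruning all boxes whose interval bound cannot beat the incumbent.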
Multiple scattering problems in heavy ion elastic recoil detection analysis
International Nuclear Information System (INIS)
Johnston, P.N.; El Bouanani, M.; Stannard, W.B.; Bubb, I.F.; Cohen, D.D.; Dytlewski, N.; Siegele, R.
1998-01-01
A number of groups use Heavy Ion Elastic Recoil Detection Analysis (HIERDA) to study materials science problems. Nevertheless, there is no standard methodology for the analysis of HIERDA spectra. To overcome this deficiency we have been establishing codes for 2-dimensional data analysis. A major problem involves the effects of multiple and plural scattering which are very significant, even for quite thin (∼100 nm) layers of the very heavy elements. To examine the effects of multiple scattering we have made comparisons between the small-angle model of Sigmund et al. and TRIM calculations. (authors)
Heuristic for Solving the Multiple Alignment Sequence Problem
Directory of Open Access Journals (Sweden)
Roman Anselmo Mora Gutiérrez
2011-03-01
In this paper we develop a new algorithm for solving the multiple sequence alignment problem (MSA), a hybrid metaheuristic based on harmony search and simulated annealing. The hybrid was validated with the methodology of Julie Thompson. This is a basic algorithm, and the results obtained at this stage are encouraging.
Multiple solutions for inhomogeneous nonlinear elliptic problems arising in astrophyiscs
Directory of Open Access Journals (Sweden)
Marco Calahorrano
2004-04-01
Using variational methods we prove the existence and multiplicity of solutions for some nonlinear inhomogeneous elliptic problems on a bounded domain in $\mathbb{R}^n$, with $n\geq 2$ and a smooth boundary, and when the domain is $\mathbb{R}_+^n$.
SDG and qualitative trend based model multiple scale validation
Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike
2017-09-01
Verification, Validation and Accreditation (VV&A) is a key technology in simulation and modelling. Traditional model validation methods are weak in completeness: they operate at a single scale and depend on human experience. A multiple scale validation method based on SDG (Signed Directed Graph) models and qualitative trends is proposed. First, the SDG model is built and qualitative trends are added to it. Complete testing scenarios are then produced by positive inference, and the multiple scale validation is carried out by comparing these testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the method is demonstrated by validating a reactor model.
De Corte, E.; And Others
One important finding from recent research on multiplication word problems is that children's performances are strongly affected by the nature of the multiplier (whether it is an integer, decimal larger than 1 or a decimal smaller than 1). On the other hand, the size of the multiplicand has little or no effect on problem difficulty. The aim of the…
Receptivity to Kinetic Fluctuations: A Multiple Scales Approach
Edwards, Luke; Tumin, Anatoli
2017-11-01
The receptivity of high-speed compressible boundary layers to kinetic fluctuations (KF) is considered within the framework of fluctuating hydrodynamics. The formulation is based on the idea that KF-induced dissipative fluxes may lead to the generation of unstable modes in the boundary layer. Fedorov and Tumin solved the receptivity problem using an asymptotic matching approach which utilized a resonant inner solution in the vicinity of the generation point of the second Mack mode. Here we take a slightly more general approach based on a multiple scales WKB ansatz which requires fewer assumptions about the behavior of the stability spectrum. The approach is modeled after the one taken by Luchini to study low speed incompressible boundary layers over a swept wing. The new framework is used to study examples of high-enthalpy, flat plate boundary layers whose spectra exhibit nuanced behavior near the generation point, such as first mode instabilities and near-neutral evolution over moderate length scales. The configurations considered exhibit supersonic unstable second Mack modes despite the temperature ratio T_w/T_e > 1, contrary to prior expectations. Supported by AFOSR and ONR.
K-State Problem Identification Rating Scales for College Students
Robertson, John M.; Benton, Stephen L.; Newton, Fred B.; Downey, Ronald G.; Marsh, Patricia A.; Benton, Sheryl A.; Tseng, Wen-Chih; Shin, Kang-Hyun
2006-01-01
The K-State Problem Identification Rating Scales, a new screening instrument for college counseling centers, gathers information about clients' presenting symptoms, functioning levels, and readiness to change. Three studies revealed 7 scales: Mood Difficulties, Learning Problems, Food Concerns, Interpersonal Conflicts, Career Uncertainties,…
Problem-solving with multiple interdependent criteria: better solution to complex problems
International Nuclear Information System (INIS)
Carlsson, C.; Fuller, R.
1996-01-01
We consider multiple objective programming (MOP) problems with additive interdependencies, that is, when the states of some chosen objective are attained through supportive or inhibitory feedbacks from several other objectives. MOP problems with independent objectives (when the cause-effect relations between the decision variables and the objectives are completely known) are treated as special cases of MOP problems with interdependent objectives. We illustrate our ideas on a simple three-objective real-life problem.
Multiple scaling power in liquid gallium under pressure conditions
Energy Technology Data Exchange (ETDEWEB)
Li, Renfeng; Wang, Luhong; Li, Liangliang; Yu, Tony; Zhao, Haiyan; Chapman, Karena W.; Rivers, Mark L.; Chupas, Peter J.; Mao, Ho-kwang; Liu, Haozhe
2017-06-01
Generally, a single scaling exponent, D_f, can characterize the fractal structures of metallic glasses according to the scaling power law. However, when the scaling power law is applied to liquid gallium upon compression, the results show multiple scaling exponents whose values exceed 3 within the first four coordination spheres in real space, indicating that the power law fails to describe the fractal feature in liquid gallium. The increase in the first coordination number with pressure means that the first coordination spheres at different pressures are not similar to each other in a geometrical sense. This multiple scaling power behavior is confined within a correlation length of ξ ≈ 14–15 Å at the applied pressures, according to the decay of G(r) in liquid gallium. Beyond this length the liquid gallium system can roughly be viewed as homogeneous, as indicated by the scaling exponent, D_s, which is close to 3 beyond the first four coordination spheres.
Scaling and mean normalized multiplicity in hadron-nucleus collisions
International Nuclear Information System (INIS)
Khan, M.Q.R.; Ahmad, M.S.; Hasan, R.
1987-01-01
Recently it has been reported that the dependence of the mean normalized multiplicity, R_A, in hadron-nucleus collisions upon the effective number of projectile encounters is projectile independent. We report the failure of this kind of scaling using the world data at accelerator and cosmic ray energies. In fact, we have found that the dependence of R_A upon the number of projectile encounters hA is projectile independent. This leads to a new kind of scaling. Further, the scaled multiplicity distributions are found to be independent of the nature and energy of the incident hadron in the energy range ≅ (17.2-300) GeV. (orig.)
A NEW HEURISTIC ALGORITHM FOR MULTIPLE TRAVELING SALESMAN PROBLEM
Directory of Open Access Journals (Sweden)
F. NURIYEVA
2017-06-01
The Multiple Traveling Salesman Problem (mTSP) is a combinatorial optimization problem in the NP-hard class. The mTSP aims to acquire the minimum cost for traveling a given set of cities by assigning each of them to a different salesman in order to create m tours. This paper presents a new heuristic algorithm based on the shortest path algorithm to find a solution for the mTSP. The proposed method has been programmed in the C language and its performance has been analyzed on library instances. The computational results show the efficiency of this method.
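The paper's shortest-path-based heuristic is not given here; as a stand-in illustration of heuristic tour construction for the mTSP, the sketch below (hypothetical helper names, Euclidean costs, single shared depot assumed) greedily extends whichever tour's endpoint is closest to an unvisited city:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_mtsp(depot, cities, m):
    """Build m tours from one depot: at each step, append the unvisited
    city that is cheapest to reach from any tour's current endpoint.
    Returns the closed tours and their combined length."""
    tours = [[depot] for _ in range(m)]
    unvisited = set(range(len(cities)))
    while unvisited:
        best = None                       # (cost, tour index, city index)
        for t, tour in enumerate(tours):
            for c in unvisited:
                d = dist(tour[-1], cities[c])
                if best is None or d < best[0]:
                    best = (d, t, c)
        _, t, c = best
        tours[t].append(cities[c])
        unvisited.remove(c)
    # close each tour by returning to the depot
    total = sum(sum(dist(t[i], t[i + 1]) for i in range(len(t) - 1))
                + dist(t[-1], depot) for t in tours)
    return [t + [depot] for t in tours], total
```

A construction like this gives a feasible starting solution; the paper's method and standard improvement moves (2-opt, city reassignment between salesmen) would then reduce the cost further.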
The problem of scale in planetary geomorphology
Rossbacher, L. A.
1985-01-01
Recent planetary exploration has shown that specific landforms exhibit a significant range in size between planets. Similar features on Earth and Mars offer some of the best examples of this scale difference. The difference in heights of volcanic features between the two planets has been cited often; the Martian volcano Olympus Mons stands approximately 26 km high, but Mauna Loa rises only 11 km above the Pacific Ocean floor. Polygonally fractured ground in the northern plains of Mars has diameters up to 20 km across; the largest terrestrial polygons are only 500 m in diameter. Mars also has landslides, aeolian features, and apparent rift valleys larger than any known on Earth. No single factor can explain the variations in landform size between planets. Controls on variation on Earth, related to climate, lithology, or elevation, have seldom been considered in detail. The size differences between features on Earth and other planets seem to be caused by a complex group of interacting relationships. The major planetary parameters that may affect landform size are discussed.
Multiple regression for physiological data analysis: the problem of multicollinearity.
Slinker, B K; Glantz, S A
1985-07-01
Multiple linear regression, in which several predictor variables are related to a response variable, is a powerful statistical tool for gaining quantitative insight into complex in vivo physiological systems. For these insights to be correct, all predictor variables must be uncorrelated. However, in many physiological experiments the predictor variables cannot be precisely controlled and thus change in parallel (i.e., they are highly correlated). There is a redundancy of information about the response, a situation called multicollinearity, that leads to numerical problems in estimating the parameters in regression equations; the parameters are often of incorrect magnitude or sign or have large standard errors. Although multicollinearity can be avoided with good experimental design, not all interesting physiological questions can be studied without encountering multicollinearity. In these cases various ad hoc procedures have been proposed to mitigate multicollinearity. Although many of these procedures are controversial, they can be helpful in applying multiple linear regression to some physiological problems.
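A small numerical sketch of the phenomenon described above: when two predictors are nearly collinear, the information they carry about the response is redundant, which the variance inflation factor (VIF) detects. This assumes NumPy and synthetic data, not any particular physiological data set:

```python
import numpy as np

def vif(X, j):
    """Variance inflation factor of column j: 1/(1 - R^2) from
    regressing X[:, j] on the remaining columns (with intercept)."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(X)), others])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)   # nearly collinear with x1
x3 = rng.normal(size=n)               # independent predictor
X = np.column_stack([x1, x2, x3])

print([round(vif(X, j), 1) for j in range(3)])
```

The VIFs for x1 and x2 are in the thousands while x3 stays near 1; a common rule of thumb treats VIF above roughly 10 as a sign that the coefficient estimates for those predictors are unreliable.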
Scaling of charged particle multiplicity distributions in relativistic nuclear collisions
International Nuclear Information System (INIS)
Ahamd, N.; Hushnud; Azmi, M.D.; Zafar, M.; Irfan, M.; Khan, M.M.; Tufail, A.
2011-01-01
The validity of KNO scaling in hadron-hadron and hadron-nucleus collisions has been tested by several workers. Multiplicity distributions for p-emulsion interactions are found to be consistent with the KNO scaling hypothesis for pp collisions. The applicability of the scaling law was extended to FNAL energies by earlier workers. Slattery has shown that the KNO scaling hypothesis is in fine agreement with the data for pp interactions over a wide range of incident energies. An attempt is, therefore, made to examine the scaling hypothesis using multiplicity distributions of particles produced in 3.7A GeV/c 16O-nucleus and 4.5A GeV/c and 14.5A GeV/c 28Si-nucleus interactions
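The KNO test described above amounts to rescaling each multiplicity distribution by its mean and checking that the points collapse onto one curve, ψ(z) = ⟨n⟩P_n versus z = n/⟨n⟩. A toy check (geometric multiplicity laws standing in for data at two energies, not the emulsion data of this record; for a geometric law the scaling function is approximately e^(-z)):

```python
def geometric_pn(mean, nmax):
    """Geometric multiplicity law P_n = p(1-p)^n with the given mean."""
    p = 1.0 / (mean + 1.0)
    return [p * (1.0 - p) ** n for n in range(nmax)]

def kno_points(pn):
    """Rescale a multiplicity distribution to (z, psi) = (n/<n>, <n> P_n)."""
    mean = sum(n * q for n, q in enumerate(pn))
    return [(n / mean, mean * q) for n, q in enumerate(pn)]

def psi_at(points, z0):
    """psi at the grid point closest to z0."""
    return min(points, key=lambda t: abs(t[0] - z0))[1]

low = kno_points(geometric_pn(5.0, 200))    # "low energy" sample, <n> = 5
high = kno_points(geometric_pn(20.0, 800))  # "high energy" sample, <n> = 20
print(psi_at(low, 1.0), psi_at(high, 1.0))  # nearly equal: the curves collapse
```

With real data the same collapse (or its failure, as reported above for R_A scaling) is judged by overlaying the rescaled points from all energies.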
Topology Optimization of Large Scale Stokes Flow Problems
DEFF Research Database (Denmark)
Aage, Niels; Poulsen, Thomas Harpsøe; Gersborg-Hansen, Allan
2008-01-01
This note considers topology optimization of large scale 2D and 3D Stokes flow problems using parallel computations. We solve problems with up to 1.125.000 elements in 2D and 128.000 elements in 3D on a shared memory computer consisting of Sun UltraSparc IV CPUs.
Multiple Charging Station Location-Routing Problem with Time Window of Electric Vehicle
Directory of Open Access Journals (Sweden)
Wang Li-ying
2015-11-01
This paper presents the electric vehicle (EV) multiple charging station location-routing problem with time windows, which jointly optimizes the routing plan of capacitated EVs and the strategy of charging stations. In particular, the charging station strategy includes both infrastructure-type selection and station location decisions. The problem accounts for two critical constraints in logistics practice: vehicle loading capacity and customer time windows. A hybrid heuristic that incorporates an adaptive variable neighborhood search (AVNS) with a tabu search algorithm for intensification was developed to address the problem. Specialized neighborhood structures and charging station selection methods for use in the shaking step of the AVNS are proposed. Benchmarked against the commercial solver CPLEX, experimental results demonstrate that the algorithm finds nearly optimal solutions on small-scale test instances; results on large-scale instances also show the effectiveness of the algorithm.
Local supersymmetry and the problem of the mass scales
International Nuclear Information System (INIS)
Nilles, H.P.
1983-02-01
Spontaneously broken supergravity might help us to understand the puzzle of the mass scales in grand unified models. We describe the general mechanism and point out the remaining problems. Some new results on local supercolor are presented
Rasch analysis of the Multiple Sclerosis Impact Scale (MSIS-29)
Ramp, Melina; Khan, Fary; Misajon, Rose Anne; Pallant, Julie F
2009-01-01
Abstract Background Multiple Sclerosis (MS) is a degenerative neurological disease that causes impairments, including spasticity, pain, fatigue, and bladder dysfunction, which negatively impact on quality of life. The Multiple Sclerosis Impact Scale (MSIS-29) is a disease-specific health-related quality of life (HRQoL) instrument, developed using the patient's perspective on disease impact. It consists of two subscales assessing the physical (MSIS-29-PHYS) and psychological (MSIS-29-PSYCH) im...
Discussion of several problems in nuclear instrument scale
International Nuclear Information System (INIS)
Li Xuezhen; Zhou Sichun; Xiao Caijin
2005-01-01
Instrument calibration is the first problem in measurement, nuclear instruments included. Since there are different calibration methods, finding the best way to obtain the calibration equation is the focus of this study. The article discusses several methods of deriving the calibration equation from the viewpoint of error propagation and compares their merits, concluding that the most precise is the Deming method; in addition there is a simpler practical method, the method of means. Finally, the theory is validated on the calibration of an X-ray fluorescence instrument. (authors)
Image Based Solution to Occlusion Problem for Multiple Robots Navigation
Directory of Open Access Journals (Sweden)
Taj Mohammad Khan
2012-04-01
In machine vision, occlusion is a challenging issue in image based mapping and navigation tasks. This paper presents a multiple view vision based algorithm for building an occlusion-free map of an indoor environment. The map is assumed to be utilized by mobile robots within the workspace. It has a wide range of applications, including mobile robot path planning and navigation, access control in restricted areas, and surveillance systems. We used a wall mounted fixed camera system. After intensity adjustment and background subtraction of the synchronously captured images, image registration was performed. We applied our algorithm to the registered images to resolve the occlusion problem. This technique works well even in the presence of total occlusion for a longer period.
Multiple scales in metapopulations of public goods producers
Bauer, Marianne; Frey, Erwin
2018-04-01
Multiple scales in metapopulations can give rise to paradoxical behavior: in a conceptual model for a public goods game, the species associated with a fitness cost due to the public good production can be stabilized in the well-mixed limit due to the mere existence of these scales. The scales in this model involve a length scale corresponding to separate patches, coupled by mobility, and separate time scales for reproduction and interaction with a local environment. Contrary to the well-mixed high mobility limit, we find that for low mobilities, the interaction rate progressively stabilizes this species due to stochastic effects, and that the formation of spatial patterns is not crucial for this stabilization.
Integrated Production-Distribution Scheduling Problem with Multiple Independent Manufacturers
Directory of Open Access Journals (Sweden)
Jianhong Hao
2015-01-01
We consider the nonstandard parts supply chain with a public service platform for machinery integration in China. The platform assigns orders placed by a machinery enterprise to multiple independent manufacturers who produce nonstandard parts, and makes a production schedule and a batch delivery schedule for each manufacturer in a coordinated manner. Each manufacturer has a single plant with parallel machines, located far away from the other manufacturers. Orders are first processed at the plants and then shipped directly from the plants to the enterprise so as to be finished before a given deadline. We study this integrated production-distribution scheduling problem with multiple manufacturers, maximizing a weighted sum of the manufacturers' profits under the constraints that all orders are finished before the deadline and no manufacturer's profit is negative. Based on the optimality condition analysis, we formulate the problem as a mixed integer programming model and use CPLEX to solve it.
Learning of Rule Ensembles for Multiple Attribute Ranking Problems
Dembczyński, Krzysztof; Kotłowski, Wojciech; Słowiński, Roman; Szeląg, Marcin
In this paper, we consider the multiple attribute ranking problem from a Machine Learning perspective. We propose two approaches to statistical learning of an ensemble of decision rules from decision examples provided by the Decision Maker in terms of pairwise comparisons of some objects. The first approach consists in learning a preference function defining a binary preference relation for a pair of objects. The result of application of this function on all pairs of objects to be ranked is then exploited using the Net Flow Score procedure, giving a linear ranking of objects. The second approach consists in learning a utility function for single objects. The utility function also gives a linear ranking of objects. In both approaches, the learning is based on the boosting technique. The presented approaches to Preference Learning share good properties of the decision rule preference model and have good performance in the massive-data learning problems. As Preference Learning and Multiple Attribute Decision Aiding share many concepts and methodological issues, in the introduction, we review some aspects bridging these two fields. To illustrate the two approaches proposed in this paper, we solve with them a toy example concerning the ranking of a set of cars evaluated by multiple attributes. Then, we perform a large data experiment on real data sets. The first data set concerns credit rating. Since recent research in the field of Preference Learning is motivated by the increasing role of modeling preferences in recommender systems and information retrieval, we chose two other massive data sets from this area - one comes from movie recommender system MovieLens, and the other concerns ranking of text documents from 20 Newsgroups data set.
Multiple Choice Knapsack Problem: example of planning choice in transportation.
Zhong, Tao; Young, Rhonda
2010-05-01
Transportation programming, a process of selecting projects for funding given budget and other constraints, is becoming more complex as a result of new federal laws, local planning regulations, and increased public involvement. This article describes the use of an integer programming tool, the Multiple Choice Knapsack Problem (MCKP), to provide optimal solutions to transportation programming problems in cases where alternative versions of projects are under consideration. In this paper, optimization methods for use in the transportation programming process are compared, and the process of building and solving the optimization problems is discussed. The concepts underlying the use of MCKP are presented and a real-world transportation programming example at various budget levels is provided. This article illustrates how the use of MCKP addresses the modern complexities and provides timely solutions in transportation programming practice. While the article uses transportation programming as a case study, MCKP can be useful in other fields where a similar decision among a subset of the alternatives is required. Copyright 2009 Elsevier Ltd. All rights reserved.
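For concreteness, a minimal dynamic program for the MCKP: each group models one project with alternative versions (cost, benefit), exactly one version is funded per project, and costs are assumed integral. This is the generic textbook formulation, not the article's solver:

```python
def mckp(groups, budget):
    """groups: one list of (cost, value) alternatives per project;
    exactly one alternative is funded per project, integer costs.
    Returns (best total value, chosen alternative index per project),
    or (None, None) if no selection fits the budget."""
    NEG = float("-inf")
    dp = [NEG] * (budget + 1)     # dp[b]: best value spending exactly b
    dp[0] = 0.0
    back = []                     # per project: (previous b, item) table
    for items in groups:
        ndp = [NEG] * (budget + 1)
        pick = [None] * (budget + 1)
        for b in range(budget + 1):
            if dp[b] == NEG:
                continue
            for i, (cost, value) in enumerate(items):
                nb = b + cost
                if nb <= budget and dp[b] + value > ndp[nb]:
                    ndp[nb] = dp[b] + value
                    pick[nb] = (b, i)
        dp = ndp
        back.append(pick)
    b = max(range(budget + 1), key=lambda k: dp[k])
    if dp[b] == NEG:
        return None, None
    best, picks = dp[b], []
    for pick in reversed(back):   # backtrack through the per-project tables
        b, i = pick[b]
        picks.append(i)
    return best, picks[::-1]
```

For two projects with versions [(2, 3), (4, 5)] and [(3, 4), (1, 1)] and a budget of 5, the optimum funds the cheaper version of each project for a total benefit of 7; this mirrors the article's setting of choosing among alternative project versions at a given budget level.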
A multiple-scale power series method for solving nonlinear ordinary differential equations
Directory of Open Access Journals (Sweden)
Chein-Shan Liu
2016-02-01
The power series solution is a cheap and effective method for solving nonlinear problems, like the Duffing-van der Pol oscillator, the Volterra population model and nonlinear boundary value problems. A novel power series method is developed by considering multiple scales $R_k$ in the power term $(t/R_k)^k$; the scales are derived explicitly to reduce the ill-conditioned behavior in the data interpolation. In the method a huge value multiplying a tiny value is avoided, so that we can decrease the numerical instability which is the main reason for the failure of the conventional power series method. The multiple scales derived from an integral can be used in the power series expansion, providing very accurate numerical solutions of the problems considered in this paper.
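The core numerical point, that an explicit scale R in the basis (t/R)^k tames the ill-conditioning of raw monomials t^k, can be seen directly by comparing condition numbers of the two collocation matrices. This is illustrative only and uses a single fixed scale R = max(t), not the paper's scheme for deriving the scales R_k:

```python
import numpy as np

t = np.linspace(0.0, 100.0, 50)   # a "long" time interval
deg = 12

# raw monomial basis t^k versus scaled basis (t/R)^k with R = t.max()
V_raw = np.vander(t, deg + 1, increasing=True)
V_scaled = np.vander(t / t.max(), deg + 1, increasing=True)

c_raw = np.linalg.cond(V_raw)
c_scaled = np.linalg.cond(V_scaled)
print(f"cond raw: {c_raw:.2e}, cond scaled: {c_scaled:.2e}")
```

The raw matrix mixes entries of order 1 with entries of order 100^12, exactly the "huge value times a tiny value" the abstract mentions, while the scaled basis keeps every column O(1) and the conditioning improves by many orders of magnitude.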
International Nuclear Information System (INIS)
McCurdy, C. William; Martín, Fernando
2004-01-01
B-spline methods are now well established as widely applicable tools for the evaluation of atomic and molecular continuum states. The mathematical technique of exterior complex scaling has been shown, in a variety of other implementations, to be a powerful method with which to solve atomic and molecular scattering problems, because it allows the correct imposition of continuum boundary conditions without their explicit analytic application. In this paper, an implementation of exterior complex scaling in B-splines is described that can bring the well-developed technology of B-splines to bear on new problems, including multiple ionization and breakup problems, in a straightforward way. The approach is demonstrated for examples involving the continuum motion of nuclei in diatomic molecules as well as electronic continua. For problems involving electrons, a method based on Poisson's equation is presented for computing two-electron integrals over B-splines under exterior complex scaling
Vehicle Routing Problem with Backhaul, Multiple Trips and Time Window
Directory of Open Access Journals (Sweden)
Johan Oscar Ong
2011-01-01
Transportation planning is one of the important components for increasing efficiency and effectiveness in the supply chain system. Good planning will yield savings in the total cost of the supply chain. This paper develops a new VRP variant, the VRP with backhauls, multiple trips, and time windows (VRPBMTTW), along with problem solving techniques using Ant Colony Optimization (ACO) and Sequential Insertion as the initial solution algorithm. ACO is modified by adding a decoding process in order to determine the number of vehicles, total duration time, and range of duration time regardless of checking capacity constraints and time windows. The algorithm is tested on a set of random data and verified, and its parameter changes are analyzed. The computational results for hypothetical data with 50% backhaul and mixed time windows are reported.
Functional analysis screening for multiple topographies of problem behavior.
Bell, Marlesha C; Fahmie, Tara A
2018-04-23
The current study evaluated a screening procedure for multiple topographies of problem behavior in the context of an ongoing functional analysis. Experimenters analyzed the function of a topography of primary concern while collecting data on topographies of secondary concern. We used visual analysis to predict the function of secondary topographies and a subsequent functional analysis to test those predictions. Results showed that a general function was accurately predicted for five of six (83%) secondary topographies. A specific function was predicted and supported for a subset of these topographies. The experimenters discuss the implication of these results for clinicians who have limited time for functional assessment. © 2018 Society for the Experimental Analysis of Behavior.
HMC algorithm with multiple time scale integration and mass preconditioning
Urbach, C.; Jansen, K.; Shindler, A.; Wenger, U.
2006-01-01
We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at β=5.6 and at pion masses ranging from 380 to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the "Berlin Wall" figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.
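The multiple time scale idea used above is nested (Sexton-Weingarten style) leapfrog integration: the cheap force is integrated with several substeps inside each step of the expensive force. A toy 1D sketch, with harmonic potentials standing in for the preconditioned and fermionic forces rather than any lattice Dirac operator:

```python
def nested_leapfrog(q, p, f_slow, f_fast, dt, n_outer, n_inner):
    """Outer leapfrog in the slow (expensive) force; each outer step wraps
    n_inner leapfrog substeps in the fast (cheap, stiff) force."""
    for _ in range(n_outer):
        p += 0.5 * dt * f_slow(q)
        h = dt / n_inner
        for _ in range(n_inner):
            p += 0.5 * h * f_fast(q)
            q += h * p
            p += 0.5 * h * f_fast(q)
        p += 0.5 * dt * f_slow(q)
    return q, p

# toy split: soft potential 0.5*q^2 plus stiff potential 50*q^2
f_slow = lambda q: -q
f_fast = lambda q: -100.0 * q
H = lambda q, p: 0.5 * p * p + 0.5 * q * q + 50.0 * q * q

q, p = nested_leapfrog(1.0, 0.0, f_slow, f_fast, dt=0.05, n_outer=100, n_inner=10)
print(H(q, p) / H(1.0, 0.0))   # stays close to 1: energy is well conserved
```

The payoff mirrors the HMC setting: the stiff force is evaluated often but is cheap, while the expensive force is evaluated only once per outer step, keeping the Metropolis acceptance rate high at reduced cost.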
Effects of dependence in high-dimensional multiple testing problems
Directory of Open Access Journals (Sweden)
van de Wiel Mark A
2008-02-01
Background: We consider effects of dependence among variables of high-dimensional data in multiple hypothesis testing problems, in particular the False Discovery Rate (FDR) control procedures. Recent simulation studies consider only simple correlation structures among variables, which is hardly inspired by real data features. Our aim is to systematically study effects of several network features like sparsity and correlation strength by imposing dependence structures among variables using random correlation matrices. Results: We study the robustness against dependence of several FDR procedures that are popular in microarray studies, such as Benjamini-Hochberg FDR, Storey's q-value, SAM and resampling based FDR procedures. False Non-discovery Rates and estimates of the number of null hypotheses are computed from those methods and compared. Our simulation study shows that methods such as SAM and the q-value do not adequately control the FDR to the level claimed under dependence conditions. On the other hand, the adaptive Benjamini-Hochberg procedure seems to be most robust while remaining conservative. Finally, the estimates of the number of true null hypotheses under various dependence conditions are variable. Conclusion: We discuss a new method for efficient guided simulation of dependent data, which satisfies imposed network constraints as conditional independence structures. Our simulation set-up allows for a structural study of the effect of dependencies on multiple testing criteria and is useful for testing a potentially new method of π0 or FDR estimation in a dependency context.
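For reference, the (non-adaptive) Benjamini-Hochberg step-up procedure evaluated above can be written in a few lines; this is the standard textbook form, not the paper's simulation code:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Step-up BH: reject the k smallest p-values, where k is the largest
    rank with p_(k) <= (k/m) * alpha. Returns per-hypothesis flags."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k = rank              # step-up: keep the LARGEST passing rank
    rejected = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            rejected[i] = True
    return rejected

# the step-up effect: 0.04 alone fails its threshold (1/2)*0.05 = 0.025,
# but every rank at or below the largest passing rank is rejected
print(benjamini_hochberg([0.04, 0.03], alpha=0.05))   # [True, True]
```

Under independence this controls the FDR at level alpha; the abstract's point is precisely that this guarantee, and especially that of less conservative methods like SAM and the q-value, degrades under strong dependence among the variables.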
Dressed skeleton expansion and the coupling scale ambiguity problem
International Nuclear Information System (INIS)
Lu, Hung Jung.
1992-09-01
Perturbative expansions in quantum field theories are usually expressed in powers of a coupling constant. In principle, the infinite sum of the expansion series is independent of the renormalization scale of the coupling constant. In practice, there is a remnant dependence of the truncated series on the renormalization scale. This scale ambiguity can severely restrict the predictive power of theoretical calculations. The dressed skeleton expansion is developed as a calculational method which avoids the coupling scale ambiguity problem. In this method, physical quantities are expressed as functional expansions in terms of a coupling vertex function. The arguments of the vertex function are given by the physical momenta of each process. These physical momenta effectively replace the unspecified renormalization scale and eliminate the ambiguity problem. This method is applied to various field theoretical models and its main features and limitations are explored. For quantum chromodynamics, an expression for the running coupling constant of the three-gluon vertex is obtained. The effective coupling scale of this vertex is shown to be essentially given by μ² ∼ Q²min Q²med/Q²max, where Q²min, Q²med and Q²max are respectively the smallest, the next-to-smallest and the largest scale among the three gluon virtualities. This functional form suggests that the three-gluon vertex becomes non-perturbative at asymmetric momentum configurations. Implications for four-jet physics are discussed
Bonus algorithm for large scale stochastic nonlinear programming problems
Diwekar, Urmila
2015-01-01
This book presents the details of the BONUS algorithm and its real world applications in areas like sensor placement in large scale drinking water networks, sensor placement in advanced power systems, water management in power systems, and capacity expansion of energy systems. A generalized method for stochastic nonlinear programming, based on a sampling based approach for uncertainty analysis and statistical reweighting to obtain probability information, is demonstrated in this book. Stochastic optimization problems are difficult to solve since they involve dealing with optimization and uncertainty loops. There are two fundamental approaches used to solve such problems: the first is decomposition techniques, and the second identifies problem specific structures and transforms the problem into a deterministic nonlinear programming problem. These techniques have significant limitations on either the objective function type or the underlying distributions for the uncertain variables. Moreover, these ...
Problems of allometric scaling analysis : Examples from mammalian reproductive biology
Martin, RD; Genoud, M; Hemelrijk, CK
Biological scaling analyses employing the widely used bivariate allometric model are beset by at least four interacting problems: (1) choice of an appropriate best-fit line with due attention to the influence of outliers; (2) objective recognition of divergent subsets in the data (allometric
Solving Large Scale Crew Scheduling Problems in Practice
E.J.W. Abbink (Erwin); L. Albino; T.A.B. Dollevoet (Twan); D. Huisman (Dennis); J. Roussado; R.L. Saldanha
2010-01-01
textabstractThis paper deals with large-scale crew scheduling problems arising at the Dutch railway operator, Netherlands Railways (NS). NS operates about 30,000 trains a week. All these trains need a driver and a certain number of guards. Some labor rules restrict the duties of a certain crew base
Sparing land for biodiversity at multiple spatial scales
Directory of Open Access Journals (Sweden)
Johan eEkroos
2016-01-01
A common approach to the conservation of farmland biodiversity and the promotion of multifunctional landscapes, particularly in landscapes containing only small remnants of non-crop habitats, has been to maintain landscape heterogeneity and reduce land-use intensity. In contrast, it has recently been shown that devoting specific areas of non-crop habitats to conservation, segregated from high-yielding farmland (‘land sparing’), can more effectively conserve biodiversity than promoting low-yielding, less intensively managed farmland occupying larger areas (‘land sharing’). In the present paper we suggest that the debate over the relative merits of land sparing or land sharing is partly blurred by the differing spatial scales at which it is suggested that land sparing should be applied. We argue that there is no single correct spatial scale for segregating biodiversity protection and commodity production in multifunctional landscapes. Instead we propose an alternative conceptual construct, which we call ‘multiple-scale land sparing’, targeting biodiversity and ecosystem services in transformed landscapes. We discuss how multiple-scale land sparing may overcome the apparent dichotomy between land sharing and land sparing and help to find acceptable compromises that conserve biodiversity and landscape multifunctionality.
Rasch analysis of the Multiple Sclerosis Impact Scale (MSIS-29)
Directory of Open Access Journals (Sweden)
Misajon Rose
2009-06-01
Full Text Available Abstract Background Multiple Sclerosis (MS) is a degenerative neurological disease that causes impairments, including spasticity, pain, fatigue, and bladder dysfunction, which negatively impact on quality of life. The Multiple Sclerosis Impact Scale (MSIS-29) is a disease-specific health-related quality of life (HRQoL) instrument, developed using the patient's perspective on disease impact. It consists of two subscales assessing the physical (MSIS-29-PHYS) and psychological (MSIS-29-PSYCH) impact of MS. Although previous studies have found support for the psychometric properties of the MSIS-29 using traditional methods of scale evaluation, the scale has not been subjected to a detailed Rasch analysis. Therefore, the objective of this study was to use Rasch analysis to assess the internal validity of the scale, and its response format, item fit, targeting, internal consistency and dimensionality. Methods Ninety-two persons with definite MS residing in the community were recruited from a tertiary hospital database. Patients completed the MSIS-29 as part of a larger study. Rasch analysis was undertaken to assess the psychometric properties of the MSIS-29. Results Rasch analysis showed overall support for the psychometric properties of the two MSIS-29 subscales; however, it was necessary to reduce the response format of the MSIS-29-PHYS to a 3-point response scale. Both subscales were unidimensional, had good internal consistency, and were free from item bias for sex and age. Dimensionality testing indicated it was not appropriate to combine the two subscales to form a total MSIS score. Conclusion In this first study to use Rasch analysis to fully assess the psychometric properties of the MSIS-29, support was found for the two subscales but not for the use of the total scale. Further use of Rasch analysis on the MSIS-29 in larger and broader samples is recommended to confirm these findings.
Rasch analysis of the Multiple Sclerosis Impact Scale (MSIS-29)
Ramp, Melina; Khan, Fary; Misajon, Rose Anne; Pallant, Julie F
2009-01-01
Background Multiple Sclerosis (MS) is a degenerative neurological disease that causes impairments, including spasticity, pain, fatigue, and bladder dysfunction, which negatively impact on quality of life. The Multiple Sclerosis Impact Scale (MSIS-29) is a disease-specific health-related quality of life (HRQoL) instrument, developed using the patient's perspective on disease impact. It consists of two subscales assessing the physical (MSIS-29-PHYS) and psychological (MSIS-29-PSYCH) impact of MS. Although previous studies have found support for the psychometric properties of the MSIS-29 using traditional methods of scale evaluation, the scale has not been subjected to a detailed Rasch analysis. Therefore, the objective of this study was to use Rasch analysis to assess the internal validity of the scale, and its response format, item fit, targeting, internal consistency and dimensionality. Methods Ninety-two persons with definite MS residing in the community were recruited from a tertiary hospital database. Patients completed the MSIS-29 as part of a larger study. Rasch analysis was undertaken to assess the psychometric properties of the MSIS-29. Results Rasch analysis showed overall support for the psychometric properties of the two MSIS-29 subscales; however, it was necessary to reduce the response format of the MSIS-29-PHYS to a 3-point response scale. Both subscales were unidimensional, had good internal consistency, and were free from item bias for sex and age. Dimensionality testing indicated it was not appropriate to combine the two subscales to form a total MSIS score. Conclusion In this first study to use Rasch analysis to fully assess the psychometric properties of the MSIS-29, support was found for the two subscales but not for the use of the total scale. Further use of Rasch analysis on the MSIS-29 in larger and broader samples is recommended to confirm these findings. PMID:19545445
Understanding hydraulic fracturing: a multi-scale problem
Hyman, J. D.; Jiménez-Martínez, J.; Viswanathan, H. S.; Carey, J. W.; Porter, M. L.; Rougier, E.; Karra, S.; Kang, Q.; Frash, L.; Chen, L.; Lei, Z.; O’Malley, D.; Makedonska, N.
2016-01-01
Despite the impact that hydraulic fracturing has had on the energy sector, the physical mechanisms that control its efficiency and environmental impacts remain poorly understood in part because the length scales involved range from nanometres to kilometres. We characterize flow and transport in shale formations across and between these scales using integrated computational, theoretical and experimental methods. At the field scale, we use discrete fracture network modelling to simulate production of a hydraulically fractured well from a fracture network that is based on the site characterization of a shale gas reservoir. At the core scale, we use triaxial fracture experiments and a finite-discrete element model to study dynamic fracture/crack propagation in low permeability shale. We use lattice Boltzmann pore-scale simulations and microfluidic experiments in both synthetic and shale rock micromodels to study pore-scale flow and transport phenomena, including multi-phase flow and fluids mixing. A mechanistic description and integration of these multiple scales is required for accurate predictions of production and the eventual optimization of hydrocarbon extraction from unconventional reservoirs. Finally, we discuss the potential of CO2 as an alternative working fluid, both in fracturing and re-stimulating activities, beyond its environmental advantages. This article is part of the themed issue ‘Energy and the subsurface’. PMID:27597789
Multiple-scale approach for the expansion scaling of superfluid quantum gases
International Nuclear Information System (INIS)
Egusquiza, I. L.; Valle Basagoiti, M. A.; Modugno, M.
2011-01-01
We present a general method, based on a multiple-scale approach, for deriving the perturbative solutions of the scaling equations governing the expansion of superfluid ultracold quantum gases released from elongated harmonic traps. We discuss how to treat the secular terms appearing in the usual naive expansion in the trap asymmetry parameter ε and calculate the next-to-leading correction for the asymptotic aspect ratio, with significant improvement over the previous proposals.
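The multiple-scale technique invoked in this abstract can be summarized in its generic textbook (two-timing) form; the following is a sketch of the standard ansatz in the small parameter ε, not the paper's specific scaling equations:

```latex
x(t;\epsilon) = x_0(T_0,T_1) + \epsilon\, x_1(T_0,T_1) + O(\epsilon^2),
\qquad T_0 = t, \quad T_1 = \epsilon t,
\qquad \frac{d}{dt} = \partial_{T_0} + \epsilon\, \partial_{T_1}.
```

Substituting this expansion and collecting powers of ε, the slow dependence of x_0 on T_1 is fixed by requiring that resonant (secular) forcing terms in the x_1 equation vanish, which is precisely how the secular terms of the naive expansion are tamed.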
Estimating the Proportion of True Null Hypotheses in Multiple Testing Problems
Directory of Open Access Journals (Sweden)
Oluyemi Oyeniran
2016-01-01
Full Text Available The problem of estimating the proportion, π0, of the true null hypotheses in a multiple testing problem is important in cases where large-scale parallel hypothesis tests are performed independently. While π0 is a quantity of interest in its own right in applications, the estimate of π0 can be used for assessing or controlling an overall false discovery rate. In this article, we develop an innovative nonparametric maximum likelihood approach to estimate π0. The nonparametric likelihood is proposed to be restricted to multinomial models and an EM algorithm is also developed to approximate the estimate of π0. Simulation studies show that the proposed method outperforms other existing methods. Using experimental microarray datasets, we demonstrate that the new method provides a satisfactory estimate in practice.
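The π0-estimation task described above can be illustrated with a much simpler baseline than the authors' nonparametric maximum-likelihood method: Storey's threshold estimator. This is a hedged sketch of the generic technique (the paper's NPMLE/EM approach is not reproduced here):

```python
import numpy as np

def storey_pi0(pvals, lam=0.5):
    """Storey-type estimate of pi0, the proportion of true nulls.

    Null p-values are Uniform(0,1), so the fraction of p-values above
    the cutoff `lam` is roughly pi0 * (1 - lam).
    """
    pvals = np.asarray(pvals, dtype=float)
    return min(1.0, float(np.mean(pvals > lam)) / (1.0 - lam))
```

With mostly-null data the estimate is close to 1; adding a block of very small p-values (true effects) pulls it down proportionally.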
Multiple time-scale methods in particle simulations of plasmas
International Nuclear Information System (INIS)
Cohen, B.I.
1985-01-01
This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit moment-equation method, the direct implicit method, orbit averaging, and subcycling.
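The subcycling idea surveyed above, advancing fast species with a smaller timestep than slow species within one macro-step, can be sketched outside any plasma context. The toy symplectic-Euler push and all names here are illustrative assumptions, not the surveyed production schemes:

```python
def subcycled_push(x_slow, v_slow, x_fast, v_fast, accel, dt, nsub):
    """One macro-step dt: the slow species takes a single kick-drift step,
    while the fast species takes nsub smaller steps of dt/nsub."""
    v_slow += accel(x_slow) * dt          # kick (slow species)
    x_slow += v_slow * dt                 # drift (slow species)
    h = dt / nsub
    for _ in range(nsub):                 # subcycle the fast dynamics
        v_fast += accel(x_fast) * h
        x_fast += v_fast * h
    return x_slow, v_slow, x_fast, v_fast
```

For a harmonic restoring force the scheme stays stable and bounded even though the two species see very different effective timesteps, which is the point of subcycling.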
Efficient Selection of Multiple Objects on a Large Scale
DEFF Research Database (Denmark)
Stenholt, Rasmus
2012-01-01
The task of multiple object selection (MOS) in immersive virtual environments is important and still largely unexplored. The difficulty of efficient MOS increases with the number of objects to be selected. E.g. in small-scale MOS, only a few objects need to be simultaneously selected. This may...... consuming. Instead, we have implemented and tested two of the existing approaches to 3-D MOS, a brush and a lasso, as well as a new technique, a magic wand, which automatically selects objects based on local proximity to other objects. In a formal user evaluation, we have studied how the performance...
Curvaton paradigm can accommodate multiple low inflation scales
International Nuclear Information System (INIS)
Matsuda, Tomohiro
2004-01-01
Recent arguments show that some curvaton field may generate the cosmological curvature perturbation. As the curvaton is independent of the inflaton field, there is a hope that the fine tunings of inflation models can be cured by the curvaton scenario. More recently, however, Lyth discussed that there is a strong bound for the Hubble parameter during inflation even if one assumes the curvaton scenario. Although the most serious constraint was evaded, the bound seems rather crucial for many models of a low inflation scale. In this paper we try to remove the constraint. We show that the bound is drastically modified if there were multiple stages of inflation. (letter to the editor)
A Multiphysics Framework to Learn and Predict in Presence of Multiple Scales
Tomin, P.; Lunati, I.
2015-12-01
Modeling complex phenomena in the subsurface remains challenging due to the presence of multiple interacting scales, which can make it impossible to focus on purely macroscopic phenomena (relevant in most applications) and neglect the processes at the micro-scale. We present and discuss a general framework that allows us to deal with the situation in which the lack of scale separation requires the combined use of different descriptions at different scales (for instance, a pore-scale description at the micro-scale and a Darcy-like description at the macro-scale) [1,2]. The method is based on conservation principles and constructs the macro-scale problem by numerical averaging of micro-scale balance equations. By employing spatiotemporal adaptive strategies, this approach can efficiently solve large-scale problems [2,3]. In addition, being based on a numerical volume-averaging paradigm, it offers a tool to illuminate how macroscopic equations emerge from microscopic processes, to better understand the meaning of microscopic quantities, and to investigate the validity of the assumptions routinely used to construct the macro-scale problems. [1] Tomin, P., and I. Lunati, A Hybrid Multiscale Method for Two-Phase Flow in Porous Media, Journal of Computational Physics, 250, 293-307, 2013 [2] Tomin, P., and I. Lunati, Local-global splitting and spatiotemporal-adaptive Multiscale Finite Volume Method, Journal of Computational Physics, 280, 214-231, 2015 [3] Tomin, P., and I. Lunati, Spatiotemporal adaptive multiphysics simulations of drainage-imbibition cycles, Computational Geosciences, 2015 (under review)
The renormalization scale-setting problem in QCD
Energy Technology Data Exchange (ETDEWEB)
Wu, Xing-Gang [Chongqing Univ. (China); Brodsky, Stanley J. [SLAC National Accelerator Lab., Menlo Park, CA (United States); Mojaza, Matin [SLAC National Accelerator Lab., Menlo Park, CA (United States); Univ. of Southern Denmark, Odense (Denmark)
2013-09-01
A key problem in making precise perturbative QCD predictions is to set the proper renormalization scale of the running coupling. The conventional scale-setting procedure assigns an arbitrary range and an arbitrary systematic error to fixed-order pQCD predictions. In fact, this ad hoc procedure gives results which depend on the choice of the renormalization scheme, and it is in conflict with the standard scale-setting procedure used in QED. Predictions for physical results should be independent of the choice of the scheme or other theoretical conventions. We review current ideas and points of view on how to deal with the renormalization scale ambiguity and show how to obtain renormalization scheme- and scale-independent estimates. We begin by introducing the renormalization group (RG) equation and an extended version, which expresses the invariance of physical observables under both the renormalization scheme and scale-parameter transformations. The RG equation provides a convenient way for estimating the scheme- and scale-dependence of a physical process. We then discuss self-consistency requirements of the RG equations, such as reflexivity, symmetry, and transitivity, which must be satisfied by a scale-setting method. Four typical scale setting methods suggested in the literature, i.e., the Fastest Apparent Convergence (FAC) criterion, the Principle of Minimum Sensitivity (PMS), the Brodsky–Lepage–Mackenzie method (BLM), and the Principle of Maximum Conformality (PMC), are introduced. Basic properties and their applications are discussed. We pay particular attention to the PMC, which satisfies all of the requirements of RG invariance. Using the PMC, all non-conformal terms associated with the β-function in the perturbative series are summed into the running coupling, and one obtains a unique, scale-fixed, scheme-independent prediction at any finite order. The PMC provides the principle underlying the BLM method, since it gives the general rule for extending
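The renormalization group equation referred to in this abstract has, in a generic mass-independent scheme, the following standard form (conventions vary; this is a sketch of the textbook equation, not the review's extended multi-scheme version):

```latex
\mu^{2}\,\frac{\partial a_s}{\partial \mu^{2}} \;=\; \beta(a_s)
\;=\; -\,a_s^{2}\left(\beta_0 + \beta_1\, a_s + \beta_2\, a_s^{2} + \cdots\right),
\qquad a_s \equiv \frac{\alpha_s(\mu)}{4\pi},
```

where only the first two coefficients β0 and β1 are scheme independent; the scheme and scale dependence entering at higher orders is what the scale-setting methods mentioned above (FAC, PMS, BLM, PMC) address in different ways.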
Rank Dynamics of Word Usage at Multiple Scales
Directory of Open Access Journals (Sweden)
José A. Morales
2018-05-01
Full Text Available The recent dramatic increase in online data availability has allowed researchers to explore human culture with unprecedented detail, such as the growth and diversification of language. In particular, it provides statistical tools to explore whether word use is similar across languages, and if so, whether these generic features appear at different scales of language structure. Here we use the Google Books N-grams dataset to analyze the temporal evolution of word usage in several languages. We apply measures proposed recently to study rank dynamics, such as the diversity of N-grams in a given rank, the probability that an N-gram changes rank between successive time intervals, the rank entropy, and the rank complexity. Using different methods, results show that there are generic properties for different languages at different scales, such as a core of words necessary to minimally understand a language. We also propose a null model to explore the relevance of linguistic structure across multiple scales, concluding that N-gram statistics cannot be reduced to word statistics. We expect our results to be useful in improving text prediction algorithms, as well as in shedding light on the large-scale features of language use, beyond linguistic and cultural differences across human populations.
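One of the rank-dynamics measures named above, the probability that an N-gram changes rank between successive time intervals, can be sketched directly on raw frequency counts. The helper names below are illustrative assumptions, not the authors' code:

```python
def rank_of(freqs):
    """Map each item to its frequency rank (1 = most frequent)."""
    ordered = sorted(freqs, key=freqs.get, reverse=True)
    return {w: i + 1 for i, w in enumerate(ordered)}

def rank_change_prob(freqs_t1, freqs_t2):
    """Fraction of items present in both periods whose rank differs."""
    r1, r2 = rank_of(freqs_t1), rank_of(freqs_t2)
    shared = set(r1) & set(r2)
    if not shared:
        return 0.0
    return sum(r1[w] != r2[w] for w in shared) / len(shared)
```

Applied to yearly N-gram frequency tables, this yields one point of the rank-dynamics curves studied in the paper.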
Multiple time scales of adaptation in auditory cortex neurons.
Ulanovsky, Nachum; Las, Liora; Farkas, Dina; Nelken, Israel
2004-11-17
Neurons in primary auditory cortex (A1) of cats show strong stimulus-specific adaptation (SSA). In probabilistic settings, in which one stimulus is common and another is rare, responses to common sounds adapt more strongly than responses to rare sounds. This SSA could be a correlate of auditory sensory memory at the level of single A1 neurons. Here we studied adaptation in A1 neurons, using three different probabilistic designs. We showed that SSA has several time scales concurrently, spanning many orders of magnitude, from hundreds of milliseconds to tens of seconds. Similar time scales are known for the auditory memory span of humans, as measured both psychophysically and using evoked potentials. A simple model, with linear dependence on both short-term and long-term stimulus history, provided a good fit to A1 responses. Auditory thalamus neurons did not show SSA, and their responses were poorly fitted by the same model. In addition, SSA increased the proportion of failures in the responses of A1 neurons to the adapting stimulus. Finally, SSA caused a bias in the neuronal responses to unbiased stimuli, enhancing the responses to eccentric stimuli. Therefore, we propose that a major function of SSA in A1 neurons is to encode auditory sensory memory on multiple time scales. This SSA might play a role in stream segregation and in binding of auditory objects over many time scales, a property that is crucial for processing of natural auditory scenes in cats and of speech and music in humans.
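The two-time-scale linear-history model fitted to the A1 responses can be sketched generically: two leaky history traces, one fast and one slow, each suppress the response linearly. The parameter values and the rectified-linear form below are illustrative assumptions, not the fitted model:

```python
def ssa_response(stim, tau_short=0.5, tau_long=10.0,
                 w_short=0.6, w_long=0.3, dt=0.1):
    """Toy response to a stimulus train with two adaptation time scales.

    Each stimulus increments a fast and a slow leaky trace; the response
    is reduced linearly by both traces (rectified at zero).
    """
    h_s = h_l = 0.0
    out = []
    for s in stim:
        out.append(s * max(0.0, 1.0 - w_short * h_s - w_long * h_l))
        h_s += (-h_s / tau_short) * dt + s   # fast trace: decays quickly
        h_l += (-h_l / tau_long) * dt + s    # slow trace: decays slowly
    return out
```

Repeating the same stimulus produces the monotonically adapting response characteristic of SSA, while the slow trace carries history over many seconds.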
Classification of Farmland Landscape Structure in Multiple Scales
Jiang, P.; Cheng, Q.; Li, M.
2017-12-01
Farmland is one of the basic terrestrial resources that support the development and survival of human beings and thus plays a crucial role in the national security of every country. Pattern change is the intuitively spatial representation of the scale and quality variation of farmland. Through the characteristic development of spatial shapes as well as through changes in system structures, functions and so on, farmland landscape patterns may indicate the landscape health level. Currently, it is still difficult to perform positioning analyses of landscape pattern changes that reflect the landscape structure variations of farmland with an index model. Depending on a number of spatial properties such as locations and adjacency relations, distance decay, fringe effect, and on the model of patch-corridor-matrix that is applied, this study defines a type system of farmland landscape structure on the national, provincial, and city levels. According to such a definition, the classification model of farmland landscape-structure type at the pixel scale is developed and validated based on mathematical-morphology concepts and on spatial-analysis methods. Then, the laws that govern farmland landscape-pattern change in multiple scales are analyzed from the perspectives of spatial heterogeneity, spatio-temporal evolution, and function transformation. The result shows that the classification model of farmland landscape-structure type can reflect farmland landscape-pattern change and its effects on farmland production function. Moreover, farmland landscape change in different scales displayed significant disparity in zonality, both within specific regions and in urban-rural areas.
Problems of allometric scaling analysis: examples from mammalian reproductive biology.
Martin, Robert D; Genoud, Michel; Hemelrijk, Charlotte K
2005-05-01
Biological scaling analyses employing the widely used bivariate allometric model are beset by at least four interacting problems: (1) choice of an appropriate best-fit line with due attention to the influence of outliers; (2) objective recognition of divergent subsets in the data (allometric grades); (3) potential restrictions on statistical independence resulting from phylogenetic inertia; and (4) the need for extreme caution in inferring causation from correlation. A new non-parametric line-fitting technique has been developed that eliminates requirements for normality of distribution, greatly reduces the influence of outliers and permits objective recognition of grade shifts in substantial datasets. This technique is applied in scaling analyses of mammalian gestation periods and of neonatal body mass in primates. These analyses feed into a re-examination, conducted with partial correlation analysis, of the maternal energy hypothesis relating to mammalian brain evolution, which suggests links between body size and brain size in neonates and adults, gestation period and basal metabolic rate. Much has been made of the potential problem of phylogenetic inertia as a confounding factor in scaling analyses. However, this problem may be less severe than suspected earlier because nested analyses of variance conducted on residual variation (rather than on raw values) reveals that there is considerable variance at low taxonomic levels. In fact, limited divergence in body size between closely related species is one of the prime examples of phylogenetic inertia. One common approach to eliminating perceived problems of phylogenetic inertia in allometric analyses has been calculation of 'independent contrast values'. It is demonstrated that the reasoning behind this approach is flawed in several ways. Calculation of contrast values for closely related species of similar body size is, in fact, highly questionable, particularly when there are major deviations from the best
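An outlier-resistant, non-parametric line fit of the kind described can be illustrated with the classical Theil-Sen estimator; this is a standard technique shown for orientation, not the authors' new method:

```python
import numpy as np

def theil_sen(x, y):
    """Theil-Sen line fit: slope is the median of all pairwise slopes,
    intercept the median of the residuals y - slope*x."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x))
              for j in range(i + 1, len(x))
              if x[j] != x[i]]
    m = float(np.median(slopes))
    b = float(np.median(y - m * x))
    return m, b
```

Because the slope is a median over all point pairs, a single gross outlier leaves the fitted line essentially unchanged, unlike ordinary least squares.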
Validity and Reliability of the Turkish Version of the Monitoring My Multiple Sclerosis Scale.
Polat, Cansu; Tülek, Zeliha; Kürtüncü, Murat; Eraksoy, Mefkure
2017-06-01
This research was conducted to adapt the Monitoring My Multiple Sclerosis (MMMS) scale, which is a scale used for self-evaluation by multiple sclerosis (MS) patients of their own health and quality of life, to Turkish and to determine the psychometric properties of the scale. The methodological research was conducted in the outpatient MS clinic of a university hospital between January and September 2013. The sample in this study consisted of 140 patients aged above 18 who had a diagnosis of definite MS. Patients who experienced attacks in the previous month or had any serious medical problems other than MS were not included in the group. The linguistic validity of MMMS was tested by a backward-forward translation method and an expert panel. Reliability analysis was performed using test-retest correlations, item-total correlations, and internal consistency analysis. Confirmatory factor analysis and concurrent validity were used to determine the construct validity. The Multiple Sclerosis Quality of Life-54 instrument was used to determine concurrent validity and the Expanded Disability Status Scale, Hospital Anxiety and Depression Scale, and Mini Mental State Examination were used for further determination of the construct validity. We determined that the scale consisted of four factors with loadings ranging from 0.49 to 0.79. The correlation coefficients of the scale were determined to be between 0.47 and 0.76 for item-total score and between 0.60 and 0.81 for items and subscale scores. Cronbach's alpha coefficient was determined to be 0.94 for the entire scale and between 0.64 and 0.89 for the subscales. Test-retest correlations were significant. Correlations between MMMS and other scales were also found to be significant. The Turkish MMMS provides adequate validity and reliability for assessing the impact of MS on quality of life and health status in patients.
Integral criteria for large-scale multiple fingerprint solutions
Ushmaev, Oleg S.; Novikov, Sergey O.
2004-08-01
We propose the definition and analysis of the optimal integral similarity score criterion for large-scale multimodal civil ID systems. Firstly, the general properties of score distributions for genuine and impostor matches for different systems and input devices are investigated. The empirical statistics were taken from real biometric tests. Then we carry out the analysis of simultaneous score distributions for a number of combined biometric tests, primarily for multiple fingerprint solutions. The explicit and approximate relations for the optimal integral score, which provides the least value of the FRR while the FAR is predefined, have been obtained. The results of real multiple fingerprint tests show good correspondence with the theoretical results over a wide range of False Acceptance and False Rejection Rates.
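The trade-off described above, fixing the false acceptance rate and minimizing the false rejection rate, can be illustrated empirically on raw score samples. This is a hedged sketch of the generic criterion, not the paper's analytic optimal integral score:

```python
import numpy as np

def frr_at_far(genuine, impostor, far_target=0.01):
    """Pick the lowest accept-threshold keeping the empirical FAR at or
    below far_target (assuming untied scores), and report the FRR there."""
    imp = np.sort(np.asarray(impostor, dtype=float))
    n = imp.size
    k = int(np.floor(far_target * n))          # at most k impostors may pass
    thr = imp[n - k] if k > 0 else imp[-1] + np.finfo(float).eps
    far = float(np.mean(imp >= thr))           # fraction of impostors accepted
    frr = float(np.mean(np.asarray(genuine, dtype=float) < thr))
    return thr, far, frr
```

Combining scores from multiple fingers shifts the genuine distribution away from the impostor one, lowering the FRR attainable at the same FAR.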
Leadership solves collective action problems in small-scale societies
Glowacki, Luke; von Rueden, Chris
2015-01-01
Observation of leadership in small-scale societies offers unique insights into the evolution of human collective action and the origins of sociopolitical complexity. Using behavioural data from the Tsimane forager-horticulturalists of Bolivia and Nyangatom nomadic pastoralists of Ethiopia, we evaluate the traits of leaders and the contexts in which leadership becomes more institutional. We find that leaders tend to have more capital, in the form of age-related knowledge, body size or social connections. These attributes can reduce the costs leaders incur and increase the efficacy of leadership. Leadership becomes more institutional in domains of collective action, such as resolution of intragroup conflict, where collective action failure threatens group integrity. Together these data support the hypothesis that leadership is an important means by which collective action problems are overcome in small-scale societies. PMID:26503683
Leadership solves collective action problems in small-scale societies.
Glowacki, Luke; von Rueden, Chris
2015-12-05
Observation of leadership in small-scale societies offers unique insights into the evolution of human collective action and the origins of sociopolitical complexity. Using behavioural data from the Tsimane forager-horticulturalists of Bolivia and Nyangatom nomadic pastoralists of Ethiopia, we evaluate the traits of leaders and the contexts in which leadership becomes more institutional. We find that leaders tend to have more capital, in the form of age-related knowledge, body size or social connections. These attributes can reduce the costs leaders incur and increase the efficacy of leadership. Leadership becomes more institutional in domains of collective action, such as resolution of intragroup conflict, where collective action failure threatens group integrity. Together these data support the hypothesis that leadership is an important means by which collective action problems are overcome in small-scale societies. © 2015 The Author(s).
Small Scale Variability and the Problem of Data Validation
Sparling, L. C.; Avallone, L.; Einaudi, Franco (Technical Monitor)
2000-01-01
Numerous measurements taken with a variety of airborne, balloon borne and ground based instruments over the past decade have revealed a complex multiscaled 3D structure in both chemical and dynamical fields in the upper troposphere/lower stratosphere. The variability occurs on scales that are well below the resolution of satellite measurements, leading to problems in measurement validation. We discuss some statistical ideas that can shed some light on the contribution of the natural variability to the inevitable differences in correlative measurements that are not strictly colocated, or that have different spatial resolution.
Analogue of Pontryagin's maximum principle for multiple integrals minimization problems
Mikhail, Zelikin
2016-01-01
The theorem like Pontryagin's maximum principle for multiple integrals is proved. Unlike the usual maximum principle, the maximum should be taken not over all matrices, but only on matrices of rank one. Examples are given.
Solving Multiple Timetabling Problems at Danish High Schools
DEFF Research Database (Denmark)
Kristiansen, Simon
name; Elective Course Student Sectioning. The problem is solved using ALNS and solutions are proven to be close to optimum. The algorithm has been implemented and made available for the majority of the high schools in Denmark. The second Student Sectioning problem presented is the sectioning of each...... high schools. Two types of consultations are presented; the Parental Consultation Timetabling Problem (PCTP) and the Supervisor Consultation Timetabling Problem (SCTP). One mathematical model containing both consultation types has been created and solved using an ALNS approach. The received solutions...... problems as mathematical models and solve them using operational research techniques. Two of the models and the suggested solution methods have resulted in implementations in an actual decision support software, and are hence available for the majority of the high schools in Denmark. These implementations...
Problems of large-scale vertically-integrated aquaculture
Energy Technology Data Exchange (ETDEWEB)
Webber, H H; Riordan, P F
1976-01-01
The problems of vertically-integrated aquaculture are outlined; they are concerned with: species limitations (in the market, biological and technological); site selection, feed, manpower needs, and legal, institutional and financial requirements. The gaps in understanding of, and the constraints limiting, large-scale aquaculture are listed. Future action is recommended with respect to: types and diversity of species to be cultivated, marketing, biotechnology (seed supply, disease control, water quality and concerted effort), siting, feed, manpower, legal and institutional aids (granting of water rights, grants, tax breaks, duty-free imports, etc.), and adequate financing. The lack of hard data based on experience suggests that large-scale vertically-integrated aquaculture is a high-risk enterprise, and with the high capital investment required, banks and funding institutions are wary of supporting it. Investment in pilot projects is suggested to demonstrate that large-scale aquaculture can be a fully functional and successful business. Construction and operation of such pilot farms is judged to be in the interests of both the public and private sector.
Multiple-instance learning as a classifier combining problem
DEFF Research Database (Denmark)
Li, Yan; Tax, David M. J.; Duin, Robert P. W.
2013-01-01
In multiple-instance learning (MIL), an object is represented as a bag consisting of a set of feature vectors called instances. In the training set, the labels of bags are given, while the uncertainty comes from the unknown labels of instances in the bags. In this paper, we study MIL with the ass...
A Novel Efficient Graph Model for the Multiple Longest Common Subsequences (MLCS) Problem
Directory of Open Access Journals (Sweden)
Zhan Peng
2017-08-01
Full Text Available Searching for the Multiple Longest Common Subsequences (MLCS) of multiple sequences is a classical NP-hard problem, which has been used in many applications. One of the most effective exact approaches for the MLCS problem is based on the dominant point graph, which is a kind of directed acyclic graph (DAG). However, the time and space efficiency of the leading dominant point graph based approaches is still unsatisfactory: constructing the dominant point graph used by these approaches requires a huge amount of time and space, which hinders the application of these approaches to large-scale and long sequences. To address this issue, in this paper, we propose a new time and space efficient graph model called the Leveled-DAG for the MLCS problem. The Leveled-DAG can timely eliminate all the nodes in the graph that cannot contribute to the construction of the MLCS during construction. At any moment, only the current level and some previously generated nodes in the graph need to be kept in memory, which can greatly reduce the memory consumption. Also, the final graph contains only one node in which all of the wanted MLCS are saved; thus, no additional operations for searching the MLCS are needed. The experiments are conducted on real biological sequences with different numbers and lengths respectively, and the proposed algorithm is compared with three state-of-the-art algorithms. The experimental results show that the time and space needed for the Leveled-DAG approach are smaller than those for the compared algorithms, especially on large-scale and long sequences.
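For contrast with the graph-based approach described above, the brute-force dynamic program for the MLCS problem keeps one cursor per sequence; its state space is the product of the sequence lengths, which is exactly why dominant-point and Leveled-DAG methods are needed for realistic inputs. A minimal sketch (not the paper's algorithm):

```python
from functools import lru_cache

def mlcs_length(seqs):
    """Length of the longest common subsequence of several sequences,
    by dynamic programming over one cursor per sequence (exponential
    in the number of sequences)."""
    @lru_cache(maxsize=None)
    def rec(idx):
        if any(i == len(s) for i, s in zip(idx, seqs)):
            return 0
        heads = {s[i] for i, s in zip(idx, seqs)}
        if len(heads) == 1:               # all cursors on the same symbol
            return 1 + rec(tuple(i + 1 for i in idx))
        # otherwise advance each cursor in turn and keep the best result
        return max(rec(tuple(i + (j == t) for j, i in enumerate(idx)))
                   for t in range(len(seqs)))
    return rec(tuple(0 for _ in seqs))
```

Even for three moderate-length sequences the memoization table grows cubically, and in general exponentially with the number of sequences, motivating the pruned graph representations compared in the paper.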
Enabling High Performance Large Scale Dense Problems through KBLAS
Abdelfattah, Ahmad
2014-05-04
KBLAS (KAUST BLAS) is a small library that provides highly optimized BLAS routines on systems accelerated with GPUs. KBLAS is entirely written in CUDA C, and targets NVIDIA GPUs with compute capability 2.0 (Fermi) or higher. The current focus is on level-2 BLAS routines, namely the general matrix-vector multiplication (GEMV) kernel, and the symmetric/hermitian matrix-vector multiplication (SYMV/HEMV) kernel. KBLAS provides these two kernels in all four precisions (s, d, c, and z), with support for multi-GPU systems. Through advanced optimization techniques that target latency hiding and pushing memory bandwidth to the limit, KBLAS outperforms state-of-the-art kernels by 20-90%. Competitors include CUBLAS-5.5, MAGMABLAS-1.4.0, and CULA R17. The SYMV/HEMV kernel from KBLAS has been adopted by NVIDIA, and should appear in CUBLAS-6.0. KBLAS has been used in large scale simulations of multi-object adaptive optics.
Fasni, Nurli; Fatimah, Siti; Yulanda, Syerli
2017-05-01
This research has several aims: to determine whether the mathematical problem-solving ability of students taught with a Multiple Intelligences-based teaching model is higher than that of students taught with cooperative learning; to measure the improvement in mathematical problem-solving ability under each of the two approaches; and to assess students' attitudes toward the Multiple Intelligences-based teaching model. The method employed is a quasi-experiment controlled by a pre-test and post-test. The population is all seventh-grade students of SMP Negeri 14 Bandung in the even term of 2013/2014, from which two classes were taken as samples: one class was taught using the Multiple Intelligences-based teaching model and the other using cooperative learning. The data were obtained from a test of mathematical problem solving, an attitude questionnaire, and observation. The results show that the problem-solving ability of students taught with the Multiple Intelligences-based model is higher than that of students taught with cooperative learning; that the problem-solving ability of both groups is at an intermediate level; and that students showed a positive attitude toward learning mathematics with the Multiple Intelligences-based model. As a recommendation for future research, the Multiple Intelligences-based teaching model can be tested on other subjects and other abilities.
Multiple scales and phases in discrete chains with application to folded proteins
Sinelnikova, A.; Niemi, A. J.; Nilsson, Johan; Ulybyshev, M.
2018-05-01
Chiral heteropolymers such as large globular proteins can simultaneously support multiple length scales. The interplay between the different scales brings about conformational diversity, determines the phase properties of the polymer chain, and governs the structure of the energy landscape. Most importantly, multiple scales produce complex dynamics that enable proteins to sustain live matter. However, at the moment there is incomplete understanding of how to identify and distinguish the various scales that determine the structure and dynamics of a complex protein. Here we address this impending problem. We develop a methodology with the potential to systematically identify different length scales, in the general case of a linear polymer chain. For this we introduce and analyze the properties of an order parameter that can both reveal the presence of different length scales and can also probe the phase structure. We first develop our concepts in the case of chiral homopolymers. We introduce a variant of Kadanoff's block-spin transformation to coarse grain piecewise linear chains, such as the C α backbone of a protein. We derive analytically, and then verify numerically, a number of properties that the order parameter can display, in the case of a chiral polymer chain. In particular, we propose that in the case of a chiral heteropolymer the order parameter can reveal traits of several different phases, contingent on the length scale at which it is scrutinized. We confirm that this is the case with crystallographic protein structures in the Protein Data Bank. Thus our results suggest relations between the scales, the phases, and the complexity of folding pathways.
On Distance Scale Bias due to Stellar Multiplicity and Associations
Anderson, Richard I.; Riess, Adam
2018-01-01
The Cepheid Period-luminosity relation (Leavitt Law) provides the most accurate footing for the cosmic distance scale (CDS). Recently, evidence has been presented that the value of the Hubble constant H0 measured via the cosmic distance scale differs by 3.4σ from the value inferred using Planck data assuming ΛCDM cosmology (Riess et al. 2016). This exciting result may point to missing physics in the cosmological model; however, before such a claim can be made, careful analyses must address possible systematics involved in the calibration of the CDS. A frequently made claim in the literature is that companion stars or cluster membership of Cepheids may bias the calibration of the CDS. To evaluate this claim, we have carried out the first detailed study of the impact of Cepheid multiplicity and cluster membership on the determination of H0. Using deep HST imaging of M31 we directly measured the mean photometric bias due to cluster companions on Cepheid-based distances. Together with the empirical determination of the frequency with which Cepheids appear in clusters we quantify the combined H0 bias from close associations to be approximately 0.3% (0.20 km s^-1 Mpc^-1) for the passbands commonly used. Thus, we demonstrate that stellar associations cannot explain the aforementioned discrepancy observed in H0 and do not prevent achieving the community goal of measuring H0 with an accuracy of 1%. We emphasize the subtle, but important, difference between systematics relevant for calibrating the Leavitt Law (achieving a better understanding of stellar physics) and for accurately calibrating the CDS (measuring H0).
Time and multiple objectives in scheduling and routing problems
Dabia, S.
2012-01-01
Many optimization problems encountered in practice are multi-objective by nature, i.e., different objectives are conflicting and equally important. Many times, it is not desirable to drop some of them or to optimize them in a composite single objective or hierarchical manner. Furthermore, cost
Neural Computations in a Dynamical System with Multiple Time Scales.
Mi, Yuanyuan; Lin, Xiaohan; Wu, Si
2016-01-01
Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what computational benefit the brain derives from such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use a continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered: persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.
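A minimal sketch of the synapse-level dynamics mentioned above (STF and STD) is the Tsodyks-Markram model. The Euler integration below and its parameter values are illustrative only, not the CANN model of the study.

```python
def tsodyks_markram(spike_times, U=0.2, tau_f=0.6, tau_d=0.1, dt=1e-3, T=1.0):
    """Short-term facilitation (u) and depression (x) at one synapse.

    Minimal Euler integration of the Tsodyks-Markram model with
    illustrative parameters (tau_f >> tau_d puts the synapse in a
    facilitation-dominated regime). Returns the transmitted efficacy
    u*x recorded at each incoming spike.
    """
    u, x = 0.0, 1.0
    spikes = {int(round(t / dt)) for t in spike_times}
    efficacies = []
    for step in range(int(T / dt)):
        # relaxation between spikes: u decays, resources x recover
        u += dt * (-u / tau_f)
        x += dt * ((1.0 - x) / tau_d)
        if step in spikes:
            u += U * (1.0 - u)        # facilitation: u jumps toward 1
            efficacies.append(u * x)  # transmitted efficacy
            x -= u * x                # depression: resources consumed
    return efficacies
```

With these time constants, successive spikes in a short burst are transmitted with growing efficacy, the kind of history-dependent response the abstract exploits for persistent activity and tracking.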
Child outcomes of home-visiting for families with complex and multiple problems
van Assen, Arend; Dickscheit, Jana; Post, Wendy; Grietens, Hans
2016-01-01
Introduction Families with complex and multiple problems are faced with an accumulation of problems across multiple areas of life. Furthermore, these families are often considered to be ‘difficult to treat’. Children and teenagers growing up in these families are exposed to an accumulation of risks
Scaling the robustness of the solutions for quantum controllable problems
International Nuclear Information System (INIS)
Kallush, S.; Kosloff, R.
2011-01-01
The major task in quantum control theory is to find an external field that transforms the system from one state to another or executes a predetermined unitary transformation. We investigate the difficulty of computing the control field as the size of the Hilbert space is increased. In the models studied the controls form a small closed subalgebra of operators. Complete controllability is obtained by the commutators of the controls with the stationary Hamiltonian. We investigate the scaling of the computation effort required to converge a solution for the quantum control task with respect to the size of the Hilbert space. The models studied include the double-well Bose Hubbard model with the SU(2) control subalgebra and the Morse oscillator with the Heisenberg-Weyl algebra. We find that for initial and target states that are classified as generalized coherent states (GCSs) of the control subalgebra the control field is easily found independent of the size of the Hilbert space. For such problems, a control field generated for a small system can serve as a pilot for finding the field for larger systems. Attempting to employ pilot fields that generate superpositions of GCSs or cat states failed. No relation was found between control solutions of different Hilbert space sizes. In addition the task of finding such a field scales unfavorably with Hilbert space sizes. We demonstrate the use of symmetry to obtain quantum transitions between states without phase information. Implications for quantum computing are discussed.
Implicit solvers for large-scale nonlinear problems
International Nuclear Information System (INIS)
Keyes, David E; Reynolds, Daniel R; Woodward, Carol S
2006-01-01
Computational scientists are grappling with increasingly complex, multi-rate applications that couple such physical phenomena as fluid dynamics, electromagnetics, radiation transport, chemical and nuclear reactions, and wave and material propagation in inhomogeneous media. Parallel computers with large storage capacities are paving the way for high-resolution simulations of coupled problems; however, hardware improvements alone will not prove enough to enable simulations based on brute-force algorithmic approaches. To accurately capture nonlinear couplings between dynamically relevant phenomena, often while stepping over rapid adjustments to quasi-equilibria, simulation scientists are increasingly turning to implicit formulations that require a discrete nonlinear system to be solved for each time step or steady state solution. Recent advances in iterative methods have made fully implicit formulations a viable option for solution of these large-scale problems. In this paper, we overview one of the most effective iterative methods, Newton-Krylov, for nonlinear systems and point to software packages with its implementation. We illustrate the method with an example from magnetically confined plasma fusion and briefly survey other areas in which implicit methods have bestowed important advantages, such as allowing high-order temporal integration and providing a pathway to sensitivity analyses and optimization. Lastly, we overview algorithm extensions under development motivated by current SciDAC applications
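The outer Newton iteration at the heart of such implicit solvers can be sketched as follows. For brevity the sketch assembles a finite-difference Jacobian and solves it directly; a true Newton-Krylov method replaces the dense solve with a matrix-free Krylov iteration (e.g. GMRES) that needs only Jacobian-vector products. The example system `F` is ours, chosen for illustration.

```python
import numpy as np

def newton_solve(F, x0, tol=1e-10, max_iter=50, eps=1e-7):
    """Newton's method with a finite-difference Jacobian.

    Each column of J is a directional derivative F'(x) e_j, the same
    building block a Jacobian-free Newton-Krylov solver uses -- except
    that JFNK never forms J, feeding the products straight to GMRES.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        n = x.size
        J = np.empty((n, n))
        for j in range(n):              # one J @ e_j product per column
            e = np.zeros(n)
            e[j] = eps
            J[:, j] = (F(x + e) - Fx) / eps
        x = x - np.linalg.solve(J, Fx)  # JFNK: replace with Krylov solve
    return x

# Example: steady state of a small nonlinear system
def F(x):
    return np.array([x[0]**2 + x[1] - 3.0,
                     x[0] - x[1]**2 + 1.0])
```

The appeal for the large-scale problems described above is exactly that the Krylov inner solve needs no stored Jacobian, only the ability to apply it to a vector.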
Directory of Open Access Journals (Sweden)
Wayne R. Munns, Jr.
2006-06-01
Full Text Available Wildlife populations are experiencing increasing pressure from human-induced changes in the landscape. Stressors including agricultural and urban land use, introduced invasive and exotic species, nutrient enrichment, direct human disturbance, and toxic chemicals directly or indirectly influence the quality and quantity of habitat used by terrestrial and aquatic wildlife. Governmental agencies such as the U.S. Environmental Protection Agency are required to assess risks, in the broadest definition, to wildlife populations that result from exposure to these stressors, yet considerable uncertainty exists with respect to how such assessments should be conducted. This uncertainty is compounded by questions concerning the interactive effects of co-occurring stressors, appropriate spatial scales of analysis, extrapolation of response data among species and from organisms to populations, and imperfect knowledge and use of limited data sets. Further, different risk problems require varying degrees of sophistication, methodological refinement, and data quality. These issues suggest a number of research needs to improve methods for wildlife risk assessments, including continued development of population dynamics models to evaluate the effects of multiple stressors at varying spatial scales, methods for extrapolating across endpoints and species with reasonable confidence, stressor-response relations and methods for combining them in predictive and diagnostic assessments, and accessible data sets describing the ecology of terrestrial and aquatic species. Case study application of models and methods for assessing wildlife risk will help to demonstrate their strengths and limitations for solving particular risk problems.
IRIS Arrays: Observing Wavefields at Multiple Scales and Frequencies
Sumy, D. F.; Woodward, R.; Frassetto, A.
2014-12-01
The Incorporated Research Institutions for Seismology (IRIS) provides instruments for creating and operating seismic arrays at a wide range of scales. As an example, for over thirty years the IRIS PASSCAL program has provided instruments to individual Principal Investigators to deploy arrays of all shapes and sizes on every continent. These arrays have ranged from just a few sensors to hundreds or even thousands of sensors, covering areas with dimensions of meters to thousands of kilometers. IRIS also operates arrays directly, such as the USArray Transportable Array (TA) as part of the EarthScope program. Since 2004, the TA has rolled across North America, at any given time spanning a swath of approximately 800 km by 2,500 km, and thus far sampling 2% of the Earth's surface. This achievement includes all of the lower-48 U.S., southernmost Canada, and now parts of Alaska. IRIS has also facilitated specialized arrays in polar environments and on the seafloor. In all cases, the data from these arrays are freely available to the scientific community. As the community of scientists who use IRIS facilities and data look to the future they have identified a clear need for new array capabilities. In particular, as part of its Wavefields Initiative, IRIS is exploring new technologies that can enable large, dense array deployments to record unaliased wavefields at a wide range of frequencies. Large-scale arrays might utilize multiple sensor technologies to best achieve observing objectives and optimize equipment and logistical costs. Improvements in packaging and power systems can provide equipment with reduced size, weight, and power that will reduce logistical constraints for large experiments, and can make a critical difference for deployments in harsh environments or other situations where rapid deployment is required. We will review the range of existing IRIS array capabilities with an overview of previous and current deployments and examples of data and results. We
Cosmological problems with multiple axion-like fields
International Nuclear Information System (INIS)
Mack, Katherine J.; Steinhardt, Paul J.
2011-01-01
Incorporating the QCD axion and simultaneously satisfying current constraints on the dark matter density and isocurvature fluctuations requires non-minimal fine-tuning of inflationary parameters or the axion misalignment angle (or both) for Peccei-Quinn symmetry-breaking scales f_a > 10^12 GeV. To gauge the degree of tuning in models with many axion-like fields at similar symmetry-breaking scales and masses, as may occur in string-theoretic models that include a QCD axion, we introduce a figure of merit F that measures the fractional volume of allowed parameter space: the product of the slow-roll parameter ε and each of the axion misalignment angles, θ_0. For a single axion, F ∼ 10^−11 is needed to avoid conflict with observations. We show that the fine-tuning of F becomes exponentially more extreme in the case of numerous axion-like fields. Anthropic arguments are insufficient to explain the fine-tuning because the bulk of the anthropically allowed parameter space is observationally ruled out by limits on the cosmic microwave background isocurvature modes. Therefore, this tuning presents a challenge to the compatibility of string-theoretic models with light axions and inflationary cosmology
Protective factors associated with fewer multiple problem behaviors among homeless/runaway youth.
Lightfoot, Marguerita; Stein, Judith A; Tevendale, Heather; Preston, Kathleen
2011-01-01
Although homeless youth exhibit numerous problem behaviors, protective factors that can be targeted and modified by prevention programs to decrease the likelihood of involvement in risky behaviors are less apparent. The current study tested a model of protective factors for multiple problem behavior in a sample of 474 homeless youth (42% girls; 83% minority) ages 12 to 24 years. Higher levels of problem solving and planning skills were strongly related to lower levels of multiple problem behaviors in homeless youth, suggesting both the positive impact of preexisting personal assets of these youth and important programmatic targets for further building their resilience and decreasing problem behaviors. Indirect relationships between the background factors of self-esteem and social support and multiple problem behaviors were significantly mediated through protective skills. The model suggests that helping youth enhance their skills in goal setting, decision making, and self-reliant coping could lessen a variety of problem behaviors commonly found among homeless youth.
Hawthorne, Bryant; Panchal, Jitesh H.
2014-07-01
A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.
An algorithm to compute a rule for division problems with multiple references
Directory of Open Access Journals (Sweden)
Sánchez Sánchez, Francisca J.
2012-01-01
Full Text Available In this paper we consider an extension of the classic division problem with claims: the division problem with multiple references. Hinojosa et al. (2012) provide a solution for this type of problem. The aim of this work is to extend their results by proposing an algorithm that calculates allocations based on these results. All computational details are provided in the paper.
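For orientation, the classic single-reference proportional rule for a division problem with claims can be sketched as below; the paper's algorithm for multiple references is not reproduced here, and the function name is ours.

```python
def proportional_rule(estate, claims):
    """Proportional rule for a classic division problem with claims.

    Each claimant receives a share of the estate proportional to her
    claim; the total of the claims is assumed to exceed the estate
    (otherwise there is nothing to ration).
    """
    total = sum(claims)
    return [estate * c / total for c in claims]
```

Multiple-reference extensions replace the single claims vector with several reference vectors and must reconcile the allocations they induce.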
Thompson, William F.; Kuske, Rachel A.; Monahan, Adam H.
2017-11-01
Stochastic averaging problems with Gaussian forcing have been the subject of numerous studies, but far less attention has been paid to problems with infinite-variance stochastic forcing, such as an α-stable noise process. It has been shown that simple linear systems driven by correlated additive and multiplicative (CAM) Gaussian noise, which emerge in the context of reduced atmosphere and ocean dynamics, have infinite variance in certain parameter regimes. In this study, we consider the stochastic averaging of systems where a linear CAM noise process in the infinite variance parameter regime drives a comparatively slow process. We use (semi)-analytical approximations combined with numerical illustrations to compare the averaged process to one that is forced by a white α-stable process, demonstrating consistent properties in the case of large time-scale separation. We identify the conditions required for the fast linear CAM process to have such an influence in driving a slower process and then derive an (effectively) equivalent fast, infinite-variance process for which an existing stochastic averaging approximation is readily applied. The results are illustrated using numerical simulations of a set of example systems.
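A linear SDE with additive and multiplicative Gaussian noise terms, of the kind referred to above, can be simulated with a plain Euler-Maruyama scheme. The sketch below takes the two noises to be independent for simplicity (the CAM systems in the study have them correlated) and uses illustrative parameter values.

```python
import numpy as np

def cam_sde_path(lam=1.0, sigma_a=0.5, sigma_m=1.2, dt=1e-3,
                 n_steps=10000, seed=0):
    """Euler-Maruyama path of a linear SDE with additive and
    multiplicative Gaussian noise terms:

        dx = -lam*x dt + sigma_a dW_a + sigma_m * x dW_m

    Parameters are illustrative. The stationary variance is finite only
    when sigma_m**2 < 2*lam; beyond that threshold the process becomes
    heavy-tailed -- the regime the abstract calls infinite variance.
    """
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = 0.0
    sq = np.sqrt(dt)
    for k in range(n_steps):
        dwa, dwm = rng.normal(0.0, sq, size=2)
        x[k + 1] = x[k] - lam * x[k] * dt + sigma_a * dwa + sigma_m * x[k] * dwm
    return x
```

When such a fast heavy-tailed process drives a slow variable, the averaging result discussed above replaces it with an effectively equivalent white α-stable forcing.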
[AIDS in Chile: a problem with multiple facets].
Ormazabal, B
1991-03-01
Chile's 1st case of AIDS was diagnosed in 1984. Some 250 AIDS cases and 1600 HIV positive persons have since been reported, although the actual number by some estimates may reach 5000. Chile, although in the initial stages of the epidemic, already has a serious problem which at present can only be combatted through education. It will be necessary to convince the population that significant modifications of sexual behavior are needed to control the spread of the disease. Education for AIDS prevention is a priority of the National Commission on AIDS (CONASIDA), which is basing its program on the premise that stable monogamy is the most natural form of expression of a couple. Manuals for prevention are under development, and the 1st, for health workers and the general population, is in process of publication. A series of pamphlets and educational videos for workers in sexually transmitted disease clinics are under development. Educational materials are also being created for specific groups such as university students and agricultural workers and for groups at high risk. A social communications campaign has been prepared and approved by the authorities, and is awaiting funding for dissemination. Education of the population is also a concern for the Catholic Church, which views reinforcement of the family and its mission of providing sex education as a primary means of preventing AIDS. CONASIDA is also responsible for epidemiological study of AIDS in Chile through surveillance of sentinel groups and in quality control of the blood supply. Condoms are to be distributed in sexually transmitted disease clinics for the purpose of AIDS prevention.
International Nuclear Information System (INIS)
Botet, R.
1996-01-01
A novel scaling of the multiplicity distributions is found in the shattering phase of the sequential fragmentation process with inhibition. The same scaling law is shown to hold in the percolation process. (author)
Materials and nanosystems : interdisciplinary computational modeling at multiple scales
International Nuclear Information System (INIS)
Huber, S.E.
2014-01-01
Over the last five decades, computer simulation and numerical modeling have become valuable tools complementing the traditional pillars of science, experiment and theory. In this thesis, several applications of computer-based simulation and modeling shall be explored in order to address problems and open issues in chemical and molecular physics. Attention shall be paid especially to the different degrees of interrelatedness and multiscale-flavor, which may - at least to some extent - be regarded as inherent properties of computational chemistry. In order to do so, a variety of computational methods are used to study features of molecular systems which are of relevance in various branches of science and which correspond to different spatial and/or temporal scales. Proceeding from small to large measures, first, an application in astrochemistry, the investigation of spectroscopic and energetic aspects of carbonic acid isomers shall be discussed. In this respect, very accurate and hence at the same time computationally very demanding electronic structure methods like the coupled-cluster approach are employed. These studies are followed by the discussion of an application in the scope of plasma-wall interaction which is related to nuclear fusion research. There, the interactions of atoms and molecules with graphite surfaces are explored using density functional theory methods. The latter are computationally cheaper than coupled-cluster methods and thus allow the treatment of larger molecular systems, but yield less accuracy and especially reduced error control at the same time. The subsequently presented exploration of surface defects at low-index polar zinc oxide surfaces, which are of interest in materials science and surface science, is another surface science application. The necessity to treat even larger systems of several hundreds of atoms requires the use of approximate density functional theory methods. Thin gold nanowires consisting of several thousands of
The Core Problem within a Linear Approximation Problem $AX \approx B$ with Multiple Right-Hand Sides
Czech Academy of Sciences Publication Activity Database
Hnětynková, Iveta; Plešinger, Martin; Strakoš, Z.
2013-01-01
Roč. 34, č. 3 (2013), s. 917-931 ISSN 0895-4798 R&D Projects: GA ČR GA13-06684S Grant - others:GA ČR(CZ) GA201/09/0917; GA MŠk(CZ) EE2.3.09.0155; GA MŠk(CZ) EE2.3.30.0065 Program:GA Institutional support: RVO:67985807 Keywords : total least squares problem * multiple right-hand sides * core problem * linear approximation problem * error-in-variables modeling * orthogonal regression * singular value decomposition Subject RIV: BA - General Mathematics Impact factor: 1.806, year: 2013
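The total least squares problem named in the keywords has, for a single right-hand side, a textbook SVD construction (Golub and Van Loan); the core-problem theory of the paper concerns when and how this extends to multiple right-hand sides. A sketch of the single right-hand-side case:

```python
import numpy as np

def tls(A, b):
    """Total least squares solution of A x ~ b via the SVD.

    Take the right singular vector of the augmented matrix [A | b]
    belonging to the smallest singular value and rescale so its last
    component is -1. This is the classical construction, assuming the
    generic case where that component is nonzero.
    """
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]                 # right singular vector, smallest sigma
    if abs(v[n]) < 1e-12:
        raise ValueError("TLS solution does not exist (v_{n+1} = 0)")
    return -v[:n] / v[n]
```

Unlike ordinary least squares, this allows errors in both A and b (error-in-variables modeling, as the keywords put it), which is also why the multiple right-hand-side case is subtle.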
Interplay between multiple length and time scales in complex ...
Indian Academy of Sciences (India)
Administrator
Processes in complex chemical systems, such as macromolecules, electrolytes, interfaces, ... by processes operating on a multiplicity of length .... real time. The design and interpretation of femtosecond experiments has required considerable ...
Sole, Marla A.
2016-01-01
Open-ended questions that can be solved using different strategies help students learn and integrate content, and provide teachers with greater insights into students' unique capabilities and levels of understanding. This article provides a problem that was modified to allow for multiple approaches. Students tended to employ high-powered, complex,…
Sunderland, Matthew; Batterham, Philip; Calear, Alison; Carragher, Natacha; Baillie, Andrew; Slade, Tim
2018-04-10
There is no standardized approach to the measurement of social anxiety. Researchers and clinicians are faced with numerous self-report scales with varying strengths, weaknesses, and psychometric properties. The lack of standardization makes it difficult to compare scores across populations that utilise different scales. Item response theory offers one solution to this problem via equating different scales using an anchor scale to set a standardized metric. This study is the first to equate several scales for social anxiety disorder. Data from two samples (n=3,175 and n=1,052), recruited from the Australian community using online advertisements, were utilised to equate a network of 11 self-report social anxiety scales via a fixed parameter item calibration method. Comparisons between actual and equated scores for most of the scales indicated a high level of agreement with mean differences <0.10 (equivalent to a mean difference of less than one point on the standardized metric). This study demonstrates that scores from multiple scales that measure social anxiety can be converted to a common scale. Re-scoring observed scores to a common scale provides opportunities to combine research from multiple studies and ultimately better assess social anxiety in treatment and research settings. Copyright © 2018. Published by Elsevier Inc.
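As a much simpler stand-in for the fixed-parameter IRT calibration used in the study, linear (mean-sigma) equating already shows the core idea of mapping scores from one scale onto the metric of another; the function below is our illustration, not the study's method.

```python
import statistics

def linear_equate(scores_x, scores_y):
    """Mean-sigma linear equating from scale X to scale Y.

    Match the first two moments of the two score distributions and
    return a conversion function; IRT equating instead places the item
    parameters of all scales on one latent metric via anchor items.
    """
    mx, sx = statistics.mean(scores_x), statistics.stdev(scores_x)
    my, sy = statistics.mean(scores_y), statistics.stdev(scores_y)

    def convert(x):
        return my + (sy / sx) * (x - mx)

    return convert
```

The IRT approach in the study is preferable when the two samples differ in ability, since item parameters, unlike raw moments, are population-invariant under the model.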
DEFF Research Database (Denmark)
Petersen, Hanne Løhmann; Madsen, Oli B.G.
2009-01-01
This paper introduces the double travelling salesman problem with multiple stacks and presents four different metaheuristic approaches to its solution. The double TSP with multiple stacks is concerned with determining the shortest route performing pickups and deliveries in two separated networks...
Energy Technology Data Exchange (ETDEWEB)
Carey, G.F.; Young, D.M.
1993-12-31
The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods developed and algorithms will, however, be of wider interest.
Scaled multiple holes suction tip for microneurosurgery; Technical note
Directory of Open Access Journals (Sweden)
Abdolkarim Rahmanian, Associate Professor of Neurosurgery
2017-12-01
Conclusion: The new suction tip permits easy and precise adjustment of suction power in microneurosurgical operations. Our scaled 3- and 4-hole suction tip is a simple and useful device for controlling suction power during microneurosurgical procedures.
Multiple dynamical time-scales in networks with hierarchically
Indian Academy of Sciences (India)
Modular networks; hierarchical organization; synchronization. ... we show that such a topological structure gives rise to characteristic time-scale separation ... This suggests a possible functional role of such mesoscopic organization principle in ...
The Great Chains of Computing: Informatics at Multiple Scales
Directory of Open Access Journals (Sweden)
Kevin Kirby
2011-10-01
Full Text Available The perspective from which information processing is pervasive in the universe has proven to be an increasingly productive one. Phenomena from the quantum level to social networks have commonalities that can be usefully explicated using principles of informatics. We argue that the notion of scale is particularly salient here. An appreciation of what is invariant and what is emergent across scales, and of the variety of different types of scales, establishes a useful foundation for the transdiscipline of informatics. We survey the notion of scale and use it to explore the characteristic features of information statics (data), kinematics (communication), and dynamics (processing). We then explore the analogy to the principles of plenitude and continuity that feature in Western thought, under the name of the "great chain of being", from Plato through Leibniz and beyond, and show that the pancomputational turn is a modern counterpart of this ruling idea. We conclude by arguing that this broader perspective can enhance informatics pedagogy.
Microstructural evolution at multiple scales during plastic deformation
DEFF Research Database (Denmark)
Winther, Grethe
During plastic deformation metals develop microstructures which may be analysed on several scales, e.g. bulk textures, the scale of individual grains, intragranular phenomena in the form of orientation spreads as well as dislocation patterning by formation of dislocation boundaries in metals of m......, which is backed up by experimental data [McCabe et al. 2004; Wei et al., 2011; Hong, Huang, & Winther, 2013]. The current state of understanding as well as the major challenges are discussed....
Designing and using multiple-possibility physics problems in physics courses
Shekoyan, Vazgen
2012-02-01
One important aspect of physics instruction is helping students develop better problem-solving expertise. Besides enhancing content knowledge, problems help students develop different cognitive abilities and skills. This presentation focuses on multiple-possibility problems (alternatively called ill-structured problems). These problems differ from traditional "end of chapter" single-possibility problems: they do not have one right answer, so the student has to examine different possibilities and assumptions and evaluate the outcomes. Solving such problems requires a form of cognitive monitoring called epistemic cognition, an important part of thinking in real life. Physicists routinely use epistemic cognition when they solve problems. I have explored the instructional value of using such problems in introductory physics courses.
Visual Attention for Solving Multiple-Choice Science Problem: An Eye-Tracking Analysis
Tsai, Meng-Jung; Hou, Huei-Tse; Lai, Meng-Lung; Liu, Wan-Yi; Yang, Fang-Ying
2012-01-01
This study employed an eye-tracking technique to examine students' visual attention when solving a multiple-choice science problem. Six university students participated in a problem-solving task to predict occurrences of landslide hazards from four images representing four combinations of four factors. Participants' responses and visual attention…
On Solution of Total Least Squares Problems with Multiple Right-hand Sides
Czech Academy of Sciences Publication Activity Database
Hnětynková, I.; Plešinger, Martin; Strakoš, Zdeněk
2008-01-01
Roč. 8, č. 1 (2008), s. 10815-10816 ISSN 1617-7061 R&D Projects: GA AV ČR IAA100300802 Institutional research plan: CEZ:AV0Z10300504 Keywords : total least squares problem * multiple right-hand sides * linear approximation problem Subject RIV: BA - General Mathematics
The computer-aided design of a servo system as a multiple-criteria decision problem
Udink ten Cate, A.J.
1986-01-01
This paper treats the selection of controller gains of a servo system as a multiple-criteria decision problem. In contrast to the usual optimization-based approaches to computer-aided design, inequality constraints are included in the problem as unconstrained objectives. This considerably simplifies
National Earthquake Information Center Seismic Event Detections on Multiple Scales
Patton, J.; Yeck, W. L.; Benz, H.; Earle, P. S.; Soto-Cordero, L.; Johnson, C. E.
2017-12-01
The U.S. Geological Survey National Earthquake Information Center (NEIC) monitors seismicity on local, regional, and global scales using automatic picks from more than 2,000 near-real time seismic stations. This presents unique challenges in automated event detection due to the high variability in data quality, network geometries and density, and distance-dependent variability in observed seismic signals. To lower the overall detection threshold while minimizing false detection rates, NEIC has begun to test the incorporation of new detection and picking algorithms, including multiband (Lomax et al., 2012) and kurtosis (Baillard et al., 2014) pickers, and a new Bayesian associator (Glass 3.0). The Glass 3.0 associator allows for simultaneous processing of variably scaled detection grids, each with a unique set of nucleation criteria (e.g., nucleation threshold, minimum associated picks, nucleation phases) to meet specific monitoring goals. We test the efficacy of these new tools on event detection in networks of various scales and geometries, compare our results with previous catalogs, and discuss lessons learned. For example, we find that on local and regional scales, rapid nucleation of small events may require event nucleation with both P and higher-amplitude secondary phases (e.g., S or Lg). We provide examples of the implementation of a scale-independent associator for an induced seismicity sequence (local-scale), a large aftershock sequence (regional-scale), and for monitoring global seismicity. Baillard, C., Crawford, W. C., Ballu, V., Hibert, C., & Mangeney, A. (2014). An automatic kurtosis-based P- and S-phase picker designed for local seismic networks. Bulletin of the Seismological Society of America, 104(1), 394-409. Lomax, A., Satriano, C., & Vassallo, M. (2012). Automatic picker developments and optimization: FilterPicker - a robust, broadband picker for real-time seismic monitoring and earthquake early-warning, Seism. Res. Lett., 83, 531-540, doi: 10
Optimization of constrained multiple-objective reliability problems using evolutionary algorithms
International Nuclear Information System (INIS)
Salazar, Daniel; Rocco, Claudio M.; Galvan, Blas J.
2006-01-01
This paper illustrates the use of multi-objective optimization to solve three types of reliability optimization problems: to find the optimal number of redundant components, find the reliability of components, and determine both their redundancy and reliability. In general, these problems have been formulated as single objective mixed-integer non-linear programming problems with one or several constraints and solved by using mathematical programming techniques or special heuristics. In this work, these problems are reformulated as multiple-objective problems (MOP) and then solved by using a second-generation Multiple-Objective Evolutionary Algorithm (MOEA) that allows handling constraints. The MOEA used in this paper (NSGA-II) demonstrates the ability to identify a set of optimal solutions (Pareto front), which provides the Decision Maker with a complete picture of the optimal solution space. Finally, the advantages of both MOP and MOEA approaches are illustrated by solving four redundancy problems taken from the literature.
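The non-dominated sorting that NSGA-II builds on can be illustrated with a short sketch. The Python below is a toy Pareto filter over hypothetical (cost, failure probability) design pairs; it is not the paper's algorithm or data, only the dominance rule that produces a Pareto front:

```python
def pareto_front(solutions):
    """Return the non-dominated subset of objective tuples.

    A solution is dominated if some other solution is no worse in
    every objective and differs in at least one (both minimized).
    """
    front = []
    for s in solutions:
        dominated = any(
            all(o <= v for o, v in zip(other, s)) and other != s
            for other in solutions
        )
        if not dominated:
            front.append(s)
    return front

# Hypothetical redundancy designs: (cost, probability of failure).
designs = [(10, 0.20), (12, 0.10), (15, 0.10), (18, 0.05), (11, 0.25)]
print(pareto_front(designs))  # -> [(10, 0.2), (12, 0.1), (18, 0.05)]
```

Each surviving pair is one point of the Pareto front handed to the Decision Maker; NSGA-II additionally ranks dominated solutions into successive fronts and preserves diversity along each front.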
Optimization of constrained multiple-objective reliability problems using evolutionary algorithms
Energy Technology Data Exchange (ETDEWEB)
Salazar, Daniel [Instituto de Sistemas Inteligentes y Aplicaciones Numericas en Ingenieria (IUSIANI), Division de Computacion Evolutiva y Aplicaciones (CEANI), Universidad de Las Palmas de Gran Canaria, Islas Canarias (Spain) and Facultad de Ingenieria, Universidad Central Venezuela, Caracas (Venezuela)]. E-mail: danielsalazaraponte@gmail.com; Rocco, Claudio M. [Facultad de Ingenieria, Universidad Central Venezuela, Caracas (Venezuela)]. E-mail: crocco@reacciun.ve; Galvan, Blas J. [Instituto de Sistemas Inteligentes y Aplicaciones Numericas en Ingenieria (IUSIANI), Division de Computacion Evolutiva y Aplicaciones (CEANI), Universidad de Las Palmas de Gran Canaria, Islas Canarias (Spain)]. E-mail: bgalvan@step.es
2006-09-15
This paper illustrates the use of multi-objective optimization to solve three types of reliability optimization problems: to find the optimal number of redundant components, find the reliability of components, and determine both their redundancy and reliability. In general, these problems have been formulated as single objective mixed-integer non-linear programming problems with one or several constraints and solved by using mathematical programming techniques or special heuristics. In this work, these problems are reformulated as multiple-objective problems (MOP) and then solved by using a second-generation Multiple-Objective Evolutionary Algorithm (MOEA) that allows handling constraints. The MOEA used in this paper (NSGA-II) demonstrates the ability to identify a set of optimal solutions (Pareto front), which provides the Decision Maker with a complete picture of the optimal solution space. Finally, the advantages of both MOP and MOEA approaches are illustrated by solving four redundancy problems taken from the literature.
Multiple Scale Analysis of the Dynamic State Index (DSI)
Müller, A.; Névir, P.
2016-12-01
The Dynamic State Index (DSI) is a novel parameter that indicates local deviations of the atmospheric flow field from a stationary, inviscid and adiabatic solution of the primitive equations of fluid mechanics. This is in contrast to classical methods, which often diagnose deviations from temporal or spatial mean states. We show some applications of the DSI to atmospheric flow phenomena on different scales. The DSI is derived from the Energy-Vorticity-Theory (EVT), which is based on two global conserved quantities, the total energy and Ertel's potential enstrophy. Locally, these global quantities lead to the Bernoulli function and the PV, which together with the potential temperature build the DSI. If the Bernoulli function and the PV are balanced, the DSI vanishes and the basic state is obtained. Deviations from the basic state provide an indication of diabatic and non-stationary weather events. Therefore, the DSI offers a tool to diagnose and even predict different atmospheric events on different scales. On the synoptic scale, the DSI can help to diagnose storms and hurricanes, where the dipole structure of the DSI also plays an important role. In the scope of the collaborative research center "Scaling Cascades in Complex Systems" we show high correlations between the DSI and precipitation on the convective scale. Moreover, we compare the results with reduced models and different spatial resolutions.
Step by Step: Biology Undergraduates’ Problem-Solving Procedures during Multiple-Choice Assessment
Prevost, Luanna B.; Lemons, Paula P.
2016-01-01
This study uses the theoretical framework of domain-specific problem solving to explore the procedures students use to solve multiple-choice problems about biology concepts. We designed several multiple-choice problems and administered them on four exams. We trained students to produce written descriptions of how they solved the problem, and this allowed us to systematically investigate their problem-solving procedures. We identified a range of procedures and organized them as domain general, domain specific, or hybrid. We also identified domain-general and domain-specific errors made by students during problem solving. We found that students use domain-general and hybrid procedures more frequently when solving lower-order problems than higher-order problems, while they use domain-specific procedures more frequently when solving higher-order problems. Additionally, the more domain-specific procedures students used, the higher the likelihood that they would answer the problem correctly, up to five procedures. However, if students used just one domain-general procedure, they were as likely to answer the problem correctly as if they had used two to five domain-general procedures. Our findings provide a categorization scheme and framework for additional research on biology problem solving and suggest several important implications for researchers and instructors. PMID:27909021
Solvability of the Core Problem with Multiple Right-Hand Sides in the TLS Sense
Czech Academy of Sciences Publication Activity Database
Hnětynková, Iveta; Plešinger, M.; Sima, D.M.
2016-01-01
Roč. 37, č. 3 (2016), s. 861-876 ISSN 0895-4798 R&D Projects: GA ČR GA13-06684S Institutional support: RVO:67985807 Keywords : total least squares (TLS) problem * multiple right-hand sides * core problem * linear approximation problem * error-in-variables modeling * orthogonal regression * classical TLS algorithm Subject RIV: BA - General Mathematics Impact factor: 2.194, year: 2016
A Novel Multiple-Time Scale Integrator for the Hybrid Monte Carlo Algorithm
International Nuclear Information System (INIS)
Kamleh, Waseem
2011-01-01
Hybrid Monte Carlo simulations that implement the fermion action using multiple terms are commonly used. By the nature of their formulation they involve multiple integration time scales in the evolution of the system through simulation time. These different scales are usually dealt with by the Sexton-Weingarten nested leapfrog integrator. In this scheme the choice of time scales is somewhat restricted as each time step must be an exact multiple of the next smallest scale in the sequence. A novel generalisation of the nested leapfrog integrator is introduced which allows for far greater flexibility in the choice of time scales, as each scale now must only be an exact multiple of the smallest step size.
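The nested (Sexton-Weingarten) leapfrog structure described above can be sketched for a toy Hamiltonian; the slow/fast force split, step sizes, and step counts below are illustrative assumptions, not a lattice fermion action:

```python
def nested_leapfrog(q, p, dt, n_outer, n_inner, force_slow, force_fast):
    """Nested leapfrog: the slow force kicks on the outer scale dt,
    while the fast force is integrated with n_inner substeps of size
    dt / n_inner -- each time step an exact multiple of the next,
    which is the restriction the generalised integrator relaxes."""
    for _ in range(n_outer):
        p += 0.5 * dt * force_slow(q)        # half kick, slow force
        h = dt / n_inner
        for _ in range(n_inner):             # inner leapfrog, fast force
            p += 0.5 * h * force_fast(q)
            q += h * p
            p += 0.5 * h * force_fast(q)
        p += 0.5 * dt * force_slow(q)        # half kick, slow force
    return q, p

# Toy split harmonic oscillator: total force -1.1*q,
# conserved energy 0.5*p**2 + 0.55*q**2 = 0.55 initially.
q, p = nested_leapfrog(1.0, 0.0, 0.1, 100, 5,
                       lambda q: -0.1 * q, lambda q: -1.0 * q)
energy = 0.5 * p * p + 0.55 * q * q          # stays near the initial 0.55
```

Because the scheme is symplectic, the energy drift stays bounded even over many outer steps, which is what makes it usable inside a Hybrid Monte Carlo accept/reject step.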
Sajjadi, Mohammadreza; Pishkenari, Hossein Nejat; Vossoughi, Gholamreza
2018-06-01
Trolling mode atomic force microscopy (TR-AFM) has resolved many imaging problems by considerably reducing the liquid-resonator interaction forces in liquid environments. The present study develops a nonlinear model of the meniscus force exerted on the nanoneedle of TR-AFM and presents an analytical solution to the distributed-parameter model of the TR-AFM resonator using the multiple time scales (MTS) method. Based on the developed analytical solution, the frequency-response curves of the resonator operating in air and in liquid (for different penetration lengths of the nanoneedle) are obtained. The closed-form analytical solution and the frequency-response curves are validated by comparison with both the finite element solution of the main partial differential equations and experimental observations. The effect of the excitation angle of the resonator on the horizontal oscillation of the probe tip and the effects of different parameters on the frequency response of the system are investigated.
From genes to landscapes: conserving biodiversity at multiple scales.
Sally. Duncan
2000-01-01
Biodiversity has at last become a familiar term outside of scientific circles. Ways of measuring and mapping it are advancing and becoming more complex, but ways of deciding how to conserve it remain mixed at best, and the resources available to manage diminishing biodiversity are themselves scarce. One significant problem is that policy decisions are frequently at...
Choi, Sun Hee; Park, Young Sil; Shim, Kye Shik; Choi, Yong Sung; Chang, Ji Young; Hahn, Won Ho; Bae, Chong-Woo
2010-08-01
The aim of this study was to survey multiple birth data and to analyze recent trends in multiple births and their consequences for perinatal problems in Korea from 1991 to 2008. Data were obtained from the Korean Statistical Information Service. The total number of multiple births showed an increasing trend. The multiple birth rate remained below 10.0 during the decade from 1981 to 1990, but then increased gradually to reach 27.5 in 2008. The maternal age for multiple births was higher than for total live births. In 2008 the mean birth weight of total live births was 3.23 kg; for multiple births it was 2.40 kg. The incidence of low birth weight infants (LBWI) among total live births was 3.8% in 2000 and 4.9% in 2008; for multiple births it was 49.2% and 53.0% during the same years. The incidence of preterm births among total live births was 3.8% in 2000 and 5.5% in 2008; for multiple births it was 38.3% and 51.5% during the same years. The incidence of multiple births and their consequences for perinatal problems (preterm birth, LBWI, and advanced maternal age) have increased steadily over the last two decades in Korea.
New ISR and SPS collider multiplicity data and the Golokhvastov generalization of the KNO scaling
International Nuclear Information System (INIS)
Szwed, R.; Wrochna, G.
1985-01-01
The generalization of KNO scaling proposed by Golokhvastov (KNO-G scaling) is tested using pp multiplicity data, in particular the results of new high-precision ISR measurements. Since the data obey KNO-G scaling over the full energy range √s = 2.51-62.2 GeV with a scaling function ψ(z) having only one free parameter, the superiority of KNO-G over the standard approach is clearly demonstrated. An extrapolation within KNO-G scaling to the SPS Collider energy range and a comparison with the recent UA5 multiplicity results are presented. (orig.)
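KNO scaling says the rescaled distribution ⟨n⟩·P_n depends only on z = n/⟨n⟩. A small numeric sketch, using a textbook ψ(z) assumed here purely for illustration (the abstract does not give Golokhvastov's one-parameter function):

```python
import math

def kno_pn(n, mean):
    """P_n constructed to satisfy KNO scaling, <n>*P_n = psi(n/<n>),
    with the illustrative choice psi(z) = (pi*z/2)*exp(-pi*z**2/4),
    which has unit area and unit mean."""
    z = n / mean
    return (math.pi * z / 2) * math.exp(-math.pi * z * z / 4) / mean

# Points at equal z collapse onto one curve for different <n> (energies):
assert abs(5 * kno_pn(5, 5.0) - 20 * kno_pn(20, 20.0)) < 1e-12

# For discrete n the sum of P_n only approximates 1 -- the consistency
# issue with exact KNO scaling that the KNO-G formulation addresses.
print(sum(kno_pn(n, 5.0) for n in range(1, 200)))  # close to, not exactly, 1
```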
Mathematical programming methods for large-scale topology optimization problems
DEFF Research Database (Denmark)
Rojas Labanda, Susana
for mechanical problems, but has rapidly extended to many other disciplines, such as fluid dynamics and biomechanical problems. However, the novelty and improvement of optimization methods have been very limited. It is, indeed, necessary to develop new optimization methods to improve the final designs......, and at the same time, reduce the number of function evaluations. Nonlinear optimization methods, such as sequential quadratic programming and interior point solvers, have hardly been embraced by the topology optimization community. Thus, this work focuses on the introduction of this kind of second...... for the classical minimum compliance problem. Two state-of-the-art optimization algorithms are investigated and implemented for this structural topology optimization problem. A sequential quadratic programming method (TopSQP) and an interior point method (TopIP) are developed exploiting the specific mathematical...
Nonlinear MHD dynamics of tokamak plasmas on multiple time scales
International Nuclear Information System (INIS)
Kruger, S.E.; Schnack, D.D.; Brennan, D.P.; Gianakon, T.A.; Sovinec, C.R.
2003-01-01
Two types of numerical, nonlinear simulations using the NIMROD code are presented. In the first simulation, we model the disruption occurring in DIII-D discharge 87009 as an ideal MHD instability driven unstable by neutral-beam heating. The mode grows faster than exponential, but on a time scale that is a hybrid of the heating rate and the ideal MHD growth rate, as predicted by analytic theory. The second type of simulations, which occur on a much longer time scale, focus on the seeding of tearing modes by sawteeth. Pressure effects play a role both in the exterior region solutions and in the neoclassical drive terms. The results of both simulations are reviewed and their implications for experimental analysis are discussed. (author)
Human learning: Power laws or multiple characteristic time scales?
Directory of Open Access Journals (Sweden)
Gottfried Mayer-Kress
2006-09-01
The central proposal of A. Newell and Rosenbloom (1981) was that the power law is the ubiquitous law of learning. This proposition is discussed in the context of the key factors that led to the acceptance of the power law as the function of learning. We then outline the principles of an epigenetic landscape framework for considering the role of the characteristic time scales of learning and an approach to system identification of the processes of performance dynamics. In this view, the change of performance over time is the product of a superposition of characteristic exponential time scales that reflect the influence of different processes. This theoretical approach can reproduce the traditional power law of practice within the experimental resolution of performance data sets - but we hypothesize that this function may prove to be a special and perhaps idealized case of learning.
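The superposition idea is easy to sketch: a sum of exponential processes with different characteristic time scales yields a curve that looks close to a power law over a few decades of practice time. The scales and weights below are hypothetical:

```python
import math

def performance(t, scales=(1.0, 10.0, 100.0), weights=(0.5, 0.3, 0.2)):
    """Residual error after practice time t as a superposition of
    exponential learning processes, each with its own time scale."""
    return sum(w * math.exp(-t / tau) for w, tau in zip(weights, scales))

# Sampled over seven doublings of practice time the curve decays
# smoothly, roughly linear in log-log coordinates -- mimicking the
# power law of practice within typical experimental resolution.
ts = [2.0 ** k for k in range(8)]          # t = 1, 2, ..., 128
ys = [performance(t) for t in ts]
```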
Numerical Investigation of Multiple-, Interacting-Scale Variable-Density Ground Water Flow Systems
Cosler, D.; Ibaraki, M.
2004-12-01
The goal of our study is to elucidate the nonlinear processes that are important for multiple-, interacting-scale flow and solute transport in subsurface environments. In particular, we are focusing on the influence of small-scale instability development on variable-density ground water flow behavior in large-scale systems. Convective mixing caused by these instabilities may mix the fluids to a greater extent than would be the case with classical, Fickian dispersion. Most current numerical schemes for interpreting field-scale variable-density flow systems do not explicitly account for the complexities caused by small-scale instabilities and treat such processes as "lumped" Fickian dispersive mixing. Such approaches may greatly underestimate the mixing behavior and misrepresent the overall large-scale flow field dynamics. The specific objectives of our study are: (i) to develop an adaptive (spatial and temporal scales) three-dimensional numerical model that is fully capable of simulating field-scale variable-density flow systems with fine resolution (~1 cm); and (ii) to evaluate the importance of scale-dependent process interactions by performing a series of simulations on different problem scales ranging from laboratory experiments to field settings, including an aquifer storage and freshwater recovery (ASR) system similar to those planned for the Florida Everglades and in-situ contaminant remediation systems. We are examining (1) methods to create instabilities in field-scale systems, (2) porous media heterogeneity effects, and (3) the relation between heterogeneity characteristics (e.g., permeability variance and correlation length scales) and the mixing scales that develop for varying degrees of unstable stratification. Applications of our work include the design of new water supply and conservation measures (e.g., ASR systems), assessment of saltwater intrusion problems in coastal aquifers, and the design of in-situ remediation systems for aquifer restoration
Gitchel, W. Dent; Roessler, Richard T.; Turner, Ronna C.
2011-01-01
Assessment is critical to rehabilitation practice and research, and self-reports are a commonly used form of assessment. This study examines a gender effect according to item wording on the "Perceived Stress Scale" for adults with multiple sclerosis. Past studies have demonstrated two-factor solutions on this scale and other scales measuring…
Transition in multiple-scale-lengths turbulence in plasmas
Energy Technology Data Exchange (ETDEWEB)
Itoh, S.-I.; Yagi, M.; Kawasaki, M.; Kitazawa, A. [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics; Itoh, K. [National Inst. for Fusion Science, Toki, Gifu (Japan)
2002-02-01
The statistical theory of strong turbulence in inhomogeneous plasmas is developed for the cases where fluctuations with different scale-lengths coexist. Statistical nonlinear interactions between semi-micro and micro modes are first kept in the analysis as the drag, noise and drive. The nonlinear dynamics determines both the fluctuation levels and the cross field turbulent transport for the fixed global parameters. A quenching or suppressing effect is induced by their nonlinear interplay, even if both modes are unstable when analyzed independently. Influence of the inhomogeneous global radial electric field is discussed. A new insight is given for the physics of internal transport barrier. The thermal fluctuation of the scale length of λ_D is assumed to be statistically independent. The hierarchical structure is constructed according to the scale lengths. Transitions in turbulence are found and phase diagrams with cusp type catastrophe are obtained. Dynamics is followed. Statistical properties of the subcritical excitation are discussed. The probability density function (PDF) and transition probability are obtained. Power-laws are obtained in the PDF as well as in the transition probability. Generalization for the case where turbulence is composed of three classes of modes is also developed. A new catastrophe of turbulent states is obtained. (author)
Transition in multiple-scale-lengths turbulence in plasmas
International Nuclear Information System (INIS)
Itoh, S.-I.; Yagi, M.; Kawasaki, M.; Kitazawa, A.
2002-02-01
The statistical theory of strong turbulence in inhomogeneous plasmas is developed for the cases where fluctuations with different scale-lengths coexist. Statistical nonlinear interactions between semi-micro and micro modes are first kept in the analysis as the drag, noise and drive. The nonlinear dynamics determines both the fluctuation levels and the cross field turbulent transport for the fixed global parameters. A quenching or suppressing effect is induced by their nonlinear interplay, even if both modes are unstable when analyzed independently. Influence of the inhomogeneous global radial electric field is discussed. A new insight is given for the physics of internal transport barrier. The thermal fluctuation of the scale length of λ_D is assumed to be statistically independent. The hierarchical structure is constructed according to the scale lengths. Transitions in turbulence are found and phase diagrams with cusp type catastrophe are obtained. Dynamics is followed. Statistical properties of the subcritical excitation are discussed. The probability density function (PDF) and transition probability are obtained. Power-laws are obtained in the PDF as well as in the transition probability. Generalization for the case where turbulence is composed of three classes of modes is also developed. A new catastrophe of turbulent states is obtained. (author)
A model for AGN variability on multiple time-scales
Sartori, Lia F.; Schawinski, Kevin; Trakhtenbrot, Benny; Caplar, Neven; Treister, Ezequiel; Koss, Michael J.; Urry, C. Megan; Zhang, C. E.
2018-05-01
We present a framework to link and describe active galactic nuclei (AGN) variability on a wide range of time-scales, from days to billions of years. In particular, we concentrate on the AGN variability features related to changes in black hole fuelling and accretion rate. In our framework, the variability features observed in different AGN at different time-scales may be explained as realisations of the same underlying statistical properties. In this context, we propose a model to simulate the evolution of AGN light curves with time based on the probability density function (PDF) and power spectral density (PSD) of the Eddington ratio (L/L_Edd) distribution. Motivated by general galaxy population properties, we propose that the PDF may be inspired by the L/L_Edd distribution function (ERDF), and that a single (or limited number of) ERDF+PSD set may explain all observed variability features. After outlining the framework and the model, we compile a set of variability measurements in terms of structure function (SF) and magnitude difference. We then combine the variability measurements on a SF plot ranging from days to Gyr. The proposed framework enables constraints on the underlying PSD and the ability to link AGN variability on different time-scales, therefore providing new insights into AGN variability and black hole growth phenomena.
Controlling Barium Sulphate Scale Deposition Problems in an unbleached Kraft Paper Mill
CSIR Research Space (South Africa)
Sithole, Bruce
2015-06-01
Troubleshooting of scale deposits and defects in paper samples showed that the problem was caused by barium sulphate and calcium sulphate scales. However, it was ascertained that barium sulphate was more of a concern than calcium sulphate...
Michael S. Mitchell; Scott H. Rutzmoser; T. Bently Wigley; Craig Loehle; John A. Gerwin; Patrick D. Keyser; Richard A. Lancia; Roger W. Perry; Christopher L. Reynolds; Ronald E. Thill; Robert Weih; Don White; Petra Bohall Wood
2006-01-01
Little is known about factors that structure biodiversity on landscape scales, yet current land management protocols, such as forest certification programs, place an increasing emphasis on managing for sustainable biodiversity at landscape scales. We used a replicated landscape study to evaluate relationships between forest structure and avian diversity at both stand...
Airfoil optimization for noise emission problem on small scale turbines
Energy Technology Data Exchange (ETDEWEB)
Gocmen, Tuhfe; Ozerdem, Baris [Mechanical Engineering Department, Izmir Institute of Technology (Turkey)
2011-07-01
Wind power is a preferred natural resource and has had benefits for the energy industry and for the environment all over the world. However, noise emission from wind turbines is becoming a major concern today. This study focuses on small scale wind turbines close to urban areas and proposes an optimum set of six airfoils to address noise emission concerns and performance criteria. The optimization process aimed to decrease noise emission levels and enhance the aerodynamic performance of a small scale wind turbine. This study determined the sources and the operating conditions of broadband noise emissions. A new design is presented which enhances aerodynamic performance and at the same time reduces airfoil self-noise. It used popular aerodynamic functions and codes based on aero-acoustic empirical models. Through numerical computations and analyses, it is possible to derive useful improvements that can be made to commercial airfoils for small scale wind turbines.
Large-Scale Data for Multiple-View Stereopsis
DEFF Research Database (Denmark)
Aanæs, Henrik; Jensen, Rasmus Ramsbøl; Vogiatzis, George
2016-01-01
The seminal multiple-view stereo benchmark evaluations from Middlebury and by Strecha et al. have played a major role in propelling the development of multi-view stereopsis (MVS) methodology. The somewhat small size and variability of these data sets, however, limit their scope and the conclusions...... that can be derived from them. To facilitate further development within MVS, we here present a new and varied data set consisting of 80 scenes, seen from 49 or 64 accurate camera positions. This is accompanied by accurate structured light scans for reference and evaluation. In addition all images are taken...... under seven different lighting conditions. As a benchmark and to validate the use of our data set for obtaining reasonable and statistically significant findings about MVS, we have applied the three state-of-the-art MVS algorithms by Campbell et al., Furukawa et al., and Tola et al. to the data set...
Solution of Constrained Optimal Control Problems Using Multiple Shooting and ESDIRK Methods
DEFF Research Database (Denmark)
Capolei, Andrea; Jørgensen, John Bagterp
2012-01-01
As we consider stiff systems, implicit solvers with sensitivity computation capabilities for initial value problems must be used in the multiple shooting algorithm. Traditionally, multi-step methods based on the BDF algorithm have been used for such problems. The main novel contribution of this paper is the use of ESDIRK integration methods for solution of the initial value problems and the corresponding sensitivity equations arising in the multiple shooting algorithm. Compared to BDF-methods, ESDIRK-methods are advantageous in multiple shooting algorithms in which restarts and frequent discontinuities on each shooting interval are present. The ESDIRK methods are implemented using an inexact Newton method that reuses the factorization of the iteration matrix for the integration as well as the sensitivity computation. Numerical experiments are provided to demonstrate the algorithm.
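The defining feature of an ESDIRK method is an explicit first stage followed by implicit stages that share one diagonal coefficient, so a single Jacobian factorization can be reused across stages. As a minimal sketch (assumed for illustration; the paper uses higher-order ESDIRK schemes embedded in multiple shooting), here is the implicit trapezoidal rule written as a two-stage ESDIRK with diagonal coefficient 1/2, the implicit stage solved by scalar Newton:

```python
def esdirk_step(f, dfdy, t, y, h):
    """One ESDIRK step for a scalar ODE y' = f(t, y): explicit first
    stage k1, implicit second stage k2 with diagonal coefficient 1/2
    (the implicit trapezoidal rule), solved by Newton iteration."""
    k1 = f(t, y)                        # explicit first stage
    k2 = k1                             # Newton initial guess
    for _ in range(20):
        arg = y + 0.5 * h * (k1 + k2)
        g = k2 - f(t + h, arg)          # stage residual
        k2 -= g / (1.0 - 0.5 * h * dfdy(t + h, arg))
        if abs(g) < 1e-12:
            break
    return y + 0.5 * h * (k1 + k2)

# Stiff-flavoured test problem y' = -y with exact solution exp(-t).
y, t, h = 1.0, 0.0, 0.05
while t < 1.0 - 1e-12:
    y = esdirk_step(lambda t, y: -y, lambda t, y: -1.0, t, y, h)
    t += h
# y is now close to exp(-1) ~ 0.3679
```

In a multiple shooting setting the same step function would also propagate sensitivities, reusing the factored iteration matrix, which is the efficiency point the abstract makes.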
Finding Multiple Optimal Solutions to Optimal Load Distribution Problem in Hydropower Plant
Directory of Open Access Journals (Sweden)
Xinhao Jiang
2012-05-01
Optimal load distribution (OLD) among generator units of a hydropower plant is a vital task for hydropower generation scheduling and management. Traditional optimization methods for solving this problem focus on finding a single optimal solution. However, many practical constraints on hydropower plant operation are very difficult, if not impossible, to model, and the optimal solution found by those models might be of limited practical use. This motivates us to find multiple optimal solutions to the OLD problem, which can provide more flexible choices for decision-making. Based on a special dynamic programming model, we use a modified shortest path algorithm to produce multiple solutions to the problem. It is shown that multiple optimal solutions exist for the case study of China’s Geheyan hydropower plant, and they are valuable for assessing the stability of generator units, showing the potential of reducing occurrence times of units across vibration areas.
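Enumerating multiple optimal solutions with a shortest-path dynamic program can be sketched as follows; the graph, node names, and costs are illustrative stand-ins, not the paper's plant model:

```python
from functools import lru_cache

def all_shortest_paths(graph, src, dst):
    """Enumerate every minimum-cost path in a DAG by dynamic
    programming: compute the best cost-to-go from each node, then
    walk only the arcs that lie on some optimal path.
    graph: node -> {neighbor: arc cost}."""
    @lru_cache(maxsize=None)
    def best(u):
        if u == dst:
            return 0
        return min((c + best(v) for v, c in graph[u].items()),
                   default=float("inf"))

    def walk(u, path):
        if u == dst:
            yield path
            return
        for v, c in graph[u].items():
            if c + best(v) == best(u):   # arc lies on an optimal path
                yield from walk(v, path + [v])

    return best(src), list(walk(src, [src]))

# Toy allocation graph with two equally cheap routes from s to t.
g = {"s": {"a": 1, "b": 2}, "a": {"t": 2}, "b": {"t": 1}, "t": {}}
cost, paths = all_shortest_paths(g, "s", "t")
print(cost, paths)  # -> 3 [['s', 'a', 't'], ['s', 'b', 't']]
```

Returning the whole set of ties, rather than an arbitrary single optimum, is what gives the decision maker the flexibility the abstract describes.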
Scaling of chaotic multiplicity: A new observation in high-energy interactions
International Nuclear Information System (INIS)
Ghosh, D.; Ghosh, P.; Roy, J.
1990-01-01
We analyze high-energy-interaction data to study the dependence of chaotic multiplicity on the pseudorapidity window and propose a new scaling function Ψ̄(z̄) = ⟨n₁⟩/⟨n⟩_max, where ⟨n₁⟩ is the chaotic multiplicity and z̄ = ⟨n⟩/⟨n⟩_max is the reduced multiplicity, following the quantum-optical concept of particle production. It has been observed that the proposed "chaotic multiplicity scaling" is obeyed by pp, p̄p, and AA collisions at different available energies
A Multiple Period Problem in Distributed Energy Management Systems Considering CO2 Emissions
Muroda, Yuki; Miyamoto, Toshiyuki; Mori, Kazuyuki; Kitamura, Shoichi; Yamamoto, Takaya
Consider a special district (group) which is composed of multiple companies (agents), where each agent responds to an energy demand and has a CO2 emission allowance imposed. A distributed energy management system (DEMS) optimizes the energy consumption of a group through energy trading within the group. In this paper, we extended the energy distribution decision and optimal planning problem in DEMSs from a single-period problem to a multiple-period one. The extension enabled us to consider more realistic constraints such as demand patterns, start-up costs, and minimum running/outage times of equipment. First, we extended the market-oriented programming (MOP) method for deciding energy distribution to the multiple-period problem. The bidding strategy of each agent is formulated as a 0-1 mixed non-linear programming problem. Second, we proposed decomposing the problem into a set of single-period problems in order to solve it faster. To decompose the problem, we proposed a CO2 emission allowance distribution method, called the EP method. We confirmed by computational experiments that the proposed method was able to produce solutions whose group costs were close to lower-bound group costs. In addition, we verified that a reduction in computational time was achieved by the EP method without losing solution quality.
Newton Methods for Large Scale Problems in Machine Learning
Hansen, Samantha Leigh
2014-01-01
The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…
A Two-Dimensional Helmholtz Equation Solution for the Multiple Cavity Scattering Problem
2013-02-01
obtained by using the block Gauss–Seidel iterative method. To show the convergence of the iterative method, we define the error between two... models to the general multiple cavity setting. Numerical examples indicate that the convergence of the Gauss–Seidel iterative method depends on the... variational approach. A block Gauss–Seidel iterative method is introduced to solve the coupled system of the multiple cavity scattering problem, where
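The record above is a fragmentary search snippet, but the core tool it names, a (block) Gauss–Seidel iteration, can be sketched in scalar form; the cavity-coupling version would replace the scalar divisions with local block solves. This toy version assumes a small, diagonally dominant dense system:

```python
def gauss_seidel(A, b, iters=100):
    """Plain Gauss-Seidel sweeps for Ax = b (toy dense version).

    Each sweep updates x[i] using the latest values of the other
    unknowns. A block variant, as in the multiple-cavity coupling,
    applies the same sweep with matrix blocks and local solves in
    place of the scalar divisions below. Converges for diagonally
    dominant A; no convergence check is done here.
    """
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```

For A = [[4, 1], [1, 3]], b = [1, 2] the exact solution is (1/11, 7/11), which the sweeps approach quickly.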
PATTERN CLASSIFICATION APPROACHES TO MATCHING BUILDING POLYGONS AT MULTIPLE SCALES
Directory of Open Access Journals (Sweden)
X. Zhang
2012-07-01
Full Text Available Matching of building polygons with different levels of detail is crucial in the maintenance and quality assessment of multi-representation databases. Two general problems need to be addressed in the matching process: (1) Which criteria are suitable? (2) How can different criteria be effectively combined to make decisions? This paper mainly focuses on the second issue and views data matching as a supervised pattern classification problem. Several classifiers (i.e., decision trees, Naive Bayes and support vector machines) are evaluated for the matching task. Four criteria (i.e., position, size, shape and orientation) are used to extract information for these classifiers. Evidence shows that these classifiers outperformed the weighted average approach.
Neural Computations in a Dynamical System with Multiple Time Scales
Directory of Open Access Journals (Sweden)
Yuanyuan Mi
2016-09-01
Full Text Available Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at single neurons, and short-term facilitation (STF) and depression (STD) at neuronal synapses. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what the computational benefit is for the brain to have such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use a continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in their dynamics. Three computational tasks are considered: persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.
Data-Driven Approach for Analyzing Hydrogeology and Groundwater Quality Across Multiple Scales.
Curtis, Zachary K; Li, Shu-Guang; Liao, Hua-Sheng; Lusch, David
2017-08-29
Recent trends of assimilating water well records into statewide databases provide a new opportunity for evaluating spatial dynamics of groundwater quality and quantity. However, these datasets are rarely analyzed rigorously to address larger scientific problems because they are massive and of relatively low quality. We develop an approach for utilizing well databases to analyze physical and geochemical aspects of groundwater systems, and apply it to a multiscale investigation of the sources and dynamics of chloride (Cl⁻) in the near-surface groundwater of the Lower Peninsula of Michigan. Nearly 500,000 static water levels (SWLs) were critically evaluated, extracted, and analyzed to delineate long-term, average groundwater flow patterns using a nonstationary kriging technique at the basin scale (i.e., across the entire peninsula). Two regions identified as major basin-scale discharge zones, the Michigan and Saginaw Lowlands, were further analyzed with regional- and local-scale SWL models. Groundwater valleys ("discharge" zones) and mounds ("recharge" zones) were identified for all models, and the proportions of wells with elevated Cl⁻ concentrations in each zone were calculated, visualized, and compared. Concentrations in discharge zones, where groundwater is expected to flow primarily upwards, are consistently and significantly higher than those in recharge zones. A synoptic sampling campaign in the Michigan Lowlands revealed that concentrations generally increase with depth, a trend noted in previous studies of the Saginaw Lowlands. These strong, consistent SWL and Cl⁻ distribution patterns across multiple scales suggest that a deep source (i.e., Michigan brines) is the primary cause of the elevated chloride concentrations observed in discharge areas across the peninsula. © 2017, National Ground Water Association.
Clutter-free Visualization of Large Point Symbols at Multiple Scales by Offset Quadtrees
Directory of Open Access Journals (Sweden)
ZHANG Xiang
2016-08-01
Full Text Available To address the cartographic problems in map mash-up applications in the Web 2.0 context, this paper studies a clutter-free technique for visualizing large point symbols on Web maps. Basically, a quadtree is used to select one symbol in each grid cell at each zoom level. To resolve symbol overlaps between neighboring quad-grids, multiple offsets are applied to the quadtree, and a voting strategy is used to compute the significance level of symbols for their selection at multiple scales. The method is able to resolve spatial conflicts without explicit conflict detection, thus enabling highly efficient processing. The resulting map also forms a visual hierarchy of semantic importance. We discuss issues such as relative importance, the symbol-to-grid size ratio, and effective offset schemes, and propose two extensions to make better use of the free space available on the map. Experiments were carried out to validate the technique, which demonstrates its robustness and efficiency (a non-optimized implementation achieves sub-second processing for datasets on the order of 10⁵ symbols).
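As a rough illustration of the offset-plus-voting idea described above (the grid and scoring details here are assumptions, not the paper's exact algorithm):

```python
from collections import defaultdict

def significance_by_offset_voting(symbols, cell, offsets):
    """Vote-based significance, a sketch of the offset-quadtree idea.

    `symbols` is a list of (x, y, importance) tuples. For each grid
    offset, the most important symbol in each cell "wins" that cell;
    a symbol's significance is the number of offsets under which it
    wins. Symbols that win under every offset are safe to show even
    near grid boundaries, resolving overlaps between neighboring cells
    without explicit pairwise conflict detection.
    """
    votes = defaultdict(int)
    for ox, oy in offsets:
        winners = {}  # cell key -> index of most important symbol
        for i, (x, y, imp) in enumerate(symbols):
            key = (int((x + ox) // cell), int((y + oy) // cell))
            if key not in winners or imp > symbols[winners[key]][2]:
                winners[key] = i
        for i in winners.values():
            votes[i] += 1
    return dict(votes)
```

A symbol beaten in one offset's cell may still collect votes under another offset, which is what makes the combined score smoother than a single-grid selection.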
Time scales and the problem of radioactive waste
International Nuclear Information System (INIS)
Goble, R.L.
1984-01-01
The author argues that decisions about future nuclear development can be made essentially independent of waste management considerations for the next 20 years. His arguments are based on five propositions: (1) risks and costs of storing spent fuel or high-level waste and transuranics are lower than other directly comparable risks and costs of operating a reactor; (2) storage of mill tailings is the most serious long-term waste problem, but it is not serious enough to rule out the use of nuclear power; (3) there are compelling reasons for beginning to implement a waste management program now; (4) it is important to separate the problem of providing temporary storage from that of finding permanent repositories; (5) a prudent waste management strategy will, by 2000, have identified and evaluated more than enough repository space for the waste generated by that time, independent of the decision made about nuclear futures. 13 references, 4 figures, 4 tables
Solving the Single-Sink, Fixed-Charge, Multiple-Choice Transportation Problem by Dynamic Programming
DEFF Research Database (Denmark)
Rauff Lind Christensen, Tue; Klose, Andreas; Andersen, Kim Allan
The Single-Sink, Fixed-Charge, Multiple-Choice Transportation Problem (SSFCMCTP) is a problem with versatile applications. This problem is a generalization of the Single-Sink, Fixed-Charge Transportation Problem (SSFCTP), which has a fixed-charge, linear cost structure. However, in at least two important aspects of supplier selection, an important application of the SSFCTP, this does not reflect the real-life situation. First, transportation costs faced by many companies are in fact piecewise linear. Secondly, when suppliers offer discounts, either incremental or all-unit discounts, such savings are neglected in the SSFCTP. The SSFCMCTP overcomes this problem by incorporating a staircase cost structure in the cost function instead of the usual one used in the SSFCTP. We present a dynamic programming algorithm for the resulting problem. To enhance the performance of the generic algorithm a number...
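The staircase cost structure mentioned above lends itself to a simple quantity-state dynamic program. The sketch below is a generic illustration under assumed toy data, not the authors' enhanced algorithm:

```python
def ssfcmctp_dp(suppliers, demand):
    """DP sketch for a staircase-cost, single-sink transportation problem.

    `suppliers` is a list of staircase cost functions, each given as a
    list of (quantity, cost) options: one "choice" per step of the
    staircase, including (0, 0) for not using that supplier. The DP
    state is the quantity supplied so far; excess supply beyond the
    demand is allowed and simply capped. Returns the minimum cost of
    covering `demand`.
    """
    INF = float("inf")
    dp = [0.0] + [INF] * demand
    for options in suppliers:
        new = [INF] * (demand + 1)
        for q in range(demand + 1):
            if dp[q] == INF:
                continue
            for qty, cost in options:
                t = min(demand, q + qty)  # cap: oversupply is harmless
                new[t] = min(new[t], dp[q] + cost)
        dp = new
    return dp[demand]
```

With two suppliers offering an all-unit discount shape, e.g. [(0, 0), (5, 4)] and [(0, 0), (3, 2), (6, 3)], a demand of 6 is covered most cheaply by the second supplier's 6-unit step at cost 3.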
Solving Minimal Covering Location Problems with Single and Multiple Node Coverage
Directory of Open Access Journals (Sweden)
Darko DRAKULIĆ
2016-12-01
Full Text Available Location science represents a very attractive research field in combinatorial optimization and has been expanding over the last five decades. The main objective of location problems is determining the best position for facilities in a given set of nodes. Location science includes techniques for modelling problems and methods for solving them. This paper presents results of solving two types of minimal covering location problems, with single and multiple node coverage, by using the CPLEX optimizer and the Particle Swarm Optimization method.
Multiple-scale stochastic processes: Decimation, averaging and beyond
Energy Technology Data Exchange (ETDEWEB)
Bo, Stefano, E-mail: stefano.bo@nordita.org [Nordita, KTH Royal Institute of Technology and Stockholm University, Roslagstullsbacken 23, SE-106 91 Stockholm (Sweden); Celani, Antonio [Quantitative Life Sciences, The Abdus Salam International Centre for Theoretical Physics (ICTP), Strada Costiera 11, I-34151 - Trieste (Italy)
2017-02-07
Recent experimental progress in handling microscopic systems has made it possible to probe them at levels where fluctuations are prominent, calling for stochastic modeling of a large number of physical, chemical and biological phenomena. This has provided fruitful applications for established stochastic methods and motivated further developments. These systems often involve processes taking place on widely separated time scales. For efficient modeling one usually focuses on the slower degrees of freedom, and it is of great importance to accurately eliminate the fast variables in a controlled fashion, carefully accounting for their net effect on the slower dynamics. This procedure in general requires performing two different operations: decimation and coarse-graining. We introduce the asymptotic methods that form the basis of this procedure and discuss their application to a series of physical, biological and chemical examples. We then turn our attention to functionals of the stochastic trajectories, such as residence times, counting statistics, fluxes and entropy production, which have been increasingly studied in recent years. For such functionals, the elimination of the fast degrees of freedom can present additional difficulties, and naive procedures can lead to blatantly inconsistent results. Homogenization techniques for functionals are less covered in the literature and we will pedagogically present them here, as natural extensions of the ones employed for the trajectories. We will also discuss recent applications of these techniques to the thermodynamics of small systems and their interpretation in terms of information-theoretic concepts.
We characterized regional patterns of the tidal channel benthic diatom community and examined the relative importance of local wetland and surrounding landscape level factors measured at multiple scales in structuring this assemblage. Surrounding land cover was characterized at ...
A study of multiplicity scaling of particles produced in 16O-nucleus collisions
International Nuclear Information System (INIS)
Ahmad, N.
2015-01-01
Koba-Nielsen-Olesen (KNO) scaling has been a dominant framework for studying the behaviour of multiplicity distributions of charged particles produced in high-energy hadronic collisions. Several workers have made attempts to investigate multiplicity distributions of particles produced in hadron-hadron (h-h), hadron-nucleus (h-A) and nucleus-nucleus (A-A) collisions at relativistic energies. Multiplicity distributions in p-nucleus interactions in emulsion experiments are found to be consistent with KNO scaling. The applicability of the scaling of multiplicities was extended to FNAL energies by earlier workers. Slattery has shown that KNO scaling is in agreement with the data on pp interactions over a wide range of energies
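KNO scaling asserts that ⟨n⟩P_n depends only on z = n/⟨n⟩, so distributions measured at different energies collapse onto one curve Ψ(z). A small numerical illustration, using a geometric multiplicity distribution purely as a stand-in (for which the scaled curves collapse toward exp(−z) at large mean); this is not the emulsion data analyzed above:

```python
import math

def kno_scaled(mean, z):
    """<n> * P_n evaluated at n = z * <n> for a geometric distribution.

    The geometric distribution P_n = (1 - p) p^n with p = mean/(1 + mean)
    has <n> = mean. If KNO scaling holds, this scaled quantity depends on
    z alone; here it tends to exp(-z) as the mean grows, so curves for
    different "energies" (means) collapse onto one function Psi(z).
    """
    p = mean / (1.0 + mean)
    n = round(z * mean)
    return mean * (1.0 - p) * p ** n
```

Evaluating at z = 1 for means 50 and 200 gives nearly identical values, both close to exp(−1) ≈ 0.368, which is the collapse KNO scaling predicts.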
Directory of Open Access Journals (Sweden)
Evan T. Curtis
2016-08-01
Full Text Available Eye-tracking methods have only rarely been used to examine the online cognitive processing that occurs during mental arithmetic on simple arithmetic problems, that is, addition and multiplication problems with single-digit operands (e.g., operands 2 through 9; 2 + 3, 6 × 8) and the inverse subtraction and division problems (e.g., 5 − 3; 48 ÷ 6). Participants (N = 109) solved arithmetic problems from one of the four operations while their eye movements were recorded. We found three unique fixation patterns. During addition and multiplication, participants allocated half of their fixations to the operator and one-quarter to each operand, independent of problem size. The pattern was similar on small subtraction and division problems. However, on large subtraction problems, fixations were distributed approximately evenly across the three stimulus components. On large division problems, over half of the fixations occurred on the left operand, with the rest distributed between the operation sign and the right operand. We discuss the relations between these eye-tracking patterns and other research on the differences in processing across arithmetic operations.
Step by Step: Biology Undergraduates' Problem-Solving Procedures during Multiple-Choice Assessment.
Prevost, Luanna B; Lemons, Paula P
2016-01-01
This study uses the theoretical framework of domain-specific problem solving to explore the procedures students use to solve multiple-choice problems about biology concepts. We designed several multiple-choice problems and administered them on four exams. We trained students to produce written descriptions of how they solved the problem, and this allowed us to systematically investigate their problem-solving procedures. We identified a range of procedures and organized them as domain general, domain specific, or hybrid. We also identified domain-general and domain-specific errors made by students during problem solving. We found that students use domain-general and hybrid procedures more frequently when solving lower-order problems than higher-order problems, while they use domain-specific procedures more frequently when solving higher-order problems. Additionally, the more domain-specific procedures students used, the higher the likelihood that they would answer the problem correctly, up to five procedures. However, if students used just one domain-general procedure, they were as likely to answer the problem correctly as if they had used two to five domain-general procedures. Our findings provide a categorization scheme and framework for additional research on biology problem solving and suggest several important implications for researchers and instructors. © 2016 L. B. Prevost and P. P. Lemons. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Energy Technology Data Exchange (ETDEWEB)
Kleinsmith, P E [Carnegie-Mellon Univ., Pittsburgh, Pa. (USA)
1976-04-01
Multiple spatial scaling is incorporated in a modified form of the Bogoliubov plasma cluster expansion; then this proposed reformulation of the plasma weak-coupling approximation is used to derive, from the BBGKY Hierarchy, a decoupled set of equations for the one-and two-particle distribution functions in the limit as the plasma parameter goes to zero. Because the reformulated cluster expansion permits retention of essential two-particle collisional information in the limiting equations, while simultaneously retaining the well-established Debye-scale relative ordering of the correlation functions, decoupling of the Hierarchy is accomplished without introduction of the divergence problems encountered in the Bogoliubov theory, as is indicated by an exact solution of the limiting equations for the equilibrium case. To establish additional links with existing plasma equilibrium theories, the two-particle equilibrium correlation function is used to calculate the interaction energy and the equation of state. The limiting equation for the equilibrium three-particle correlation function is then developed, and a formal solution is obtained.
Multiple positive solutions for second order impulsive boundary value problems in Banach spaces
Directory of Open Access Journals (Sweden)
Zhi-Wei Lv
2010-06-01
Full Text Available By means of the fixed point index theory of strict set contraction operators, we establish new existence theorems on multiple positive solutions to a boundary value problem for second-order impulsive integro-differential equations with integral boundary conditions in a Banach space. Moreover, an application is given to illustrate the main result.
Exact Solutions to the Double Travelling Salesman Problem with Multiple Stacks
DEFF Research Database (Denmark)
Petersen, Hanne L.; Archetti, Claudia; Speranza, M. Grazia
2010-01-01
In this paper we present mathematical programming formulations and solution approaches for the optimal solution of the Double Travelling Salesman Problem with Multiple Stacks (DTSPMS). A set of orders is given, each one requiring transportation of one item from a customer in a pickup region...
Strong convergence of an extragradient-type algorithm for the multiple-sets split equality problem.
Zhao, Ying; Shi, Luoyi
2017-01-01
This paper introduces a new extragradient-type method to solve the multiple-sets split equality problem (MSSEP). Under some suitable conditions, the strong convergence of an algorithm can be verified in the infinite-dimensional Hilbert spaces. Moreover, several numerical results are given to show the effectiveness of our algorithm.
Alimovic, S.
2013-01-01
Background: Children with multiple impairments have more complex developmental problems than children with a single impairment. Method: We compared children, aged 4 to 11 years, with intellectual disability (ID) and visual impairment to children with single ID, single visual impairment and typical development on "Child Behavior Check…
Modeling Group Perceptions Using Stochastic Simulation: Scaling Issues in the Multiplicative AHP
DEFF Research Database (Denmark)
Barfod, Michael Bruhn; van den Honert, Robin; Salling, Kim Bang
2016-01-01
This paper proposes a new decision support approach for applying stochastic simulation to the multiplicative analytic hierarchy process (AHP) in order to deal with issues concerning the scale parameter. The paper suggests a new approach that captures the influence from the scale parameter by maki...
Patterns of disturbance at multiple scales in real and simulated landscapes
Giovanni Zurlini; Kurt H. Riitters; Nicola Zaccarelli; Irene Petrosoillo
2007-01-01
We describe a framework to characterize and interpret the spatial patterns of disturbances at multiple scales in socio-ecological systems. Domains of scale are defined in pattern metric space and mapped in geographic space, which can help to understand how anthropogenic disturbances might impact biodiversity through habitat modification. The approach identifies typical...
Institute of Scientific and Technical Information of China (English)
Feng Junwen
2006-01-01
To overcome the limitations of the traditional surrogate worth trade-off (SWT) method and solve the multiple criteria decision making problem more efficiently and interactively, a new method labeled dual worth trade-off (DWT) method is proposed. The DWT method dynamically uses the duality theory related to the multiple criteria decision making problem and analytic hierarchy process technique to obtain the decision maker's solution preference information and finally find the satisfactory compromise solution of the decision maker. Through the interactive process between the analyst and the decision maker, trade-off information is solicited and treated properly, the representative subset of efficient solutions and the satisfactory solution to the problem are found. The implementation procedure for the DWT method is presented. The effectiveness and applicability of the DWT method are shown by a practical case study in the field of production scheduling.
On the multiple depots vehicle routing problem with heterogeneous fleet capacity and velocity
Hanum, F.; Hartono, A. P.; Bakhtiar, T.
2018-03-01
This manuscript concerns the optimization problem arising in route determination for product distribution. The problem is formulated as a multiple-depot, time-windowed vehicle routing problem with heterogeneous fleet capacity and velocity. The model includes a number of constraints such as route continuity, multiple-depot availability and serving time, in addition to generic constraints. In dealing with the unique feature of heterogeneous velocity, we generate a number of velocity profiles along the road segments, which are then converted into traveling-time tables. An illustrative example of rice distribution among villages by a bureau of logistics is provided. An exact approach is utilized to determine the optimal solution in terms of vehicle routes and starting times of service.
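The conversion of heterogeneous velocity profiles into traveling-time tables described above can be sketched directly; the segment lengths and speeds below are invented toy values, not the rice-distribution data:

```python
def travel_time(segments, profile):
    """Sum per-segment travel times for one arc.

    `segments` are road-segment lengths (km) along the arc; `profile`
    gives the (hypothetical) speed of one vehicle on each segment (km/h).
    """
    return sum(d / v for d, v in zip(segments, profile))

def time_table(routes, profiles):
    """Build a traveling-time table keyed by (origin, destination, vehicle).

    `routes[(i, j)]` lists the segment lengths of arc (i, j); `profiles[k]`
    is vehicle k's speed profile. Converting heterogeneous velocities into
    times up front lets a standard VRP model work with a time table only.
    """
    return {(i, j, k): travel_time(segs, prof)
            for (i, j), segs in routes.items()
            for k, prof in enumerate(profiles)}
```

For a single two-segment arc of 30 km each, a vehicle at 60 km/h on both segments needs 1.0 h, while a slower vehicle at 30 then 60 km/h needs 1.5 h; the table records both.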
Scale problems in assessment of hydrogeological parameters of groundwater flow models
Nawalany, Marek; Sinicyn, Grzegorz
2015-09-01
An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity, being a brief summary of the conventional upscaling approach with some attention paid to recently emerged approaches. The focus is on essential aspects which may be an advantage in comparison to the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system and (iv) continuity/granularity of natural and man-related variables of groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale - scale of pores, meso-scale - scale of laboratory sample, macro-scale - scale of typical blocks in numerical models of groundwater flow, local-scale - scale of an aquifer/aquitard and regional-scale - scale of series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim to justify physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. Also the latest results in downscaling from block-scale to sample scale are briefly referred to.
Directory of Open Access Journals (Sweden)
Omar Abu Arqub
2014-01-01
Full Text Available The purpose of this paper is to present a new kind of analytical method, the so-called residual power series method, to predict and represent the multiplicity of solutions to nonlinear boundary value problems of fractional order. The present method is capable of calculating all branches of solutions simultaneously, even if these multiple solutions are very close and thus rather difficult to distinguish even by numerical techniques. To verify the computational efficiency of the proposed technique, two nonlinear models are solved, one arising in mixed convection flows and the other in heat transfer, both of which admit multiple solutions. The results reveal that the method is very effective, straightforward, and powerful for formulating these multiple solutions.
Solution accelerators for large scale 3D electromagnetic inverse problems
International Nuclear Information System (INIS)
Newman, Gregory A.; Boggs, Paul T.
2004-01-01
We provide a framework for preconditioning nonlinear 3D electromagnetic inverse scattering problems using nonlinear conjugate gradient (NLCG) and limited memory (LM) quasi-Newton methods. Key to our approach is the use of an approximate adjoint method that allows for an economical approximation of the Hessian that is updated at each inversion iteration. Using this approximate Hessian as a preconditioner, we show that the preconditioned NLCG iteration converges significantly faster than the non-preconditioned iteration, as well as converging to a data misfit level below that observed for the non-preconditioned method. Similar conclusions are also observed for the LM iteration; preconditioned with the approximate Hessian, the LM iteration converges faster than the non-preconditioned version. At this time, however, we see little difference between the convergence performance of the preconditioned LM scheme and the preconditioned NLCG scheme. A possible reason for this outcome is the behavior of the line search within the LM iteration. It was anticipated that, near convergence, a step size of one would be approached, but what was observed, instead, were step lengths that were nowhere near one. We provide some insights into the reasons for this behavior and suggest further research that may improve the performance of the LM methods.
Operational tools to build a multicriteria territorial risk scale with multiple stakeholders
International Nuclear Information System (INIS)
Cailloux, Olivier; Mayag, Brice; Meyer, Patrick; Mousseau, Vincent
2013-01-01
Evaluating and comparing the threats and vulnerabilities associated with territorial zones according to multiple criteria (industrial activity, population, etc.) can be a time-consuming task and often requires the participation of several stakeholders. Rather than a direct evaluation of these zones, building a risk assessment scale and using it in a formal procedure makes it possible to automate the assessment, and therefore to apply it repeatedly and in large-scale contexts and, provided the chosen procedure and scale are accepted, to make it objective. One of the main difficulties of building such a formal evaluation procedure is to account for the preferences of multiple decision makers. The procedure used in this article, ELECTRE TRI, uses the performances of each territorial zone on multiple criteria, together with preferential parameters from multiple decision makers, to qualitatively assess their associated risk level. We also present operational tools for implementing such a procedure in practice, and show their use on a detailed example.
Directory of Open Access Journals (Sweden)
Theo J.H.M. Eggen
2010-01-01
Full Text Available Overexposure and underexposure of items in the bank are serious problems in operational computerized adaptive testing (CAT) systems. These exposure problems may result in item compromise, or point to a waste of investments. The exposure control problem can be viewed as a test assembly problem with multiple objectives: information in the test has to be maximized, item compromise has to be minimized, and pool usage has to be optimized. In this paper, a multiple objectives method is developed to deal with both types of exposure problems. In this method, exposure control parameters based on observed exposure rates are implemented as weights for the information in the item selection procedure. The method does not need time-consuming simulation studies, and it can be implemented conditional on ability level. The method is compared with the Sympson-Hetter method for exposure control, with the Progressive method and with alpha-stratified testing. The results show that the method is successful in dealing with both kinds of exposure problems.
Biopolitics problems of large-scale hydraulic engineering construction
International Nuclear Information System (INIS)
Romanenko, V.D.
1997-01-01
The 20th century, which will enter history as a century of large-scale hydraulic engineering construction, is coming to a close. On the European continent alone, 517 large reservoirs (detaining more than 1000 million m³ of water) were constructed between 1901 and 1985. In the Danube basin, numerous reservoirs, power stations, navigation sluices and other hydraulic engineering structures have been constructed; among them, more than 40 especially large objects are located along the main bed of the river. A number of hydro-complexes, such as Dnieper-Danube and Gabcikovo, Danube-Oder-Labe (project), Danube-Tissa, Danube-Adriatic Sea (project), Danube-Aegean Sea and Danube-Black Sea, have entered operation or are at the design stage. Hydraulic engineering construction was particularly intensive in Ukraine. On its territory, several large reservoirs were constructed on the Dnieper and Yuzhny Bug, which have greatly changed the hydrological regime of these rivers. Summarising the results of river regulation in Ukraine, more than 27 thousand ponds (3 km³ per year), 1098 reservoirs with a total volume of 55 km³, and 11 large channels with a total length of more than 2000 km and a capacity of 1000 m³/s have been created. Hydraulic engineering construction played an important role in the development of industry and agriculture, in the water supply of cities and settlements, and in the maintenance of safe navigation on the Danube, Dnieper and other rivers, while also producing environmental effects. In the final part of the paper, the environmental changes in the Aral Sea region of Central Asia following the construction of the Karakum Canal are discussed
A branch-and-cut algorithm for the vehicle routing problem with multiple use of vehicles
Directory of Open Access Journals (Sweden)
İsmail Karaoğlan
2015-06-01
Full Text Available This paper addresses the vehicle routing problem with multiple use of vehicles (VRPMUV), an important variant of the classic vehicle routing problem (VRP). Unlike the classical VRP, vehicles are allowed to use more than one route in the VRPMUV. We propose a branch-and-cut algorithm for solving the VRPMUV. The proposed algorithm includes several valid inequalities from the literature for the purpose of improving its lower bounds, and a heuristic algorithm based on simulated annealing and a mixed integer programming-based intensification procedure for obtaining the upper bounds. The algorithm is evaluated on test problems derived from the literature. The computational results show that instances with up to 120 customers can be solved optimally in a reasonable amount of time.
Pattern-set generation algorithm for the one-dimensional multiple stock sizes cutting stock problem
Cui, Yaodong; Cui, Yi-Ping; Zhao, Zhigang
2015-09-01
A pattern-set generation algorithm (PSG) for the one-dimensional multiple stock sizes cutting stock problem (1DMSSCSP) is presented. The solution process contains two stages. In the first stage, the PSG solves the residual problems repeatedly to generate the patterns in the pattern set, where each residual problem is solved by the column-generation approach, and each pattern is generated by solving a single large object placement problem. In the second stage, the integer linear programming model of the 1DMSSCSP is solved using a commercial solver, where only the patterns in the pattern set are considered. The computational results of benchmark instances indicate that the PSG outperforms existing heuristic algorithms and rivals the exact algorithm in solution quality.
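The two-stage idea above can be miniaturised. The sketch below is not the authors' PSG: it replaces column generation and the ILP with a bounded-knapsack pattern generator and a greedy cover over residual demand, purely to illustrate how patterns for multiple stock sizes are generated against a residual problem. All data and names are invented.

```python
def best_pattern(stock_len, lengths, demand):
    """Bounded knapsack: maximise the used length within one stock piece.
    Returns (used_length, counts), counts[i] pieces of lengths[i] being cut."""
    dp = {c: (0, (0,) * len(lengths)) for c in range(stock_len + 1)}
    for i, piece in enumerate(lengths):
        for _ in range(demand[i]):           # one 0/1 pass per allowed copy
            for cap in range(stock_len, piece - 1, -1):
                used, counts = dp[cap - piece]
                if counts[i] < demand[i] and used + piece > dp[cap][0]:
                    new = list(counts)
                    new[i] += 1
                    dp[cap] = (used + piece, tuple(new))
    return dp[stock_len]

def cut_all(stock_sizes, lengths, demand):
    """Greedily cover demand, each time cutting the stock size whose best
    pattern against the residual demand wastes the least material."""
    demand = list(demand)
    plan = []
    while any(demand):
        cands = [(L,) + best_pattern(L, lengths, demand) for L in stock_sizes]
        L, used, counts = max(cands, key=lambda t: t[1] / t[0])
        if used == 0:
            break                            # nothing fits any stock size
        plan.append((L, counts))
        demand = [d - c for d, c in zip(demand, counts)]
    return plan
```

Each loop iteration is a tiny "residual problem": patterns are generated only against the demand still uncovered, which is the structural idea the first stage of the PSG exploits (there with a column-generation pricing step instead of this greedy rule).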
A novel multi-item joint replenishment problem considering multiple type discounts.
Directory of Open Access Journals (Sweden)
Ligang Cui
Full Text Available In business replenishment, discount offers for multiple items may either provide different discount schedules with a single discount type, or provide schedules with multiple discount types. The paper investigates the joint effects of multiple discount schemes on the decisions of multi-item joint replenishment. In this paper, a joint replenishment problem (JRP) model considering three discount offers (all-unit discount, incremental discount, and total volume discount) simultaneously is constructed to determine the basic cycle time and the joint replenishment frequencies of the items. To solve the proposed problem, a heuristic algorithm is developed to find the optimal solutions and the corresponding total cost of the JRP model. Numerical experiments are performed to test the algorithm, and the computational results of JRPs under different discount combinations show significant differences in replenishment cost reduction.
[The suffering of professionals working at home with families with multiple problems].
Lamour, Martine; Barraco-De Pinto, Marthe
2015-01-01
The management of families with multiple problems often adversely affects the many people involved in their case. This suffering at work affects particularly professionals carrying out home visits. Acknowledging this suffering, enabling these professionals to express and give meaning to their feelings is essential in order to enable them to draw on their skills and creativity. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Scaling in multiplicity distributions of heavy, black and grey prongs in nuclear emulsions
International Nuclear Information System (INIS)
Nieminen, M.; Torsti, J.J.; Valtonen, E.
1979-01-01
The validity of the Koba-Nielsen-Olesen scaling hypothesis was examined in the case of heavy, black, and grey prongs in proton-emulsion collisions ('heavy' means 'either black or grey'). The average multiplicities of these prongs were computed in the region 0.1-400 GeV for the nuclei C, N, O, S, Br, Ag, and I. After the inclusion of the energy-dependent excitation probability of the nuclei, of the form P* = b₀ + b₁ ln E₀, into the model, experimental multiplicity distributions in the energy region 6-300 GeV agreed satisfactorily with the scaling hypothesis. The ratio of the dispersion D (D = √(<n²> - <n>²)) to the average multiplicity in the scaling functions of heavy, black, and grey prongs was estimated to be 0.86, 0.84, and 1.04, respectively, in the high energy region. (Auth.)
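The quantities this test rests on are simple to compute. As an illustrative sketch (with a made-up distribution, not emulsion data), the mean multiplicity <n>, the dispersion D = √(<n²> - <n>²), and the KNO-scaled points z = n/<n>, ψ = <n>·P_n are:

```python
def kno(pn):
    """pn: dict mapping multiplicity n -> probability P_n.
    Returns (<n>, D, [(z, psi)]) with z = n/<n> and psi = <n> * P_n."""
    norm = sum(pn.values())
    probs = {n: p / norm for n, p in pn.items()}      # enforce sum P_n = 1
    mean = sum(n * p for n, p in probs.items())
    mean_sq = sum(n * n * p for n, p in probs.items())
    disp = (mean_sq - mean * mean) ** 0.5             # D = sqrt(<n^2> - <n>^2)
    scaled = [(n / mean, mean * p) for n, p in sorted(probs.items())]
    return mean, disp, scaled
```

If KNO scaling holds, the (z, ψ) points from distributions at different energies collapse onto one curve, and D/<n> is energy-independent, which is the ratio the abstract estimates for each prong type.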
Scaling Professional Problems of Teachers in Turkey with Paired Comparison Method
Directory of Open Access Journals (Sweden)
Yasemin Duygu ESEN
2017-03-01
Full Text Available In this study, teachers’ professional problems were investigated and their significance levels were measured with the paired comparison method. The study was carried out using a survey model. The study group consisted of 484 teachers working in public schools accredited by the Ministry of National Education (MEB) in Turkey. “The Teacher Professional Problems Survey”, developed by the researchers, was used as the data collection tool. In the data analysis, the scaling method based on the third conditional equation of Thurstone’s law of comparative judgement was used. According to the results of the study, the teachers’ professional problems include teacher training and teacher quality, employee rights and financial problems, decline of professional reputation, problems with MEB policies, problems with union activities, workload, problems with school administration, physical conditions and lack of infrastructure, problems with parents, and problems with students. According to the teachers, the most significant problem is MEB educational policies. This is followed by decline of professional reputation, physical conditions and lack of infrastructure, problems with students, employee rights and financial problems, problems with school administration, teacher training and teacher quality, problems with parents, workload, and problems with union activities. When the teachers’ professional problems were analyzed by the seniority variable, there was little difference in scale values. While teachers with 0-10 years of experience consider decline of professional reputation the most important problem, teachers with 11-45 years of experience put problems with MEB policies in first place.
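Paired-comparison scaling of this kind can be sketched compactly. The study uses the third conditional equation of Thurstone's law of comparative judgement; the simpler textbook Case V variant is shown here purely as an illustration, with an invented proportion matrix p in which p[i][j] is the share of judges rating problem j as more serious than problem i.

```python
from statistics import NormalDist

def thurstone_case_v(p):
    """Thurstone Case V: unit-normal transform of the proportion matrix,
    scale value of item j = mean of column j, shifted so the minimum is 0."""
    nd = NormalDist()
    k = len(p)
    # z_ij = Phi^{-1}(p_ij); an item against itself contributes 0
    z = [[nd.inv_cdf(p[i][j]) if i != j else 0.0 for j in range(k)]
         for i in range(k)]
    raw = [sum(z[i][j] for i in range(k)) / k for j in range(k)]
    low = min(raw)
    return [r - low for r in raw]
```

For proportions generated from true scale values via the normal model, the procedure recovers those values up to the arbitrary origin, which is why the output is anchored at zero.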
Scaling of multiplicity distribution in hadron collisions and diffractive-excitation like models
International Nuclear Information System (INIS)
Buras, A.J.; Dethlefsen, J.M.; Koba, Z.
1974-01-01
Multiplicity distribution of secondary particles in inelastic hadron collision at high energy is studied in the semiclassical impact parameter representation. The scaling function is shown to consist of two factors: one geometrical and the other dynamical. We propose a specific choice of these factors, which describe satisfactorily the elastic scattering, the ratio of elastic to total cross-section and the simple scaling behaviour of multiplicity distribution in p-p collisions. Two versions of diffractive-excitation like models (global and local excitation) are presented as interpretation of our choice of dynamical factor. (author)
Yanti, Y. R.; Amin, S. M.; Sulaiman, R.
2018-01-01
This study described the representations of students with musical, logical-mathematical and naturalist intelligence in solving a problem. Subjects were selected on the basis of a multiple intelligences test consisting of 108 statements, with 102 statements adopted from Chislett and Chapman and 6 statements relating to existential intelligence. Data were collected through a problem-solving test (TPM) and interviews. To check the validity of the data, the problem-solving test and interviews were administered twice, and the results were analyzed using representation indicators and the problem-solving steps. The results showed that at the stages of presenting known information, devising a plan, and carrying out the plan, the three subjects used the same form of representation, while at the stages of presenting the information asked for and looking back, the subject with logical-mathematical intelligence used different forms of representation from the subjects with musical and naturalist intelligence. This research is expected to provide input for teachers in determining learning strategies, taking into account the representations of students with different dominant multiple intelligences.
Psidium guajava: A Single Plant for Multiple Health Problems of Rural Indian Population.
Daswani, Poonam G; Gholkar, Manasi S; Birdi, Tannaz J
2017-01-01
The rural population in India faces a number of health problems and often has to rely on local remedies. Psidium guajava Linn. (guava), a tropical plant used as food and medicine, can serve rural communities owing to its several medicinal properties. A literature search was undertaken to gauge the rural health scenario in India and compile the available literature on guava so as to reflect its usage in the treatment of multiple health conditions prevalent in rural communities. Towards this, electronic databases such as PubMed, Science Direct and Google Scholar were scanned. Information on clinical trials on guava was obtained from the Cochrane Central Register of Controlled Trials and ClinicalTrials.gov. The literature survey revealed that guava possesses various medicinal properties which have been reported from across the globe in the form of ethnobotanical/ethnopharmacological surveys, laboratory investigations and clinical trials. Besides documenting the safety of guava, the available literature shows that guava is efficacious against the following conditions which rural communities commonly encounter: (a) gastrointestinal infections; (b) malaria; (c) respiratory infections; (d) oral/dental infections; (e) skin infections; (f) diabetes; (g) cardiovascular problems/hypertension; (h) cancer; (i) malnutrition; (j) women's health problems; (k) pain; (l) fever; (m) liver problems; (n) kidney problems. In addition, guava can also be useful in the treatment of animals and can be explored for commercial applications. In conclusion, popularization of guava can have multiple applications for rural communities.
Solving the Single-Sink, Fixed-Charge, Multiple-Choice Transportation Problem by Dynamic Programming
DEFF Research Database (Denmark)
Christensen, Tue; Andersen, Kim Allan; Klose, Andreas
2013-01-01
This paper considers a minimum-cost network flow problem in a bipartite graph with a single sink. The transportation costs exhibit a staircase cost structure because such types of transportation cost functions are often found in practice. We present a dynamic programming algorithm for solving this so-called single-sink, fixed-charge, multiple-choice transportation problem exactly. The method exploits heuristics and lower bounds to peg binary variables, improve bounds on flow variables, and reduce the state-space variable. In this way, the dynamic programming method is able to solve large instances with up to 10,000 nodes and 10 different transportation modes in a few seconds, much less time than required by a widely used mixed-integer programming solver and other methods proposed in the literature for this problem.
High scale parity invariance as a solution to the SUSY CP problem ...
Indian Academy of Sciences (India)
scale SUSY left-right model provides a solution to the CP problems of the MSSM. A minimal version of this .... the renormalizable seesaw model so that R-parity conservation remains automatic. Pramana – J. Phys., Vol ... from the Planck scale to v_R in the squark sector is to split the third generation squarks slightly from the first two ...
A Framing Link Based Tabu Search Algorithm for Large-Scale Multidepot Vehicle Routing Problems
Directory of Open Access Journals (Sweden)
Xuhao Zhang
2014-01-01
Full Text Available A framing link (FL) based tabu search algorithm is proposed in this paper for the large-scale multidepot vehicle routing problem (LSMDVRP). Framing links are generated during successive optimization of current solutions and then taken as skeletons so as to improve the optimum-seeking ability, speed up the optimization process, and obtain better results. Based on the comparison between pre- and post-mutation routes in the current solution, different parts are extracted. In the current optimization period, links involved in the optimal solution are regarded as candidates for the FL base. Multiple optimization periods exist in the whole algorithm, and there are several potential FLs in each period. If the update condition is satisfied, the FL base is updated, new FLs are added into the current route, and the next period starts. Through adjusting the borderline of the multidepot sharing area with dynamic parameters, the authors define candidate selection principles for three kinds of customer connections, respectively. Link split and the roulette approach are employed to choose FLs. 18 LSMDVRP instances in three groups are studied and new optimal solution values for nine of them are obtained, with higher computation speed and reliability.
Linking Fine-Scale Observations and Model Output with Imagery at Multiple Scales
Sadler, J.; Walthall, C. L.
2014-12-01
The development and implementation of a system for seasonal worldwide agricultural yield estimates is underway with the international Group on Earth Observations GeoGLAM project. GeoGLAM includes a research component to continually improve and validate its algorithms. There is a history of field measurement campaigns going back decades to draw upon for ways of linking surface measurements and model results with satellite observations. Ground-based, in-situ measurements collected by interdisciplinary teams include yields, model inputs and factors affecting scene radiation. Data that are comparable across space and time, with careful attention to calibration, are essential for the development and validation of agricultural applications of remote sensing. Data management to ensure stewardship, availability and accessibility of the data is best accomplished when considered an integral part of the research. Field measurement campaigns can be cost-prohibitive and logistically challenging, and because of short funding cycles for research, access to consistent, stable study sites can be lost. The use of dedicated staff for baseline data needed by multiple investigators, and the conduct of measurement campaigns using existing measurement networks such as the USDA Long Term Agroecosystem Research network, can fulfill these needs and ensure long-term access to study sites.
Rocchini, Duccio
2009-01-01
Measuring heterogeneity in satellite imagery is an important task. Most measures of spectral diversity have been based on Shannon information theory. However, this approach does not inherently address different scales, ranging from local (hereafter referred to as alpha diversity) to global scales (gamma diversity). The aim of this paper is to propose a method for measuring spectral heterogeneity at multiple scales based on rarefaction curves. An algorithmic solution of rarefaction applied to image pixel values (Digital Numbers, DNs) is provided and discussed. PMID:22389600
Small-scale fluctuations in the microwave background radiation and multiple gravitational lensing
International Nuclear Information System (INIS)
Kashlinsky, A.
1988-01-01
It is shown that multiple gravitational lensing of the microwave background radiation (MBR) by static compact objects significantly attenuates small-scale fluctuations in the MBR. Gravitational lensing, by altering trajectories of MBR photons reaching an observer, leads to (phase) mixing of photons from regions with different initial fluctuations. As a result of this diffusion process the original fluctuations are damped on scales up to several arcmin. An equation that describes this process and its general solution are given. It is concluded that the present upper limits on the amplitude of the MBR fluctuations on small scales cannot constrain theories of galaxy formation. 25 references
Optimization of Multiple Traveling Salesman Problem Based on Simulated Annealing Genetic Algorithm
Directory of Open Access Journals (Sweden)
Xu Mingji
2017-01-01
Full Text Available It is very effective to solve multi-variable optimization problems using a hierarchical genetic algorithm. This thesis analyzes both the advantages and disadvantages of the hierarchical genetic algorithm and puts forward an improved simulated annealing genetic algorithm. The new algorithm is applied to solve the multiple traveling salesman problem and improves the quality of the solutions. First, it improves the design of the hierarchical chromosome structure of the redundant hierarchical algorithm and suggests a suffix design for chromosomes. Second, to address the premature convergence problems of genetic algorithms, it proposes a self-identifying crossover operator and mutation. Third, to counter the weak local search ability of genetic algorithms, it stretches the fitness by combining the genetic algorithm with a simulated annealing algorithm. Fourth, it simulates problems with N traveling salesmen and M cities to verify feasibility. The simulations and calculations show that the improved algorithm quickly converges to a good global solution, which makes it encouraging for practical use.
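The simulated annealing component can be illustrated on its own. The sketch below is a deliberately simplified single-salesman version (plain SA with 2-opt reversal moves, followed by a deterministic 2-opt descent), not the paper's hybrid hierarchical GA; all parameters and names are invented.

```python
import math, random

def tour_len(cities, tour):
    """Total length of the closed tour through the given city indices."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal(cities, t0=1.0, cooling=0.999, iters=5000, seed=1):
    """SA with 2-opt reversal moves, then a final deterministic 2-opt polish."""
    rng = random.Random(seed)
    n = len(cities)
    tour = list(range(n))
    rng.shuffle(tour)
    cur_len, t = tour_len(cities, tour), t0
    best, best_len = tour[:], cur_len
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        delta = tour_len(cities, cand) - cur_len
        # accept improvements always, uphill moves with Boltzmann probability
        if delta < 0 or rng.random() < math.exp(-delta / max(t, 1e-12)):
            tour, cur_len = cand, cur_len + delta
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        t *= cooling
    # deterministic 2-opt descent: removes any remaining edge crossings
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 1, n):
                cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                cand_len = tour_len(cities, cand)
                if cand_len < best_len - 1e-12:
                    best, best_len, improved = cand, cand_len, True
    return best, tour_len(cities, best)
```

For cities placed on a circle the optimal tour is the ring order, so the combination of SA plus a crossing-free 2-opt local optimum recovers the global optimum, a convenient sanity check for this kind of heuristic.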
Multiple scales and singular limits for compressible rotating fluids with general initial data
Czech Academy of Sciences Publication Activity Database
Feireisl, Eduard; Novotný, A.
2014-01-01
Roč. 39, č. 6 (2014), s. 1104-1127 ISSN 0360-5302 Keywords : compressible Navier-Stokes equations * multiple scales * oscillatory integrals Subject RIV: BA - General Mathematics Impact factor: 1.013, year: 2014 http://www.tandfonline.com/doi/full/10.1080/03605302.2013.856917
Non-Abelian Kubo formula and the multiple time-scale method
International Nuclear Information System (INIS)
Zhang, X.; Li, J.
1996-01-01
The non-Abelian Kubo formula is derived from the kinetic theory. That expression is compared with the one obtained using the eikonal for a Chern–Simons theory. The multiple time-scale method is used to study the non-Abelian Kubo formula, and the damping rate for longitudinal color waves is computed. copyright 1996 Academic Press, Inc.
Haili, Hasnawati; Maknun, Johar; Siahaan, Parsaoran
2017-08-01
Physics is a subject related to students' daily experience. Therefore, before studying formally in class, students already have visualizations and prior knowledge of natural phenomena, and can widen these themselves. The learning process in class should aim to detect, process, construct, and use students' mental models, so that students' mental models are built on the right concepts. A previous study held in MAN 1 Muna indicates that teachers did not pay attention to students' mental models in the learning process. As a consequence, the learning process did not try to build students' mental modelling ability (MMA). The purpose of this study is to describe the improvement of students' MMA as an effect of a problem-solving based learning model with a multiple representations approach. This study used a pre-experimental, one-group pretest-posttest design. It was conducted in class XI IPA of MAN 1 Muna in 2016/2017. Data collection used a problem-solving test on the kinetic theory of gases and interviews to assess students' MMA. The result of this study is a classification of students' MMA into three categories: High Mental Modelling Ability (H-MMA) for scores x > 7, Medium Mental Modelling Ability (M-MMA) for 3 < x ≤ 7, and Low Mental Modelling Ability (L-MMA) for 0 ≤ x ≤ 3. The results show that a problem-solving based learning model with a multiple representations approach can be an alternative to be applied in improving students' MMA.
Safety from physical viewpoint: ''two-risk model in multiple risk problem''
International Nuclear Information System (INIS)
Kuz'Min, I.I.; Akimov, V.A.
1998-01-01
Full text of publication follows: the problem of safety provision for people and the environment within a certain socio-economic system (SES) is discussed as a problem of managing a large number of interacting risks characterizing the numerous hazards (natural, man-made, social, economic, etc.) inherent in that SES. From the physical point of view, it can be considered a problem of the interaction of many bodies, which has no exact mathematical solution even if the laws of interaction of the bodies are known. In physics, this problem is approached by reducing it to the problem of two-body interaction, which can be solved exactly. The report presents a similar approach to the problem of risk management in the SES. This approach subdivides the numerous hazards inherent in the SES into two classes, so that each class can be considered an integrated whole characterized by an appropriate risk. Consequently, the problem of 'multiple-risk' management (i.e. the many-body problem, as represented in physics) can be reduced to the 'two-risk' management problem (that is, the two-body problem). Within the framework of the two-risk model, the optimization of the costs of reducing the two kinds of risk is described: the risk inherent in the SES as a whole, and the risk potentially provoked by the many activities introduced into the SES economy. The model has made it possible to formulate and prove a theorem of equilibrium in risk management. Using the theorem, a relatively simple and practically applicable procedure for optimizing the threshold costs of reducing diverse kinds of risk has been elaborated. The procedure makes it possible to assess the minimum cost achievable given the socio-economic factors typical of the SES under discussion.
Grolet, Aurelien; Thouverez, Fabrice
2015-02-01
This paper is devoted to the study of the vibration of mechanical systems with geometric nonlinearities. The harmonic balance method is used to derive systems of polynomial equations whose solutions give the frequency components of the possible steady states. Groebner basis methods are used for computing all solutions of the polynomial systems. This approach makes it possible to reduce the complete system to a unique polynomial equation in one variable that drives all solutions of the problem. In addition, in order to decrease the number of variables, we propose to first work on the undamped system, and recover solutions of the damped system using a continuation on the damping parameter. The search for multiple solutions is illustrated on a simple system, where the influence of the number of retained harmonics is studied. Finally, the procedure is applied to a simple cyclic system and we give a representation of the multiple states versus frequency.
Directory of Open Access Journals (Sweden)
Pingping Chi
2013-03-01
Full Text Available The interval neutrosophic set (INS) makes it easier to express incomplete, indeterminate and inconsistent information, and TOPSIS is one of the most commonly used and effective methods for multiple attribute decision making; in general, however, it can only process attribute values given as crisp numbers. In this paper, we extend TOPSIS to INSs, and with respect to multiple attribute decision making problems in which the attribute weights are unknown and the attribute values take the form of INSs, we propose an extended TOPSIS method. Firstly, the definition of the INS and its operational laws are given, and the distance between INSs is defined. Then, the attribute weights are determined based on the maximizing deviation method, and an extended TOPSIS method is developed to rank the alternatives. Finally, an illustrative example is given to verify the developed approach and to demonstrate its practicality and effectiveness.
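For readers unfamiliar with the baseline method, classical crisp TOPSIS can be sketched in a few lines; the paper's contribution is the extension to interval neutrosophic numbers and the maximizing-deviation weights, neither of which is shown here. The matrix and weights below are invented, and all criteria are treated as benefit criteria.

```python
def topsis(matrix, weights):
    """Crisp TOPSIS. matrix[i][j]: value of alternative i on benefit
    criterion j. Returns the relative closeness of each alternative."""
    m, n = len(matrix), len(matrix[0])
    # vector normalisation, then weighting
    norms = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    ideal = [max(col) for col in zip(*v)]   # positive ideal solution
    anti = [min(col) for col in zip(*v)]    # negative ideal solution

    def dist(row, ref):
        return sum((a - b) ** 2 for a, b in zip(row, ref)) ** 0.5

    # closeness = d(anti) / (d(anti) + d(ideal)); higher is better
    return [dist(v[i], anti) / (dist(v[i], anti) + dist(v[i], ideal) + 1e-12)
            for i in range(m)]
```

An alternative that dominates on every criterion coincides with the ideal solution and receives closeness 1, which is the ranking logic the extended method reproduces over interval neutrosophic distances.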
Measuring floodplain spatial patterns using continuous surface metrics at multiple scales
Scown, Murray W.; Thoms, Martin C.; DeJager, Nathan R.
2015-01-01
Interactions between fluvial processes and floodplain ecosystems occur upon a floodplain surface that is often physically complex. Spatial patterns in floodplain topography have only recently been quantified over multiple scales, and discrepancies exist in how floodplain surfaces are perceived to be spatially organised. We measured spatial patterns in floodplain topography for pool 9 of the Upper Mississippi River, USA, using moving window analyses of eight surface metrics applied to a 1 × 1 m² DEM over multiple scales. The metrics used were Range, SD, Skewness, Kurtosis, CV, SDCURV, Rugosity, and Vol:Area, and window sizes ranged from 10 to 1000 m in radius. Surface metric values were highly variable across the floodplain and revealed a high degree of spatial organisation in floodplain topography. Moran's I correlograms fit to the landscape of each metric at each window size revealed that patchiness existed at nearly all window sizes, but the strength and scale of patchiness changed with window size, suggesting that multiple scales of patchiness and patch structure exist in the topography of this floodplain. Scale thresholds in the spatial patterns were observed, particularly between the 50 and 100 m window sizes for all surface metrics and between the 500 and 750 m window sizes for most metrics. These threshold scales are ~15–20% and 150% of the main channel width (1–2% and 10–15% of the floodplain width), respectively. These thresholds may be related to structuring processes operating across distinct scale ranges. By coupling surface metrics, multi-scale analyses, and correlograms, quantifying floodplain topographic complexity is possible in ways that should assist in clarifying how floodplain ecosystems are structured.
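The moving-window idea is simple to sketch. The toy version below computes just two of the eight metrics (Range and SD) in square windows on a small invented grid; the study itself used circular windows of 10–1000 m radius on a 1 × 1 m DEM.

```python
def window_metrics(dem, radius):
    """Range and standard deviation of elevation in a square moving window
    of the given radius (in cells), clipped at the grid edges."""
    rows, cols = len(dem), len(dem[0])
    rng_map, sd_map = [], []
    for r in range(rows):
        rng_row, sd_row = [], []
        for c in range(cols):
            vals = [dem[i][j]
                    for i in range(max(0, r - radius), min(rows, r + radius + 1))
                    for j in range(max(0, c - radius), min(cols, c + radius + 1))]
            mean = sum(vals) / len(vals)
            rng_row.append(max(vals) - min(vals))
            sd_row.append((sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5)
        rng_map.append(rng_row)
        sd_map.append(sd_row)
    return rng_map, sd_map
```

Running the same computation at several radii and comparing the resulting metric maps (e.g. via correlograms, as in the study) is what exposes scale thresholds in surface patchiness.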
A multiple-scaling method of the computation of threaded structures
International Nuclear Information System (INIS)
Andrieux, S.; Leger, A.
1989-01-01
The numerical computation of threaded structures usually leads to very large finite element problems, which makes it very difficult to carry out parametric studies, especially in non-linear cases involving plasticity or unilateral contact conditions. Nevertheless, such parametric studies are essential in many industrial problems, for instance for the evaluation of various repair processes for the closure studs of PWRs. It is well known that such repairs generally involve several modifications of the thread geometry, of the number of active threads, of the flange clamping conditions, and so on. This paper is devoted to the description of a two-scale method which easily allows parametric studies. The main idea of this method consists of dividing the problem into a global part and a local part. The local problem is solved by the finite element method on the precise geometry of the thread for some elementary loadings. The global problem is formulated at the gudgeon scale and is reduced to a one-dimensional one. The resolution of this global problem involves an insignificant computational cost. Then, a post-processing step gives the stress field at the thread scale anywhere in the assembly. After recalling some principles of the two-scale approach, the method is described. Validation by comparison with a direct finite element computation and some further applications are presented.
DEFF Research Database (Denmark)
Oervik, M. S.; Sejbaek, T.; Penner, I. K.
2017-01-01
Background: Our objective was to validate the Danish translation of the Fatigue Scale for Motor and Cognitive Functions (FSMC) in multiple sclerosis (MS) patients. Materials and methods: A Danish MS cohort (n = 84) was matched and compared to the original German validation cohort (n = 309). Results: Highly positive correlations between the two fatigue scales implied high convergent validity (total scores: r = 0.851). Correcting for depression did not result in any significant adjustments of the correlations.
Methods and scales in soil erosion studies in Spain: problems and perspectives
Energy Technology Data Exchange (ETDEWEB)
Garcia-Ruiz, J. M.
2009-07-01
Soil erosion is a major problem in some areas of Spain. Research groups have studied a variety of aspects of this problem in different environments, and at a range of scales, using a diversity of methods, from piquettes and rainfall simulation to experimental plots, catchments and large regional areas. This has increased knowledge and identified the main problems: farmland abandonment, badlands erosion, the effects of land use changes, and the role of extreme events and erosion in certain crops (particularly vineyards). However, comparison of results among the various research groups has been difficult, making it hard for State and Regional administrators to develop solutions. (Author) 73 refs.
Khoze, Valentin V.; Spannowsky, Michael
2018-01-01
We introduce and discuss two inter-related mechanisms operative in the electroweak sector of the Standard Model at high energies. Higgsplosion, the first mechanism, occurs at some critical energy in the 25 to 10³ TeV range, and leads to an exponentially growing decay rate of highly energetic particles into multiple Higgs bosons. We argue that this is a well-controlled non-perturbative phenomenon in the Higgs-sector which involves the final state Higgs multiplicities n in the regime nλ ≫ 1 where λ is the Higgs self-coupling. If this mechanism is realised in nature, the cross-sections for producing ultra-high multiplicities of Higgs bosons are likely to become observable and even dominant in this energy range. At the same time, however, the apparent exponential growth of these cross-sections at even higher energies will be tamed and automatically cut-off by a related Higgspersion mechanism. As a result, and in contrast to previous studies, multi-Higgs production does not violate perturbative unitarity. Building on this approach, we then argue that the effects of Higgsplosion alter quantum corrections from very heavy states to the Higgs boson mass. Above a certain energy, which is much smaller than their masses, these states would rapidly decay into multiple Higgs bosons. The heavy states become unrealised as they decay much faster than they are formed. The loop integrals contributing to the Higgs mass will be cut off not by the masses of the heavy states, but by the characteristic loop momenta where their decay widths become comparable to their masses. Hence, the cut-off scale would be many orders of magnitude lower than the heavy mass scales themselves, thus suppressing their quantum corrections to the Higgs boson mass.
Petersen, Isaac T; Lindhiem, Oliver; LeBeau, Brandon; Bates, John E; Pettit, Gregory S; Lansford, Jennifer E; Dodge, Kenneth A
2018-03-01
Manifestations of internalizing problems, such as specific symptoms of anxiety and depression, can change across development, even if individuals show strong continuity in rank-order levels of internalizing problems. This illustrates the concept of heterotypic continuity, and raises the question of whether common measures might be construct-valid for one age but not another. This study examines mean-level changes in internalizing problems across a long span of development at the same time as accounting for heterotypic continuity by using age-appropriate, changing measures. Internalizing problems from age 14-24 were studied longitudinally in a community sample (N = 585), using Achenbach's Youth Self-Report (YSR) and Young Adult Self-Report (YASR). Heterotypic continuity was evaluated with an item response theory (IRT) approach to vertical scaling, linking different measures over time to be on the same scale, as well as with a Thurstone scaling approach. With vertical scaling, internalizing problems peaked in mid-to-late adolescence and showed a group-level decrease from adolescence to early adulthood, a change that would not have been seen with the approach of using only age-common items. Individuals' trajectories were sometimes different than would have been seen with the common-items approach. Findings support the importance of considering heterotypic continuity when examining development and vertical scaling to account for heterotypic continuity with changing measures. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
International Nuclear Information System (INIS)
Biedenharn, L.C.; Lohe, M.A.; Louck, J.D.
1975-01-01
The multiplicity problem for tensor operators in U(3) has a unique (canonical) resolution which is utilized to effect the explicit construction of all U(3) Wigner and Racah coefficients. Methods are employed which elucidate the structure of the results; in particular, the significance of the denominator functions entering the structure of these coefficients, and the relation of these denominator functions to the null space of the canonical tensor operators. An interesting feature of the denominator functions is the appearance of new, group theoretical, polynomials exhibiting several remarkable and quite unexpected properties. (U.S.)
A multiple ship routing and speed optimization problem under time, cost and environmental objectives
DEFF Research Database (Denmark)
Wen, M.; Pacino, Dario; Kontovas, C.A.
2017-01-01
The purpose of this paper is to investigate a multiple ship routing and speed optimization problem under time, cost and environmental objectives. A branch-and-price algorithm as well as a constraint programming model are developed that consider (a) fuel consumption as a function of payload, (b) fuel price as an explicit input, (c) freight rate as an input, and (d) in-transit cargo inventory costs. The alternative objective functions are minimum total trip duration, minimum total cost and minimum emissions. Computational experience with the algorithm is reported on a variety of scenarios.
Double evolutional artificial bee colony algorithm for the multiple traveling salesman problem
Directory of Open Access Journals (Sweden)
Xue Ming Hao
2016-01-01
The double evolutional artificial bee colony algorithm (DEABC) is proposed for solving the single-depot multiple traveling salesman problem (MTSP). The proposed DEABC algorithm, which takes advantage of the strength of upgraded operators, is characterized by its guidance in exploitation search and its diversity in exploration search. The double evolutional process for exploitation search is composed of two phases of half-stochastic optimal search, and the diversity-generating operator for exploration search is applied to solutions that cannot be improved after a limited number of attempts. The computational results demonstrate the superiority of the algorithm over previous state-of-the-art methods.
Large neighborhood search for the double traveling salesman problem with multiple stacks
Energy Technology Data Exchange (ETDEWEB)
Bent, Russell W [Los Alamos National Laboratory; Van Hentenryck, Pascal [BROWN UNIV
2009-01-01
This paper considers a complex real-life short-haul/long-haul pickup and delivery application. The problem can be modeled as a double traveling salesman problem (TSP) in which the pickups and the deliveries happen in the first and second TSPs respectively. Moreover, the application features multiple stacks in which the items must be stored, and the pickups and deliveries must take place in reverse (LIFO) order for each stack. The goal is to minimize the total travel time while satisfying these constraints. This paper presents a large neighborhood search (LNS) algorithm which improves the best-known results on 65% of the available instances and is always within 2% of the best-known solutions.
Statistical theory and transition in multiple-scale-lengths turbulence in plasmas
Energy Technology Data Exchange (ETDEWEB)
Itoh, Sanae-I. [Research Institute for Applied Mechanics, Kyushu Univ., Kasuga, Fukuoka (Japan); Itoh, Kimitaka [National Inst. for Fusion Science, Toki, Gifu (Japan)
2001-06-01
The statistical theory of strong turbulence in inhomogeneous plasmas is developed for cases where fluctuations with different scale lengths coexist. Nonlinear interactions within the same class of fluctuations as well as the nonlinear interplay between different classes are kept in the analysis. Nonlinear interactions are modelled as turbulent drag, nonlinear noise and nonlinear drive, and a set of Langevin equations is formulated. With the help of an Ansatz of a large number of degrees of freedom with positive Lyapunov exponents, the Langevin equations are solved and the fluctuation-dissipation theorem in the presence of strong plasma turbulence is derived. A case where two driving mechanisms (one for a micro mode and the other for a semi-micro mode) coexist is investigated. It is found that there are several states of fluctuations: in one state, the micro mode is excited and the semi-micro mode is quenched; in the other state, the semi-micro mode is excited and the micro mode remains at a finite but suppressed level. A new type of turbulence transition is obtained, and a cusp-type catastrophe is revealed. A phase diagram is drawn for turbulence composed of multiple classes of fluctuations. The influence of the inhomogeneous global radial electric field is discussed, and new insight is given into the physics of the internal transport barrier. Finally, the nonlocal heat transport due to long-wavelength fluctuations, which are noise-pumped by shorter-wavelength ones, is analyzed and the impact on transient transport problems is discussed. (author)
On the solution of a few problems of multiple scattering by Monte Carlo method
International Nuclear Information System (INIS)
Bluet, J.C.
1966-02-01
Three problems of multiple scattering arising from neutron cross-section experiments are reported here. The common hypotheses are: elastic scattering is the only possible process; angular distributions are isotropic; losses of particle energy are negligible in successive collisions. In the three cases practical results, corresponding to actual experiments, are given. Moreover, the results are presented in a more general way, using dimensionless variables such as the ratio of geometrical dimensions to the neutron mean free path. The FORTRAN codes are given together with the corresponding flow charts and lexicons of symbols. First problem: measurement of the sodium capture cross-section. A sodium sample of given geometry is submitted to a neutron flux. Induced activity is then measured by means of a sodium iodide crystal. The distribution of active nuclei in the sample and the counter efficiency are calculated by the Monte Carlo method, taking multiple scattering into account. Second problem: absolute measurement of a neutron flux using a glass scintillator. The scintillator is a lithium-6 loaded glass, submitted to a neutron flux perpendicular to its plane faces. If the glass thickness is not negligible compared with the scattering mean free path λ, the mean path e' of neutrons in the glass differs from the thickness. Monte Carlo calculations are made to compute this path and the relative correction to efficiency, equal to (e' - e)/e. Third problem: study of a neutron collimator. A neutron detector is placed at the bottom of a cylinder surrounded with water. A neutron source is placed on the cylinder axis, in front of the water shield. The numbers of neutron tracks going directly and indirectly through the water from the source to the detector are counted. (author) [fr
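The second problem above (the mean path e' of neutrons in a glass slab of thickness e) is simple enough to sketch. Under the report's stated hypotheses (elastic scattering only, isotropic angular distributions, no energy loss), a minimal Monte Carlo estimate might look like the following; the slab thickness and mean free path values are illustrative, not taken from the experiment.

```python
import math
import random

def mean_path_in_slab(thickness, mfp, n_hist=100_000, seed=1):
    """Monte Carlo estimate of the mean path length travelled inside a slab
    by particles entering perpendicular to its faces, assuming purely
    elastic, isotropic scattering and no energy loss."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_hist):
        x, mu = 0.0, 1.0          # depth and direction cosine (normal incidence)
        path = 0.0
        while True:
            step = -mfp * math.log(1.0 - rng.random())  # exponential free flight
            # does the particle leave the slab during this flight?
            if mu > 0 and x + mu * step >= thickness:
                path += (thickness - x) / mu
                break
            if mu < 0 and x + mu * step <= 0.0:
                path += x / (-mu)
                break
            x += mu * step
            path += step
            mu = 2.0 * rng.random() - 1.0               # isotropic re-emission
        total += path
    return total / n_hist

e, lam = 0.5, 2.0   # slab thickness and scattering mean free path (same units, illustrative)
e_prime = mean_path_in_slab(e, lam)
print(f"e' = {e_prime:.3f}, relative efficiency correction (e'-e)/e = {(e_prime - e)/e:.3f}")
```

Scattering can only lengthen trajectories, so the estimate always satisfies e' > e, which is exactly why the correction (e' - e)/e is needed.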
Directory of Open Access Journals (Sweden)
Yu Zhou
2017-01-01
The train-set circulation plan problem (TCPP) belongs to the class of rolling stock scheduling (RSS) problems and is similar to the aircraft routing problem (ARP) in airline operations and the vehicle routing problem (VRP) in the logistics field. However, the TCPP involves additional complexity due to the maintenance constraint on train-sets: train-sets must undergo maintenance after running for a certain time and distance. The TCPP is NP-hard; no available algorithm can guarantee the globally optimal solution, and many factors, such as the utilization mode and the maintenance mode, affect the solution. This paper proposes a train-set circulation optimization model that minimizes the total connection time and maintenance costs, and describes the design of an efficient multiple-population genetic algorithm (MPGA) to solve this model. A realistic high-speed railway (HSR) case is selected to verify the model and algorithm, and a comparison of different algorithms is carried out. Furthermore, a new maintenance mode is proposed, and related implementation requirements are discussed.
A location-routing problem model with multiple periods and fuzzy demands
Directory of Open Access Journals (Sweden)
Ali Nadizadeh
2014-08-01
This paper puts forward a dynamic capacitated location-routing problem with fuzzy demands (DCLRP-FD). The input consists of a set of identical vehicles (each having a capacity, a fixed cost and an availability level), a set of depots with restricted capacities and opening costs, a set of customers with fuzzy demands, and a planning horizon with multiple periods. The problem consists of determining the depots to be opened (only in the first period of the planning horizon), assigning customers and vehicles to each opened depot, and constructing the routes, which may change in each time period due to the fuzzy demands. A fuzzy chance-constrained programming (FCCP) model is designed using credibility theory, and a hybrid heuristic algorithm with four phases is presented to solve the problem. To obtain the best values of the fuzzy parameters of the model and to show the influence of the vehicle availability level on the final solution, computational experiments are carried out. The validity of the model is then evaluated against the CLRP-FD models in the literature. The results indicate that the model and the proposed algorithm are robust and could be used in real-world problems.
Solving Large Scale Nonlinear Eigenvalue Problem in Next-Generation Accelerator Design
Energy Technology Data Exchange (ETDEWEB)
Liao, Ben-Shan; Bai, Zhaojun; /UC, Davis; Lee, Lie-Quan; Ko, Kwok; /SLAC
2006-09-28
A number of numerical methods, including inverse iteration, the method of successive linear problems and the nonlinear Arnoldi algorithm, are studied in this paper to solve a large-scale nonlinear eigenvalue problem arising from the finite element analysis of resonant frequencies and external Q_e values of a waveguide-loaded cavity in next-generation accelerator design. The authors present a nonlinear Rayleigh-Ritz iterative projection algorithm (NRRIT for short) and demonstrate that it is the most promising approach for a model-scale cavity design. The NRRIT algorithm is an extension of the nonlinear Arnoldi algorithm due to Voss. Computational challenges of solving such a nonlinear eigenvalue problem for a full-scale cavity design are outlined.
A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China
International Nuclear Information System (INIS)
Xu, Lilai; Gao, Peiqing; Cui, Shenghui; Liu, Chun
2013-01-01
Highlights: ► We propose a hybrid model that combines seasonal SARIMA model and grey system theory. ► The model is robust at multiple time scales with the anticipated accuracy. ► At month-scale, the SARIMA model shows good representation for monthly MSW generation. ► At medium-term time scale, grey relational analysis could yield the MSW generation. ► At long-term time scale, GM (1, 1) provides a basic scenario of MSW generation. - Abstract: Accurate forecasting of municipal solid waste (MSW) generation is crucial and fundamental for the planning, operation and optimization of any MSW management system. Comprehensive information on waste generation for month-scale, medium-term and long-term time scales is especially needed, considering the necessity of MSW management upgrade facing many developing countries. Several existing models are available but of little use in forecasting MSW generation at multiple time scales. The goal of this study is to propose a hybrid model that combines the seasonal autoregressive integrated moving average (SARIMA) model and grey system theory to forecast MSW generation at multiple time scales without needing to consider other variables such as demographics and socioeconomic factors. To demonstrate its applicability, a case study of Xiamen City, China was performed. Results show that the model is robust enough to fit and forecast seasonal and annual dynamics of MSW generation at month-scale, medium- and long-term time scales with the desired accuracy. In the month-scale, MSW generation in Xiamen City will peak at 132.2 thousand tonnes in July 2015 – 1.5 times the volume in July 2010. In the medium term, annual MSW generation will increase to 1518.1 thousand tonnes by 2015 at an average growth rate of 10%. In the long term, a large volume of MSW will be output annually and will increase to 2486.3 thousand tonnes by 2020 – 2.5 times the value for 2010. The hybrid model proposed in this paper can enable decision makers to
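The grey-system component of such a hybrid can be made concrete. Below is a minimal GM(1,1) sketch: it fits an exponential trend to the accumulated series, which is the kind of model the abstract assigns to the long-term scale. The input series and horizon are invented for illustration, not the Xiamen data.

```python
import numpy as np

def gm11_forecast(x0, n_ahead=3):
    """GM(1,1) grey forecasting: fit dx1/dt + a*x1 = b to the accumulated
    series x1 and return forecasts of the original series x0."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                         # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])              # background (mean) values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # least-squares fit of a, b
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time-response function
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])
    return x0_hat[len(x0):]                    # future values only

# purely illustrative annual totals (thousand tonnes), ~10% growth
history = [980, 1080, 1190, 1310, 1440]
print(gm11_forecast(history, n_ahead=2))
```

For a near-exponential series like this, GM(1,1) extrapolates the fitted growth rate, which is why the abstract reserves it for the long-term scenario while SARIMA handles the monthly seasonality.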
International Nuclear Information System (INIS)
Kong, Xiangyong; Gao, Liqun; Ouyang, Haibin; Li, Steven
2015-01-01
In most research on the redundancy allocation problem (RAP), the redundancy strategy for each subsystem is assumed to be predetermined and fixed. This paper focuses on a specific RAP with multiple strategy choices (RAP-MSC), in which either active redundancy or cold-standby redundancy can be selected as an additional decision variable for individual subsystems. The component type, redundancy strategy and redundancy level for each subsystem should be chosen, subject to the system constraints, such that the system reliability is maximized. Meanwhile, imperfect switching for cold-standby redundancy is considered, and a k-Erlang distribution is introduced to model the component time-to-failure. Given the importance and complexity of RAP-MSC, we propose a new, efficient simplified version of particle swarm optimization (SPSO) to solve such NP-hard problems. In this method, a new position-updating scheme without velocity is presented, with stochastic disturbance applied at a low probability. Moreover, it is compared with several well-known PSO variants and other state-of-the-art approaches in the literature to evaluate its performance. The experimental results demonstrate the superiority of SPSO as an alternative for solving the RAP-MSC. - Highlights: • A more realistic RAP form with multiple strategy choices is considered. • Redundancy strategies are selected rather than fixed, unlike in the general RAP. • A new simplified particle swarm optimization is proposed. • Higher reliabilities are achieved than with the state-of-the-art approaches.
Combining MCDA and risk analysis: dealing with scaling issues in the multiplicative AHP
DEFF Research Database (Denmark)
Barfod, Michael Bruhn; van den Honert, Rob; Salling, Kim Bang
This paper proposes a new decision support system (DSS) for applying risk analysis and stochastic simulation to the multiplicative AHP in order to deal with issues concerning the progression factors. The multiplicative AHP makes use of direct rating on a logarithmic scale; for this purpose the progression factor 2 is used for calculating scores of alternatives, and √2 for calculating criteria weights, when transforming the verbal judgments stemming from pairwise comparisons. However, depending on the decision context, the decision-maker's aversion towards risk, etc., it is most likely...
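The progression factors described above define a geometric judgment scale: an integer verbal grade δ maps to the numeric ratio progression**δ, with progression 2 for alternative scores and √2 for criteria weights. A minimal sketch; the grade labels and their integer values are illustrative, not taken from the paper.

```python
import math

# Illustrative mapping from verbal judgments to integer grades (delta).
GRADES = {"indifferent": 0, "weak": 2, "definite": 4, "strong": 6, "very strong": 8}

def ratio(judgment, progression=2.0):
    """Numeric ratio on the geometric scale: progression**delta.
    progression=2 for alternative scores, sqrt(2) for criteria weights."""
    return progression ** GRADES[judgment]

print(ratio("definite"))                    # alternatives: 2**4
print(ratio("definite", math.sqrt(2)))      # criteria weights: sqrt(2)**4
```

Because the scale is geometric, changing the progression factor rescales all ratios consistently, which is precisely the sensitivity the proposed DSS examines via stochastic simulation.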
Arancibia-Martini, Héctor; Ruiz, Miguel Á; Blanco, Amalio; Cárdenas, Manuel
2016-04-01
Given the current debate over the distinction between subtle and blatant prejudice, this study provides new evidence regarding problems with the construct validity of Pettigrew and Meertens' Blatant and Subtle Prejudice Scale. To assess these issues, an existing data sample of 896 Chilean participants collected in 2010 was reanalyzed. The main analysis method was confirmatory factor analysis. The model that best represented the original theory (a model of two correlated second-order factors) yielded an improper solution because the model was not identified. The scale has substantial psychometric problems, and it was not possible to distinguish between subtle and blatant prejudice. © The Author(s) 2016.
Small Scale Problems of the ΛCDM Model: A Short Review
Directory of Open Access Journals (Sweden)
Antonino Del Popolo
2017-02-01
The ΛCDM model, or concordance cosmology as it is often called, is a paradigm at its maturity. It is clearly able to describe the universe at large scales, even if some issues remain open, such as the cosmological constant problem, the small-scale problems in galaxy formation, and the unexplained anomalies in the CMB. ΛCDM clearly shows difficulty at small scales, which could be related to our scant understanding of the nature of dark matter or of gravity, or to the role of baryon physics, which is not well understood and implemented in simulation codes or in semi-analytic models. At this stage, it is of fundamental importance to understand whether the problems encountered by the ΛCDM model are a sign of its limits or a sign of our failure to get the finer details right. In the present paper, we review the small-scale problems of the ΛCDM model, discuss the proposed solutions, and assess to what extent they are able to give us a theory that accurately describes the phenomena over the complete range of scales of the observed universe.
Multiple time scale analysis of pressure oscillations in solid rocket motors
Ahmed, Waqas; Maqsood, Adnan; Riaz, Rizwan
2018-03-01
In this study, acoustic pressure oscillations for single and coupled longitudinal acoustic modes in a Solid Rocket Motor (SRM) are investigated using the Multiple Time Scales (MTS) method. Two independent time scales are introduced: the oscillations occur on the fast time scale, whereas the amplitude and phase change on the slow time scale. Hopf bifurcation is employed to investigate the properties of the solution, and the supercritical bifurcation phenomenon is observed for the linearly unstable system. The amplitude of the oscillations results from equal energy gain and loss rates of the longitudinal acoustic modes. The effects of linear instability and of the frequency of the longitudinal modes on the amplitude and phase of the oscillations are determined for both single and coupled modes. In both cases, the maximum amplitude of the oscillations decreases with the frequency of the acoustic mode and with the linear instability of the SRM. The comparison of analytical MTS results and numerical simulations demonstrates excellent agreement.
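The two-time-scale setup described above can be written schematically as follows; the symbols are generic, not the paper's notation. The fast time t carries the oscillation at the acoustic frequency ω, while the slow time τ = εt carries amplitude and phase; near a supercritical Hopf bifurcation the slow amplitude equation takes the standard normal form, whose equilibrium reflects the balance of energy gain and loss rates.

```latex
p'(t) \approx A(\tau)\,\cos\!\big(\omega t + \varphi(\tau)\big), \qquad \tau = \varepsilon t,
\qquad \frac{dA}{d\tau} = \sigma A - \kappa A^{3}, \qquad A_{\mathrm{eq}} = \sqrt{\sigma/\kappa}.
```

With σ, κ > 0, small disturbances grow (linear instability) until the cubic loss term balances the linear gain, giving the finite limit-cycle amplitude that the MTS analysis tracks.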
Flood statistics of simple and multiple scaling; Invarianza di scala del regime di piena
Energy Technology Data Exchange (ETDEWEB)
Rosso, Renzo; Mancini, Marco; Burlando, Paolo; De Michele, Carlo [Milan, Politecnico Univ. (Italy). DIIAR; Brath, Armando [Bologna, Univ. (Italy). DISTART
1996-09-01
The variability of flood probabilities throughout the river network is investigated by introducing the concepts of simple and multiple scaling. Flood statistics and quantiles as parametrized by drainage area are considered, and a distributed geomorphoclimatic model is used to analyze in detail their scaling properties for two river basins in Tyrrhenian Liguria (North-Western Italy). Although temporal storm precipitation and spatial runoff production are not scaling, the resulting flood flows do not display substantial deviations from statistical self-similarity, or simple scaling. This result has wide potential for assessing the concept of hydrological homogeneity, and it indicates a new route towards establishing physically based procedures for flood frequency regionalization.
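The simple-scaling (statistical self-similarity) concept used here can be illustrated numerically: under simple scaling, every flood quantile carries the same power of drainage area, so dimensionless ratios such as Q100/Q10 are independent of area. The exponent and index-site quantiles below are invented for illustration, not results from the Ligurian basins.

```python
# Hypothetical simple-scaling parameters: one exponent theta for all quantiles.
theta, A0 = 0.75, 100.0                      # scaling exponent, reference area (km^2)
qT_ref = {10: 120.0, 100: 210.0}             # reference quantiles Q_T(A0) (m^3/s), invented

def quantile(A, T):
    """Flood quantile at area A under simple scaling: Q_T(A) = (A/A0)**theta * Q_T(A0)."""
    return (A / A0) ** theta * qT_ref[T]

# Under simple scaling the growth-curve ratio Q_100/Q_10 is the same at every area.
for A in (50.0, 400.0):
    print(A, quantile(A, 100) / quantile(A, 10))
```

Multiple scaling would correspond to the exponent itself varying with the return period T, in which case the ratio above would drift with area; testing for that drift is how the two regimes are distinguished.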
Katul, Gabriel; Liu, Heping
2017-02-01
A large corpus of field and laboratory experiments supports the finding that the water-side transfer velocity k_L of sparingly soluble gases near air-water interfaces scales as k_L ∼ (νε)^(1/4), where ν is the kinematic water viscosity and ε is the mean turbulent kinetic energy dissipation rate. Originally predicted from surface renewal theory, this scaling appears to hold for marine and coastal systems and across many environmental conditions. It is shown that multiple approaches to representing the effects of turbulence on k_L lead to this expression when the Kolmogorov microscale is assumed to be the most efficient transporting eddy near the interface. The approaches considered range from simplified surface renewal schemes with distinct models for renewal durations, to scaling and dimensional considerations, to a new structure-function approach derived using analogies between scalar and momentum transfer. The work offers a new perspective as to why the aforementioned 1/4 scaling is robust.
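A numerical sketch of the small-eddy/surface-renewal form of this scaling follows. The Schmidt-number factor Sc^(-1/2) and the coefficient α ≈ 0.4 are common auxiliary assumptions in this family of models and are not taken from the abstract; only the (νε)^(1/4) dependence is.

```python
def transfer_velocity(eps, nu=1.0e-6, Sc=600.0, alpha=0.4):
    """Water-side transfer velocity (m/s) of a sparingly soluble gas:
    k_L = alpha * Sc**(-1/2) * (nu * eps)**(1/4).
    eps: TKE dissipation rate (W/kg); nu: kinematic viscosity (m^2/s);
    Sc: Schmidt number; alpha: empirical coefficient (assumed)."""
    return alpha * Sc ** -0.5 * (nu * eps) ** 0.25

# The 1/4 power makes k_L weakly sensitive to turbulence intensity:
k1 = transfer_velocity(1e-6)   # quiescent conditions
k2 = transfer_velocity(2e-6)   # doubled dissipation rate
print(k1, k2 / k1)             # the ratio is 2**0.25
```

That weak 1/4-power sensitivity is one reason the scaling holds across such a wide range of marine and coastal conditions.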
The Problem Behaviour Check List: a short scale to assess challenging behaviours
Tyrer, PJ; Nagar, J; Evans, R; Oliver, P; Bassett, P; Liedtka, N; Tarabi, A
2016-01-01
Background: Challenging behaviour, especially in intellectual disability, covers a wide range that is in need of further evaluation. Aims: To develop a short but comprehensive instrument for all aspects of challenging behaviour. Method: In the first part of a two-stage enquiry, a 28-item scale was constructed to examine the components of challenging behaviour. Following a simple factor analysis this was developed further to create a new short scale, the Problem Behaviour Checklist (PBCL). The sc...
Planck-scale physics and solutions to the strong CP-problem without axion
International Nuclear Information System (INIS)
Berezhiani, Z.G.; Mohapatra, R.N.; Senjanovic, G.
1992-12-01
We analyse the impact of quantum gravity on the possible solutions to the strong CP problem which utilize spontaneously broken discrete symmetries, such as parity and time-reversal invariance. We find that the stability of the solution under Planck-scale effects provides an upper limit on the scale Λ of the relevant symmetry breaking. This result is model dependent, and the bound is most restrictive for seesaw-type models of fermion masses, with Λ < 10⁶ GeV. (author). 32 refs
Ergul, Ozgur
2014-01-01
The Multilevel Fast Multipole Algorithm (MLFMA) for Solving Large-Scale Computational Electromagnetic Problems provides a detailed and instructional overview of implementing MLFMA. The book: presents a comprehensive treatment of the MLFMA algorithm, including basic linear algebra concepts, recent developments in parallel computation, and a number of application examples; covers solutions of electromagnetic problems involving dielectric objects and perfectly conducting objects; and discusses applications including scattering from airborne targets and scattering from red...
Schmengler, A. C.; Vlek, P. L. G.
2012-04-01
The study has shown that the use of multiple methods facilitates the calibration and validation of models and might provide a more accurate measure of soil erosion rates in ungauged catchments. Moreover, the approach could be used to identify the most appropriate working and operational scales for soil erosion modelling.
Dekorvin, Andre
1989-01-01
The main purpose is to develop a theory for multiple knowledge systems. A knowledge system could be a sensor or an expert system, but it must specialize in one feature. The problem is that there is an exhaustive list of possible answers to some query (such as what object it is). By collecting different feature values it should, in principle, be possible to answer the query, or at least narrow down the list. Since a sensor, or for that matter an expert system, does not in most cases yield a precise value for the feature, uncertainty must be built into the model. A formal mechanism is also needed to put the information together. The researchers chose the Dempster-Shafer approach to handle the problems mentioned above. They introduce the concept of a state of recognition and point out that there is a relation between receiving updates and defining a set-valued Markov chain. Deciding the value of the next set-valued variable can also be phrased in terms of classical decision-making theory, such as minimizing the maximum regret. Other related problems are examined.
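The Dempster-Shafer machinery referred to above combines two basic mass assignments m1 and m2 by Dempster's rule: the combined mass of a set A is proportional to the total product mass of pairs of focal elements intersecting in A, with the conflicting (empty-intersection) mass normalized away. A minimal sketch; the frame and the sensor/expert reports are invented for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments whose
    focal elements are frozensets over a common frame of discernment."""
    raw = {}
    conflict = 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            raw[inter] = raw.get(inter, 0.0) + a * b
        else:
            conflict += a * b           # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {A: v / (1.0 - conflict) for A, v in raw.items()}

# Hypothetical reports over the frame {car, truck, bike}:
frame = frozenset({"car", "truck", "bike"})
m_sensor = {frozenset({"car", "truck"}): 0.8, frame: 0.2}   # sensor: "a road vehicle"
m_expert = {frozenset({"car"}): 0.6, frame: 0.4}            # expert: "probably a car"
combined = dempster_combine(m_sensor, m_expert)
print(combined[frozenset({"car"})])   # mass committed exactly to "car"
```

Each update narrows the state of recognition in exactly the set-valued sense described in the abstract: the focal elements of the combined assignment are intersections of the sources' focal elements.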
Lead Selenide Nanostructures Self-Assembled across Multiple Length Scales and Dimensions
Directory of Open Access Journals (Sweden)
Evan K. Wujcik
2016-01-01
A self-assembly approach to lead selenide (PbSe) structures organized across multiple length scales and multiple dimensions has been achieved. These structures consist of angstrom-scale 0D PbSe crystals, synthesized via a hot-solution process, which stack into 1D nanorods via aligned dipoles. These 1D nanorods arrange into nanoscale 2D sheets via directional short-ranged attraction, and the nanoscale 2D sheets then further align into larger 2D microscale planes. In this study, the authors characterized the PbSe structures via conventional and cryo-TEM and EDX, showing that this multiscale, multidimensional self-assembled alignment is not due to drying effects. These PbSe structures hold promise for applications in advanced materials, particularly electronic technologies, where alignment can aid device performance.
Solving a large-scale precedence constrained scheduling problem with elastic jobs using tabu search
DEFF Research Database (Denmark)
Pedersen, C.R.; Rasmussen, R.V.; Andersen, Kim Allan
2007-01-01
This paper presents a solution method for minimizing makespan of a practical large-scale scheduling problem with elastic jobs. The jobs are processed on three servers and restricted by precedence constraints, time windows and capacity limitations. We derive a new method for approximating the server exploitation of the elastic jobs and solve the problem using a tabu search procedure. Finding an initial feasible solution is in general NP-complete, but the tabu search procedure includes a specialized heuristic for solving this problem. The solution method has proven to be very efficient and leads to a significant decrease in makespan compared to the strategy currently implemented.
Minimization of Linear Functionals Defined on Solutions of Large-Scale Discrete Ill-Posed Problems
DEFF Research Database (Denmark)
Elden, Lars; Hansen, Per Christian; Rojas, Marielba
2003-01-01
The minimization of linear functionals defined on the solutions of discrete ill-posed problems arises, e.g., in the computation of confidence intervals for these solutions. In 1990, Elden proposed an algorithm for this minimization problem based on a parametric-programming reformulation involving the solution of a sequence of trust-region problems, and using matrix factorizations. In this paper, we describe MLFIP, a large-scale version of this algorithm in which a limited-memory trust-region solver is used on the subproblems. We illustrate the use of our algorithm in connection with an inverse heat...
Scaling laws governing the multiple scattering of diatomic molecules under Coulomb explosion
International Nuclear Information System (INIS)
Sigmund, P.
1992-01-01
The trajectories of fast molecules during and after penetration through foils are governed by Coulomb explosion and distorted by multiple scattering and other penetration phenomena. A scattering event may cause the energy available for Coulomb explosion to increase or decrease, and angular momentum may be transferred to the molecule. Because of continuing Coulomb explosion inside and outside the target foil, the transmission pattern recorded at a detector far away from the target is not just a linear superposition of Coulomb explosion and multiple scattering. The velocity distribution of an initially monochromatic and well-collimated, but randomly oriented, beam of molecular ions is governed by a generalization of the standard Bothe-Landau integral that governs the multiple scattering of atomic ions. Emphasis has been laid on the distribution in relative velocity and, in particular, relative energy. The statistical distributions governing the longitudinal motion (i.e., the relative motion along the molecular axis) and the rotational motion can be scaled into standard multiple-scattering distributions of atomic ions. The two scaling laws are very different. For thin target foils, the significance of rotational energy transfer is enhanced by an order of magnitude compared to switched-off Coulomb explosion. A distribution for the total relative energy (i.e., longitudinal plus rotational motion) has also been found, but its scaling behavior is more complex. Explicit examples given for all three distributions refer to power-law scattering. As a first approximation, scattering events undergone by the two atoms in the molecule were assumed uncorrelated. A separate section has been devoted to an estimate of the effect of impact-parameter correlation on the multiple scattering of penetrating molecules
Ostoja, Steven M.; Schupp, Eugene W.; Klinger, Rob
2013-01-01
multiple scales. Associational effects provide a useful theoretical basis for better understanding harvester ant foraging decisions. These results demonstrate the importance of ecological context for seed removal, which has implications for seed pools, plant populations and communities.
A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China.
Xu, Lilai; Gao, Peiqing; Cui, Shenghui; Liu, Chun
2013-06-01
Accurate forecasting of municipal solid waste (MSW) generation is crucial and fundamental for the planning, operation and optimization of any MSW management system. Comprehensive information on waste generation at month-scale, medium-term and long-term time scales is especially needed, given the necessity of upgrading MSW management in many developing countries. Several existing models are available but are of little use for forecasting MSW generation at multiple time scales. The goal of this study is to propose a hybrid model that combines the seasonal autoregressive integrated moving average (SARIMA) model and grey system theory to forecast MSW generation at multiple time scales without needing to consider other variables such as demographics and socioeconomic factors. To demonstrate its applicability, a case study of Xiamen City, China was performed. Results show that the model is robust enough to fit and forecast seasonal and annual dynamics of MSW generation at month-scale, medium- and long-term time scales with the desired accuracy. At the month scale, MSW generation in Xiamen City will peak at 132.2 thousand tonnes in July 2015, 1.5 times the volume in July 2010. In the medium term, annual MSW generation will increase to 1518.1 thousand tonnes by 2015 at an average growth rate of 10%. In the long term, a large volume of MSW will be output annually, increasing to 2486.3 thousand tonnes by 2020, 2.5 times the value for 2010. The hybrid model proposed in this paper can enable decision makers to develop integrated policies and measures for waste management over the long term. Copyright © 2013 Elsevier Ltd. All rights reserved.
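The grey-system half of such a hybrid can be sketched compactly. The following is a minimal GM(1,1) grey-model forecaster in its standard textbook form; the SARIMA component and the paper's actual combination scheme are not reproduced here, and the series and horizon are invented.

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """GM(1,1) grey-model forecast of a short positive time series."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                      # accumulated series (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])           # mean sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    n = len(x0) + horizon
    k = np.arange(n)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # ODE response
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return x0_hat[len(x0):]

# e.g. five years of (made-up) annual waste volumes, one-step-ahead forecast
forecast = gm11_forecast([1000, 1080, 1170, 1260, 1380], 1)
```

GM(1,1) is attractive in exactly the setting described: short series, no covariates, near-exponential growth.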
Alleyne, Emma; Gannon, Theresa A; Ó Ciardha, Caoilte; Wood, Jane L
2014-02-01
The literature on Multiple Perpetrator Rape (MPR) is scant; however, a significant proportion of sexual offending involves multiple perpetrators. In addition to the need for research with apprehended offenders of MPR, there is also a need to conduct research with members of the general public. Recent advances in the forensic literature have led to the development of self-report proclivity scales. These scales have enabled researchers to conduct evaluative studies sampling from members of the general public who may be perpetrators of sexual offenses and have remained undetected, or at highest risk of engaging in sexual offending. The current study describes the development and preliminary validation of the Multiple-Perpetrator Rape Interest Scale (M-PRIS), a vignette-based measure assessing community males' sexual arousal to MPR, behavioral propensity toward MPR and enjoyment of MPR. The findings show that the M-PRIS is a reliable measure of community males' sexual interest in MPR with high internal reliability and temporal stability. In a sample of university males we found that a large proportion (66%) did not emphatically reject an interest in MPR. We also found that rape-supportive cognitive distortions, antisocial attitudes, and high-risk sexual fantasies were predictors of sexual interest in MPR. We discuss these findings and the implications for further research employing proclivity measures referencing theory development and clinical practice.
Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems
Razzak, M. A.; Alam, M. Z.; Sharif, M. N.
2018-03-01
In this paper, a modified multiple time scales (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered in order to avoid complexity. The formulation and the solution procedure are easy and straightforward. The classical multiple time scales (MS) method and the multiple scales Lindstedt-Poincare (MSLP) method do not give the desired results for forced vibration systems with strong damping effects. The main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results agree well with the corresponding numerical solution (considered to be exact) and improve on other existing results. For weak nonlinearities with a weak damping effect, the absolute relative error of the first-order approximate external frequency in this paper is only 0.07% when the amplitude A = 1.5, while the relative error given by the MSLP method is surprisingly 28.81%. Furthermore, for strong nonlinearities with a strong damping effect, the absolute relative error found in this article is only 0.02%, whereas the relative error obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both small and strong damping effects.
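The classical two-timing machinery that the modified MTS method builds on can be seen on the simplest damped oscillator. The sketch below is standard textbook material for the linear case, not the paper's strongly nonlinear forced case: with $T_0 = t$, $T_1 = \epsilon t$ applied to $u'' + 2\epsilon u' + u = 0$,

```latex
\begin{align*}
u &= u_0(T_0,T_1) + \epsilon\, u_1(T_0,T_1) + \cdots,\\
O(1):&\quad \partial_{T_0}^2 u_0 + u_0 = 0
      \;\Rightarrow\; u_0 = A(T_1)\,e^{iT_0} + \text{c.c.},\\
O(\epsilon):&\quad \partial_{T_0}^2 u_1 + u_1
      = -2\,\partial_{T_0}\!\left(\partial_{T_1} u_0 + u_0\right)
      = -2i\left(A' + A\right)e^{iT_0} + \text{c.c.}
\end{align*}
```

Eliminating the secular term forces $A' = -A$, i.e. $A = a\,e^{-T_1}$, giving $u \approx a\, e^{-\epsilon t}\cos(t + \varphi)$: the slow scale carries the damping envelope while the fast scale carries the oscillation.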
Relaxing the weak scale: A new approach to the hierarchy problem
CERN. Geneva
2015-01-01
Recently, a new mechanism to generate a naturally small electroweak scale has been proposed. This is based on the idea that a dynamical evolution during the early universe can drive the Higgs mass to a value much smaller than the UV cutoff of the SM. In this talk I will present this idea, its explicit realizations, potential problems, and experimental consequences.
A note on solving large-scale zero-one programming problems
Adema, Jos J.
1988-01-01
A heuristic for solving large-scale zero-one programming problems is provided. The heuristic is based on the modifications made by H. Crowder et al. (1983) to the standard branch-and-bound strategy. First, the initialization is modified. The modification is only useful if the objective function
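To make the branch-and-bound vocabulary concrete, here is a minimal sketch on a toy 0-1 knapsack instance. It shows only the core strategy (an LP-relaxation bound with depth-first branching), not the modifications of Crowder et al. that the heuristic above builds on.

```python
def knapsack_bb(values, weights, capacity):
    """Depth-first branch-and-bound for the 0-1 knapsack problem.
    Items are explored in value/weight order; the LP relaxation of the
    remaining items provides the pruning bound."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    best = 0

    def bound(i, cap, acc):
        # fractional (LP) upper bound on what items i.. can still add
        for j in range(i, len(v)):
            if w[j] <= cap:
                cap -= w[j]
                acc += v[j]
            else:
                return acc + v[j] * cap / w[j]
        return acc

    def dfs(i, cap, acc):
        nonlocal best
        if i == len(v):
            best = max(best, acc)
            return
        if bound(i, cap, acc) <= best:
            return  # prune: the relaxation cannot beat the incumbent
        if w[i] <= cap:
            dfs(i + 1, cap - w[i], acc + v[i])  # branch: take item i
        dfs(i + 1, cap, acc)                    # branch: skip item i

    dfs(0, capacity, 0)
    return best
```

The modifications mentioned in the abstract (better initialization, cutting planes, preprocessing) all slot into this skeleton: they tighten the bound or seed `best` before the search starts.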
Large scale inverse problems computational methods and applications in the earth sciences
Scheichl, Robert; Freitag, Melina A; Kindermann, Stefan
2013-01-01
This book is the second volume of a three-volume series recording the "Radon Special Semester 2011 on Multiscale Simulation & Analysis in Energy and the Environment" that took place in Linz, Austria, October 3-7, 2011. The volume addresses the common ground in the mathematical and computational procedures required for large-scale inverse problems and data assimilation in forefront applications.
Psychometric properties of the Fatigue Assessment Scale (FAS) in women with breast problems
de Vries, J.; van der Steeg, A.F.; Roukema, J.A.
2010-01-01
To examine the usefulness of the Fatigue Assessment Scale (FAS) in women with benign breast problems (BBP) and women with early stage breast cancer (BC). Women with a palpable lump in the breast or an abnormality on a screening mammography (N = 560) completed the FAS (four time points) and measures
Convergence speed of consensus problems over undirected scale-free networks
International Nuclear Information System (INIS)
Sun Wei; Dou Li-Hua
2010-01-01
Scale-free networks and consensus behaviour among multiple agents have both attracted much attention. The major topic of the present work is to investigate the consensus speed over scale-free networks. A novel method is developed to construct scale-free networks with their characteristic power-law degree distributions, while preserving the diversity of network topologies. The time cost, or number of iterations, for a network to reach a certain level of consensus is discussed, considering the influence of the power-law parameters. Both are demonstrated to be inverse power-law functions of the algebraic connectivity, which is viewed as a measure of the convergence speed of the consensus behaviour. Tuning the power-law parameters may speed up the consensus procedure, but it can also make the network less robust to time delay. Large-scale simulations support these conclusions. (general)
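The link between algebraic connectivity and consensus speed is easy to demonstrate numerically. A minimal sketch follows, using two small fixed graphs rather than generated scale-free networks:

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))[1]

def consensus_steps(adj, x0, eps=0.1, tol=1e-6, max_iter=10000):
    """Iterate x <- x - eps*L*x until all agent states agree within tol;
    return the number of iterations used."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        if x.max() - x.min() < tol:
            return k
        x = x - eps * lap @ x
    return max_iter

# path graph vs complete graph on 4 nodes: larger lambda_2, faster consensus
path = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
full = np.ones((4, 4)) - np.eye(4)
```

The complete graph has a much larger second Laplacian eigenvalue than the path and correspondingly needs far fewer iterations, which is the inverse relation the abstract describes.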
He, Qiang; Hu, Xiangtao; Ren, Hong; Zhang, Hongqi
2015-11-01
A novel artificial fish swarm algorithm (NAFSA) is proposed for solving the large-scale reliability-redundancy allocation problem (RAP). In NAFSA, the social behaviors of the fish swarm are classified in three ways: foraging behavior, reproductive behavior, and random behavior. The foraging behavior uses two position-updating strategies, and selection and crossover operators are applied to define the reproductive ability of an artificial fish. For the random behavior, which is essentially a mutation strategy, the basic cloud generator is used as the mutation operator. Finally, numerical results for four benchmark problems and a large-scale RAP are reported and compared. NAFSA shows good performance in terms of computational accuracy and computational efficiency for the large-scale RAP. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Chen, Wenbin; Hendrix, William; Samatova, Nagiza F
2017-12-01
The problem of aligning multiple metabolic pathways is one of the most challenging problems in computational biology. A metabolic pathway consists of three types of entities: reactions, compounds, and enzymes. Based on similarities between enzymes, Tohsato et al. gave an algorithm for aligning multiple metabolic pathways. However, their algorithm neglects the similarities among reactions, compounds, enzymes, and pathway topology. Designing alignment algorithms for multiple metabolic pathways that account for the similarity of reactions, compounds, and enzymes is a difficult computational problem. In this article, we propose an algorithm for aligning multiple metabolic pathways based on the similarities among reactions, compounds, enzymes, and pathway topology. First, we compute a weight between each pair of like entities in different input pathways based on the entities' similarity score and topological structure, using Ay et al.'s methods. We then construct a weighted k-partite graph for the reactions, compounds, and enzymes. We extract a mapping between these entities by solving the maximum-weighted k-partite matching problem with a novel heuristic algorithm. By analyzing the alignment results of multiple pathways in different organisms, we show that the alignments found by our algorithm correctly identify common subnetworks among multiple pathways.
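As a toy stand-in for the matching step, the sketch below runs a greedy heuristic for maximum-weight matching between two entity sets. The paper's problem is k-partite and its heuristic is more elaborate; the entity names and similarity scores here are invented.

```python
def greedy_matching(weights):
    """Greedy heuristic for maximum-weight bipartite matching:
    repeatedly pick the heaviest remaining (left, right) pair whose
    endpoints are both still unmatched."""
    edges = sorted(((w, i, j) for (i, j), w in weights.items()), reverse=True)
    used_l, used_r, match = set(), set(), {}
    for w, i, j in edges:
        if i not in used_l and j not in used_r:
            match[i] = j
            used_l.add(i)
            used_r.add(j)
    return match

# hypothetical similarity scores between reactions of two pathways
sim = {("r1", "s1"): 0.9, ("r1", "s2"): 0.4,
       ("r2", "s1"): 0.8, ("r2", "s2"): 0.7}
m = greedy_matching(sim)
```

Greedy matching is a common fast baseline for weighted matching; an exact solver or a more careful heuristic, as in the paper, can recover better mappings when greedy choices conflict.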
$H^\infty$ control of systems with multiple I/O delays via decomposition to adobe problems
Meinsma, Gjerrit; Mirkin, Leonid
In this paper, the standard (four-block) $H^\infty$ control problem for systems with multiple input-output delays in the feedback loop is studied. The central idea is to see the multiple delay operator as a special series connection of elementary delay operators, called the adobe delay operators.
Bayesian models based on test statistics for multiple hypothesis testing problems.
Ji, Yuan; Lu, Yiling; Mills, Gordon B
2008-04-01
We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
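For contrast with the Bayesian FDR approach described above, here is the classical Benjamini-Hochberg step-up procedure, which plays the same error-control role on a vector of p-values. This is a standard frequentist baseline, not the authors' method.

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: return the (sorted) indices
    of hypotheses rejected at FDR level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        # largest rank whose p-value clears the step-up threshold
        if pvals[i] <= q * rank / m:
            k = rank
    return sorted(order[:k])

# illustrative p-values, e.g. from eight differential-expression tests
p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
rejected = benjamini_hochberg(p, q=0.05)
```

The Bayesian variant replaces the p-value threshold with a cutoff on posterior null probabilities, but the goal of bounding the expected fraction of false rejections is the same.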
Directory of Open Access Journals (Sweden)
Zhanzhong Wang
2018-01-01
Full Text Available The key to realizing cross docking is to coordinate inbound and outbound trucks, so a proper sequence of trucks will make the cross-docking system much more efficient and reduce the makespan. A cross-docking system is proposed with multiple receiving and shipping dock doors. The objective is to find the best door assignments and the sequences of trucks, following the principle of product distribution, to minimize the total makespan of cross docking. To solve the problem, which is formulated as a mixed integer linear programming (MILP) model, three metaheuristics, namely harmony search (HS), improved harmony search (IHS), and genetic algorithm (GA), are proposed. Furthermore, the fixed parameters are tuned by Taguchi experiments to further improve the accuracy of the solutions. Finally, several numerical examples are presented to evaluate the performance of the proposed algorithms.
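A bare-bones harmony search loop is sketched below, minimizing a continuous test function rather than the truck-sequencing MILP. Parameter names follow the usual HS conventions (harmony memory size, memory considering rate, pitch adjusting rate); all values are illustrative.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=1):
    """Minimal harmony search for a continuous objective: a stand-in for
    the combinatorial objective used in the paper."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                 # memory consideration
                x = rng.choice(memory)[d]
                if rng.random() < par:              # pitch adjustment
                    x += rng.uniform(-1, 1) * 0.05 * (hi - lo)
            else:                                   # random selection
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        s = f(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                       # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

sphere = lambda v: sum(xi * xi for xi in v)
x, fx = harmony_search(sphere, [(-5, 5)] * 3)
```

The IHS variant mentioned in the abstract typically makes `par` and the pitch bandwidth vary over the iterations instead of staying fixed.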
Multiple time scales in modeling the incidence of infections acquired in intensive care units
Directory of Open Access Journals (Sweden)
Martin Wolkewitz
2016-09-01
Full Text Available Abstract Background When patients are admitted to an intensive care unit (ICU), their risk of acquiring an infection depends strongly on their length of stay at risk in the ICU. In addition, the risk of infection is likely to vary over calendar time as a result of fluctuations in the prevalence of the pathogen on the ward. Hence, the risk of infection is expected to depend on two time scales (time in ICU and calendar time) as well as on competing events (discharge or death) and their spatial location. The purpose of this paper is to develop and apply appropriate statistical models for the risk of ICU-acquired infection, accounting for multiple time scales, competing risks and the spatial clustering of the data. Methods A multi-center database from a Spanish surveillance network was used to study the occurrence of infections due to methicillin-resistant Staphylococcus aureus (MRSA). The analysis included 84,843 patient admissions between January 2006 and December 2011 from 81 ICUs. Stratified Cox models were used to study multiple time scales while accounting for spatial clustering of the data (patients within ICUs) and for death or discharge as competing events for MRSA infection. Results Both time scales, time in ICU and calendar time, are highly associated with the MRSA hazard rate and cumulative risk. When using only one basic time scale, the interpretation and magnitude of several patient-individual risk factors differed. Risk factors concerning the severity of illness were more pronounced when using only calendar time. These differences disappeared when using both time scales simultaneously. Conclusions The time-dependent dynamics of infections is complex and should be studied with models allowing for multiple time scales. For patient-individual risk factors we recommend stratified Cox regression models for competing events with ICU time as the basic time scale and calendar time as a covariate. The inclusion of calendar time and stratification by ICU
International Nuclear Information System (INIS)
Ghaffary, Tooraj
2016-01-01
Using data from the electron-positron annihilation process in the AMY detector at 60 GeV center-of-mass energy, the charged-particle multiplicity distribution is obtained and fitted with the KNO scaling. Then, momentum spectra of charged particles and the momentum distribution with respect to the jet axis are obtained, and the results are compared to different models of QCD; the distribution of fragmentation functions and their scaling violations are also studied. The scaling violations of the fragmentation functions of gluon jets are expected to be stronger than those of quark jets. One of the reasons is that the splitting function of gluons is larger than the splitting function of quarks.
Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms
Hasanov, Khalid
2014-03-04
© 2014, Springer Science+Business Media New York. Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in 1990s a system with few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems. The idea is to reduce the communication cost by introducing hierarchy and hence more parallelism in the communication scheme. We apply this approach to SUMMA, the state-of-the-art parallel algorithm for matrix–matrix multiplication, and demonstrate both theoretically and experimentally that the modified Hierarchical SUMMA significantly improves the communication cost and the overall performance on large-scale platforms.
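The blocking idea behind hierarchical algorithms can be illustrated serially: the outer loops below walk over sub-blocks, the analogue of the process groups that Hierarchical SUMMA introduces, while producing exactly the same product. A serial sketch cannot, of course, capture the communication savings that are the paper's actual contribution.

```python
import numpy as np

def blocked_matmul(a, b, block=2):
    """Blocked (two-level) matrix multiplication. The block loops mimic
    the hierarchical grouping of processes; numerically this computes
    exactly a @ b."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2, "inner dimensions must match"
    c = np.zeros((n, m))
    for i0 in range(0, n, block):
        for j0 in range(0, m, block):
            for l0 in range(0, k, block):
                # each block-product is a small local matmul
                c[i0:i0+block, j0:j0+block] += (
                    a[i0:i0+block, l0:l0+block] @ b[l0:l0+block, j0:j0+block])
    return c

a = np.arange(20.0).reshape(5, 4)
b = np.arange(12.0).reshape(4, 3)
c = blocked_matmul(a, b, block=2)
```

In the distributed setting the block size corresponds to the group size of the added hierarchy level, which is exactly the knob the paper tunes to reduce communication cost.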
Manley, D.J.; Johnston, Ron; Jones, Kelvyn
2018-01-01
There has been a growing appreciation that the processes generating urban residential segregation operate at multiple scales, stimulating innovations in the measurement of their outcomes. This paper applies a multi-level modelling approach to that issue in the situation of Auckland, where multiple
Genetic structuring of northern myotis (Myotis septentrionalis) at multiple spatial scales
Johnson, Joshua B.; Roberts, James H.; King, Timothy L.; Edwards, John W.; Ford, W. Mark; Ray, David A.
2014-01-01
Although groups of bats may be genetically distinguishable at large spatial scales, the effects of forest disturbances, particularly permanent land use conversions on fine-scale population structure and gene flow of summer aggregations of philopatric bat species are less clear. We genotyped and analyzed variation at 10 nuclear DNA microsatellite markers in 182 individuals of the forest-dwelling northern myotis (Myotis septentrionalis) at multiple spatial scales, from within first-order watersheds scaling up to larger regional areas in West Virginia and New York. Our results indicate that groups of northern myotis were genetically indistinguishable at any spatial scale we considered, and the collective population maintained high genetic diversity. It is likely that the ability to migrate, exploit small forest patches, and use networks of mating sites located throughout the Appalachian Mountains, Interior Highlands, and elsewhere in the hibernation range have allowed northern myotis to maintain high genetic diversity and gene flow regardless of forest disturbances at local and regional spatial scales. A consequence of maintaining high gene flow might be the potential to minimize genetic founder effects following population declines caused currently by the enzootic White-nose Syndrome.
Directory of Open Access Journals (Sweden)
Tuba Aydogdu Iskenderoglu
2018-04-01
Full Text Available It is important for pre-service teachers to know the conceptual difficulties they have experienced regarding the concepts of multiplication and division in fractions and problem posing is a way to learn these conceptual difficulties. Problem posing is a synthetic activity that fundamentally has multiple answers. The purpose of this study is to analyze the multiplication and division of fractions problems posed by pre-service elementary mathematics teachers and to investigate how the problems posed change according to the year of study the pre-service teachers are in. The study employed developmental research methods. A total of 213 pre-service teachers enrolled in different years of the Elementary Mathematics Teaching program at a state university in Turkey took part in the study. The “Problem Posing Test” was used as the data collecting tool. In this test, there are 3 multiplication and 3 division operations. The data were analyzed using qualitative descriptive analysis. The findings suggest that, regardless of the year, pre-service teachers had more conceptual difficulties in problem posing about the division of fractions than in problem posing about the multiplication of fractions.
Developing and validating the Youth Conduct Problems Scale-Rwanda: a mixed methods approach.
Directory of Open Access Journals (Sweden)
Lauren C Ng
Full Text Available This study developed and validated the Youth Conduct Problems Scale-Rwanda (YCPS-R). Qualitative free listing (n = 74) and key informant interviews (n = 47) identified local conduct problems, which were compared to existing standardized conduct problem scales and used to develop the YCPS-R. The YCPS-R was cognitively tested by 12 youth and caregiver participants, and assessed for test-retest and inter-rater reliability in a sample of 64 youth. Finally, a purposive sample of 389 youth and their caregivers were enrolled in a validity study. Validity was assessed by comparing YCPS-R scores to conduct disorder, which was diagnosed with the Mini International Neuropsychiatric Interview for Children, and to functional impairment scores on the World Health Organization Disability Assessment Schedule Child Version. ROC analyses assessed the YCPS-R's ability to discriminate between youth with and without conduct disorder. Qualitative data identified a local presentation of youth conduct problems that did not match previously standardized measures. Therefore, the YCPS-R was developed solely from local conduct problems. Cognitive testing indicated that the YCPS-R was understandable and required little modification. The YCPS-R demonstrated good reliability; construct, criterion, and discriminant validity; and fair classification accuracy. The YCPS-R is a locally derived measure of Rwandan youth conduct problems that demonstrated good psychometric properties and could be used for further research.
Energy Technology Data Exchange (ETDEWEB)
Malhotra, M. [Stanford Univ., CA (United States)
1996-12-31
Finite-element discretizations of time-harmonic acoustic wave problems in exterior domains result in large sparse systems of linear equations with complex symmetric coefficient matrices. In many situations, these matrix problems need to be solved repeatedly for different right-hand sides, but with the same coefficient matrix. For instance, multiple right-hand sides arise in radiation problems due to multiple load cases, and also in scattering problems when multiple angles of incidence of an incoming plane wave need to be considered. In this talk, we discuss the iterative solution of multiple linear systems arising in radiation and scattering problems in structural acoustics by means of a complex symmetric variant of the BL-QMR method. First, we summarize the governing partial differential equations for time-harmonic structural acoustics, the finite-element discretization of these equations, and the resulting complex symmetric matrix problem. Next, we sketch the special version of BL-QMR method that exploits complex symmetry, and we describe the preconditioners we have used in conjunction with BL-QMR. Finally, we report some typical results of our extensive numerical tests to illustrate the typical convergence behavior of BL-QMR method for multiple radiation and scattering problems in structural acoustics, to identify appropriate preconditioners for these problems, and to demonstrate the importance of deflation in block Krylov-subspace methods. Our numerical results show that the multiple systems arising in structural acoustics can be solved very efficiently with the preconditioned BL-QMR method. In fact, for multiple systems with up to 40 and more different right-hand sides we get consistent and significant speed-ups over solving the systems individually.
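The simplest illustration of why multiple right-hand sides deserve special treatment is that the expensive part of a solve can be shared across them. The sketch below uses a dense direct solve with a stacked right-hand-side matrix, a stand-in for the amortization that block Krylov methods such as BL-QMR achieve iteratively; the matrix is a made-up well-conditioned test case, not an acoustics discretization.

```python
import numpy as np

rng = np.random.default_rng(0)
n, s = 200, 5                       # system size, number of right-hand sides
# hypothetical well-conditioned test matrix (strong diagonal)
a = rng.standard_normal((n, n)) + n * np.eye(n)
rhs = rng.standard_normal((n, s))   # s right-hand sides, e.g. s load cases

# One call processes all s columns together, sharing the O(n^3)
# factorization work instead of repeating it per right-hand side.
x = np.linalg.solve(a, rhs)
```

Block Krylov methods transfer the same economy to the iterative, sparse setting: one block of matrix-vector products and one Krylov subspace serve all right-hand sides at once, which is where the reported speed-ups come from.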
Shiju, S.; Sumitra, S.
2017-12-01
In this paper, multiple kernel learning (MKL) is formulated as a supervised classification problem. We deal with binary classification data, and hence the data modelling problem involves the computation of two decision boundaries, one related to kernel learning and the other to the input data. In our approach, they are found with the aid of a single cost function by constructing a global reproducing kernel Hilbert space (RKHS) as the direct sum of the RKHSs corresponding to the decision boundaries of kernel learning and input data, and searching for a function in the global RKHS that can be represented as the direct sum of the decision boundaries under consideration. In our experimental analysis, the proposed model showed superior performance in comparison with the existing two-stage function approximation formulation of MKL, where the decision functions of kernel learning and input data are found separately using two different cost functions. This is because the single-stage representation helps transfer knowledge between the computation procedures for finding the decision boundaries of kernel learning and input data, which in turn boosts the generalisation capacity of the model.
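The kernel-combination idea behind MKL can be illustrated in its most naive form: a fixed convex combination of base kernels used in a kernel ridge classifier. This shows only the combination step, not the paper's joint single-cost-function learning of the kernel weights; the data, weight, and kernel choices are invented.

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row-sets x and y."""
    d = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def linear(x, y):
    return x @ y.T

def combined_kernel(x, y, beta):
    """Fixed convex combination of two base kernels; MKL methods learn
    this weight instead of fixing it."""
    return beta * rbf(x, y) + (1 - beta) * linear(x, y)

# kernel ridge classifier on a toy binary problem
x = np.array([[0.0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([-1.0, -1, 1, 1])
k = combined_kernel(x, x, beta=0.7)
alpha = np.linalg.solve(k + 1e-3 * np.eye(len(x)), t)   # ridge fit
pred = np.sign(combined_kernel(x, x, 0.7) @ alpha)
```

A sum of kernels corresponds to a direct sum of their RKHSs, which is the structural fact the paper's single-stage formulation exploits.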
van der Heide, D. C.; van der Putten, A. A. J.; van den Berg, P. B.; Taxis, K.; Vlaskamp, C.
Persons with profound intellectual and multiple disabilities (PIMD) suffer from a wide range of health problems and use a wide range of different drugs. For frequently used medications, this study investigated whether a health problem corresponding to each prescribed drug was documented in the medical notes.
Giesy, D. P.
1978-01-01
A technique is presented for the calculation of Pareto-optimal solutions to a multiple-objective constrained optimization problem by solving a series of single-objective problems. Threshold-of-acceptability constraints are placed on the objective functions at each stage, both to limit the area of search and to mathematically guarantee convergence to a Pareto optimum.
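A discrete sketch of the threshold-constraint idea: for each acceptability threshold on one objective, a single-objective problem is solved over the candidates that satisfy it. Here the "solve" is grid enumeration on a toy biobjective, whereas the paper works with continuous constrained optimization.

```python
def pareto_by_thresholds(f1, f2, xs, thresholds):
    """Threshold (epsilon-constraint) scan: for each acceptability bound
    on f2, minimize f1 over the feasible candidates in xs."""
    front = []
    for eps in thresholds:
        feasible = [x for x in xs if f2(x) <= eps]
        if feasible:
            best = min(feasible, key=f1)
            pt = (f1(best), f2(best))
            if pt not in front:
                front.append(pt)
    return front

# classic convex toy problem: f1 = x^2, f2 = (x - 2)^2 on a grid
xs = [i / 10 for i in range(0, 21)]
front = pareto_by_thresholds(lambda x: x * x, lambda x: (x - 2) ** 2,
                             xs, thresholds=[4, 2, 1, 0.5])
```

Tightening the threshold trades one objective against the other, so the collected minimizers trace out (an approximation of) the Pareto front, which is the mechanism the abstract describes.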
Hierarchy problem, gauge coupling unification at the Planck scale, and vacuum stability
Directory of Open Access Journals (Sweden)
Naoyuki Haba
2015-11-01
Full Text Available From the point of view of the gauge hierarchy problem, introducing an intermediate scale in addition to the TeV scale and the Planck scale (MPl = 2.4×10^18 GeV) is unfavorable. In that case, a gauge coupling unification (GCU) is expected to be realized at MPl. We explore possibilities of GCU at MPl by adding a few extra particles with TeV-scale masses to the standard model (SM). When the extra particles are fermions and scalars (only fermions) with the same mass, the GCU at MPl can (cannot) be realized. On the other hand, when the extra fermions have different masses, the GCU can be realized around 8πMPl without extra scalars. This simple SM extension has two advantages: the vacuum becomes stable up to MPl (8πMPl), and the proton lifetime becomes much longer than the experimental bound.
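Statements about unification at MPl can be checked against the standard one-loop running of the inverse couplings, alpha_i^{-1}(mu) = alpha_i^{-1}(MZ) - b_i/(2*pi)*ln(mu/MZ). The sketch below uses rough, assumed values of the couplings at MZ together with the well-known SM one-loop coefficients; it reproduces only the SM baseline, not the paper's extra-particle scenarios.

```python
import math

# rough GUT-normalized inverse couplings at MZ (assumed illustrative inputs)
ALPHA_INV_MZ = (59.0, 29.6, 8.5)     # U(1)_Y, SU(2)_L, SU(3)_c
# one-loop SM beta coefficients, GUT normalization for b1
B = (41 / 10, -19 / 6, -7)

def alpha_inv(i, mu, mz=91.19):
    """One-loop running of the i-th inverse gauge coupling from MZ to mu."""
    return ALPHA_INV_MZ[i] - B[i] / (2 * math.pi) * math.log(mu / mz)

mpl = 2.4e18
couplings_at_mpl = [alpha_inv(i, mpl) for i in range(3)]
```

In the pure SM the three lines approach each other toward MPl but do not meet; adding TeV-scale particles shifts the coefficients b_i above their SM values, which is how the scenarios in the abstract arrange a crossing at MPl or 8πMPl.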
International Nuclear Information System (INIS)
Hao Yinghang; Gong, Yubing; Wang Li; Ma Xiaoguang; Yang Chuanlu
2011-01-01
Research highlights: → Single synchronization transition for gap-junctional coupling. → Multiple synchronization transitions for chemical synaptic coupling. → Gap junctions and chemical synapses have different impacts on synchronization transition. → Chemical synapses may play a dominant role in neurons' information processing. - Abstract: In this paper, we have studied time delay- and coupling strength-induced synchronization transitions in scale-free modified Hodgkin-Huxley (MHH) neuron networks with gap-junctions and chemical synaptic coupling. It is shown that the synchronization transitions are much different for these two coupling types. For gap-junctions, the neurons exhibit a single synchronization transition with time delay and coupling strength, while for chemical synapses, there are multiple synchronization transitions with time delay, and the synchronization transition with coupling strength is dependent on the time delay lengths. For short delays we observe a single synchronization transition, whereas for long delays the neurons exhibit multiple synchronization transitions as the coupling strength is varied. These results show that gap junctions and chemical synapses have different impacts on the pattern formation and synchronization transitions of the scale-free MHH neuronal networks, and chemical synapses, compared to gap junctions, may play a dominant and more active function in the firing activity of the networks. These findings would be helpful for further understanding the roles of gap junctions and chemical synapses in the firing dynamics of neuronal networks.
The function of communities in protein interaction networks at multiple scales
Directory of Open Access Journals (Sweden)
Jones Nick S
2010-07-01
Abstract. Background: If biology is modular, then clusters, or communities, of proteins derived using only protein interaction network structure should define protein modules with similar biological roles. We investigate the link between biological modules and network communities in yeast, and its relationship to the scale at which we probe the network. Results: Our results demonstrate that the functional homogeneity of communities depends on the scale selected, and that almost all proteins lie in a functionally homogeneous community at some scale. We judge functional homogeneity using a novel test and three independent characterizations of protein function, and find a high degree of overlap between these measures. We show that a high mean clustering coefficient of a community can be used to identify those that are functionally homogeneous. By tracing the community membership of a protein through multiple scales we demonstrate how our approach could be useful to biologists focusing on a particular protein. Conclusions: We show that there is no single scale of interest in the community structure of the yeast protein interaction network, but we can identify the range of resolution parameters that yield the most functionally coherent communities, and predict which communities are most likely to be functionally homogeneous.
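The mean clustering coefficient used above to flag functionally homogeneous communities can be computed directly from an adjacency structure; a minimal sketch on a toy graph (not the yeast network):

```python
def clustering_coefficient(adj, v):
    """Local clustering coefficient of vertex v.

    adj: dict mapping vertex -> set of neighbours (undirected graph).
    """
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    # count edges among the neighbours of v (each pair once)
    links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
    return 2.0 * links / (k * (k - 1))

def mean_clustering(adj):
    """Average of the local clustering coefficients over all vertices."""
    return sum(clustering_coefficient(adj, v) for v in adj) / len(adj)

# Triangle plus a pendant vertex: a and b are fully clustered, d is not.
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(clustering_coefficient(adj, "a"))  # -> 1.0
print(mean_clustering(adj))              # -> (1 + 1 + 1/3 + 0) / 4
```

A community whose vertices all have high local clustering is densely interlinked, which is the structural signal the paper associates with functional homogeneity.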
Skin and scales of teleost fish: Simple structure but high performance and multiple functions
Vernerey, Franck J.; Barthelat, Francois
2014-08-01
Natural and man-made structural materials perform similar functions, such as structural support or protection, and therefore rely on the same types of properties: strength, robustness, light weight. Nature can therefore provide a significant source of inspiration for new and alternative engineering designs. We report here some results regarding a very common, yet largely unknown, type of biological material: fish skin. Within a thin, flexible and lightweight layer, fish skins display a variety of strain-stiffening and stabilizing mechanisms which promote multiple functions such as protection, robustness and swimming efficiency. We discuss, in particular, four important features of scaled skins: (a) a strongly elastic tensile behavior that is independent of the presence of rigid scales; (b) a compressive response that prevents the buckling and wrinkling instabilities usually predominant in thin membranes; (c) a bending response that displays nonlinear stiffening mechanisms arising from geometric constraints between neighboring scales; and (d) a robust structure that preserves the above characteristics upon the loss or damage of structural elements. These properties make fish skin an attractive model for the development of very thin and flexible armors and protective layers, especially when combined with the high penetration resistance of individual scales. Scaled structures inspired by fish skin could find applications in ultra-light and flexible armor systems, flexible electronics, or the design of smart and adaptive morphing structures for aerospace vehicles.
International Nuclear Information System (INIS)
Liu, Chen; Wang, Jiang; Wang, Lin; Yu, Haitao; Deng, Bin; Wei, Xile; Tsang, Kaiming; Chan, Wailok
2014-01-01
Highlights: • Synchronization transitions in hybrid scale-free neuronal networks are investigated. • Multiple synchronization transitions can be induced by the time delay. • The effect of synchronization transitions depends on the ratio of electrical to chemical synapses. • Coupling strength and the density of inter-neuronal links can enhance synchronization. -- Abstract: The impacts of information transmission delay on synchronization transitions in scale-free neuronal networks with electrical and chemical hybrid synapses are investigated. Numerical results show that multiple transitions between synchronization regions can be induced by different information transmission delays. As the time delay increases, the synchronization of neuronal activities can be enhanced or destroyed, irrespective of the probability of chemical synapses in the whole hybrid neuronal network. In particular, for a larger probability of electrical synapses, the regions of synchronous activity become broader, the synchronization ability of electrical synapses being stronger than that of chemical ones. Moreover, increasing the coupling strength promotes synchronization monotonically, playing a role similar to increasing the probability of electrical synapses. Interestingly, the structure and parameters of the scale-free neuronal networks, especially the structural evolution, play a more subtle role in the synchronization transitions. In the network formation process, it is found that the more old vertices each new vertex attaches to, the more synchronous activity emerges.
Efficiency scale and technological change in credit unions and multiple banks using the COSIF
Directory of Open Access Journals (Sweden)
Wanderson Rocha Bittencourt
2016-08-01
The modernization of the financial intermediation process and the adoption of new technologies brought adjustments to operational processes, reducing information and borrowing costs, generating greater customer satisfaction through increased competitiveness, and yielding long-term efficiency gains. In this context, this research analyzes the evolution of scale and technological efficiency of credit unions and multiple banks from 2009 to 2013. We used Data Envelopment Analysis (DEA), which allows calculating the change in efficiency of institutions through the Malmquist Index. The results indicated that institutions employing larger volumes of assets in the composition of their resources showed gains in scale and technological efficiency, influencing the change in total factor productivity. It should be noted that in some years the cooperatives showed advances in technology and scale efficiency greater than those of the banks. However, this result can be explained by the fact that the average efficiency of the credit unions was lower than that of the banks in the analyzed sample, indicating a greater need for the cooperatives to improve internal processes compared with the multiple banks surveyed.
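The Malmquist index mentioned above decomposes productivity change into efficiency change (catching up to the frontier) and technical change (shift of the frontier itself). A minimal sketch, assuming the four DEA distance-function values are already available; the toy scores below are hypothetical, not from the paper:

```python
import math

def malmquist(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    """Malmquist productivity index and its decomposition.

    d_a_b: distance (efficiency score) of the period-b observation
    measured against the period-a frontier, e.g. d_t_t1 = D^t(x^{t+1}, y^{t+1}).
    Returns (M, efficiency_change, technical_change) with M = EC * TC.
    """
    ec = d_t1_t1 / d_t_t                                    # catching up
    tc = math.sqrt((d_t_t1 / d_t1_t1) * (d_t_t / d_t1_t))   # frontier shift
    return ec * tc, ec, tc

# Hypothetical DEA scores for one institution over two periods:
m, ec, tc = malmquist(d_t_t=0.8, d_t_t1=1.1, d_t1_t=0.7, d_t1_t1=0.9)
print(ec, tc, m)   # M > 1 indicates total-factor-productivity growth
```

Here EC > 1 means the institution moved closer to the frontier, while TC > 1 means the frontier itself advanced (technological change), matching the two components discussed in the abstract.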
MULTI-SCALE SEGMENTATION OF HIGH RESOLUTION REMOTE SENSING IMAGES BY INTEGRATING MULTIPLE FEATURES
Directory of Open Access Journals (Sweden)
Y. Di
2017-05-01
Most multi-scale segmentation algorithms are not aimed at high-resolution remote sensing images and have difficulty communicating and using information across layers. In view of this, we propose a method for multi-scale segmentation of high-resolution remote sensing images that integrates multiple features. First, the Canny operator is used to extract edge information, and a band-weighted distance function is built to obtain edge weights. According to this criterion, initial segmentation objects of colour images are obtained with Kruskal's minimum spanning tree algorithm. Finally, segmented images are obtained by an adaptive Mumford-Shah region-merging rule combined with spectral and texture information. The proposed method is evaluated precisely on synthetic images and ZY-3 satellite images through quantitative and qualitative analysis. The experimental results show that the proposed multi-scale segmentation method outperforms the fractal net evolution approach (FNEA) implemented in the eCognition software in accuracy, while being slightly inferior to FNEA in efficiency.
Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction
Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng
2017-01-01
Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, traditional regularization methods such as Tikhonov regularization and truncated singular value decomposition commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse character of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, a preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, covering small- to medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantages of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust in both single and consecutive impact force reconstruction.
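To make the sparse-deconvolution idea concrete, the sketch below solves the same l1-regularised model with ISTA (iterative shrinkage-thresholding) rather than the authors' PDIPM solver; the impulse response and force signal are made up for illustration:

```python
import numpy as np

def ista_deconvolve(H, y, lam, n_iter=500):
    """l1-regularised deconvolution  min_f 0.5*||H f - y||^2 + lam*||f||_1,
    solved with ISTA (a simple proximal-gradient method, not PDIPM)."""
    L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
    f = np.zeros(H.shape[1])
    for _ in range(n_iter):
        g = H.T @ (H @ f - y)              # gradient of the quadratic data term
        z = f - g / L
        f = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return f

# Toy impact-force example: two spikes convolved with a decaying response.
rng = np.random.default_rng(1)
n = 120
h = np.exp(-np.arange(30) / 5.0)           # assumed impulse response
H = np.zeros((n + 29, n))
for j in range(n):
    H[j:j + 30, j] = h                     # convolution as a banded matrix
f_true = np.zeros(n); f_true[20] = 1.0; f_true[70] = 0.6
y = H @ f_true + 0.01 * rng.standard_normal(H.shape[0])
f_hat = ista_deconvolve(H, y, lam=0.05)
print(int(np.argmax(f_hat)))               # largest spike recovered near index 20
```

The soft-threshold step is what enforces sparsity: most reconstructed force samples are driven exactly to zero, which is the behaviour Tikhonov (l2) regularization cannot produce.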
Efficient numerical methods for the large-scale, parallel solution of elastoplastic contact problems
Frohne, Jörg; Heister, Timo; Bangerth, Wolfgang
2015-08-06
© 2016 John Wiley & Sons, Ltd. Quasi-static elastoplastic contact problems are ubiquitous in many industrial processes and other contexts, and their numerical simulation is consequently of great interest in accurately describing and optimizing production processes. The key component in these simulations is the solution of a single load step of a time iteration. From a mathematical perspective, the problems to be solved in each time step are characterized by the difficulties of variational inequalities for both the plastic behavior and the contact problem. Computationally, they also often lead to very large problems. In this paper, we present and evaluate a complete set of methods that are (1) designed to work well together and (2) allow for the efficient solution of such problems. In particular, we use adaptive finite element meshes with linear and quadratic elements, a Newton linearization of the plasticity, active set methods for the contact problem, and multigrid-preconditioned linear solvers. Through a sequence of numerical experiments, we show the performance of these methods. This includes highly accurate solutions of a three-dimensional benchmark problem and scaling our methods in parallel to 1024 cores and more than a billion unknowns.
Wynia, Klaske; Roodbol, Petrie F.; Middel, Berry
People with Multiple Sclerosis (MS) perceive consequences of this chronic condition that are not limited to impairments in physical functioning but also extend to limitations in activities and restrictions in participation in life situations. There is a growing awareness among healthcare
Large-Scale Parallel Finite Element Analysis of the Stress Singular Problems
International Nuclear Information System (INIS)
Noriyuki Kushida; Hiroshi Okuda; Genki Yagawa
2002-01-01
In this paper, the convergence behavior of large-scale parallel finite element methods for stress-singular problems was investigated. The convergence behavior of iterative solvers depends on the efficiency of the preconditioners; however, the efficiency of preconditioners may be influenced by the domain decomposition that is necessary for parallel FEM. In this study the following results were obtained: the conjugate gradient method without preconditioning and the diagonal-scaling preconditioned conjugate gradient method were not influenced by the domain decomposition, as expected; the symmetric successive over-relaxation preconditioned conjugate gradient method converged up to 6% faster when the stress-singular area was contained in one sub-domain. (authors)
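A diagonal-scaling (Jacobi) preconditioned conjugate gradient solver of the kind compared in the study can be sketched as follows; this is a serial toy version on an illustrative badly scaled SPD matrix, not the paper's parallel FEM system:

```python
import numpy as np

def pcg_jacobi(A, b, rtol=1e-8, max_iter=2000):
    """Conjugate gradients with diagonal-scaling (Jacobi) preconditioning
    for a symmetric positive-definite matrix A."""
    Minv = 1.0 / np.diag(A)            # inverse of the diagonal preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    bnorm = np.linalg.norm(b)
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < rtol * bnorm:
            return x, k + 1
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# SPD test problem: 1D Laplacian wrapped in a strongly varying scaling,
# the situation where diagonal preconditioning pays off.
n = 100
T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
D = np.diag(np.linspace(1.0, 100.0, n))
A = D @ T @ D                           # badly scaled but still SPD
b = np.ones(n)
x, iters = pcg_jacobi(A, b)
print(iters, np.linalg.norm(A @ x - b))
```

Jacobi scaling removes the artificial ill-conditioning introduced by `D`, leaving only the Laplacian's intrinsic conditioning; it is also, as the abstract notes, insensitive to how the domain is decomposed, since it uses purely local (diagonal) information.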
Absolute mass scale calibration in the inverse problem of the physical theory of fireballs.
Kalenichenko, V. V.
A method of the absolute mass scale calibration is suggested for solving the inverse problem of the physical theory of fireballs. The method is based on the data on the masses of the fallen meteorites whose fireballs have been photographed in their flight. The method may be applied to those fireballs whose bodies have not experienced considerable fragmentation during their destruction in the atmosphere and have kept their form well enough. Statistical analysis of the inverse problem solution for a sufficiently representative sample makes it possible to separate a subsample of such fireballs. The data on the Lost City and Innisfree meteorites are used to obtain calibration coefficients.
FDTD method for laser absorption in metals for large scale problems.
Deng, Chun; Ki, Hyungson
2013-10-21
The FDTD method has been successfully used for many electromagnetic problems, but its application to laser material processing has been limited because even a several-millimeter domain requires a prohibitively large number of grid points. In this article, we present a novel FDTD method for simulating large-scale laser beam absorption problems, especially for metals, by enlarging the laser wavelength while maintaining the material's reflection characteristics. For validation purposes, the proposed method has been tested with in-house FDTD codes to simulate p-, s-, and circularly polarized 1.06 μm irradiation on Fe and Sn targets, and the simulation results are in good agreement with theoretical predictions.
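For orientation, the leapfrog update at the core of any FDTD scheme can be sketched in one dimension (vacuum, perfectly conducting boundaries, soft Gaussian source); the paper's wavelength-enlarging treatment of metal absorption is not reproduced here:

```python
import numpy as np

# Minimal 1D FDTD (Yee) scheme: E and H live on staggered grids and are
# updated alternately (leapfrog) from each other's spatial differences.
nz, nt = 400, 500
c, dz = 1.0, 1.0
dt = 0.5 * dz / c                       # Courant-stable step (S = 0.5)
Ex = np.zeros(nz)                       # E field at integer grid points
Hy = np.zeros(nz - 1)                   # H field at half grid points
for n in range(nt):
    Hy += (dt / dz) * (Ex[1:] - Ex[:-1])        # update H from curl E
    Ex[1:-1] += (dt / dz) * (Hy[1:] - Hy[:-1])  # update E from curl H
    Ex[50] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source at z = 50
peak = int(np.argmax(np.abs(Ex)))
print(peak)   # pulse has propagated well away from the source cell
```

The grid-point count the abstract refers to comes directly from this structure: the cell size `dz` must resolve the wavelength, so a millimetre-scale domain at 1.06 μm needs millions of cells, which is exactly what enlarging the effective wavelength alleviates.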
Structural problems of public participation in large-scale projects with environmental impact
International Nuclear Information System (INIS)
Bechmann, G.
1989-01-01
Four items are discussed, showing that the problems posed by public participation in large-scale projects with environmental impact cannot be solved satisfactorily without suitable modification of the existing legal framework. The problematic items are: the status of the electric utilities as quasi-public enterprises; informal preliminary negotiations; the penetration of scientific argumentation into administrative decisions; and the procedural concept. The paper discusses the fundamental issue of a problem-adequate design of the procedure and develops suggestions for a cooperative participation design.
Fitzpatrick, Stephanie L; Hill-Briggs, Felicia
2015-10-01
Identification of patients with poor chronic disease self-management skills can facilitate treatment planning, determine the effectiveness of interventions, and reduce disease complications. This paper describes the use of a Rasch model, the Rating Scale Model, to examine the psychometric properties of the 50-item Health Problem-Solving Scale (HPSS) among 320 African American patients at high risk for cardiovascular disease. Items on the positive/effective HPSS subscales targeted patients at low, moderate, and high levels of positive/effective problem solving, whereas items on the negative/ineffective problem-solving subscales mostly targeted those at moderate or high levels of ineffective problem solving. Validity was examined by correlating factor scores on the measure with clinical and behavioral measures. Items on the HPSS show promise in the ability to assess health-related problem solving among high-risk patients. However, further revisions of the scale are needed to increase its usability and validity with large, diverse patient populations.
Liu, Yupeng; Wu, Jianguo; Yu, Deyong; Hao, Ruifang
2018-06-01
China's rapid economic growth during the past three decades has resulted in a number of environmental problems, including the deterioration of air quality. It is necessary to better understand how the spatial pattern of air pollutants varies with time scale and what drives these changes. To address these questions, this study focused on one of the most heavily air-polluted areas in North China. We first quantified the spatial pattern of air pollution, and then systematically examined the relationships of air pollution to several socioeconomic and climatic factors using the constraint-line method, correlation analysis, and stepwise regression on decadal, annual, and seasonal scales. Our results indicate that PM2.5 was the dominant air pollutant in the Beijing-Tianjin-Hebei region, while PM2.5 and PM10 were both important pollutants in the Agro-pastoral Transitional Zone (APTZ) region. Our statistical analyses suggest that energy consumption and industrial gross domestic product (GDP) were the most important factors for air pollution on the decadal scale, but the impacts of climatic factors could also be significant. On the annual and seasonal scales, high wind speed, low relative humidity, and long sunshine duration constrained PM2.5 accumulation; low wind speed and high relative humidity constrained PM10 accumulation; and short sunshine duration and high wind speed constrained O3 accumulation. Our study showed that analyses on multiple temporal scales are not only necessary to determine key drivers of air pollution, but also insightful for understanding the spatial patterns of air pollution, which is important for urban planning and air pollution control.
Human-Robot Teaming for Hydrologic Data Gathering at Multiple Scales
Peschel, J.; Young, S. N.
2017-12-01
The use of personal robot-assistive technology by researchers and practitioners for hydrologic data gathering has grown in recent years as barriers to platform capability, cost, and human-robot interaction have been overcome. One consequence to this growth is a broad availability of unmanned platforms that might or might not be suitable for a specific hydrologic investigation. Through multiple field studies, a set of recommendations has been developed to help guide novice through experienced users in choosing the appropriate unmanned platforms for a given application. This talk will present a series of hydrologic data sets gathered using a human-robot teaming approach that has leveraged unmanned aerial, ground, and surface vehicles over multiple scales. The field case studies discussed will be connected to the best practices, also provided in the presentation. This talk will be of interest to geoscience researchers and practitioners, in general, as well as those working in fields related to emerging technologies.
Lissner, Tabea; Reusser, Dominik
2015-04-01
Inadequate access to water is already a problem in many regions of the world, and processes of global change are expected to exacerbate the situation further. Many aspects determine the adequacy of water resources: besides actual physical water stress, where the resource itself is limited, economic and social water stress can be experienced if access to the resource is limited by inadequate infrastructure or by political or financial constraints. To assess the adequacy of water availability for human use, integrated approaches are needed that allow the multiple determinants to be viewed in conjunction and provide sound results as a basis for informed decisions. This contribution proposes a two-part integrated approach to examining the multiple dimensions of water scarcity at regional to global scales, developed in a joint project with the German Development Agency (GIZ). It first outlines the AHEAD approach to measure Adequate Human livelihood conditions for wEll-being And Development, implemented at global scale and national resolution. This first approach allows viewing impacts of climate change, e.g. changes in water availability, within the wider context of AHEAD conditions. A specific focus lies on the uncertainties in projections of climate change and future water availability. As adequate water access is not determined by water availability alone, in a second step we develop an approach to assess the water requirements of different sectors in more detail, including aspects of quantity, quality and access, in an integrated way. This more detailed approach is exemplified at regional scale in Indonesia and South Africa. Our results show that water scarcity is a limitation to AHEAD conditions in many countries, regardless of differing modelling outputs. The more detailed assessments highlight the relevance of additional aspects for assessing the adequacy of water for human use, showing that in many regions, quality and
SSB of Scale Symmetry, Fermion Families and Quintessence without the Long-Range Force Problem
Guendelman, E. I.; Kaganovich, A. B.
We study a scale-invariant two-measures theory in which the dilaton field φ has no explicit potential. The scale transformations include a shift of the dilaton, φ → φ + const. The theory demonstrates a new mechanism for the generation of an exponential potential: in the conformal Einstein frame (CEF), after SSB of scale invariance, the theory develops an exponential potential and, in general, a nonlinear kinetic term is generated as well. The scale symmetry does not allow the appearance of terms breaking the exponential shape of the potential, which solves the problem of the flatness of the scalar field potential in the context of quintessential scenarios. As examples, two different choices of the dimensionless parameters are presented for which the theory yields interesting cosmological results. For the first choice, the theory has the standard scaling solutions for φ usually used in the context of the quintessential scenario. For the second choice, the theory allows three different solutions, one of which is a scaling solution with equation of state pφ=wρφ where w is predicted to be restricted by -1
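For reference, the standard exponential-potential quintessence relations that the scaling solutions above allude to are (textbook results, quoted for orientation; the two-measures theory additionally modifies the kinetic term):

```latex
% Exponential quintessence potential, with \kappa^2 = 8\pi G:
V(\phi) = V_0\, e^{-\lambda \kappa \phi}
% Scaling solution tracking a background fluid of equation of state w_b
% (exists for \lambda^2 > 3(1+w_b)): the field mimics the background,
\Omega_\phi = \frac{3(1+w_b)}{\lambda^2}, \qquad w_\phi = w_b ,
% while on the scalar-dominated attractor (\lambda^2 < 6):
w_\phi = \frac{\lambda^2}{3} - 1 .
```

The flatness problem mentioned in the abstract is that radiative corrections generically spoil such a potential; here the residual scale symmetry forbids the offending terms.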
Effective field theory analysis on μ problem in low-scale gauge mediation
International Nuclear Information System (INIS)
Zheng Sibo
2012-01-01
Supersymmetric models based on the scenario of gauge mediation often suffer from the well-known μ problem. In this paper, we reconsider this problem in low-scale gauge mediation in terms of effective field theory analysis. In this paradigm, all high-energy input soft masses can be expressed via loop expansions. If the corrections coming from messenger thresholds are small, as we assume in this letter, then all RG evolutions can be taken in linear approximation for low-scale supersymmetry breaking. Owing to these observations, the parameter space can be systematically classified and studied after the constraints coming from electroweak symmetry breaking are imposed. We find that some old proposals in the literature are reproduced, and two new classes are uncovered. We refer to a microscopic model in which the specific relations among the coefficients in one of the new classes are well motivated. We also discuss some preliminary phenomenology.
A Large Scale Problem Based Learning inter-European Student Satellite Construction Project
DEFF Research Database (Denmark)
Nielsen, Jens Frederik Dalsgaard; Alminde, Lars; Bisgaard, Morten
2006-01-01
This paper describes the pedagogical outcome of a large-scale PBL experiment. In January 2004 the ESA (European Space Agency) Education Office launched an ambitious project: let students from all over Europe build a satellite. The satellite was successfully launched on October 27th 2005 (http://www.express.space.aau.dk). The project was a student-driven project with student project responsibility, adding a lot of international experience and project management skills to the outcome of a more traditional one-semester, single-group project. Electronic communication technology was vital within the project. Additionally, the SSETI EXPRESS project implied the following problems: it didn't fit a standard semester (18 months for the satellite project compared with 5/6 months for a "normal" semester project); difficulties in integrating the tasks...
Social and economic burden of walking and mobility problems in multiple sclerosis
Directory of Open Access Journals (Sweden)
Pike James
2012-09-01
Abstract. Background: Multiple sclerosis (MS) is a chronic progressive neurological disease, and the majority of patients will experience some degree of impaired mobility. We evaluated the prevalence, severity and burden of walking and mobility problems (WMPs) in 5 European countries. Methods: This was a cross-sectional, patient-record-based study involving 340 neurologists who completed detailed patient record forms (PRF) for patients (>18 years) attending their clinic with MS. Patients were also invited to complete a questionnaire (PSC). Information collected included demographics, disease characteristics, work productivity, quality of life (QoL; EuroQol-5D and Hamburg Quality of Life Questionnaire Multiple Sclerosis [HAQUAMS]) and mobility (subjective patient-reported, and objectively measured using the timed 25-foot walk test [T25FW]). Relationships between WMPs and disease and other characteristics were examined using chi-square tests. Analysis of variance was used to examine relationships between mobility measures and work productivity. Results: Records were available for 3572 patients, of whom 2171 also completed a PSC. WMPs were regarded as the most bothersome symptom by almost half of the patients who responded (43%; 291/683). There was a clear, independent and strong directional relationship between the severity of WMPs (subjective and objective) and healthcare resource utilisation. Patients with longer T25FW times (indicating greater walking impairment) were significantly more likely to require additional caregiver support (p …). Conclusions: In Europe, WMPs in MS represent a considerable personal and social burden, both financially and in terms of quality of life. Interventions to improve mobility could have significant benefits for patients and society as a whole.
Error analysis of dimensionless scaling experiments with multiple points using linear regression
International Nuclear Information System (INIS)
Gürcan, Ö.D.; Vermare, L.; Hennequin, P.; Bourdelle, C.
2010-01-01
A general method of error estimation in the case of multiple point dimensionless scaling experiments, using linear regression and standard error propagation, is proposed. The method reduces to the previous result of Cordey (2009 Nucl. Fusion 49 052001) in the case of a two-point scan. On the other hand, if the points follow a linear trend, it explains how the estimated error decreases as more points are added to the scan. Based on the analytical expression that is derived, it is argued that for a low number of points, adding points to the ends of the scanned range, rather than the middle, results in a smaller error estimate. (letter)
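The dependence of the estimated error on where the scan points are placed can be made concrete with the standard slope-error formula for simple linear regression; a minimal sketch with illustrative point placements, not the dimensionless scans of the paper:

```python
import numpy as np

def slope_with_error(x, y):
    """Least-squares slope b and its standard error for y ~ a + b*x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    b, a = np.polyfit(x, y, 1)
    s2 = np.sum((y - (a + b * x)) ** 2) / (n - 2)   # residual variance
    sxx = np.sum((x - x.mean()) ** 2)
    return b, np.sqrt(s2 / sxx)                      # sigma_b = sqrt(s2/Sxx)

# Two 5-point designs over the same range: extra points in the middle
# versus extra points at the ends of the scanned range.
x_mid = np.array([0.0, 0.45, 0.50, 0.55, 1.0])
x_end = np.array([0.0, 0.05, 0.50, 0.95, 1.0])
# For the same residual scatter s2, the slope error scales as 1/sqrt(Sxx),
# so the end-weighted design gives the smaller error estimate:
sxx_mid = np.sum((x_mid - x_mid.mean()) ** 2)
sxx_end = np.sum((x_end - x_end.mean()) ** 2)
print(sxx_mid, sxx_end)       # Sxx is larger for the end-weighted design

# Sanity check of the estimator on an exact linear trend:
b, sb = slope_with_error(x_end, 2.0 * x_end + 1.0)
print(b, sb)                   # slope 2, (near-)zero error
```

This mirrors the letter's two observations: adding points that follow the linear trend increases Sxx and so shrinks the error, and points at the ends of the range contribute more to Sxx than points in the middle.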
A Multiple-Item Scale for Assessing E-Government Service Quality
Papadomichelaki, Xenia; Mentzas, Gregoris
A critical element in the evolution of e-governmental services is the development of sites that better serve the citizens’ needs. To deliver superior service quality, we must first understand how citizens perceive and evaluate online citizen service. This involves defining what e-government service quality is, identifying its underlying dimensions, and determining how it can be conceptualized and measured. In this article we conceptualise an e-government service quality model (e-GovQual) and then we develop, refine, validate, confirm and test a multiple-item scale for measuring e-government service quality for public administration sites where citizens seek either information or services.
Dynamical properties of the growing continuum using multiple-scale method
Directory of Open Access Journals (Sweden)
Hynčík L.
2008-12-01
The theory of growth and remodeling is applied to a 1D continuum, which can serve, for example, as a model of a muscle fibre or a piezo-electric stack. A hyperelastic material described by the free energy potential suggested by Fung is used, and the change of stiffness is taken into account. The corresponding equations define a dynamical system with two degrees of freedom. Its stability and the properties of its bifurcations are studied using the multiple-scale method. The conditions under which a degenerate Hopf bifurcation occurs are shown.
Fission time-scale in experiments and in multiple initiation model
Energy Technology Data Exchange (ETDEWEB)
Karamian, S. A., E-mail: karamian@nrmail.jinr.ru [Joint Institute for Nuclear Research (Russian Federation)]
2011-12-15
The rate of fission of highly-excited nuclei is affected by the viscous character of the system motion in deformation coordinates, as has been reported for very heavy nuclei with Z_C > 90. The long time-scale of fission can be described by a model of 'fission by diffusion' that includes an assumption of overdamped diabatic motion. The fission-to-spallation ratio at intermediate proton energy could be influenced by the viscosity as well. Within the novel approach of the present work, cross-examination of the fission probability, time-scales, and pre-fission neutron multiplicities results in a consistent interpretation of the whole set of observables. Previously, the different aspects could only be reproduced in separate simulations without careful coordination.
Garnier, Aurélie; Pennekamp, Frank; Lemoine, Mélissa; Petchey, Owen L
2017-12-01
Global environmental change has negative impacts on ecological systems, threatening the stable provision of functions, goods, and services. Whereas the effects of individual environmental changes (e.g. temperature change or change in resource availability) are reasonably well understood, we lack information about whether and how multiple changes interact. We examined interactions among four types of environmental disturbance (temperature, nutrient ratio, carbon enrichment, and light) in a fully factorial design using a microbial aquatic ecosystem and observed responses of dissolved oxygen saturation at three temporal scales (resistance, resilience, and return time). We tested whether multiple disturbances combine in a dominant, additive, or interactive fashion, and compared the predictability of dissolved oxygen across scales. Carbon enrichment and shading reduced oxygen concentration in the short term (i.e. resistance); although no other effects or interactions were statistically significant, resistance decreased as the number of disturbances increased. In the medium term, only enrichment accelerated recovery, and none of the other effects (including interactions) were significant. In the long term, enrichment and shading lengthened return times, and we found significant two-way synergistic interactions between disturbances. The best performing model (dominant, additive, or interactive) depended on the temporal scale of response. In the short term (i.e. for resistance), the dominance model predicted resistance of dissolved oxygen best, owing to a large effect of carbon enrichment, whereas none of the models could predict the medium term (i.e. resilience). The long-term response was best predicted by models including interactions among disturbances. Our results indicate the importance of accounting for the temporal scale of responses when researching the effects of environmental disturbances on ecosystems. © 2017 The Authors. Global Change Biology Published by John Wiley
A multiple-time-scale approach to the control of ITBs on JET
Energy Technology Data Exchange (ETDEWEB)
Laborde, L.; Mazon, D.; Moreau, D. [EURATOM-CEA Association (DSM-DRFC), CEA Cadarache, 13 - Saint Paul lez Durance (France); Moreau, D. [Culham Science Centre, EFDA-JET, Abingdon, OX (United Kingdom); Ariola, M. [EURATOM/ENEA/CREATE Association, Univ. Napoli Federico II, Napoli (Italy); Cordoliani, V. [Ecole Polytechnique, 91 - Palaiseau (France); Tala, T. [EURATOM-Tekes Association, VTT Processes (Finland)
2005-07-01
The simultaneous real-time control of the current and temperature gradient profiles could lead to the steady-state sustainment of an internal transport barrier (ITB) and thus to a stationary optimized plasma regime. Recent experiments in JET have demonstrated significant progress in achieving such control: different current and temperature gradient target profiles have been reached and sustained for several seconds using a controller based on a static linear model. It is worth noting that the inverse safety factor profile evolves on a slow time scale (the resistive time) while the normalized electron temperature gradient reacts on a faster one (the confinement time). Moreover, these experiments have shown that the controller was sensitive to rapid plasma events, such as transient ITBs during the safety factor profile evolution or MHD instabilities, which modify the pressure profiles on the confinement time scale. In order to take into account the different dynamics of the controlled profiles and to react better to rapid plasma events, the control technique is being improved by using a multiple-time-scale approximation. The paper describes the theoretical analysis and closed-loop simulations using a control algorithm based on a two-time-scale state-space model. These closed-loop simulations, which use the full dynamic (but linear) model from the controller design to simulate the plasma response, demonstrate that the new controller reaches the normalized electron temperature gradient target profile faster than the one used in previous experiments. (A.C.)
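The two-time-scale structure invoked above can be illustrated with a minimal, hypothetical scalar model (not the JET plasma model): a fast state settles onto its quasi-steady value long before the slow state evolves, which is exactly the separation a two-time-scale controller exploits.

```python
# Hypothetical singularly perturbed pair: slow state x (safety-factor-like,
# resistive time) driven by a fast state z (temperature-gradient-like,
# confinement time), integrated with forward Euler.
eps = 0.01          # ratio of the fast to the slow time constant
dt = 1e-4
u = 1.0             # constant actuator input
x, z = 0.0, 0.0
for _ in range(int(5.0 / dt)):   # five slow time units
    x += dt * (-x + z)           # slow dynamics: x' = -x + z
    z += dt / eps * (-z + u)     # fast dynamics: eps * z' = -z + u
print(x, z)
```

The fast state reaches its quasi-steady value z ≈ u almost immediately, after which the slow state relaxes as x(t) ≈ 1 − e^{−t}; a two-time-scale design treats z as algebraically slaved to u when controlling x.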
Directory of Open Access Journals (Sweden)
Shem D Unger
Conservation genetics is a powerful tool to assess the population structure of species and provides a framework for informing management of freshwater ecosystems. As lotic habitats become fragmented, the need to assess gene flow for species of conservation management becomes a priority. The eastern hellbender (Cryptobranchus alleganiensis alleganiensis) is a large, fully aquatic paedomorphic salamander. Many populations are experiencing declines throughout their geographic range, yet the genetic ramifications of these declines are currently unknown. To this end, we examined levels of genetic variation and genetic structure at both range-wide and drainage (hierarchical) scales. We collected 1,203 individuals from 77 rivers throughout nine states from June 2007 to August 2011. Levels of genetic diversity were relatively high among all sampling locations. We detected significant genetic structure across populations (Fst values ranged from 0.001 between rivers within a single watershed to 0.218 between states). We identified two genetically differentiated groups at the range-wide scale: (1) the Ohio River drainage and (2) the Tennessee River drainage. An analysis of molecular variance (AMOVA) based on landscape-scale sampling of basins within the Tennessee River drainage revealed that the majority of genetic variation (∼94-98%) occurs within rivers. Eastern hellbenders show a strong pattern of isolation by stream distance (IBSD) at the drainage level. Understanding levels of genetic variation and differentiation at multiple spatial and biological scales will enable natural resource managers to make more informed decisions and plan effective conservation strategies for cryptic, lotic species.
Seeing the forest through the trees: Considering roost-site selection at multiple spatial scales
Jachowski, David S.; Rota, Christopher T.; Dobony, Christopher A.; Ford, W. Mark; Edwards, John W.
2016-01-01
Conservation of bat species is one of the most daunting wildlife conservation challenges in North America, requiring detailed knowledge about their ecology to guide conservation efforts. Outside of the hibernating season, bats in temperate forest environments spend their diurnal time in day-roosts. Beyond providing simple shelter, summer roosts are critical as maternity sites and for maintaining social group contact. To date, a major focus of bat conservation has concentrated on conserving individual roost sites, with comparatively less focus on the role that broader habitat conditions contribute towards roost-site selection. We evaluated roost-site selection by a northern population of federally-endangered Indiana bats (Myotis sodalis) at Fort Drum Military Installation in New York, USA at three different spatial scales: landscape, forest stand, and individual tree level. During 2007–2011, we radiotracked 33 Indiana bats (10 males, 23 females) and located 348 roosting events in 116 unique roost trees. At the landscape scale, bat roost-site selection was positively associated with northern mixed forest, increased slope, and greater distance from human development. At the stand scale, we observed subtle differences in roost-site selection based on sex and season, but roost selection was generally positively associated with larger stands with a higher basal area, larger tree diameter, and a greater sugar maple (Acer saccharum) component. We observed no distinct trends of roosts being near high-quality foraging areas of water and forest edges. At the tree scale, roosts were typically in American elm (Ulmus americana) or sugar maple of large diameter (>30 cm) and moderate decay with loose bark. Collectively, our results highlight the importance of considering day-roost needs simultaneously across multiple spatial scales. Size and decay class of individual roosts are key ecological attributes for the Indiana bat; however, larger-scale stand structural
Validation of the Turkish version of the problem areas in diabetes scale
DEFF Research Database (Denmark)
Huis In 't Veld, Elisabeth M J; Makine, Ceylan; Nouwen, Arie
2011-01-01
The Problem Areas in Diabetes (PAID) scale is a widely used self-report measure that can facilitate detection of diabetes-specific emotional distress in clinical practice. The aim of this study was to assess the factor structure and validity of the Turkish version of the PAID. A validation study was conducted among 154 patients with insulin-naïve type 2 diabetes. Participants completed the PAID, Centre for Epidemiological Studies Depression Scale (CES-D), Insulin Treatment Appraisal Scale (ITAS), and World Health Organization-Five Well-Being Index (WHO-5) questionnaires. Exploratory factor analyses yielded a 2-factor structure, identifying a 15-item "diabetes distress" factor and a 5-item "support-related issues" factor. The total PAID score and the two dimensions were associated with higher levels of depression and poor emotional well-being. In the present study, the Turkish version of the PAID had ...
[Spanish adaptation of the "Mobile Phone Problem Use Scale" for adolescent population].
López-Fernández, Olatz; Honrubia-Serrano, Ma Luisa; Freixa-Blanxart, Montserrat
2012-01-01
Problematic use of the mobile telephone is an emerging phenomenon in our society, and one which particularly affects the teenage population. Knowledge from research on the problematic use of this technology is necessary, since such use can give rise to a behavioural pattern with addictive characteristics. There are hardly any scales for measuring possible problematic use of mobile phones, and none at all adapted exclusively for the Spanish adolescent population. The scale most widely used internationally is the Mobile Phone Problem Use Scale (MPPUS). The aim of the present study is to adapt the MPPUS for use with Spanish adolescents. The Spanish version of the questionnaire was administered to a sample of 1132 adolescents aged 12 to 18. Reliability and factorial validity were comparable to those obtained in the adult population, so that the measure of problematic mobile phone use in Spanish teenagers is one-dimensional. A prevalence of 14.8% of problematic users was detected.
FEM × DEM: a new efficient multi-scale approach for geotechnical problems with strain localization
Directory of Open Access Journals (Sweden)
Nguyen Trung Kien
2017-01-01
The paper presents a multi-scale approach to modeling Boundary Value Problems (BVPs) involving cohesive-frictional granular materials in the FEM × DEM multi-scale framework. On the DEM side, a 3D model is defined based on the interactions of spherical particles. This DEM model is built through a numerical homogenization process applied to a Volume Element (VE). It is then paired with a Finite Element code. Using this numerical tool that combines two scales within the same framework, we conducted simulations of biaxial and pressuremeter tests on a cohesive-frictional granular medium. In these cases, it is known that strain localization does occur at the macroscopic level, but since FEMs suffer from severe mesh dependency as soon as a shear band starts to develop, the second-gradient regularization technique has been used. As a consequence, the objectivity of the computation with respect to mesh dependency is restored.
Kahle, L R; Kulka, R A; Klingel, D M
1980-09-01
This article reports the results of a study that annually monitored the self-esteem and interpersonal problems of over 100 boys during their sophomore, junior, and senior years of high school. Cross-lagged panel correlation differences show that low self-esteem leads to interpersonal problems in all three time lags when multiple interpersonal problems constitute the dependent variable but not when single interpersonal problem criteria constitute the dependent variable. These results are interpreted as supporting social-adaptation theory rather than self-perception theory. Implications for the conceptual status of personality variables as causal antecedents and for the assessment of individual differences are discussed.
Seesaw induced electroweak scale, the hierarchy problem and sub-eV neutrino masses
International Nuclear Information System (INIS)
Atwood, D.; Bar-Shalom, S.; Soni, A.
2006-01-01
We describe a model for the scalar sector where all interactions occur either at an ultra-high scale, Λ_U ~ 10^16-10^19 GeV, or at an intermediate scale, Λ_I = 10^9-10^11 GeV. The interaction of physics on these two scales results in an SU(2) Higgs condensate at the electroweak (EW) scale, Λ_EW, through a seesaw-like Higgs mechanism, Λ_EW ∝ Λ_I^2/Λ_U, while the breaking of the SM SU(2) × U(1) gauge symmetry occurs at the intermediate scale Λ_I. The EW scale is, therefore, not fundamental but is naturally generated in terms of ultra-high energy phenomena, and so the hierarchy problem is alleviated. We show that this class of "seesaw Higgs" models predicts the existence of sub-eV neutrino masses, which are generated through a "two-step" seesaw mechanism in terms of the same two ultra-high scales: m_ν ∝ Λ_I^4/Λ_U^3 ∝ Λ_EW^2/Λ_U. The neutrinos can be either Dirac or Majorana, depending on the structure of the scalar potential. We also show that our seesaw Higgs model can be naturally embedded in theories with tiny extra dimensions of size R ~ Λ_U^-1 ~ 10^-16 fm, where the seesaw-induced EW scale arises from a violation of a symmetry at a distant brane; in particular, in the scenario presented there are seven tiny extra dimensions. (orig.)
Learmonth, Yvonne C; Motl, Robert W; Sandroff, Brian M; Pula, John H; Cadavid, Diego
2013-04-25
The Patient Determined Disease Steps (PDDS) is a promising patient-reported outcome (PRO) of disability in multiple sclerosis (MS). To date, there is limited evidence regarding the validity of PDDS scores, despite its sound conceptual development and broad inclusion in MS research. This study examined the validity of the PDDS based on (1) the association with Expanded Disability Status Scale (EDSS) scores and (2) the pattern of associations between PDDS and EDSS scores with Functional System (FS) scores as well as ambulatory and other outcomes. 96 persons with MS provided demographic/clinical information, completed the PDDS and other PROs including the Multiple Sclerosis Walking Scale-12 (MSWS-12), and underwent a neurological examination for generating FS and EDSS scores. Participants completed assessments of cognition and ambulation, including the 6-minute walk (6MW), and wore an accelerometer during waking hours over seven days. There was a strong correlation between EDSS and PDDS scores (ρ = .783). PDDS and EDSS scores were strongly correlated with Pyramidal (ρ = .578 and ρ = .647, respectively) and Cerebellar (ρ = .501 and ρ = .528, respectively) FS scores as well as 6MW distance (ρ = .704 and ρ = .805, respectively), MSWS-12 scores (ρ = .801 and ρ = .729, respectively), and accelerometer steps/day (ρ = -.740 and ρ = -.717, respectively). This study provides novel evidence supporting the PDDS as a valid PRO of disability in MS.
Scaling of charged particle multiplicity in Pb-Pb collisions at SPS energies
Abreu, M C; Alexa, C; Arnaldi, R; Ataian, M R; Baglin, C; Baldit, A; Bedjidian, Marc; Beolè, S; Boldea, V; Bordalo, P; Borges, G; Bussière, A; Capelli, L; Castanier, C; Castor, J I; Chaurand, B; Chevrot, I; Cheynis, B; Chiavassa, E; Cicalò, C; Claudino, T; Comets, M P; Constans, N; Constantinescu, S; Cortese, P; De Falco, A; De Marco, N; Dellacasa, G; Devaux, A; Dita, S; Drapier, O; Ducroux, L; Espagnon, B; Fargeix, J; Force, P; Gallio, M; Gavrilov, Yu K; Gerschel, C; Giubellino, P; Golubeva, M B; Gonin, M; Grigorian, A A; Grigorian, S; Grossiord, J Y; Guber, F F; Guichard, A; Gulkanian, H R; Hakobyan, R S; Haroutunian, R; Idzik, M; Jouan, D; Karavitcheva, T L; Kluberg, L; Kurepin, A B; Le Bornec, Y; Lourenço, C; Macciotta, P; MacCormick, M; Marzari-Chiesa, A; Masera, M; Masoni, A; Monteno, M; Musso, A; Petiau, P; Piccotti, A; Pizzi, J R; Prado da Silva, W L; Prino, F; Puddu, G; Quintans, C; Ramello, L; Ramos, S; Rato-Mendes, P; Riccati, L; Romana, A; Santos, H; Saturnini, P; Scalas, E; Scomparin, E; Serci, S; Shahoyan, R; Sigaudo, F; Silva, S; Sitta, M; Sonderegger, P; Tarrago, X; Topilskaya, N S; Usai, G L; Vercellin, Ermanno; Villatte, L; Willis, N
2002-01-01
The charged particle multiplicity distribution $dN_{ch}/d\eta$ has been measured by the NA50 experiment in Pb-Pb collisions at the CERN SPS. Measurements were done at incident energies of 40 and 158 GeV per nucleon over a broad impact parameter range. The multiplicity distributions are studied as a function of centrality using the number of participating nucleons ($N_{part}$), or the number of binary nucleon-nucleon collisions ($N_{coll}$). Their values at midrapidity exhibit a power law scaling behaviour given by $N_{part}^{1.00}$ and $N_{coll}^{0.75}$ at 158 GeV. Compatible results are found for the scaling behaviour at 40 GeV. The width of the $dN_{ch}/d\eta$ distributions is larger at 158 than at 40 GeV/nucleon and decreases slightly with centrality at both energies. Our results are compared to similar studies performed by other experiments both at the CERN SPS and at RHIC.
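A power-law centrality scaling of the kind quoted above is conventionally extracted as a straight-line fit in log-log coordinates. The sketch below uses synthetic, illustrative numbers (not NA50 data) to show the fit recovering an assumed exponent α = 1.00:

```python
import numpy as np

# Illustrative midrapidity multiplicities generated to scale as Npart^alpha;
# both the Npart values and the prefactor 0.75 are made up for the demo.
npart = np.array([50.0, 100.0, 150.0, 250.0, 350.0])
alpha_true = 1.00
dndeta = 0.75 * npart ** alpha_true

# Least-squares line in log-log space: slope = scaling exponent.
alpha_fit, log_c = np.polyfit(np.log(npart), np.log(dndeta), 1)
print(alpha_fit)
```

With real data the points scatter about the line, and the slope uncertainty comes from the same OLS slope-variance formula used in scaling-scan error analysis.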
Cut-off scaling and multiplicative reformalization in the theory of critical phenomena
International Nuclear Information System (INIS)
Forgacs, G.; Solyom, J.; Zawadowski, A.
1976-03-01
In this paper a new method to study the critical fluctuations in systems of 4-ε dimensions around the phase transition point is developed. This method unifies the Kadanoff scaling hypothesis, as formulated by Wilson with the help of his renormalization-group technique, and the simple mathematical structure of the Lie equations of Gell-Mann-Low multiplicative renormalization. The basic idea of the new method is that a change in the physical cut-off can be compensated by an effective coupling in such a way that the Green's function and vertex in the original and transformed systems differ only by a multiplicative factor. The critical indices, the anomalous dimensions and the critical exponent describing the correction to scaling are determined to second order in ε. The specific heat exponent is also calculated; in four dimensions the effect of fluctuations appears in the form of logarithmic corrections. In the final sections the new method is compared to other approaches and the differences are discussed. (Sz.N.Z.)
Managing multiple roles: development of the Work-Family Conciliation Strategies Scale.
Matias, Marisa; Fontaine, Anne Marie
2014-07-17
Juggling the demands of work and family is becoming increasingly difficult in today's world. As dual-earners are now a majority and men's and women's roles in both the workplace and at home have changed, questions have been raised regarding how individuals and couples can balance family and work. Nevertheless, research addressing work-family conciliation strategies is limited to a conflict-driven approach, and context-specific instruments are scarce. This study develops an instrument for assessing how dual-earners manage their multiple roles, moving away from a conflict point of view and highlighting the work-family conciliation strategies put forward by these couples. Through qualitative and quantitative procedures the Work-Family Conciliation Strategies Scale was developed; it comprises five factors: Couple Coping, Positive Attitude Towards Multiple Roles, Planning and Management Skills, Professional Adjustments, and Institutional Support, with good fit (χ²/df = 1.22; CFI = .90; RMSEA = .04; SRMR = .08) and good reliability coefficients (from .67 to .87). The developed scale contributes to research because of its specificity to the work-family framework and its focus on the proactive nature of balancing work and family roles. The results support further use of this instrument.
Lu, Mengqian; Lall, Upmanu; Robertson, Andrew W.; Cook, Edward
2017-03-01
Streamflow forecasts at multiple time scales provide a new opportunity for reservoir management to address competing objectives. Market instruments such as forward contracts with specified reliability are considered as a tool that may help address the perceived risk associated with the use of such forecasts in lieu of traditional operation and allocation strategies. A water allocation process that enables multiple contracts for water supply and hydropower production with different durations, while maintaining a prescribed level of flood risk reduction, is presented. The allocation process is supported by an optimization model that considers multi-time-scale ensemble forecasts of monthly streamflow and flood volume over the upcoming season and year, the desired reliability, and the pricing of proposed contracts for hydropower and water supply. It solves for the size of contracts at each reliability level that can be allocated for each future period, while meeting target end-of-period reservoir storage with a prescribed reliability. The contracts may be insurable, given that their reliability is verified through retrospective modeling. The process can allow reservoir operators to overcome their concerns as to the appropriate skill of probabilistic forecasts, while providing water users with short-term and long-term guarantees as to how much water or energy they may be allocated. An application of the optimization model to the Bhakra Dam, India, provides an illustration of the process. The issues of forecast skill and contract performance are examined. A field engagement of the idea would be useful to develop a real-world perspective, and needs a suitable institutional environment.
Multiple-scale structures: from Faraday waves to soft-matter quasicrystals
Directory of Open Access Journals (Sweden)
Samuel Savitz
2018-05-01
For many years, quasicrystals were observed only as solid-state metallic alloys, yet current research is now actively exploring their formation in a variety of soft materials, including systems of macromolecules, nanoparticles and colloids. Much effort is being invested in understanding the thermodynamic properties of these soft-matter quasicrystals in order to predict and possibly control the structures that form, and hopefully to shed light on the broader yet unresolved general questions of quasicrystal formation and stability. Moreover, the ability to control the self-assembly of soft quasicrystals may contribute to the development of novel photonics or other applications based on self-assembled metamaterials. Here a path is followed, leading to quantitative stability predictions, that starts with a model developed two decades ago to treat the formation of multiple-scale quasiperiodic Faraday waves (standing wave patterns in vibrating fluid surfaces) and which was later mapped onto systems of soft particles, interacting via multiple-scale pair potentials. The article reviews, and substantially expands, the quantitative predictions of these models, while correcting a few discrepancies in earlier calculations, and presents new analytical methods for treating the models. In so doing, a number of new stable quasicrystalline structures are found with octagonal, octadecagonal and higher-order symmetries, some of which may, it is hoped, be observed in future experiments.
A Spatial Framework to Map Heat Health Risks at Multiple Scales.
Ho, Hung Chak; Knudby, Anders; Huang, Wei
2015-12-18
In the last few decades extreme heat events have led to substantial excess mortality, most dramatically in Central Europe in 2003, in Russia in 2010, and even in typically cool locations such as Vancouver, Canada, in 2009. Heat-related morbidity and mortality is expected to increase over the coming centuries as the result of climate-driven global increases in the severity and frequency of extreme heat events. Spatial information on heat exposure and population vulnerability may be combined to map the areas of highest risk and focus mitigation efforts there. However, a mismatch in spatial resolution between heat exposure and vulnerability data can cause spatial scale issues such as the Modifiable Areal Unit Problem (MAUP). We used a raster-based model to integrate heat exposure and vulnerability data in a multi-criteria decision analysis, and compared it to the traditional vector-based model. We then used the Getis-Ord Gi* index to generate spatially smoothed heat risk hotspot maps from fine to coarse spatial scales. The raster-based model allowed production of maps at finer spatial resolution, better description of local-scale heat risk variability, and identification of heat-risk areas not identified with the vector-based approach. Spatial smoothing with the Getis-Ord Gi* index produced heat risk hotspots from local to regional spatial scale. The approach is a framework for reducing spatial scale issues in future heat risk mapping, and for identifying heat risk hotspots at spatial scales ranging from the block level to the municipality level.
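A minimal sketch of the hotspot statistic named above, using the standard Getis-Ord Gi* formula with binary 3×3 (queen plus self) weights on a toy raster (not the paper's data or implementation):

```python
import numpy as np

def getis_ord_gi_star(grid):
    """Gi* statistic per raster cell with a binary 3x3 weight window.

    Gi* = (sum_j w_ij x_j - xbar * W) / (S * sqrt((n*W2 - W^2)/(n - 1))),
    where W = sum_j w_ij and W2 = sum_j w_ij^2 (equal for binary weights).
    """
    n = grid.size
    xbar = grid.mean()
    s = np.sqrt((grid ** 2).mean() - xbar ** 2)
    out = np.zeros_like(grid, dtype=float)
    rows, cols = grid.shape
    for i in range(rows):
        for j in range(cols):
            r0, r1 = max(0, i - 1), min(rows, i + 2)
            c0, c1 = max(0, j - 1), min(cols, j + 2)
            window = grid[r0:r1, c0:c1]
            w = window.size                 # count of binary weights
            denom = s * np.sqrt((n * w - w ** 2) / (n - 1))
            out[i, j] = (window.sum() - xbar * w) / denom
    return out

# A flat 7x7 "heat exposure" raster with one hot block in the centre.
heat = np.zeros((7, 7))
heat[2:5, 2:5] = 1.0
gi = getis_ord_gi_star(heat)
print(gi[3, 3], gi[0, 0])
```

Large positive Gi* values flag hotspots (the centre cell here), while cells far from the hot block come out slightly negative.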
A New Approach to Adaptive Control of Multiple Scales in Plasma Simulations
Omelchenko, Yuri
2007-04-01
A new approach to temporal refinement of kinetic (Particle-in-Cell, Vlasov) and fluid (MHD, two-fluid) simulations of plasmas is presented: Discrete-Event Simulation (DES). DES adaptively distributes CPU resources in accordance with local time scales and enables asynchronous integration of inhomogeneous nonlinear systems with multiple time scales on meshes of arbitrary topologies. This removes computational penalties usually incurred in explicit codes due to the global Courant-Friedrichs-Lewy (CFL) restriction on the time-step size. DES stands apart from multiple time-stepping algorithms in that it requires neither selecting a global synchronization time step nor pre-determining a sequence of time-integration operations for individual parts of the system (local time increments need not bear any integer multiple relations). Instead, elements of a mesh-distributed solution self-adaptively predict and synchronize their temporal trajectories by directly enforcing local causality (accuracy) constraints, which are formulated in terms of incremental changes to the evolving solution. Together with flux-conservative propagation of information, this new paradigm ensures stable and fast asynchronous runs, where idle computation is automatically eliminated. DES is parallelized via a novel Preemptive Event Processing (PEP) technique, which automatically synchronizes elements with similar update rates. In this mode, events with close execution times are projected onto time levels, which are adaptively determined by the program. PEP allows reuse of standard message-passing algorithms on distributed architectures. For optimum accuracy, DES can be combined with adaptive mesh refinement (AMR) techniques for structured and unstructured meshes. Current examples of event-driven models range from electrostatic, hybrid particle-in-cell plasma systems to reactive fluid dynamics simulations. They demonstrate the superior performance of DES in terms of accuracy, speed and robustness.
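The core scheduling idea of DES, in which each element advances on its own local time step and a priority queue always serves the earliest pending event, can be sketched as follows (a toy two-cell example with made-up local time steps, not the paper's framework):

```python
import heapq

# Hypothetical local time steps: the "fast" cell must update 8x as often.
local_dt = {"fast_cell": 0.125, "slow_cell": 1.0}
updates = {name: 0 for name in local_dt}

# Event queue of (next update time, cell name); the heap always pops
# the chronologically earliest event, so cells advance asynchronously.
events = [(dt, name) for name, dt in local_dt.items()]
heapq.heapify(events)

t_end = 10.0
while events:
    t, name = heapq.heappop(events)
    if t > t_end:
        continue                 # past the horizon: retire this cell
    updates[name] += 1           # "integrate" this cell up to time t
    heapq.heappush(events, (t + local_dt[name], name))

print(updates)
```

No global synchronization step is ever chosen: the fast cell gets 80 updates and the slow cell 10, with idle work on the slow cell automatically eliminated.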
Solving the three-body Coulomb breakup problem using exterior complex scaling
Energy Technology Data Exchange (ETDEWEB)
McCurdy, C.W.; Baertschy, M.; Rescigno, T.N.
2004-05-17
Electron-impact ionization of the hydrogen atom is the prototypical three-body Coulomb breakup problem in quantum mechanics. The combination of subtle correlation effects and the difficult boundary conditions required to describe two electrons in the continuum have made this one of the outstanding challenges of atomic physics. A complete solution of this problem in the form of a "reduction to computation" of all aspects of the physics is given by the application of exterior complex scaling, a modern variant of the mathematical tool of analytic continuation of the electronic coordinates into the complex plane that was used historically to establish the formal analytic properties of the scattering matrix. This review first discusses the essential difficulties of the three-body Coulomb breakup problem in quantum mechanics. It then describes the formal basis of exterior complex scaling of electronic coordinates as well as the details of its numerical implementation using a variety of methods including finite difference, finite elements, discrete variable representations, and B-splines. Given these numerical implementations of exterior complex scaling, the scattering wave function can be generated with arbitrary accuracy on any finite volume in the space of electronic coordinates, but there remains the fundamental problem of extracting the breakup amplitudes from it. Methods are described for evaluating these amplitudes. The question of the volume-dependent overall phase that appears in the formal theory of ionization is resolved. A summary is presented of accurate results that have been obtained for the case of electron-impact ionization of hydrogen as well as a discussion of applications to the double photoionization of helium.
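The essential effect of exterior complex scaling can be stated in one line: beyond the scaling radius R0, rotating the coordinate as r → R0 + (r − R0)e^{iθ} turns an outgoing wave e^{ikr} into a function decaying like e^{−k(r−R0)sin θ}, so outgoing boundary conditions become trivial on a finite grid. The sketch below checks this numerically (k, R0 and θ are arbitrary illustrative values):

```python
import numpy as np

k, R0, theta = 1.0, 5.0, np.pi / 6
r = np.linspace(0.0, 20.0, 401)

# Exterior scaling: identity inside R0, rotated into the complex plane outside.
r_scaled = np.where(r < R0, r, R0 + (r - R0) * np.exp(1j * theta))

# An outgoing wave evaluated on the scaled contour.
psi = np.exp(1j * k * r_scaled)
mag = np.abs(psi)
print(mag[0], mag[200], mag[-1])
```

Inside R0 the wave keeps unit modulus; outside, its magnitude falls off exponentially, which is what allows the scattering wave function to be computed on a finite volume.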
Directory of Open Access Journals (Sweden)
Andrew Siefert
Despite increasing evidence of the importance of intraspecific trait variation in plant communities, its role in community trait responses to environmental variation, particularly along broad-scale climatic gradients, is poorly understood. We analyzed functional trait variation among early-successional herbaceous plant communities (old fields) across a 1200-km latitudinal extent in eastern North America, focusing on four traits: vegetative height, leaf area, specific leaf area (SLA), and leaf dry matter content (LDMC). We determined the contributions of species turnover and intraspecific variation to between-site functional dissimilarity at multiple spatial scales and community trait responses to edaphic and climatic factors. Among-site variation in community mean trait values and community trait responses to the environment were generated by a combination of species turnover and intraspecific variation, with species turnover making a greater contribution for all traits. The relative importance of intraspecific variation decreased with increasing geographic and environmental distance between sites for SLA and leaf area. Intraspecific variation was most important for responses of vegetative height and responses to edaphic compared to climatic factors. Individual species displayed strong trait responses to environmental factors in many cases, but these responses were highly variable among species and did not usually scale up to the community level. These findings provide new insights into the role of intraspecific trait variation in plant communities and the factors controlling its relative importance. The contribution of intraspecific variation to community trait responses was greatest at fine spatial scales and along edaphic gradients, while species turnover dominated at broad spatial scales and along climatic gradients.
Directory of Open Access Journals (Sweden)
Jihoon Oh
2017-09-01
Classification and prediction of suicide attempts in high-risk groups is important for preventing suicide. The purpose of this study was to investigate whether the information from multiple clinical scales has classification power for identifying actual suicide attempts. Patients with depression and anxiety disorders (N = 573) were included, and each participant completed 31 self-report psychiatric scales and questionnaires about their history of suicide attempts. We then trained an artificial neural network classifier with 41 variables (31 psychiatric scales and 10 sociodemographic elements) and ranked the contribution of each variable for the classification of suicide attempts. To evaluate the clinical applicability of our model, we measured classification performance with top-ranked predictors. Our model had an overall accuracy of 93.7% in 1-month, 90.8% in 1-year, and 87.4% in lifetime suicide attempts detection. The area under the receiver operating characteristic curve (AUROC) was the highest for 1-month suicide attempts detection (0.93), followed by lifetime (0.89), and 1-year detection (0.87). Among all variables, the Emotion Regulation Questionnaire had the highest contribution, and the positive and negative characteristics of the scales similarly contributed to classification performance. Performance on suicide attempts classification was largely maintained when we only used the top five ranked variables for training (AUROC; 1-month, 0.75, 1-year, 0.85, lifetime suicide attempts detection, 0.87). Our findings indicate that information from self-report clinical scales can be useful for the classification of suicide attempts. Based on the reliable performance of the top five predictors alone, this machine learning approach could help clinicians identify high-risk patients in clinical settings.
Large-scale recovery of an endangered amphibian despite ongoing exposure to multiple stressors
Knapp, Roland A.; Fellers, Gary M.; Kleeman, Patrick M.; Miller, David A. W.; Vredenburg, Vance T.; Rosenblum, Erica Bree; Briggs, Cheryl J.
2016-01-01
Amphibians are one of the most threatened animal groups, with 32% of species at risk for extinction. Given this imperiled status, is the disappearance of a large fraction of the Earth’s amphibians inevitable, or are some declining species more resilient than is generally assumed? We address this question in a species that is emblematic of many declining amphibians, the endangered Sierra Nevada yellow-legged frog (Rana sierrae). Based on >7,000 frog surveys conducted across Yosemite National Park over a 20-y period, we show that, after decades of decline and despite ongoing exposure to multiple stressors, including introduced fish, the recently emerged disease chytridiomycosis, and pesticides, R. sierrae abundance increased sevenfold during the study and at a rate of 11% per year. These increases occurred in hundreds of populations throughout Yosemite, providing a rare example of amphibian recovery at an ecologically relevant spatial scale. Results from a laboratory experiment indicate that these increases may be in part because of reduced frog susceptibility to chytridiomycosis. The disappearance of nonnative fish from numerous water bodies after cessation of stocking also contributed to the recovery. The large-scale increases in R. sierrae abundance that we document suggest that, when habitats are relatively intact and stressors are reduced in their importance by active management or species’ adaptive responses, declines of some amphibians may be partially reversible, at least at a regional scale. Other studies conducted over similarly large temporal and spatial scales are critically needed to provide insight and generality about the reversibility of amphibian declines at a global scale.
Termites Are Resistant to the Effects of Fire at Multiple Spatial Scales.
Directory of Open Access Journals (Sweden)
Sarah C Avitabile
Termites play an important ecological role in many ecosystems, particularly in nutrient-poor arid and semi-arid environments. We examined the distribution and occurrence of termites in the fire-prone, semi-arid mallee region of south-eastern Australia. In addition to periodic large wildfires, land managers use fire as a tool to achieve both asset protection and ecological outcomes in this region. Twelve taxa of termites were detected by using systematic searches and grids of cellulose baits at 560 sites, clustered in 28 landscapes selected to represent different fire mosaic patterns. There was no evidence of a significant relationship between the occurrence of termite species and time-since-fire at the site scale. Rather, the occurrence of species was related to habitat features such as the density of mallee trees and large logs (>10 cm diameter). Species richness was greater in chenopod mallee vegetation on heavier soils in swales, rather than Triodia mallee vegetation of the sandy dune slopes. At the landscape scale, there was little evidence that the frequency of occurrence of termite species was related to fire, and no evidence that habitat heterogeneity generated by fire influenced termite species richness. The most influential factor at the landscape scale was the environmental gradient represented by average annual rainfall. Although termites may be associated with flammable habitat components (e.g. dead wood), they appear to be buffered from the effects of fire by behavioural traits, including nesting underground, and the continued availability of dead wood after fire. There is no evidence to support the hypothesis that a fine-scale, diverse mosaic of post-fire age-classes will enhance the diversity of termites. Rather, termites appear to be resistant to the effects of fire at multiple spatial scales.
Oh, Jihoon; Yun, Kyongsik; Hwang, Ji-Hyun; Chae, Jeong-Ho
2017-01-01
Classification and prediction of suicide attempts in high-risk groups is important for preventing suicide. The purpose of this study was to investigate whether the information from multiple clinical scales has classification power for identifying actual suicide attempts. Patients with depression and anxiety disorders ( N = 573) were included, and each participant completed 31 self-report psychiatric scales and questionnaires about their history of suicide attempts. We then trained an artificial neural network classifier with 41 variables (31 psychiatric scales and 10 sociodemographic elements) and ranked the contribution of each variable for the classification of suicide attempts. To evaluate the clinical applicability of our model, we measured classification performance with top-ranked predictors. Our model had an overall accuracy of 93.7% in 1-month, 90.8% in 1-year, and 87.4% in lifetime suicide attempts detection. The area under the receiver operating characteristic curve (AUROC) was the highest for 1-month suicide attempts detection (0.93), followed by lifetime (0.89), and 1-year detection (0.87). Among all variables, the Emotion Regulation Questionnaire had the highest contribution, and the positive and negative characteristics of the scales similarly contributed to classification performance. Performance on suicide attempts classification was largely maintained when we only used the top five ranked variables for training (AUROC; 1-month, 0.75, 1-year, 0.85, lifetime suicide attempts detection, 0.87). Our findings indicate that information from self-report clinical scales can be useful for the classification of suicide attempts. Based on the reliable performance of the top five predictors alone, this machine learning approach could help clinicians identify high-risk patients in clinical settings.
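The pipeline in this record (train a network on many scale scores, then rank each variable's contribution) can be sketched on synthetic data. Everything below, including the sample size, the five "scores", the tiny network, and permutation importance as the ranking rule, is an illustrative assumption, not the study's 41-variable model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the clinical data: 300 "patients", 5 "scale scores";
# only scores 0 and 1 actually drive the outcome (score 1 twice as strongly).
X = rng.normal(size=(300, 5))
y = (X[:, 0] + 2.0 * X[:, 1] + 0.3 * rng.normal(size=300) > 0).astype(float)

# One-hidden-layer network trained by full-batch gradient descent on log-loss.
W1 = rng.normal(scale=0.5, size=(5, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=8); b2 = 0.0
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    g = (p - y) / len(y)                     # d(log-loss)/d(logit)
    gh = np.outer(g, W2) * (1.0 - h ** 2)    # backprop through tanh
    W2 -= 0.5 * (h.T @ g); b2 -= 0.5 * g.sum()
    W1 -= 0.5 * (X.T @ gh); b1 -= 0.5 * gh.sum(axis=0)

h = np.tanh(X @ W1 + b1)
p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
acc = float(((p > 0.5) == (y > 0.5)).mean())

# Rank each variable's contribution by permutation importance: shuffle one
# column, re-evaluate, and record the drop in accuracy.
drops = []
for j in range(5):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    hp = np.tanh(Xp @ W1 + b1)
    pp = 1.0 / (1.0 + np.exp(-(hp @ W2 + b2)))
    drops.append(acc - float(((pp > 0.5) == (y > 0.5)).mean()))
ranking = list(np.argsort(drops)[::-1])      # most influential variable first
```

As in the study, performance can then be re-measured with only the top-ranked variables to check how much classification power they retain on their own.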
Zhao, Jing; Zong, Haili
2018-01-01
In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the cyclic and parallel iterative processes to propose two mixed iterative algorithms. None of the proposed algorithms needs any prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms for solving the multiple-set split equality problem.
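For orientation, the split equality problem (find x in C and y in Q with Ax = By) admits a classical simultaneous-projection iteration, projections being the simplest firmly quasi-nonexpansive operators. Unlike the paper's algorithms, this textbook sketch does use the operator norms to pick the step size, and the matrices and box sets are illustrative:

```python
import numpy as np

def split_equality(A, B, proj_C, proj_Q, x, y, gamma, iters=2000):
    """Simultaneous ("parallel") projection iteration for the split equality
    problem: x+ = P_C(x - gamma*A^T r), y+ = P_Q(y + gamma*B^T r), where
    r = Ax - By is the current equality residual.  Converges weakly for
    gamma in (0, 2 / (||A||^2 + ||B||^2))."""
    for _ in range(iters):
        r = A @ x - B @ y
        x = proj_C(x - gamma * (A.T @ r))
        y = proj_Q(y + gamma * (B.T @ r))
    return x, y

A = np.array([[1.0, 2.0], [0.0, 1.0]])
B = np.array([[2.0, 0.0], [1.0, 1.0]])
box = lambda v: np.clip(v, 0.0, 1.0)          # C = Q = [0, 1]^2, so P is a clip
x, y = split_equality(A, B, box, box, np.ones(2), np.zeros(2), gamma=0.1)
residual = float(np.linalg.norm(A @ x - B @ y))
```

The paper's contribution is precisely to replace the explicit norm-dependent step size with schemes (parallel, cyclic, and mixed) that need no such spectral information.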
Short scales to assess cannabis-related problems: a review of psychometric properties
Directory of Open Access Journals (Sweden)
Klempova Danica
2008-12-01
Aims: The purpose of this paper is to summarize the psychometric properties of four short screening scales to assess problematic forms of cannabis use: the Severity of Dependence Scale (SDS), Cannabis Use Disorders Identification Test (CUDIT), Cannabis Abuse Screening Test (CAST), and Problematic Use of Marijuana (PUM). Methods: A systematic computer-based literature search was conducted within the databases of PubMed, PsycINFO and Addiction Abstracts. A total of 12 publications reporting measures of reliability or validity were identified: 8 concerning the SDS, 2 concerning the CUDIT, and one each concerning the CAST and PUM. Studies spanned adult and adolescent samples from general and specific user populations in a number of countries worldwide. Results: All screening scales tended to have moderate to high internal consistency (Cronbach's α ranging from .72 to .92). Test-retest reliability and item-total correlation have been reported for the SDS with acceptable results. Results of validation studies varied depending on study population and standards used for validity assessment, but generally sensitivity, specificity and predictive power are satisfactory. Standard diagnostic cut-off points that can be generalized to different populations do not exist for any scale. Conclusion: Short screening scales to assess dependence and other problems related to the use of cannabis seem to be a time- and cost-saving opportunity to estimate overall prevalences of cannabis-related negative consequences and to identify at-risk persons prior to using more extensive diagnostic instruments. Nevertheless, further research is needed to assess the performance of the tests in different populations and in comparison to broader criteria of cannabis-related problems other than dependence.
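The internal-consistency figures quoted above are Cronbach's α values, which are straightforward to compute from an item-score matrix. The sketch below applies the generic formula to synthetic scores, not to data from the reviewed studies:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return float(k / (k - 1) * (1.0 - item_var_sum / total_var))

# Four perfectly parallel "items" give alpha = 1; adding independent noise
# to each item lowers internal consistency toward the moderate range.
rng = np.random.default_rng(1)
trait = rng.normal(size=(500, 1))
parallel = np.repeat(trait, 4, axis=1)
noisy = parallel + rng.normal(scale=2.0, size=(500, 4))
```

Values in the .72 to .92 range reported for the cannabis scales would sit between these two synthetic extremes.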
Xiang, Wei; Yin, Jiao; Lim, Gino
2015-02-01
Operating room (OR) surgery scheduling determines the individual surgery's operation start time and assigns the required resources to each surgery over a schedule period, considering several constraints related to a complete surgery flow and the multiple resources involved. This task plays a decisive role in providing timely treatments for the patients while balancing hospital resource utilization. The originality of the present study is to integrate the surgery scheduling problem with real-life nurse roster constraints such as their role, specialty, qualification and availability. This article proposes a mathematical model and an ant colony optimization (ACO) approach to efficiently solve such surgery scheduling problems. A modified ACO algorithm with a two-level ant graph model is developed to solve such combinatorial optimization problems because of their computational complexity. The outer ant graph represents surgeries, while the inner graph is a dynamic resource graph. Three types of pheromones, i.e. sequence-related, surgery-related, and resource-related pheromone, fitting the two-level model are defined. The iteration-best and feasible update strategy and local pheromone update rules are adopted to emphasize the information related to the good solution in makespan, as well as the balanced utilization of resources. The performance of the proposed ACO algorithm is then evaluated using test cases from (1) published literature data with complete nurse roster constraints, and (2) real data collected from a hospital in China. The scheduling results using the proposed ACO approach are compared with the test cases from both the literature and the real-life hospital schedule. Comparison with the literature shows that the proposed ACO approach achieves (1) a 1.5-h reduction in end time; (2) a reduction in the variation of resources' working time, i.e. 25% for ORs, 50% for nurses in shift 1 and 86% for nurses in shift 2; (3) a 0.25-h reduction in
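A stripped-down, single-level version of the ACO scheduling idea can be sketched as follows; the paper's model is two-level, with separate surgery and resource graphs and three pheromone types, so the job list, parameters, and deposit rule here are illustrative assumptions only:

```python
import random

def total_weighted_completion(order, jobs):
    """Sum of weight * completion time for jobs processed in `order`."""
    t = 0.0
    cost = 0.0
    for j in order:
        duration, weight = jobs[j]
        t += duration
        cost += weight * t
    return cost

def aco_schedule(jobs, n_ants=40, n_iters=60, rho=0.1, seed=3):
    """Minimal single-level ACO for job sequencing.  tau[pos][job] is the
    pheromone for placing `job` at position `pos`; ants sample sequences in
    proportion to pheromone, and (as in the paper's iteration-best strategy)
    only the best ant of each iteration deposits pheromone."""
    rng = random.Random(seed)
    n = len(jobs)
    tau = [[1.0] * n for _ in range(n)]
    best_order, best_cost = None, float("inf")
    for _ in range(n_iters):
        iter_best, iter_cost = None, float("inf")
        for _ in range(n_ants):
            remaining = list(range(n))
            order = []
            for pos in range(n):
                job = rng.choices(remaining,
                                  weights=[tau[pos][j] for j in remaining])[0]
                order.append(job)
                remaining.remove(job)
            cost = total_weighted_completion(order, jobs)
            if cost < iter_cost:
                iter_best, iter_cost = order, cost
        for pos in range(n):                 # evaporation ...
            for j in range(n):
                tau[pos][j] *= 1.0 - rho
            tau[pos][iter_best[pos]] += 1.0 / iter_cost   # ... plus deposit
        if iter_cost < best_cost:
            best_order, best_cost = iter_best, iter_cost
    return best_order, best_cost

# (duration, weight) pairs chosen so the identity order 0,1,2,3 is the worst.
jobs = [(4.0, 1.0), (3.0, 2.0), (2.0, 3.0), (1.0, 4.0)]
order, cost = aco_schedule(jobs)
```

The paper's algorithm layers a second, inner graph over this skeleton so that each sequencing decision also commits ORs and nurses subject to roster constraints.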
De Visscher, Alice; Vogel, Stephan E; Reishofer, Gernot; Hassler, Eva; Koschutnig, Karl; De Smedt, Bert; Grabner, Roland H
2018-05-15
In the development of math ability, a large variability of performance in solving simple arithmetic problems is observed and has not found a compelling explanation yet. One robust effect in simple multiplication facts is the problem size effect, indicating better performance for small problems compared to large ones. Recently, behavioral studies brought to light another effect in multiplication facts, the interference effect. That is, high interfering problems (receiving more proactive interference from previously learned problems) are more difficult to retrieve than low interfering problems (in terms of physical feature overlap, namely the digits, De Visscher and Noël, 2014). At the behavioral level, the sensitivity to the interference effect is shown to explain individual differences in the performance of solving multiplications in children as well as in adults. The aim of the present study was to investigate the individual differences in multiplication ability in relation to the neural interference effect and the neural problem size effect. To that end, we used a paradigm developed by De Visscher, Berens, et al. (2015) that contrasts the interference effect and the problem size effect in a multiplication verification task, during functional magnetic resonance imaging (fMRI) acquisition. Forty-two healthy adults, who showed high variability in an arithmetic fluency test, participated in our fMRI study. In order to control for the general reasoning level, the IQ was taken into account in the individual differences analyses. Our findings revealed a neural interference effect linked to individual differences in multiplication in the left inferior frontal gyrus, while controlling for the IQ. This interference effect in the left inferior frontal gyrus showed a negative relation with individual differences in arithmetic fluency, indicating a higher interference effect for low performers compared to high performers. This region is suggested in the literature to be
Multiple time scale analysis of sediment and runoff changes in the Lower Yellow River
Directory of Open Access Journals (Sweden)
K. Chi
2018-06-01
Sediment and runoff changes at seven hydrological stations along the Lower Yellow River (LYR) (Huayuankou, Jiahetan, Gaocun, Sunkou, Aishan, Qikou and Lijin Stations) from 1980 to 2003 were analyzed at multiple time scales. The maximum monthly, daily and hourly sediment load and runoff were also analyzed together with the annual mean values. The Mann–Kendall non-parametric trend test and the Hurst coefficient method were adopted in the study. Research results indicate that (1) the runoff of the seven hydrological stations was significantly reduced over the study period at different time scales, whereas the trends of sediment load at these stations were not obvious; the sediment load of the Huayuankou, Jiahetan and Aishan Stations even slightly increased as runoff decreased. (2) The trends of sediment load at different time scales differed at the Luokou and Lijin Stations: although the annual and monthly sediment loads were broadly flat, the maximum hourly sediment load showed a decreasing trend. (3) According to the Hurst coefficients, the trends in sediment and runoff will continue unless measures are taken, which demonstrates the necessity of a runoff–sediment regulation scheme.
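The Mann–Kendall test used in this study has a compact form. Below is the plain version without tie or autocorrelation corrections, applied to an illustrative decreasing series rather than the Yellow River data:

```python
import math

def mann_kendall(series):
    """Plain Mann-Kendall trend test (no tie or autocorrelation corrections).
    S counts concordant minus discordant pairs; Z is the standard normal
    approximation with continuity correction.  Z < -1.96 indicates a
    significant decreasing trend at the 5% level."""
    n = len(series)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# Illustrative strictly decreasing annual runoff series (not the LYR data).
runoff = [980, 955, 940, 930, 900, 880, 870, 855, 840, 820]
s, z = mann_kendall(runoff)
```

Running the same statistic on monthly, daily, and hourly aggregates is what allows trends to be compared across time scales, as the study does.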
Ryan, Joseph J; Gontkovsky, Samuel T; Kreiner, David S; Tree, Heather A
2012-01-01
Forty patients with relapsing-remitting multiple sclerosis (MS) completed the 10 core Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) subtests. Means for age and education were 42.05 years (SD = 9.94) and 14.33 years (SD = 2.40). For all participants, the native language was English. The mean duration of MS diagnosis was 8.17 years (SD = 7.75), and the mean Expanded Disability Status Scale (EDSS; Kurtzke, 1983) score was 3.73 (SD = 1.41) with a range from 2.0 to 6.5. A control group of healthy individuals with similar demographic characteristics, provided by the test publisher, also completed the WAIS-IV. Compared to controls, patients with MS earned significantly lower subtest and composite scores. The patients' mean scores were consistently in the low-average to average range, and the patterns of performance across groups did not differ significantly, although there was a trend towards higher scores on the Verbal Comprehension Index (VCI) and lower scores on the Processing Speed Index (PSI). Approximately 78% of patients had actual Full Scale IQs that were significantly lower than preillness, demographically based IQ estimates.
Directory of Open Access Journals (Sweden)
Yi Zhang
2017-04-01
Ear detection is an important step in ear recognition approaches. Most existing ear detection techniques are based on manually designed features or shallow learning algorithms. However, researchers have found that pose variation, occlusion, and imaging conditions pose a great challenge to traditional ear detection methods under uncontrolled conditions. This paper proposes an efficient technique involving Multiple Scale Faster Region-based Convolutional Neural Networks (Faster R-CNN) to detect ears from 2D profile images in natural images automatically. Firstly, three regions of different scales are detected to infer information about the ear location context within the image. Then an ear region filtering approach is proposed to extract the correct ear region and eliminate the false positives automatically. In an experiment with a test set of 200 web images (with variable photographic conditions), 98% of ears were accurately detected. Experiments were likewise conducted on Collection J2 of the University of Notre Dame Biometrics Database (UND-J2) and the University of Beira Interior Ear dataset (UBEAR), which contain large occlusion, scale, and pose variations. Detection rates of 100% and 98.22%, respectively, demonstrate the effectiveness of the proposed approach.
Quantitative evidence for the effects of multiple drivers on continental-scale amphibian declines
Grant, Evan H. Campbell; Miller, David A. W.; Schmidt, Benedikt R.; Adams, Michael J.; Amburgey, Staci M.; Chambert, Thierry A.; Cruickshank, Sam S.; Fisher, Robert N.; Green, David M.; Hossack, Blake R.; Johnson, Pieter T.J.; Joseph, Maxwell B.; Rittenhouse, Tracy A. G.; Ryan, Maureen E.; Waddle, J. Hardin; Walls, Susan C.; Bailey, Larissa L.; Fellers, Gary M.; Gorman, Thomas A.; Ray, Andrew M.; Pilliod, David S.; Price, Steven J.; Saenz, Daniel; Sadinski, Walt; Muths, Erin L.
2016-01-01
Since amphibian declines were first proposed as a global phenomenon over a quarter century ago, the conservation community has made little progress in halting or reversing these trends. The early search for a “smoking gun” was replaced with the expectation that declines are caused by multiple drivers. While field observations and experiments have identified factors leading to increased local extinction risk, evidence for effects of these drivers is lacking at large spatial scales. Here, we use observations of 389 time-series of 83 species and complexes from 61 study areas across North America to test the effects of 4 of the major hypothesized drivers of declines. While we find that local amphibian populations are being lost from metapopulations at an average rate of 3.79% per year, these declines are not related to any particular threat at the continental scale; likewise the effect of each stressor is variable at regional scales. This result - that exposure to threats varies spatially, and populations vary in their response - provides little generality in the development of conservation strategies. Greater emphasis on local solutions to this globally shared phenomenon is needed.
Christianson, D. S.; Kaufman, C. G.; Kueppers, L. M.; Harte, J.
2013-12-01
Sampling limitations and current modeling capacity justify the common use of mean temperature values in summaries of historical climate and future projections. However, a monthly mean temperature representing a 1-km2 area on the landscape is often unable to capture the climate complexity driving organismal and ecological processes. Estimates of variability in addition to mean values are more biologically meaningful and have been shown to improve projections of range shifts for certain species. Historical analyses of variance and extreme events at coarse spatial scales, as well as coarse-scale projections, show increasing temporal variability in temperature with warmer means. Few studies have considered how spatial variance changes with warming, and analysis for both temporal and spatial variability across scales is lacking. It is unclear how the spatial variability of fine-scale conditions relevant to plant and animal individuals may change given warmer coarse-scale mean values. A change in spatial variability will affect the availability of suitable habitat on the landscape and thus, will influence future species ranges. By characterizing variability across both temporal and spatial scales, we can account for potential bias in species range projections that use coarse climate data and enable improvements to current models. In this study, we use temperature data at multiple spatial and temporal scales to characterize spatial and temporal variability under a warmer climate, i.e., increased mean temperatures. Observational data from the Sierra Nevada (California, USA), experimental climate manipulation data from the eastern and western slopes of the Rocky Mountains (Colorado, USA), projected CMIP5 data for California (USA) and observed PRISM data (USA) allow us to compare characteristics of a mean-variance relationship across spatial scales ranging from sub-meter2 to 10,000 km2 and across temporal scales ranging from hours to decades. Preliminary spatial analysis at
Nishio, Midori; Ono, Mitsu
2015-01-01
The number of male caregivers has increased, but male caregivers face several problems that reduce their quality of life and psychological condition. This study focused on the coping problems of men who care for people with dementia at home. It aimed to develop a coping scale for male caregivers so that they can continue caring for people with dementia at home and improve their own quality of life. The study also aimed to verify the reliability and validity of the scale. The subjects were 759 men who care for people with dementia at home. The Care Problems Coping Scale consists of 21 questions based on elements of questions extracted from a pilot study. Additionally, subjects completed three self-administered questionnaires: the Japanese version of the Zarit Caregiver Burden Scale, the Depressive Symptoms and the Self-esteem Emotional Scale, and Rosenberg Self-Esteem Scale. There were 274 valid responses (36.1% response rate). Regarding the answer distribution, each average value of the 21 items ranged from 1.56 to 2.68. The median answer distribution of the 21 items was 39 (SD = 6.6). Five items had a ceiling effect, and two items had a floor effect. The scale stability was about 50%, and Cronbach's α was 0.49. There were significant correlations between the Care Problems Coping Scale and total scores of the Japanese version of the Zarit Caregiver Burden Scale, the Depressive Symptoms and Self-esteem Emotional Scale, and the Rosenberg Self-Esteem Scale. The answers provided on the Care Problems Coping Scale questionnaire indicated that male caregivers experience care problems. In terms of validity, there were significant correlations between the external questionnaires and 19 of the 21 items in this scale. This scale can therefore be used to measure problems with coping for male caregivers who care for people with dementia at home.
Penders, Bart; Vos, Rein; Horstman, Klasien
2009-11-01
Solving complex problems in large-scale research programmes requires cooperation and division of labour. Simultaneously, large-scale problem solving also gives rise to unintended side effects. Based upon 5 years of researching two large-scale nutrigenomic research programmes, we argue that problems are fragmented in order to be solved. These sub-problems are given priority for practical reasons and in the process of solving them, various changes are introduced in each sub-problem. Combined with additional diversity as a result of interdisciplinarity, this makes reassembling the original and overall goal of the research programme less likely. In the case of nutrigenomics and health, this produces a diversification of health. As a result, the public health goal of contemporary nutrition science is not reached in the large-scale research programmes we studied. Large-scale research programmes are very successful in producing scientific publications and new knowledge; however, in reaching their political goals they often are less successful.
International Nuclear Information System (INIS)
Lu Yanlin; Zhou Xiao; Qu Jiadi; Dou Yikang; He Yinbiao
2005-01-01
An efficient scheme, 3-D thermal weight function (TWF) method, and a novel numerical technique, multiple virtual crack extension (MVCE) technique, were developed for determination of histories of transient stress intensity factor (SIF) distributions along 3-D crack fronts of a body subjected to thermal shock. The TWF is a universal function, which is dependent only on the crack configuration and body geometry. TWF is independent of time during thermal shock, so the whole history of transient SIF distributions along crack fronts can be directly calculated through integration of the products of TWF and transient temperatures and temperature gradients. The repeated determinations of the distributions of stresses (or displacements) fields for individual time instants are thus avoided in the TWF method. An expression of the basic equation for the 3-D universal weight function method for Mode I in an isotropic elastic body is derived. This equation can also be derived from Bueckner-Rice's 3-D WF formulations in the framework of transformation strain. It can be understood from this equation that the so-called thermal WF is in fact coincident with the mechanical WF except for some constants of elasticity. The details and formulations of the MVCE technique are given for elliptical cracks. The MVCE technique possesses several advantages. The specially selected linearly independent VCE modes can directly be used as shape functions for the interpolation of unknown SIFs. As a result, the coefficient matrix of the final system of equations in the MVCE method is a triple-diagonal matrix and the values of the coefficients on the main diagonal are large. The system of equations has good numerical properties. The number of linearly independent VCE modes that can be introduced in a problem is unlimited. Complex situations in which the SIFs vary dramatically along crack fronts can be numerically well simulated by the MVCE technique. An integrated system of programs for solving the
International Nuclear Information System (INIS)
Yang, W.; Wu, H.; Cao, L.
2012-01-01
MOX fuels have seen increasing use all over the world in the past several decades. Compared with UO2 fuel, MOX fuel presents some new features: the neutron spectrum is harder, and more resonance interference effects arise within the resonance energy range because more resonant nuclides are contained in the fuel. In this paper, the wavelet scaling function expansion method is applied to study the resonance behavior of plutonium isotopes within MOX fuel. The wavelet scaling function expansion continuous-energy self-shielding method was developed recently and has been validated and verified by comparison to Monte Carlo calculations. In this method, continuous-energy cross-sections are utilized within the resonance energy range, which means it is capable of solving problems with serious resonance interference effects without iteration calculations. Therefore, this method is naturally suited to the MOX fuel resonance calculation problem. Furthermore, plutonium isotopes exhibit fierce oscillations of the total cross-section within the thermal energy range, especially 240Pu and 242Pu. To take the thermal resonance effect of plutonium isotopes into consideration, the wavelet scaling function expansion continuous-energy resonance calculation code WAVERESON is enhanced by applying the free-gas scattering kernel to obtain the continuous-energy scattering source within the thermal energy range (2.1 eV to 4.0 eV), in contrast to the resonance energy range, in which the elastic scattering kernel is utilized. Finally, all of the calculation results of WAVERESON are compared with MCNP calculations. (authors)
Self-interacting inelastic dark matter: a viable solution to the small scale structure problems
Energy Technology Data Exchange (ETDEWEB)
Blennow, Mattias; Clementz, Stefan; Herrero-Garcia, Juan, E-mail: emb@kth.se, E-mail: scl@kth.se, E-mail: juan.herrero-garcia@adelaide.edu.au [Department of Physics, School of Engineering Sciences, KTH Royal Institute of Technology, AlbaNova University Center, 106 91 Stockholm (Sweden)
2017-03-01
Self-interacting dark matter has been proposed as a solution to the small-scale structure problems, such as the observed flat cores in dwarf and low surface brightness galaxies. If scattering takes place through light mediators, the scattering cross section relevant to solve these problems may fall into the non-perturbative regime leading to a non-trivial velocity dependence, which allows compatibility with limits stemming from cluster-size objects. However, these models are strongly constrained by different observations, in particular from the requirements that the decay of the light mediator is sufficiently rapid (before Big Bang Nucleosynthesis) and from direct detection. A natural solution to reconcile both requirements are inelastic endothermic interactions, such that scatterings in direct detection experiments are suppressed or even kinematically forbidden if the mass splitting between the two states is sufficiently large. Using exact numerical solutions of the Schrödinger equation, we study such scenarios and find regions in the parameter space of dark matter and mediator masses, and the mass splitting of the states, where the small scale structure problems can be solved, the dark matter has the correct relic abundance and direct detection limits can be evaded.
DEFF Research Database (Denmark)
Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine
In order to move beyond simplified covariance based a priori models, which are typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions ‘learned’ from a training image, sequential simulation has proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori...
Directory of Open Access Journals (Sweden)
Faridah Hani Mohamed Salleh
2017-01-01
Gene regulatory network (GRN) reconstruction is the process of identifying regulatory gene interactions from experimental data through computational analysis. One of the main reasons for the reduced performance of previous GRN methods has been inaccurate prediction of cascade motifs. A cascade error is the wrong prediction of a cascade motif, where an indirect interaction is misinterpreted as a direct interaction. Despite the active research on various GRN prediction methods, the discussion of specific methods to solve problems related to cascade errors is still lacking. In fact, the experiments conducted in past studies were not specifically geared towards proving the ability of GRN prediction methods to avoid cascade errors. Hence, this research proposes Multiple Linear Regression (MLR) to infer GRNs from gene expression data and to avoid wrongly inferring an indirect interaction (A → B → C) as a direct interaction (A → C). Since the number of observations in the real experimental datasets was far less than the number of predictors, some predictors were eliminated by extracting random subnetworks from global interaction networks via an established extraction method. In addition, the experiment was extended to assess the effectiveness of MLR in dealing with cascade errors using a novel experimental procedure proposed in this work. The experiment revealed that the number of cascade errors was very minimal. Apart from that, the Belsley collinearity test showed that multicollinearity greatly affected the datasets used in this experiment. All the tested subnetworks obtained satisfactory results, with AUROC values above 0.5.
Salleh, Faridah Hani Mohamed; Zainudin, Suhaila; Arif, Shereena M
2017-01-01
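A minimal sketch of the MLR idea described above, on synthetic data with a known cascade A → B → C (variable names and noise levels are illustrative): regressing C on both candidate regulators assigns a near-zero partial coefficient to A, so the indirect path is not mistaken for a direct edge.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.normal(size=n)
B = 0.9 * A + 0.3 * rng.normal(size=n)   # direct edge A -> B
C = 0.9 * B + 0.3 * rng.normal(size=n)   # direct edge B -> C; A -> C only indirect

# Multiple linear regression of C on both candidate regulators (with intercept)
X = np.column_stack([np.ones(n), A, B])
coef, *_ = np.linalg.lstsq(X, C, rcond=None)
_, beta_A, beta_B = coef
print(beta_A, beta_B)  # beta_B dominates; beta_A ~ 0, so no spurious A -> C edge
```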
DL-sQUAL: A Multiple-Item Scale for Measuring Service Quality of Online Distance Learning Programs
Shaik, Naj; Lowe, Sue; Pinegar, Kem
2006-01-01
Education is a service with a multiplicity of student interactions over time and across multiple touch points. Quality teaching needs to be supplemented by consistently high-quality support services for programs to succeed in the competitive distance learning landscape. ServQual and e-SQ scales have been proposed for measuring the quality of traditional…
International Nuclear Information System (INIS)
Foot, Robert; Kobakhidze, Archil; Volkas, Raymond R.; McDonald, Kristian L.
2008-01-01
If scale invariance is a classical symmetry then both the Planck scale and the weak scale should emerge as quantum effects. We show that this can be realized in simple scale invariant theories with a hidden sector. The weak/Planck scale hierarchy emerges in the (technically natural) limit in which the hidden sector decouples from the ordinary sector. In this limit, finite corrections to the weak scale are consequently small, while quadratic divergences are absent by virtue of classical scale invariance, so there is no hierarchy problem.
Large-scale diversity of slope fishes: pattern inconsistency between multiple diversity indices.
Gaertner, Jean-Claude; Maiorano, Porzia; Mérigot, Bastien; Colloca, Francesco; Politou, Chrissi-Yianna; Gil De Sola, Luis; Bertrand, Jacques A; Murenu, Matteo; Durbec, Jean-Pierre; Kallianiotis, Argyris; Mannini, Alessandro
2013-01-01
Large-scale studies focused on the diversity of continental slope ecosystems are still rare, usually restricted to a limited number of diversity indices and mainly based on the empirical comparison of heterogeneous local data sets. In contrast, we investigate large-scale fish diversity on the basis of multiple diversity indices and using 1454 standardized trawl hauls collected throughout the upper and middle slope of the whole northern Mediterranean Sea (36°3'-45°7' N; 5°3'W-28°E). We have analyzed (1) the empirical relationships between a set of 11 diversity indices in order to assess their degree of complementarity/redundancy and (2) the consistency of spatial patterns exhibited by each of the complementary groups of indices. Regarding species richness, our results contradict both the traditional hump-shaped theory for the bathymetric pattern and the commonly accepted hypothesis of a large-scale decreasing trend correlated with a similar gradient of primary production in the Mediterranean Sea. More generally, we found that the components of slope fish diversity we analyzed did not always show a consistent pattern of distribution according either to depth or to spatial areas, suggesting that they are not driven by the same factors. These results, which stress the need to extend the number of indices traditionally considered in diversity monitoring networks, could provide a basis for rethinking not only the methodological approach used in monitoring systems, but also the definition of priority zones for protection. Finally, our results call into question the feasibility of properly investigating large-scale diversity patterns using a widespread approach in ecology based on the compilation of pre-existing heterogeneous and disparate data sets, in particular when focusing on indices that are very sensitive to sampling design standardization, such as species richness.
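The notion of complementarity/redundancy among indices can be sketched with three common indices on toy abundance data (illustrative, not the survey data): two communities with identical species richness can differ strongly on evenness-sensitive indices such as Shannon and Gini-Simpson.

```python
import numpy as np

def richness(counts):
    """Number of species present."""
    return int(np.count_nonzero(counts))

def shannon(counts):
    """Shannon diversity H = -sum p_i ln p_i."""
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def simpson(counts):
    """Gini-Simpson index 1 - sum p_i^2."""
    p = counts / counts.sum()
    return float(1.0 - (p ** 2).sum())

even = np.full(10, 20)               # 10 species, perfectly even abundances
skewed = np.array([191] + [1] * 9)   # same richness, one dominant species

print(richness(even), shannon(even), simpson(even))
print(richness(skewed), shannon(skewed), simpson(skewed))
```

Richness is identical for the two communities while the evenness-sensitive indices diverge, which is the sense in which indices can be complementary rather than redundant.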
Yoder, M. R.; Rundle, J. B.; Turcotte, D. L.
2012-12-01
The difficulty of forecasting earthquakes can fundamentally be attributed to the self-similar, or "1/f", nature of seismic sequences. Specifically, the rate of occurrence of earthquakes is inversely proportional to their magnitude m, or more accurately to their scalar moment M. With respect to this "1/f problem," it can be argued that catalog selection (or equivalently, determining catalog constraints) constitutes the most significant challenge to seismicity-based earthquake forecasting. Here, we address and introduce a potential solution to this most daunting problem. Specifically, we introduce a framework to constrain, or partition, an earthquake catalog (a study region) in order to resolve local seismicity. In particular, we combine Gutenberg-Richter (GR), rupture length, and Omori scaling with various empirical measurements to relate the size (spatial and temporal extents) of a study area (or bins within a study area) to the local earthquake magnitude potential - the magnitude of earthquake the region is expected to experience. From this, we introduce a new type of time-dependent hazard map for which the tuning parameter space is nearly fully constrained. In a similar fashion, by combining various scaling relations and also by incorporating finite extents (rupture length, area, and duration) as constraints, we develop a method to estimate the Omori (temporal) and spatial aftershock decay parameters as a function of the parent earthquake's magnitude m. From this formulation, we develop an ETAS-type model that overcomes many point-source limitations of contemporary ETAS. These models demonstrate promise with respect to earthquake forecasting applications. Moreover, the methods employed suggest a general framework whereby earthquake and other complex-system, 1/f-type problems can be constrained from scaling relations and finite extents. [Figure: record-breaking hazard map of southern California (2012-08-06); "warm" colors indicate local acceleration (elevated hazard).]
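The two scaling relations combined above can be sketched as follows (the a, b, K, c and p values are illustrative, not fitted parameters):

```python
import numpy as np

def gr_count(m, a=5.0, b=1.0):
    """Gutenberg-Richter: expected number of events with magnitude >= m."""
    return 10.0 ** (a - b * m)

def omori_rate(t, K=100.0, c=0.1, p=1.2):
    """Modified Omori law: aftershock rate at time t after the mainshock."""
    return K / (c + t) ** p

# GR: each unit of magnitude costs a factor 10^b in event count
ratio = gr_count(4.0) / gr_count(5.0)   # -> 10.0 for b = 1

# Omori: numerically integrated aftershock count vs the closed form (p != 1)
K, c, p, T = 100.0, 0.1, 1.2, 10.0
t = np.linspace(0.0, T, 200001)
y = omori_rate(t)
n_numeric = float(((y[1:] + y[:-1]) * 0.5 * np.diff(t)).sum())   # trapezoid rule
n_exact = K * ((c + T) ** (1 - p) - c ** (1 - p)) / (1 - p)
print(ratio, n_numeric, n_exact)
```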
Mercury exposure of workers and health problems related with small-scale gold panning and extraction
International Nuclear Information System (INIS)
Khan, S.; Shah, M.T.; Din, I.U.; Rehman, S.
2012-01-01
This study was conducted to investigate mercury (Hg) exposure and health problems related to small-scale gold panning and extraction (GPE) in northern Pakistan. Urine and blood samples from occupationally and non-occupationally exposed persons were analyzed for total Hg, while blood fractions, including red blood cells and plasma, were analyzed for total Hg and its inorganic and organic species. The concentrations of Hg in urine and blood samples were significantly (P<0.01) higher in occupationally exposed persons than in non-occupationally exposed persons and exceeded the permissible limits set by the World Health Organization (WHO) and the United States Environmental Protection Agency (US EPA). Furthermore, the data indicated that numerous health problems were present in occupationally exposed persons involved in GPE. (author)
Ozone flux of an urban orange grove: multiple scaled measurements and model comparisons
Alstad, K. P.; Grulke, N. E.; Jenerette, D. G.; Schilling, S.; Marrett, K.
2009-12-01
There is significant uncertainty about the ozone sink properties of the phytosphere due to a complexity of interactions and feedbacks with biotic and abiotic factors. Improved understanding of the controls on ozone fluxes is critical to estimating and regulating the total ozone budget. Ozone exchanges of an orange orchard within the city of Riverside, CA were examined using a multiple-scaled approach. We assess the carbon, water, and energy budgets at the stand to leaf level to elucidate the mechanisms controlling the variability in ozone fluxes of this agro-ecosystem. The two initial goals of the study were (1) to consider variations and controls on the ozone fluxes within the canopy and (2) to examine different modeling and scaling approaches for totaling the ozone fluxes of this orchard. Current understanding of the total ozone flux between the near-ground atmosphere and the phytosphere (F-total) includes a fraction absorbed by vegetation through stomatal uptake (F-absorb), plus fractional components deposited on external, non-stomatal surfaces of the vegetation (F-external) and soil (F-soil). Multiplicative stomatal-conductance models have been commonly used to estimate F-absorb, since this flux cannot be measured directly. We approach F-absorb estimates for this orange orchard using chamber measurements of leaf stomatal conductance, as well as non-chamber sap-conductance measurements collected on branches of varied aspect and sun/shade conditions within the canopy. We use two approaches to measure the F-total of this stand. Gradient flux profiles were measured using slow-response ozone sensors within and above the canopy (4.6 m) and at the top of the tower (8.5 m). In addition, an eddy-covariance system fitted with a high-frequency chemiluminescence ozone system will be deployed (8.5 m). Preliminary ozone gradient flux profiles demonstrate a substantial ozone sink strength of this orchard, with diurnal concentration differentials
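A multiplicative (Jarvis-type) stomatal-conductance model of the kind mentioned above can be sketched as a product of 0-1 response functions (all response shapes and parameters below are illustrative assumptions, not values fitted to this orchard):

```python
def jarvis_conductance(par, temp_c, vpd_kpa, g_max=0.4):
    """Jarvis-type multiplicative model: g_sto = g_max * f(light)*f(T)*f(VPD).
    The three response functions are illustrative 0-1 scalars, not fitted."""
    f_light = par / (par + 200.0)                        # saturating light response
    f_temp = max(0.0, 1.0 - abs(temp_c - 25.0) / 20.0)   # optimum near 25 C
    f_vpd = max(0.0, 1.0 - 0.3 * vpd_kpa)                # closure under dry air
    return g_max * f_light * f_temp * f_vpd

def f_absorb(g_sto, ozone_ppb):
    """Stomatal ozone flux: uptake proportional to conductance x concentration."""
    return g_sto * ozone_ppb

g_noon = jarvis_conductance(par=1500.0, temp_c=28.0, vpd_kpa=1.5)
g_dawn = jarvis_conductance(par=100.0, temp_c=18.0, vpd_kpa=0.5)
print(g_noon, g_dawn, f_absorb(g_noon, 80.0))
```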
Physical modelling of granular flows at multiple-scales and stress levels
Take, Andy; Bowman, Elisabeth; Bryant, Sarah
2015-04-01
The rheology of dry granular flows is an area of significant focus within the granular physics, geoscience, and geotechnical engineering research communities. Studies performed to better understand granular flows in manufacturing, materials processing or bulk handling applications have typically focused on the behavior of steady, continuous flows. As a result, much of the research on relating the fundamental interaction of particles to the rheological or constitutive behaviour of granular flows has been performed under (usually) steady-state conditions and low stress levels. However, landslides, which are the primary focus of the geoscience and geotechnical engineering communities, are by nature unsteady flows defined by a finite source volume and at flow depths much larger than typically possible in laboratory experiments. The objective of this paper is to report initial findings of experimental studies currently being conducted using a new large-scale landslide flume (8 m long, 2 m wide slope inclined at 30° with a 35 m long horizontal base section) and at elevated particle self-weight in a 10 m diameter geotechnical centrifuge to investigate the granular flow behavior at multiple-scales and stress levels. The transparent sidewalls of the two flumes used in the experimental investigation permit the combination of observations of particle-scale interaction (using high-speed imaging through transparent vertical sidewalls at over 1000 frames per second) with observations of the distal reach of the landslide debris. These observations are used to investigate the applicability of rheological models developed for steady state flows (e.g. the dimensionless inertial number) in landslide applications and the robustness of depth-averaged approaches to modelling dry granular flow at multiple scales. These observations indicate that the dimensionless inertial number calculated for the flow may be of limited utility except perhaps to define a general state (e.g. liquid
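The dimensionless inertial number referred to above can be sketched as follows (the regime boundaries are rough, commonly quoted values, not claims from this study):

```python
import math

def inertial_number(shear_rate, grain_d, pressure, grain_density):
    """I = (shear rate * d) / sqrt(P / rho_grain): the ratio of the grain
    rearrangement time scale to the macroscopic shear time scale."""
    return shear_rate * grain_d / math.sqrt(pressure / grain_density)

def flow_regime(I):
    """Rough, commonly quoted regime boundaries (illustrative, not universal)."""
    if I < 1e-3:
        return "quasi-static"
    if I < 1e-1:
        return "dense inertial (liquid-like)"
    return "collisional (gas-like)"

# Illustrative sand-like values: 1 mm grains, 1 kPa confining pressure
I = inertial_number(shear_rate=10.0, grain_d=1e-3, pressure=1000.0, grain_density=2500.0)
print(I, flow_regime(I))
```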
Airlie, J; Baker, G A; Smith, S J; Young, C A
2001-06-01
To develop a scale to measure self-efficacy in neurologically impaired patients with multiple sclerosis and to assess the scale's psychometric properties. Cross-sectional questionnaire study in a clinical setting, with the retest questionnaire returned by mail after completion at home. Regional multiple sclerosis (MS) outpatient clinic or the Clinical Trials Unit (CTU) at a large neuroscience centre in the UK. One hundred persons with MS attending the Walton Centre for Neurology and Neurosurgery and Clatterbridge Hospital, Wirral, as outpatients. Cognitively impaired patients were excluded at an initial clinic assessment. Patients were asked to provide demographic data and complete the self-efficacy scale along with the following validated scales: Hospital Anxiety and Depression Scale, Rosenberg Self-Esteem Scale, Impact, Stigma and Mastery Scales, and the Rankin Scale. The Rankin Scale and Barthel Index were also assessed by the physician. A new 11-item self-efficacy scale was constructed, consisting of two domains of control and personal agency. The internal consistency of the scale was confirmed using Cronbach's alpha (alpha = 0.81). The test-retest reliability of the scale over two weeks was acceptable, with an intraclass correlation coefficient of 0.79. Construct validity was investigated using Pearson's product moment correlation coefficient, resulting in significant correlations with depression (r = -0.52), anxiety (r = -0.50) and mastery (r = 0.73). Multiple regression analysis demonstrated that these factors accounted for 70% of the variance in scores on the self-efficacy scale, with scores on mastery, anxiety and perceived disability being independently significant. Assessment of the psychometric properties of this new self-efficacy scale suggests that it possesses good validity and reliability in patients with multiple sclerosis.
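The internal-consistency statistic used above can be sketched on synthetic item responses (illustrative data, not the MS sample): items driven by a common trait yield alpha near 1, unrelated items yield alpha near 0.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return float((k / (k - 1)) * (1.0 - item_vars.sum() / total_var))

rng = np.random.default_rng(1)
trait = rng.normal(size=1000)
# 11 items sharing a common trait (mirroring an 11-item scale) vs 11 unrelated items
consistent = np.column_stack([trait + 0.5 * rng.normal(size=1000) for _ in range(11)])
unrelated = rng.normal(size=(1000, 11))

a_consistent = cronbach_alpha(consistent)
a_unrelated = cronbach_alpha(unrelated)
print(a_consistent, a_unrelated)
```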
Mapping the MMPI-2-RF Specific Problems Scales Onto Extant Psychopathology Structures.
Sellbom, Martin
2017-01-01
A main objective in developing the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008) was to link the hierarchical structure of the instrument's scales to contemporary psychopathology and personality models for greater enhancement of construct validity. Initial evidence published with the Restructured Clinical scales has indicated promising results, in that the higher-order structure of these measures maps onto those reported in the extant psychopathology literature. This study focused on evaluating the internal structure of the Specific Problems and Interest scales, which have not yet been examined in this manner. Two large, mixed-gender outpatient and correctional samples were used. Exploratory factor analyses revealed consistent evidence for a 4-factor structure representing somatization, negative affect, externalizing, and social detachment. Convergent and discriminant validity analyses in the outpatient sample yielded a pattern of results consistent with expectations. These findings add further evidence that the MMPI-2-RF hierarchy of scales maps onto the extant psychopathology literature, and also support the notion that somatization and detachment should be considered important higher-order domains in the psychopathology literature.
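A minimal sketch of the eigenvalue step behind an exploratory factor analysis, on synthetic two-factor data (illustrative, not MMPI-2-RF items), using the Kaiser eigenvalue-greater-than-one retention rule:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
f1, f2 = rng.normal(size=n), rng.normal(size=n)
items = np.column_stack(
    [f1 + 0.4 * rng.normal(size=n) for _ in range(4)]    # block loading on factor 1
    + [f2 + 0.4 * rng.normal(size=n) for _ in range(4)]  # block loading on factor 2
)

corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]     # descending order
n_factors = int((eigenvalues > 1.0).sum())       # Kaiser retention criterion
print(np.round(eigenvalues, 2), n_factors)       # two dominant eigenvalues -> 2 factors
```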
Absolute calibration of the mass scale in the inverse problem of the physical theory of fireballs
Kalenichenko, V. V.
1992-08-01
A method of the absolute calibration of the mass scale is proposed for solving the inverse problem of the physical theory of fireballs. The method is based on data on the masses of fallen meteorites whose fireballs have been photographed in flight. The method can be applied to fireballs whose bodies have not experienced significant fragmentation during their flight in the atmosphere and have kept their shape relatively well. Data on the Lost City and Innisfree meteorites are used to calculate the calibration coefficients.
Improvement of Monte Carlo code A3MCNP for large-scale shielding problems
International Nuclear Information System (INIS)
Miyake, Y.; Ohmura, M.; Hasegawa, T.; Ueki, K.; Sato, O.; Haghighat, A.; Sjoden, G.E.
2004-01-01
A3MCNP (Automatic Adjoint Accelerated MCNP) is a revised version of the MCNP Monte Carlo code that automatically prepares variance reduction parameters for the CADIS (Consistent Adjoint Driven Importance Sampling) methodology. Using a deterministic 'importance' (or adjoint) function, CADIS performs source and transport biasing within the weight-window technique. The current version of A3MCNP uses the 3-D Sn transport code TORT to determine a 3-D importance function distribution. Based on simulations of several real-life problems, it is demonstrated that A3MCNP provides precise results with a remarkably short computation time by using proper, objectively determined variance reduction parameters. However, since the first version of A3MCNP provided only a point-source configuration option for large-scale shielding problems, such as spent-fuel transport casks, a large amount of memory may be necessary to store enough points to properly represent the source. Hence, we have developed an improved version of A3MCNP (referred to as A3MCNPV) which has a volumetric source configuration option. This paper describes the successful use of A3MCNPV for a concrete cask streaming problem and a PWR dosimetry problem. (author)
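The variance-reduction mechanism behind CADIS-style biasing can be illustrated with a generic importance-sampling toy (exponential tilting for a normal tail; an analogy, not the MCNP implementation): shifting the sampling distribution toward the important region and weighting each history by the density ratio preserves the mean while drastically cutting variance.

```python
import math
import numpy as np

rng = np.random.default_rng(3)
threshold = 4.0
p_true = 0.5 * math.erfc(threshold / math.sqrt(2.0))   # P(X > 4), X ~ N(0,1)

n = 200_000
# Analog ("unbiased source") Monte Carlo: almost no histories score
analog = float((rng.normal(size=n) > threshold).mean())

# Biased source: sample from N(threshold, 1) and weight each history by the
# density ratio phi(y) / phi(y - threshold) = exp(-threshold*y + threshold^2/2)
y = rng.normal(loc=threshold, size=n)
weights = np.exp(-threshold * y + threshold ** 2 / 2.0)
biased = float(np.mean((y > threshold) * weights))
print(p_true, analog, biased)   # biased estimate tracks p_true tightly
```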
Wilson, Robyn S.; Hardisty, David J.; Epanchin-Niell, Rebecca S.; Runge, Michael C.; Cottingham, Kathryn L.; Urban, Dean L.; Maguire, Lynn A.; Hastings, Alan; Mumby, Peter J.; Peters, Debra P.C.
2016-01-01
Ecological systems often operate on time scales significantly longer or shorter than the time scales typical of human decision making, which causes substantial difficulty for conservation and management in socioecological systems. For example, invasive species may move faster than humans can diagnose problems and initiate solutions, and climate systems may exhibit long-term inertia and short-term fluctuations that obscure learning about the efficacy of management efforts in many ecological systems. We adopted a management-decision framework that distinguishes decision makers within public institutions from individual actors within the social system, calls attention to the ways socioecological systems respond to decision makers’ actions, and notes institutional learning that accrues from observing these responses. We used this framework, along with insights from bedeviling conservation problems, to create a typology that identifies problematic time-scale mismatches occurring between individual decision makers in public institutions and between individual actors in the social or ecological system. We also considered solutions that involve modifying human perception and behavior at the individual level as a means of resolving these problematic mismatches. The potential solutions are derived from the behavioral economics and psychology literature on temporal challenges in decision making, such as the human tendency to discount future outcomes at irrationally high rates. These solutions range from framing environmental decisions to enhance the salience of long-term consequences, to using structured decision processes that make time scales of actions and consequences more explicit, to structural solutions aimed at altering the consequences of short-sighted behavior to make it less appealing. Additional application of these tools and long-term evaluation measures that assess not just behavioral changes but also associated changes in ecological systems are needed.
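The discounting tendency cited above can be sketched by contrasting hyperbolic and exponential discount functions (parameters illustrative): hyperbolic discounting produces the preference reversals that make short-sighted behavior hard to correct, while exponential discounting does not.

```python
def hyperbolic(amount, delay, k=1.0):
    """Hyperbolic discounting: value = amount / (1 + k * delay)."""
    return amount / (1.0 + k * delay)

def exponential(amount, delay, delta=0.7):
    """Exponential discounting: value = amount * delta ** delay."""
    return amount * delta ** delay

# Smaller-sooner reward (10 at t=1) vs larger-later reward (15 at t=3):
# pushing both options into the future flips the hyperbolic chooser's preference
for front_delay in (0, 10):
    ss = hyperbolic(10, 1 + front_delay)
    ll = hyperbolic(15, 3 + front_delay)
    print(front_delay, ss > ll)   # True now, False once both are distant
```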
Wilson, Robyn S; Hardisty, David J; Epanchin-Niell, Rebecca S; Runge, Michael C; Cottingham, Kathryn L; Urban, Dean L; Maguire, Lynn A; Hastings, Alan; Mumby, Peter J; Peters, Debra P C
2016-02-01
Lu, Hua; Yue, Zengqi; Zhao, Jianlin
2018-05-01
We propose and investigate a new kind of bandpass filter based on the plasmonically induced transparency (PIT) effect in a special metal-insulator-metal (MIM) waveguide system. The finite element method (FEM) simulations illustrate that an obvious PIT response can be generated in the metallic nanostructure with the stub and coupled cavities. The lineshape and position of the PIT peak depend particularly on the lengths of the stub and coupled cavities, the waveguide width, and the coupling distance between the stub and coupled cavities. The numerical simulations are in accordance with the results obtained by the temporal coupled-mode theory. A multi-peak PIT effect can be achieved by integrating multiple coupled cavities into the plasmonic waveguide. This PIT response enables the flexible realization of chip-scale multi-channel bandpass filters, which could find crucial applications in highly integrated optical circuits for signal processing.
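The temporal coupled-mode theory mentioned above gives, for a single lossless resonator side-coupled to a waveguide, a transmission dip at resonance. As a crude sketch (ignoring the inter-resonator interference that produces the actual PIT peak), cascading two detuned resonators yields a transmission window between two stop bands:

```python
import numpy as np

def t_side_coupled(omega, omega0, gamma):
    """Temporal coupled-mode theory amplitude transmission for one lossless
    resonator side-coupled to a waveguide: a dip to zero at resonance."""
    return 1j * (omega - omega0) / (1j * (omega - omega0) + gamma)

omega = np.linspace(0.90, 1.10, 2001)   # normalized frequency axis
# Two stubs detuned about omega = 1.0; their mutual interference is ignored,
# so this is only a stand-in for the full PIT lineshape
T = np.abs(t_side_coupled(omega, 0.95, 0.01) * t_side_coupled(omega, 1.05, 0.01)) ** 2

i_dip = np.argmin(np.abs(omega - 0.95))   # at a resonance: blocked
i_mid = np.argmin(np.abs(omega - 1.00))   # between resonances: transmitted
print(T[i_dip], T[i_mid])
```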
DEFF Research Database (Denmark)
Zhao, Zhuoli; Yang, Ping; Guerrero, Josep M.
2016-01-01
In this paper, an islanded medium-voltage (MV) microgrid placed in Dongao Island is presented, which integrates renewable-energy-based distributed generations (DGs), energy storage system (ESS), and local loads. In an isolated microgrid, without connection to the main grid to support the frequency, it is more complex to control and manage. Thus, in order to maintain frequency stability on multiple time scales, a hierarchical control strategy is proposed. The proposed control architecture divides the system frequency into three zones: (A) stable zone, (B) precautionary zone and (C) emergency zone... of Zone B. Theoretical analysis, time-domain simulation and field test results under various conditions and scenarios in the Dongao Island microgrid are presented to prove the validity of the introduced control strategy.
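The three-zone partition described above can be sketched as a simple classifier (the band widths and actions are illustrative assumptions, not the settings deployed on Dongao Island):

```python
def frequency_zone(f_hz, f_nominal=50.0, stable_band=0.1, precaution_band=0.5):
    """Classify system frequency into the three control zones described above.
    Band widths and actions are illustrative assumptions, not deployed settings."""
    dev = abs(f_hz - f_nominal)
    if dev <= stable_band:
        return "A: stable - primary droop control only"
    if dev <= precaution_band:
        return "B: precautionary - dispatch ESS / secondary control"
    return "C: emergency - shed load or trip generation"

for f in (50.02, 49.7, 48.9):
    print(f, frequency_zone(f))
```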
van den Putte, B.; Saris, W.E.; Hoogstraten, J.
1995-01-01
Two experiments were carried out to test the theory of reasoned action of Fishbein and Ajzen. The measurements were done using two category scales and two psychophysical scales. No consistent difference in results was found between the four modalities. However, if the latter were used as multiple
Directory of Open Access Journals (Sweden)
Claudia eCasellato
2015-02-01
The cerebellum plays a crucial role in motor learning and acts as a predictive controller. Modeling it and embedding it into sensorimotor tasks allows us to create functional links between plasticity mechanisms, neural circuits and behavioral learning. Moreover, if applied to real-time control of a neurorobot, the cerebellar model has to deal with a real, noisy and changing environment, thus showing its robustness and effectiveness in learning. A biologically inspired cerebellar model with distributed plasticity, at both cortical and nuclear sites, has been used. Two cerebellum-mediated paradigms have been designed: an associative Pavlovian task and a vestibulo-ocular reflex, with multiple sessions of acquisition and extinction and with different stimuli and perturbation patterns. The cerebellar controller succeeded in generating conditioned responses and finely tuned eye movement compensation, thus reproducing human-like behaviors. Through a productive plasticity transfer from cortical to nuclear sites, the distributed cerebellar controller showed in both tasks the capability to optimize learning on multiple time scales, to store motor memory and to effectively adapt to dynamic ranges of stimuli.
Does the Assessment of Recovery Capital scale reflect a single or multiple domains?
Arndt, Stephan; Sahker, Ethan; Hedden, Suzy
2017-01-01
The goal of this study was to determine whether the 50-item Assessment of Recovery Capital scale represents a single general measure or whether multiple domains might be psychometrically useful for research or clinical applications. Data are from a cross-sectional, de-identified program evaluation data set of 1,138 clients entering substance use disorder treatment. Principal components and iterated factor analysis were used on the domain scores. Multiple group factor analysis provided a quasi-confirmatory factor analysis. The solution accounted for 75.24% of the total variance, suggesting that 10 factors provide a reasonably good fit. However, Tucker's congruence coefficients between the factor structure and the defining weights (0.41-0.52) suggested a poor fit to the hypothesized 10-domain structure. Principal components analysis of the 10 domain scores yielded one factor whose eigenvalue was greater than one (5.93), accounting for 75.8% of the common variance. A few domains had perceptible but small unique variance components, suggesting that a few of the domains may warrant enrichment. Our findings suggest that there is one general factor, with a caveat. Using the 10 measures inflates the chance of Type I errors. Using one general measure avoids this issue, is simple to interpret, and could reduce the number of items. However, those seeking to maximally predict later recovery success may need to use the full instrument and all 10 domains.
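Tucker's congruence coefficient and the one-general-factor check used above can be sketched on synthetic domain scores (illustrative data, not the program evaluation sample):

```python
import numpy as np

def tucker_congruence(x, y):
    """Tucker's coefficient of congruence between two loading vectors."""
    return float(np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y)))

rng = np.random.default_rng(4)
g = rng.normal(size=1000)   # one general "recovery capital" factor
domains = np.column_stack([g + 0.5 * rng.normal(size=1000) for _ in range(10)])

corr = np.corrcoef(domains, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]   # descending
share = float(eigvals[0] / eigvals.sum())  # variance carried by the first component
print(round(share, 2), tucker_congruence(np.ones(10), np.ones(10)))
```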
How do the multiple large-scale climate oscillations trigger extreme precipitation?
Shi, Pengfei; Yang, Tao; Xu, Chong-Yu; Yong, Bin; Shao, Quanxi; Li, Zhenya; Wang, Xiaoyan; Zhou, Xudong; Li, Shu
2017-10-01
Identifying the links between variations in large-scale climate patterns and precipitation is of tremendous assistance in characterizing surplus or deficit of precipitation, which is especially important for evaluation of local water resources and ecosystems in semi-humid and semi-arid regions. Restricted by current limited knowledge on underlying mechanisms, statistical correlation methods are often used rather than physical based model to characterize the connections. Nevertheless, available correlation methods are generally unable to reveal the interactions among a wide range of climate oscillations and associated effects on precipitation, especially on extreme precipitation. In this work, a probabilistic analysis approach by means of a state-of-the-art Copula-based joint probability distribution is developed to characterize the aggregated behaviors for large-scale climate patterns and their connections to precipitation. This method is employed to identify the complex connections between climate patterns (Atlantic Multidecadal Oscillation (AMO), El Niño-Southern Oscillation (ENSO) and Pacific Decadal Oscillation (PDO)) and seasonal precipitation over a typical semi-humid and semi-arid region, the Haihe River Basin in China. Results show that the interactions among multiple climate oscillations are non-uniform in most seasons and phases. Certain joint extreme phases can significantly trigger extreme precipitation (flood and drought) owing to the amplification effect among climate oscillations.
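The Copula-based idea can be sketched with a Gaussian copula linking two synthetic climate indices (the correlation and thresholds are illustrative): dependence between oscillations inflates the probability of joint extreme phases well beyond the independent case.

```python
import math
import numpy as np

rng = np.random.default_rng(5)
n, rho = 200_000, 0.8

# Gaussian copula: correlated normals mapped through the normal CDF to uniforms
z1 = rng.normal(size=n)
z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * rng.normal(size=n)
norm_cdf = np.vectorize(lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))
u1, u2 = norm_cdf(z1), norm_cdf(z2)

# Probability that both indices sit in their joint extreme (upper decile) phase
p_joint = float(np.mean((u1 > 0.9) & (u2 > 0.9)))
p_independent = 0.1 * 0.1
print(p_joint, p_independent)   # dependence amplifies joint extremes
```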
Urban land use decouples plant-herbivore-parasitoid interactions at multiple spatial scales.
Directory of Open Access Journals (Sweden)
Amanda E Nelson
Full Text Available Intense urban and agricultural development alters habitats, increases fragmentation, and may decouple trophic interactions if plants or animals cannot disperse to needed resources. Specialist insects represent a substantial proportion of global biodiversity and their fidelity to discrete microhabitats provides a powerful framework for investigating organismal responses to human land use. We sampled site occupancy and densities for two plant-herbivore-parasitoid systems from 250 sites across a 360 km2 urban/agricultural landscape to ask whether and how human development decouples interactions between trophic levels. We compared patterns of site occupancy, host plant density, herbivory and parasitism rates of insects at two trophic levels with respect to landcover at multiple spatial scales. Geospatial analyses were used to identify landcover characters predictive of insect distributions. We found that herbivorous insect densities were decoupled from host tree densities in urban landcover types at several spatial scales. This effect was amplified for the third trophic level in one of the two insect systems: despite being abundant regionally, a parasitoid species was absent from all urban/suburban landcover even where its herbivore host was common. Our results indicate that human land use patterns limit distributions of specialist insects. Dispersal constraints associated with urban built development are specifically implicated as a limiting factor.
Quantifying Contributions to Transport in Ionic Polymers Across Multiple Length Scales
Madsen, Louis
Self-organized polymer membranes conduct mobile species (ions, water, alcohols, etc.) according to a hierarchy of structural motifs that span sub-nm to >10 μm in length scale. In order to comprehensively understand such materials, our group combines multiple types of NMR dynamics and transport measurements (spectroscopy, diffusometry, relaxometry, imaging) with structural information from scattering and microscopy as well as with theories of porous media,1 electrolytic transport, and oriented matter.2 In this presentation, I will discuss quantitative separation of the phenomena that govern transport in polymer membranes, from intermolecular interactions (<= 2 nm),3 to locally ordered polymer nanochannels (a few to 10s of nm),2 to larger polymer domain structures (10s of nm and larger).1 Using this multi-scale information, we seek to give informed feedback on the design of polymer membranes for use in, e.g., efficient batteries, fuel cells, and mechanical actuators. References: [1] J. Hou, J. Li, D. Mountz, M. Hull, and L. A. Madsen. Journal of Membrane Science 448, 292-298 (2013). [2] J. Li, J. K. Park, R. B. Moore, and L. A. Madsen. Nature Materials 10, 507-511 (2011). [3] M. D. Lingwood, Z. Zhang, B. E. Kidd, K. B. McCreary, J. Hou, and L. A. Madsen. Chemical Communications 49, 4283-4285 (2013).
International Nuclear Information System (INIS)
Shi Yongqian; Zhu Qingfu; Hu Dingsheng; He Tao; Yao Shigui; Lin Shenghuo
2004-01-01
The paper presents the experimental theory and method of the neutron source multiplication technique for in-situ measurements in nuclear criticality safety. The parameter actually measured by the source multiplication method is the subcritical, source-driven effective multiplication factor k_s, not the effective multiplication factor k_eff. The experimental research was carried out on a uranium-solution nuclear criticality safety experiment assembly. The k_s of different subcritical states was measured by the neutron source multiplication method. To obtain k_eff for the same states, the reactivity coefficient per unit solution level was first measured by the period method and then multiplied by the difference between the critical and subcritical solution levels, giving the reactivity of the subcritical solution level; k_eff was finally extracted from the reactivity formula. The implications for nuclear criticality safety and the difference between k_eff and k_s are discussed.
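The core relation behind source multiplication measurements is the point-kinetics result that, for a fixed source, the detector count rate scales as 1/(1 - k). The following Python sketch is a textbook illustration of that relation, not the paper's procedure (which distinguishes k_s from k_eff); all numbers are invented:

```python
def k_from_count_rate(c, c_ref, k_ref):
    """Infer a subcritical multiplication factor from detector count rates.

    Point-kinetics sketch: with a fixed source, the count rate scales as
    C ∝ 1 / (1 - k), so C / C_ref = (1 - k_ref) / (1 - k).
    """
    return 1.0 - (1.0 - k_ref) * c_ref / c

# Reference state k = 0.90; the count rate doubles as the solution
# level rises, implying the subcritical margin has halved.
k = k_from_count_rate(c=200.0, c_ref=100.0, k_ref=0.90)
print(round(k, 3))  # 0.95
```

A doubled count rate halves (1 - k): from 0.10 at the reference state to 0.05, i.e. k = 0.95.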
An Analysis of the HEU-MET-FAST-035 Problem Using CENTRM and SCALE
International Nuclear Information System (INIS)
Hollenbach, D.F.; Jordan, W.C.
1999-01-01
A U/Fe benchmark, designated HEU-MET-FAST-035, has been approved for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The SCALE code and cross sections performed poorly in calculating this critical experiment. Deficiencies were identified both in the ENDF/B-V representation of the resonance region for Fe and in the Nordheim integral treatment when applied to Fe. The combination of these deficiencies led to an almost 10% over-prediction of k_eff. Problems involving a large percentage of Fe and intermediate-energy spectra present special cross-section processing difficulties for SCALE. In ENDF/B-V, resonance data for Fe extend only to 400 keV, although resonances are present well above 1 MeV, and significant resonance data are stored as File 3 data instead of as resonance parameters. The Nordheim integral treatment used in NITAWL to process cross sections assumes that resonances are widely spaced and that all relevant information is contained in the resonance parameters (File 3 data are not processed). These limitations and assumptions result in poor solutions for this class of problems
A Hamiltonian-based derivation of Scaled Boundary Finite Element Method for elasticity problems
International Nuclear Information System (INIS)
Hu Zhiqiang; Lin Gao; Wang Yi; Liu Jun
2010-01-01
The Scaled Boundary Finite Element Method (SBFEM) is a semi-analytical approach for solving partial differential equations. For problems in elasticity, the governing equations can be obtained by a mechanically based formulation, a scaled-boundary-transformation-based formulation, or the principle of virtual work. The governing equations are described in the Lagrangian frame with displacements as unknowns, but in the solution procedure auxiliary variables are introduced and the equations are solved in the state space. Based on the observation that the duality system proposed by W.X. Zhong for solving elastic problems is similar to this solution approach, in this paper the SBFEM discretization and the duality system are combined to derive the governing equations in the Hamiltonian system by introducing dual variables. The Precise Integration Method (PIM) used in the duality system is also an efficient method for solving the SBFEM governing equations for the displacement and boundary stiffness matrix, especially in cases where the usual eigenvalue method runs into numerical difficulties. Numerical examples are used to demonstrate the validity and effectiveness of the PIM for the solution of the boundary static stiffness.
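The Precise Integration Method mentioned above can be sketched for a matrix exponential: split the interval into 2^N tiny substeps, expand exp(A·dt) − I by a short Taylor series, and double back up while propagating the increment Ta = exp(A·dt) − I rather than the full matrix, which limits round-off when dt is tiny. A minimal Python illustration (not the paper's SBFEM implementation; the test matrix and step counts are invented):

```python
import numpy as np

def pim_expm(A, t, N=20, taylor_terms=4):
    """Precise Integration Method sketch: exp(A t) via 2^N scaling and
    squaring, propagating the increment Ta = exp(A*dt) - I to limit
    round-off on the tiny substep."""
    dt = t / (2 ** N)
    Adt = A * dt
    # Truncated Taylor series for exp(A*dt) - I on the tiny interval.
    Ta = np.zeros_like(A)
    term = np.eye(A.shape[0])
    for k in range(1, taylor_terms + 1):
        term = term @ Adt / k
        Ta = Ta + term
    # Doubling identity: exp(2h) - I = (exp(h) - I)^2 + 2 (exp(h) - I)
    for _ in range(N):
        Ta = Ta @ Ta + 2.0 * Ta
    return np.eye(A.shape[0]) + Ta

A = np.array([[0.0, 1.0], [-4.0, 0.0]])  # harmonic oscillator, omega = 2
E = pim_expm(A, t=np.pi)                  # one full period (T = 2*pi/omega)
print(np.allclose(E, np.eye(2)))          # the propagator returns to identity
```

Propagating the increment keeps the O(dt) information from being swamped by the identity matrix at machine precision, which is the key numerical idea of PIM.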
Beison, Ashley; Rademacher, David J
2017-03-01
Background and aims Smartphones are ubiquitous. As smartphones increased in popularity, researchers realized that people were becoming dependent on their smartphones. The purpose here was to provide a better understanding of the factors related to problematic smartphone use (PSPU). Methods The participants were 100 undergraduates (25 males, 75 females) whose ages ranged from 18 to 23 (mean age = 20 years). The participants completed questionnaires to assess gender, ethnicity, year in college, father's education level, mother's education level, family income, age, family history of alcoholism, and PSPU. The Family Tree Questionnaire assessed family history of alcoholism. The Mobile Phone Problem Use Scale (MPPUS) and the Adapted Cell Phone Addiction Test (ACPAT) were used to determine the degree of PSPU. Whereas the MPPUS measures tolerance, escape from other problems, withdrawal, craving, and negative life consequences, the ACPAT measures preoccupation (salience), excessive use, neglecting work, anticipation, lack of control, and neglecting social life. Results Family history of alcoholism and father's education level together explained 26% of the variance in the MPPUS scores and 25% of the variance in the ACPAT scores. The inclusion of mother's education level, ethnicity, family income, age, year in college, and gender did not significantly increase the proportion of variance explained for either MPPUS or ACPAT scores. Discussion and conclusions Family history of alcoholism and father's education level are good predictors of PSPU. As 74%-75% of the variance in PSPU scale scores was not explained, future studies should aim to explain this variance.
EvArnoldi: A New Algorithm for Large-Scale Eigenvalue Problems.
Tal-Ezer, Hillel
2016-05-19
Eigenvalues and eigenvectors are an essential theme in numerical linear algebra. Their study is mainly motivated by their high importance in a wide range of applications. Knowledge of eigenvalues is essential in quantum molecular science. Solutions of the Schrödinger equation for the electrons composing the molecule are the basis of electronic structure theory. Electronic eigenvalues compose the potential energy surfaces for nuclear motion. The eigenvectors allow calculation of dipole transition matrix elements, the core of spectroscopy. The vibrational dynamics of a molecule also requires knowledge of the eigenvalues of the vibrational Hamiltonian. Typically in these problems the dimension of the Hilbert space is huge, while practically only a small subset of eigenvalues is required. In this paper, we present a highly efficient algorithm, named EvArnoldi, for solving large-scale eigenvalue problems. The algorithm, in its basic formulation, is mathematically equivalent to ARPACK (Sorensen, D. C. Implicitly Restarted Arnoldi/Lanczos Methods for Large Scale Eigenvalue Calculations; Springer, 1997; Lehoucq, R. B.; Sorensen, D. C. SIAM Journal on Matrix Analysis and Applications 1996, 17, 789; Calvetti, D.; Reichel, L.; Sorensen, D. C. Electronic Transactions on Numerical Analysis 1994, 2, 21) (or eigs of MATLAB) but significantly simpler.
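EvArnoldi itself is not specified here, but the Arnoldi iteration it builds on can be sketched in a few lines of Python. This toy version (test matrix and dimensions invented) extracts the extremal Ritz value from a small Krylov subspace without ever diagonalizing the full matrix:

```python
import numpy as np

def arnoldi(A, v0, m):
    """Basic Arnoldi iteration: build an orthonormal Krylov basis V and an
    upper-Hessenberg matrix H with A @ V[:, :m] ≈ V @ H. Ritz values
    (eigenvalues of H[:m, :m]) approximate extremal eigenvalues of A."""
    n = len(v0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # invariant subspace found
            break
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

# Diagonal test matrix with spectrum 1..100; m = 30 Krylov vectors.
rng = np.random.default_rng(1)
A = np.diag(np.arange(1.0, 101.0))
V, H = arnoldi(A, rng.normal(size=100), m=30)
ritz = np.sort(np.linalg.eigvals(H[:30, :30]).real)
print(ritz[-1])  # close to the extremal eigenvalue 100
```

As the abstract notes, only a small subset of eigenvalues is usually needed, and the extremal ones converge first in the Krylov subspace.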
Nakhostin Ansari, Noureddin; Naghdi, Soofia; Mohammadi, Roghaye; Hasson, Scott
2015-02-01
The Multiple Sclerosis Walking Scale-12 (MSWS-12) is a multi-item rating scale used to assess the perspectives of patients about the impact of MS on their walking ability. The aim of this study was to examine the reliability and validity of the MSWS-12 in Persian speaking patients with MS. The MSWS-12 questionnaire was translated into Persian language according to internationally adopted standards involving forward-backward translation, reviewed by an expert committee and tested on the pre-final version. In this cross-sectional study, 100 participants (50 patients with MS and 50 healthy subjects) were included. The MSWS-12 was administered twice 7 days apart to 30 patients with MS for test and retest reliability. Internal consistency reliability was Cronbach's α 0.96 for test and 0.97 for retest. There were no significant floor or ceiling effects. Test-retest reliability was excellent (intraclass correlation coefficient [ICC] agreement of 0.98, 95% CI, 0.95-0.99) confirming the reproducibility of the Persian MSWS-12. Construct validity using known group methods was demonstrated through a significant difference in the Persian MSWS-12 total score between the patients with MS and healthy subjects. Factor analysis extracted 2 latent factors (79.24% of the total variance). A second factor analysis suggested the 9-item Persian MSWS as a unidimensional scale for patients with MS. The Persian MSWS-12 was found to be valid and reliable for assessing walking ability in Persian speaking patients with MS. Copyright © 2014 Elsevier B.V. All rights reserved.
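The internal-consistency statistic reported above, Cronbach's α, has a short closed form. A Python sketch with synthetic data (the real MSWS-12 responses are not available here; the loadings and sample size are invented):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Synthetic 12-item scale driven by one latent trait, giving the kind of
# high internal consistency (alpha near 0.96) reported in the abstract.
rng = np.random.default_rng(2)
trait = rng.normal(size=(200, 1))
items = trait + 0.3 * rng.normal(size=(200, 12))
print(round(cronbach_alpha(items), 2))
```

When all items load on a single trait with little unique noise, α approaches 1, which is consistent with the abstract's report of α = 0.96-0.97.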
Ozdemir, S.; Reis, Z. Ayvaz
2013-01-01
Mathematics is an important discipline, providing crucial tools, such as problem solving, to improve our cognitive abilities. In order to solve a problem, it helps to envision and represent it through multiple means. Multiple representations can help a person redefine a problem in his/her own words during that envisioning process. Dynamic and…
Error due to unresolved scales in estimation problems for atmospheric data assimilation
Janjic, Tijana
The error arising due to unresolved scales in data assimilation procedures is examined. The problem of estimating the projection of the state of a passive scalar undergoing advection at a sequence of times is considered. The projection belongs to a finite-dimensional function space and is defined on the continuum. Using the continuum projection of the state of a passive scalar, a mathematical definition is obtained for the error arising due to the presence, in the continuum system, of scales unresolved by the discrete dynamical model. This error affects the estimation procedure through point observations that include the unresolved scales. In this work, two approximate methods for taking into account the error due to unresolved scales and the resulting correlations are developed and employed in the estimation procedure. The resulting formulas resemble the Schmidt-Kalman filter and the usual discrete Kalman filter, respectively. For this reason, the newly developed filters are called the Schmidt-Kalman filter and the traditional filter. In order to test the assimilation methods, a two-dimensional advection model with nonstationary spectrum was developed for passive scalar transport in the atmosphere. An analytical solution on the sphere was found depicting the model dynamics evolution. Using this analytical solution the model error is avoided, and the error due to unresolved scales is the only error left in the estimation problem. It is demonstrated that the traditional and the Schmidt-Kalman filter work well provided the exact covariance function of the unresolved scales is known. However, this requirement is not satisfied in practice, and the covariance function must be modeled. The Schmidt-Kalman filter cannot be computed in practice without further approximations. Therefore, the traditional filter is better suited for practical use. Also, the traditional filter does not require modeling of the full covariance function of the unresolved scales, but only
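A minimal sketch of the Schmidt-Kalman ("consider") measurement update discussed above: the covariance of the unresolved component enters the innovation statistics and the gain, but the component itself is never corrected. The matrices and numbers below are invented for illustration, not taken from the passive-scalar experiments:

```python
import numpy as np

def schmidt_kalman_update(x, Pxx, Pxb, Pbb, y, H, B, R):
    """One Schmidt-Kalman measurement update for y = H x + B b + v.

    b is the unresolved component: its covariance Pbb (and the cross
    term Pxb) is carried in the innovation statistics, but b itself is
    never updated (its gain is fixed at zero)."""
    S = H @ Pxx @ H.T + H @ Pxb @ B.T + B @ Pxb.T @ H.T + B @ Pbb @ B.T + R
    K = (Pxx @ H.T + Pxb @ B.T) @ np.linalg.inv(S)
    x_new = x + K @ (y - H @ x)
    Pxx_new = Pxx - K @ (H @ Pxx + B @ Pxb.T)
    Pxb_new = Pxb - K @ (H @ Pxb + B @ Pbb)
    return x_new, Pxx_new, Pxb_new

# Scalar example: resolved state with prior variance 1.0; the observation
# also contains an unresolved-scale component with variance 0.5.
x, Pxx = np.zeros(1), np.eye(1)
Pxb, Pbb = np.zeros((1, 1)), 0.5 * np.eye(1)
H = B = np.eye(1)
R = 0.1 * np.eye(1)
x_new, Pxx_new, _ = schmidt_kalman_update(x, Pxx, Pxb, Pbb,
                                          np.array([1.0]), H, B, R)
print(x_new[0], Pxx_new[0, 0])  # 0.625 0.375
```

Compared with a naive filter that ignores Pbb, the Schmidt-Kalman gain is smaller and the posterior variance larger, avoiding overconfidence in observations contaminated by unresolved scales.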
Parallel Optimization of Polynomials for Large-scale Problems in Stability and Control
Kamyar, Reza
In this thesis, we focus on some of the NP-hard problems in control theory. Thanks to the converse Lyapunov theory, these problems can often be modeled as optimization over polynomials. To avoid the problem of intractability, we establish a trade off between accuracy and complexity. In particular, we develop a sequence of tractable optimization problems --- in the form of Linear Programs (LPs) and/or Semi-Definite Programs (SDPs) --- whose solutions converge to the exact solution of the NP-hard problem. However, the computational and memory complexity of these LPs and SDPs grow exponentially with the progress of the sequence - meaning that improving the accuracy of the solutions requires solving SDPs with tens of thousands of decision variables and constraints. Setting up and solving such problems is a significant challenge. The existing optimization algorithms and software are only designed to use desktop computers or small cluster computers --- machines which do not have sufficient memory for solving such large SDPs. Moreover, the speed-up of these algorithms does not scale beyond dozens of processors. This in fact is the reason we seek parallel algorithms for setting-up and solving large SDPs on large cluster- and/or super-computers. We propose parallel algorithms for stability analysis of two classes of systems: 1) Linear systems with a large number of uncertain parameters; 2) Nonlinear systems defined by polynomial vector fields. First, we develop a distributed parallel algorithm which applies Polya's and/or Handelman's theorems to some variants of parameter-dependent Lyapunov inequalities with parameters defined over the standard simplex. The result is a sequence of SDPs which possess a block-diagonal structure. We then develop a parallel SDP solver which exploits this structure in order to map the computation, memory and communication to a distributed parallel environment. Numerical tests on a supercomputer demonstrate the ability of the algorithm to
Decomposition and parallelization strategies for solving large-scale MDO problems
Energy Technology Data Exchange (ETDEWEB)
Grauer, M.; Eschenauer, H.A. [Research Center for Multidisciplinary Analyses and Applied Structural Optimization, FOMAAS, Univ. of Siegen (Germany)
2007-07-01
During previous years, structural optimization has been recognized as a useful tool within the disciplines of engineering and economics. However, the optimization of large-scale systems or structures is impeded by an immense solution effort. This was the reason to start a joint research and development (R and D) project between the Institute of Mechanics and Control Engineering and the Information and Decision Sciences Institute within the Research Center for Multidisciplinary Analyses and Applied Structural Optimization (FOMAAS) on cluster computing for parallel and distributed solution of multidisciplinary optimization (MDO) problems based on the OpTiX-Workbench. Here the focus of attention will be put on coarse-grained parallelization and its implementation on clusters of workstations. A further point of emphasis was laid on the development of a parallel decomposition strategy called PARDEC, for the solution of very complex optimization problems which cannot be solved efficiently by sequential integrated optimization. The use of the OpTiX-Workbench together with the FEM ground water simulation system FEFLOW is shown for a special water management problem. (orig.)
Coderre, Sylvain P; Harasym, Peter; Mandin, Henry; Fick, Gordon
2004-11-05
Pencil-and-paper examination formats, and specifically the standard five-option multiple-choice question, have often been questioned as a means for assessing higher-order clinical reasoning or problem solving. This study firstly investigated whether two paper formats with differing numbers of alternatives (standard five-option and extended-matching questions) can test problem-solving abilities. Secondly, the impact of the number of alternatives on psychometrics and problem-solving strategies was examined. Think-aloud protocols were collected to determine the problem-solving strategy used by experts and non-experts in answering Gastroenterology questions across the two pencil-and-paper formats. The two formats demonstrated equal ability in testing problem-solving abilities, while the number of alternatives did not significantly impact psychometrics or the problem-solving strategies utilized. These results support the notion that well-constructed multiple-choice questions can in fact test higher-order clinical reasoning. Furthermore, it can be concluded that in testing clinical reasoning, the question stem, or content, remains more important than the number of alternatives.
Gardner, Toby A.; Ferreira, Joice; Barlow, Jos; Lees, Alexander C.; Parry, Luke; Vieira, Ima Célia Guimarães; Berenguer, Erika; Abramovay, Ricardo; Aleixo, Alexandre; Andretti, Christian; Aragão, Luiz E. O. C.; Araújo, Ivanei; de Ávila, Williams Souza; Bardgett, Richard D.; Batistella, Mateus; Begotti, Rodrigo Anzolin; Beldini, Troy; de Blas, Driss Ezzine; Braga, Rodrigo Fagundes; Braga, Danielle de Lima; de Brito, Janaína Gomes; de Camargo, Plínio Barbosa; Campos dos Santos, Fabiane; de Oliveira, Vívian Campos; Cordeiro, Amanda Cardoso Nunes; Cardoso, Thiago Moreira; de Carvalho, Déborah Reis; Castelani, Sergio André; Chaul, Júlio Cézar Mário; Cerri, Carlos Eduardo; Costa, Francisco de Assis; da Costa, Carla Daniele Furtado; Coudel, Emilie; Coutinho, Alexandre Camargo; Cunha, Dênis; D'Antona, Álvaro; Dezincourt, Joelma; Dias-Silva, Karina; Durigan, Mariana; Esquerdo, Júlio César Dalla Mora; Feres, José; Ferraz, Silvio Frosini de Barros; Ferreira, Amanda Estefânia de Melo; Fiorini, Ana Carolina; da Silva, Lenise Vargas Flores; Frazão, Fábio Soares; Garrett, Rachel; Gomes, Alessandra dos Santos; Gonçalves, Karoline da Silva; Guerrero, José Benito; Hamada, Neusa; Hughes, Robert M.; Igliori, Danilo Carmago; Jesus, Ederson da Conceição; Juen, Leandro; Junior, Miércio; Junior, José Max Barbosa de Oliveira; Junior, Raimundo Cosme de Oliveira; Junior, Carlos Souza; Kaufmann, Phil; Korasaki, Vanesca; Leal, Cecília Gontijo; Leitão, Rafael; Lima, Natália; Almeida, Maria de Fátima Lopes; Lourival, Reinaldo; Louzada, Júlio; Nally, Ralph Mac; Marchand, Sébastien; Maués, Márcia Motta; Moreira, Fátima M. 
S.; Morsello, Carla; Moura, Nárgila; Nessimian, Jorge; Nunes, Sâmia; Oliveira, Victor Hugo Fonseca; Pardini, Renata; Pereira, Heloisa Correia; Pompeu, Paulo Santos; Ribas, Carla Rodrigues; Rossetti, Felipe; Schmidt, Fernando Augusto; da Silva, Rodrigo; da Silva, Regina Célia Viana Martins; da Silva, Thiago Fonseca Morello Ramalho; Silveira, Juliana; Siqueira, João Victor; de Carvalho, Teotônio Soares; Solar, Ricardo R. C.; Tancredi, Nicola Savério Holanda; Thomson, James R.; Torres, Patrícia Carignano; Vaz-de-Mello, Fernando Zagury; Veiga, Ruan Carlo Stulpen; Venturieri, Adriano; Viana, Cecília; Weinhold, Diana; Zanetti, Ronald; Zuanon, Jansen
2013-01-01
Science has a critical role to play in guiding more sustainable development trajectories. Here, we present the Sustainable Amazon Network (Rede Amazônia Sustentável, RAS): a multidisciplinary research initiative involving more than 30 partner organizations working to assess both social and ecological dimensions of land-use sustainability in eastern Brazilian Amazonia. The research approach adopted by RAS offers three advantages for addressing land-use sustainability problems: (i) the collection of synchronized and co-located ecological and socioeconomic data across broad gradients of past and present human use; (ii) a nested sampling design to aid comparison of ecological and socioeconomic conditions associated with different land uses across local, landscape and regional scales; and (iii) a strong engagement with a wide variety of actors and non-research institutions. Here, we elaborate on these key features, and identify the ways in which RAS can help in highlighting those problems in most urgent need of attention, and in guiding improvements in land-use sustainability in Amazonia and elsewhere in the tropics. We also discuss some of the practical lessons, limitations and realities faced during the development of the RAS initiative so far. PMID:23610172
Working Memory Components and Problem-Solving Accuracy: Are There Multiple Pathways?
Swanson, H. Lee; Fung, Wenson
2016-01-01
This study determined the working memory (WM) components (executive, phonological short-term memory [STM], and visual-spatial sketchpad) that best predicted mathematical word problem-solving accuracy in elementary schoolchildren (N = 392). The battery of tests administered to assess mediators between WM and problem-solving included measures of…
Tausendfreund, Tim; Knot-Dickscheit, Jana; Post, Wendy J.; Knorth, Erik J.; Grietens, Hans
2014-01-01
Families who face a multitude of severe and persistent problems in a number of different areas of life are commonly referred to as multi-problem families in Dutch child welfare. Although evidence suggests that short-term crisis interventions can have positive effects in these families, they have up
Outcomes of Domestic Standard Problem-03 : Scaling Capability of Facility Data
Energy Technology Data Exchange (ETDEWEB)
Park, Yusun; Youn, Bumsu; Lee, Seung-won; Kim, Won-tae; Kang, Kyoung-ho; Choi, Ki-yong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-10-15
The previous two Domestic Standard Problem (DSP) exercises provided good research opportunities for many nuclear organizations to understand the capabilities of current system-scale safety analysis codes and to identify areas for further code development. Thus, the third DSP program was launched in the second half of 2012. For the third DSP exercise (DSP-03), a double-ended guillotine break of the main steam line at 8% power without loss of off-site power (LOOP) was selected as the target scenario. Seventeen domestic organizations joined this DSP exercise, which was performed in an open calculation environment similar to the previous ones. In the present DSP-03, taking into account the different levels of code experience and expertise, three sub-topics were suggested by the operating agency. Among them, the investigation of the scaling capability of the facility data, the topic of Group A, is discussed in this paper. Participants performed two calculations, one with the ATLAS model and one with the APR1400 model. By comparing major and detailed local parameters from both calculation models, the scaling capability of the facility data was investigated. The 38.6 mm MSLB in the ATLAS test facility was calculated using the SPACE and MARS-KS codes. To analyze the effect of scaling on the system behavior, the MSLB in APR-1400 was also simulated in this study, and the following results were obtained. - The codes predicted appropriately the overall MSLB experimental data obtained from the ATLAS test facility. - The break flow calculated by the codes was lower than that of the experimental data. - The difference between calculated and measured values was attributed to the measurement of mass from the break flow. - The core inlet and outlet temperatures of the ATLAS test facility were predicted lower than those of the experimental data.
Predicting problem behaviors with multiple expectancies: expanding expectancy-value theory.
Borders, Ashley; Earleywine, Mitchell; Huey, Stanley J
2004-01-01
Expectancy-value theory emphasizes the importance of outcome expectancies for behavioral decisions, but most tests of the theory focus on a single behavior and a single expectancy. However, the matching law suggests that individuals consider expected outcomes for both the target behavior and alternative behaviors when making decisions. In this study, we expanded expectancy-value theory to evaluate the contributions of two competing expectancies to adolescent behavior problems. One hundred twenty-one high school students completed measures of behavior problems, expectancies for both acting out and academic effort, and perceived academic competence. Students' self-reported behavior problems covaried mostly with perceived competence and academic expectancies and only nominally with problem behavior expectancies. We suggest that behavior problems may result from students perceiving a lack of valued or feasible alternative behaviors, such as studying. We discuss implications for interventions and suggest that future research continue to investigate the contribution of alternative expectancies to behavioral decisions.
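The analysis described above amounts to comparing nested regression models by the incremental variance each expectancy explains. A hedged Python sketch with synthetic data (the coefficients, predictors, and effect sizes are invented; only the design of the comparison follows the abstract):

```python
import numpy as np

def r_squared(X, y):
    """R-squared of an ordinary least squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

# Synthetic outcome driven by perceived competence and academic
# expectancies, only nominally by problem-behavior expectancies.
rng = np.random.default_rng(3)
n = 121
competence = rng.normal(size=n)
academic = rng.normal(size=n)
problem_exp = rng.normal(size=n)
y = 0.6 * competence + 0.5 * academic + 0.1 * problem_exp + rng.normal(size=n)

r2_base = r_squared(np.column_stack([competence, academic]), y)
r2_full = r_squared(np.column_stack([competence, academic, problem_exp]), y)
print(round(r2_base, 3), round(r2_full - r2_base, 3))
```

The small increment from adding the problem-behavior expectancy mirrors the abstract's finding that behavior problems covaried mostly with competence and academic expectancies.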
Does the Assessment of Recovery Capital scale reflect a single or multiple domains?
Directory of Open Access Journals (Sweden)
Arndt S
2017-07-01
Full Text Available Stephan Arndt,1–3 Ethan Sahker,1,4 Suzy Hedden1 1Iowa Consortium for Substance Abuse Research and Evaluation, 2Department of Psychiatry, Carver College of Medicine, 3Department of Biostatistics, College of Public Health, 4Department of Psychological and Quantitative Foundations, Counseling Psychology Program College of Education, University of Iowa, Iowa City, IA, USA Objective: The goal of this study was to determine whether the 50-item Assessment of Recovery Capital scale represents a single general measure or whether multiple domains might be psychometrically useful for research or clinical applications. Methods: Data are from a cross-sectional de-identified existing program evaluation information data set with 1,138 clients entering substance use disorder treatment. Principal components and iterated factor analysis were used on the domain scores. Multiple group factor analysis provided a quasi-confirmatory factor analysis. Results: The solution accounted for 75.24% of the total variance, suggesting that 10 factors provide a reasonably good fit. However, Tucker's congruence coefficients between the factor structure and defining weights (0.41–0.52) suggested a poor fit to the hypothesized 10-domain structure. Principal components of the 10-domain scores yielded one factor whose eigenvalue was greater than one (5.93), accounting for 75.8% of the common variance. A few domains had perceptible but small unique variance components suggesting that a few of the domains may warrant enrichment. Conclusion: Our findings suggest that there is one general factor, with a caveat. Using the 10 measures inflates the chance for Type I errors. Using one general measure avoids this issue, is simple to interpret, and could reduce the number of items. However, those seeking to maximally predict later recovery success may need to use the full instrument and all 10 domains. Keywords: social support, psychometrics, quality of life
Spatial heterogeneity regulates plant-pollinator networks across multiple landscape scales.
Directory of Open Access Journals (Sweden)
Eduardo Freitas Moreira
Full Text Available Mutualistic plant-pollinator interactions play a key role in biodiversity conservation and ecosystem functioning. In a community, the combination of these interactions can generate emergent properties, e.g., robustness and resilience to disturbances such as fluctuations in populations and extinctions. Given that these systems are hierarchical and complex, environmental changes must have multiple levels of influence. In addition, changes in habitat quality and in the landscape structure are important threats to plants, pollinators and their interactions. However, despite the importance of these phenomena for the understanding of biological systems, as well as for conservation and management strategies, few studies have empirically evaluated these effects at the network level. Therefore, the objective of this study was to investigate the influence of local conditions and landscape structure at multiple scales on the characteristics of plant-pollinator networks. This study was conducted in agri-natural lands in Chapada Diamantina, Bahia, Brazil. Pollinators were collected in 27 sampling units distributed orthogonally along a gradient of proportion of agriculture and landscape diversity. The Akaike information criterion was used to select models that best fit the metrics for network characteristics, comparing four hypotheses represented by a set of a priori candidate models with specific combinations of the proportion of agriculture, the average shape of the landscape elements, the diversity of the landscape and the structure of local vegetation. The results indicate that a reduction of habitat quality and landscape heterogeneity can cause species loss and a decrease of network nestedness. These structural changes can reduce the robustness and resilience of plant-pollinator networks, which compromises the reproductive success of plants, the maintenance of biodiversity and the stability of the pollination service. We also discuss the possible explanations for
Spatial heterogeneity regulates plant-pollinator networks across multiple landscape scales.
Moreira, Eduardo Freitas; Boscolo, Danilo; Viana, Blandina Felipe
2015-01-01
Mutualistic plant-pollinator interactions play a key role in biodiversity conservation and ecosystem functioning. In a community, the combination of these interactions can generate emergent properties, e.g., robustness and resilience to disturbances such as fluctuations in populations and extinctions. Given that these systems are hierarchical and complex, environmental changes must have multiple levels of influence. In addition, changes in habitat quality and in the landscape structure are important threats to plants, pollinators and their interactions. However, despite the importance of these phenomena for the understanding of biological systems, as well as for conservation and management strategies, few studies have empirically evaluated these effects at the network level. Therefore, the objective of this study was to investigate the influence of local conditions and landscape structure at multiple scales on the characteristics of plant-pollinator networks. This study was conducted in agri-natural lands in Chapada Diamantina, Bahia, Brazil. Pollinators were collected in 27 sampling units distributed orthogonally along a gradient of proportion of agriculture and landscape diversity. The Akaike information criterion was used to select models that best fit the metrics for network characteristics, comparing four hypotheses represented by a set of a priori candidate models with specific combinations of the proportion of agriculture, the average shape of the landscape elements, the diversity of the landscape and the structure of local vegetation. The results indicate that a reduction of habitat quality and landscape heterogeneity can cause species loss and a decrease of network nestedness. These structural changes can reduce the robustness and resilience of plant-pollinator networks, which compromises the reproductive success of plants, the maintenance of biodiversity and the stability of the pollination service. We also discuss the possible explanations for these relationships and
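Model selection by the Akaike information criterion, as used above, can be sketched in Python. The candidate predictor sets below are invented stand-ins for the a priori models (proportion of agriculture, landscape diversity, etc.); AIC rewards fit but penalizes each extra parameter:

```python
import numpy as np

def aic(y, X):
    """AIC of a Gaussian OLS model (up to an additive constant):
    n * log(RSS / n) + 2k, with k = number of fitted coefficients."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = np.sum((y - X1 @ beta) ** 2)
    n, k = X1.shape
    return n * np.log(rss / n) + 2 * k

# Synthetic network metric (e.g., nestedness) over 27 sampling units,
# driven by agriculture and landscape diversity but not by a noise variable.
rng = np.random.default_rng(4)
n = 27
agriculture = rng.normal(size=n)
land_div = rng.normal(size=n)
noise_var = rng.normal(size=n)
nestedness = 1.0 - 0.8 * agriculture + 0.5 * land_div + 0.3 * rng.normal(size=n)

candidates = {
    "agriculture only": agriculture[:, None],
    "agriculture + diversity": np.column_stack([agriculture, land_div]),
    "agriculture + diversity + noise":
        np.column_stack([agriculture, land_div, noise_var]),
}
scores = {name: aic(nestedness, X) for name, X in candidates.items()}
best = min(scores, key=scores.get)
print(best)
```

Dropping a genuinely informative predictor raises AIC sharply, while adding an uninformative one is punished by the 2k penalty, which is how the a priori hypotheses are ranked.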
The e-MSWS-12: improving the multiple sclerosis walking scale using item response theory.
Engelhard, Matthew M; Schmidt, Karen M; Engel, Casey E; Brenton, J Nicholas; Patek, Stephen D; Goldman, Myla D
2016-12-01
The Multiple Sclerosis Walking Scale (MSWS-12) is the predominant patient-reported measure of multiple sclerosis (MS)-related walking ability, yet it had not been analyzed using item response theory (IRT), the emerging standard for patient-reported outcome (PRO) validation. This study aims to reduce MSWS-12 measurement error and facilitate computerized adaptive testing by creating an IRT model of the MSWS-12 and distributing it online. MSWS-12 responses from 284 subjects with MS were collected by mail and used to fit and compare several IRT models. Following model selection and assessment, subpopulations based on age and sex were tested for differential item functioning (DIF). Model comparison favored a one-dimensional graded response model (GRM). This model met fit criteria and explained 87% of response variance. The performance of each MSWS-12 item was characterized using category response curves (CRCs) and item information. IRT-based MSWS-12 scores correlated with traditional MSWS-12 scores (r = 0.99) and timed 25-foot walk (T25FW) speed (r = -0.70). Item 2 showed DIF based on age (χ² = 19.02, df = 5, p < …); Item 11 showed DIF based on sex (χ² = 13.76, df = 5, p = 0.02). MSWS-12 measurement error depends on walking ability, but could be lowered by improving or replacing items with low information or DIF. The e-MSWS-12 includes IRT-based scoring, error checking, and an estimated T25FW derived from MSWS-12 responses. It is available at https://ms-irt.shinyapps.io/e-MSWS-12 .
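The graded response model behind such scores defines P(X ≥ k | θ) as a logistic function of the latent trait θ, and the category response curves as differences of adjacent cumulative probabilities. A minimal numpy sketch; the item parameters here are illustrative, not the fitted MSWS-12 estimates:

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Category response curves for one graded-response item.

    theta: latent trait; a: discrimination; b: ordered thresholds (K-1 values).
    Returns P(X = k | theta) for k = 0..K-1.
    """
    b = np.asarray(b, dtype=float)
    p_ge = 1.0 / (1.0 + np.exp(-a * (theta - b)))  # P(X >= k), k = 1..K-1
    cum = np.concatenate(([1.0], p_ge, [0.0]))     # P(X >= 0) = 1, P(X >= K) = 0
    return cum[:-1] - cum[1:]                      # adjacent differences

# hypothetical 4-category item with thresholds at -1, 0, 1
probs = grm_category_probs(theta=0.0, a=1.7, b=[-1.0, 0.0, 1.0])
```

Summing each item's expected score over the 12 items would give an IRT-based scale score comparable to the traditional sum score.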
Gross, Lutz; Altinay, Cihan; Fenwick, Joel; Smith, Troy
2014-05-01
inversion and appropriate solution schemes in escript. We will also give a brief introduction into escript's open framework for defining and solving geophysical inversion problems. Finally we will show some benchmark results to demonstrate the computational scalability of the inversion method across a large number of cores and compute nodes in a parallel computing environment. References: - L. Gross et al. (2013): Escript Solving Partial Differential Equations in Python Version 3.4, The University of Queensland, https://launchpad.net/escript-finley - L. Gross and C. Kemp (2013) Large Scale Joint Inversion of Geophysical Data using the Finite Element Method in escript. ASEG Extended Abstracts 2013, http://dx.doi.org/10.1071/ASEG2013ab306 - T. Poulet, L. Gross, D. Georgiev, J. Cleverley (2012): escript-RT: Reactive transport simulation in Python using escript, Computers & Geosciences, Volume 45, 168-176. http://dx.doi.org/10.1016/j.cageo.2011.11.005.
Teng, Xian; Pei, Sen; Morone, Flaviano; Makse, Hernán A
2016-10-26
Identifying the most influential spreaders that maximize information flow is a central question in network theory. Recently, a scalable method called "Collective Influence (CI)" has been put forward through collective influence maximization. In contrast to heuristic methods that evaluate nodes' significance separately, the CI method inspects the collective influence of multiple spreaders. Although CI applies to the influence maximization problem in the percolation model, it is still important to examine its efficacy in realistic information spreading. Here, we examine real-world information flow in various social and scientific platforms including the American Physical Society, Facebook, Twitter and LiveJournal. Since empirical data cannot be directly mapped to ideal multi-source spreading, we leverage the behavioral patterns of users extracted from data to construct "virtual" information spreading processes. Our results demonstrate that the set of spreaders selected by CI can induce a larger scale of information propagation. Moreover, local measures such as the number of connections or citations are not necessarily the deterministic factors of nodes' importance in realistic information spreading. This result has significance for ranking scientists in scientific networks like the APS, where the commonly used number of citations can be a poor indicator of the collective influence of authors in the community.
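The CI score referenced above is commonly defined as CI_ℓ(i) = (k_i − 1) Σ_j (k_j − 1), with the sum over nodes j on the frontier of the ball of radius ℓ around i. A generic sketch on a toy graph, assuming that standard definition rather than the authors' own code:

```python
from collections import deque

def collective_influence(adj, node, ell=2):
    """CI_ell(node) = (k_node - 1) * sum of (k_j - 1) over nodes j exactly
    ell hops away (the frontier of the ball of radius ell)."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        if dist[u] == ell:          # do not expand beyond the ball
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    frontier = [j for j, d in dist.items() if d == ell]
    degree = {u: len(adj[u]) for u in adj}
    return (degree[node] - 1) * sum(degree[j] - 1 for j in frontier)

# toy graph: two hubs (0 and 2) joined by a bridge node 1
adj = {0: [10, 11, 1], 1: [0, 2], 2: [1, 20, 21],
       10: [0], 11: [0], 20: [2], 21: [2]}
```

Ranking nodes by CI and removing the top scorer repeatedly is the greedy procedure used to approximate optimal percolation.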
Castelletti, A.; Giuliani, M.; Block, P. J.
2017-12-01
Increasingly uncertain hydrologic regimes combined with more frequent and intense extreme events are challenging water systems management worldwide, emphasizing the need for accurate medium- to long-term predictions to promptly trigger anticipatory operations. Although modern forecasts are skillful over short lead times (from hours to days), predictability generally tends to decrease on longer lead times. Global climate teleconnections, such as the El Niño Southern Oscillation (ENSO), may contribute to extending forecast lead times. However, the ENSO teleconnection is well defined in some locations, such as the Western USA and Australia, while there is no consensus on how it can be detected and used in other regions, particularly in Europe, Africa, and Asia. In this work, we generalize the Niño Index Phase Analysis (NIPA) framework by contributing the Multi Variate Niño Index Phase Analysis (MV-NIPA), which allows capturing the state of multiple large-scale climate signals (i.e. ENSO, North Atlantic Oscillation, Pacific Decadal Oscillation, Atlantic Multi-decadal Oscillation, Indian Ocean Dipole) to forecast hydroclimatic variables on a seasonal time scale. Specifically, our approach distinguishes the different phases of the considered climate signals and, for each phase, identifies relevant anomalies in Sea Surface Temperature (SST) that influence the local hydrologic conditions. The potential of the MV-NIPA framework is demonstrated through an application to the Lake Como system, a regulated lake in northern Italy which is mainly operated for flood control and irrigation supply. Numerical results show high correlations between seasonal SST values and one-season-ahead precipitation in the Lake Como basin. The skill of the resulting MV-NIPA forecast outperforms that of ECMWF products. This information represents a valuable contribution to partially anticipate the summer water availability, especially during drought events, ultimately supporting the improvement of the Lake Como
Kohrs, F; Heyer, R; Bissinger, T; Kottler, R; Schallert, K; Püttker, S; Behne, A; Rapp, E; Benndorf, D; Reichl, U
2017-08-01
Complex microbial communities are the functional core of anaerobic digestion processes taking place in biogas plants (BGP). So far, however, a comprehensive characterization of the microbiomes involved in methane formation is technically challenging. As an alternative, enriched communities from laboratory-scale experiments can be investigated that have a reduced number of organisms and are easier to characterize by state-of-the-art mass spectrometric (MS) metaproteomic workflows. Six parallel laboratory digesters were inoculated with sludge from a full-scale BGP to study the development of enriched microbial communities under defined conditions. During the first three months of cultivation, all reactors (R1-R6) were functionally comparable regarding biogas production (375-625 NL L⁻¹ reactor volume d⁻¹), methane yields (50-60%), pH values (7.1-7.3), and volatile fatty acids (VFA, […] 1 g NH₃ L⁻¹) showed an increase to pH 7.5-8.0, accumulation of acetate (>10 mM), and decreasing biogas production (<125 NL L⁻¹ reactor volume d⁻¹). Tandem MS (MS/MS)-based proteotyping allowed the identification of taxonomic abundances and biological processes. Although all reactors showed similar performances, proteotyping and terminal restriction fragment length polymorphism (T-RFLP) fingerprinting revealed significant differences in the composition of the individual microbial communities, indicating multiple steady states. Furthermore, cellulolytic enzymes and cellulosomal proteins of Clostridium thermocellum were identified as specific markers for the thermophilic reactors (R3, R4). Metaproteins found in R3 indicated hydrogenotrophic methanogenesis, whereas metaproteins of acetoclastic methanogenesis were identified in R4. This suggests not only an individual evolution of microbial communities even when BGPs are started from the same initial conditions under well-controlled environmental conditions, but also a high compositional variance of microbiomes under
A Rich Vehicle Routing Problem with Multiple Trips and Driver Shifts
Arda, Yasemin; Crama, Yves; Kucukaydin, Hande; Talla Nobibon, Fabrice
2012-01-01
This study is concerned with a rich vehicle routing problem (RVRP) encountered at a Belgian transportation company in charge of servicing supermarkets and hypermarkets belonging to a franchise. The studied problem can be classified as a one-to-many-to-one pick-up and delivery problem, where there is a single depot from which all delivery customers are served and to which every pick-up demand must be carried back (Gutiérrez-Jarpa et al., 2010). The delivery and backhaul customers are considere...
Solution matching for a three-point boundary-value problem on a time scale
Directory of Open Access Journals (Sweden)
Martin Eggensperger
2004-07-01
Full Text Available Let $\mathbb{T}$ be a time scale such that $t_1, t_2, t_3 \in \mathbb{T}$. We show the existence of a unique solution for the three-point boundary value problem $$\displaylines{ y^{\Delta\Delta\Delta}(t) = f(t, y(t), y^{\Delta}(t), y^{\Delta\Delta}(t)), \quad t \in [t_1, t_3] \cap \mathbb{T}, \cr y(t_1) = y_1, \quad y(t_2) = y_2, \quad y(t_3) = y_3. }$$ We do this by matching a solution to the first equation satisfying two-point boundary conditions on $[t_1, t_2] \cap \mathbb{T}$ with a solution satisfying two-point boundary conditions on $[t_2, t_3] \cap \mathbb{T}$.
International Nuclear Information System (INIS)
Klibanov, Michael V; Pantong, Natee; Fiddy, Michael A; Schenk, John; Beilina, Larisa
2010-01-01
A globally convergent algorithm by the first and third authors for a 3D hyperbolic coefficient inverse problem is verified on experimental data measured in the picosecond-scale regime. Quantifiable images of dielectric abnormalities are obtained. The total measurement time of a 100 ps pulse for one detector location was 1.2 ns with a 20 ps (=0.02 ns) time step between two consecutive readings. Blind tests have consistently demonstrated accurate imaging of the refractive indexes of dielectric abnormalities. At the same time, it is shown that a modified gradient method is inapplicable to this kind of experimental data. This inverse algorithm is also applicable to other types of imaging modalities, e.g. acoustics. Potential applications are in airport security, imaging of land mines, imaging of defects in non-destructive testing, etc.
Application of the spectral Lanczos decomposition method to large scale problems arising in geophysics
Energy Technology Data Exchange (ETDEWEB)
Tamarchenko, T. [Western Atlas Logging Services, Houston, TX (United States)
1996-12-31
This paper presents an application of the Spectral Lanczos Decomposition Method (SLDM) to numerical modeling of electromagnetic diffusion and elastic wave propagation in inhomogeneous media. SLDM approximates the action of a matrix function as a linear combination of basis vectors in a Krylov subspace. I applied the method to model electromagnetic fields in three dimensions and elastic waves in two dimensions. The finite-difference approximation of the spatial part of the differential operator reduces the initial boundary-value problem to a system of ordinary differential equations with respect to time. The solution to this system requires calculating exponential and sine/cosine functions of the stiffness matrices. Large-scale numerical examples are in good agreement with the theoretical error bounds and stability estimates given by Druskin and Knizhnerman (1987).
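The core of SLDM — approximating f(A)v from a small Krylov subspace — can be sketched as follows. This is a generic illustration with full reorthogonalization, not the paper's production code:

```python
import numpy as np

def lanczos_f_times_v(A, v, f, m):
    """Approximate f(A) @ v for symmetric A using an m-step Lanczos basis.

    f acts on eigenvalues (e.g. np.exp for the matrix exponential).
    """
    n = len(v)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        # full reorthogonalization keeps the basis numerically orthogonal
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)  # projected matrix
    evals, Q = np.linalg.eigh(T)
    y = Q @ (f(evals) * Q[0])          # f(T) @ e1 via T's eigendecomposition
    return np.linalg.norm(v) * (V @ y)

# sanity check: with m = n the Krylov space is the whole space, so exact
rng = np.random.default_rng(0)
B = rng.standard_normal((8, 8))
A = 0.5 * (B + B.T)
v = rng.standard_normal(8)
evals, Q = np.linalg.eigh(A)
exact = Q @ (np.exp(evals) * (Q.T @ v))
approx = lanczos_f_times_v(A, v, np.exp, m=8)
```

For the time stepping described above, f would be λ → exp(−λΔt) for diffusion or λ → cos(√λ·Δt) for waves, matching the exponential and sine/cosine functions of the stiffness matrix.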
Directory of Open Access Journals (Sweden)
Rubén Iván Bolaños
2015-06-01
Full Text Available This paper considers a multi-objective version of the Multiple Traveling Salesman Problem (MOmTSP). In particular, two objectives are considered: the minimization of the total traveled distance and the balance of the working times of the traveling salesmen. The problem is formulated as an integer multi-objective optimization model. A non-dominated sorting genetic algorithm (NSGA-II) is proposed to solve the MOmTSP. The solution scheme allows one to find a set of ordered solutions in Pareto fronts by considering the concept of dominance. Tests on real-world instances and instances adapted from the literature show the effectiveness of the proposed algorithm.
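The dominance concept at the heart of NSGA-II can be illustrated with a short sketch (both objectives minimized; a generic illustration, not the paper's implementation):

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Partition objective vectors into successive Pareto fronts (by index)."""
    fronts, remaining = [], set(range(len(points)))
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)}
        fronts.append(sorted(front))
        remaining -= front
    return fronts

# hypothetical (total distance, workload imbalance) values for five tours
objs = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
fronts = non_dominated_sort(objs)
```

NSGA-II then ranks individuals by front membership and breaks ties within a front by crowding distance to preserve diversity along the Pareto front.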
Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris
2015-04-01
Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modelling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. To address these issues, we propose an effective and efficient modelling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step, and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial
Beison, Ashley; Rademacher, David J.
2017-01-01
Background and aims Smartphones are ubiquitous. As smartphones increased in popularity, researchers realized that people were becoming dependent on their smartphones. The purpose here was to provide a better understanding of the factors related to problematic smartphone use (PSPU). Methods The participants were 100 undergraduates (25 males, 75 females) whose ages ranged from 18 to 23 (mean age = 20 years). The participants completed questionnaires to assess gender, ethnicity, year in college, father’s education level, mother’s education level, family income, age, family history of alcoholism, and PSPU. The Family Tree Questionnaire assessed family history of alcoholism. The Mobile Phone Problem Use Scale (MPPUS) and the Adapted Cell Phone Addiction Test (ACPAT) were used to determine the degree of PSPU. Whereas the MPPUS measures tolerance, escape from other problems, withdrawal, craving, and negative life consequences, the ACPAT measures preoccupation (salience), excessive use, neglecting work, anticipation, lack of control, and neglecting social life. Results Family history of alcoholism and father’s education level together explained 26% of the variance in the MPPUS scores and 25% of the variance in the ACPAT scores. The inclusion of mother’s education level, ethnicity, family income, age, year in college, and gender did not significantly increase the proportion of variance explained for either MPPUS or ACPAT scores. Discussion and conclusions Family history of alcoholism and father’s education level are good predictors of PSPU. As 74%–75% of the variance in PSPU scale scores was not explained, future studies should aim to explain this variance. PMID:28316252
Multiple Depots Vehicle Routing Problem in the Context of Total Urban Traffic Equilibrium
Chen, Dongxu; Yang, Zhongzhen
2017-01-01
A multidepot VRP is solved in the context of total urban traffic equilibrium. Under the total traffic equilibrium, the multidepot VRP is changed to GDAP (the problem of Grouping Customers + Estimating OD Traffic + Assigning traffic) and bilevel programming is used to model the problem, where the upper model determines the customers that each truck visits and adds the trucks’ trips to the initial OD (Origin/Destination) trips, and the lower model assigns the OD trips to road network. Feedback ...
Janine Ruegg; Walter K. Dodds; Melinda D. Daniels; Ken R. Sheehan; Christina L. Baker; William B. Bowden; Kaitlin J. Farrell; Michael B. Flinn; Tamara K. Harms; Jeremy B. Jones; Lauren E. Koenig; John S. Kominoski; William H. McDowell; Samuel P. Parker; Amy D. Rosemond; Matt T. Trentman; Matt Whiles; Wilfred M. Wollheim
2016-01-01
Context: Spatial scaling of ecological processes is facilitated by quantifying underlying habitat attributes. Physical and ecological patterns are often measured at disparate spatial scales, limiting our ability to quantify ecological processes at broader spatial scales using physical attributes.
Directory of Open Access Journals (Sweden)
Ángel Vázquez Alonso
2005-05-01
Full Text Available The scarce attention to assessment and evaluation in science education research has been especially harmful for Science-Technology-Society (STS) education, due to the dialectic, tentative, value-laden, and controversial nature of most STS topics. To overcome the methodological pitfalls of the STS assessment instruments used in the past, an empirically developed instrument (VOSTS, Views on Science-Technology-Society) has been suggested. Some methodological proposals, namely multiple response models and the computing of a global attitudinal index, were suggested to improve item implementation. The final step of these methodological proposals requires the categorization of STS statements. This paper describes the process of categorization through a scaling procedure ruled by a panel of experts, acting as judges, according to the body of knowledge from the history, epistemology, and sociology of science. The statement categorization allows for a sound foundation of STS items, which is useful in educational assessment and science education research, and may also increase teachers' self-confidence in the development of the STS curriculum for science classrooms.
Directory of Open Access Journals (Sweden)
Mahnaz Saeidi
2012-11-01
Full Text Available This study aimed to translate the MIDAS questionnaire from English into Persian and determine its content validity and reliability. MIDAS was translated and validated on a sample (N = 110) of the Iranian adult population. The participants were both male and female, with an age range of 17-57. They were at different educational levels and from different ethnic groups in Iran. A translation team, consisting of five members, bilingual in English and Persian and familiar with multiple intelligences (MI) theory and practice, was involved in translating and determining content validity, which included the processes of forward translation, back-translation, review, final proof-reading, and testing. The statistical analyses of inter-scale correlation were performed using Cronbach's alpha coefficient. In an intra-class correlation, Cronbach's alpha was high for all of the questions. Translation and content validation of the MIDAS questionnaire was completed through a proper process, leading to high reliability and validity. The results suggest that the Persian MIDAS (P-MIDAS) could serve as a valid and reliable instrument for measuring Iranian adults' MIs.
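Cronbach's alpha used above is computed from the item variances and the variance of the total score, α = k/(k−1) · (1 − Σσᵢ² / σ²_total). A minimal numpy sketch:

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: (n_respondents, k_items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the total score
    return k / (k - 1) * (1.0 - item_var / total_var)

# perfectly consistent items -> alpha = 1
perfect = np.arange(10)[:, None] * np.ones((1, 3))
alpha_perfect = cronbach_alpha(perfect)
```

High alpha indicates that items covary strongly relative to the total-score variance, i.e. the scale is internally consistent.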
Energy Technology Data Exchange (ETDEWEB)
Ibrahim, Khaled Z. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Epifanovsky, Evgeny [Q-Chem, Inc., Pleasanton, CA (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Krylov, Anna I. [Univ. of Southern California, Los Angeles, CA (United States). Dept. of Chemistry
2016-07-26
Coupled-cluster methods provide highly accurate models of molecular structure by explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix-matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the best optimized shared memory implementation. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 & XC40, BlueGene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load imbalance. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
Jesussek, Mathias; Ellermann, Katrin
2014-12-01
Reliability and dependability in complex mechanical systems can be improved by fault detection and isolation (FDI) methods. These techniques are key elements for maintenance on demand, which could decrease service cost and time significantly. This paper addresses FDI for a railway vehicle: the mechanical model is described as a multibody system, which is excited randomly due to track irregularities. Various parameters, like masses and spring and damper characteristics, influence the dynamics of the vehicle. Often, the exact values of the parameters are unknown and might even change over time. Some of these changes are considered critical with respect to the operation of the system and they require immediate maintenance. The aim of this work is to detect faults in the suspension system of the vehicle. A Kalman filter is used in order to estimate the states. To detect and isolate faults, the detection error is minimised with multiple Kalman filters. A full-scale train model with nonlinear wheel/rail contact serves as an example for the described techniques. Numerical results for different test cases are presented. The analysis shows that for the given system it is possible not only to detect a failure of the suspension system from the system's dynamic response, but also to distinguish clearly between different possible causes for the changes in the dynamical behaviour.
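The multiple-Kalman-filter idea — run one filter per fault hypothesis and select the hypothesis whose innovations fit best — can be sketched on a scalar system. This is a toy illustration under assumed dynamics, not the railway multibody model:

```python
import numpy as np

def innovation_score(y, a, q=0.1, r=0.1):
    """Scalar Kalman filter assuming x' = a*x + w, z = x + v.

    Returns the accumulated Gaussian negative log-likelihood of the
    innovations (up to constants); small = good model fit.
    """
    x, p, score = 0.0, 1.0, 0.0
    for z in y:
        x, p = a * x, a * a * p + q          # predict
        s, nu = p + r, z - x                 # innovation variance / innovation
        score += nu * nu / s + np.log(s)
        k = p / s                            # Kalman gain
        x, p = x + k * nu, (1.0 - k) * p     # update
    return score

rng = np.random.default_rng(0)
x, ys = 0.0, []
for _ in range(5000):                        # simulate the "faulty" dynamics a = 0.5
    x = 0.5 * x + rng.normal(scale=0.1 ** 0.5)
    ys.append(x + rng.normal(scale=0.1 ** 0.5))
scores = {a: innovation_score(ys, a) for a in (0.9, 0.5, 0.1)}
diagnosis = min(scores, key=scores.get)      # hypothesis with best-fitting innovations
```

Each filter in the bank encodes one parameter hypothesis (e.g. nominal vs. degraded damper); the matched filter produces the statistically whitest innovations, which isolates the fault.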
Buried interfaces - A systematic study to characterize an adhesive interface at multiple scales
Haubrich, Jan; Löbbecke, Miriam; Watermeyer, Philipp; Wilde, Fabian; Requena, Guillermo; da Silva, Julio
2018-03-01
A comparative study of a model adhesive interface formed between laser-pretreated Ti15-3-3-3 and the thermoplastic polymer PEEK has been carried out in order to characterize the interface's structural details and the infiltration of the surface nano-oxide by the polymer at multiple scales. Destructive approaches such as scanning and transmission electron microscopy of microsections prepared by focused ion beam, and non-destructive imaging approaches including laser scanning and scanning electron microscopy of pretreated surfaces as well as synchrotron computed tomography techniques (micro- and ptychographic tomographies) were employed for resolving the large, μm-sized melt-structures and the fine nano-oxide substructure within the buried interface. Scanning electron microscopy showed that the fine, open-porous nano-oxide homogeneously covers the larger macrostructure features which in turn cover the joint surface. The open-porous nano-oxide forming the interface itself appears to be fully infiltrated and wetted by the polymer. No voids or even channels were detected down to the respective resolution limits of scanning and transmission electron microscopy.
Examining the Psychometric Quality of Multiple-Choice Assessment Items using Mokken Scale Analysis.
Wind, Stefanie A
The concept of invariant measurement is typically associated with Rasch measurement theory (Engelhard, 2013). Concerned with the appropriateness of the parametric transformation upon which the Rasch model is based, Mokken (1971) proposed a nonparametric procedure for evaluating the quality of social science measurement that is theoretically and empirically related to the Rasch model. Mokken's nonparametric procedure can be used to evaluate the quality of dichotomous and polytomous items in terms of the requirements for invariant measurement. Despite these potential benefits, the use of Mokken scaling to examine the properties of multiple-choice (MC) items in education has not yet been fully explored. A nonparametric approach to evaluating MC items is promising in that this approach facilitates the evaluation of assessments in terms of invariant measurement without imposing potentially inappropriate transformations. Using Rasch-based indices of measurement quality as a frame of reference, data from an eighth-grade physical science assessment are used to illustrate and explore Mokken-based techniques for evaluating the quality of MC items. Implications for research and practice are discussed.
Solving the problem of imaging resolution: stochastic multi-scale image fusion
Karsanina, Marina; Mallants, Dirk; Gilyazetdinova, Dina; Gerke, Kiril
2016-04-01
Structural features of porous materials define the majority of its physical properties, including water infiltration and redistribution, multi-phase flow (e.g. simultaneous water/air flow, gas exchange between biologically active soil root zone and atmosphere, etc.) and solute transport. To characterize soil and rock microstructure X-ray microtomography is extremely useful. However, as any other imaging technique, this one also has a significant drawback - a trade-off between sample size and resolution. The latter is a significant problem for multi-scale complex structures, especially such as soils and carbonates. Other imaging techniques, for example, SEM/FIB-SEM or X-ray macrotomography can be helpful in obtaining higher resolution or wider field of view. The ultimate goal is to create a single dataset containing information from all scales or to characterize such multi-scale structure. In this contribution we demonstrate a general solution for merging multiscale categorical spatial data into a single dataset using stochastic reconstructions with rescaled correlation functions. The versatility of the method is demonstrated by merging three images representing macro, micro and nanoscale spatial information on porous media structure. Images obtained by X-ray microtomography and scanning electron microscopy were fused into a single image with predefined resolution. The methodology is sufficiently generic for implementation of other stochastic reconstruction techniques, any number of scales, any number of material phases, and any number of images for a given scale. The methodology can be further used to assess effective properties of fused porous media images or to compress voluminous spatial datasets for efficient data storage. Potential practical applications of this method are abundant in soil science, hydrology and petroleum engineering, as well as other geosciences. This work was partially supported by RSF grant 14-17-00658 (X-ray microtomography study of shale
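The correlation functions behind such stochastic reconstructions can be illustrated with the simplest one, the two-point probability function S₂(r). A small numpy sketch, with periodic boundaries implied by `np.roll`:

```python
import numpy as np

def two_point_probability(img, max_r):
    """Directional S2(r): probability that two pixels a horizontal distance r
    apart both belong to the phase labeled 1 (periodic boundaries)."""
    img = np.asarray(img, dtype=float)
    return np.array([(img * np.roll(img, r, axis=1)).mean()
                     for r in range(max_r + 1)])

# checkerboard test image: phase fraction 0.5, perfectly anticorrelated at r = 1
board = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
s2 = two_point_probability(board, max_r=2)
```

S₂(0) equals the phase fraction; matching rescaled S₂ (and higher-order functions) across images of different resolution is the basis of the fusion approach described above.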
DEFF Research Database (Denmark)
Odgaard, Peter Fogh; Wickerhauser, M.V.
2007-01-01
In the perspective of optimizing the control and operation of large-scale process plants, it is important to detect and to locate oscillations in the plants. This paper presents a scheme for detecting and localizing multiple oscillations in multiple measurements from such a large-scale power plant. The scheme is based on a Karhunen-Loève analysis of the data from the plant. The proposed scheme is subsequently tested on two sets of data: a set of synthetic data and a set of data from a coal-fired power plant. In both cases the scheme detects the beginning of the oscillation within only a few samples. In addition, the oscillation localization has also shown its potential by localizing the oscillations in both data sets.
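A Karhunen-Loève (PCA/SVD) analysis of multichannel measurements exposes a shared oscillation as a dominant mode. This toy sketch on synthetic data (not the power-plant scheme itself) recovers an injected 5 Hz oscillation and the channels carrying it:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(1024) / 100.0                       # 100 Hz sampling
osc = np.sin(2 * np.pi * 5.0 * t)                 # hidden 5 Hz oscillation
# four measurement channels: two carry the oscillation, two are pure noise
X = np.stack([0.8 * osc, 0.3 * osc,
              np.zeros_like(t), np.zeros_like(t)])
X += 0.2 * rng.standard_normal(X.shape)
X -= X.mean(axis=1, keepdims=True)

U, s, Vt = np.linalg.svd(X, full_matrices=False)  # Karhunen-Loeve decomposition
mode = Vt[0]                                      # dominant temporal mode
freqs = np.fft.rfftfreq(t.size, d=0.01)
peak_hz = freqs[np.abs(np.fft.rfft(mode)).argmax()]  # detected frequency
loadings = np.abs(U[:, 0])                        # localization: channel weights
```

The spectrum of the dominant temporal mode detects the oscillation, while the corresponding channel loadings localize which measurements carry it.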
Tang, Fengyan; Jang, Heejung; Lingler, Jennifer; Tamres, Lisa K.; Erlen, Judith A.
2015-01-01
Caring for an older adult with memory loss is stressful. Caregiver stress could produce negative outcomes such as depression. Previous research is limited in examining multiple intermediate pathways from caregiver stress to depressive symptoms. This study addresses this limitation by examining the role of self-efficacy, social support, and problem-solving in mediating the relationships between caregiver stressors and depressive symptoms. Using a sample of 91 family caregivers, we tested simul...
Directory of Open Access Journals (Sweden)
María-Teresa Sebastiá-Frasquet
2014-03-01
Full Text Available The policies that define the use and management of wetlands in Spain have undergone tremendous changes in recent decades. During the period of 1950–1980, Land Reform Plans promoted filling and draining of these areas for agricultural use. In 1986, with the incorporation of Spain into the European Union (EU), there was a sudden change of direction in these policies, which, thereafter, pursued restoring and protecting these ecosystems. This change, combined with increasing urban development and infrastructure pressures (e.g., roads, golf courses, etc.), creates a conflict of uses which complicates the management of these ecosystems by local governments. This study analyzes the effectiveness of policies and management tools of important coastal wetlands at the local scale in the Valencian Community (Western Mediterranean Sea) using a strengths-weaknesses-opportunities-threats (SWOT) methodology. A supra-municipal model of environmental planning is proposed to enable consistent management at a regional scale. This model enhances local governments' effectiveness and it can be applied in other areas with similar problems.
Directory of Open Access Journals (Sweden)
Keisuke Fujisaki
2013-11-01
Full Text Available To connect different scale models in the multi-scale problem of microwave use, equivalent material constants were researched numerically by a three-dimensional electromagnetic field, taking into account eddy current and displacement current. A volume averaged method and a standing wave method were used to introduce the equivalent material constants; water particles and aluminum particles are used as composite materials. Consumed electrical power is used for the evaluation. Water particles have the same equivalent material constants for both methods; the same electrical power is obtained for both the precise model (micro-model) and the homogeneous model (macro-model). However, aluminum particles have dissimilar equivalent material constants for both methods; different electric power is obtained for both models. The varying electromagnetic phenomena are derived from the expression of eddy current. For small electrical conductivity such as water, the macro-current which flows in the macro-model and the micro-current which flows in the micro-model express the same electromagnetic phenomena. However, for large electrical conductivity such as aluminum, the macro-current and micro-current express different electromagnetic phenomena. The eddy current which is observed in the micro-model is not expressed by the macro-model. Therefore, the equivalent material constant derived from the volume averaged method and the standing wave method is applicable to water with a small electrical conductivity, although not applicable to aluminum with a large electrical conductivity.
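As an illustration of the volume-averaged idea described in this record (a toy sketch of our own; the function name and numeric values are invented, not the paper's code), a simple volume-weighted mixing rule for an equivalent material constant of a two-phase composite looks like:

```python
def volume_averaged(constant_particle, constant_host, fill_fraction):
    """Volume-weighted mixing rule for an equivalent material constant
    of a two-phase composite (illustrative sketch only)."""
    if not 0.0 <= fill_fraction <= 1.0:
        raise ValueError("fill_fraction must be in [0, 1]")
    return fill_fraction * constant_particle + (1.0 - fill_fraction) * constant_host

# Equivalent relative permittivity of a 30% water / 70% host mixture
# (made-up values: water ~80, host ~1).
eps_eff = volume_averaged(80.0, 1.0, 0.3)  # 24.7
```

As the abstract notes, such a homogenized constant is only meaningful when the micro-model and macro-model express the same physics; for highly conductive inclusions, eddy-current effects invalidate simple averaging of this kind.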
Ovington, Linda A.; Saliba, Anthony J.; Goldring, Jeremy
2016-01-01
This article reports the development of a brief self-report measure of dispositional insight problem solving, the Dispositional Insight Scale (DIS). From a representative Australian database, 1,069 adults (536 women and 533 men) completed an online questionnaire. An exploratory and confirmatory factor analysis revealed a 5-item scale, with all…
Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science
Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.
2008-12-01
The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described and examples of state-of-the-art calculations in space science, in particular, the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies, where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory-bandwidth limited with a low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered carry over to other multi-core and accelerated (e.g., via GPU) platforms, and we modified VPIC with flexibility in mind. These will be summarized and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.
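The "linear weak scaling" claim above can be made concrete with a small sketch (the timings below are hypothetical, not Roadrunner measurements): under weak scaling the problem size per core is fixed, so ideal runtime stays flat as cores are added, and efficiency is the ratio of baseline runtime to scaled runtime.

```python
def weak_scaling_efficiency(runtime_base, runtime_scaled):
    """Weak scaling: per-core problem size is fixed, so the ideal
    runtime is constant; efficiency < 1 signals parallel overhead."""
    return runtime_base / runtime_scaled

# Hypothetical per-step runtimes (seconds) as core count grows.
timings = {1: 10.0, 1024: 10.2, 12240: 10.5}
efficiencies = {cores: weak_scaling_efficiency(timings[1], t)
                for cores, t in timings.items()}
# Near-flat runtimes => efficiencies close to 1.0, i.e. near-linear weak scaling.
```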
Energy Technology Data Exchange (ETDEWEB)
Leonard, P.J.; Lai, H.C.; Eastham, J.F.; Al-Akayshee, Q.H. [Univ. of Bath (United Kingdom)
1996-05-01
This paper describes an efficient scheme for incorporating multiple wire wound coils into 3D finite element models. The scheme is based on the magnetic scalar representation with an additional basis for each coil. There are no restrictions on the topology of coils with respect to ferromagnetic and conductor regions. Reduced scalar regions and cuts are automatically generated.
Knorth, Erik J.; Knot-Dickscheit, Jana; Thoburn, June
2015-01-01
Recently, there has been growing interest amongst researchers, practitioners and policy-makers in approaches to understanding and ways of helping parents, children and the communities in which they live to respond to ‘families experiencing multiple problems’ (FEMPs). There is a strong need for
International Nuclear Information System (INIS)
Rogers, P.M.; Stone, R.; Lu, A.H.
1985-01-01
The Basalt Waste Isolation Project is preparing plans for tests and has begun work on some tests that will provide the data necessary for the hydrogeologic characterization of a site located on a United States government reservation at Hanford, Washington. This site is being considered for the Nation's first geologic repository of high-level nuclear waste. Hydrogeologic characterization of this site requires several lines of investigation which include: surface-based small-scale tests, testing performed at depth from an exploratory shaft, geochemistry investigations, regional studies, and site-specific investigations using large-scale, multiple-well hydraulic tests. The large-scale multiple-well tests are planned for several locations in and around the site. These tests are being designed to provide estimates of hydraulic parameter values of the geologic media, chemical properties of the groundwater, and hydrogeologic boundary conditions at a scale appropriate for evaluating repository performance with respect to potential radionuclide transport.
Peñaloza López, Yolanda Rebeca; Orozco Peña, Xóchitl Daisy; Pérez Ruiz, Santiago Jesús
2018-04-03
To evaluate central auditory processing disorders (CAPD) in patients with multiple sclerosis, emphasizing auditory laterality, by applying psychoacoustic tests, and to identify their relationship with Expanded Disability Status Scale (EDSS) functions. Depression scales (HADS), the EDSS, and 9 psychoacoustic tests to study CAPD were applied to 26 individuals with multiple sclerosis and 26 controls. Correlation tests were performed between the EDSS and the psychoacoustic tests. Seven of the 9 psychoacoustic tests differed significantly from controls (P<.05) in the right or left ear (14/19 explorations). In dichotic digits there was a left-ear advantage, in contrast to the usual right-ear predominance. There was a significant correlation between five psychoacoustic tests and specific EDSS functions. The left-ear advantage detected, interpreted as an expression of deficient influences of the corpus callosum and of attention in multiple sclerosis, should be investigated further. Copyright © 2018 Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. Publicado por Elsevier España, S.L.U. All rights reserved.
van Jaarsveld, A.S; Biggs, R; Scholes, R.J; Bohensky, E; Reyers, B; Lynam, T; Musvoto, C; Fabricius, C
2005-01-01
The Southern African Millennium Ecosystem Assessment (SAfMA) evaluated the relationships between ecosystem services and human well-being at multiple scales, ranging from local through to sub-continental. Trends in ecosystem services (fresh water, food, fuel-wood, cultural and biodiversity) over the period 1990-2000 were mixed across scales. Freshwater resources appear strained across the continent with large numbers of people not securing adequate supplies, especially of good quality water. T...
International Nuclear Information System (INIS)
Perlt, H.
1980-01-01
Scale-breaking quark and gluon fragmentation functions obtained by numerically solving Altarelli-Parisi type equations are presented. Analytical parametrizations are given for the fragmentation of u and d quarks into pions. The calculated Q²-dependent fragmentation functions are compared with experimental data. With these scale-breaking fragmentation functions the average charged multiplicity is calculated in e⁺e⁻ annihilation, which rises with energy more than logarithmically and is in good agreement with experiment. (author)
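For orientation (our notation, not the paper's): "Altarelli-Parisi type equations" for a fragmentation function D(x, Q²) have, schematically, the standard non-singlet DGLAP form

```latex
\frac{\partial D(x,Q^{2})}{\partial \ln Q^{2}}
  \;=\; \frac{\alpha_{s}(Q^{2})}{2\pi}
        \int_{x}^{1} \frac{dz}{z}\, P(z)\, D\!\left(\frac{x}{z},\,Q^{2}\right),
```

where P(z) is the relevant splitting function. The Q² dependence generated by this evolution is precisely the "scale breaking" the abstract refers to.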
Rutland, J Brian; Sheets, Tilman; Young, Tony
2007-12-01
This exploratory study examines a subset of mobile phone use, the compulsive use of short message service (SMS) text messaging. A measure of SMS use, the SMS Problem Use Diagnostic Questionnaire (SMS-PUDQ), was developed and found to possess acceptable reliability and validity when compared to other measures such as self-reports of time spent using SMS and scores on a survey of problem mobile phone use. Implications for the field of addiction research, technological and behavioral addictions in particular, are discussed, and directions for future research are suggested.
Solving inverse problems through a smooth formulation of multiple-point geostatistics
DEFF Research Database (Denmark)
Melnikova, Yulia
…corresponding inverse problems. However, noise in data, non-linear relationships and sparse observations impede creation of realistic reservoir models. Including complex a priori information on reservoir parameters facilitates the process of obtaining acceptable solutions. Such a priori knowledge may be inferred, for instance, from a conceptual geological model termed a training image. The main motivation for this study was the challenge posed by history matching, an inverse problem aimed at estimating rock properties from production data. We addressed two main difficulties of the history matching problem… strategies including both theoretical motivation and practical aspects of implementation. Finally, it is complemented by six research papers submitted, reviewed and/or published in the period 2010–2013.
Multiple solutions of a free-boundary FRC equilibrium problem in a metal cylinder
International Nuclear Information System (INIS)
Spencer, R.L.; Hewett, D.W.
1981-01-01
A new approach to the computation of FRC equilibria that avoids previously encountered difficulties is presented. For arbitrary pressure profiles it is computationally expensive, but for one special pressure profile the problem is simple enough to require only minutes of Cray time; it is this problem that we have solved. We solve the Grad-Shafranov equation, Δ*ψ = r²p'(ψ), in an infinitely long flux-conserving cylinder of radius a with the boundary conditions that ψ(a,z) = -ψ_w and that ∂ψ/∂z → 0 as |z| → ∞. The pressure profile is p'(ψ) = cH(ψ), where c is a constant and H(x) is the Heaviside function. We have found four solutions to this problem: a purely vacuum state, two z-independent plasma solutions, and an r-z-dependent plasma state.
Multiple Depots Vehicle Routing Problem in the Context of Total Urban Traffic Equilibrium
Directory of Open Access Journals (Sweden)
Dongxu Chen
2017-01-01
Full Text Available A multidepot VRP is solved in the context of total urban traffic equilibrium. Under total traffic equilibrium, the multidepot VRP is changed to GDAP (the problem of Grouping Customers + Estimating OD Traffic + Assigning Traffic), and bilevel programming is used to model the problem, where the upper model determines the customers that each truck visits and adds the trucks' trips to the initial OD (Origin/Destination) trips, and the lower model assigns the OD trips to the road network. Feedback between the upper model and lower model is iterated through OD trips; thus total traffic equilibrium can be simulated.
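The upper/lower feedback loop described above can be sketched as a fixed-point iteration (the functional forms and numbers below are entirely hypothetical, not the paper's model): the upper level routes truck trips given current congestion, the lower level assigns total demand and returns a congestion level, and the two are iterated until the feedback stabilizes.

```python
def upper_level(congestion):
    """Toy upper model: more congestion -> fewer truck trips routed."""
    return 100.0 / (1.0 + congestion)

def lower_level(base_demand, truck_trips):
    """Toy lower model: assign total OD demand, return a congestion factor."""
    return (base_demand + truck_trips) / 1000.0

def iterate_to_equilibrium(base_demand=500.0, tol=1e-9, max_iter=100):
    """Iterate the upper/lower feedback until congestion stops changing."""
    congestion = 0.0
    for _ in range(max_iter):
        trucks = upper_level(congestion)
        new_congestion = lower_level(base_demand, trucks)
        if abs(new_congestion - congestion) < tol:
            break
        congestion = new_congestion
    return trucks, congestion
```

Because the toy feedback map is a contraction, the loop converges in a handful of iterations; real bilevel traffic-assignment models need more careful convergence schemes.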
The Time-Dependent Multiple-Vehicle Prize-Collecting Arc Routing Problem
DEFF Research Database (Denmark)
Black, Daniel; Eglese, Richard; Wøhlk, Sanne
2015-01-01
In this paper, we introduce a multi-vehicle version of the Time-Dependent Prize-Collecting Arc Routing Problem (TD-MPARP). It is inspired by a situation where a transport manager has to choose between a number of full truck load pick-ups and deliveries to be performed by a fleet of vehicles. Real-life traffic situations where the travel times change with the time of day are taken into account. Two metaheuristic algorithms, one based on Variable Neighborhood Search and one based on Tabu Search, are proposed and tested for a set of benchmark problems, generated from real road networks and travel time…
Siragusa, Enrico; Haiminen, Niina; Utro, Filippo; Parida, Laxmi
2017-10-09
Computer simulations can be used to study population genetic methods, models and parameters, as well as to predict potential outcomes. For example, in plant populations, predicting the outcome of breeding operations can be studied using simulations. In-silico construction of populations with pre-specified characteristics is an important task in breeding optimization and other population genetic studies. We present two linear time Simulation using Best-fit Algorithms (SimBA) for two classes of problems where each co-fits two distributions: SimBA-LD fits linkage disequilibrium and minimum allele frequency distributions, while SimBA-hap fits founder-haplotype and polyploid allele dosage distributions. An incremental gap-filling version of the previously introduced SimBA-LD is here demonstrated to accurately fit the target distributions, allowing efficient large-scale simulations. SimBA-hap accuracy and efficiency are demonstrated by simulating tetraploid populations with varying numbers of founder haplotypes; we evaluate both a linear time greedy algorithm and an optimal solution based on mixed-integer programming. SimBA is available on http://researcher.watson.ibm.com/project/5669.
International Nuclear Information System (INIS)
Malcolm, Andrew A.; Liu, Tong; Ng, Ivan Kee Beng; Teng, Wei Yuen; Yap, Tsi Tung; Wan, Siew Ping; Kong, Chun Jeng
2013-01-01
X-ray Computed Tomography (CT) allows visualisation of the physical structures in the interior of an object without physically opening or cutting it. This technology supports a wide range of applications in the non-destructive testing, failure analysis or performance evaluation of industrial products and components. Of the numerous factors that influence the performance characteristics of an X-ray CT system the energy level in the X-ray spectrum to be used is one of the most significant. The ability of the X-ray beam to penetrate a given thickness of a specific material is directly related to the maximum available energy level in the beam. Higher energy levels allow penetration of thicker components made of more dense materials. In response to local industry demand and in support of on-going research activity in the area of 3D X-ray imaging for industrial inspection the Singapore Institute of Manufacturing Technology (SIMTech) engaged in the design, development and integration of large scale multiple source X-ray computed tomography system based on X-ray sources operating at higher energies than previously available in the Institute. The system consists of a large area direct digital X-ray detector (410 x 410 mm), a multiple-axis manipulator system, a 225 kV open tube microfocus X-ray source and a 450 kV closed tube millifocus X-ray source. The 225 kV X-ray source can be operated in either transmission or reflection mode. The body of the 6-axis manipulator system is fabricated from heavy-duty steel onto which high precision linear and rotary motors have been mounted in order to achieve high accuracy, stability and repeatability. A source-detector distance of up to 2.5 m can be achieved. The system is controlled by a proprietary X-ray CT operating system developed by SIMTech. The system currently can accommodate samples up to 0.5 x 0.5 x 0.5 m in size with weight up to 50 kg. These specifications will be increased to 1.0 x 1.0 x 1.0 m and 100 kg in future
The Mercury Problem in Artisanal and Small-Scale Gold Mining.
Esdaile, Louisa J; Chalker, Justin M
2018-05-11
Mercury-dependent artisanal and small-scale gold mining (ASGM) is the largest source of mercury pollution on Earth. In this practice, elemental mercury is used to extract gold from ore as an amalgam. The amalgam is typically isolated by hand and then heated-often with a torch or over a stove-to distill the mercury and isolate the gold. Mercury release from tailings and vaporized mercury exceed 1000 tonnes each year from ASGM. The health effects on the miners are dire, with inhaled mercury leading to neurological damage and other health issues. The communities near these mines are also affected due to mercury contamination of water and soil and subsequent accumulation in food staples, such as fish-a major source of dietary protein in many ASGM regions. The risks to children are also substantial, with mercury emissions from ASGM resulting in both physical and mental disabilities and compromised development. Between 10 and 19 million people use mercury to mine for gold in more than 70 countries, making mercury pollution from ASGM a global issue. With the Minamata Convention on Mercury entering force this year, there is political motivation to help overcome the problem of mercury in ASGM. In this effort, chemists can play a central role. Here, the problem of mercury in ASGM is reviewed with a discussion on how the chemistry community can contribute solutions. Introducing portable and low-cost mercury sensors, inexpensive and scalable remediation technologies, novel methods to prevent mercury uptake in fish and food crops, and efficient and easy-to-use mercury-free mining techniques are all ways in which the chemistry community can help. To meet these challenges, it is critical that new technologies or techniques are low-cost and adaptable to the remote and under-resourced areas in which ASGM is most common. The problem of mercury pollution in ASGM is inherently a chemistry problem. We therefore encourage the chemistry community to consider and address this issue that
Time evolution and use of multiple times in the N-body problem
International Nuclear Information System (INIS)
McGuire, J.H.; Godunov, A.L.
2003-01-01
Under certain conditions it is possible to describe time evolution using different times for different particles. Use of multiple times is optional in the independent particle approximation, where interparticle interactions are removed, and the N-particle evolution operator factors into N single-particle evolution operators. In this limit one may use either a single time, with a single energy-time Fourier transform, or N different times with a different energy-time transform for each particle. The use of different times for different particles is fully justified when coherence between single-particle amplitudes is lost, e.g., if relatively strong randomly fluctuating residual fields influence each particle independently. However, when spatial correlation is present the use of multiple times is not feasible, even when the evolution of the particles is uncorrelated in time. Some calculations in simple atomic systems with and without spatial and temporal correlation between different electrons are included
Directory of Open Access Journals (Sweden)
Qiong Liu
2012-01-01
Full Text Available We study the following fourth-order elliptic equations: Δ²u + cΔu = f(x,u), x ∈ Ω; u = Δu = 0, x ∈ ∂Ω, where Ω ⊂ ℝᴺ is a bounded domain with smooth boundary ∂Ω and f(x,u) is asymptotically linear with respect to u at infinity. Using an equivalent version of Cerami's condition and the symmetric mountain pass lemma, we obtain the existence of multiple solutions for the equations.
International Nuclear Information System (INIS)
Nishio, Masamichi; Myojin, Miyako; Nishiyama, Noriaki; Taguchi, Hiroshi; Takagi, Masaru; Tanaka, Katsuhiko
2003-01-01
A total of 2144 head and neck cancers were treated by radiotherapy at the National Sapporo Hospital between 1974 and 2001. Of these, 313 (14.6%) were found to have other primary cancers besides head and neck cancer, of which double cancers accounted for 79% and triple or more cancers for 21%. Frequency according to the primary site of the first head and neck cancer was: oral cavity, 107/603 (17.7%); epipharynx, 7/117 (6.0%); oropharynx, 63/257 (24.5%); hypopharynx, 65/200 (32.5%); larynx, 114/558 (20.4%); and nose/paranasal sinus, 4.9%. Esophageal cancer, head and neck cancer, lung cancer and gastric cancer were very frequent as other primary sites combined with the head and neck. The first onset region was the head and neck in 233 out of 313 cases with multiple primary cancers. The 5-year and 10-year survival rates from the onset of the head and neck cancer were 52% and 30%, and the 5-year and 10-year cause-specific survival rates were 82% and 78%, respectively. The treatment possibilities in multiple primary cancers tend to be limited because the treatment areas sometimes overlap. New approaches to the treatment of multiple primary cancers should be considered in the future. (author)
A.A.M. Crijnen (Alfons); T.M. Achenbach (Thomas); F.C. Verhulst (Frank)
1999-01-01
OBJECTIVE: The purpose of this study was to compare syndromes of parent-reported problems for children in 12 cultures. METHOD: Child Behavior Checklists were analyzed for 13,697 children and adolescents, ages 6 through 17 years, from general population
The finite horizon economic lot sizing problem in job shops : the multiple cycle approach
Ouenniche, J.; Bertrand, J.W.M.
2001-01-01
This paper addresses the multi-product, finite horizon, static demand, sequencing, lot sizing and scheduling problem in a job shop environment where the planning horizon length is finite and fixed by management. The objective pursued is to minimize the sum of setup costs, and work-in-process and
The Role of Multiple Representations in the Understanding of Ideal Gas Problems
Madden, Sean P.; Jones, Loretta L.; Rahm, Jrene
2011-01-01
This study examined the representational competence of students as they solved problems dealing with the temperature-pressure relationship for ideal gases. Seven students enrolled in a first-semester general chemistry course and two advanced undergraduate science majors participated in the study. The written work and transcripts from videotaped…
Engelbrecht, Jeffrey C.
2003-01-01
Delivering content to distant users located in dispersed networks, separated by firewalls and different web domains requires extensive customization and integration. This article outlines some of the problems of implementing the Sharable Content Object Reference Model (SCORM) in the Marine Corps' Distance Learning System (MarineNet) and extends…
Vendlinski, Matthew K.; Lemery-Chalfant, Kathryn; Essex, Marilyn J.; Goldsmith, H. Hill
2011-01-01
Background: Identifying how genetic risk interacts with experience to predict psychopathology is an important step toward understanding the etiology of mental health problems. Few studies have examined genetic risk by experience interaction (GxE) in the development of childhood psychopathology. Methods: We used both co-twin and parent mental…
Reed, S.; Cleveland, C. C.; Davidson, E. A.; Townsend, A. R.
2013-12-01
During leaf senescence, nutrient-rich compounds are transported to other parts of the plant and this 'resorption' recycles nutrients for future growth, reducing losses of potentially limiting nutrients. Variations in leaf chemistry resulting from nutrient resorption also directly affect litter quality, in turn regulating decomposition rates and soil nutrient availability. Here we investigated stoichiometric patterns of nitrogen (N) and phosphorus (P) resorption efficiency at multiple spatial scales. First, we assembled a global database to explore nutrient resorption among and within biomes and to examine potential relationships between resorption stoichiometry and ecosystem nutrient status. Next, we used a forest regeneration chronosequence in Brazil to assess how resorption stoichiometry linked with a suite of other nutrient cycling measures and with ideas of how nutrient limitation may change over secondary forest regrowth. Finally, we measured N:P resorption ratios of six canopy tree species in a Costa Rican tropical forest. We calculated species-specific resorption ratios and compared them with patterns in leaf litter and topsoil nutrient concentrations. At the global scale, N:P resorption ratios increased with latitude and decreased with mean annual temperature (MAT) and precipitation (MAP), with N:P resorption ratios >1 in latitudes >23°. Focusing on tropical sites in our global dataset we found that, despite fewer data and a restricted latitudinal range, a significant relationship between latitude and N:P resorption ratios persisted. At the Amazon Basin chronosequence of regenerating forests, where previous work reported a transition from apparent N limitation in younger forests to P limitation in mature forests, we found N resorption was highest in the youngest forest, whereas P resorption was greatest in the mature forest. Over the course of succession, N resorption efficiency leveled off but P resorption continued to increase with forest age. In Costa Rica, though we found species
Coates, Victoria; Pattison, Ian; Sander, Graham
2016-04-01
England's rural landscape is dominated by pastoral agriculture, with 40% of land cover classified as either improved or semi-natural grassland according to the Land Cover Map 2007. Since the Second World War the intensification of agriculture has resulted in greater levels of soil compaction, associated with higher stocking densities in fields. Locally compaction has led to loss of soil storage and an increased in levels of ponding in fields. At the catchment scale soil compaction has been hypothesised to contribute to increased flood risk. Previous research (Pattison, 2011) on a 40km2 catchment (Dacre Beck, Lake District, UK) has shown that when soil characteristics are homogeneously parameterised in a hydrological model, downstream peak discharges can be 65% higher for a heavy compacted soil than for a lightly compacted soil. However, at the catchment scale there is likely to be a significant amount of variability in compaction levels within and between fields, due to multiple controlling factors. This research focusses in on one specific type of land use (permanent pasture with cattle grazing) and areas of activity within the field (feeding area, field gate, tree shelter, open field area). The aim was to determine if the soil characteristics and soil compaction levels are homogeneous in the four areas of the field. Also, to determine if these levels stayed the same over the course of the year, or if there were differences at the end of the dry (October) and wet (April) periods. Field experiments were conducted in the River Skell catchment, in Yorkshire, UK, which has an area of 120km2. The dynamic cone penetrometer was used to determine the structural properties of the soil, soil samples were collected to assess the bulk density, organic matter content and permeability in the laboratory and the Hydrosense II was used to determine the soil moisture content in the topsoil. Penetration results show that the tree shelter is the most compacted and the open field area
Mapping compound cosmic telescopes containing multiple projected cluster-scale halos
Energy Technology Data Exchange (ETDEWEB)
Ammons, S. Mark [Lawrence Livermore National Laboratory, Physics Division L-210, 7000 East Ave., Livermore, CA 94550 (United States); Wong, Kenneth C. [EACOA Fellow, Institute of Astronomy and Astrophysics, Academia Sinica (ASIAA), Taipei 10641, Taiwan (China); Zabludoff, Ann I. [Steward Observatory, University of Arizona, 933 Cherry Ave., Tucson, AZ 85721 (United States); Keeton, Charles R., E-mail: ammons1@llnl.gov, E-mail: kwong@as.arizona.edu, E-mail: aiz@email.arizona.edu, E-mail: keeton@physics.rutgers.edu [Department of Physics and Astronomy, Rutgers University, 136 Frelinghuysen Road, Piscataway, NJ 08854 (United States)
2014-01-20
Lines of sight with multiple projected cluster-scale gravitational lenses have high total masses and complex lens plane interactions that can boost the area of magnification, or étendue, making detection of faint background sources more likely than elsewhere. To identify these new 'compound' cosmic telescopes, we have found directions in the sky with the highest integrated mass densities, as traced by the projected concentrations of luminous red galaxies (LRGs). We use new galaxy spectroscopy to derive preliminary magnification maps for two such lines of sight with total mass exceeding ∼3 × 10¹⁵ M☉. From 1151 MMT Hectospec spectra of galaxies down to i_AB = 21.2, we identify two to three group- and cluster-scale halos in each beam. These are well traced by LRGs. The majority of the mass in beam J085007.6+360428 (0850) is contributed by Zwicky 1953, a massive cluster at z = 0.3774, whereas beam J130657.5+463219 (1306) is composed of three halos with virial masses of 6 × 10¹⁴-2 × 10¹⁵ M☉, one of which is A1682. The magnification maps derived from our mass models based on spectroscopy and Sloan Digital Sky Survey photometry alone display substantial étendue: the 68% confidence bands on the lens plane area with magnification exceeding 10 for a source plane of z_s = 10 are [1.2, 3.8] arcmin² for 0850 and [2.3, 6.7] arcmin² for 1306. In deep Subaru Suprime-Cam imaging of beam 0850, we serendipitously discover a candidate multiply imaged V-dropout source at z_phot = 5.03. The location of the candidate multiply imaged arcs is consistent with the critical curves for a source plane of z = 5.03 predicted by our mass model. Incorporating the position of the candidate multiply imaged galaxy as a constraint on the critical curve location in 0850 narrows the 68% confidence band on the lens plane area with μ > 10 and z_s = 10 to [1.8, 4.2] arcmin², an étendue range comparable to that of
Boyd, John P.; Amore, Paolo; Fernández, Francisco M.
2018-03-01
A "bent waveguide" in the sense used here is a small perturbation of a two-dimensional rectangular strip which is infinitely long in the down-channel direction and has a finite, constant width in the cross-channel coordinate. The goal is to calculate the smallest ("ground state") eigenvalue of the stationary Schrödinger equation, which here is a two-dimensional Helmholtz equation, ψ_xx + ψ_yy + Eψ = 0, where E is the eigenvalue and homogeneous Dirichlet boundary conditions are imposed on the walls of the waveguide. Perturbation theory gives a good description when the "bending strength" parameter ɛ is small, as described in our previous article (Amore et al., 2017) and other works cited therein. However, such series are asymptotic, and it is often impractical to calculate more than a handful of terms. It is therefore useful to develop numerical methods for the perturbed strip to cover intermediate ɛ, where the perturbation series may be inaccurate, and also to check the perturbation expansion when ɛ is small. The perturbation-induced change-in-eigenvalue, δ ≡ E(ɛ) - E(0), is O(ɛ²). We show that the computation becomes very challenging as ɛ → 0 because (i) the ground state eigenfunction varies on both O(1) and O(1/ɛ) length scales and (ii) high accuracy is needed to compute several correct digits in δ, which is itself small compared to the eigenvalue E. The multiple length scales are not geographically separate, but rather are inextricably commingled in the neighborhood of the boundary deformation. We show that coordinate mapping and immersed boundary strategies both reduce the computational domain to the uniform strip, allowing application of pseudospectral methods on tensor product grids with tensor product basis functions. We compared different basis sets; Chebyshev polynomials are best in the cross-channel direction. However, sine functions generate rather accurate analytical approximations with just a single basis function. In the down
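To illustrate why resolving several correct digits of δ is hard (a sketch of our own, unrelated to the paper's actual solver), consider the unperturbed strip of unit width: the exact lowest cross-channel Dirichlet eigenvalue is π², while a second-order finite-difference discretization converges to it only at rate h², so each tenfold grid refinement buys roughly two more digits.

```python
import math

def fd_lowest_eigenvalue(n_interior):
    """Lowest eigenvalue of -u'' = E u, u(0) = u(1) = 0, discretized by
    second-order central differences on n_interior points. The closed
    form for the tridiagonal (2, -1)/h^2 matrix is 2(1 - cos(pi h))/h^2,
    with h = 1/(n_interior + 1)."""
    h = 1.0 / (n_interior + 1)
    return 2.0 * (1.0 - math.cos(math.pi * h)) / h ** 2

# Error vs. the exact value pi^2 shrinks like h^2 (second-order accuracy),
# so matching a small delta = E(eps) - E(0) to several digits is costly --
# one motivation for the spectral methods discussed in the paper.
errors = {n: abs(fd_lowest_eigenvalue(n) - math.pi ** 2) for n in (10, 100, 1000)}
```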
Altmoos, Michael; Henle, Klaus
2010-11-01
Habitat models for animal species are important tools in conservation planning. We assessed the need to consider several scales in a case study of three amphibian and two grasshopper species in the post-mining landscapes near Leipzig (Germany). The two species groups were selected because habitat analyses for grasshoppers are usually conducted on one scale only, whereas amphibians are thought to depend on more than one spatial scale. First, we analysed how the preference for single habitat variables changed across nested scales. Most environmental variables were only significant for a habitat model on one or two scales, with the smallest scale being particularly important. On larger scales, other variables became significant that cannot be recognized on lower scales. Similar preferences across scales occurred in only 13 out of 79 cases, and in 3 out of 79 cases the preference and avoidance for the same variable were even reversed among scales. Second, we developed habitat models by using a logistic regression on every scale and for all combinations of scales and analysed how the quality of habitat models changed with the scales considered. To achieve a sufficient accuracy of the habitat models with a minimum number of variables, at least two scales were required for all species except Bufo viridis, for which a single scale, the microscale, was sufficient. Only for the European tree frog (Hyla arborea) were at least three scales required. The results indicate that the quality of habitat models increases with the number of surveyed variables and with the number of scales, but costs increase too. Searching for simplifications in multi-scaled habitat models, we suggest that 2 or 3 scales should be a suitable trade-off, when attempting to define a suitable microscale.
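The scale-comparison idea can be sketched with a toy logistic habitat model. The block below uses synthetic data (variable names, coefficients and sample size are invented, not the study's): occupancy depends on both a micro-scale predictor x1 and a macro-scale predictor x2, and a model fitted on the micro scale alone is compared with one using both scales.

```python
import math
import random

random.seed(0)

# Synthetic presence/absence data: occupancy depends on BOTH a micro-scale
# variable (x1) and a macro-scale variable (x2). Illustrative only.
N = 400
data = []
for _ in range(N):
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
    p = 1.0 / (1.0 + math.exp(-(2.0 * x1 + 2.0 * x2)))
    data.append((x1, x2, 1 if random.random() < p else 0))

def fit_logistic(rows, use_macro, steps=800, lr=0.3):
    """Logistic regression by plain gradient ascent on the log-likelihood."""
    k = 3 if use_macro else 2            # intercept + micro (+ macro)
    w = [0.0] * k
    for _ in range(steps):
        g = [0.0] * k
        for x1, x2, y in rows:
            feats = [1.0, x1, x2][:k]
            z = sum(wi * f for wi, f in zip(w, feats))
            p = 1.0 / (1.0 + math.exp(-z))
            for i, f in enumerate(feats):
                g[i] += (y - p) * f
        w = [wi + lr * gi / len(rows) for wi, gi in zip(w, g)]
    return w

def accuracy(w, rows):
    ok = 0
    for x1, x2, y in rows:
        z = sum(wi * f for wi, f in zip(w, [1.0, x1, x2][:len(w)]))
        ok += int((z > 0) == (y == 1))
    return ok / len(rows)

acc_micro = accuracy(fit_logistic(data, use_macro=False), data)
acc_both = accuracy(fit_logistic(data, use_macro=True), data)
```

In this toy setting the two-scale model classifies presence/absence noticeably better than the micro-only model, mirroring the paper's finding that at least two scales were usually required.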
Strong and nonlinear effects of fragmentation on ecosystem service provision at multiple scales
Mitchell, Matthew G. E.; Bennett, Elena M.; Gonzalez, Andrew
2015-09-01
Human actions, such as converting natural land cover to agricultural or urban land, result in the loss and fragmentation of natural habitat, with important consequences for the provision of ecosystem services. Such habitat loss is especially important for services that are supplied by fragments of natural land cover and that depend on flows of organisms, matter, or people across the landscape to produce benefits, such as pollination, pest regulation, recreation and cultural services. However, our quantitative knowledge about precisely how different patterns of landscape fragmentation might affect the provision of these types of services is limited. We used a simple, spatially explicit model to evaluate the potential impact of natural land cover loss and fragmentation on the provision of hypothetical ecosystem services. Based on current literature, we assumed that fragments of natural land cover provide ecosystem services to the area surrounding them in a distance-dependent manner such that ecosystem service flow depended on proximity to fragments. We modeled seven different patterns of natural land cover loss across landscapes that varied in the overall level of landscape fragmentation. Our model predicts that natural land cover loss will have strong and unimodal effects on ecosystem service provision, with clear thresholds indicating rapid loss of service provision beyond critical levels of natural land cover loss. It also predicts the presence of a tradeoff between maximizing ecosystem service provision and conserving natural land cover, and a mismatch between ecosystem service provision at landscape versus finer spatial scales. Importantly, the pattern of landscape fragmentation mitigated or intensified these tradeoffs and mismatches. Our model suggests that managing patterns of natural land cover loss and fragmentation could help influence the provision of multiple ecosystem services and manage tradeoffs and synergies between services across different human
Analysis of streamflow variability in Alpine catchments at multiple spatial and temporal scales
Pérez Ciria, T.; Chiogna, G.
2017-12-01
Alpine watersheds play a pivotal role in Europe for water provisioning and for hydropower production. In these catchments, temporal fluctuations of river discharge occur at multiple temporal scales due to natural as well as anthropogenic driving forces. In recent decades, modifications of the flow regime have been observed, and their origin lies in the complex interplay between the construction of dams for hydropower production, changes in water management policies and climatic changes. The alteration of the natural flow has negative impacts on freshwater biodiversity and threatens the ecosystem integrity of the Alpine region. Therefore, understanding the temporal and spatial variability of river discharge has recently become a particular concern for environmental protection and represents a crucial contribution to achieving sustainable water resources management in the Alps. In this work, time series analysis is conducted for selected gauging stations in the Inn and the Adige catchments, which cover a large part of the central and eastern region of the Alps. We analyze the available time series using the continuous wavelet transform and change-point analyses to determine how and where changes have taken place. Although the two catchments belong to different climatic zones of the Greater Alpine Region, their streamflow properties share some similar characteristics. The comparison of the collected streamflow time series in the two catchments permits detecting gradients in the hydrological system dynamics that depend on station elevation, longitudinal location in the Alps and catchment area. This work provides evidence that human activities (e.g., water management practices, flood protection measures, and changes in legislation and market regulation) have major impacts on streamflow and should be rigorously considered in hydrological models.
A catchment scale evaluation of multiple stressor effects in headwater streams.
Rasmussen, Jes J; McKnight, Ursula S; Loinaz, Maria C; Thomsen, Nanna I; Olsson, Mikael E; Bjerg, Poul L; Binning, Philip J; Kronvang, Brian
2013-01-01
Mitigation activities to improve water quality and quantity in streams as well as stream management and restoration efforts are conducted in the European Union aiming to improve the chemical, physical and ecological status of streams. Headwater streams are often characterised by impairment of hydromorphological, chemical, and ecological conditions due to multiple anthropogenic impacts. However, they are generally disregarded as water bodies for mitigation activities in the European Water Framework Directive despite their importance for supporting a higher ecological quality in higher order streams. We studied 11 headwater streams in the Hove catchment in the Copenhagen region. All sites had substantial physical habitat and water quality impairments due to anthropogenic influence (intensive agriculture, urban settlements, contaminated sites and low base-flow due to water abstraction activities in the catchment). We aimed to identify the dominating anthropogenic stressors at the catchment scale causing ecological impairment of benthic macroinvertebrate communities and provide a rank-order of importance that could help in prioritising mitigation activities. We identified numerous chemical and hydromorphological impacts of which several were probably causing major ecological impairments, but we were unable to provide a robust rank-ordering of importance suggesting that targeted mitigation efforts on single anthropogenic stressors in the catchment are unlikely to have substantial effects on the ecological quality in these streams. The SPEcies At Risk (SPEAR) index explained most of the variability in the macroinvertebrate community structure, and notably, SPEAR index scores were often very low (<10% SPEAR abundance). An extensive re-sampling of a subset of the streams provided evidence that especially insecticides were probably essential contributors to the overall ecological impairment of these streams. Our results suggest that headwater streams should be considered in
Directory of Open Access Journals (Sweden)
S. Lari
2012-11-01
The study of the interactions between natural and anthropogenic risks is necessary for quantitative risk assessment in areas affected by active natural processes, high population density and strong economic activities.
We present a multiple quantitative risk assessment on a 420 km² high-risk area (Brescia and surroundings, Lombardy, Northern Italy) for flood, seismic and industrial accident scenarios. Expected annual economic losses are quantified for each scenario and annual exceedance probability-loss curves are calculated. Uncertainty on the input variables is propagated by means of three different methodologies: Monte Carlo simulation, First Order Second Moment, and point estimate.
Expected losses calculated by means of the three approaches show similar values for the whole study area, about 64 000 000 € for earthquakes, about 10 000 000 € for floods, and about 3000 € for industrial accidents. Locally, expected losses assume quite different values if calculated with the three different approaches, with differences up to 19%.
The uncertainties on the expected losses and their propagation, performed with the three methods, are compared and discussed in the paper. In some cases, uncertainty reaches significant values (up to almost 50% of the expected loss). This underlines the necessity of including uncertainty in quantitative risk assessment, especially when it is used as a support for territorial planning and decision making. The method is developed with a possible application at a regional-to-national scale in mind, on the basis of data available in Italy over the national territory.
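Two of the three propagation methods can be illustrated on a toy loss model. The sketch below uses hypothetical numbers (not the study's data): it compares a Monte Carlo estimate of the expected annual loss with the First Order Second Moment linearization; for this independent product model the two means agree closely.

```python
import math
import random

random.seed(42)

# Illustrative single-scenario loss model: L = p * V * D, with scenario
# probability p (assumed known), uncertain exposed value V and damage ratio D.
# All numbers are invented for illustration.
p = 0.01                         # annual probability of the scenario
mu_V, sd_V = 1.0e8, 2.0e7        # exposed value (EUR)
mu_D, sd_D = 0.30, 0.05          # damage ratio (fraction of V lost)

# Monte Carlo propagation of the input uncertainty.
n = 100_000
mc_mean = sum(p * random.gauss(mu_V, sd_V) * random.gauss(mu_D, sd_D)
              for _ in range(n)) / n

# First Order Second Moment: linearize L around the input means.
# L ~= p*mu_V*mu_D + p*mu_D*(V - mu_V) + p*mu_V*(D - mu_D)
fosm_mean = p * mu_V * mu_D
fosm_sd = p * math.sqrt((mu_D * sd_V) ** 2 + (mu_V * sd_D) ** 2)
```

The abstract's point survives even in this toy: the mean loss is stable across methods, while the spread (fosm_sd here) is what territorial planning decisions should not ignore.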
International Nuclear Information System (INIS)
Marceau, R.K.W.; Stephenson, L.T.; Hutchinson, C.R.; Ringer, S.P.
2011-01-01
A model Al-3Cu-(0.05 Sn) (wt%) alloy containing a bimodal distribution of relatively shear-resistant θ' precipitates and shearable GP zones is considered in this study. It has recently been shown that the addition of the GP zones to such microstructures can lead to significant increases in strength without a decrease in the uniform elongation. In this study, atom probe tomography (APT) has been used to quantitatively characterise the evolution of the GP zones and the solute distribution in the bimodal microstructure as a function of applied plastic strain. Recent nuclear magnetic resonance (NMR) analysis has clearly shown strain-induced dissolution of the GP zones, which is supported by the current APT data with additional spatial information. There is significant repartitioning of Cu from the GP zones into the solid solution during deformation. A new approach for cluster finding in APT data has been used to quantitatively characterise the evolution of the sizes and shapes of the Cu-containing features in the solid solution as a function of applied strain. -- Research highlights: → A new approach for cluster finding in atom probe tomography (APT) data has been used to quantitatively characterise the evolution of the sizes and shapes of the Cu-containing features with multiple length scales. → In this study, a model Al-3Cu-(0.05 Sn) (wt%) alloy containing a bimodal distribution of relatively shear-resistant θ' precipitates and shearable GP zones is considered. → APT has been used to quantitatively characterise the evolution of the GP zones and the solute distribution in the bimodal microstructure as a function of applied plastic strain. → It is clearly shown that there is strain-induced dissolution of the GP zones with significant repartitioning of Cu from the GP zones into the solid solution during deformation.
Gerst, K.; Enquist, C.; Rosemartin, A.; Denny, E. G.; Marsh, L.; Moore, D. J.; Weltzin, J. F.
2014-12-01
The USA National Phenology Network (USA-NPN; www.usanpn.org) serves science and society by promoting a broad understanding of plant and animal phenology and the relationships among phenological patterns and environmental change. The National Phenology Database maintained by USA-NPN now has over 3.7 million records for plants and animals for the period 1954-2014, with the majority of these observations collected since 2008 as part of a broad, national contributory science strategy. These data have been used in a number of science, conservation and resource management applications, including national assessments of historical and potential future trends in phenology, regional assessments of spatio-temporal variation in organismal activity, and local monitoring for invasive species detection. Customizable data downloads are freely available, and data are accompanied by FGDC-compliant metadata, data-use and data-attribution policies, vetted and documented methodologies and protocols, and version control. While users are free to develop custom algorithms for data cleaning, winnowing and summarization prior to analysis, the National Coordinating Office of USA-NPN is developing a suite of standard data products to facilitate use and application by a diverse set of data users. This presentation provides a progress report on data product development, including: (1) Quality controlled raw phenophase status data; (2) Derived phenometrics (e.g. onset, duration) at multiple scales; (3) Data visualization tools; (4) Tools to support assessment of species interactions and overlap; (5) Species responsiveness to environmental drivers; (6) Spatially gridded phenoclimatological products; and (7) Algorithms for modeling and forecasting future phenological responses. The prioritization of these data products is a direct response to stakeholder needs related to informing management and policy decisions. We anticipate that these products will contribute to broad understanding of plant
DEFF Research Database (Denmark)
Grandorf Bak, Urd; Mols-Mortensen, Agnes; Gregersen, Olavur
2018-01-01
was conducted. The total cost per kg dw of cultivated S. latissima decreased when the number of possible harvests without re-seeding was increased (from € 36.73 to € 9.27). This work has demonstrated that large-scale kelp cultivation is possible using multiple partial harvesting in the Faroe Islands...
Lucas-Carrasco, Ramona; Sastre-Garriga, Jaume; Galan, Ingrid; Den Oudsten, Brenda L.; Power, Michael J.
2014-01-01
Purpose: To assess Life Satisfaction, using the Satisfaction with Life Scale (SWLS), and to analyze its psychometric properties in Multiple Sclerosis (MS). Method: Persons with MS (n = 84) recruited at the MS Centre of Catalonia (Spain) completed a battery of subjective assessments including the
Rienstra, S.W.; Eversman, W.
2001-01-01
An explicit, analytical, multiple-scales solution for modal sound transmission through slowly varying ducts with mean flow and acoustic lining is tested against a numerical finite-element solution solving the same potential flow equations. The test geometry taken is representative of a high-bypass
Ge Sun; Steven McNulty; Jianbiao Lu; James Vose; Devendra Amayta; Guoyi Zhou; Zhiqiang Zhang
2006-01-01
Watershed management and restoration practices require a clear understanding of the basic eco-hydrologic processes and ecosystem responses to disturbances at multiple scales (Bruijnzeel, 2004; Scott et al., 2005). Worldwide century-long forest hydrologic research has documented that deforestation and forestation (i.e. reforestation and afforestation) can have variable...
DEFF Research Database (Denmark)
Langeskov-Christensen, D; Feys, P; Baert, I
2017-01-01
BACKGROUND: The severity of walking impairment in persons with multiple sclerosis (pwMS) at different levels on the expanded disability status scale (EDSS) is unclear. Furthermore, it is unclear if the EDSS is differently related to performed- and perceived walking capacity tests. AIMS: To quantify...
Tabrizi, Babak H.; Ghaderi, Seyed Farid
2016-09-01
Simultaneous planning of project scheduling and material procurement can improve project execution costs. Hence, the issue is addressed here by a mixed-integer programming model. The proposed model facilitates procurement decisions by accounting for a number of suppliers, each offering a distinctive discount formula, from which to purchase the required materials. The model aims to develop schedules with the best net present value with respect to the benefits and costs of project execution. A genetic algorithm is applied to deal with the problem, in addition to a modified version equipped with a variable neighbourhood search. The underlying factors of the solution methods are calibrated by the Taguchi method to obtain robust solutions. The performance of the aforementioned methods is compared for different problem sizes, in which the added local search proved efficient. Finally, a sensitivity analysis is carried out to check the effect of inflation on the objective function value.
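The objective by which candidate schedules are ranked can be made concrete with a minimal NPV evaluator. The cash flows and discount rate below are invented for illustration; they only show why deferring a material purchase can raise a schedule's NPV.

```python
# Net present value of a candidate schedule's period-indexed cash flows
# (period 0 = now). Numbers are illustrative, not from the paper's model.

def npv(cash_flows, rate):
    """Discount a list of per-period cash flows to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Two candidate schedules of the same project: the second defers a material
# purchase (the -500 outflow) by one period, which raises its NPV.
schedule_a = [-1000.0, -500.0, 0.0, 2000.0]
schedule_b = [-1000.0, 0.0, -500.0, 2000.0]
rate = 0.10
npv_a = npv(schedule_a, rate)
npv_b = npv(schedule_b, rate)
```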
Directory of Open Access Journals (Sweden)
Long Yuhua
2017-12-01
In this paper, we study a second-order nonlinear discrete Robin boundary value problem with parameter dependence. Applying invariant sets of descending flow and variational methods, we establish some new sufficient conditions for the existence of sign-changing solutions, positive solutions and negative solutions of the system when the parameter belongs to appropriate intervals. In addition, an example is given to illustrate our results.
Multiple myeloma in South Cumbria 1974-80: problems of health analysis in small communities
International Nuclear Information System (INIS)
Jessop, E.G.; Horsley, S.D.
1985-01-01
The occurrence of seven cases of multiple myeloma over seven years in a small community 15 miles from a plant reprocessing nuclear fuel caused much local concern. A case control study of 34 confirmed cases in the health district during 1974 to 1980 revealed no excess of known risk factors among the 23 cases for whom informants could be traced. The possible effects of exposure to marine discharges of radioactive material cannot be completely ruled out, but dose estimates make this highly unlikely. Such studies are a necessary response by community physicians to the population they serve but have major practical and theoretical limitations. (author)
Buckner, Julia D; Farris, Samantha G; Schmidt, Norman B; Zvolensky, Michael J
2014-06-01
Little empirical work has evaluated why socially anxious smokers are especially vulnerable to more severe nicotine dependence and cessation failure. Presumably, these smokers rely on cigarettes to help them manage their chronically elevated negative affect elicited by a wide array of social contexts. The current study examined the direct and indirect effects of social anxiety cross-sectionally in regard to a range of smoking processes among 466 treatment-seeking smokers. Negative affect and negative affect reduction motives were examined as mediators of the relations of social anxiety with nicotine dependence and cessation problems. Social anxiety was directly and robustly associated with perceived barriers to smoking cessation and problems experienced during past quit attempts. Social anxiety was also associated with greater nicotine dependence and smoking inflexibility indirectly through negative affect and negative affect smoking motives. Negative affect and smoking to reduce negative affect mediated these relations. These findings document the important role of negative affect and negative affect reduction motives in the relationships of social anxiety with nicotine dependence and cessation problems.
Solutions to a combined problem of excessive hydrogen sulfide in biogas and struvite scaling.
Charles, W; Cord-Ruwisch, R; Ho, G; Costa, M; Spencer, P
2006-01-01
The Woodman Point Wastewater Treatment Plant (WWTP) in Western Australia has experienced two separate problems causing avoidable maintenance costs: the build-up of massive struvite (MgNH4PO4·6H2O) scaling downstream of the anaerobic digester and the formation of hydrogen sulfide (H2S) in the digester gas to levels that compromised gas engine operation and caused high operating costs on the gas scrubber. As both problems stem from a chemical imbalance in the anaerobic digester, we investigated whether both could be feasibly and economically addressed by a common solution (such as dosing of iron solutions to precipitate both sulfide and phosphate), or by separate approaches. Laboratory results showed that the hydrogen sulfide emission in digesters could be effectively and economically controlled by iron dosing. Slightly more than the theoretical value of 1.5 mol of FeCl3 was required to precipitate 1 mol of dissolved sulfide inside the digester. Due to the high concentration of PO4(3-) in the digested sludge liquor, significantly more iron is required for struvite precipitation, and iron dosing did not appear to be an economical solution for struvite control via iron phosphate formation. By taking advantage of the natural tendency of struvite formation in the digester liquid, it is possible to reduce the risk of struvite precipitation in and around the sludge-dewatering centrifuge by increasing the pH to precipitate struvite out before passing through the centrifuge. However, as the Mg2+/PO4(3-) molar ratio in digested sludge was low, by increasing the pH alone (using NaOH) the precipitation of PO4(3-) was limited by the amount of cations (Ca2+ and Mg2+) available in the sludge. Although this would reduce struvite precipitation in the centrifuge, it could not significantly reduce PO4(3-) recycling back to the plant. For long-term operation, maximum PO4(3-) reduction should be the ultimate aim to minimise PO4
Single- or multiple-visit endodontics: which technique results in fewest postoperative problems?
Balto, Khaled
2009-01-01
The Cochrane Central Register of Controlled Trials, Medline, Embase, six thesis databases (Networked Digital Library of Theses and Dissertations, Proquest Digital Dissertations, OAIster, Index to Theses, Australian Digital Thesis Program and Dissertation.com) and one conference report database (BIOSIS Previews) were searched. There were no language restrictions. Studies were included if subjects had a noncontributory medical history; underwent nonsurgical root canal treatment during the study; there was comparison between single- and multiple-visit root canal treatment; and if outcome was measured in terms of pain degree or prevalence of flare-up. Data were extracted using a standard data extraction sheet. Because of variations in recorded outcomes and methodological and clinical heterogeneity, a meta-analysis was not carried out, although a qualitative synthesis was presented. Sixteen studies fitted the inclusion criteria in the review, with sample size varying from 60-1012 cases. The prevalence of postoperative pain ranged from 3-58%. The heterogeneity of the included studies was far too great to yield meaningful results from a meta-analysis. Compelling evidence is lacking to indicate any significantly different prevalence of postoperative pain or flare-up following either single- or multiple-visit root canal treatment.
International Nuclear Information System (INIS)
Shafie-khah, M.; Moghaddam, M.P.; Sheikh-El-Eslami, M.K.; Catalão, J.P.S.
2014-01-01
Highlights: • A novel hybrid method based on decomposition of SCUC into QP and BP problems is proposed. • An adapted binary programming method and an enhanced dual neural network model are applied. • The proposed EDNN converges exactly to the global optimal solution of the QP. • An AC power flow procedure is developed for including contingency/security issues. • It is suited for large-scale systems, providing both accurate and fast solutions. - Abstract: This paper presents a novel hybrid method for solving the security constrained unit commitment (SCUC) problem. The proposed formulation requires much less computation time in comparison with other methods while assuring the accuracy of the results. Furthermore, the framework provided here allows the inclusion of an accurate description of warmth-dependent startup costs, valve point effects, multiple fuel costs, forbidden zones of operation, and AC load flow bounds. To solve the nonconvex problem, an adapted binary programming method and an enhanced dual neural network model are utilized as optimization tools, and a procedure for AC power flow modeling is developed for including contingency/security issues, as new contributions to earlier studies. Unlike classical SCUC methods, the proposed method makes it possible to solve the unit commitment problem while simultaneously complying with the network limits. In addition to conventional test systems, a real-world large-scale power system with 493 units has been used to fully validate the effectiveness of the novel hybrid method proposed
Directory of Open Access Journals (Sweden)
Sabri Bensid
2010-04-01
We study the nonlinear elliptic problem with discontinuous nonlinearity $$\displaylines{ -\Delta u = f(u)H(u-\mu) \quad\hbox{in } \Omega, \cr u = h \quad\hbox{on } \partial\Omega, }$$ where $H$ is the Heaviside unit function, $f,h$ are given functions and $\mu$ is a positive real parameter. The domain $\Omega$ is the unit ball in $\mathbb{R}^n$ with $n\geq 3$. We show the existence of a positive solution $u$ and a hypersurface separating the region where $-\Delta u=0$ from the region where $-\Delta u=f(u)$. Our method relies on the implicit function theorem and bifurcation analysis.
On a first passage problem in general queueing systems with multiple vacations
Directory of Open Access Journals (Sweden)
Jewgeni H. Dshalalow
1992-01-01
The author studies a generalized single-server queueing system with bulk arrivals and batch service, where the server takes vacations each time the queue level falls below r (≥ 1) in accordance with the multiple vacation discipline. The input to the system is assumed to be a compound Poisson process modulated by the system and the service is assumed to be state dependent. One of the essential parts of the analysis of the system is the employment of new techniques related to the first excess level processes. A preliminary analysis of such processes and recent results of the author on modulated processes enabled the author to obtain all major characteristics for the queueing process explicitly. Various examples and applications are discussed.
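The first excess level object that the analysis rests on is easy to illustrate: it is the index at which a cumulative batch-arrival process first crosses a control level, together with the overshoot above that level. A toy sketch with arbitrary batch sizes:

```python
# First excess level of a cumulative arrival process: the index at which the
# running total of batch arrivals first reaches a control level, plus the
# overshoot. Batch sizes below are arbitrary illustration values.

def first_excess(batches, level):
    """Return (index, total, overshoot) at the first crossing of `level`,
    or None if the level is never reached."""
    total = 0
    for k, b in enumerate(batches):
        total += b
        if total >= level:
            return k, total, total - level
    return None

batches = [2, 1, 4, 3, 6, 2]
idx, total, overshoot = first_excess(batches, level=9)
```

With these batches the running totals are 2, 3, 7, 10, ..., so the level 9 is first exceeded at index 3 with overshoot 1; in the paper the same crossing is analyzed in distribution rather than pathwise.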
webMGR: an online tool for the multiple genome rearrangement problem.
Lin, Chi Ho; Zhao, Hao; Lowcay, Sean Harry; Shahab, Atif; Bourque, Guillaume
2010-02-01
The algorithm MGR enables the reconstruction of rearrangement phylogenies based on gene or synteny block order in multiple genomes. Although MGR has been successfully applied to study the evolution of different sets of species, its utilization has been hampered by the prohibitive running time for some applications. In the current work, we have designed new heuristics that significantly speed up the tool without compromising its accuracy. Moreover, we have developed a web server (webMGR) that includes elaborate web output to facilitate navigation through the results. webMGR can be accessed via http://www.gis.a-star.edu.sg/~bourque. The source code of the improved standalone version of MGR is also freely available from the web site. Supplementary data are available at Bioinformatics online.
Elastic tracking versus neural network tracking for very high multiplicity problems
International Nuclear Information System (INIS)
Harlander, M.; Gyulassy, M.
1991-04-01
A new Elastic Tracking (ET) algorithm is proposed for finding tracks in very high multiplicity and noisy environments. It is based on a dynamical reinterpretation and generalization of the Radon transform and is related to elastic net algorithms for geometrical optimization. ET performs an adaptive nonlinear fit to noisy data with a variable number of tracks. Its numerics are more efficient than those of the traditional Radon or Hough transform method because it avoids binning of phase space and the costly search for valid minima. Spurious local minima are avoided in ET by introducing a time-dependent effective potential. The method is shown to be very robust to noise and measurement error and extends tracking capabilities to much higher track densities than possible via local road finding or even the novel Denby-Peterson neural network tracking algorithms. 12 refs., 2 figs
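For contrast with ET's bin-free fit, here is a minimal version of the binned Hough accumulator it replaces. The synthetic "hits" and all resolution parameters are illustrative: points on a line vote for every (θ, ρ) pair consistent with them via ρ = x·cos θ + y·sin θ, and the accumulator peak recovers the line.

```python
import math

# Classic binned Hough transform for straight-line finding: each hit votes in
# a (theta, rho) accumulator; the peak bin gives the line parameters. This is
# exactly the phase-space binning that Elastic Tracking is designed to avoid.

points = [(x / 10.0, 1.05) for x in range(-40, 41)]   # 81 hits on the line y = 1.05

n_theta = 180                     # 1-degree angle bins
rho_max, n_rho = 5.0, 100         # rho in [-5, 5), bins of width 0.1
acc = [[0] * n_rho for _ in range(n_theta)]

for x, y in points:
    for t in range(n_theta):
        theta = math.radians(t)
        rho = x * math.cos(theta) + y * math.sin(theta)
        r = int((rho + rho_max) / (2 * rho_max) * n_rho)
        if 0 <= r < n_rho:
            acc[t][r] += 1

best_t, best_r = max(((t, r) for t in range(n_theta) for r in range(n_rho)),
                     key=lambda tr: acc[tr[0]][tr[1]])
best_theta_deg = best_t                                        # expect 90
best_rho = (best_r + 0.5) * (2 * rho_max) / n_rho - rho_max    # bin center
```

The drawbacks the abstract mentions are visible even here: accuracy is limited by the bin widths, and the peak search scans the whole accumulator; ET instead relaxes continuous track parameters toward the data.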
Neutron multiplication and shielding problems in PWR spent-fuel shipping casks
International Nuclear Information System (INIS)
Devillers, C.
1976-01-01
In order to evaluate the degree of accuracy of computational methods used for the shield design of spent-fuel shipping casks, comparisons were made between biological dose rate calculations and measurements at the surface of a cask carrying three PWR fuel assemblies (the fuel being successively wet and dry). The experimental methods used provide k_eff with an accuracy of 0.024. Neutron multiplication coefficients provided by the APOLLO and DOT-3 codes are located within the uncertainty range of the experimentally derived values. The APOLLO plus DOT codes for neutron source calculations and ANISN plus DOT codes for neutron transmission calculations provide neutron dose rate predictions in agreement with measurements to within 10%. The PEPIN 76 code used for deriving fission product γ-rays and the point kernel code MERCURE 4 treating the γ-ray transmission give γ dose rate predictions that generally differ from measurements by less than 25%
Tang, Fengyan; Jang, Heejung; Lingler, Jennifer; Tamres, Lisa K.; Erlen, Judith A.
2016-01-01
Caring for an older adult with memory loss is stressful. Caregiver stress could produce negative outcomes such as depression. Previous research is limited in examining multiple intermediate pathways from caregiver stress to depressive symptoms. This study addresses this limitation by examining the role of self-efficacy, social support, and problem-solving in mediating the relationships between caregiver stressors and depressive symptoms. Using a sample of 91 family caregivers, we tested simultaneously multiple mediators between caregiver stressors and depression. Results indicate that self-efficacy mediated the pathway from daily hassles to depression. Findings point to the importance of improving self-efficacy in psychosocial interventions for caregivers of older adults with memory loss. PMID:26317766
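The product-of-coefficients logic behind such mediation analyses can be sketched on synthetic data. Effect sizes and the caregiving framing below are invented; note the simple one-predictor b-path regression is only valid here because the simulated model has no direct X→Y effect (a real analysis would regress Y on both X and M).

```python
import random

random.seed(1)

# Product-of-coefficients mediation sketch on synthetic data:
# stressor X -> mediator M (e.g. self-efficacy) -> outcome Y (depression).
# True paths: a = 0.5, b = 0.7, no direct X -> Y effect. Illustrative only.
N = 2000
a_true, b_true = 0.5, 0.7
X = [random.gauss(0, 1) for _ in range(N)]
M = [a_true * x + random.gauss(0, 0.5) for x in X]
Y = [b_true * m + random.gauss(0, 0.5) for m in M]

def slope(u, v):
    """OLS slope of v regressed on u."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    num = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))
    den = sum((ui - mu) ** 2 for ui in u)
    return num / den

a_hat = slope(X, M)            # a-path: X -> M
b_hat = slope(M, Y)            # b-path: M -> Y (no direct effect simulated)
indirect = a_hat * b_hat       # estimated mediated effect, ~ 0.35
```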
International Nuclear Information System (INIS)
Shafiq, A.; Meyer, H.E. de; Grosjean, C.C.
1985-01-01
An approximate model based on an improved diffusion-type theory is established for treating multiple synthetic scattering in a homogeneous slab of finite thickness. As in the case of the exact treatment given in the preceding paper (Part I), it appears possible to transform the considered transport problem into an equivalent fictitious one involving multiple isotropic scattering, therefore permitting the application of an established corrected diffusion theory for treating isotropic scattering taking place in a convex homogeneous medium bounded by a vacuum in the presence of various types of sources. The approximate values of the reflection and transmission coefficients are compared with the rigorous values listed in Part I. In this way, the high accuracy of the approximation is clearly demonstrated. (author)
Exact multiplicity results for quasilinear boundary-value problems with cubic-like nonlinearities
Directory of Open Access Journals (Sweden)
Idris Addou
2000-01-01
Full Text Available We consider the boundary-value problem $$\displaylines{ -(\varphi_p (u'))' =\lambda f(u) \mbox{ in }(0,1), \cr u(0) = u(1) =0, }$$ where $p>1$, $\lambda >0$ and $\varphi_p (x) =|x|^{p-2}x$. The nonlinearity $f$ is cubic-like with three distinct roots $0=a<b<c$. By means of a quadrature method, we provide the exact number of solutions for all $\lambda >0$. In this way we extend a recent result, for $p=2$, by Korman et al. \cite{KormanLiOuyang} to the general case $p>1$. We shall prove that when $1<p\leq 2$ the structure of the solution set is exactly the same as that studied in the case $p=2$ by Korman et al. \cite{KormanLiOuyang}, and strictly different in the case $p>2$.
A hybrid multiple attribute decision making method for solving problems of industrial environment
Directory of Open Access Journals (Sweden)
Dinesh Singh
2011-01-01
Full Text Available The selection of an appropriate alternative in the industrial environment is an important but, at the same time, complex and difficult problem because of the wide range of available alternatives and the similarity among them. Therefore, there is a need for simple, systematic, and logical methods or mathematical tools to guide decision makers in considering a number of selection attributes and their interrelations. In this paper, a hybrid decision-making method combining the graph theory and matrix approach (GTMA) and the analytical hierarchy process (AHP) is proposed. Three examples are presented to illustrate the potential of the proposed GTMA-AHP method, and the results are compared with those obtained using other decision-making methods.
Woollands, Robyn M.; Read, Julie L.; Probe, Austin B.; Junkins, John L.
2017-12-01
We present a new method for solving the multiple-revolution perturbed Lambert problem using the method of particular solutions and modified Chebyshev-Picard iteration. The method of particular solutions differs from the well-known Newton shooting method in that integration of the state transition matrix (36 additional differential equations) is not required; instead it makes use of a reference trajectory and a set of n particular solutions. Any numerical integrator can be used for solving two-point boundary-value problems with the method of particular solutions; however, we show that using modified Chebyshev-Picard iteration affords an avenue for increased efficiency that is not available with other step-by-step integrators. We take advantage of the path-approximation nature of modified Chebyshev-Picard iteration (nodes iteratively converge to fixed points in space) and utilize a variable-fidelity force model for propagating the reference trajectory. Remarkably, we demonstrate that computing the particular solutions with only low-fidelity function evaluations greatly increases the efficiency of the algorithm while maintaining machine-precision accuracy. Our study reveals that solving the perturbed Lambert problem using the method of particular solutions with modified Chebyshev-Picard iteration is about an order of magnitude faster than the classical shooting method with a tenth-twelfth order Runge-Kutta integrator. It is well known that the solution to Lambert's problem over multiple revolutions is not unique, and to ensure that all possible solutions are considered we make use of a reliable preexisting Keplerian Lambert solver to warm-start our perturbed algorithm.
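The core of Chebyshev-Picard iteration is easy to sketch in miniature: the trajectory is represented at Chebyshev nodes and the Picard integral operator is iterated until the nodes converge. The following scalar-ODE sketch is our own illustration, not the authors' code; `picard_chebyshev` and its parameters are illustrative, and the real method adds the two-point boundary-value and perturbed-dynamics machinery:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def picard_chebyshev(f, x0, t0, t1, deg=32, iters=40):
    """Solve x'(t) = f(t, x), x(t0) = x0 on [t0, t1] by Picard iteration
    over a Chebyshev-node discretisation of the trajectory."""
    tau = np.cos(np.pi * np.arange(deg + 1) / deg)   # Chebyshev nodes in [-1, 1]
    t = 0.5 * (t1 - t0) * (tau + 1.0) + t0           # mapped to [t0, t1]
    x = np.full(t.shape, float(x0))                  # initial guess: constant
    for _ in range(iters):
        c = C.chebfit(tau, f(t, x), deg)             # Chebyshev series of f(t, x)
        ci = C.chebint(c)                            # antiderivative in tau
        # Picard update: x <- x0 + integral from t0 to t of f(s, x(s)) ds
        x = x0 + 0.5 * (t1 - t0) * (C.chebval(tau, ci) - C.chebval(-1.0, ci))
    return t, x
```

For x' = -x, x(0) = 1 the iterates converge to e^(-t) over the whole interval at once, which is the "path approximation" property the abstract refers to.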
The Problem of Multiple Criteria Selection of the Surface Mining Haul Trucks
Bodziony, Przemysław; Kasztelewicz, Zbigniew; Sawicki, Piotr
2016-06-01
Vehicle transport is a dominant type of technological process in rock mines, and its profitability is strictly dependent on the overall cost of its exploitation, especially on diesel oil consumption. Thus, a rational design of a transportation system based on haul trucks should result from a thorough analysis of technical and economic issues, including both the cost of purchase and of further exploitation, which have a crucial impact on the cost of mineral extraction. Moreover, off-highway trucks should be selected with respect to all specific exploitation conditions and even the user's preferences and experience. In this paper, the development of a universal family of evaluation criteria as well as the application of an evaluation method for the haul truck selection process under specific exploitation conditions in surface mining have been carried out. The methodology presented in the paper is based on the principles of multiple criteria decision aiding (MCDA) using one of the ranking methods, i.e. ELECTRE III. The applied methodology allows for the ranking of alternative solutions (variants) on the considered set of haul trucks. The result of the research is a universal methodology, and it may consequently be applied in other surface mines with similar exploitation parameters.
Directory of Open Access Journals (Sweden)
McFarland Henry
2009-01-01
Full Text Available Magnetic Resonance Imaging (MRI) has brought several benefits to the study of Multiple Sclerosis (MS). It provides accurate measurement of disease activity, facilitates precise diagnosis, and aids in the assessment of newer therapies. The imaging guidelines for MS are broadly divided into approaches for imaging patients with suspected MS or clinically isolated syndromes (CIS) and approaches for monitoring patients with established MS. In this review, the technical aspects of MR imaging for MS are briefly discussed. The imaging process needs to capture the twin aspects of acute MS, viz. the autoimmune acute inflammatory process and the neurodegenerative process. Gadolinium-enhanced MRI can identify acute inflammatory lesions precisely. The commonly applied MRI marker of disease progression is brain atrophy. Whole-brain Magnetization Transfer Ratio (MTR) and Magnetic Resonance Spectroscopy (MRS) are two other techniques used to monitor disease progression. A variety of imaging techniques such as Double Inversion Recovery (DIR), Spoiled Gradient Recalled (SPGR) acquisition, and Fluid Attenuated Inversion Recovery (FLAIR) have been utilized to study the cortical changes in MS. MRI is now extensively used in Phase I, II and III clinical trials of new therapies. As the technical aspects of MRI advance rapidly and higher field strengths become available, it is hoped that the impact of MRI on our understanding of MS will be even more profound in the next decade.
Directory of Open Access Journals (Sweden)
Zhang Xuemei
2009-01-01
Full Text Available By constructing suitable upper and lower solutions and combining Schauder's fixed point theorem with the maximum principle, this paper establishes necessary and sufficient conditions to guarantee the existence of positive solutions for a class of singular boundary value problems on time scales. The results significantly extend and improve many known results for both the continuous case and more general time scales. We illustrate our results with one example.
Variable choices of scaling in the homogenization of a Nernst-Planck-Poisson problem
Ray, N.; Eck, C.; Muntean, A.; Knabner, P.
2011-01-01
We perform the periodic homogenization (i.e. the limit ε → 0) of the non-stationary Nernst-Planck-Poisson system using two-scale convergence, where ε is a suitable scale parameter. The objective is to investigate the influence of variable choices of scaling in ε of the microscopic system of partial
Zhai, Chengbo; Hao, Mengru
2014-01-01
By using Krasnoselskii's fixed point theorem, we study the existence of at least one or two positive solutions to a system of fractional boundary value problems given by -D_{0+}^{ν1} y1(t) = λ1 a1(t) f(y1(t), y2(t)), -D_{0+}^{ν2} y2(t) = λ2 a2(t) g(y1(t), y2(t)), where D_{0+}^{ν} is the standard Riemann-Liouville fractional derivative, ν1, ν2 ∈ (n - 1, n] for n > 3 and n ∈ N, subject to the boundary conditions y1^{(i)}(0) = 0 = y2^{(i)}(0) for 0 ≤ i ≤ n - 2, and [D_{0+}^{α} y1(t)]_{t=1} = 0 = [D_{0+}^{α} y2(t)]_{t=1} for 1 ≤ α ≤ n - 2, or y1^{(i)}(0) = 0 = y2^{(i)}(0) for 0 ≤ i ≤ n - 2, and [D_{0+}^{α} y1(t)]_{t=1} = φ1(y1), [D_{0+}^{α} y2(t)]_{t=1} = φ2(y2) for 1 ≤ α ≤ n - 2, with φ1, φ2 ∈ C([0,1], R). Our results are new and complement previously known results. As an application, we also give an example to demonstrate our result.
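The Riemann-Liouville operator D_{0+}^ν appearing above can be evaluated numerically through the closely related Grünwald-Letnikov definition, which agrees with it for sufficiently smooth functions vanishing at 0. The sketch below is our own illustration, not from the paper; the function name and step count are arbitrary:

```python
import numpy as np

def gl_derivative(f, nu, t, n=2000):
    """Grunwald-Letnikov approximation of the order-nu fractional derivative
    of f at t, using n steps of history on [0, t]."""
    h = t / n
    # weights w_k = (-1)^k * binom(nu, k), via the standard recurrence
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (k - 1 - nu) / k
    # D^nu f(t) ~ h^(-nu) * sum_k w_k f(t - k h)
    return float(np.dot(w, f(t - h * np.arange(n + 1)))) / h**nu
```

For f(t) = t and ν = 1/2, the exact Riemann-Liouville value at t = 1 is 2/√π ≈ 1.128, which the sketch reproduces to within about 10⁻².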
Hadronic multiplicity and total cross-section: a new scaling in wide energy range
International Nuclear Information System (INIS)
Kobylinsky, N.A.; Martynov, E.S.; Shelest, V.P.
1983-01-01
The ratio of mean multiplicity to total cross-section is shown to be the same for all the Regge models and to rise with energy as ln s, which is confirmed by experimental data. Hence, the power of multiplicity growth is unambiguously connected with that of the total cross-section. As regards the observed growth, approximately ln²s, it indicates a dipole character of the pomeron singularity.
Measures of spike train synchrony for data with multiple time scales
Satuvuori, Eero; Mulansky, Mario; Bozanic, Nebojsa; Malvestio, Irene; Zeldenrust, Fleur; Lenk, Kerstin; Kreuz, Thomas
2017-01-01
Background Measures of spike train synchrony are widely used in both experimental and computational neuroscience. Time-scale independent and parameter-free measures, such as the ISI-distance, the SPIKE-distance and SPIKE-synchronization, are preferable to time scale parametric measures, since by
Directory of Open Access Journals (Sweden)
Aliasghar Baziar
2015-03-01
Full Text Available Abstract: In order to handle large-scale problems, this study uses the shuffled frog leaping algorithm. This algorithm is an optimization method based on natural memetics, with a new two-phase modification that gives a better search of the problem space. The suggested algorithm is evaluated by comparing it with some well-known algorithms on several benchmark optimization problems. The simulation results clearly show the superiority of this algorithm over the other well-known methods in this area.
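The two-phase modification is not spelled out in the abstract, but the baseline shuffled frog leaping algorithm it builds on is straightforward to sketch. This is a hedged, minimal version of the standard SFLA; all parameter values are our own defaults, not the paper's:

```python
import numpy as np

def sfla(obj, dim, n_memeplexes=5, frogs_per_memeplex=10,
         local_steps=10, shuffles=50, bounds=(-5.0, 5.0), seed=0):
    """Minimal shuffled frog leaping sketch: frogs (candidate solutions) are
    sorted by fitness, dealt into memeplexes, locally evolved by moving the
    worst frog toward better frogs, then shuffled back together."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    frogs = rng.uniform(lo, hi, size=(n_memeplexes * frogs_per_memeplex, dim))
    for _ in range(shuffles):
        frogs = frogs[np.argsort([obj(x) for x in frogs])]   # best first
        global_best = frogs[0].copy()
        # deal frogs into memeplexes round-robin (standard SFLA partitioning)
        memeplexes = [frogs[i::n_memeplexes] for i in range(n_memeplexes)]
        for m in memeplexes:
            for _ in range(local_steps):
                fit = np.array([obj(x) for x in m])
                w, b = fit.argmax(), fit.argmin()            # worst, best frog
                cand = np.clip(m[w] + rng.uniform(0, 1, dim) * (m[b] - m[w]), lo, hi)
                if obj(cand) >= fit[w]:       # no gain: jump toward global best
                    cand = np.clip(m[w] + rng.uniform(0, 1, dim) * (global_best - m[w]), lo, hi)
                if obj(cand) >= fit[w]:       # still no gain: random reset
                    cand = rng.uniform(lo, hi, dim)
                m[w] = cand
        frogs = np.vstack(memeplexes)
    return frogs[np.argmin([obj(x) for x in frogs])]
```

Only the worst frog of a memeplex is ever replaced, so the best solution found is never lost between shuffles.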
Double inflation: A possible resolution of the large-scale structure problem
International Nuclear Information System (INIS)
Turner, M.S.; Villumsen, J.V.; Vittorio, N.; Silk, J.; Juszkiewicz, R.
1986-11-01
A model is presented for the large-scale structure of the universe in which two successive inflationary phases resulted in large small-scale and small large-scale density fluctuations. This bimodal density fluctuation spectrum in an Ω = 1 universe dominated by hot dark matter leads to large-scale structure of the galaxy distribution that is consistent with recent observational results. In particular, large, nearly empty voids and significant large-scale peculiar velocity fields are produced over scales of ∼100 Mpc, while the small-scale structure over ≤ 10 Mpc resembles that in a low density universe, as observed. Detailed analytical calculations and numerical simulations are given of the spatial and velocity correlations. 38 refs., 6 figs
Solving Large-Scale Computational Problems Using Insights from Statistical Physics
Energy Technology Data Exchange (ETDEWEB)
Selman, Bart [Cornell University]
2012-02-29
Many challenging problems in computer science and related fields can be formulated as constraint satisfaction problems. Such problems consist of a set of discrete variables and a set of constraints between those variables, and represent a general class of so-called NP-complete problems. The goal is to find a value assignment to the variables that satisfies all constraints, generally requiring a search through an exponentially large space of variable-value assignments. Models for disordered systems, as studied in statistical physics, can provide important new insights into the nature of constraint satisfaction problems. Recently, work in this area has resulted in the discovery of a new method for solving such problems, called the survey propagation (SP) method. With SP, we can solve problems with millions of variables and constraints, an improvement of two orders of magnitude over previous methods.
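Survey propagation itself is too involved to reproduce here, but the constraint-satisfaction setting it operates in can be illustrated with a minimal DPLL-style backtracking solver. This is our own sketch of the classical baseline that SP improves upon, not the SP method:

```python
def solve_sat(clauses, assignment=None):
    """Tiny DPLL-style SAT solver. Clauses are lists of nonzero ints: a
    positive literal v means variable v is true, -v means false. Returns a
    satisfying assignment (dict) or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    # simplify: drop satisfied clauses, remove falsified literals
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                       # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None                    # clause falsified: backtrack
        simplified.append(rest)
    if not simplified:
        return assignment                  # all clauses satisfied
    # unit propagation: a one-literal clause forces its variable's value
    unit = next((c[0] for c in simplified if len(c) == 1), None)
    lit = unit if unit is not None else simplified[0][0]
    for value in ([lit > 0] if unit is not None else [True, False]):
        result = solve_sat(simplified, {**assignment, abs(lit): value})
        if result is not None:
            return result
    return None
```

On hard random instances this exponential search is exactly what SP sidesteps by passing probabilistic messages between variables and clauses.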
Directory of Open Access Journals (Sweden)
Aline Braz de Lima
2015-03-01
Full Text Available This article describes some prevalent personality dimensions of recently diagnosed multiple sclerosis patients. A sample of 33 women recently diagnosed with relapsing-remitting multiple sclerosis (RRMS) was assessed with the NEO-FFI personality scale. The Beck depression (BDI) and anxiety (BAI) scales were also used. No significant levels of anxiety or depression were identified in this group. As for personality factors, conscientiousness was the most common factor found, whereas openness to experience was the least observed. Literature on the relationship between personality and MS is scarce and there are no Brazilian studies on this subject. Some personality traits might complicate or facilitate the experience of living with a chronic, disabling and uncertain neurological condition such as MS.
Energy Technology Data Exchange (ETDEWEB)
Bluet, J C [Commissariat a l' Energie Atomique, Cadarache (France)
1966-02-01
Three problems of multiple scattering arising from neutron cross-section experiments are reported here. The common hypotheses are: elastic scattering is the only possible process; angular distributions are isotropic; losses of particle energy are negligible in successive collisions. In the three cases practical results, corresponding to actual experiments, are given. Moreover, the results are shown in a more general way, using dimensionless variables such as the ratio of geometrical dimensions to the neutron mean free path. The FORTRAN codes are given together with the corresponding flow charts and lexicons of symbols. First problem: measurement of the sodium capture cross-section. A sodium sample of given geometry is submitted to a neutron flux. Induced activity is then measured by means of a sodium iodide crystal. The distribution of active nuclei in the sample and the counter efficiency are calculated by the Monte-Carlo method, taking multiple scattering into account. Second problem: absolute measurement of a neutron flux using a glass scintillator. The scintillator is a lithium-6 loaded glass, submitted to a neutron flux perpendicular to its plane faces. If the glass thickness is not negligible compared with the scattering mean free path λ, the mean path e' of neutrons in the glass differs from the thickness e. Monte-Carlo calculations are made to compute this path and a relative correction to the efficiency equal to (e' - e)/e. Third problem: study of a neutron collimator. A neutron detector is placed at the bottom of a cylinder surrounded with water. A neutron source is placed on the cylinder axis, in front of the water shield. The numbers of neutron tracks going directly and indirectly through the water from the source to the detector are counted. (author)
Velten, Andreas
2017-05-01
Light scattering is a primary obstacle to optical imaging in a variety of different environments and across many size and time scales. Scattering complicates imaging on large scales when imaging through the atmosphere from airborne or spaceborne platforms, through marine fog, or through fog and dust in vehicle navigation, for example in self-driving cars. On smaller scales, scattering is the major obstacle when imaging through human tissue in biomedical applications. Despite the large variety of participating materials and size scales, light transport in all these environments is usually described with very similar scattering models that are defined by the same small set of parameters, including scattering and absorption length and phase function. We attempt a study of scattering, and of methods of imaging through scattering, across different scales and media, particularly with respect to the use of time-of-flight information. We show that using time of flight, in addition to spatial information, provides distinct advantages in scattering environments. By performing a comparative study of scattering across scales and media, we are able to suggest scale models for scattering environments to aid lab research. We can also transfer knowledge and methodology between different fields.
Directory of Open Access Journals (Sweden)
Knežević Tatjana
2017-01-01
Full Text Available Introduction/Objective. Patient-reported outcomes have been recognized as an important way of assessing the health and well-being of patients with multiple sclerosis (MS). The aim of the study is to determine the correlation between different subscales of the Patient-Reported Impact of Spasticity Measure (PRISM) and the Multiple Sclerosis Spasticity Scale (MSSS-88) in the estimation of the influence of spasticity on different domains. Methods. The MSSS-88 and PRISM scales were analyzed in five domains (body-function domain, activity domain, participation domain, personal factors/wellbeing domain, and hypothesis). For statistical interpretation of the correlation we performed Spearman's ρ-test, concurrent validity, divergent validity, and a linear regression model. Results. We found a significant correlation between the subscales of the evaluated MSSS-88 and PRISM scales for the body domains; the highest correlation was between the need for assistance/positioning (NA/P) and walking (W). Spasticity has the weakest correlation with the need for intervention (NI). The presence of pain has a negative impact, with a significant positive correlation between pain discomfort and NI. In the body-function domain for males, there was a non-significant correlation between muscle spasms and NI. The same applies to the social functioning and social embarrassment domains, as well as to emotional health and psychological agitation in the personal factors/wellbeing domain. The differences between genders of MS patients persist across domains; muscle spasms are strong predictors for NI, and body movement is a strong predictor versus W for NA/P. Conclusion. The MSSS-88 and PRISM scales can be considered reliable in measuring different domains of disability for MS patients with spasticity. Because it is shorter, quicker, and simpler to use, it is concluded that the PRISM scale can successfully compete with and replace the MSSS-88 scale in
Nouri, Hamideh; Anderson, Sharolyn; Sutton, Paul; Beecham, Simon; Nagler, Pamela; Jarchow, Christopher J.; Roberts, Dar A.
2017-01-01
This research addresses the question of whether or not the Normalised Difference Vegetation Index (NDVI) is scale invariant (i.e. constant over spatial aggregation) for pure pixels of urban vegetation. It has long been recognized that there are issues related to the modifiable areal unit problem
Heyne, D. A.; Vreeke, L. J.; Maric, M.; Boelens, H.; Van Widenfelt, B. M.
2017-01-01
The School Refusal Assessment Scale (SRAS) was developed to identify four factors that might maintain a youth’s school attendance problem (SAP), and thus be targeted for treatment. There is still limited support for the four-factor model inherent to the SRAS and its revision (SRAS-R). Recent studies
Rongo, L.M.B.; Barten, F.J.M.H.; Msamanga, G.I.; Heederik, D.; Dolmans, W.M.V.
2004-01-01
BACKGROUND: Workers in informal small-scale industries (SSI) in developing countries involved in welding, spray painting, woodwork and metalwork are exposed to various hazards with consequent risk to health. AIM: To assess occupational exposure and health problems in SSI in Dar es Salaam, Tanzania.
Multiple Positive Symmetric Solutions to p-Laplacian Dynamic Equations on Time Scales
Directory of Open Access Journals (Sweden)
You-Hui Su
2009-01-01
two examples are given to illustrate the main results and their differences. These results are even new for the special cases of continuous and discrete equations, as well as in the general time-scale setting.
The multiple time scales of sleep dynamics as a challenge for modelling the sleeping brain.
Olbrich, Eckehard; Claussen, Jens Christian; Achermann, Peter
2011-10-13
A particular property of the sleeping brain is that it exhibits dynamics on very different time scales, ranging from the typical sleep oscillations such as sleep spindles and slow waves, observable in electroencephalogram (EEG) segments of several seconds duration, over the transitions between the different sleep stages on a time scale of minutes, to the dynamical processes involved in sleep regulation with typical time constants in the range of hours. There is an increasing body of work on mathematical and computational models addressing these different dynamics; however, they usually consider only processes on a single time scale. In this paper, we review and present a new analysis of the dynamics of human sleep EEG at the different time scales and relate the findings to recent modelling efforts, pointing out both the achievements and the remaining challenges.
Investigations of grain size dependent sediment transport phenomena on multiple scales
Thaxton, Christopher S.
Sediment transport processes in coastal and fluvial environments resulting from disturbances such as urbanization, mining, agriculture, military operations, and climatic change have significant impact on local, regional, and global environments. Primarily, these impacts include the erosion and deposition of sediment, channel network modification, reduction in downstream water quality, and the delivery of chemical contaminants. The scale and spatial distribution of these effects are largely attributable to the size distribution of the sediment grains that become eligible for transport. An improved understanding of advective and diffusive grain-size dependent sediment transport phenomena will lead to the development of more accurate predictive models and more effective control measures. To this end, three studies were performed that investigated grain-size dependent sediment transport on three different scales. Discrete particle computer simulations of sheet flow bedload transport on the scale of 0.1--100 millimeters were performed on a heterogeneous population of grains of various grain sizes. The relative transport rates and diffusivities of grains under both oscillatory and uniform, steady flow conditions were quantified. These findings suggest that boundary layer formalisms should describe surface roughness through a representative grain size that is functionally dependent on the applied flow parameters. On the scale of 1--10m, experiments were performed to quantify the hydrodynamics and sediment capture efficiency of various baffles installed in a sediment retention pond, a commonly used sedimentation control measure in watershed applications. Analysis indicates that an optimum sediment capture effectiveness may be achieved based on baffle permeability, pond geometry and flow rate. Finally, on the scale of 10--1,000m, a distributed, bivariate watershed terrain evolution module was developed within GRASS GIS. Simulation results for variable grain sizes and for
Cecala, Kristen K.; Maerz, John C.; Halstead, Brian J.; Frisch, John R.; Gragson, Ted L.; Hepinstall-Cymerman, Jeffrey; Leigh, David S.; Jackson, C. Rhett; Peterson, James T.; Pringle, Catherine M.
2018-01-01
Understanding how factors that vary in spatial scale relate to population abundance is vital to forecasting species responses to environmental change. Stream and river ecosystems are inherently hierarchical, potentially resulting in organismal responses to fine‐scale changes in patch characteristics that are conditional on the watershed context. Here, we address how populations of two salamander species are affected by interactions among hierarchical processes operating at different scales within a rapidly changing landscape of the southern Appalachian Mountains. We modeled reach‐level occupancy of larval and adult black‐bellied salamanders (Desmognathus quadramaculatus) and larval Blue Ridge two‐lined salamanders (Eurycea wilderae) as a function of 17 different terrestrial and aquatic predictor variables that varied in spatial extent. We found that salamander occurrence varied widely among streams within fully forested catchments, but also exhibited species‐specific responses to changes in local conditions. While D. quadramaculatus declined predictably in relation to losses in forest cover, larval occupancy exhibited the strongest negative response to forest loss as well as decreases in elevation. Conversely, occupancy of E. wilderae was unassociated with watershed conditions, only responding negatively to higher proportions of fast‐flowing stream habitat types. Evaluation of hierarchical relationships demonstrated that most fine‐scale variables were closely correlated with broad watershed‐scale variables, suggesting that local reach‐scale factors have relatively smaller effects within the context of the larger landscape. Our results imply that effective management of southern Appalachian stream salamanders must first focus on the larger scale condition of watersheds before management of local‐scale conditions should proceed. Our findings confirm the results of some studies while refuting the results of others, which may indicate that
Directory of Open Access Journals (Sweden)
Yingni Zhai
2014-10-01
Full Text Available Purpose: A decomposition heuristic based on multi-bottleneck machines for large-scale job shop scheduling problems (JSP) is proposed. Design/methodology/approach: In the algorithm, a number of sub-problems are constructed by iteratively decomposing the large-scale JSP according to the process route of each job. The solution of the large-scale JSP can then be obtained by iteratively solving the sub-problems. In order to improve the sub-problems' solving efficiency and the solution quality, a detection method for multi-bottleneck machines based on the critical path is proposed. Thereby the unscheduled operations can be decomposed into bottleneck operations and non-bottleneck operations. According to the principle that "the bottleneck leads the performance of the whole manufacturing system" in the Theory of Constraints (TOC), the bottleneck operations are scheduled by a genetic algorithm for high solution quality, and the non-bottleneck operations are scheduled by dispatching rules to improve solving efficiency. Findings: In the process of constructing the sub-problems, partial operations in the previously scheduled sub-problem are carried into the successive sub-problem for re-optimization. This strategy improves the solution quality of the algorithm. In the process of solving the sub-problems, evaluating a chromosome's fitness by predicting the global scheduling objective value likewise improves the solution quality. Research limitations/implications: In this research, some assumptions reduce the complexity of the large-scale scheduling problem: the processing route of each job is predetermined, the processing time of each operation is fixed, there is no machine breakdown, and no preemption of operations is allowed. These assumptions should be reconsidered if the algorithm is used in an actual job shop. Originality/value: The research provides an efficient scheduling method for the
DEFF Research Database (Denmark)
Quaglia, Alberto; Sarup, Bent; Sin, Gürkan
2013-01-01
The formulation of Enterprise-Wide Optimization (EWO) problems as mixed integer nonlinear programming requires collecting, consolidating and systematizing a large amount of data, coming from different sources and specific to different disciplines. In this manuscript, a generic and flexible data structure for efficient formulation of enterprise-wide optimization problems is presented. Through the integration of the described data structure in our synthesis and design framework, the problem formulation workflow is automated in a software tool, reducing the time and resources needed to formulate large problems, while ensuring at the same time data consistency and quality at the application stage.
Directory of Open Access Journals (Sweden)
Bruna E. M. Marangoni
2012-12-01
Full Text Available Gait impairment is reported by 85% of patients with multiple sclerosis (MS) as their main complaint. In 2003, Hobart et al. developed a scale for walking known as the 12-item Multiple Sclerosis Walking Scale (MSWS-12), which combines the perspectives of patients with psychometric methods. OBJECTIVE: This study aimed to cross-culturally adapt and validate the MSWS-12 for the Brazilian population with MS. METHODS: This study included 116 individuals diagnosed with MS in accordance with McDonald's criteria. The steps of the adaptation process included translation, back-translation, review by an expert committee and pretesting. A test and retest of the MSWS-12/BR was made for validation, with comparison against another scale (MSIS-29/BR) and another test (T25FW). RESULTS: The Brazilian version, MSWS-12/BR, was shown to be similar to the original. The results indicate that the MSWS-12/BR is a reliable and reproducible scale. CONCLUSIONS: The MSWS-12/BR has been adapted and validated, and it is a reliable tool for the Brazilian population.
Liu, Mei-bing; Chen, Xing-wei; Chen, Ying
2015-07-01
Identification of the critical source areas of non-point source pollution is an important means to control non-point source pollution within a watershed. In order to further reveal the impact of multiple time scales on the spatial differentiation characteristics of non-point source nitrogen loss, a SWAT model of the Shanmei Reservoir watershed was developed. Based on the simulated total nitrogen (TN) loss intensity of all 38 subbasins, the spatial distribution characteristics of nitrogen loss and the critical source areas were analyzed at three time scales: yearly average, monthly average and rainstorm flood process. Furthermore, multiple linear correlation analysis was conducted to analyze the contributions of the natural environment and anthropogenic disturbance to nitrogen loss. The results showed that there were significant spatial differences in TN loss in the Shanmei Reservoir watershed at different time scales, and the degree of spatial differentiation of nitrogen loss was in the order monthly average > yearly average > rainstorm flood process. TN loss load mainly came from the upland Taoxi subbasin, which was identified as the critical source area. At different time scales, land use types (such as farmland and forest) were always the dominant factor affecting the spatial distribution of nitrogen loss, while precipitation and runoff affected nitrogen loss only in months without fertilization and in several storm flood processes on dates without fertilization. This was mainly due to the significant spatial variation of land use and fertilization, as well as the low spatial variability of precipitation and runoff.
Torney, Colin J; Hopcraft, J Grant C; Morrison, Thomas A; Couzin, Iain D; Levin, Simon A
2018-05-19
A central question in ecology is how to link processes that occur over different scales. The daily interactions of individual organisms ultimately determine community dynamics, population fluctuations and the functioning of entire ecosystems. Observations of these multiscale ecological processes are constrained by various technological, biological or logistical issues, and there are often vast discrepancies between the scale at which observation is possible and the scale of the question of interest. Animal movement is characterized by processes that act over multiple spatial and temporal scales. Second-by-second decisions accumulate to produce annual movement patterns. Individuals influence, and are influenced by, collective movement decisions, which then govern the spatial distribution of populations and the connectivity of meta-populations. While the field of movement ecology is experiencing unprecedented growth in the availability of movement data, there remain challenges in integrating observations with questions of ecological interest. In this article, we present the major challenges of addressing these issues within the context of the Serengeti wildebeest migration, a keystone ecological phenomenon that crosses multiple scales of space, time and biological complexity. This article is part of the theme issue 'Collective movement ecology'. © 2018 The Author(s).
Evidence of self-affine multiplicity scaling of charged-particle ...
Indian Academy of Sciences (India)
In the past few years many workers reported on large density fluctuations in different interacting systems [6–12]. Several theoretical interpretations of the origin of large .... of effects with this parameter, already observed for the case of shower multiplicity. ... properties may be different for different regions of the system.
Qualification of new design of flexible pipe against singing: testing at multiple scales
Golliard, J.; Lunde, K.; Vijlbrief, O.
2016-01-01
Flexible pipes for production of oil and gas typically present a corrugated inner surface. This has been identified as the cause of "singing risers": Flow-Induced Pulsations due to the interaction of sound waves with the shear layers at the small cavities present at each of the multiple
Hierarchical Parallel Matrix Multiplication on Large-Scale Distributed Memory Platforms
Quintin, Jean-Noel; Hasanov, Khalid; Lastovetsky, Alexey
2013-10-01
Matrix multiplication is a very important computation kernel, both in its own right as a building block of many scientific applications and as a popular representative for other scientific applications. Cannon's algorithm, which dates back to 1969, was the first efficient algorithm for parallel matrix multiplication providing theoretically optimal communication cost. However, this algorithm requires a square number of processors. In the mid-1990s, the SUMMA algorithm was introduced. SUMMA overcomes the shortcomings of Cannon's algorithm, as it can be used on a nonsquare number of processors as well. Since then, the number of processors in HPC platforms has increased by two orders of magnitude, making the contribution of communication to the overall execution time more significant. Therefore, state-of-the-art parallel matrix multiplication algorithms should be revisited to reduce the communication cost further. This paper introduces a new parallel matrix multiplication algorithm, Hierarchical SUMMA (HSUMMA), which is a redesign of SUMMA. Our algorithm reduces the communication cost of SUMMA by introducing a two-level virtual hierarchy into the two-dimensional arrangement of processors. Experiments on an IBM BlueGene/P demonstrate the reduction of communication cost up to 2.08 times on 2048 cores and up to 5.89 times on 16384 cores. © 2013 IEEE.
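The communication pattern SUMMA builds on (and which HSUMMA reorganizes into a two-level hierarchy) is a sequence of panel broadcasts, each contributing one rank-k outer-product update. The following serial sketch illustrates that accumulation structure only; it is not the paper's MPI implementation:

```python
import numpy as np

def summa_serial(A, B, panel=2):
    """Serial sketch of SUMMA's panel-wise outer-product structure.

    In the parallel algorithm, each step broadcasts a column panel of A
    along processor rows and a row panel of B along processor columns;
    here we simply accumulate the same rank-`panel` updates serially."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n))
    for s in range(0, k, panel):
        e = min(s + panel, k)
        C += A[:, s:e] @ B[s:e, :]  # one broadcast step's contribution
    return C
```

In the parallel setting, the cost of these broadcasts dominates at scale, which is what HSUMMA's hierarchical grouping of processors targets.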
Novakovic, A.M.; Krekels, E.H.; Munafo, A.; Ueckert, S.; Karlsson, M.O.
2016-01-01
In this study, we report the development of the first item response theory (IRT) model within a pharmacometrics framework to characterize the disease progression in multiple sclerosis (MS), as measured by Expanded Disability Status Score (EDSS). Data were collected quarterly from a 96-week phase III
Feasibility of large-scale deployment of multiple wearable sensors in Parkinson's disease
Silva de Lima, A.L.; Hahn, T.; Evers, L.J.W.; Vries, N.M. de; Cohen, E.; Afek, M.; Bataille, L.; Daeschler, M.; Claes, K.; Boroojerdi, B.; Terricabras, D.; Little, M.A.; Baldus, H.; Bloem, B.R.; Faber, M.J.
2017-01-01
Wearable devices can capture objective day-to-day data about Parkinson's Disease (PD). This study aims to assess the feasibility of implementing wearable technology to collect data from multiple sensors during the daily lives of PD patients. The Parkinson@home study is an observational, two-cohort
Study of fission time scale from measurement of pre-scission light particle and γ-ray multiplicities
International Nuclear Information System (INIS)
Ramachandran, K.; Chatterjee, A.; Navin, A.
2014-01-01
This work presents the results of a simultaneous measurement of pre-scission multiplicities and an analysis using the statistical model code JOANNE2, which includes deformation effects. Evaporation residue cross-sections have also been measured for the same system and analyzed in a consistent manner. The neutron, charged-particle, GDR γ-ray and ER data could be explained consistently. The emission of neutrons seems to be favored towards larger deformation as compared to charged particles. The pre-scission time scale is deduced as 0-2 x 10^-21 s, whereas the saddle-to-scission time scale is 36-39 x 10^-21 s. The total fission time scale is deduced as 36-41 x 10^-21 s.
Monroe, Alison
2015-12-01
Observing populations at different spatial scales gives greater insight into the specific processes driving genetic differentiation and population structure. Here we determined population connectivity across multiple spatial scales in the Red Sea to determine the population structures of two reef-building corals, Stylophora pistillata and Pocillopora verrucosa. The Red Sea is a 2,250 km long body of water with extremely variable latitudinal environmental gradients. Mitochondrial and microsatellite markers were used to determine distinct lineages and to look for genetic differentiation among sampling sites. No distinctive population structure across the latitudinal gradient was discovered in this study, suggesting phenotypic plasticity of both these species to various environments. Stylophora pistillata displayed a heterogeneous distribution of three distinct genetic populations on both a fine and a large scale. Fst, Gst, and Dest were all significant (p-value < 0.05) and showed moderate genetic differentiation between all sampling sites. However, this seems to be a byproduct of the heterogeneous distribution, as no distinct genetic population breaks were found. Stylophora pistillata showed greater population structure on a fine scale, suggesting genetic selection based on fine-scale environmental variations. However, further environmental and oceanographic data are needed to make more inferences on this structure at small spatial scales. This study highlights the deficits of knowledge of both the Red Sea and coral plasticity in regards to local environmental conditions.
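The Fst statistic reported above measures the share of genetic variance attributable to differences between subpopulations. For a single biallelic locus it can be sketched from subpopulation allele frequencies as follows; this is the textbook definition assuming equal subpopulation sizes, not the exact estimator used in the study:

```python
def fst_biallelic(subpop_freqs):
    """Wright's F_ST for one biallelic locus, computed from the allele
    frequencies of each subpopulation (equal sizes assumed)."""
    n = len(subpop_freqs)
    p_bar = sum(subpop_freqs) / n           # mean allele frequency
    h_t = 2 * p_bar * (1 - p_bar)           # total expected heterozygosity
    h_s = sum(2 * p * (1 - p) for p in subpop_freqs) / n  # mean within-subpop
    return (h_t - h_s) / h_t if h_t > 0 else 0.0
```

Fst near 0 (identical subpopulation frequencies) indicates panmixia, as largely observed here, while Fst near 1 indicates complete differentiation.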
Directory of Open Access Journals (Sweden)
Tennant Alan
2010-02-01
Background Fatigue is a common and debilitating symptom in multiple sclerosis (MS). Best-practice guidelines suggest that health services should repeatedly assess fatigue in persons with MS. Several fatigue scales are available, but concern has been expressed about their validity. The objective of this study was to examine the reliability and validity of a new scale for MS fatigue, the Neurological Fatigue Index (NFI-MS). Methods Qualitative analysis of 40 MS patient interviews had previously contributed to a coherent definition of fatigue, and a potential 52-item set representing the salient themes. A draft questionnaire was mailed out to 1223 people with MS, and the resulting data subjected to both factor and Rasch analysis. Results Data from 635 respondents (51.9% response rate) were split randomly into an 'evaluation' and a 'validation' sample. Exploratory factor analysis identified four potential subscales: 'physical', 'cognitive', 'relief by diurnal sleep or rest' and 'abnormal nocturnal sleep and sleepiness'. Rasch analysis led to further item reduction and the generation of a Summary scale comprising items from the Physical and Cognitive subscales. The scales were shown to fit Rasch model expectations across both the evaluation and validation samples. Conclusion A simple 10-item Summary scale, together with scales measuring the physical and cognitive components of fatigue, was validated for MS fatigue.
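Rasch analysis, used above for item reduction, models the probability that a person endorses an item as a logistic function of the difference between the person's latent trait level and the item's difficulty. A minimal sketch of the dichotomous case follows (the NFI-MS items themselves may use a polytomous variant):

```python
import math

def rasch_prob(theta, b):
    """Dichotomous Rasch model: probability that a person with latent
    trait level theta (here, fatigue) endorses an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

Fit to Rasch model expectations, as checked in the study, means the observed response patterns are consistent with this single-parameter-per-item structure.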
Directory of Open Access Journals (Sweden)
Tang Xiaofeng
2014-01-01
The paper presents three time-warning distances for the safe driving of multiple groups of vehicles in a highway tunnel environment, based on a distributed model predictive control approach. The system includes two parts. First, the vehicles are divided into multiple groups, and the distributed model predictive control approach is used to calculate the information framework of each group. The optimization of each group considers both local optimization and the optimization characteristics of neighboring subgroups, which ensures global optimization performance. Second, the three time-warning distances are studied based on the basic principles used for highway intelligent space (HIS), and the information framework concept is proposed for the multiple groups of vehicles. A mathematical model is built to avoid chain collisions of vehicles. The results demonstrate that the proposed highway intelligent space method can effectively ensure the driving safety of multiple groups of vehicles under fog, rain, or snow.
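A warning distance of the kind studied above is typically built from the distance traveled during the driver's (or system's) reaction time plus the braking distance. The following is a generic kinematic sketch, not the paper's exact three-distance model:

```python
def warning_distance(v, t_react, a_max):
    """Generic stopping-distance sketch: reaction-time travel plus
    braking distance at constant deceleration. Not the paper's model.

    v       -- vehicle speed, m/s
    t_react -- reaction time, s
    a_max   -- maximum comfortable deceleration, m/s^2"""
    return v * t_react + v * v / (2.0 * a_max)
```

In adverse weather (fog, rain, snow) the usable deceleration a_max drops, which lengthens the required warning distance; this motivates weather-dependent thresholds.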
Petersen, Isaac T.; Lindhiem, Oliver; LeBeau, Brandon; Bates, John E.; Pettit, Gregory S.; Lansford, Jennifer E.; Dodge, Kenneth A.
2018-01-01
Manifestations of internalizing problems, such as specific symptoms of anxiety and depression, can change across development, even if individuals show strong continuity in rank-order levels of internalizing problems. This illustrates the concept of heterotypic continuity, and raises the question of whether common measures might be construct-valid…
Enhancement of a model for Large-scale Airline Network Planning Problems
Kölker, K.; Lopes dos Santos, Bruno F.; Lütjens, K.
2016-01-01
The main focus of this study is to solve the network planning problem based on passenger decision criteria including the preferred departure time and travel time for a real-sized airline network. For this purpose, a model of the integrated network planning problem is formulated including scheduling
Goverover, Y; Sandroff, B M; DeLuca, J
2018-04-01
To (1) examine and compare dual-task performance in patients with multiple sclerosis (MS) and healthy controls (HCs) using mathematical problem-solving questions that included an everyday competence component while performing an upper extremity fine motor task; and (2) examine whether difficulties in dual-task performance are associated with problems in performing an everyday internet task. Pilot study with a mixed design including both within- and between-subjects factors. A nonprofit rehabilitation research institution and the community. Participants (N=38) included persons with MS (n=19) and HCs (n=19) who were recruited from a nonprofit rehabilitation research institution and from the community. Not applicable. Participants were presented with 2 testing conditions: (1) solving mathematical everyday problems or placing bolts into divots (single-task condition); and (2) solving problems while putting bolts into divots (dual-task condition). Additionally, participants were required to perform a test of everyday internet competence. As expected, dual-task performance was significantly worse than performance on either single task (ie, number of bolts placed into divots, number of correct answers, and time to answer the questions). Cognitive, but not motor, dual-task cost was associated with worse performance on everyday internet tasks. Cognitive dual-task cost is significantly associated with worse performance of everyday technology tasks; this was not observed for the motor dual-task cost. The implications of dual-task costs for everyday activity are discussed. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
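Dual-task cost, the key quantity above, is commonly computed as the relative performance change from the single-task to the dual-task condition. The sketch below follows that common convention, which is not necessarily the study's exact formula:

```python
def dual_task_cost(single, dual, higher_is_better=True):
    """Percent dual-task cost: relative performance decline from the
    single-task to the dual-task condition (a common convention)."""
    if higher_is_better:  # e.g. bolts placed, correct answers
        return 100.0 * (single - dual) / single
    # for measures where lower is better, e.g. completion time
    return 100.0 * (dual - single) / single
```

For example, placing 20 bolts alone but only 15 while solving problems yields a 25% motor dual-task cost.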
Age-related changes in the plasticity and toughness of human cortical bone at multiple length-scales
Energy Technology Data Exchange (ETDEWEB)
Zimmermann, Elizabeth A.; Schaible, Eric; Bale, Hrishikesh; Barth, Holly D.; Tang, Simon Y.; Reichert, Peter; Busse, Bjoern; Alliston, Tamara; Ager III, Joel W.; Ritchie, Robert O.
2011-08-10
The structure of human cortical bone evolves over multiple length-scales from its basic constituents of collagen and hydroxyapatite at the nanoscale to osteonal structures at near-millimeter dimensions, which all provide the basis for its mechanical properties. To resist fracture, bone's toughness is derived intrinsically through plasticity (e.g., fibrillar sliding) at structural scales typically below a micron and extrinsically (i.e., during crack growth) through mechanisms (e.g., crack deflection/bridging) generated at larger structural scales. Biological factors such as aging lead to a markedly increased fracture risk, which is often associated with an age-related loss in bone mass (bone quantity). However, we find that age-related structural changes can significantly degrade the fracture resistance (bone quality) over multiple length-scales. Using in situ small-/wide-angle x-ray scattering/diffraction to characterize sub-micron structural changes, and synchrotron x-ray computed tomography and in situ fracture-toughness measurements in the scanning electron microscope to characterize effects at micron-scales, we show how these age-related structural changes at differing size-scales degrade both the intrinsic and extrinsic toughness of bone. Specifically, we attribute the loss in toughness to increased non-enzymatic collagen cross-linking, which suppresses plasticity at nanoscale dimensions, and to an increased osteonal density, which limits the potency of crack-bridging mechanisms at micron-scales. The link between these processes is that the increased stiffness of the cross-linked collagen requires energy to be absorbed by "plastic" deformation at higher structural levels, which occurs by the process of microcracking.