WorldWideScience

Sample records for optimal temporal scale

  1. Optimal plant water use across temporal scales: bridging eco-hydrological theories and plant eco-physiological responses

    Manzoni, S.; Vico, G.; Palmroth, S.; Katul, G. G.; Porporato, A. M.

    2013-12-01

    In terrestrial ecosystems, plant photosynthesis occurs at the expense of water losses through stomata, thus creating an inherent hydrologic constraint to carbon (C) gains and productivity. While such a constraint cannot be overcome, evolution has led to a number of adaptations that allow plants to thrive under highly variable and often limiting water availability. It may be hypothesized that these adaptations are optimal and allow maximum C gain for a given water availability. A corollary hypothesis is that these adaptations manifest themselves as coordination between the leaf photosynthetic machinery and the plant hydraulic system. This coordination leads to functional relations between the mean hydrologic state, plant hydraulic traits, and photosynthetic parameters that can be used as a bridge across temporal scales. Here, optimality theories describing the behavior of stomata and plant morphological features in a fluctuating soil moisture environment are proposed. The overarching goal is to explain observed global patterns of plant water use and their ecological and biogeochemical consequences. The problem is initially framed as an optimal control problem of stomatal closure during a drought of given duration, where maximizing the total photosynthesis under limited and diminishing water availability is the objective function. Analytical solutions show that commonly used transpiration models (in which stomatal conductance is assumed to depend on soil moisture) are particular solutions emerging from the optimal control problem. Relations between stomatal conductance, vapor pressure deficit, and atmospheric CO2 are also obtained without any a priori assumptions under this framework. Second, the temporal scales of the model are expanded by explicitly considering the stochasticity of rainfall. In this context, the optimal control problem becomes a maximization problem for the mean photosynthetic rate. Results show that to achieve maximum C gains under these
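
    The optimal control problem sketched here is closely related to the classical marginal water-use-efficiency result; the following is a minimal sketch assuming the standard Cowan-Farquhar formulation (symbols ours, not necessarily the exact model of the paper):

```latex
% Maximize cumulative assimilation A over a dry-down of duration T,
% subject to an initial water store W_0 depleted by transpiration E:
\max_{g(t)} \int_0^T A\bigl(g(t)\bigr)\,\mathrm{d}t
\quad \text{s.t.} \quad \int_0^T E\bigl(g(t)\bigr)\,\mathrm{d}t \le W_0 .
% Along the optimal trajectory the marginal water-use efficiency is constant:
\frac{\partial A/\partial g}{\partial E/\partial g} = \lambda .
```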

  2. Optimizing Temporal Queries

    Toman, David; Bowman, Ivan Thomas

    2003-01-01

    Recent research in the area of temporal databases has proposed a number of query languages that vary in their expressive power and the semantics they provide to users. These query languages represent a spectrum of solutions to the tension between clean semantics and efficient evaluation. Often, t...

  3. A Novel Spatial-Temporal Voronoi Diagram-Based Heuristic Approach for Large-Scale Vehicle Routing Optimization with Time Constraints

    Wei Tu

    2015-10-01

    Vehicle routing optimization (VRO) designs the best routes to reduce travel cost, energy consumption, and carbon emission. Due to their non-deterministic polynomial-time hard (NP-hard) complexity, many VROs involved in real-world applications require too much computing effort. Shortening computing time for VRO is a great challenge for state-of-the-art spatial optimization algorithms. From a spatial-temporal perspective, this paper presents a spatial-temporal Voronoi diagram-based heuristic approach for large-scale vehicle routing problems with time windows (VRPTW). Considering time constraints, a spatial-temporal Voronoi distance is derived from the spatial-temporal Voronoi diagram to find near neighbors in the space-time searching context. A Voronoi distance decay strategy that integrates a time warp operation is proposed to accelerate local search procedures. A spatial-temporal feature-guided search is developed to improve unpromising micro route structures. Experiments on VRPTW benchmarks and real-world instances are conducted to verify performance. The results demonstrate that the proposed approach is competitive with state-of-the-art heuristics and achieves high-quality solutions for large-scale instances of VRPTWs in a short time. This novel approach will contribute to the spatial decision support community by developing an effective vehicle routing optimization method for large transportation applications in both the public and private sectors.
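
    The abstract does not give the exact form of the spatial-temporal Voronoi distance; the sketch below shows one plausible way to combine spatial separation with time-window compatibility when ranking near neighbors. The weighting `alpha` and the penalty form are illustrative assumptions, not the authors' definition.

```python
import math

def spatio_temporal_distance(ci, cj, alpha=1.0):
    """Illustrative space-time proximity between two customers.

    ci, cj: dicts with keys x, y (coordinates) and ready, due (time window);
    alpha:  assumed weight converting time-window incompatibility into distance units.
    """
    spatial = math.hypot(ci["x"] - cj["x"], ci["y"] - cj["y"])
    # Time-window penalty: zero if the windows overlap, otherwise the gap between them.
    gap = max(ci["ready"] - cj["due"], cj["ready"] - ci["due"], 0.0)
    return spatial + alpha * gap

a = {"x": 0.0, "y": 0.0, "ready": 8.0, "due": 10.0}
b = {"x": 3.0, "y": 4.0, "ready": 9.5, "due": 12.0}
print(spatio_temporal_distance(a, b))  # 5.0: overlapping windows, only spatial distance counts
```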

  4. Temporal scaling in information propagation

    Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi

    2014-06-01

    For the study of information propagation, one fundamental problem is uncovering the universal laws governing the dynamics of information propagation. From the microscopic perspective, this problem is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of the information and the interactions between individuals. Although the temporal effect of attractiveness is widely studied, the temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using a dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability that a message propagates between two individuals decays with the latency since their latest interaction, obeying a power-law rule. Leveraging this scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with a 9.7% gain in incremental customers.
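
    The reported decay can be written compactly as a power law (a sketch; the exponent symbol is ours, not from the paper):

```latex
% Propagation probability between two individuals as a function of the
% latency tau since their latest interaction:
p(\tau) \;\propto\; \tau^{-\alpha}, \qquad \alpha > 0 .
```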

  5. Optimization of temporal networks under uncertainty

    Wiesemann, Wolfram

    2012-01-01

    Many decision problems in Operations Research are defined on temporal networks, that is, workflows of time-consuming tasks whose processing order is constrained by precedence relations. For example, temporal networks are used to model projects, computer applications, digital circuits and production processes. Optimization problems arise in temporal networks when a decision maker wishes to determine a temporal arrangement of the tasks and/or a resource assignment that optimizes some network characteristic (e.g. the time required to complete all tasks). The parameters of these optimization probl

  6. Identifying Optimal Temporal Scale for the Correlation of AOD and Ground Measurements of PM2.5 to Improve the Model Performance in a Real-time Air Quality Estimation System

    Li, Hui; Faruque, Fazlay; Williams, Worth; Al-Hamdan, Mohammad; Luvall, Jeffrey C.; Crosson, William; Rickman, Douglas; Limaye, Ashutosh

    2009-01-01

    Aerosol optical depth (AOD), an indirect estimate of particulate matter from satellite observations, has shown great promise in improving estimates of the PM2.5 air quality surface. Currently, few studies have explored the optimal way to apply AOD data to improve the accuracy of PM2.5 surface estimation in a real-time air quality system. We believe that two major aspects merit consideration: 1) the approach to integrating satellite measurements with ground measurements in the pollution estimation, and 2) identification of an optimal temporal scale for calculating the correlation of AOD and ground measurements. This paper focuses on the second aspect: identifying the optimal temporal scale at which to correlate AOD with PM2.5. The following five temporal scales were chosen to evaluate their impact on model performance: 1) within the last 3 days, 2) within the last 10 days, 3) within the last 30 days, 4) within the last 90 days, and 5) the time period with the highest correlation in a year. The model performance is evaluated for accuracy, bias, and errors using the following statistics: the Mean Bias, the Normalized Mean Bias, the Root Mean Square Error, the Normalized Mean Error, and the Index of Agreement. This research shows that the model using the temporal scale of the last 30 days displays the best performance in this study area for the 2004 and 2005 data sets.
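
    A minimal sketch of how the trailing-window correlations compared in this record could be computed, assuming aligned daily series of AOD and ground PM2.5 (the DataFrame layout and column names are ours, not the authors'):

```python
import pandas as pd

def windowed_correlations(df, windows=(3, 10, 30, 90)):
    """Pearson correlation of AOD against ground PM2.5 over trailing windows.

    df: DataFrame indexed by date with columns 'aod' and 'pm25' (assumed names).
    Returns {window_length_in_days: correlation over the most recent window}.
    """
    return {w: df.tail(w)["aod"].corr(df.tail(w)["pm25"]) for w in windows}

# The window with the strongest correlation would then be used to calibrate the
# AOD-to-PM2.5 relationship for the current estimation cycle.
```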

  7. Tractable Pareto Optimization of Temporal Preferences

    Morris, Robert; Morris, Paul; Khatib, Lina; Venable, Brent

    2003-01-01

    This paper focuses on temporal constraint problems where the objective is to optimize a set of local preferences for when events occur. In previous work, a subclass of these problems has been formalized as a generalization of Temporal CSPs, and a tractable strategy for optimization has been proposed, where global optimality is defined as maximizing the minimum of the component preference values. This criterion for optimality, which we call 'Weakest Link Optimization' (WLO), is known to have limited practical usefulness because solutions are compared only on the basis of their worst value; thus, there is no requirement to improve the other values. To address this limitation, we introduce a new algorithm that re-applies WLO iteratively in a way that leads to improvement of all the values. We show the value of this strategy by proving that, with suitable preference functions, the resulting solutions are Pareto Optimal.
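
    A schematic sketch of the iterative re-application of WLO described above; the inner solver and data structures are placeholders (the actual algorithm operates on Temporal CSPs with preference functions):

```python
def iterative_wlo(solve_wlo, components):
    """Schematic iterative Weakest Link Optimization.

    solve_wlo(free): assumed callable returning (assignment, values), where
                     `values` maps each component to its preference level and the
                     assignment maximizes the minimum preference over `free`.
    components:      identifiers of the local preference components.
    """
    fixed, free, assignment = {}, set(components), None
    while free:
        assignment, values = solve_wlo(free)
        worst = min(free, key=lambda c: values[c])
        fixed[worst] = values[worst]   # lock the weakest link at its maximin level
        free.remove(worst)             # then re-optimize the remaining components
    return assignment, fixed
```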

  8. Optimal Temporal Policies in Fluid Milk Advertising

    Vande Kamp, Philip R.; Kaiser, Harry M.

    1998-01-01

    This study develops an approach to obtain optimal temporal advertising strategies when consumers' response to advertising is asymmetric. Using this approach, optimal strategies for generic fluid milk advertising in New York City are determined. Results indicate that pulsed advertising policies are significantly more effective in increasing demand than a uniform advertising policy. Sensitivity analyses show that the optimal advertising policies are insensitive to reasonable variations in inter...

  9. Optimizing Temporal Queries: Efficient Handling of Duplicates

    Toman, David; Bowman, Ivan Thomas

    2001-01-01

    , these query languages are implemented by translating temporal queries into standard relational queries. However, the compiled queries are often quite cumbersome and expensive to execute even using state-of-the- art relational products. This paper presents an optimization technique that produces more efficient...... translated SQL queries by taking into account the properties of the encoding used for temporal attributes. For concreteness, this translation technique is presented in the context of SQL/TP; however, these techniques are also applicable to other temporal query languages....

  10. Temporal Optimization Planning for Fleet Repositioning

    Tierney, Kevin; Jensen, Rune Møller

    2011-01-01

    Fleet repositioning problems pose a high financial burden on shipping firms, but have received little attention in the literature, despite their high importance to the shipping industry. Fleet repositioning problems are characterized by chains of interacting activities, but state-of-the-art planning and scheduling techniques do not offer cost models that are rich enough to represent essential objectives of these problems. To this end, we introduce a novel framework called Temporal Optimization Planning (TOP). TOP uses partial order planning to build optimization models associated...

  11. The temporal scaling of Caenorhabditis elegans ageing.

    Stroustrup, Nicholas; Anthony, Winston E; Nash, Zachary M; Gowda, Vivek; Gomez, Adam; López-Moyado, Isaac F; Apfeld, Javier; Fontana, Walter

    2016-02-04

    The process of ageing makes death increasingly likely, involving a random aspect that produces a wide distribution of lifespan even in homogeneous populations. The study of this stochastic behaviour may link molecular mechanisms to the ageing process that determines lifespan. Here, by collecting high-precision mortality statistics from large populations, we observe that interventions as diverse as changes in diet, temperature, exposure to oxidative stress, and disruption of genes including the heat shock factor hsf-1, the hypoxia-inducible factor hif-1, and the insulin/IGF-1 pathway components daf-2, age-1, and daf-16 all alter lifespan distributions by an apparent stretching or shrinking of time. To produce such temporal scaling, each intervention must alter to the same extent throughout adult life all physiological determinants of the risk of death. Organismic ageing in Caenorhabditis elegans therefore appears to involve aspects of physiology that respond in concert to a diverse set of interventions. In this way, temporal scaling identifies a novel state variable, r(t), that governs the risk of death and whose average decay dynamics involves a single effective rate constant of ageing, kr. Interventions that produce temporal scaling influence lifespan exclusively by altering kr. Such interventions, when applied transiently even in early adulthood, temporarily alter kr with an attendant transient increase or decrease in the rate of change in r and a permanent effect on remaining lifespan. The existence of an organismal ageing dynamics that is invariant across genetic and environmental contexts provides the basis for a new, quantitative framework for evaluating the manner and extent to which specific molecular processes contribute to the aspect of ageing that determines lifespan.
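
    The "stretching or shrinking of time" described here amounts to a scaling relation between survival curves; a minimal sketch (notation ours):

```latex
% An intervention rescales time by a factor lambda, leaving the shape of the
% lifespan (survival) distribution unchanged:
S_{\text{intervention}}(t) \;=\; S_{\text{reference}}\!\left(t/\lambda\right),
% with interventions acting on lifespan only through the effective rate
% constant k_r of the risk-governing state variable r(t).
```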

  12. The temporal scaling of Caenorhabditis elegans ageing

    Stroustrup, Nicholas; Anthony, Winston E.; Nash, Zachary M.; Gowda, Vivek; Gomez, Adam; López-Moyado, Isaac F.; Apfeld, Javier; Fontana, Walter

    2016-02-01

    The process of ageing makes death increasingly likely, involving a random aspect that produces a wide distribution of lifespan even in homogeneous populations. The study of this stochastic behaviour may link molecular mechanisms to the ageing process that determines lifespan. Here, by collecting high-precision mortality statistics from large populations, we observe that interventions as diverse as changes in diet, temperature, exposure to oxidative stress, and disruption of genes including the heat shock factor hsf-1, the hypoxia-inducible factor hif-1, and the insulin/IGF-1 pathway components daf-2, age-1, and daf-16 all alter lifespan distributions by an apparent stretching or shrinking of time. To produce such temporal scaling, each intervention must alter to the same extent throughout adult life all physiological determinants of the risk of death. Organismic ageing in Caenorhabditis elegans therefore appears to involve aspects of physiology that respond in concert to a diverse set of interventions. In this way, temporal scaling identifies a novel state variable, r(t), that governs the risk of death and whose average decay dynamics involves a single effective rate constant of ageing, kr. Interventions that produce temporal scaling influence lifespan exclusively by altering kr. Such interventions, when applied transiently even in early adulthood, temporarily alter kr with an attendant transient increase or decrease in the rate of change in r and a permanent effect on remaining lifespan. The existence of an organismal ageing dynamics that is invariant across genetic and environmental contexts provides the basis for a new, quantitative framework for evaluating the manner and extent to which specific molecular processes contribute to the aspect of ageing that determines lifespan.

  13. Conference on Large Scale Optimization

    Hearn, D; Pardalos, P

    1994-01-01

    On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT&T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abro...

  14. Spatio-temporal scaling of channels in braided streams.

    A.G. Hunt; G.E. Grant; V.K. Gupta

    2006-01-01

    The spatio-temporal scaling relationship for individual channels in braided streams is shown to be identical to the spatio-temporal scaling associated with constant Froude number, e.g., Fr = 1. A means to derive this relationship is developed from a new theory of sediment transport. The mechanism by which the Fr = 1 condition apparently governs the scaling seems to...
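
    As a minimal worked form of the constant-Froude-number scaling referred to above (standard Froude similarity under an assumed geometric similarity; a sketch, not the authors' derivation):

```latex
Fr = \frac{v}{\sqrt{g\,d}} = 1
\;\;\Rightarrow\;\; v \propto L^{1/2}
\;\;\Rightarrow\;\; T \sim \frac{L}{v} \propto L^{1/2},
```

    i.e. characteristic times grow as the square root of the length scale.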

  15. Optimal renormalization scales and commensurate scale relations

    Brodsky, S.J.; Lu, H.J.

    1996-01-01

    Commensurate scale relations relate observables to observables and thus are independent of theoretical conventions, such as the choice of intermediate renormalization scheme. The physical quantities are related at commensurate scales which satisfy a transitivity rule which ensures that predictions are independent of the choice of an intermediate renormalization scheme. QCD can thus be tested in a new and precise way by checking that the observables track both in their relative normalization and in their commensurate scale dependence. For example, the radiative corrections to the Bjorken sum rule at a given momentum transfer Q can be predicted from measurements of the e+e- annihilation cross section at a corresponding commensurate energy scale √s ∝ Q, thus generalizing Crewther's relation to non-conformal QCD. The coefficients that appear in this perturbative expansion take the form of a simple geometric series and thus have no renormalon divergent behavior. The authors also discuss scale-fixed relations between the threshold corrections to the heavy quark production cross section in e+e- annihilation and the heavy quark coupling αV, which is measurable in lattice gauge theory

  16. Allometric and temporal scaling of movement characteristics in Galapagos tortoises

    Bastille-Rousseau, Guillaume; Yackulic, Charles B.; Frair, Jacqueline L.; Cabrera, Freddy; Blake, Stephen

    2016-01-01

    Understanding how individual movement scales with body size is of fundamental importance in predicting ecological relationships for diverse species. One-dimensional movement metrics scale consistently with body size yet vary over different temporal scales. Knowing how temporal scale influences the relationship between animal body size and movement would better inform hypotheses about the efficiency of foraging behaviour, the ontogeny of energy budgets, and numerous life-history trade-offs. We investigated how the temporal scaling of allometric patterns in movement varies over the course of a year, specifically during periods of motivated (directional and fast movement) and unmotivated (stationary and tortuous movement) behaviour. We focused on a recently diverged group of species that displays wide variation in movement behaviour – giant Galapagos tortoises (Chelonoidis spp.) – to test how movement metrics estimated on a monthly basis scaled with body size. We used state-space modelling to estimate seven different movement metrics of Galapagos tortoises. We used log-log regression of the power law to evaluate allometric scaling for these movement metrics and contrasted relationships by species and sex. Allometric scaling of movement was more apparent during motivated periods of movement. During this period, allometry was revealed at multiple temporal intervals (hourly, daily and monthly), with values observed at daily and monthly intervals corresponding most closely to the expected one-fourth scaling coefficient, albeit with wide credible intervals. We further detected differences in the magnitude of scaling among taxa uncoupled from observed differences in the temporal structuring of their movement rates. Our results indicate that the definition of temporal scales is fundamental to the detection of allometry of movement and should be given more attention in movement studies. Our approach not only provides new conceptual insights into temporal attributes in one
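
    The log-log regression of the power law mentioned above takes the standard allometric form (a sketch; symbols ours):

```latex
% Allometric scaling of a movement metric v with body mass M:
v = a\,M^{b}
\quad\Longleftrightarrow\quad
\log v = \log a + b \log M ,
```

    with the fitted slope b compared against the expected one-fourth scaling coefficient (b ≈ 1/4).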

  17. Scale-Independent Biomechanical Optimization

    Schutte, J. F; Koh, B; Reinbolt, J. A; Haftka, R. T; George, A; Fregly, B. J

    2003-01-01

    ...: the Particle Swarm Optimizer (PSO). They apply this method to the biomechanical system identification problem of finding positions and orientations of joint axes in body segments through the processing of experimental movement data...

  18. Temporal fluctuation scaling in populations and communities

    Michael Kalyuzhny; Yishai Schreiber; Rachel Chocron; Curtis H. Flather; David A. Kessler; Nadav M. Shnerb

    2014-01-01

    Taylor's law, one of the most widely accepted generalizations in ecology, states that the variance of a population abundance time series scales as a power law of its mean. Here we reexamine this law and the empirical evidence presented in support of it. Specifically, we show that the exponent generally depends on the length of the time series, and its value...
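
    Taylor's law as referred to above can be stated compactly (a sketch; symbols ours):

```latex
% Variance of a population abundance time series versus its mean:
\sigma^2 \;=\; a\,\mu^{\,b}
\quad\Longleftrightarrow\quad
\log\sigma^2 = \log a + b\,\log\mu ,
```

    the abstract's point being that the fitted exponent b generally depends on the length of the time series.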

  19. Optimal Quantum Spatial Search on Random Temporal Networks.

    Chakraborty, Shantanav; Novo, Leonardo; Di Giorgio, Serena; Omar, Yasser

    2017-12-01

    To investigate the performance of quantum information tasks on networks whose topology changes in time, we study the spatial search algorithm by continuous time quantum walk to find a marked node on a random temporal network. We consider a network of n nodes constituted by a time-ordered sequence of Erdős–Rényi random graphs G(n,p), where p is the probability that any two given nodes are connected: After every time interval τ, a new graph G(n,p) replaces the previous one. We prove analytically that, for any given p, there is always a range of values of τ for which the running time of the algorithm is optimal, i.e., O(√n), even when search on the individual static graphs constituting the temporal network is suboptimal. On the other hand, there are regimes of τ where the algorithm is suboptimal even when each of the underlying static graphs is sufficiently connected to perform optimal search on them. From this first study of quantum spatial search on a time-dependent network, it emerges that the nontrivial interplay between temporality and connectivity is key to the algorithmic performance. Moreover, our work can be extended to establish high-fidelity qubit transfer between any two nodes of the network. Overall, our findings show that one can exploit temporality to achieve optimal quantum information tasks on dynamical random networks.

  20. Optimal Quantum Spatial Search on Random Temporal Networks

    Chakraborty, Shantanav; Novo, Leonardo; Di Giorgio, Serena; Omar, Yasser

    2017-12-01

    To investigate the performance of quantum information tasks on networks whose topology changes in time, we study the spatial search algorithm by continuous time quantum walk to find a marked node on a random temporal network. We consider a network of n nodes constituted by a time-ordered sequence of Erdős–Rényi random graphs G(n,p), where p is the probability that any two given nodes are connected: After every time interval τ, a new graph G(n,p) replaces the previous one. We prove analytically that, for any given p, there is always a range of values of τ for which the running time of the algorithm is optimal, i.e., O(√n), even when search on the individual static graphs constituting the temporal network is suboptimal. On the other hand, there are regimes of τ where the algorithm is suboptimal even when each of the underlying static graphs is sufficiently connected to perform optimal search on them. From this first study of quantum spatial search on a time-dependent network, it emerges that the nontrivial interplay between temporality and connectivity is key to the algorithmic performance. Moreover, our work can be extended to establish high-fidelity qubit transfer between any two nodes of the network. Overall, our findings show that one can exploit temporality to achieve optimal quantum information tasks on dynamical random networks.

  1. Scaling Sparse Matrices for Optimization Algorithms

    Gajulapalli Ravindra S; Lasdon Leon S

    2006-01-01

    To iteratively solve large scale optimization problems in various contexts like planning, operations, design etc., we need to generate descent directions that are based on linear system solutions. Irrespective of the optimization algorithm or the solution method employed for the linear systems, the ill conditioning introduced by problem characteristics, by the algorithm, or by both needs to be addressed. In [GL01] we used an intuitive heuristic approach in scaling linear systems that improved performan...

  2. Musical Scales in Tone Sequences Improve Temporal Accuracy.

    Li, Min S; Di Luca, Massimiliano

    2018-01-01

    Predicting the time of stimulus onset is a key component in perception. Previous investigations of perceived timing have focused on the effect of stimulus properties such as rhythm and temporal irregularity, but the influence of non-temporal properties and their role in predicting stimulus timing has not been exhaustively considered. The present study aims to understand how a non-temporal pattern in a sequence of regularly timed stimuli could improve or bias the detection of temporal deviations. We presented interspersed sequences of 3, 4, 5, and 6 auditory tones where only the timing of the last stimulus could slightly deviate from isochrony. Participants reported whether the last tone was 'earlier' or 'later' relative to the expected regular timing. In two conditions, the tones composing the sequence were either organized into musical scales or they were random tones. In one experiment, all sequences ended with the same tone; in the other experiment, each sequence ended with a different tone. Results indicate higher discriminability of anisochrony with musical scales and with longer sequences, irrespective of the knowledge of the final tone. Such an outcome suggests that the predictability of non-temporal properties, as enabled by the musical scale pattern, can be a factor in determining the sensitivity of time judgments.

  3. Spatio-temporal optimal law enforcement using Stackelberg games

    Naja, R.; Mouawad, N.; Ghandour, A.

    2017-01-01

    Every year, road accidents claim the lives of around 1.2 million people worldwide (USDOT-NHTSA, 2012). Deploying speed traps helps bound vehicle speeds and reduce collisions. Nevertheless, deterministic speed trap deployment in both the spatial and temporal domains allows drivers to learn and anticipate covered areas. In this paper, we present a novel framework that provides a randomized speed trap deployment schedule. It uses game theory to model the behavior of drivers and law enforcers. In this context, a Stackelberg security game is used to derive the best strategies to deploy. The game's optimal solution maximizes the law enforcer's utility. This research work aims to optimize the deployment of speed traps on Lebanese highways according to accident probability input data. This work complements the near real-time accident map provided by the Lebanese National Council for Scientific Research and designs an optimal speed trap map targeting Lebanese highways. (author)

  4. Temporal Variation of Large Scale Flows in the Solar Interior ...


    Figure 2 (caption excerpt): Zonal and meridional components of the time-dependent residual velocity at a few selected depths, plotted as contours of constant velocity in the longitude-latitude plane; the left panels show the zonal component, ...

  5. Optimized temporal pattern of brain stimulation designed by computational evolution.

    Brocker, David T; Swan, Brandon D; So, Rosa Q; Turner, Dennis A; Gross, Robert E; Grill, Warren M

    2017-01-04

    Brain stimulation is a promising therapy for several neurological disorders, including Parkinson's disease. Stimulation parameters are selected empirically and are limited to the frequency and intensity of stimulation. We varied the temporal pattern of deep brain stimulation to ameliorate symptoms in a parkinsonian animal model and in humans with Parkinson's disease. We used model-based computational evolution to optimize the stimulation pattern. The optimized pattern produced symptom relief comparable to that from standard high-frequency stimulation (a constant rate of 130 or 185 Hz) and outperformed frequency-matched standard stimulation in a parkinsonian rat model and in patients. Both optimized and standard high-frequency stimulation suppressed abnormal oscillatory activity in the basal ganglia of rats and humans. The results illustrate the utility of model-based computational evolution of temporal patterns to increase the efficiency of brain stimulation in treating Parkinson's disease and thereby reduce the energy required for successful treatment below that of current brain stimulation paradigms. Copyright © 2017, American Association for the Advancement of Science.

  6. Temperature Scaling Law for Quantum Annealing Optimizers.

    Albash, Tameem; Martin-Mayor, Victor; Hen, Itay

    2017-09-15

    Physical implementations of quantum annealing unavoidably operate at finite temperatures. We point to a fundamental limitation of fixed finite temperature quantum annealers that prevents them from functioning as competitive scalable optimizers and show that, to serve as optimizers, annealer temperatures must be appropriately scaled down with problem size. We derive a temperature scaling law dictating that temperature must drop at the very least in a logarithmic manner, but possibly also as a power law with problem size. We corroborate our results by experiment and simulations and discuss the implications of these findings for practical annealers.
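
    The stated requirement can be summarized as follows (a sketch of the reported scaling, not the paper's derivation; symbols ours):

```latex
% Annealer temperature must fall with problem size N at least logarithmically,
% and possibly as a power law:
T(N) \;\lesssim\; \frac{c}{\log N}
\qquad\text{or}\qquad
T(N) \;\propto\; N^{-\gamma},\; \gamma > 0 .
```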

  7. A Temporal Domain Decomposition Algorithmic Scheme for Large-Scale Dynamic Traffic Assignment

    Eric J. Nava

    2012-03-01

    This paper presents a temporal decomposition scheme for large spatial- and temporal-scale dynamic traffic assignment, in which the entire analysis period is divided into Epochs. Vehicle assignment is performed sequentially in each Epoch, thus improving the model scalability and confining the peak run-time memory requirement regardless of the total analysis period. A proposed self-tuning scheme adaptively searches for the run-time-optimal Epoch setting during iterations regardless of the characteristics of the modeled network. Extensive numerical experiments confirm the promising performance of the proposed algorithmic schemes.

  8. Temporal scaling and spatial statistical analyses of groundwater level fluctuations

    Sun, H.; Yuan, L., Sr.; Zhang, Y.

    2017-12-01

    Natural dynamics such as groundwater level fluctuations can exhibit multifractionality and/or multifractality, likely due to multi-scale aquifer heterogeneity and controlling factors, whose statistics require efficient quantification methods. This study explores multifractionality and non-Gaussian properties in groundwater dynamics, expressed by time series of daily level fluctuations at three wells located in the lower Mississippi valley, after removing the seasonal cycle, through temporal scaling and spatial statistical analysis. First, using the time-scale multifractional analysis, a systematic statistical method is developed to analyze groundwater level fluctuations quantified by the time-scale local Hurst exponent (TS-LHE). Results show that the TS-LHE does not remain constant, implying that the fractal-scaling behavior changes with time and location. Hence, we can distinguish the potentially location-dependent scaling feature, which may characterize the hydrologic dynamic system. Second, spatial statistical analysis shows that the increment of groundwater level fluctuations exhibits a heavy-tailed, non-Gaussian distribution, which can be better quantified by a Lévy stable distribution. Monte Carlo simulations of the fluctuation process also show that the linear fractional stable motion model can well depict the transient dynamics (i.e., the fractal non-Gaussian property) of groundwater level, while fractional Brownian motion is inadequate to describe natural processes with anomalous dynamics. Analysis of temporal scaling and spatial statistics may therefore provide useful information and quantification to further understand the nature of complex dynamics in hydrology.
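
    The TS-LHE analysis is specific to the authors' method; purely as a generic illustration of estimating a (global) Hurst exponent from a level-fluctuation series, a simple rescaled-range sketch is shown below. This is not the TS-LHE procedure.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Crude rescaled-range (R/S) estimate of a global Hurst exponent.

    Generic illustration only; this is not the time-scale local Hurst exponent
    (TS-LHE) method referred to in the abstract.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_means = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviation from the mean
            r, s = dev.max() - dev.min(), chunk.std()
            if s > 0:
                rs.append(r / s)
        if rs:
            sizes.append(size)
            rs_means.append(np.mean(rs))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return slope  # H ~ 0.5 for uncorrelated noise, > 0.5 for persistent series

# Example on synthetic white noise (expected H near 0.5):
print(hurst_rs(np.random.default_rng(0).standard_normal(4096)))
```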

  9. A model for optimizing file access patterns using spatio-temporal parallelism

    Boonthanome, Nouanesengsy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Patchett, John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Geveci, Berk [Kitware Inc., Clifton Park, NY (United States); Ahrens, James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bauer, Andy [Kitware Inc., Clifton Park, NY (United States); Chaudhary, Aashish [Kitware Inc., Clifton Park, NY (United States); Miller, Ross G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shipman, Galen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-01-01

    For many years now, I/O read time has been recognized as the primary bottleneck for parallel visualization and analysis of large-scale data. In this paper, we introduce a model that can estimate the read time for a file stored in a parallel filesystem when given the file access pattern. Read times ultimately depend on how the file is stored and the access pattern used to read the file. The file access pattern will be dictated by the type of parallel decomposition used. We employ spatio-temporal parallelism, which combines both spatial and temporal parallelism, to provide greater flexibility to possible file access patterns. Using our model, we were able to configure the spatio-temporal parallelism to design optimized read access patterns that resulted in a speedup factor of approximately 400 over traditional file access patterns.

  10. Temporal scaling of groundwater level fluctuations near a stream

    Schilling, K.E.; Zhang, Y.-K.

    2012-01-01

    Temporal scaling in stream discharge and hydraulic heads in riparian wells was evaluated to determine the feasibility of using spectral analysis to identify potential surface and groundwater interaction. In floodplains where groundwater levels respond rapidly to precipitation recharge, potential interaction is established if the hydraulic head (h) spectrum of riparian groundwater has a power spectral density similar to stream discharge (Q), exhibiting a characteristic breakpoint between high and low frequencies. At a field site in the Walnut Creek watershed in central Iowa, spectral analysis of h in wells located 1 m from the channel edge showed a breakpoint in scaling very similar to the spectrum of Q (~20 h), whereas h in wells located 20 and 40 m from the channel showed temporal scaling from 1 to 10,000 h without a well-defined breakpoint. The spectral exponent (β) in the riparian zone decreased systematically from the channel into the floodplain as groundwater levels were increasingly dominated by white noise groundwater recharge. The scaling pattern of hydraulic head was not affected by land cover type, although the number of analyses was limited and site conditions were variable among sites. Spectral analysis would not replace quantitative tracer or modeling studies, but the method may provide a simple means of confirming potential interaction at some sites. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
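
    A minimal sketch of the kind of spectral analysis described, using SciPy's Welch estimator; the sampling interval, segment length and variable names are assumptions, not the authors' settings.

```python
import numpy as np
from scipy.signal import welch

def spectral_slope(series, dt_hours=1.0):
    """Welch power spectral density of a head (or discharge) series, and the
    spectral exponent beta fitted in S(f) ~ f**(-beta)."""
    x = np.asarray(series, dtype=float)
    f, s = welch(x, fs=1.0 / dt_hours, nperseg=min(len(x), 4096))
    keep = f > 0
    slope, _ = np.polyfit(np.log(f[keep]), np.log(s[keep]), 1)
    return -slope

# A breakpoint such as the ~20 h feature described above would show up as two
# distinct slopes when the fit is repeated separately over the high- and
# low-frequency bands on either side of the break.
```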

  11. Methods for Large-Scale Nonlinear Optimization.

    1980-05-01

    STANFORD, CALIFORNIA 94305. METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright... typical iteration can be partitioned so that where B is an m × m basis matrix. This partition effectively divides the variables into three classes... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library

  12. Temporal and spatial scaling impacts on extreme precipitation

    Eggert, B.; Berg, P.; Haerter, J. O.; Jacob, D.; Moseley, C.

    2015-01-01

    Both in the current climate and in the light of climate change, understanding of the causes and risk of precipitation extremes is essential for protection of human life and adequate design of infrastructure. Precipitation extreme events depend qualitatively on the temporal and spatial scales at which they are measured, in part due to the distinct types of rain formation processes that dominate extremes at different scales. To capture these differences, we first filter large datasets of high-resolution radar measurements over Germany (5 min temporally and 1 km spatially) using synoptic cloud observations, to distinguish convective and stratiform rain events. In a second step, for each precipitation type, the observed data are aggregated over a sequence of time intervals and spatial areas. The resulting matrix allows a detailed investigation of the resolutions at which convective or stratiform events are expected to contribute most to the extremes. We analyze where the statistics of the two types differ and discuss at which resolutions transitions occur between dominance of either of the two precipitation types. We characterize the scales at which the convective or stratiform events will dominate the statistics. For both types, we further develop a mapping between pairs of spatially and temporally aggregated statistics. The resulting curve is relevant when deciding on data resolutions where statistical information in space and time is balanced. Our study may hence also serve as a practical guide for modelers, and for planning the space-time layout of measurement campaigns. We also describe a mapping between different pairs of resolutions, possibly relevant when working with mismatched model and observational resolutions, such as in statistical bias correction.

  13. Scaling Optimization of the SIESTA MHD Code

    Seal, Sudip; Hirshman, Steven; Perumalla, Kalyan

    2013-10-01

    SIESTA is a parallel three-dimensional plasma equilibrium code capable of resolving magnetic islands at high spatial resolutions for toroidal plasmas. Originally designed to exploit small-scale parallelism, SIESTA has now been scaled to execute efficiently over several thousands of processors P. This scaling improvement was accomplished with minimal intrusion to the execution flow of the original version. First, the efficiency of the iterative solutions was improved by integrating the parallel tridiagonal block solver code BCYCLIC. Krylov-space generation in GMRES was then accelerated using a customized parallel matrix-vector multiplication algorithm. Novel parallel Hessian generation algorithms were integrated and memory access latencies were dramatically reduced through loop nest optimizations and data layout rearrangement. These optimizations sped up equilibria calculations by factors of 30-50. It is possible to compute solutions with granularity N/P near unity on extremely fine radial meshes (N > 1024 points). Grid separation in SIESTA, which manifests itself primarily in the resonant components of the pressure far from rational surfaces, is strongly suppressed by finer meshes. Large problem sizes of up to 300 K simultaneous non-linear coupled equations have been solved on the NERSC supercomputers. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.

  14. In-situ materials characterization across spatial and temporal scales

    Graafsma, Heinz; Zhang, Xiao; Frenken, Joost

    2014-01-01

    The behavior of nanoscale materials can change rapidly with time either because the environment changes rapidly, or because the influence of the environment propagates quickly across the intrinsically small dimensions of nanoscale materials. Extremely fast time resolution studies using X-rays, electrons and neutrons are of very high interest to many researchers and is a fast-evolving and interesting field for the study of dynamic processes. Therefore, in situ structural characterization and measurements of structure-property relationships covering several decades of length and time scales (from atoms to millimeters and femtoseconds to hours) with high spatial and temporal resolutions are crucially important to understand the synthesis and behavior of multidimensional materials. The techniques described in this book will permit access to the real-time dynamics of materials, surface processes, and chemical and biological reactions at various time scales. This book provides an interdisciplinary reference for res...

  15. Assessment of the optimal temporal window for intravenous CT cholangiography

    Schindera, Sebastian T.; Nelson, Rendon C.; Paulson, Erik K.; DeLong, David M.; Merkle, Elmar M. [Duke University Medical Center, Department of Radiology, P.O. Box 3808, Durham, NC (United States)

    2007-10-15

    The optimal temporal window of intravenous (IV) computed tomography (CT) cholangiography was prospectively determined. Fifteen volunteers (eight women, seven men; mean age, 38 years) underwent dynamic CT cholangiography. Two unenhanced images were acquired at the porta hepatis. Starting 5 min after initiation of IV contrast infusion (20 ml iodipamide meglumine 52%), 15 pairs of images at 5-min intervals were obtained. Attenuation of the extrahepatic bile duct (EBD) and the liver parenchyma was measured. Two readers graded visualization of the higher-order biliary branches. The first biliary opacification in the EBD occurred between 15 and 25 min (mean, 22.3 min ± 3.2) after initiation of the contrast agent. Biliary attenuation plateaued between the 35- and the 75-min time points. Maximum hepatic parenchymal enhancement was 18.5 HU ± 2.7. Twelve subjects demonstrated poor or non-visualization of higher-order biliary branches; three showed good or excellent visualization. Body weight and both biliary attenuation and visualization of the higher-order biliary branches correlated significantly (P<0.05). For peak enhancement of the biliary tree, CT cholangiography should be performed no earlier than 35 min after initiation of IV infusion. For a fixed contrast dose, superior visualization of the biliary system is achieved in subjects with lower body weight. (orig.)

  16. An Optimal Parametrization of Turbulent Scales

    Thalabard, S.

    2015-12-01

    To numerically capture the large-scale dynamics of atmospheric flows, geophysicists need to rely on reasonable parametrizations of the energy transfers to and from the non-resolved small-scale eddies, mediated through turbulence. The task is notoriously non-trivial, and is typically solved by ingenious but ad hoc elaborations on the concept of eddy viscosities. The difficulty is tied to the intrinsic non-Gaussianity of turbulence, a feature that may explain why standard quasi-normal, cumulant-discard statistical closure strategies can fail dramatically, an example being the development of negative energy spectra in Millionshchikov's 1941 Quasi-Normal (QN) theory. While Orszag's 1977 Eddy Damped Quasi Normal Markovian (EDQNM) closure provides an ingenious patch to the issue, the reason why the QN theory fails so badly is not so clear. Are closures necessarily either trivial or ad hoc when proxies for true ensemble averages are taken to be Gaussian? The purpose of the talk is to answer negatively, in the light of a new "optimal closure framework" recently exposed by [Turkington, 2013]. For turbulence problems, the optimal closure allows a consistent use of a Gaussian Ansatz (and corresponding vanishing third cumulant) that also retains an intrinsic damping. The key to this apparent paradox lies in a clear distinction between the true ensemble averages and their proxies, most easily grasped provided one uses the Liouville equation as a starting point, rather than the cumulant hierarchy. Schematically, closure is achieved by minimizing a lack-of-fit residual, which retains the intrinsic features of the true dynamics. The optimal closure is not restricted to Gaussian modeling. Yet, for the sake of clarity, I will discuss the optimal closure on a problem where it can be entirely implemented and compared to DNS: the relaxation of an arbitrarily far-from-equilibrium energy shell towards the Gibbs equilibrium for truncated Euler dynamics. Predictive

  17. Temporal scaling behavior of forest and urban fires

    Wang, J.; Song, W.; Zheng, H.; Telesca, L.

    2009-04-01

    It has been found that many natural systems are characterized by scaling behavior. In such systems natural factors dominate the event dynamics. Forest fires in different countries have been found to exhibit a frequency-size power law over many orders of magnitude and with similar parameter values. But in countries with high population density such as China and Japan, more than 95% of forest fire disasters are caused by human activities. Furthermore, with the development of society, the wildland-urban interface (WUI) area is becoming more and more populated, and forest fire is closely connected with urban fire. Therefore exploring the scaling behavior of fires dominated by human-related factors is very challenging. The present paper explores the temporal scaling behavior of forest fires and urban fires in Japan with mathematical methods. Two measures, the Allan factor (AF) and the Fano factor (FF), are used to investigate the time-scaling of fire systems. It is found that the FF for both forest fires and urban fires increases linearly on log-log scales, indicating power-law behavior over all investigated timescales. The AF plot reveals a 7-day cycle, indicating a weekly periodicity. This may be caused by human activities, which have a weekly rhythm: on weekends people usually have more outdoor activities, which may increase the risk of fire. Our findings point out that although human factors are the main cause, both forest fires and urban fires exhibit time-scaling behavior. At the same time, the scaling exponents for urban fires are larger than those for forest fires, signifying more intense clustering. The reason may be that fires are affected not only by weather conditions but also by human activities, which play a more important role for urban fires than for forest fires and which have a power-law distribution and scaling behavior. Some analysis of relative humidity is also presented. A similar distribution law characterizes the
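
    A minimal sketch of the Fano factor computation used for this kind of time-scaling analysis; the event-time units and windowing details are assumptions, not the authors' setup.

```python
import numpy as np

def fano_factor(event_times, window):
    """Fano factor FF(T) = var(N_T) / mean(N_T), where N_T counts events
    (e.g. fire ignitions) in consecutive windows of duration T."""
    t = np.sort(np.asarray(event_times, dtype=float))
    edges = np.arange(t[0], t[-1] + window, window)
    counts, _ = np.histogram(t, bins=edges)
    return counts.var() / counts.mean()

# A straight line of FF(T) versus T on log-log axes is the power-law signature
# reported in the abstract; the Allan factor is built analogously from the
# squared differences of adjacent window counts.
```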

  18. Collective memory in primate conflict implied by temporal scaling collapse.

    Lee, Edward D; Daniels, Bryan C; Krakauer, David C; Flack, Jessica C

    2017-09-01

    In biological systems, prolonged conflict is costly, whereas contained conflict permits strategic innovation and refinement. Causes of variation in conflict size and duration are not well understood. We use a well-studied primate society model system to study how conflicts grow. We find conflict duration is a 'first to fight' growth process that scales superlinearly with the number of possible pairwise interactions. This is in contrast with a 'first to fail' process that characterizes peaceful durations. Rescaling conflict distributions reveals a universal curve, showing that the typical time scale of correlated interactions exceeds nearly all individual fights. This temporal correlation implies collective memory across pairwise interactions beyond those assumed in standard models of contagion growth or iterated evolutionary games. By accounting for memory, we make quantitative predictions for interventions that mitigate or enhance the spread of conflict. Managing conflict involves balancing the efficient use of limited resources with an intervention strategy that allows for conflict while keeping it contained and controlled. © 2017 The Author(s).

  19. Optimal Scale Edge Detection Utilizing Noise within Images

    Adnan Khashman

    2003-04-01

    Edge detection techniques have common problems that include poor edge detection in low contrast images, speed of recognition and high computational cost. An efficient solution to the edge detection of objects in low to high contrast images is scale space analysis. However, this approach is time consuming and computationally expensive. These expenses can be marginally reduced if an optimal scale is found in scale space edge detection. This paper presents a new approach to detecting objects within images using noise within the images. The novel idea is based on selecting one optimal scale for the entire image at which scale space edge detection can be applied. The selection of an ideal scale is based on the hypothesis that "the optimal edge detection scale (ideal scale) depends on the noise within an image". This paper aims at providing the experimental evidence on the relationship between the optimal scale and the noise within images.
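
    A sketch of the idea that a single edge-detection scale can be tied to the image noise level; the noise estimator and the noise-to-scale mapping below are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude, laplace

def noise_adapted_edges(image, k=2.0):
    """Pick a single Gaussian scale from an estimate of the image noise level,
    then compute scale-space edge strength at that one scale."""
    img = np.asarray(image, dtype=float)
    # Rough noise estimate from the median absolute Laplacian response
    # (an assumed heuristic, not the paper's estimator).
    sigma_noise = np.median(np.abs(laplace(img))) / 0.6745
    scale = max(1.0, k * sigma_noise)      # assumed mapping from noise level to scale
    return gaussian_gradient_magnitude(img, sigma=scale), scale
```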

  20. Optimal decision procedures for satisfiability in fragments of alternating-time temporal logics

    Goranko, Valentin; Vester, Steen

    2014-01-01

    We consider several natural fragments of the alternating-time temporal logics ATL* and ATL with restrictions on the nesting between temporal operators and strategic quantifiers. We develop optimal decision procedures for satisfiability in these fragments, showing that they have much lower complexi...

  1. Optimized LTE Cell Planning with Varying Spatial and Temporal User Densities

    Ghazzai, Hakim; Yaacoub, Elias; Alouini, Mohamed-Slim; Dawy, Zaher; Abu Dayya, Adnan

    2015-01-01

    Base station deployment in cellular networks is one of the fundamental problems in network design. This paper proposes a novel method for the cell planning problem for the fourth generation (4G) cellular networks using meta-heuristic algorithms. In this approach, we aim to satisfy both cell coverage and capacity constraints simultaneously by formulating an optimization problem that captures practical planning aspects. The starting point of the planning process is defined through a dimensioning exercise that captures both coverage and capacity constraints. Afterwards, we implement a meta-heuristic algorithm based on swarm intelligence (e.g., particle swarm optimization or the recently-proposed grey wolf optimizer) to find suboptimal base station locations that satisfy both problem constraints in the area of interest which can be divided into several subareas with different spatial user densities. Subsequently, an iterative approach is executed to eliminate eventual redundant base stations. We also perform Monte Carlo simulations to study the performance of the proposed scheme and compute the average number of users in outage. Next, the problems of green planning with regards to temporal traffic variation and planning with location constraints due to tight limits on electromagnetic radiations are addressed, using the proposed method. Finally, in our simulation results, we apply our proposed approach for different scenarios with different subareas and user distributions and show that the desired network quality of service targets are always reached even for large-scale problems.

  2. Optimized LTE Cell Planning with Varying Spatial and Temporal User Densities

    Ghazzai, Hakim

    2015-03-09

    Base station deployment in cellular networks is one of the fundamental problems in network design. This paper proposes a novel method for the cell planning problem for the fourth generation (4G) cellular networks using meta-heuristic algorithms. In this approach, we aim to satisfy both cell coverage and capacity constraints simultaneously by formulating an optimization problem that captures practical planning aspects. The starting point of the planning process is defined through a dimensioning exercise that captures both coverage and capacity constraints. Afterwards, we implement a meta-heuristic algorithm based on swarm intelligence (e.g., particle swarm optimization or the recently-proposed grey wolf optimizer) to find suboptimal base station locations that satisfy both problem constraints in the area of interest which can be divided into several subareas with different spatial user densities. Subsequently, an iterative approach is executed to eliminate eventual redundant base stations. We also perform Monte Carlo simulations to study the performance of the proposed scheme and compute the average number of users in outage. Next, the problems of green planning with regards to temporal traffic variation and planning with location constraints due to tight limits on electromagnetic radiations are addressed, using the proposed method. Finally, in our simulation results, we apply our proposed approach for different scenarios with different subareas and user distributions and show that the desired network quality of service targets are always reached even for large-scale problems.

  3. Validation of the Temporal Satisfaction with Life Scale in a Sample of Chinese University Students

    Ye, Shengquan

    2007-01-01

    The study aims at validating the Temporal Satisfaction With Life Scale (TSWLS; Pavot et al., 1998, "The Temporal Satisfaction With Life Scale", Journal of Personality Assessment 70, pp. 340-354) in a non-western context. Data from 646 Chinese university students (330 females and 316 males) supported the three-factor structure of the…

  4. Heat and mass transfer intensification and shape optimization a multi-scale approach

    2013-01-01

    Is heat and mass transfer intensification a new paradigm of process engineering, or is it just a common and old idea, renamed and given the current taste? Where might intensification occur? How is intensification achieved? How does the shape optimization of thermal and fluidic devices lead to intensified heat and mass transfers? To answer these questions, Heat & Mass Transfer Intensification and Shape Optimization: A Multi-scale Approach clarifies the definition of intensification by highlighting the potential role of multi-scale structures, the specific interfacial area, the distribution of driving force, the modes of energy supply and the temporal aspects of processes. A reflection on the methods of process intensification or heat and mass transfer enhancement in multi-scale structures is provided, including porous media, heat exchangers, fluid distributors, mixers and reactors. A multi-scale approach to achieve intensification and shape optimization is developed and clearly expla...

  5. The Relationship between Spatial and Temporal Magnitude Estimation of Scientific Concepts at Extreme Scales

    Price, Aaron; Lee, H.

    2010-01-01

    Many astronomical objects, processes, and events exist and occur at extreme scales of spatial and temporal magnitude. Our research draws upon the psychological literature, replete with evidence of linguistic and metaphorical links between the spatial and temporal domains, to compare how students estimate spatial and temporal magnitudes associated with objects and processes typically taught in science class. We administered spatial and temporal scale estimation tests, with many astronomical items, to 417 students enrolled in 12 undergraduate science courses. Results show that while the temporal test was more difficult, students’ overall performance patterns between the two tests were mostly similar. However, asymmetrical correlations between the two tests indicate that students think of the extreme ranges of spatial and temporal scales in different ways, which is likely influenced by their classroom experience. When making incorrect estimations, students tended to underestimate the difference between the everyday scale and the extreme scales on both tests. This suggests the use of a common logarithmic mental number line for both spatial and temporal magnitude estimation. However, there are differences between the two tests in the errors students make in the everyday range. Among the implications discussed is the use of spatio-temporal reference frames, instead of smooth bootstrapping, to help students maneuver between scales of magnitude, and the use of logarithmic transformations between reference frames. Implications for astronomy range from learning about spectra to large-scale galaxy structure.

  6. Exploiting tumor shrinkage through temporal optimization of radiotherapy

    Unkelbach, Jan; Craft, David; Hong, Theodore; Papp, Dávid; Wolfgang, John; Bortfeld, Thomas; Ramakrishnan, Jagdish; Salari, Ehsan

    2014-01-01

    In multi-stage radiotherapy, a patient is treated in several stages separated by weeks or months. This regimen has been motivated mostly by radiobiological considerations, but also provides an approach to reduce normal tissue dose by exploiting tumor shrinkage. The paper considers the optimal design of multi-stage treatments, motivated by the clinical management of large liver tumors for which normal liver dose constraints prohibit the administration of an ablative radiation dose in a single treatment. We introduce a dynamic tumor model that incorporates three factors: radiation induced cell kill, tumor shrinkage, and tumor cell repopulation. The design of multi-stage radiotherapy is formulated as a mathematical optimization problem in which the total dose to the normal tissue is minimized, subject to delivering the prescribed dose to the tumor. Based on the model, we gain insight into the optimal administration of radiation over time, i.e. the optimal treatment gaps and dose levels. We analyze treatments consisting of two stages in detail. The analysis confirms the intuition that the second stage should be delivered just before the tumor size reaches a minimum and repopulation overcompensates shrinking. Furthermore, it was found that, for a large range of model parameters, approximately one-third of the dose should be delivered in the first stage. The projected benefit of multi-stage treatments in terms of normal tissue sparing depends on model assumptions. However, the model predicts large dose reductions by more than a factor of 2 for plausible model parameters. The analysis of the tumor model suggests that substantial reduction in normal tissue dose can be achieved by exploiting tumor shrinkage via an optimal design of multi-stage treatments. This suggests taking a fresh look at multi-stage radiotherapy for selected disease sites where substantial tumor regression translates into reduced target volumes. (paper)

  7. Optimal Product Variety, Scale Effects and Growth

    de Groot, H.L.F.; Nahuis, R.

    1997-01-01

    We analyze the social optimality of growth and product variety in a model of endogenous growth. The model contains two sectors, one assembly sector producing a homogeneous consumption good, and one intermediate goods sector producing a differentiated input used in the assembly sector. Growth results

  8. Optimization of Large-Scale Structural Systems

    Jensen, F. M.

    solutions to small problems with one or two variables to the optimization of large structures such as bridges, ships and offshore structures. The methods used for solving these problems have evolved from being classical differential calculus and calculus of variation to very advanced numerical techniques...

  9. Large scale stochastic spatio-temporal modelling with PCRaster

    Karssenberg, D.J.; Drost, N.; Schmitz, O.; Jong, K. de; Bierkens, M.F.P.

    2013-01-01

    PCRaster is a software framework for building spatio-temporal models of land surface processes (http://www.pcraster.eu). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations are available to model

  10. Effect of spatial and temporal scales on habitat suitability modeling: A case study of Ommastrephes bartramii in the northwest pacific ocean

    Gong, Caixia; Chen, Xinjun; Gao, Feng; Tian, Siquan

    2014-12-01

    Temporal and spatial scales play important roles in fishery ecology, and an inappropriate spatio-temporal scale may result in large errors in modeling fish distribution. The objective of this study is to evaluate the roles of spatio-temporal scales in habitat suitability modeling, with the western stock of the winter-spring cohort of neon flying squid (Ommastrephes bartramii) in the northwest Pacific Ocean as an example. In this study, the fishery-dependent data from the Chinese Mainland Squid Jigging Technical Group and sea surface temperature (SST) from remote sensing during August to October of 2003-2008 were used. We evaluated the differences in a habitat suitability index model resulting from aggregating data at 36 different spatio-temporal scales, formed by combining three latitude scales (0.5°, 1° and 2°), four longitude scales (0.5°, 1°, 2° and 4°), and three temporal scales (week, fortnight, and month). The coefficients of variation (CV) of the weekly, biweekly and monthly suitability index (SI) were compared to determine which temporal and spatial scales make the SI model more precise. This study shows that the optimal temporal and spatial scales with the lowest CV are month, and 0.5° latitude and 0.5° longitude, for O. bartramii in the northwest Pacific Ocean. A suitability index model developed at the optimal scale can improve fishing-ground forecasting cost-effectively and requires no excessive sampling effort. We suggest that the uncertainty associated with spatial and temporal scales used in data aggregations needs to be considered in habitat suitability modeling.
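
    A minimal sketch of the scale-comparison step follows: fishery records are aggregated onto grids of different cell sizes and time windows, a simple normalized suitability index is computed per cell, and its coefficient of variation is compared across scales. The synthetic records, the CPUE-based index and the aggregation choices are illustrative assumptions; the study's own SST-based SI model is not reproduced here.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic fishery-dependent records: date, position, CPUE (illustrative).
n = 5000
df = pd.DataFrame({
    "date": pd.to_datetime("2003-08-01") + pd.to_timedelta(rng.integers(0, 92, n), unit="D"),
    "lat": rng.uniform(38, 46, n),
    "lon": rng.uniform(150, 165, n),
    "cpue": rng.gamma(shape=2.0, scale=1.5, size=n),
})

def si_cv(lat_step, lon_step, freq):
    """Aggregate CPUE on a (lat_step x lon_step, freq) grid, rescale the cell
    means to a 0-1 suitability index, and return its coefficient of variation."""
    g = df.groupby([
        (df["lat"] // lat_step) * lat_step,
        (df["lon"] // lon_step) * lon_step,
        df["date"].dt.to_period(freq),
    ])["cpue"].mean()
    si = (g - g.min()) / (g.max() - g.min())   # simple normalized-CPUE SI
    return si.std() / si.mean()

for step in (0.5, 1.0, 2.0):
    for freq in ("W", "M"):
        print(f"{step}° x {step}° cells, {freq}: CV = {si_cv(step, step, freq):.3f}")
```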

  11. Spatially and Temporally Optimal Biomass Procurement Contracting for Biorefineries

    Subbu Kumarappan

    2014-02-01

    This paper evaluates the optimal composition of annual and perennial biomass feedstocks for a biorefinery. A generic optimization model is built to minimize costs – harvest, transport, storage, seasonal, and environmental costs – subject to various constraints on land availability, feedstock availability, processing capacity, contract terms, and storage losses. The model results are demonstrated through a case study for a midwestern U.S. location, focusing on bioethanol as the likely product. The results suggest that high-yielding energy crops feature prominently (70 to 80%) in the feedstock mix in spite of the higher establishment costs. The cost of biomass ranges from 0.16 to 0.20 $ l-1 (US$ 0.60 to $0.75 per gallon of biofuel). The harvest shed shows that high-yielding energy crops are preferably grown in fields closer to the biorefinery. Low-yielding agricultural residues primarily serve as a buffer crop to meet the shortfall in biomass requirement. For the case study parameters, the model results estimated a price premium for energy crops (2 to 4 $ t-1) within a 16 km (10-mile) radius and for agricultural residues (5 to 17 $ t-1) in a 16 to 20 km (10 to 20 mile) radius.
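
    As a toy counterpart of the procurement model, the sketch below solves a two-feedstock cost minimization with scipy.optimize.linprog, subject to demand and land-availability constraints. All coefficients (costs, yields, land caps, demand) are made-up placeholders rather than the paper's case-study parameters, and the seasonal, storage and environmental cost terms are omitted.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: tonnes of (energy crop, agricultural residue) procured
# per year. All coefficients below are illustrative placeholders.
cost = np.array([55.0, 45.0])                  # $ per delivered tonne
demand = 500_000.0                             # tonnes of biomass the refinery needs
yield_t_per_ha = np.array([12.0, 3.5])         # harvestable tonnes per hectare
land_cap_ha = np.array([60_000.0, 40_000.0])   # contractable hectares by type

# Minimize cost subject to: meet demand, respect land availability.
res = linprog(
    c=cost,
    A_ub=np.vstack([-np.ones((1, 2)),                 # -(x1 + x2) <= -demand
                    np.diag(1.0 / yield_t_per_ha)]),  # x_i / yield_i <= land_i
    b_ub=np.concatenate([[-demand], land_cap_ha]),
    bounds=[(0, None), (0, None)],
    method="highs",
)

x = res.x
print(f"energy crop: {x[0]:,.0f} t ({x[0] / demand:.0%} of mix), "
      f"residue: {x[1]:,.0f} t, total cost ${res.fun:,.0f}")
```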

  12. Dimensional Scaling for Optimized CMUT Operations

    Lei, Anders; Diederichsen, Søren Elmin; la Cour, Mette Funding

    2014-01-01

    This work presents a dimensional scaling study using numerical simulations, where gap height and plate thickness of a CMUT cell is varied, while the lateral plate dimension is adjusted to maintain a constant transmit immersion center frequency of 5 MHz. Two cell configurations have been simulated...

  13. Integrating cross-scale analysis in the spatial and temporal domains for classification of behavioral movement

    Ali Soleymani

    2014-06-01

    Since various behavioral movement patterns are likely to be valid within different, unique ranges of spatial and temporal scales (e.g., instantaneous, diurnal, or seasonal, with the corresponding spatial extents), a cross-scale approach is needed for accurate classification of behaviors expressed in movement. Here, we introduce a methodology for the characterization and classification of behavioral movement data that relies on computing and analyzing movement features jointly in both the spatial and temporal domains. The proposed methodology consists of three stages. In the first stage, focusing on the spatial domain, the underlying movement space is partitioned into several zonings that correspond to different spatial scales, and features related to movement are computed for each partitioning level. In the second stage, concentrating on the temporal domain, several movement parameters are computed from trajectories across a series of temporal windows of increasing sizes, yielding another set of input features for the classification. For both the spatial and the temporal domains, the "reliable scale" is determined by an automated procedure. This is the scale at which the best classification accuracy is achieved, using only spatial or temporal input features, respectively. The third stage takes the measures from the spatial and temporal domains of movement, computed at the corresponding reliable scales, as input features for behavioral classification. With a feature selection procedure, the most relevant features contributing to known behavioral states are extracted and used to learn a classification model. The potential of the proposed approach is demonstrated on a dataset of adult zebrafish (Danio rerio) swimming movements in testing tanks, following exposure to different drug treatments. Our results show that behavioral classification accuracy greatly increases when cross-scale analysis is first used to determine the best analysis scale, and

  14. Temporal self-similar synchronization patterns and scaling in ...

    Repulsively coupled oscillators; synchronization patterns; self-similar ... system, one expects multistable behavior in analogy to ... More about the scaling relation between the long-period ... The third type of representation of phases is via ...

  15. Maximum length scale in density based topology optimization

    Lazarov, Boyan Stefanov; Wang, Fengwen

    2017-01-01

    The focus of this work is on two new techniques for imposing maximum length scale in topology optimization. Restrictions on the maximum length scale provide designers with full control over the optimized structure and open possibilities to tailor the optimized design for a broader range of manufacturing processes by fulfilling the associated technological constraints. One of the proposed methods is based on a combination of several filters and builds on top of the classical density filtering, which can be viewed as a low pass filter applied to the design parametrization. The main idea...

  16. Spatial connections in regional climate model rainfall outputs at different temporal scales: Application of network theory

    Naufan, Ihsan; Sivakumar, Bellie; Woldemeskel, Fitsum M.; Raghavan, Srivatsan V.; Vu, Minh Tue; Liong, Shie-Yui

    2018-01-01

    Understanding the spatial and temporal variability of rainfall has always been a great challenge, and the impacts of climate change further complicate this issue. The present study employs the concepts of complex networks to study the spatial connections in rainfall, with emphasis on climate change and rainfall scaling. Rainfall outputs (during 1961-1990) from a regional climate model (i.e. Weather Research and Forecasting (WRF) model that downscaled the European Centre for Medium-range Weather Forecasts, ECMWF ERA-40 reanalyses) over Southeast Asia are studied, and data corresponding to eight different temporal scales (6-hr, 12-hr, daily, 2-day, 4-day, weekly, biweekly, and monthly) are analyzed. Two network-based methods are applied to examine the connections in rainfall: clustering coefficient (a measure of the network's local density) and degree distribution (a measure of the network's spread). The influence of rainfall correlation threshold (T) on spatial connections is also investigated by considering seven different threshold levels (ranging from 0.5 to 0.8). The results indicate that: (1) rainfall networks corresponding to much coarser temporal scales exhibit properties similar to those of small-world networks, regardless of the threshold; (2) rainfall networks corresponding to much finer temporal scales may be classified as either small-world networks or scale-free networks, depending upon the threshold; and (3) rainfall spatial connections exhibit a transition phase at intermediate temporal scales, especially at high thresholds. These results suggest that the most appropriate model for studying spatial connections may often be different at different temporal scales, and that a combination of small-world and scale-free network models might be more appropriate for rainfall upscaling/downscaling across all scales, in the strict sense of scale-invariance. The results also suggest that spatial connections in the studied rainfall networks in Southeast Asia are
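
    The two network measures used in this record can be reproduced on synthetic data with a few lines of networkx: build a graph by linking grid cells whose rainfall series correlate above a threshold T, then report the average clustering coefficient and the degree distribution. The synthetic rainfall field and the thresholds below are illustrative assumptions, not the WRF/ERA-40 data.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)

# Synthetic "rainfall" series at 100 grid cells: a shared regional signal plus
# cell-specific noise, so that some correlation structure exists (illustrative).
n_cells, n_steps = 100, 2000
regional = rng.gamma(1.5, 1.0, size=n_steps)
rain = 0.6 * regional + 0.4 * rng.gamma(1.5, 1.0, size=(n_cells, n_steps))

corr = np.corrcoef(rain)

for T in (0.5, 0.6, 0.7, 0.8):
    # Link two cells whenever their correlation exceeds the threshold T.
    adj = (corr > T) & ~np.eye(n_cells, dtype=bool)
    G = nx.from_numpy_array(adj.astype(int))
    degrees = [d for _, d in G.degree()]
    print(f"T={T}: mean clustering={nx.average_clustering(G):.3f}, "
          f"mean degree={np.mean(degrees):.1f}, max degree={max(degrees)}")
```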

  17. Topology optimization for nano-scale heat transfer

    Evgrafov, Anton; Maute, Kurt; Yang, Ronggui

    2009-01-01

    We consider the problem of optimal design of nano-scale heat conducting systems using topology optimization techniques. At such small scales the empirical Fourier's law of heat conduction no longer captures the underlying physical phenomena because the mean-free path of the heat carriers, phonons in our case, becomes comparable with, or even larger than, the feature sizes of considered material distributions. A more accurate model at nano-scales is given by kinetic theory, which provides a compromise between the inaccurate Fourier's law and precise, but too computationally expensive, atomistic...

  18. Resource Communication. Temporal optimization of fuel treatment design in blue gum (Eucalyptus globulus) plantations

    Ana Martin

    2016-07-01

    Material and methods: At each of four temporal stages (2015-2018-2021-2024) we simulated: (1) surface and canopy fuels, timber volume (m3 ha-1) and carbon storage (Mg ha-1); (2) fire behaviour characteristics, i.e. rate of spread (m min-1), and flame length (m), with FlamMap fire modelling software; (3) optimal treatment locations as determined by the Landscape Treatment Designer (LTD). Main results: The higher pressure of fire behaviour in the earlier stages of the study period triggered most of the spatial fuel treatments within eucalypt plantations in a juvenile stage. At later stages fuel treatments also included shrubland areas. The results were consistent with observations and simulation results that show high fire hazard in juvenile eucalypt stands. Research highlights: Forest management planning in commercial eucalypt plantations can potentially accomplish multiple objectives such as augmenting profits and sustaining ecological assets while reducing wildfire risk at landscape scale. However, limitations of simulation models including FlamMap and LTD are important to recognise in studies of long term wildfire management strategies. Keywords: Eucalypt plantations; Fire hazard; FlamMap; fuel treatment optimisation; Landscape Treatment Designer; wildfire risk management.

  19. Optimal defense resource allocation in scale-free networks

    Zhang, Xuejun; Xu, Guoqiang; Xia, Yongxiang

    2018-02-01

    The robustness research of networked systems has drawn widespread attention in the past decade, and one of the central topics is to protect the network from external attacks by allocating appropriate defense resource to different nodes. In this paper, we apply a specific particle swarm optimization (PSO) algorithm to optimize the defense resource allocation in scale-free networks. Results reveal that PSO-based resource allocation shows a higher robustness than other resource allocation strategies such as uniform, degree-proportional, and betweenness-proportional allocation strategies. Furthermore, we find that assigning less resource to middle-degree nodes under small-scale attack and more resource to low-degree nodes under large-scale attack is conducive to improving the network robustness. Our work provides an insight into the optimal defense resource allocation pattern in scale-free networks and is helpful for designing a more robust network.

  20. Optimization of large-scale industrial systems : an emerging method

    Hammache, A.; Aube, F.; Benali, M.; Cantave, R. [Natural Resources Canada, Varennes, PQ (Canada). CANMET Energy Technology Centre

    2006-07-01

    This paper reviewed optimization methods of large-scale industrial production systems and presented a novel systematic multi-objective and multi-scale optimization methodology. The methodology was based on a combined local optimality search with global optimality determination, and advanced system decomposition and constraint handling. The proposed method focused on the simultaneous optimization of the energy, economy and ecology aspects of industrial systems (E³-ISO). The aim of the methodology was to provide guidelines for decision-making strategies. The approach was based on evolutionary algorithms (EA) with specifications including hybridization of global optimality determination with a local optimality search; a self-adaptive algorithm to account for the dynamic changes of operating parameters and design variables occurring during the optimization process; interactive optimization; advanced constraint handling and decomposition strategy; and object-oriented programming and parallelization techniques. Flowcharts of the working principles of the basic EA were presented. It was concluded that the EA uses a novel decomposition and constraint handling technique to enhance the Pareto solution search procedure for multi-objective problems. 6 refs., 9 figs.

  1. Frontal Neurons Modulate Memory Retrieval across Widely Varying Temporal Scales

    Zhang, Wen-Hua; Williams, Ziv M.

    2015-01-01

    Once a memory has formed, it is thought to undergo a gradual transition within the brain from short- to long-term storage. This putative process, however, also poses a unique problem to the memory system in that the same learned items must also be retrieved across broadly varying time scales. Here, we find that neurons in the ventrolateral…

  2. Choosing appropriate temporal and spatial scales for ecological ...

    Unknown

    region of the world, which he called a "biome", had a natural plant ... 143) sums up the current world view in ecology quite bluntly: The idea ... considerations of spatial scale, however, management practices ... not have the seed bank to respond to a historically ... predictive knowledge about the workings of natural systems ...

  3. Temporal and Spatial Scales of Labrador Sea Water Formation

    Clarke, R. A.

    1984-01-01

    Labrador Sea Water is an intermediate water found at the same density and depth range in the North Atlantic as the Mediterranean water. It is formed by convection from the sea surface to depths greater than 2 km in winter in the Western Labrador Sea. The processes leading to deep convection begin with the formation of a 200 km scale cyclonic circulation about denser than average upper layer water in the Western Labrador Sea. This circulation pattern is hypothesized to be driven by an ocean/atmosphere heat exchange that has its maximum in this region. By early March, if deep convection is taking place, one sees that this body of denser upper waters penetrates to the top of the deep temperature/salinity maximum marking the core of the North Atlantic Deep Water. We note that the horizontal scale of this body is still 100-200 km normal to the coastline.

  4. Algorithm 896: LSA: Algorithms for Large-Scale Optimization

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 36, č. 3 (2009), 16-1-16-29 ISSN 0098-3500 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords: algorithms * design * large-scale optimization * large-scale nonsmooth optimization * large-scale nonlinear least squares * large-scale nonlinear minimax * large-scale systems of nonlinear equations * sparse problems * partially separable problems * limited-memory methods * discrete Newton methods * quasi-Newton methods * primal interior-point methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.904, year: 2009

  5. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.
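
    The dynamic-programming idea of optimally locating events can be illustrated as classical optimal segmentation: choose boundaries in a 1-D feature track so that the total within-segment squared deviation is minimized. The squared-error cost and the synthetic track below are illustrative stand-ins for the TD model's spectral criterion.

```python
import numpy as np

def optimal_segmentation(x, n_segments):
    """Dynamic programming: split the 1-D sequence x into contiguous segments
    minimizing the total within-segment sum of squared deviations."""
    n = len(x)
    # Prefix sums let us evaluate any segment cost in O(1).
    s1 = np.concatenate([[0.0], np.cumsum(x)])
    s2 = np.concatenate([[0.0], np.cumsum(x ** 2)])

    def seg_cost(i, j):            # cost of segment x[i:j]
        m = j - i
        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / m

    INF = float("inf")
    cost = np.full((n_segments + 1, n + 1), INF)
    back = np.zeros((n_segments + 1, n + 1), dtype=int)
    cost[0, 0] = 0.0
    for k in range(1, n_segments + 1):
        for j in range(k, n + 1):
            for i in range(k - 1, j):
                c = cost[k - 1, i] + seg_cost(i, j)
                if c < cost[k, j]:
                    cost[k, j], back[k, j] = c, i
    # Recover the segment end indices by backtracking.
    bounds, j = [], n
    for k in range(n_segments, 0, -1):
        bounds.append(j)
        j = back[k, j]
    return sorted(bounds), cost[n_segments, n]

rng = np.random.default_rng(3)
# A piecewise-constant "spectral parameter" track with noise (illustrative).
x = np.concatenate([rng.normal(m, 0.3, 40) for m in (0.0, 2.0, -1.0, 1.0)])
print(optimal_segmentation(x, 4))
```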

  6. An operational ensemble prediction system for catchment rainfall over eastern Africa spanning multiple temporal and spatial scales

    Riddle, E. E.; Hopson, T. M.; Gebremichael, M.; Boehnert, J.; Broman, D.; Sampson, K. M.; Rostkier-Edelstein, D.; Collins, D. C.; Harshadeep, N. R.; Burke, E.; Havens, K.

    2017-12-01

    While it is not yet certain how precipitation patterns will change over Africa in the future, it is clear that effectively managing the available water resources is going to be crucial in order to mitigate the effects of water shortages and floods that are likely to occur in a changing climate. One component of effective water management is the availability of state-of-the-art and easy-to-use rainfall forecasts across multiple spatial and temporal scales. We present a web-based system for displaying and disseminating ensemble forecast and observed precipitation data over central and eastern Africa. The system provides multi-model rainfall forecasts integrated to relevant hydrological catchments for timescales ranging from one day to three months. A zoom-in feature is available to access high resolution forecasts for small-scale catchments. Time series plots and data downloads with forecasts, recent rainfall observations and climatological data are available by clicking on individual catchments. The forecasts are calibrated using a quantile regression technique and an optimal multi-model forecast is provided at each timescale. The forecast skill at the various spatial and temporal scales will be discussed, as will current applications of this tool for managing water resources in Sudan and optimizing hydropower operations in Ethiopia and Tanzania.
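
    A minimal sketch of quantile-regression calibration, assuming statsmodels is available: regress observed rainfall on the raw ensemble mean at several quantile levels and use the fitted lines as a calibrated predictive interval. The synthetic hindcasts, the single predictor and the quantile levels are assumptions; the operational system's multi-model setup is not reproduced.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)

# Synthetic hindcast set: raw ensemble-mean forecasts with a wet bias, and
# corresponding non-negative observations (illustrative relationship).
n = 1000
ens_mean = rng.gamma(2.0, 5.0, n)
obs = np.maximum(0.7 * ens_mean + rng.normal(0, 3.0, n), 0.0)

X = sm.add_constant(ens_mean)          # intercept + ensemble mean as predictor
calibrated = {}
for q in (0.1, 0.5, 0.9):
    fit = sm.QuantReg(obs, X).fit(q=q)
    calibrated[q] = np.asarray(fit.params)
    print(f"q={q}: obs ~ {fit.params[0]:.2f} + {fit.params[1]:.2f} * ens_mean")

# Calibrated 10-90% interval for a new raw forecast of 25 mm (illustrative).
new = np.array([1.0, 25.0])
print("calibrated quantiles for 25 mm raw forecast:",
      {q: round(float(np.dot(new, p)), 1) for q, p in calibrated.items()})
```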

  7. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    Zhang, Bohai

    2014-01-01

    Various continuously-indexed spatio-temporal process models have been constructed to characterize spatio-temporal dependence structures, but the computational complexity for model fitting and predictions grows in a cubic order with the size of dataset and application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is proposed to select knots automatically from a discrete set of spatio-temporal points. Our approach is applicable to nonseparable and nonstationary spatio-temporal covariance models. We illustrate the effectiveness of our method through simulation experiments and application to an ozone measurement dataset.

  8. HURON (HUman and Robotic Optimization Network) Multi-Agent Temporal Activity Planner/Scheduler

    Hua, Hook; Mrozinski, Joseph J.; Elfes, Alberto; Adumitroaie, Virgil; Shelton, Kacie E.; Smith, Jeffrey H.; Lincoln, William P.; Weisbin, Charles R.

    2012-01-01

    HURON solves the problem of how to optimize a plan and schedule for assigning multiple agents to a temporal sequence of actions (e.g., science tasks). Developed as a generic planning and scheduling tool, HURON has been used to optimize space mission surface operations. The tool has also been used to analyze lunar architectures for a variety of surface operational scenarios in order to maximize return on investment and productivity. These scenarios include numerous science activities performed by a diverse set of agents: humans, teleoperated rovers, and autonomous rovers. Once given a set of agents, activities, resources, resource constraints, temporal constraints, and dependencies, HURON computes an optimal schedule that meets a specified goal (e.g., maximum productivity or minimum time), subject to the constraints. HURON performs planning and scheduling optimization as a graph search in state-space with forward progression. Each node in the graph contains a state instance. Starting with the initial node, a graph is automatically constructed with new successive nodes of each new state to explore. The optimization uses a set of pre-conditions and post-conditions to create the children states. The Python language was adopted to not only enable more agile development, but to also allow the domain experts to easily define their optimization models. A graphical user interface was also developed to facilitate real-time search information feedback and interaction by the operator in the search optimization process. The HURON package has many potential uses in the fields of Operations Research and Management Science where this technology applies to many commercial domains requiring optimization to reduce costs. For example, optimizing a fleet of transportation truck routes, aircraft flight scheduling, and other route-planning scenarios involving multiple agent task optimization would all benefit by using HURON.

  9. Topology Optimization of Large Scale Stokes Flow Problems

    Aage, Niels; Poulsen, Thomas Harpsøe; Gersborg-Hansen, Allan

    2008-01-01

    This note considers topology optimization of large scale 2D and 3D Stokes flow problems using parallel computations. We solve problems with up to 1,125,000 elements in 2D and 128,000 elements in 3D on a shared memory computer consisting of Sun UltraSparc IV CPUs.

  10. Optimization of rainfall networks using information entropy and temporal variability analysis

    Wang, Wenqi; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Liu, Jiufu; Zou, Ying; He, Ruimin

    2018-04-01

    Rainfall networks are the most direct sources of precipitation data and their optimization and evaluation are essential and important. Information entropy can not only represent the uncertainty of rainfall distribution but can also reflect the correlation and information transmission between rainfall stations. Using entropy, this study performs optimization of rainfall networks that are of similar size located in two big cities in China, Shanghai (in Yangtze River basin) and Xi'an (in Yellow River basin), with respect to temporal variability analysis. Through an easy-to-implement greedy ranking algorithm based on the criterion called Maximum Information Minimum Redundancy (MIMR), stations of the networks in the two areas (each area is further divided into two subareas) are ranked during sliding inter-annual series and under different meteorological conditions. It is found that observation series with different starting days affect the ranking, alluding to the temporal variability during network evaluation. We propose a dynamic network evaluation framework for considering temporal variability, which ranks stations under different starting days with a fixed time window (1-year, 2-year, and 5-year). Therefore, we can identify rainfall stations which are temporarily important or redundant and provide some useful suggestions for decision makers. The proposed framework can serve as a supplement for the primary MIMR optimization approach. In addition, during different periods (wet season or dry season) the optimal network from MIMR exhibits differences in entropy values, and the optimal network from the wet season tended to produce higher entropy values. Differences in spatial distribution of the optimal networks suggest that optimizing the rainfall network for changing meteorological conditions may be advisable.
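
    A sketch in the spirit of the greedy MIMR ranking (not the exact criterion): discretize each station's series, then repeatedly pick the station that adds the most marginal entropy while penalizing mutual information with the stations already selected. The synthetic rainfall, the binning and the redundancy weight lam are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic daily rainfall at 12 stations driven by 4 shared regional signals.
n_stations, n_days = 12, 3000
base = rng.gamma(0.8, 4.0, size=(4, n_days))
mix = rng.dirichlet(np.ones(4), size=n_stations)      # station loadings
rain = mix @ base + rng.gamma(0.5, 1.0, size=(n_stations, n_days))

def discretize(x, n_bins=10):
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(x, edges)

codes = np.array([discretize(r) for r in rain])

def entropy(*cols):
    """Shannon entropy (bits) of the joint distribution of the given code columns."""
    joint = np.ravel_multi_index(cols, [c.max() + 1 for c in cols])
    _, counts = np.unique(joint, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def mutual_info(a, b):
    return entropy(a) + entropy(b) - entropy(a, b)

# Greedy "max information, min redundancy" ranking (illustrative criterion).
selected, remaining = [], list(range(n_stations))
lam = 0.8                                  # redundancy penalty weight (assumed)
while remaining:
    def score(i):
        red = np.mean([mutual_info(codes[i], codes[j]) for j in selected]) if selected else 0.0
        return entropy(codes[i]) - lam * red
    best = max(remaining, key=score)
    selected.append(best)
    remaining.remove(best)

print("station ranking (most informative first):", selected)
```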

  11. Temporal flexibility and careers: The role of large-scale organizations for physicians

    Forrest Briscoe

    2006-01-01

    This study investigates how employment in large-scale organizations affects the work lives of practicing physicians. Well-established theory associates larger organizations with bureaucratic constraint, loss of workplace control, and dissatisfaction, but this author finds that large scale is also associated with greater schedule and career flexibility. Ironically, the bureaucratic p...

  12. Improvement of the temporal resolution of cardiac CT reconstruction algorithms using an optimized filtering step

    Roux, S.; Desbat, L.; Koenig, A.; Grangeat, P.

    2005-01-01

    In this paper we study a property of the filtering step of multi-cycle reconstruction algorithms used in the field of cardiac CT. We show that the common filtering step procedure is not optimal in the case of divergent geometry and slightly decreases the temporal resolution. We propose to use the filtering procedure related to the work of Noo et al. (F. Noo, M. Defrise, R. Clackdoyle, and H. Kudo. Image reconstruction from fan-beam projections on less than a short scan. Phys. Med. Biol., 47:2525-2546, July 2002) and show that this alternative allows the optimal temporal resolution to be reached with the same computational effort. (N.C.)

  13. Optimization of ecosystem model parameters with different temporal variabilities using tower flux data and an ensemble Kalman filter

    He, L.; Chen, J. M.; Liu, J.; Mo, G.; Zhen, T.; Chen, B.; Wang, R.; Arain, M.

    2013-12-01

    Terrestrial ecosystem models have been widely used to simulate carbon, water and energy fluxes and climate-ecosystem interactions. In these models, some vegetation and soil parameters are determined based on a limited number of studies in the literature without consideration of their seasonal variations. Data assimilation (DA) provides an effective way to optimize these parameters at different time scales. In this study, an ensemble Kalman filter (EnKF) is developed and applied to optimize two key parameters of an ecosystem model, namely the Boreal Ecosystem Productivity Simulator (BEPS): (1) the maximum photosynthetic carboxylation rate (Vcmax) at 25 °C, and (2) the soil water stress factor (fw) for stomatal conductance formulation. These parameters are optimized through assimilating observations of gross primary productivity (GPP) and latent heat (LE) fluxes measured in a 74 year-old pine forest, which is part of the Turkey Point Flux Station's age-sequence sites. Vcmax is related to leaf nitrogen concentration and varies slowly over the season and from year to year. In contrast, fw varies rapidly in response to soil moisture dynamics in the root-zone. Earlier studies suggested that DA of vegetation parameters at daily time steps leads to Vcmax values that are unrealistic. To overcome the problem, we developed a three-step scheme to optimize Vcmax and fw. First, the EnKF is applied daily to obtain precursor estimates of Vcmax and fw. Then Vcmax is optimized at different time scales assuming fw is unchanged from the first step. The best temporal period or window size is then determined by analyzing the magnitude of the minimized cost-function, and the coefficient of determination (R2) and root-mean-square deviation (RMSE) of GPP and LE between simulations and observations. Finally, the daily fw value is optimized for rain-free days corresponding to the Vcmax curve from the best window size. The optimized fw is then used to model its relationship with soil moisture. We found that
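
    The EnKF parameter update itself is compact; the sketch below assimilates noisy daily "GPP" observations to estimate a single Vcmax-like parameter with a perturbed-observation ensemble Kalman filter. The toy light-response model, the prior spread and the observation error are assumptions that stand in for BEPS.

```python
import numpy as np

rng = np.random.default_rng(6)

def gpp_model(vcmax, par):
    """Toy light-response 'GPP model' standing in for BEPS (illustrative)."""
    return vcmax * par / (par + 300.0)

# Synthetic truth and daily observations of GPP with noise.
true_vcmax, obs_err = 60.0, 0.8
par = rng.uniform(200, 1500, size=120)                # daily absorbed light proxy
obs = gpp_model(true_vcmax, par) + rng.normal(0, obs_err, size=par.size)

# Ensemble of parameter values (the EnKF state is the parameter itself).
n_ens = 50
ens = rng.normal(40.0, 10.0, size=n_ens)              # prior guess: 40 +/- 10

for t in range(par.size):
    pred = gpp_model(ens, par[t])                     # forecast observations
    # Kalman gain from ensemble statistics: K = cov(theta, y) / (var(y) + R)
    k = np.cov(ens, pred)[0, 1] / (pred.var(ddof=1) + obs_err ** 2)
    # Perturbed-observation update of each member.
    ens = ens + k * (obs[t] + rng.normal(0, obs_err, n_ens) - pred)

print(f"posterior Vcmax estimate: {ens.mean():.1f} +/- {ens.std():.1f} "
      f"(truth {true_vcmax})")
```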

  14. An improved genetic algorithm for designing optimal temporal patterns of neural stimulation

    Cassar, Isaac R.; Titus, Nathan D.; Grill, Warren M.

    2017-12-01

    Objective. Electrical neuromodulation therapies typically apply constant frequency stimulation, but non-regular temporal patterns of stimulation may be more effective and more efficient. However, the design space for temporal patterns is exceedingly large, and model-based optimization is required for pattern design. We designed and implemented a modified genetic algorithm (GA) intended for designing optimal temporal patterns of electrical neuromodulation. Approach. We tested and modified standard GA methods for application to designing temporal patterns of neural stimulation. We evaluated each modification individually and all modifications collectively by comparing performance to the standard GA across three test functions and two biophysically-based models of neural stimulation. Main results. The proposed modifications of the GA significantly improved performance across the test functions and performed best when all were used collectively. The standard GA found patterns that outperformed fixed-frequency, clinically-standard patterns in biophysically-based models of neural stimulation, but the modified GA, in many fewer iterations, consistently converged to higher-scoring, non-regular patterns of stimulation. Significance. The proposed improvements to standard GA methodology reduced the number of iterations required for convergence and identified superior solutions.
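
    For orientation, a plain (unmodified) genetic algorithm over binary pulse trains looks like the sketch below: tournament selection, one-point crossover and bit-flip mutation, with a toy fitness that rewards hitting a target pulse rate with as few pulses as possible. The fitness function stands in for the biophysical neuron models, and none of the paper's GA modifications are included.

```python
import numpy as np

rng = np.random.default_rng(7)

BITS = 100            # 1 ms bins in a 100 ms repeating pattern (illustrative)
TARGET_RATE = 45.0    # desired "effect", here just a target mean pulse rate in Hz

def fitness(pattern):
    """Reward hitting the target rate with as few pulses as possible (toy objective)."""
    rate = pattern.sum() / (BITS / 1000.0)            # pulses per second
    return -abs(rate - TARGET_RATE) - 0.05 * pattern.sum()

def tournament(pop, fit, k=3):
    idx = rng.integers(0, len(pop), k)
    return pop[idx[np.argmax(fit[idx])]]

pop = rng.integers(0, 2, size=(60, BITS))
for gen in range(200):
    fit = np.array([fitness(p) for p in pop])
    children = []
    for _ in range(len(pop)):
        a, b = tournament(pop, fit), tournament(pop, fit)
        cut = rng.integers(1, BITS)                   # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(BITS) < 0.01                # bit-flip mutation
        child[flip] ^= 1
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(p) for p in pop])]
print(f"best pattern: {best.sum()} pulses -> {best.sum() / (BITS / 1000.0):.0f} Hz")
```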

  15. Optimization of laboratory scale production and purification of ...

    Microcystin content is however highly variable and optimised culture conditions are essential to produce viable yields of microcystin for purification. We describe the optimization of culture conditions and evaluation of various purification methods to enhance the yield of microcystin from laboratory scale culture.

  16. Resource Communication. Temporal optimization of fuel treatment design in blue gum (Eucalyptus globulus) plantations

    Martin, A.; Botequim, B.; Oliveira, T.M.; Ager, A.; Pirotti, F.

    2016-07-01

    Aim of the study: This study was conducted to support fire and forest management planning in eucalypt plantations based on economic, ecological and fire prevention criteria, with a focus on strategic prioritisation of fuel treatments over time. The central objective was to strategically locate fuel treatments to minimise losses from wildfire while meeting budget constraints and demands for wood supply for the pulp industry and conserving carbon. Area of study: The study area was located in Serra do Socorro (Torres Vedras, Portugal, covering ~1449 ha) of predominantly Eucalyptus globulus Labill forests managed for pulpwood by The Navigator Company. Material and methods: At each of four temporal stages (2015-2018-2021-2024) we simulated: (1) surface and canopy fuels, timber volume (m3 ha-1) and carbon storage (Mg ha-1); (2) fire behaviour characteristics, i.e. rate of spread (m min-1), and flame length (m), with FlamMap fire modelling software; (3) optimal treatment locations as determined by the Landscape Treatment Designer (LTD). Main results: The higher pressure of fire behaviour in the earlier stages of the study period triggered most of the spatial fuel treatments within eucalypt plantations in a juvenile stage. At later stages fuel treatments also included shrubland areas. The results were consistent with observations and simulation results that show high fire hazard in juvenile eucalypt stands. Research highlights: Forest management planning in commercial eucalypt plantations can potentially accomplish multiple objectives such as augmenting profits and sustaining ecological assets while reducing wildfire risk at landscape scale. However, limitations of simulation models including FlamMap and LTD are important to recognise in studies of long term wildfire management strategies. (Author)

  17. The trend of the multi-scale temporal variability of precipitation in Colorado River Basin

    Jiang, P.; Yu, Z.

    2011-12-01

    Hydrological problems like estimation of flood and drought frequencies under future climate change are not well addressed as a result of the inability of current climate models to provide reliable predictions (especially for precipitation) at time scales shorter than 1 month. In order to assess the possible impacts that the multi-scale temporal distribution of precipitation may have on the hydrological processes in the Colorado River Basin (CRB), a comparative analysis of the multi-scale temporal variability of precipitation as well as the trend of extreme precipitation is conducted in four regions controlled by different climate systems. Multi-scale precipitation variability including within-storm patterns and intra-annual, inter-annual and decadal variabilities will be analyzed to explore the possible trends of storm durations, inter-storm periods, average storm precipitation intensities and extremes under both long-term natural climate variability and human-induced warming. Furthermore, we will examine the ability of current climate models to simulate the multi-scale temporal variability and extremes of precipitation. On the basis of these analyses, a statistical downscaling method will be developed to disaggregate the future precipitation scenarios, which will provide a more reliable and finer temporal-scale precipitation time series for hydrological modeling. Analysis results and downscaling results will be presented.

  18. Network synchronization: optimal and pessimal scale-free topologies

    Donetti, Luca [Departamento de Electronica y Tecnologia de Computadores and Instituto de Fisica Teorica y Computacional Carlos I, Facultad de Ciencias, Universidad de Granada, 18071 Granada (Spain); Hurtado, Pablo I; Munoz, Miguel A [Departamento de Electromagnetismo y Fisica de la Materia and Instituto Carlos I de Fisica Teorica y Computacional Facultad de Ciencias, Universidad de Granada, 18071 Granada (Spain)], E-mail: mamunoz@onsager.ugr.es

    2008-06-06

    By employing a recently introduced optimization algorithm we construct optimally synchronizable (unweighted) networks for any given scale-free degree distribution. We explore how the optimization process affects degree-degree correlations and observe a generic tendency toward disassortativity. Still, we show that there is not a one-to-one correspondence between synchronizability and disassortativity. On the other hand, we study the nature of optimally un-synchronizable networks, that is, networks whose topology minimizes the range of stability of the synchronous state. The resulting 'pessimal networks' turn out to have a highly assortative string-like structure. We also derive a rigorous lower bound for the Laplacian eigenvalue ratio controlling synchronizability, which helps understanding the impact of degree correlations on network synchronizability.
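
    The synchronizability measure discussed here is directly computable: for a connected graph it is the ratio λ_N/λ_2 of the largest to the smallest nonzero Laplacian eigenvalue, with smaller ratios indicating a wider stability range for the synchronous state. A minimal sketch, comparing a scale-free and a small-world graph (both illustrative choices), follows.

```python
import numpy as np
import networkx as nx

def eigenratio(G):
    """lambda_N / lambda_2 of the graph Laplacian; smaller means a wider
    stability range for the synchronous state (more synchronizable)."""
    lam = np.sort(nx.laplacian_spectrum(G))
    return lam[-1] / lam[1]

# Illustrative comparison: a scale-free (BA) graph vs. a small-world (WS) graph.
ba = nx.barabasi_albert_graph(200, 3, seed=8)
ws = nx.connected_watts_strogatz_graph(200, 6, 0.3, seed=8)

for name, G in [("scale-free (BA)", ba), ("small-world (WS)", ws)]:
    r = eigenratio(G)
    a = nx.degree_assortativity_coefficient(G)
    print(f"{name:>16}: eigenratio = {r:7.1f}, degree assortativity = {a:+.2f}")
```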

  19. Network synchronization: optimal and pessimal scale-free topologies

    Donetti, Luca; Hurtado, Pablo I; Munoz, Miguel A

    2008-01-01

    By employing a recently introduced optimization algorithm we construct optimally synchronizable (unweighted) networks for any given scale-free degree distribution. We explore how the optimization process affects degree-degree correlations and observe a generic tendency toward disassortativity. Still, we show that there is not a one-to-one correspondence between synchronizability and disassortativity. On the other hand, we study the nature of optimally un-synchronizable networks, that is, networks whose topology minimizes the range of stability of the synchronous state. The resulting 'pessimal networks' turn out to have a highly assortative string-like structure. We also derive a rigorous lower bound for the Laplacian eigenvalue ratio controlling synchronizability, which helps understanding the impact of degree correlations on network synchronizability

  20. Hierarchical optimal control of large-scale nonlinear chemical processes.

    Ramezani, Mohammad Hossein; Sadati, Nasser

    2009-01-01

    In this paper, a new approach is presented for optimal control of large-scale chemical processes. In this approach, the chemical process is decomposed into smaller sub-systems at the first level, and a coordinator at the second level, for which a two-level hierarchical control strategy is designed. For this purpose, each sub-system in the first level can be solved separately, by using any conventional optimization algorithm. In the second level, the solutions obtained from the first level are coordinated using a new gradient-type strategy, which is updated by the error of the coordination vector. The proposed algorithm is used to solve the optimal control problem of a complex nonlinear chemical stirred tank reactor (CSTR), where its solution is also compared with the ones obtained using the centralized approach. The simulation results show the efficiency and the capability of the proposed hierarchical approach, in finding the optimal solution, over the centralized method.

  1. Computational optimization of catalyst distributions at the nano-scale

    Ström, Henrik

    2017-01-01

    Highlights: • Macroscopic data sampled from a DSMC simulation contain statistical scatter. • Simulated annealing is evaluated as an optimization algorithm with DSMC. • Proposed method is more robust than a gradient search method. • Objective function uses the mass transfer rate instead of the reaction rate. • Combined algorithm is more efficient than a macroscopic overlay method. - Abstract: Catalysis is a key phenomenon in a great number of energy processes, including feedstock conversion, tar cracking, emission abatement and optimizations of energy use. Within heterogeneous, catalytic nano-scale systems, the chemical reactions typically proceed at very high rates at a gas–solid interface. However, the statistical uncertainties characteristic of molecular processes pose efficiency problems for computational optimizations of such nano-scale systems. The present work investigates the performance of a Direct Simulation Monte Carlo (DSMC) code with a stochastic optimization heuristic for evaluations of an optimal catalyst distribution. The DSMC code treats molecular motion with homogeneous and heterogeneous chemical reactions in wall-bounded systems and algorithms have been devised that allow optimization of the distribution of a catalytically active material within a three-dimensional duct (e.g. a pore). The objective function is the outlet concentration of computational molecules that have interacted with the catalytically active surface, and the optimization method used is simulated annealing. The application of a stochastic optimization heuristic is shown to be more efficient within the present DSMC framework than using a macroscopic overlay method. Furthermore, it is shown that the performance of the developed method is superior to that of a gradient search method for the current class of problems. Finally, the advantages and disadvantages of different types of objective functions are discussed.
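
    The pairing of simulated annealing with a noisy Monte Carlo objective can be sketched without the DSMC solver: the "catalyst layout" is a binary vector over wall segments, the objective is a placeholder function evaluated with added noise, and a standard Metropolis acceptance with geometric cooling is used. Everything below is an illustrative stand-in for the molecular simulation.

```python
import numpy as np

rng = np.random.default_rng(9)

N_SEGMENTS, BUDGET = 30, 10          # 30 wall segments, 10 may carry catalyst

def noisy_objective(layout):
    """Placeholder for the DSMC-estimated conversion: favors catalyst placed
    early along the duct, evaluated with Monte-Carlo-like noise (illustrative)."""
    weights = np.exp(-np.arange(N_SEGMENTS) / 10.0)
    return float(layout @ weights) + rng.normal(0, 0.05)

def random_layout():
    idx = rng.choice(N_SEGMENTS, BUDGET, replace=False)
    layout = np.zeros(N_SEGMENTS, dtype=int)
    layout[idx] = 1
    return layout

def neighbor(layout):
    """Move one catalyst-coated segment to a currently bare segment."""
    new = layout.copy()
    src = rng.choice(np.flatnonzero(new == 1))
    dst = rng.choice(np.flatnonzero(new == 0))
    new[src], new[dst] = 0, 1
    return new

current = random_layout()
current_val = noisy_objective(current)
best, best_val = current, current_val
T = 1.0
for step in range(3000):
    cand = neighbor(current)
    cand_val = noisy_objective(cand)
    # Metropolis acceptance on the (noisy) objective difference.
    if cand_val > current_val or rng.random() < np.exp((cand_val - current_val) / T):
        current, current_val = cand, cand_val
        if cand_val > best_val:
            best, best_val = cand, cand_val
    T *= 0.999                        # geometric cooling schedule

print("coated segments:", np.flatnonzero(best))
```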

  2. BILGO: Bilateral greedy optimization for large scale semidefinite programming

    Hao, Zhifeng

    2013-10-03

    Many machine learning tasks (e.g. metric and manifold learning problems) can be formulated as convex semidefinite programs. To enable the application of these tasks on a large-scale, scalability and computational efficiency are considered as desirable properties for a practical semidefinite programming algorithm. In this paper, we theoretically analyze a new bilateral greedy optimization (denoted BILGO) strategy in solving general semidefinite programs on large-scale datasets. As compared to existing methods, BILGO employs a bilateral search strategy during each optimization iteration. In such an iteration, the current semidefinite matrix solution is updated as a bilateral linear combination of the previous solution and a suitable rank-1 matrix, which can be efficiently computed from the leading eigenvector of the descent direction at this iteration. By optimizing for the coefficients of the bilateral combination, BILGO reduces the cost function in every iteration until the KKT conditions are fully satisfied, thus, it tends to converge to a global optimum. In fact, we prove that BILGO converges to the global optimal solution at a rate of O(1/k), where k is the iteration counter. The algorithm thus successfully combines the efficiency of conventional rank-1 update algorithms and the effectiveness of gradient descent. Moreover, BILGO can be easily extended to handle low rank constraints. To validate the effectiveness and efficiency of BILGO, we apply it to two important machine learning tasks, namely Mahalanobis metric learning and maximum variance unfolding. Extensive experimental results clearly demonstrate that BILGO can solve large-scale semidefinite programs efficiently.

  3. BILGO: Bilateral greedy optimization for large scale semidefinite programming

    Hao, Zhifeng; Yuan, Ganzhao; Ghanem, Bernard

    2013-01-01

    Many machine learning tasks (e.g. metric and manifold learning problems) can be formulated as convex semidefinite programs. To enable the application of these tasks on a large-scale, scalability and computational efficiency are considered as desirable properties for a practical semidefinite programming algorithm. In this paper, we theoretically analyze a new bilateral greedy optimization (denoted BILGO) strategy in solving general semidefinite programs on large-scale datasets. As compared to existing methods, BILGO employs a bilateral search strategy during each optimization iteration. In such an iteration, the current semidefinite matrix solution is updated as a bilateral linear combination of the previous solution and a suitable rank-1 matrix, which can be efficiently computed from the leading eigenvector of the descent direction at this iteration. By optimizing for the coefficients of the bilateral combination, BILGO reduces the cost function in every iteration until the KKT conditions are fully satisfied, thus, it tends to converge to a global optimum. In fact, we prove that BILGO converges to the global optimal solution at a rate of O(1/k), where k is the iteration counter. The algorithm thus successfully combines the efficiency of conventional rank-1 update algorithms and the effectiveness of gradient descent. Moreover, BILGO can be easily extended to handle low rank constraints. To validate the effectiveness and efficiency of BILGO, we apply it to two important machine learning tasks, namely Mahalanobis metric learning and maximum variance unfolding. Extensive experimental results clearly demonstrate that BILGO can solve large-scale semidefinite programs efficiently.

  4. Integration, Provenance, and Temporal Queries for Large-Scale Knowledge Bases

    Gao, Shi

    2016-01-01

    Knowledge bases that summarize web information in RDF triples deliver many benefits, including support for natural language question answering and powerful structured queries that extract encyclopedic knowledge via SPARQL. Large scale knowledge bases grow rapidly in terms of scale and significance, and undergo frequent changes in both schema and content. Two critical problems have thus emerged: (i) how to support temporal queries that explore the history of knowledge bases or flash-back to th...

  5. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    Zhang, Bohai; Sang, Huiyan; Huang, Jianhua Z.

    2014-01-01

    of dataset and application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov

  6. Simplified Summative Temporal Bone Dissection Scale Demonstrates Equivalence to Existing Measures.

    Pisa, Justyn; Gousseau, Michael; Mowat, Stephanie; Westerberg, Brian; Unger, Bert; Hochman, Jordan B

    2018-01-01

    Emphasis on patient safety has created the need for quality assessment of fundamental surgical skills. Existing temporal bone rating scales are laborious, subject to evaluator fatigue, and contain inconsistencies when conferring points. To address these deficiencies, a novel binary assessment tool was designed and validated against a well-established rating scale. Residents completed a mastoidectomy with posterior tympanotomy on identical 3D-printed temporal bone models. Four neurotologists evaluated each specimen using a validated scale (Welling) and a newly developed "CanadaWest" scale, with scoring repeated after a 4-week interval. Nineteen participants were clustered into junior, intermediate, and senior cohorts. An ANOVA found significant differences between performance of the junior-intermediate and junior-senior cohorts for both Welling and CanadaWest scales (P < .05). Cohen's kappa found strong intrarater reliability (0.711) and a high degree of interrater reliability (0.858) for the CanadaWest scale, similar to the Welling scale values of 0.713 and 0.917, respectively. The CanadaWest scale was facile and delineated performance by experience level with strong intrarater reliability. Comparable to the validated Welling Scale, it distinguished junior from senior trainees but was challenged in differentiating intermediate and senior trainee performance.
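
    The reliability statistics quoted here are Cohen's kappa values; a minimal sketch of the intrarater computation on made-up binary checklist scores follows (using sklearn's cohen_kappa_score, an assumption about tooling rather than the study's statistics package).

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Made-up binary checklist scores (1 = criterion met) for 19 specimens,
# rated twice by the same neurotologist four weeks apart (illustrative data).
rng = np.random.default_rng(10)
first_pass = rng.integers(0, 2, size=19)
agree = rng.random(19) < 0.9             # ~90% raw agreement between passes
second_pass = np.where(agree, first_pass, 1 - first_pass)

kappa_intra = cohen_kappa_score(first_pass, second_pass)
print(f"intra-rater Cohen's kappa: {kappa_intra:.3f}")
```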

  7. Exploring quantum control landscapes: Topology, features, and optimization scaling

    Moore, Katharine W.; Rabitz, Herschel

    2011-01-01

    Quantum optimal control experiments and simulations have successfully manipulated the dynamics of systems ranging from atoms to biomolecules. Surprisingly, these collective works indicate that the effort (i.e., the number of algorithmic iterations) required to find an optimal control field appears to be essentially invariant to the complexity of the system. The present work explores this matter in a series of systematic optimizations of the state-to-state transition probability on model quantum systems with the number of states N ranging from 5 through 100. The optimizations occur over a landscape defined by the transition probability as a function of the control field. Previous theoretical studies on the topology of quantum control landscapes established that they should be free of suboptimal traps under reasonable physical conditions. The simulations in this work include nearly 5000 individual optimization test cases, all of which confirm this prediction by fully achieving optimal population transfer of at least 99.9%, given careful attention to numerical procedures to ensure that the controls are free of constraints. Collectively, the simulation results additionally show invariance of the required search effort to the system dimension N. This behavior is rationalized in terms of the structural features of the underlying control landscape. The very attractive observed scaling with system complexity may be understood by considering the distance traveled on the control landscape during a search and the magnitude of the control landscape slope. Exceptions to this favorable scaling behavior can arise when the initial control field fluence is too large or when the target final state recedes from the initial state as N increases.

  8. Using a weather generator to downscale spatio-temporal precipitation at urban scale

    Sørup, Hjalte Jomo Danielsen; Christensen, Ole Bøssing; Arnbjerg-Nielsen, Karsten

    In recent years, urban flooding has occurred in Denmark due to very local extreme precipitation events with very short lifetimes. Several of these floods have been among the most severe ever experienced. The current study demonstrates the applicability of the Spatio-Temporal Neyman-Scott Rectangular ... the observed spatio-temporal differences at very fine scale for all measured parameters. For downscaling by perturbation with a climate change signal, precipitation from four different regional climate model simulations has been analysed. The analysed models are two runs from the ENSEMBLES (RACMO
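
    A single-site Neyman-Scott rectangular-pulses simulator is short enough to sketch: storms arrive as a Poisson process, each spawns a Poisson number of rain cells displaced exponentially in time, and each cell contributes a rectangular pulse with exponential duration and intensity. All rate parameters below are illustrative, and the spatio-temporal (multi-site) extension used in the study is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(11)

def nsrp_simulate(hours, lam=0.02, mean_cells=5.0, beta=0.5, eta=0.8, mean_int=2.0):
    """Single-site Neyman-Scott rectangular pulses (rates per hour, illustrative):
    lam      - storm-origin arrival rate
    beta     - rate of exponential cell displacement after the storm origin
    eta      - rate of exponential cell duration
    mean_int - mean exponential cell intensity (mm/h)
    Returns hourly rainfall depths."""
    rain = np.zeros(hours)
    t_storm = 0.0
    while True:
        t_storm += rng.exponential(1.0 / lam)
        if t_storm > hours:
            break
        for _ in range(rng.poisson(mean_cells)):
            start = t_storm + rng.exponential(1.0 / beta)
            dur = rng.exponential(1.0 / eta)
            intensity = rng.exponential(mean_int)
            # Spread the rectangular pulse over the hourly bins it overlaps.
            lo, hi = int(start), int(min(start + dur, hours))
            for h in range(lo, hi + 1):
                if 0 <= h < hours:
                    overlap = min(h + 1, start + dur) - max(h, start)
                    rain[h] += intensity * max(overlap, 0.0)
    return rain

hourly = nsrp_simulate(24 * 365)
print(f"annual total {hourly.sum():.0f} mm, wet-hour fraction {np.mean(hourly > 0.1):.2f}, "
      f"max hourly {hourly.max():.1f} mm")
```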

  9. Spatio-temporal modelling of electrical supply systems to optimize the site planning process for the "power to mobility" technology

    Karl, Florian; Zink, Roland

    2016-04-01

    The transformation of the energy sector towards decentralized renewable energies (RE) also requires storage systems to ensure security of supply. The new "Power to Mobility" (PtM) technology is one potential solution that uses electrical overproduction to produce methane for, e.g., gas vehicles. Motivated by this, the paper presents a methodology for GIS-based temporal modelling of the power grid to optimize the site planning process for the new PtM technology. The modelling approach is based on a combination of the software QuantumGIS for the geographical and topological energy supply structure and OpenDSS for the network modelling. For a case study (work in progress) of the city of Straubing (Lower Bavaria), the parameters of the model are quantified. The presentation will discuss the methodology as well as the first results with a view to application on a regional scale.

  10. Research on Optimal Observation Scale for Damaged Buildings after Earthquake Based on Optimal Feature Space

    Chen, J.; Chen, W.; Dou, A.; Li, W.; Sun, Y.

    2018-04-01

    A new information extraction method for damaged buildings, rooted in an optimal feature space, is put forward on the basis of the traditional object-oriented method. In this new method, the ESP (estimate of scale parameter) tool is used to optimize the segmentation of the image. Then the distance matrix and minimum separation distance of all kinds of surface features are calculated through sample selection to find the optimal feature space, which is finally applied to extract damaged buildings from the post-earthquake image. The overall extraction accuracy reaches 83.1%, with a kappa coefficient of 0.813. The new information extraction method greatly improves the extraction accuracy and efficiency compared with the traditional object-oriented method, and shows good potential for wider use in the information extraction of damaged buildings. In addition, the new method can be used for information extraction from images of damaged buildings at different resolutions, and then to seek the optimal observation scale of damaged buildings through accuracy evaluation. The results suggest that the optimal observation scale of damaged buildings is between 1 m and 1.2 m, which provides a reference for future information extraction of damaged buildings.

  11. Modeling Reservoir-River Networks in Support of Optimizing Seasonal-Scale Reservoir Operations

    Villa, D. L.; Lowry, T. S.; Bier, A.; Barco, J.; Sun, A.

    2011-12-01

    HydroSCOPE (Hydropower Seasonal Concurrent Optimization of Power and the Environment) is a seasonal time-scale tool for scenario analysis and optimization of reservoir-river networks. Developed in MATLAB, HydroSCOPE is an object-oriented model that simulates basin-scale dynamics with an objective of optimizing reservoir operations to maximize revenue from power generation, reliability in the water supply, environmental performance, and flood control. HydroSCOPE is part of a larger toolset that is being developed through a Department of Energy multi-laboratory project. This project's goal is to provide conventional hydropower decision makers with better information to execute their day-ahead and seasonal operations and planning activities by integrating water balance and operational dynamics across a wide range of spatial and temporal scales. This presentation details the modeling approach and functionality of HydroSCOPE. HydroSCOPE consists of a river-reservoir network model and an optimization routine. The river-reservoir network model simulates the heat and water balance of river-reservoir networks for time-scales up to one year. The optimization routine software, DAKOTA (Design Analysis Kit for Optimization and Terascale Applications - dakota.sandia.gov), is seamlessly linked to the network model and is used to optimize daily volumetric releases from the reservoirs to best meet a set of user-defined constraints, such as maximizing revenue while minimizing environmental violations. The network model uses 1-D approximations for both the reservoirs and river reaches and is able to account for surface and sediment heat exchange as well as ice dynamics for both models. The reservoir model also accounts for inflow, density, and withdrawal zone mixing, and diffusive heat exchange. Routing for the river reaches is accomplished using a modified Muskingum-Cunge approach that automatically calculates the internal timestep and sub-reach lengths to match the conditions of
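
    The routing scheme named here reduces, with fixed parameters, to the classical three-coefficient Muskingum update; the sketch below routes an illustrative flood wave with assumed K and X (the Cunge modification, which recomputes K and X from channel properties at each step, is omitted).

```python
import numpy as np

def muskingum_route(inflow, K=2.5, X=0.2, dt=1.0):
    """Classical Muskingum routing: O2 = C0*I2 + C1*I1 + C2*O1.
    K (storage constant) and dt share the same time unit (here hours)."""
    D = 2.0 * K * (1.0 - X) + dt
    c0 = (dt - 2.0 * K * X) / D
    c1 = (dt + 2.0 * K * X) / D
    c2 = (2.0 * K * (1.0 - X) - dt) / D
    out = np.empty_like(inflow, dtype=float)
    out[0] = inflow[0]                          # assume initial steady state
    for t in range(1, len(inflow)):
        out[t] = c0 * inflow[t] + c1 * inflow[t - 1] + c2 * out[t - 1]
    return out

# Illustrative triangular flood wave entering the reach (m^3/s).
inflow = np.concatenate([np.linspace(10, 100, 10), np.linspace(100, 10, 19)])
outflow = muskingum_route(inflow)
print(f"peak inflow {inflow.max():.0f} -> peak outflow {outflow.max():.1f} m^3/s, "
      f"peak delayed by {outflow.argmax() - inflow.argmax()} h")
```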

  12. Airfoil optimization for noise emission problem on small scale turbines

    Gocmen, Tuhfe; Ozerdem, Baris [Mechanical Engineering Department, Yzmir Institute of Technology (Turkey)

    2011-07-01

    Wind power is a preferred natural resource and has had benefits for the energy industry and for the environment all over the world. However, noise emission from wind turbines is becoming a major concern today. This study focused on small-scale wind turbines close to urban areas and proposes an optimum set of six airfoils to address noise emission concerns and performance criteria. The optimization process aimed to decrease noise emission levels and enhance the aerodynamic performance of a small-scale wind turbine. The study determined the sources and the operating conditions of broadband noise emissions. A new design is presented which enhances aerodynamic performance and at the same time reduces airfoil self-noise. It used popular aerodynamic functions and codes based on aero-acoustic empirical models. Through numerical computations and analyses, it is possible to derive useful improvements that can be made to commercial airfoils for small-scale wind turbines.

  13. Modified temporal approach to meta-optimizing an extended Kalman filter's parameters

    Salmon

    2014-07-01

    Full Text Available 2014 IEEE International Geoscience and Remote Sensing Symposium, Québec, Canada, 13-18 July 2014. A modified temporal approach to meta-optimizing an Extended Kalman Filter's parameters. B. P. Salmon; W. Kleynhans; J. C. Olivier; W. C. Olding; K. J. Wessels; F. van den Bergh...

  14. Large-Scale Optimization for Bayesian Inference in Complex Systems

    Willcox, Karen [MIT; Marzouk, Youssef [MIT

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT–Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas–Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to

  15. Cost Optimal Elastic Auto-Scaling in Cloud Infrastructure

    Mukhopadhyay, S.; Sidhanta, S.; Ganguly, S.; Nemani, R. R.

    2014-12-01

    Today, elastic scaling is a critical part of leveraging the cloud. Elastic scaling refers to adding resources only when they are needed and deleting resources when not in use, which ensures that compute/server resources are not over-provisioned. Currently, Amazon and Windows Azure are the only two platform providers that offer auto-scaling of cloud resources, where servers are automatically added and deleted. However, these solutions fall short on the following key features: A) they require explicit policy definitions, such as server load thresholds, and therefore lack any predictive intelligence to make optimal decisions; B) they do not decide on the right size of resources and thereby do not produce a cost-optimal resource pool. In a typical cloud deployment model, we consider two types of application scenario: A. batch-processing jobs → the Hadoop/Big Data case; B. transactional applications → any application that processes continuous transactions (request/response). With reference to the classical queueing model, we are trying to model a scenario where servers have a price and a capacity (size) and the system can add or delete servers to maintain a certain queue length. Classical queueing models apply to scenarios where the number of servers is constant, so we cannot apply stationary system analysis in this case. We investigate the following questions: 1. Can we define a job-queue metric and use it to predict the resource requirement in a quasi-stationary way? Can we map that into an optimal sizing problem? 2. Do we need server-level load information (CPU/data) to characterize the size requirement? How do we learn that based on job type?
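
    As a rough illustration of the "right size of resource" question posed above, the sketch below sizes a homogeneous server pool from a predicted arrival rate using a simple utilization target. The server types, rates, and prices are hypothetical, and this is not the queueing model the authors propose.

```python
# Hedged sketch of quasi-stationary capacity planning: pick the cheapest single
# server size that keeps utilization below a target for the predicted arrival
# rate. Server sizes, rates and prices are hypothetical; the paper's
# queue-length-based model is not reproduced here.
import math

SERVER_TYPES = {            # name: (service rate in jobs/min, price in $/hour)
    "small":  (10.0, 0.10),
    "medium": (25.0, 0.22),
    "large":  (60.0, 0.48),
}

def cheapest_pool(arrival_rate, target_util=0.7):
    best = None
    for name, (mu, price) in SERVER_TYPES.items():
        n = math.ceil(arrival_rate / (mu * target_util))   # servers needed
        cost = n * price
        if best is None or cost < best[2]:
            best = (name, n, cost)
    return best

name, n, cost = cheapest_pool(arrival_rate=180.0)   # predicted jobs/min
print(f"provision {n} x {name} servers, approx ${cost:.2f}/hour")
```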

  16. Particle swarm optimization with scale-free interactions.

    Chen Liu

    Full Text Available The particle swarm optimization (PSO) algorithm, in which individuals collaborate with their interacting neighbors, like birds flocking, to search for the optima, has been successfully applied in a wide range of fields pertaining to searching and convergence. Here we employ a scale-free network to represent the inter-individual interactions in the population; the resulting algorithm is named SF-PSO. In contrast to the traditional PSO with fully-connected topology or regular topology, the scale-free topology used in SF-PSO incorporates the diversity of individuals in searching and information dissemination ability, leading to a quite different optimization process. Systematic results with respect to several standard test functions demonstrate that SF-PSO gives rise to a better balance between the convergence speed and the optimum quality, accounting for its much better performance than that of the traditional PSO algorithms. We further explore the dynamical searching process microscopically, finding that the cooperation of hub nodes and non-hub nodes plays a crucial role in optimizing the convergence process. Our work may have implications in computational intelligence and complex networks.
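
    A minimal sketch of the idea, assuming a Barabási-Albert graph as the scale-free interaction topology and the standard sphere test function: each particle is attracted to the best position found in its own graph neighborhood rather than the global best. The parameter values are illustrative and not the settings used in the paper.

```python
# Minimal sketch of PSO with a scale-free interaction topology: each particle
# follows the best position found within its neighborhood on a Barabasi-Albert
# graph instead of the global best. Parameters are illustrative only.
import numpy as np
import networkx as nx

def sphere(x):                     # standard test function
    return float(np.sum(x ** 2))

def sf_pso(n_particles=30, dim=10, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    graph = nx.barabasi_albert_graph(n_particles, 2, seed=seed)
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([sphere(p) for p in pos])
    for _ in range(iters):
        for i in range(n_particles):
            neigh = list(graph.neighbors(i)) + [i]
            lbest = pbest[neigh[np.argmin(pbest_val[neigh])]]
            r1, r2 = rng.random(dim), rng.random(dim)
            vel[i] = w * vel[i] + c1 * r1 * (pbest[i] - pos[i]) + c2 * r2 * (lbest - pos[i])
            pos[i] += vel[i]
            val = sphere(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i].copy(), val
    return pbest_val.min()

print("best value found:", sf_pso())
```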

  17. Differential Spatio-temporal Multiband Satellite Image Clustering using K-means Optimization With Reinforcement Programming

    Irene Erlyn Wina Rachmawan

    2015-06-01

    Full Text Available Deforestation is one of the crucial issues in Indonesia, which currently has the world's highest deforestation rate. On the other hand, multispectral imagery is a rich source of data for studying the spatial and temporal variability of the environment, such as deforested areas. This research presents differential image processing methods for detecting deforestation-related land change. Our differential image processing algorithms extract and indicate affected areas automatically. The proposed approach extracts information from multiband satellite images and calculates the deforested area per year using a temporal dataset. However, multiband satellite imagery has a large data volume that is difficult to handle during segmentation. K-means clustering is commonly considered a powerful clustering algorithm because of its ability to cluster big data, but it is sensitive to its initial centroids, which can lead to poor performance. In this paper we propose a new approach that optimizes K-means clustering using Reinforcement Programming for clustering multispectral images. We build a new mechanism for generating initial centroids by applying exploration and exploitation knowledge from Reinforcement Programming, which leads to better K-means clusters. We select multispectral Landsat 7 images from the past ten years over Medawai, Borneo, Indonesia, and segment them into two classes: deforested land and forest. We conducted a series of experiments and compared the results of K-means with Reinforcement-Programming-optimized initial centroids against standard K-means without the optimization step. Keywords: Deforestation, multispectral images, Landsat, automatic clustering, K-means.
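
    Since the Reinforcement Programming initialization itself is not described here in enough detail to reproduce, the sketch below uses k-means++ seeding as a stand-in for an improved initial-centroid strategy and clusters a hypothetical multiband image into two classes, mirroring the deforested/forest segmentation described above.

```python
# Sketch of clustering multispectral pixels with K-means, using k-means++
# seeding as a stand-in for the Reinforcement-Programming-based initialization
# described above (which is not reproduced here). The input array is a
# hypothetical (rows, cols, bands) image.
import numpy as np
from sklearn.cluster import KMeans

def cluster_multispectral(image, n_clusters=2, seed=0):
    rows, cols, bands = image.shape
    pixels = image.reshape(-1, bands).astype(float)
    km = KMeans(n_clusters=n_clusters, init="k-means++", n_init=10, random_state=seed)
    labels = km.fit_predict(pixels)
    return labels.reshape(rows, cols), km.cluster_centers_

# Hypothetical 7-band scene; in practice this would be a Landsat 7 subset.
rng = np.random.default_rng(1)
scene = rng.random((100, 100, 7))
label_map, centers = cluster_multispectral(scene, n_clusters=2)
print("cluster sizes:", np.bincount(label_map.ravel()))
```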

  18. Optimal knockout strategies in genome-scale metabolic networks using particle swarm optimization.

    Nair, Govind; Jungreuthmayer, Christian; Zanghellini, Jürgen

    2017-02-01

    Knockout strategies, particularly the concept of constrained minimal cut sets (cMCSs), are an important part of the arsenal of tools used in manipulating metabolic networks. Given a specific design, cMCSs can be calculated even in genome-scale networks. We would, however, like to find not only the optimal intervention strategy for a given design but also the best possible design. Our solution (PSOMCS) is to use particle swarm optimization (PSO) along with the direct calculation of cMCSs from the stoichiometric matrix to obtain optimal designs satisfying multiple objectives. To illustrate the working of PSOMCS, we apply it to a toy network. Next we show its superiority by comparing its performance against other comparable methods on a medium-sized E. coli core metabolic network. PSOMCS not only finds solutions comparable to previously published results but is also orders of magnitude faster. Finally, we use PSOMCS to predict knockouts satisfying multiple objectives in a genome-scale metabolic model of E. coli and compare it with OptKnock and RobustKnock. PSOMCS finds competitive knockout strategies and designs compared to other current methods and is in some cases significantly faster. It can be used to identify knockouts that will force desired optimal behaviors in large and genome-scale metabolic networks. It will be even more useful as larger metabolic models of industrially relevant organisms become available.

  19. Long term socio-ecological research across temporal and spatial scales

    Singh, S. J.; Haberl, H.

    2012-04-01

    Understanding trajectories of change in coupled socio-ecological (or human-environment) systems requires monitoring and analysis at several spatial and temporal scales. Long-term ecosystem research (LTER) is a strand of research coupled with observation systems and infrastructures (LTER sites) aimed at understanding how global change affects ecosystems around the world. In recent years it has been increasingly recognized that sustainability concerns require extending this approach to long-term socio-ecological research, i.e. a more integrated perspective that focuses on interaction processes between society and ecosystems over longer time periods. Thus, Long-Term Socio-Ecological Research, abbreviated LTSER, aims at observing, analyzing, understanding and modelling changes in coupled socio-ecological systems over long periods of time. Indeed, the magnitude of the problems we now face is an outcome of a much longer process, accelerated by industrialisation since the nineteenth century. The paper will provide an overview of a book (in press) on LTSER with particular emphasis on 'socio-ecological transitions' in terms of material, energy and land use dynamics across temporal and spatial scales.

  20. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    Wu, Xiaocui; Ju, Weimin; Zhou, Yanlian

    2015-01-01

    The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear...... two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL...

  1. Spatial-temporal noise reduction method optimized for real-time implementation

    Romanenko, I. V.; Edirisinghe, E. A.; Larkin, D.

    2013-02-01

    Image de-noising in the spatial-temporal domain has been studied in depth in the field of digital image processing. However, the complexity of such algorithms often leads to high hardware resource usage, computational cost, and memory bandwidth issues, making their practical use impossible. In our research we attempt to solve these issues with an optimized implementation of a practical spatial-temporal de-noising algorithm. Spatial-temporal filtering was performed in the Bayer RAW data space, which allowed us to benefit from predictable sensor noise characteristics and to reduce memory bandwidth requirements. The proposed algorithm efficiently removes different kinds of noise across a wide range of signal-to-noise ratios. In our algorithm the local motion compensation is performed in the Bayer RAW data space, preserving resolution while effectively improving the signal-to-noise ratio of moving objects. The main challenge for the use of spatial-temporal noise reduction algorithms in video applications is the compromise between the quality of the motion prediction on one hand and the complexity of the algorithm and the required memory bandwidth on the other. In photo and video applications it is very important that moving objects stay sharp while the noise is efficiently removed from both the static background and the moving objects. Another important use case is when the background is non-static as well as the foreground. Given the achievable improvement in PSNR (on the level of the best-known noise reduction techniques, such as VBM3D) and the low algorithmic complexity, which enables practical use in commercial video applications, the results of our research can be very valuable.
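
    A greatly simplified sketch of the general idea, not the paper's Bayer-domain, motion-compensated algorithm: blend each frame with the running filtered frame, lowering the blend weight where the frame difference suggests motion. The threshold, blend factor, and synthetic frames are assumptions.

```python
# Simplistic sketch of motion-adaptive temporal filtering: blend each new frame
# with the running filtered frame, but reduce the blend weight where the
# frame-to-frame difference suggests motion. This is far simpler than the
# Bayer-domain, motion-compensated algorithm described above.
import numpy as np

def temporal_denoise(frames, alpha=0.6, motion_thresh=20.0):
    """frames: iterable of 2-D grayscale arrays with equal shape."""
    filtered = None
    for frame in frames:
        frame = frame.astype(float)
        if filtered is None:
            filtered = frame
            continue
        diff = np.abs(frame - filtered)
        # weight -> alpha on static pixels, -> 0 where motion is detected
        weight = alpha * np.clip(1.0 - diff / motion_thresh, 0.0, 1.0)
        filtered = weight * filtered + (1.0 - weight) * frame
        yield filtered

rng = np.random.default_rng(0)
noisy = [100 + rng.normal(0, 5, (64, 64)) for _ in range(10)]
last = list(temporal_denoise(noisy))[-1]
print("noise std before:", round(np.std(noisy[-1] - 100), 2),
      "after:", round(np.std(last - 100), 2))
```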

  2. Spatial connectivity, scaling, and temporal trajectories as emergent urban stormwater impacts

    Jovanovic, T.; Gironas, J. A.; Hale, R. L.; Mejia, A.

    2016-12-01

    Urban watersheds are structurally complex systems composed of multiple components (e.g., streets, pipes, ponds, vegetated swales, wetlands, riparian corridors, etc.). These multiple engineered components interact in unanticipated and nontrivial ways with topographic conditions, climate variability, land use/land cover changes, and the underlying eco-hydrogeomorphic dynamics. Such interactions can result in emergent urban stormwater impacts with cascading effects that can negatively influence the overall functioning of the urban watershed. For example, the interaction among many detention ponds has been shown, in some situations, to synchronize flow volumes and ultimately lead to downstream flow amplifications and increased pollutant mobilization. Additionally, interactions occur at multiple temporal and spatial scales, requiring that urban stormwater dynamics be represented at long-term temporal (decadal) scales and across spatial scales (from the single lot to the watershed scale). In this study, we develop and implement an event-based, high-resolution, network hydro-engineering model (NHEM), and demonstrate an approach to reconstruct the long-term regional infrastructure and land use/land cover conditions of an urban watershed. As the study area, we select an urban watershed in the metropolitan area of Scottsdale, Arizona. Using the reconstructed landscapes to drive the NHEM, we find that distinct surficial, hydrologic connectivity patterns result from the intersection of hydrologic processes, infrastructure, and land use/land cover arrangements. These spatial patterns, in turn, exhibit scaling characteristics. For example, the scaling of urban watershed dispersion mechanisms shows altered scaling exponents with respect to pre-urban conditions. In particular, the scaling exponent associated with geomorphic dispersion tends to increase for urban conditions, reflecting increased surficial path heterogeneity. Both the connectivity and scaling results can be used to

  3. Individuality that is unheard of: systematic temporal deviations in scale playing leave an inaudible pianistic fingerprint

    Floris Tijmen Van Vugt

    2013-03-01

    Full Text Available Whatever we do, we do it in our own way, and we recognise master artists by small samples of their work. This study investigates individuality of temporal deviations in musical scales in pianists in the absence of deliberate expressive intention. Note-by-note timing deviations away from regularity form a remarkably consistent "pianistic fingerprint". First, 8 professional pianists played C-major scales in two sessions, separated by fifteen minutes. Euclidian distances between deviation traces originating from different pianists were reliably larger than traces originating from the same pianist. As a result, a simple classifier that matched deviation traces by minimising their distance was able to recognise each pianist with 100% accuracy. Furthermore, within each pianist, fingerprints produced by the same movements were more similar than fingerprints resulting in the same scale sound. This allowed us to conclude that the fingerprints are mostly neuromuscular rather than intentional or expressive in nature. However, human listeners were not able to distinguish the temporal fingerprints by ear. Next, 18 pianists played C-major scales on a normal or muted piano. Recognition rates ranged from 83% to 100%, further supporting the view that auditory feedback is not implicated in the creation of the temporal signature. Finally, 20 pianists were recognised 20 months later at above-chance level, showing signature effects to be long lasting. Our results indicate that even non-expressive playing of scales reveals consistent, partially effector-unspecific, but inaudible inter-individual differences. We suggest that machine learning studies into individuality in performance will need to take into account unintentional but consistent variability below the perceptual threshold.
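
    The matching rule reported above (assign a deviation trace to the pianist whose reference trace is nearest in Euclidean distance) can be sketched as follows; the traces below are synthetic stand-ins for the measured note-by-note timing deviations.

```python
# Sketch of the matching rule described above: assign each unknown timing
# deviation trace to the pianist whose reference trace is closest in Euclidean
# distance. The traces here are synthetic stand-ins for measured data.
import numpy as np

def classify(reference_traces, query_trace):
    """reference_traces: dict pianist -> 1-D deviation trace (same length)."""
    return min(reference_traces,
               key=lambda p: np.linalg.norm(reference_traces[p] - query_trace))

rng = np.random.default_rng(42)
signatures = {f"pianist_{i}": rng.normal(0, 10, 30) for i in range(8)}   # ms
# A second session: same signature plus small within-pianist variability.
correct = sum(
    classify(signatures, sig + rng.normal(0, 3, 30)) == name
    for name, sig in signatures.items()
)
print(f"recognised {correct} of {len(signatures)} pianists")
```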

  4. Optimizing Cruising Routes for Taxi Drivers Using a Spatio-Temporal Trajectory Model

    Liang Wu

    2017-11-01

    Full Text Available Much of the taxi route-planning literature has focused on driver strategies for finding passengers and determining hot-spot pick-up locations from historical global positioning system (GPS) trajectories of taxis, based on driver experience, the distance from the passenger drop-off location to the next pick-up location, and the waiting times at recommended locations for the next passenger. The present work, however, considers the average taxi travel speed mined from historical taxi GPS trajectory data and the allocation of cruising routes toward neighboring pick-up locations among more than one taxi driver in a small-scale region. A spatio-temporal trajectory model with load-balancing allocations is presented to not only explore pick-up/drop-off information but also provide taxi drivers with cruising routes to the recommended pick-up locations. In simulation experiments, our study shows that taxi drivers using cruising routes recommended by our spatio-temporal trajectory model can significantly reduce the average waiting time and travel less distance to quickly find their next passengers, and the load-balancing strategy significantly alleviates road loads. These objective measures can help us better understand spatio-temporal traffic patterns and guide taxi navigation.

  5. Geospatial Optimization of Siting Large-Scale Solar Projects

    Macknick, Jordan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Quinby, Ted [National Renewable Energy Lab. (NREL), Golden, CO (United States); Caulfield, Emmet [Stanford Univ., CA (United States); Gerritsen, Margot [Stanford Univ., CA (United States); Diffendorfer, Jay [U.S. Geological Survey, Boulder, CO (United States); Haines, Seth [U.S. Geological Survey, Boulder, CO (United States)

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.

  6. A visual analytics system for optimizing the performance of large-scale networks in supercomputing systems

    Takanori Fujiwara

    2018-03-01

    Full Text Available The overall efficiency of an extreme-scale supercomputer largely relies on the performance of its network interconnects. Several state-of-the-art supercomputers use networks based on the increasingly popular Dragonfly topology. It is crucial to study the behavior and performance of different parallel applications running on Dragonfly networks in order to make optimal system configuration and design choices, such as job scheduling and routing strategies. However, in order to study this temporal network behavior, we need a tool to analyze and correlate numerous sets of multivariate time-series data collected from the Dragonfly's multi-level hierarchies. This paper presents such a tool, a visual analytics system, that investigates the temporal behavior of the Dragonfly network and helps optimize the communication performance of a supercomputer. We coupled interactive visualization with time-series analysis methods to help reveal hidden patterns in the network behavior with respect to different parallel applications and system configurations. Our system also provides multiple coordinated views for connecting behaviors observed at different levels of the network hierarchies, which effectively helps visual analysis tasks. We demonstrate the effectiveness of the system with a set of case studies. Our system and findings can help improve not only the communication performance of supercomputing applications but also the network performance of next-generation supercomputers. Keywords: Supercomputing, Parallel communication network, Dragonfly networks, Time-series data, Performance analysis, Visual analytics

  7. A convex optimization approach for solving large scale linear systems

    Debora Cores

    2017-01-01

    Full Text Available The well-known Conjugate Gradient (CG) method minimizes a strictly convex quadratic function for solving large-scale linear systems of equations when the coefficient matrix is symmetric and positive definite. In this work we present and analyze a non-quadratic convex function for solving any large-scale linear system of equations regardless of the characteristics of the coefficient matrix. For finding the global minimizers of this new convex function, any low-cost iterative optimization technique could be applied. In particular, we propose to use the low-cost, globally convergent Spectral Projected Gradient (SPG) method, which allows us to extend this optimization approach to solving consistent square and rectangular linear systems, as well as linear feasibility problems, with and without convex constraints and with and without preconditioning strategies. Our numerical results indicate that the new scheme outperforms state-of-the-art iterative techniques for solving linear systems when the symmetric part of the coefficient matrix is indefinite, and also for solving linear feasibility problems.
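
    The paper's specific non-quadratic convex function and the constrained SPG machinery are not reproduced here; as a minimal illustration of the spectral-gradient idea, the sketch below applies a Barzilai-Borwein step-length gradient iteration to the ordinary least-squares function for a random consistent system.

```python
# Minimal sketch of a spectral (Barzilai-Borwein) gradient iteration applied to
# the least-squares function f(x) = 0.5*||Ax - b||^2. The paper minimizes a
# different, non-quadratic convex function with the globalized SPG method and
# constraints; none of that machinery is reproduced here.
import numpy as np

def bb_gradient_solve(A, b, iters=200, tol=1e-8):
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)                 # gradient of 0.5*||Ax - b||^2
    step = 1.0
    for _ in range(iters):
        x_new = x - step * g
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        if np.linalg.norm(g_new) < tol or abs(s @ y) < 1e-16:
            return x_new
        step = (s @ s) / (s @ y)          # BB1 spectral step length
        x, g = x_new, g_new
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 50))
b = A @ rng.normal(size=50)               # consistent right-hand side
x_hat = bb_gradient_solve(A, b)
print("residual norm:", np.linalg.norm(A @ x_hat - b))
```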

  8. Spatio-temporal scaling effects on longshore sediment transport pattern along the nearshore zone

    Khorram, Saeed; Ergil, Mustafa

    2018-03-01

    Entropy, a measure of uncertainty, has been employed in applications as diverse as probabilistic inference in coastal engineering. Theories integrating entropy and sediment transport offer new perspectives for coastal analysis and modeling, although their application and development still have far to go. The present paper proposes a method based on an entropy-power index for analyzing spatio-temporal patterns. Results show that the index is suitable for analyzing marine/hydrological ecosystem components, based on a beach-area case study. The method makes use of six sets of monthly Makran coastal data (1970-2015) and studies variables such as spatio-temporal patterns, LSTR (long-shore sediment transport rate), wind speed, and wave height, all of which are time-dependent and play considerable roles in terrestrial coastal investigations; these variables show meaningful spatio-temporal variability most of the time, but explaining their combined behavior is not easy. Accordingly, an entropy-power index can reveal considerable signals that facilitate the evaluation of water resources and provide insight into the interactions of hydrological parameters at scales as large as beach areas. Results reveal that an STDDPI (entropy-based spatio-temporal disorder dynamics power index) can simulate wave, long-shore sediment transport rate, and wind behavior when granulometry, concentration, and flow conditions vary.
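
    The composite STDDPI index itself is not reproduced here; the sketch below only shows the basic ingredient, a Shannon entropy computed over sliding windows of a synthetic monthly long-shore sediment transport series, to illustrate how temporal disorder can be tracked.

```python
# Sketch of the basic building block of an entropy-based disorder index:
# the Shannon entropy of the monthly distribution of a variable (here a
# synthetic long-shore sediment transport rate series). The paper's composite
# entropy-power index (STDDPI) is not reproduced.
import numpy as np

def shannon_entropy(values, bins=10):
    counts, _ = np.histogram(values, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))   # bits

rng = np.random.default_rng(0)
months = 12 * 46                              # 1970-2015, monthly
lstr = np.abs(rng.normal(0, 1, months)) * (1 + 0.5 * np.sin(np.arange(months) * 2 * np.pi / 12))

# Entropy computed in a sliding 5-year window to track temporal changes in disorder.
window = 60
entropies = [shannon_entropy(lstr[i:i + window]) for i in range(0, months - window, 12)]
print("first/last window entropy (bits):", round(entropies[0], 2), round(entropies[-1], 2))
```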

  9. Hydrometeorological variability on a large french catchment and its relation to large-scale circulation across temporal scales

    Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David

    2015-04-01

    In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability or provide future scenarios of water resources. With the aim of better understanding hydrological changes, it is of crucial importance to determine how and to what extent trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating large-scale/local-scale correlation, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP) in order to gain additional insights into the atmospheric patterns associated with the regional hydrology. We hypothesized that: i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and ii) definition of those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the large-scale/local-scale links were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach which integrated discrete wavelet multiresolution analysis for reconstructing local hydrometeorological processes (predictands: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector) on a monthly time-step. This approach
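
    A small sketch of the wavelet multiresolution step, assuming a synthetic monthly series and a Daubechies wavelet; in the study the inputs would be monthly precipitation or streamflow for the Seine catchment, and the choice of wavelet and decomposition level would be part of the analysis.

```python
# Sketch of a discrete wavelet multiresolution decomposition of a monthly
# series, of the kind used to separate time scales before linking them to
# large-scale predictors. The series is synthetic.
import numpy as np
import pywt

rng = np.random.default_rng(0)
months = np.arange(480)                               # 40 years of monthly data
series = (np.sin(2 * np.pi * months / 12)             # annual cycle
          + 0.5 * np.sin(2 * np.pi * months / 96)     # ~8-year oscillation
          + rng.normal(0, 0.3, months.size))

level = 5
coeffs = pywt.wavedec(series, "db4", level=level)

# Reconstruct each detail level separately by zeroing all other coefficients.
for k in range(1, level + 1):
    kept = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
    detail = pywt.waverec(kept, "db4")[: series.size]
    print(f"detail level D{level + 1 - k}: variance = {detail.var():.3f}")
```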

  10. Regulatory heterochronies and loose temporal scaling between sea star and sea urchin regulatory circuits.

    Gildor, Tsvia; Hinman, Veronica; Ben-Tabou-De-Leon, Smadar

    2017-01-01

    It has long been argued that heterochrony, a change in relative timing of a developmental process, is a major source of evolutionary innovation. Heterochronic changes of regulatory gene activation could be the underlying molecular mechanism driving heterochronic changes through evolution. Here, we compare the temporal expression profiles of key regulatory circuits between sea urchin and sea star, representative of two classes of Echinoderms that shared a common ancestor about 500 million years ago. The morphologies of the sea urchin and sea star embryos are largely comparable, yet, differences in certain mesodermal cell types and ectodermal patterning result in distinct larval body plans. We generated high resolution temporal profiles of 17 mesodermally-, endodermally- and ectodermally-expressed regulatory genes in the sea star, Patiria miniata, and compared these to their orthologs in the Mediterranean sea urchin, Paracentrotus lividus. We found that the maternal to zygotic transition is delayed in the sea star compared to the sea urchin, in agreement with the longer cleavage stage in the sea star. Interestingly, the order of gene activation shows the highest variation in the relatively diverged mesodermal circuit, while the correlations of expression dynamics are the highest in the strongly conserved endodermal circuit. We detected loose scaling of the developmental rates of these species and observed interspecies heterochronies within all studied regulatory circuits. Thus, after 500 million years of parallel evolution, mild heterochronies between the species are frequently observed and the tight temporal scaling observed for closely related species no longer holds.

  11. Temporal variation and scaling of parameters for a monthly hydrologic model

    Deng, Chao; Liu, Pan; Wang, Dingbao; Wang, Weiguang

    2018-03-01

    The temporal variation of model parameters is affected by catchment conditions and has a significant impact on hydrological simulation. This study aims to evaluate the seasonality and downscaling of model parameters across time scales based on monthly and mean annual water balance models within a common model framework. Two parameters of the monthly model, i.e., k and m, are assumed to vary from month to month. Based on the hydrological data set from 121 MOPEX catchments in the United States, we first analyzed the correlation between the parameters (k and m) and catchment properties (NDVI and the frequency of rainfall events, α). The results show that parameter k is positively correlated with NDVI or α, while the correlation is opposite for parameter m, indicating that precipitation and vegetation affect the monthly water balance by controlling the temporal variation of parameters k and m. Multiple linear regression is then used to fit the relationship between ε and the means and coefficients of variation of parameters k and m. Based on this empirical equation and the correlations between the time-variant parameters and NDVI, the mean annual parameter ε is downscaled to monthly k and m. The results show that this yields lower NSE values than the model with time-variant k and m calibrated through SCE-UA, while for several study catchments it yields higher NSE values than the model with constant parameters. The proposed method is feasible and provides a useful tool for the temporal scaling of model parameters.
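
    The regression step can be sketched as an ordinary least-squares fit of a monthly parameter against NDVI and rainfall-event frequency; the data below are synthetic and the fitted coefficients carry no physical meaning.

```python
# Sketch of relating a time-variant monthly parameter to catchment descriptors
# (NDVI and rainfall-event frequency alpha) with ordinary least squares, in the
# spirit of the regression-based downscaling described above. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 121 * 12                                   # catchments x months (hypothetical)
ndvi = rng.uniform(0.1, 0.8, n)
alpha = rng.uniform(0.05, 0.5, n)
k_param = 0.4 + 0.6 * ndvi + 0.8 * alpha + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), ndvi, alpha])
coef, *_ = np.linalg.lstsq(X, k_param, rcond=None)
print("intercept, NDVI and alpha coefficients:", np.round(coef, 3))
```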

  12. Temporal scale dependent interactions between multiple environmental disturbances in microcosm ecosystems.

    Garnier, Aurélie; Pennekamp, Frank; Lemoine, Mélissa; Petchey, Owen L

    2017-12-01

    Global environmental change has negative impacts on ecological systems, impacting the stable provision of functions, goods, and services. Whereas effects of individual environmental changes (e.g. temperature change or change in resource availability) are reasonably well understood, we lack information about if and how multiple changes interact. We examined interactions among four types of environmental disturbance (temperature, nutrient ratio, carbon enrichment, and light) in a fully factorial design using a microbial aquatic ecosystem and observed responses of dissolved oxygen saturation at three temporal scales (resistance, resilience, and return time). We tested whether multiple disturbances combine in a dominant, additive, or interactive fashion, and compared the predictability of dissolved oxygen across scales. Carbon enrichment and shading reduced oxygen concentration in the short term (i.e. resistance); although no other effects or interactions were statistically significant, resistance decreased as the number of disturbances increased. In the medium term, only enrichment accelerated recovery, but none of the other effects (including interactions) were significant. In the long term, enrichment and shading lengthened return times, and we found significant two-way synergistic interactions between disturbances. The best performing model (dominant, additive, or interactive) depended on the temporal scale of response. In the short term (i.e. for resistance), the dominance model predicted resistance of dissolved oxygen best, due to a large effect of carbon enrichment, whereas none of the models could predict the medium term (i.e. resilience). The long-term response was best predicted by models including interactions among disturbances. Our results indicate the importance of accounting for the temporal scale of responses when researching the effects of environmental disturbances on ecosystems.

  13. Spatio-Temporal Super-Resolution Reconstruction of Remote-Sensing Images Based on Adaptive Multi-Scale Detail Enhancement.

    Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming

    2018-02-07

    There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which reduces the time phase difference of image data and enhances the complementarity of information. The multi-scale image information is then decomposed using the L0 gradient minimization model, and the non-redundant information is processed by difference calculation and expanding non-redundant layers and the redundant layer by the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptive-weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average entropy gain of up to 0.42 dB and a significant gain in the enhancement measure evaluation for an up-scaling factor of 2. The experimental results show that the performance of the AMDE-SR method is better than existing super-resolution reconstruction methods in terms of visual and accuracy improvements.

  14. Spatio-Temporal Super-Resolution Reconstruction of Remote-Sensing Images Based on Adaptive Multi-Scale Detail Enhancement

    Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming

    2018-01-01

    There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which reduces the time phase difference of image data and enhances the complementarity of information. The multi-scale image information is then decomposed using the L0 gradient minimization model, and the non-redundant information is processed by difference calculation and expanding non-redundant layers and the redundant layer by the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptive-weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average entropy gain of up to 0.42 dB and a significant gain in the enhancement measure evaluation for an up-scaling factor of 2. The experimental results show that the performance of the AMDE-SR method is better than existing super-resolution reconstruction methods in terms of visual and accuracy improvements. PMID:29414893

  15. Spatio-Temporal Video Object Segmentation via Scale-Adaptive 3D Structure Tensor

    Hai-Yun Wang

    2004-06-01

    Full Text Available To address multiple motions and deformable objects' motions encountered in existing region-based approaches, an automatic video object (VO) segmentation methodology is proposed in this paper by exploiting the duality of image segmentation and motion estimation such that spatial and temporal information could assist each other to jointly yield much improved segmentation results. The key novelties of our method are (1) scale-adaptive tensor computation, (2) spatial-constrained motion mask generation without invoking dense motion-field computation, (3) rigidity analysis, (4) motion mask generation and selection, and (5) motion-constrained spatial region merging. Experimental results demonstrate that these novelties jointly contribute to much more accurate VO segmentation in both the spatial and temporal domains.

  16. LAMMPS strong scaling performance optimization on Blue Gene/Q

    Coffman, Paul; Jiang, Wei; Romero, Nichols A.

    2014-11-12

    LAMMPS "Large-scale Atomic/Molecular Massively Parallel Simulator" is an open-source molecular dynamics package from Sandia National Laboratories. Significant performance improvements in strong-scaling and time-to-solution for this application on IBM's Blue Gene/Q have been achieved through computational optimizations of the OpenMP versions of the short-range Lennard-Jones term of the CHARMM force field and the long-range Coulombic interaction implemented with the PPPM (particle-particle-particle mesh) algorithm, enhanced by runtime parameter settings controlling thread utilization. Additionally, MPI communication performance improvements were made to the PPPM calculation by re-engineering the parallel 3D FFT to use MPICH collectives instead of point-to-point. Performance testing was done using an 8.4-million atom simulation scaling up to 16 racks on the Mira system at Argonne Leadership Computing Facility (ALCF). Speedups resulting from this effort were in some cases over 2x.

  17. Optimization of scaled-up chitosan microparticles for bone regeneration

    Jayasuriya, A Champa; Bhat, Archana

    2009-01-01

    The aim of this study was to scale up and optimize chitosan (CS) microparticles (MPs) from a 1x batch (41-85 mg) to a 4x batch (270-567 mg) for use in bone regeneration. The MPs used in the present study were prepared by a double emulsification technique using CS as the base material, under physiologically friendly conditions throughout the process. The structural integrity of the MPs was improved by creating cross-links between amine groups in CS and phosphate groups in tripolyphosphate (TPP), which was used as an ionic cross-linking agent. The cross-linking density was varied using different amounts of TPP relative to CS: 0%, 8%, 32%, 64% and 110% (w/w). The CS MPs were approximately spherical in shape with a size of 30-50 μm according to scanning electron microscopy results. X-ray diffraction data confirmed the presence of TPP in the CS MPs. The evidence of ionic cross-links in the CS MPs was analyzed using Fourier transform infrared spectroscopy. When we scaled up the batches, we found that a 64% TPP cross-linking density provided the best-quality MPs. In addition, the yield of those MPs increased from 75 mg to 310 mg when scaling up from the 1x to the 4x batch. The MPs developed have great potential to be used as an injectable scaffold for bone regeneration, including orthopedic and craniofacial applications, under minimally invasive conditions compared with conventional three-dimensional scaffolds.

  18. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    Xiaocui Wu

    2015-02-01

    Full Text Available The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL-LUEn was slightly but not significantly better than TL-LUE at half-hourly and daily scale, while the overall performance of both TL-LUEn and TL-LUE were significantly better (p < 0.0001) than MOD17 at the two temporal scales. The improvement of TL-LUEn over TL-LUE was relatively small in comparison with the improvement of TL-LUE over MOD17. However, the differences between TL-LUEn and MOD17, and TL-LUE and MOD17 became less distinct at the 8-day scale. As for different vegetation types, TL-LUEn and TL-LUE performed better than MOD17 for all vegetation types except crops at the half-hourly scale. At the daily and 8-day scales, both TL-LUEn and TL-LUE outperformed MOD17 for forests. However, TL-LUEn had a mixed performance for the three non-forest types while TL-LUE outperformed MOD17 slightly for all these non-forest types at daily and 8-day scales. The better performance of TL-LUEn and TL-LUE for forests was mainly achieved by the correction of the underestimation/overestimation of GPP simulated by MOD17 under low/high solar radiation and sky clearness conditions. TL-LUEn is more applicable at individual sites at the half-hourly scale while TL-LUE could be regionally used at half-hourly, daily and 8-day scales. MOD17 is also an applicable option regionally at the 8-day scale.
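
    For orientation, the big-leaf light use efficiency logic that the MOD17-style baseline shares can be sketched as GPP = eps_max * f(Tmin) * f(VPD) * FPAR * PAR with linear ramp scalars; the parameter values and forcing below are illustrative assumptions, not the biome properties used by MOD17 or the two-leaf models evaluated here.

```python
# Sketch of the big-leaf light use efficiency logic shared by MOD17-style
# models: GPP = eps_max * f(Tmin) * f(VPD) * FPAR * PAR, with linear ramp
# scalars. Parameter values and forcing data below are illustrative only.
import numpy as np

def big_leaf_gpp(par, fpar, tmin, vpd,
                 eps_max=1.8,                       # g C per MJ APAR (illustrative)
                 tmin_min=-8.0, tmin_max=10.0,      # deg C
                 vpd_min=650.0, vpd_max=3500.0):    # Pa, high VPD suppresses GPP
    t_scalar = np.clip((tmin - tmin_min) / (tmin_max - tmin_min), 0.0, 1.0)
    v_scalar = np.clip((vpd_max - vpd) / (vpd_max - vpd_min), 0.0, 1.0)
    return eps_max * t_scalar * v_scalar * fpar * par   # g C m-2 per time step

# One hypothetical daily value: PAR in MJ m-2 d-1, FPAR unitless.
print(round(big_leaf_gpp(par=8.0, fpar=0.7, tmin=12.0, vpd=1200.0), 2), "g C m-2 d-1")
```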

  19. Estimating Gross Primary Production in Cropland with High Spatial and Temporal Scale Remote Sensing Data

    Lin, S.; Li, J.; Liu, Q.

    2018-04-01

    Satellite remote sensing data provide spatially continuous and temporally repetitive observations of land surfaces, and they have become increasingly important for monitoring vegetation photosynthetic dynamics over large regions. However, remote sensing data have limitations in spatial and temporal scale: for example, higher-spatial-resolution data such as Landsat have 30-m spatial resolution but a 16-day revisit period, while high-temporal-resolution data such as geostationary observations have a 30-minute imaging period but lower spatial resolution (> 1 km). The objective of this study is to investigate whether combining high spatial and high temporal resolution remote sensing data can improve the accuracy of gross primary production (GPP) estimation in cropland. For this analysis we used three years (2010 to 2012) of Landsat-based NDVI data, the MOD13 vegetation index product and Geostationary Operational Environmental Satellite (GOES) data as input parameters to estimate GPP over a small cropland region of Nebraska, US. We then validated the remote-sensing-based GPP against in-situ carbon flux measurements. Results showed that: 1) the overall correlation between the GOES visible band and in-situ measured photosynthetically active radiation (PAR) is about 50 % (R2 = 0.52), and the European Centre for Medium-Range Weather Forecasts ERA-Interim reanalysis data can explain 64 % of the PAR variance (R2 = 0.64); 2) estimating GPP with Landsat 30-m spatial resolution data and ERA daily meteorology data has the highest accuracy (R2 = 0.85), higher than with the MODIS 1-km NDVI/EVI product as input; 3) using daily meteorology data as input for GPP estimation with high spatial resolution data is more relevant than 8-day and 16-day inputs. Generally speaking, using high spatial resolution and high frequency satellite remote sensing data can improve GPP estimation accuracy in cropland.

  20. The use of census migration data to approximate human movement patterns across temporal scales.

    Amy Wesolowski

    Full Text Available Human movement plays a key role in economies and development, the delivery of services, and the spread of infectious diseases. However, it remains poorly quantified partly because reliable data are often lacking, particularly for low-income countries. The most widely available are migration data from human population censuses, which provide valuable information on relatively long timescale relocations across countries, but do not capture the shorter-scale patterns, trips less than a year, that make up the bulk of human movement. Census-derived migration data may provide valuable proxies for shorter-term movements however, as substantial migration between regions can be indicative of well connected places exhibiting high levels of movement at finer time scales, but this has never been examined in detail. Here, an extensive mobile phone usage data set for Kenya was processed to extract movements between counties in 2009 on weekly, monthly, and annual time scales and compared to data on change in residence from the national census conducted during the same time period. We find that the relative ordering across Kenyan counties for incoming, outgoing and between-county movements shows strong correlations. Moreover, the distributions of trip durations from both sources of data are similar, and a spatial interaction model fit to the data reveals the relationships of different parameters over a range of movement time scales. Significant relationships between census migration data and fine temporal scale movement patterns exist, and results suggest that census data can be used to approximate certain features of movement patterns across multiple temporal scales, extending the utility of census-derived migration data.

  1. The use of census migration data to approximate human movement patterns across temporal scales.

    Wesolowski, Amy; Buckee, Caroline O; Pindolia, Deepa K; Eagle, Nathan; Smith, David L; Garcia, Andres J; Tatem, Andrew J

    2013-01-01

    Human movement plays a key role in economies and development, the delivery of services, and the spread of infectious diseases. However, it remains poorly quantified partly because reliable data are often lacking, particularly for low-income countries. The most widely available are migration data from human population censuses, which provide valuable information on relatively long timescale relocations across countries, but do not capture the shorter-scale patterns, trips less than a year, that make up the bulk of human movement. Census-derived migration data may provide valuable proxies for shorter-term movements however, as substantial migration between regions can be indicative of well connected places exhibiting high levels of movement at finer time scales, but this has never been examined in detail. Here, an extensive mobile phone usage data set for Kenya was processed to extract movements between counties in 2009 on weekly, monthly, and annual time scales and compared to data on change in residence from the national census conducted during the same time period. We find that the relative ordering across Kenyan counties for incoming, outgoing and between-county movements shows strong correlations. Moreover, the distributions of trip durations from both sources of data are similar, and a spatial interaction model fit to the data reveals the relationships of different parameters over a range of movement time scales. Significant relationships between census migration data and fine temporal scale movement patterns exist, and results suggest that census data can be used to approximate certain features of movement patterns across multiple temporal scales, extending the utility of census-derived migration data.

  2. Optimal temporal windows and dose-reducing strategy for coronary artery bypass graft imaging with 256-slice CT

    Lu, Kun-Mu [Department of Radiology, Shin Kong Wu Ho-Su Memorial Hospital, 95 Wen Chang Road, Shih Lin District, Taipei 111, Taiwan. (China); Lee, Yi-Wei [Department of Radiology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Kaohsiung, Taiwan (China); Department of Biomedical Imaging and Radiological Sciences, National Yang Ming University, Taipei, Taiwan (China); Guan, Yu-Xiang [Department of Biomedical Imaging and Radiological Sciences, National Yang Ming University, Taipei, Taiwan (China); Chen, Liang-Kuang [Department of Radiology, Shin Kong Wu Ho-Su Memorial Hospital, 95 Wen Chang Road, Shih Lin District, Taipei 111, Taiwan. (China); School of Medicine, Fu Jen Catholic University, Taipei, Taiwan (China); Law, Wei-Yip, E-mail: m002325@ms.skh.org.tw [Department of Radiology, Shin Kong Wu Ho-Su Memorial Hospital, 95 Wen Chang Road, Shih Lin District, Taipei 111, Taiwan. (China); Su, Chen-Tau, E-mail: m005531@ms.skh.org.tw [Department of Radiology, Shin Kong Wu Ho-Su Memorial Hospital, 95 Wen Chang Road, Shih Lin District, Taipei 111, Taiwan. (China); School of Medicine, Fu Jen Catholic University, Taipei, Taiwan (China)

    2013-12-11

    Objective: To determine the optimal image reconstruction windows in the assessment of coronary artery bypass grafts (CABGs) with 256-slice computed tomography (CT), and to assess their associated optimal pulsing windows for electrocardiogram-triggered tube current modulation (ETCM). Methods: We recruited 18 patients (three female; mean age 68.9 years) having mean heart rate (HR) of 66.3 beats per minute (bpm) and a heart rate variability of 1.3 bpm for this study. A total of 36 CABGs with 168 segments were evaluated, including 12 internal mammary artery (33.3%) and 24 saphenous vein grafts (66.7%). We reconstructed 20 data sets in 5%-step through 0–95% of the R–R interval. The image quality of CABGs was assessed by a 5-point scale (1=excellent to 5=non-diagnostic) for each segment (proximal anastomosis, proximal, middle, distal course of graft body, and distal anastomosis). Two reviewers discriminated optimal reconstruction intervals for each CABG segment in each temporal window. Optimal windows for ETCM were also evaluated. Results: The determined optimal systolic and diastolic reconstruction intervals could be divided into 2 groups with threshold HR=68. The determined best reconstruction intervals for low heart rate (HR<68) and high heart rate (HR>68) were 76.0±2.5% and 45.0±0% respectively. Average image quality scores were 1.7±0.6 with good inter-observer agreement (Kappa=0.79). Image quality was significantly better for saphenous vein grafts versus arterial grafts (P<0.001). The recommended windows of ETCM for low HR, high HR and all HR groups were 40–50%, 71–81% and 40–96% of R-R interval, respectively. The corresponding dose savings were about 60.8%, 58.7% and 22.7% in that order. Conclusions: We determined optimal reconstruction intervals and ETCM windows representing a good compromise between radiation and image quality for following bypass surgery using a 256-slice CT.

  3. Optimal Combustion Conditions for a Small-scale Biomass Boiler

    Viktor Plaček

    2012-01-01

    Full Text Available This paper reports on an attempt to achieve maximum efficiency and the lowest possible emissions for a small-scale biomass boiler. This aim is to be attained only by changing the boiler's control algorithm, thereby not raising its acquisition costs. The paper describes the experimental facility, the problems that arose while establishing it, and how we have dealt with them. The focus is on discontinuities arising after periodic grate sweeping, and on finding the parameters of the PID control loops. Alongside these methods, which need a lambda probe signal to function properly, we introduce another method that is able to find the optimal combustion operating point without the need for a lambda sensor.
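
    A minimal sketch of the kind of discrete PID loop such a controller runs, for example trimming a fan command toward a flue-gas oxygen setpoint; the gains, limits, plant model, and signal names are hypothetical, not the tuned values from the experimental facility.

```python
# Minimal sketch of a discrete PID loop of the kind described above, driving a
# fan command toward a flue-gas oxygen setpoint. Gains, limits and the crude
# plant model are hypothetical.
class PID:
    def __init__(self, kp, ki, kd, dt, out_min=0.0, out_max=100.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(max(out, self.out_min), self.out_max)   # clamp actuator command

pid = PID(kp=5.0, ki=2.0, kd=0.2, dt=1.0)
o2 = 3.0                                    # measured flue-gas O2 in %
for _ in range(100):
    fan = pid.step(setpoint=6.0, measurement=o2)
    o2 += 0.1 * (0.12 * fan - o2)           # crude first-order plant: fan % -> O2 %
print("final O2 reading:", round(o2, 2), "%")
```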

  4. Optimization of Iter with Iter-89P scaling

    Johner, J.

    1991-10-01

    Ignition in the ITER baseline machine is studied in the frame of a 1/2-D model using the ITER-89P scaling of the energy confinement time. The required value of the enhancement factor f_L with respect to the L-mode, allowing ignition with a total fusion power of 1100 MW, is found to be 1.9 at an optimum operating temperature of 11 keV. A sensitivity analysis shows that the critical f_L = 2 value can be exceeded with relatively small changes in the physical assumptions. It is concluded that the safety margin is not sufficient for this project. Optimization of a thermonuclear plasma in a tokamak is then performed with constraints of given maximum magnetic field B in the superconducting windings, given distance between the plasma and the maximum magnetic field point, imposed safety factor at the plasma edge, and given averaged neutron flux at the plasma surface. The minimum enhancement factor f_L with respect to the L-mode, allowing ignition at a given value of the total fusion power P_fus, is only a function of the torus aspect ratio A. Taking the ITER reference values for the above constraints, the required value of f_L is practically independent of the aspect ratio but can be appreciably improved by increasing the total fusion power P_fus. With P_fus = 1700 MW, a reasonable safety margin (f_L ≅ 1.5) is obtained. Analytical expressions of the conditions resulting from the above optimization are also derived for an arbitrary monomial scaling of the energy confinement time, and shown to give excellent agreement with the numerical results

  5. Genome-scale modelling of microbial metabolism with temporal and spatial resolution.

    Henson, Michael A

    2015-12-01

    Most natural microbial systems have evolved to function in environments with temporal and spatial variations. A major limitation to understanding such complex systems is the lack of mathematical modelling frameworks that connect the genomes of individual species and temporal and spatial variations in the environment to system behaviour. The goal of this review is to introduce the emerging field of spatiotemporal metabolic modelling based on genome-scale reconstructions of microbial metabolism. The extension of flux balance analysis (FBA) to account for both temporal and spatial variations in the environment is termed spatiotemporal FBA (SFBA). Following a brief overview of FBA and its established dynamic extension, the SFBA problem is introduced and recent progress is described. Three case studies are reviewed to illustrate the current state-of-the-art and possible future research directions are outlined. The author posits that SFBA is the next frontier for microbial metabolic modelling and a rapid increase in methods development and system applications is anticipated.
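
    A minimal dynamic FBA sketch (the established dynamic extension mentioned above, without the spatial dimension of SFBA), written with cobrapy; the SBML file name, exchange-reaction ID, and kinetic constants are assumptions made here for illustration.

```python
# Minimal dynamic FBA sketch with cobrapy: at each time step the glucose uptake
# bound is set from the current substrate concentration, the FBA problem is
# solved, and biomass and glucose are updated. The SBML file name and exchange
# reaction ID are assumptions; the spatial dimension of SFBA is omitted.
import cobra

model = cobra.io.read_sbml_model("e_coli_core.xml")   # hypothetical model file
glc_exchange = "EX_glc__D_e"                           # assumed reaction ID

dt = 0.1                # h
biomass = 0.05          # gDW/L
glucose = 10.0          # mmol/L
v_max, km = 10.0, 0.5   # Michaelis-Menten uptake kinetics (illustrative)

for step in range(100):
    uptake = v_max * glucose / (km + glucose)
    model.reactions.get_by_id(glc_exchange).lower_bound = -uptake
    solution = model.optimize()
    if solution.status != "optimal":
        break
    mu = solution.objective_value                      # growth rate, 1/h
    glc_flux = solution.fluxes[glc_exchange]           # mmol/gDW/h (negative = uptake)
    glucose = max(glucose + glc_flux * biomass * dt, 0.0)
    biomass *= 1.0 + mu * dt
    if glucose == 0.0:
        break

print(f"final biomass {biomass:.3f} gDW/L, residual glucose {glucose:.2f} mmol/L")
```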

  6. Groundwater-fed irrigation impacts spatially distributed temporal scaling behavior of the natural system: a spatio-temporal framework for understanding water management impacts

    Condon, Laura E; Maxwell, Reed M

    2014-01-01

    Regional-scale water management analysis increasingly relies on integrated modeling tools. Much recent work has focused on groundwater–surface water interactions and feedbacks. However, to our knowledge, no study has explicitly considered the impacts of management operations on the temporal dynamics of the natural system. Here, we simulate twenty years of hourly, moisture-dependent, groundwater-fed irrigation using a three-dimensional, fully integrated hydrologic model (ParFlow-CLM). Results highlight interconnections between irrigation demand, groundwater oscillation frequency, and latent heat flux variability that have not previously been demonstrated. Additionally, the three-dimensional model allows for novel consideration of spatial patterns in temporal dynamics. Latent heat flux and water table depth both display spatial organization in temporal scaling, an important finding given the spatial homogeneity and weak scaling observed in atmospheric forcings. Pumping and irrigation amplify high-frequency (sub-annual) variability while attenuating low-frequency (inter-annual) variability. Irrigation also intensifies scaling within irrigated areas, essentially increasing temporal memory in both the surface and the subsurface. These findings demonstrate management impacts that extend beyond traditional water balance considerations to the fundamental behavior of the system itself. This is an important step toward better understanding groundwater's role as a buffer for natural variability and the impact that water management has on this capacity.

  7. Temporal variation of optimal UV exposure time over Korea: risks and benefits of surface UV radiation

    Lee, Y. G.; Koo, J. H.

    2015-12-01

    Solar UV radiation in the wavelength range between 280 and 400 nm has both positive and negative influences on the human body. Surface UV radiation is the main natural source of vitamin D, promoting bone and musculoskeletal health and reducing the risk of a number of cancers and other medical conditions. However, overexposure to surface UV radiation is significantly related to the majority of skin cancers, in addition to other negative health effects such as sunburn, skin aging, and some forms of eye cataracts. Therefore, it is important to estimate the optimal UV exposure time, representing a balance between reducing negative health effects and maintaining sufficient vitamin D production. Previous studies calculated erythemal UV and vitamin-D UV from measured and modelled spectral irradiances, respectively, by weighting with the CIE erythema and vitamin D3 generation functions (Kazantzidis et al., 2009; Fioletov et al., 2010). In particular, McKenzie et al. (2009) suggested an algorithm to estimate vitamin-D-producing UV from erythemal UV (or UV index) and determined the optimum conditions of UV exposure based on skin type II according to Fitzpatrick (1988). Recently, there has been growing demand for assessments of the risks and benefits of surface UV radiation for public health over Korea, so it is necessary to estimate optimal UV exposure times suited to the skin types of East Asians. This study examined the relationship between erythemally weighted UV (UVEry) and vitamin D weighted UV (UVVitD) over Korea during 2004-2012. The temporal variations of the ratio (UVVitD/UVEry) were also analyzed, and the ratio as a function of UV index was applied in estimating the optimal UV exposure time. In summer, with high surface UV radiation, a short exposure time led to sufficient vitamin D production as well as erythema, and vice versa in winter. Thus, the balancing time in winter was long enough to maximize UV benefits and minimize UV risks.

  8. Assessing Temporal Stability for Coarse Scale Satellite Moisture Validation in the Maqu Area, Tibet

    Bhatti, Haris Akram; Rientjes, Tom; Verhoef, Wouter; Yaseen, Muhammad

    2013-01-01

    This study evaluates whether the temporal stability concept is applicable to a time series of satellite soil moisture images so as to extend the common procedure of satellite image validation. The study area is the Maqu area, located in the northeastern part of the Tibetan plateau. The network serves validation purposes for coarse-scale (25–50 km) satellite soil moisture products and comprises 20 stations with probes installed at depths of 5, 10, 20, 40, and 80 cm. The study period is 2009. The temporal stability concept is applied to all five depths of the soil moisture measuring network and to a time series of satellite-based moisture products from the Advanced Microwave Scanning Radiometer (AMSR-E). The in-situ network is also assessed by Pearson's correlation analysis. Assessments by the temporal stability concept proved to be useful, and the results suggest that probe measurements at 10 cm depth best match the satellite observations. The Mean Relative Difference plot for satellite pixels shows that a Representative Mean Soil Moisture (RMSM) pixel can be identified, but in our case this pixel does not overlay any in-situ station. Also, the RMSM pixel does not overlay any of the RMSM stations of the five probe depths. Pearson's correlation analysis of the in-situ measurements suggests that moisture patterns are more persistent over time than over space. Since this study presents first results on the application of the temporal stability concept to a series of satellite images, we recommend further tests to become more conclusive on its effectiveness in broadening the procedure of satellite validation.
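
The temporal stability calculation itself is compact: for each station, compute the relative difference from the spatial mean at every time step, then rank stations by the mean and spread of those differences. A sketch with synthetic data (array shape and values are placeholders for the Maqu network):

```python
# Temporal stability (mean relative difference) sketch for a soil-moisture network.
# `theta` is a (time x station) array of moisture values; the data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
theta = 0.25 + 0.05 * rng.standard_normal((365, 20))   # fake daily data, 20 stations

spatial_mean = theta.mean(axis=1, keepdims=True)       # spatial mean at each time step
rel_diff = (theta - spatial_mean) / spatial_mean       # relative difference delta_ij
mrd = rel_diff.mean(axis=0)                            # mean relative difference per station
sdrd = rel_diff.std(axis=0, ddof=1)                    # its temporal standard deviation

# A station with MRD near zero and a small SDRD is a candidate representative
# mean soil moisture (RMSM) location.
rmsm_station = np.argmin(np.abs(mrd) + sdrd)
print("candidate RMSM station index:", rmsm_station)
```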

  9. Temporal windows in visual processing: "prestimulus brain state" and "poststimulus phase reset" segregate visual transients on different temporal scales.

    Wutz, Andreas; Weisz, Nathan; Braun, Christoph; Melcher, David

    2014-01-22

    Dynamic vision requires both stability of the current perceptual representation and sensitivity to the accumulation of sensory evidence over time. Here we study the electrophysiological signatures of this intricate balance between temporal segregation and integration in vision. Within a forward masking paradigm with short and long stimulus onset asynchronies (SOA), we manipulated the temporal overlap of the visual persistence of two successive transients. Human observers enumerated the items presented in the second target display as a measure of the informational capacity read-out from this partly temporally integrated visual percept. We observed higher β-power immediately before mask display onset in incorrect trials, in which enumeration failed due to stronger integration of mask and target visual information. This effect was timescale specific, distinguishing between segregation and integration of visual transients that were distant in time (long SOA). Conversely, for short SOA trials, mask onset evoked a stronger visual response when mask and targets were correctly segregated in time. Examination of the target-related response profile revealed the importance of an evoked α-phase reset for the segregation of those rapid visual transients. Investigating this precise mapping of the temporal relationships of visual signals onto electrophysiological responses highlights how the stream of visual information is carved up into discrete temporal windows that mediate between segregated and integrated percepts. Fragmenting the stream of visual information provides a means to stabilize perceptual events within one instant in time.

  10. Variation in the macrofaunal community over large temporal and spatial scales in the southern Yellow Sea

    Xu, Yong; Sui, Jixing; Yang, Mei; Sun, Yue; Li, Xinzheng; Wang, Hongfa; Zhang, Baolin

    2017-09-01

    To detect large, temporal- and spatial-scale variations in the macrofaunal community in the southern Yellow Sea, data collected along the western, middle and eastern regions of the southern Yellow Sea from 1958 to 2014 were organized and analyzed. Statistical methods such as cluster analysis, non-metric multidimensional scaling ordination (nMDS), permutational multivariate analysis of variance (PERMANOVA), redundancy analysis (RDA) and canonical correspondence analysis (CCA) were applied. The abundance of polychaetes increased in the western region but decreased in the eastern region from 1958 to 2014, whereas the abundance of echinoderms showed an opposite trend. For the entire macrofaunal community, Margalef's richness (d), the Shannon-Wiener index (H′) and Pielou's evenness (J′) were significantly lower in the eastern region when compared with the other two regions. No significant temporal differences were found for d and H′, but there were significantly lower values of J′ in 2014. Considerable variation in the macrofaunal community structure over the past several decades and among the geographical regions at the species, genus and family levels was observed. The species, genera and families that contributed to the temporal variation in each region were also identified. The most conspicuous pattern was the increase in the species Ophiura sarsii vadicola in the eastern region. In the western region, five polychaetes (Ninoe palmata, Notomastus latericeus, Paralacydonia paradoxa, Paraprionospio pinnata and Sternaspis scutata) increased consistently from 1958 to 2014. The dominance curves showed that both the species diversity and the dominance patterns were relatively stable in the western and middle regions. Environmental parameters such as depth, temperature and salinity could only partially explain the observed biological variation in the southern Yellow Sea. Anthropogenic activities such as demersal fishing and other unmeasured environmental variables

  11. Multidimensional scaling of emotional responses to music in patients with temporal lobe resection.

    Dellacherie, D; Bigand, E; Molin, P; Baulac, M; Samson, S

    2011-10-01

    The present study investigated emotional responses to music by using multidimensional scaling (MDS) analysis in patients with right or left medial temporal lobe (MTL) lesions and matched normal controls (NC). Participants were required to evaluate emotional dissimilarities of nine musical excerpts that were selected to express graduated changes along the valence and arousal dimensions. For this purpose, they rated dissimilarity between pairs of stimuli on an eight-point scale and the resulting matrices were submitted to an MDS analysis. The results showed that patients did not differ from NC participants in evaluating emotional feelings induced by the musical excerpts, suggesting that all participants were able to distinguish refined emotions. We concluded that the ability to detect and use emotional valence and arousal when making dissimilarity judgments was not strongly impaired by a right or left MTL lesion. This finding has important clinical implications and is discussed in light of current neuropsychological studies on emotion. It suggests that emotional responses to music can be at least partially preserved at a non-verbal level in patients with unilateral temporal lobe damage including the amygdala. Copyright © 2011 Elsevier Srl. All rights reserved.
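
An MDS configuration from a dissimilarity matrix of this kind can be reproduced with standard tools; a minimal sketch using scikit-learn on a synthetic 9 x 9 matrix standing in for the averaged pairwise ratings (metric MDS is used here for simplicity, which is not necessarily the exact variant applied in the study):

```python
# Minimal MDS sketch on a precomputed dissimilarity matrix. The 9x9 matrix below
# is random; in the study it would hold averaged pairwise dissimilarity ratings
# of the nine musical excerpts.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
d = rng.random((9, 9))
dissim = (d + d.T) / 2.0          # symmetrize
np.fill_diagonal(dissim, 0.0)     # zero self-dissimilarity

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)   # 2-D configuration (e.g. valence/arousal-like axes)
print(coords.shape, "stress:", round(mds.stress_, 3))
```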

  12. Mechanisms Causing Hypoxia in the Baltic Sea at Different Spatial and Temporal Scales

    Conley, D. J.; Carstensen, J.; Gustafsson, B.; Slomp, C. P.

    2016-02-01

    A number of synthesis efforts have documented the world-wide increase in hypoxia, which is primarily driven by nutrient inputs with consequent organic matter enrichment. Physical factors including freshwater or saltwater inputs, stratification and temperature also play an important role in causing and sustaining hypoxia. The Baltic Sea provides an interesting case study to examine changes in oxygen dynamics over time because of the diversity of the types of hypoxia that occur, which ranges from episodic to seasonal hypoxia to perennial hypoxia. Hypoxia varies spatially across the basin with differences between open water bottoms and coastal systems. In addition, the extent and intensity of hypoxia has also varied greatly over the history of the basin, e.g. the last 8000 years. We will examine the mechanisms causing hypoxia at different spatial and temporal scales. The hydrodynamical setting is an important governing factor controlling possible time scales of hypoxia, but enhanced nutrient fluxes and global warming amplify oxygen depletion when oxygen supply by physical processes cannot meet oxygen demands from respiration. Our results indicate that climate change is counteracting management efforts to reduce hypoxia. We will address how hypoxia in the Baltic Sea is terminated at different scales. More importantly, we will explore the prospects of getting rid of hypoxia with the nutrient reductions that have been agreed upon by the countries in the Baltic Sea basin and discuss the time scales of improvement in bottom water oxygen conditions.

  13. Robust and efficient multi-frequency temporal phase unwrapping: optimal fringe frequency and pattern sequence selection.

    Zhang, Minliang; Chen, Qian; Tao, Tianyang; Feng, Shijie; Hu, Yan; Li, Hui; Zuo, Chao

    2017-08-21

    Temporal phase unwrapping (TPU) is an essential algorithm in fringe projection profilometry (FPP), especially when measuring complex objects with discontinuities and isolated surfaces. Among others, the multi-frequency TPU has been proven to be the most reliable algorithm in the presence of noise. For a practical FPP system, in order to achieve an accurate, efficient, and reliable measurement, one needs to make wise choices about three key experimental parameters: the highest fringe frequency, the phase-shifting steps, and the fringe pattern sequence. However, there has been very little research on how to optimize these parameters quantitatively, especially considering all three aspects from a theoretical and analytical perspective simultaneously. In this work, we propose a new scheme to determine simultaneously the optimal fringe frequency, phase-shifting steps and pattern sequence under multi-frequency TPU, robustly achieving high-accuracy measurement with a minimum number of fringe frames. Firstly, noise models regarding phase-shifting algorithms as well as 3-D coordinates are established under a projector defocusing condition, which leads to the optimal highest fringe frequency for an FPP system. Then, a new concept termed frequency-to-frame ratio (FFR) that evaluates the magnitude of the contribution of each frame to TPU is defined, based on which an optimal phase-shifting combination scheme is proposed. Finally, a criterion is established for judging whether the ratio between adjacent fringe frequencies is conducive to stable and efficient phase unwrapping. The proposed method provides a simple and effective theoretical framework to improve the accuracy, efficiency, and robustness of a practical FPP system in actual measurement conditions. The correctness of the derived models as well as the validity of the proposed schemes have been verified through extensive simulations and experiments. Based on a normal monocular 3-D FPP hardware system
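
The core hierarchical step of multi-frequency TPU can be written compactly: the unwrapped phase at a lower fringe frequency, scaled up, determines the 2π fringe order of the higher-frequency wrapped phase. The sketch below shows that standard step only, not the paper's optimal frequency and pattern-sequence selection scheme.

```python
# One step of hierarchical multi-frequency temporal phase unwrapping: use the
# unwrapped phase at a lower fringe frequency to resolve the 2*pi fringe order
# of the wrapped phase at a higher frequency.
import numpy as np

def unwrap_with_reference(phi_high_wrapped, phi_low_unwrapped, f_high, f_low):
    """phi_* in radians; f_* are fringe frequencies (periods across the image)."""
    scaled = phi_low_unwrapped * (f_high / f_low)            # predicted high-freq phase
    order = np.round((scaled - phi_high_wrapped) / (2 * np.pi))
    return phi_high_wrapped + 2 * np.pi * order

# Tiny example: a linear phase ramp observed at unit frequency (already unwrapped)
# and wrapped at 8x that frequency.
x = np.linspace(0, 1, 256)
phi_low = 2 * np.pi * 1 * x
phi_high_true = 2 * np.pi * 8 * x
phi_high_wrapped = np.angle(np.exp(1j * phi_high_true))
recovered = unwrap_with_reference(phi_high_wrapped, phi_low, 8, 1)
print(np.allclose(recovered, phi_high_true, atol=1e-9))      # True
```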

  14. Optimal Wind Energy Integration in Large-Scale Electric Grids

    Albaijat, Mohammad H.

    The major concern in electric grid operation is operating in the most economical and reliable fashion to ensure affordability and continuity of electricity supply. This dissertation investigates challenges that affect electric grid reliability and economic operation: 1. Congestion of transmission lines, 2. Transmission line expansion, 3. Large-scale wind energy integration, and 4. Optimal placement of Phasor Measurement Units (PMUs) for highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, it is necessary to evaluate transmission capacity expansion with methods that ensure optimal electric grid operation. Therefore, the expansion of transmission line capacity must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of power delivery and increases its cost, congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. The traditional questions requiring answers are "where" to add, "how much" transmission line capacity to add, and "at which voltage level". Because of electric grid deregulation, transmission line expansion is more complicated, as it is now open to investors, whose main interest is to generate revenue, to build new transmission lines. Adding new transmission capacity will help the system relieve transmission congestion, create

  15. ESTIMATING GROSS PRIMARY PRODUCTION IN CROPLAND WITH HIGH SPATIAL AND TEMPORAL SCALE REMOTE SENSING DATA

    S. Lin

    2018-04-01

    Full Text Available Satellite remote sensing data provide spatially continuous and temporally repetitive observations of land surfaces, and they have become increasingly important for monitoring vegetation photosynthetic dynamics over large regions. But remote sensing data have limitations in spatial and temporal scale: for example, higher-spatial-resolution data such as Landsat have 30-m spatial resolution but a 16-day revisit period, while high-temporal-frequency data such as geostationary observations have a 30-minute imaging period but lower spatial resolution (> 1 km). The objective of this study is to investigate whether combining high spatial and high temporal resolution remote sensing data can improve the accuracy of gross primary production (GPP) estimation in cropland. For this analysis we used three years (2010 to 2012) of Landsat-based NDVI data, the MOD13 vegetation index product and Geostationary Operational Environmental Satellite (GOES) data as input parameters to estimate GPP in a small cropland region of Nebraska, US. We then validated the remote-sensing-based GPP against in-situ carbon flux measurements. Results showed that: (1) the overall correlation between the GOES visible band and in-situ measured photosynthetically active radiation (PAR) is about 50 % (R2 = 0.52), and the European Centre for Medium-Range Weather Forecasts ERA-Interim reanalysis data can explain 64 % of the PAR variance (R2 = 0.64); (2) estimating GPP with Landsat 30-m spatial resolution data and ERA daily meteorology data has the highest accuracy (R2 = 0.85, RMSE < 3 gC/m2/day), which performs better than using the MODIS 1-km NDVI/EVI product as input; (3) using daily meteorology data as input for GPP estimation with high spatial resolution data is more relevant than 8-day and 16-day inputs. Generally speaking, using high spatial resolution and high frequency satellite-based remote sensing data can improve GPP estimation accuracy in cropland.
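
Remote-sensing GPP estimates of this kind typically rest on a light-use-efficiency (LUE) formulation, GPP = eps_max x stress scalars x fPAR x PAR. The exact model and coefficients used in the study are not given here, so the sketch below uses a generic form with invented parameter values.

```python
# Generic light-use-efficiency (LUE) GPP sketch: GPP = eps_max * f(T) * f(W) * fPAR * PAR.
# The eps_max value and the linear fPAR-NDVI proxy are illustrative assumptions,
# not the coefficients used in the study.
import numpy as np

def gpp_lue(ndvi, par, t_scalar=1.0, w_scalar=1.0, eps_max=1.8):
    """GPP in gC/m2/day from NDVI, PAR (MJ/m2/day), and dimensionless stress scalars."""
    fpar = np.clip(1.25 * ndvi - 0.15, 0.0, 1.0)   # crude fPAR-NDVI proxy (assumption)
    return eps_max * t_scalar * w_scalar * fpar * par

print(gpp_lue(ndvi=0.75, par=10.0))   # roughly a mid-season cropland value
```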

  16. Controls for multi-scale temporal variation in methane flux of a subtropical tidal salt marsh

    Li, H.

    2016-12-01

    Coastal wetlands provide critical carbon sequestration benefits, yet the production of methane (CH4) from these ecosystems can vary by an order of magnitude based on environmental and biological factors. Eddy covariance (EC) measurements of methane flux (FCH4) were performed in a subtropical tidal salt marsh of eastern China over 20 months. Spectral analysis techniques including the continuous wavelet transform, wavelet coherence, partial wavelet coherence and multiple wavelet coherence were employed to analyze the periodicities and the main regulating factors of FCH4 in the tidal salt marsh. The annual methane budget was 17.8 g C-CH4 m-2 yr-1, which is relatively high compared with most values reported from inland wetland sites. In the non-growing season, release of ebullition was the dominant driving mechanism for the variability of FCH4 from hourly to monthly scales. There was no single dominant factor at short time scales (half-day to 1-day) in the growing season, although tide was one of the most important factors regulating FCH4 at these scales. In comparison, the contribution of temperature to FCH4 at short time scales was small due to its narrow range. In addition, plant-mediated transport and gross primary production also contributed to FCH4 at multiple temporal scales in this densely vegetated marsh, especially at weekly to monthly scales. Owing to the complex interactive influences of tidal dynamics, temperature fluctuations, plant productivity, plant-mediated transport and release of ebullition, FCH4 exhibited no clear pattern of diurnal variation but instead was highly variable.

  17. Protein homology model refinement by large-scale energy optimization.

    Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David

    2018-03-20

    Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.

  18. Logistic regression accuracy across different spatial and temporal scales for a wide-ranging species, the marbled murrelet

    Carolyn B. Meyer; Sherri L. Miller; C. John Ralph

    2004-01-01

    The scale at which habitat variables are measured affects the accuracy of resource selection functions in predicting animal use of sites. We used logistic regression models for a wide-ranging species, the marbled murrelet (Brachyramphus marmoratus), in a large region of California to address how much changing the spatial or temporal scale of...
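
A resource-selection logistic regression of this type is straightforward to fit; a minimal sketch with synthetic covariates (the predictor names and simulated occupancy are placeholders, not the murrelet data):

```python
# Minimal resource-selection (use vs. non-use) logistic regression sketch with
# synthetic covariates; names and values are placeholders for the study's variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([rng.normal(50, 15, n),   # e.g. percent old-growth within a radius
                     rng.normal(10, 4, n)])   # e.g. distance to coast (km)
logit = -3 + 0.05 * X[:, 0] - 0.1 * X[:, 1]
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # simulated presence/absence

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print("coefficients:", model.coef_, "AUC:", round(auc, 2))
```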

  19. Analysis of streamflow variability in Alpine catchments at multiple spatial and temporal scales

    Pérez Ciria, T.; Chiogna, G.

    2017-12-01

    Alpine watersheds play a pivotal role in Europe for water provisioning and for hydropower production. In these catchments, fluctuations of river discharge occur at multiple temporal scales due to natural as well as anthropogenic driving forces. In recent decades, modifications of the flow regime have been observed; their origin lies in the complex interplay between dam construction for hydropower production, changes in water management policies, and climatic changes. The alteration of the natural flow has negative impacts on freshwater biodiversity and threatens the ecosystem integrity of the Alpine region. Therefore, understanding the temporal and spatial variability of river discharge has recently become a particular concern for environmental protection and represents a crucial contribution to achieving sustainable water resources management in the Alps. In this work, time series analysis is conducted for selected gauging stations in the Inn and the Adige catchments, which cover a large part of the central and eastern Alps. We analyze the available time series using the continuous wavelet transform and change-point analyses to determine how and where changes have taken place. Although the two catchments belong to different climatic zones of the Greater Alpine Region, their streamflow properties share some similar characteristics. The comparison of the collected streamflow time series in the two catchments permits detecting gradients in the hydrological system dynamics that depend on station elevation, longitudinal location in the Alps and catchment area. This work shows that human activities (e.g., water management practices and flood protection measures, changes in legislation and market regulation) have major impacts on streamflow and should be rigorously considered in hydrological models.

  20. Dynamics of Disagreement: Large-Scale Temporal Network Analysis Reveals Negative Interactions in Online Collaboration

    Tsvetkova, Milena; García-Gavilanes, Ruth; Yasseri, Taha

    2016-11-01

    Disagreement and conflict are a fact of social life. However, negative interactions are rarely explicitly declared and recorded and this makes them hard for scientists to study. In an attempt to understand the structural and temporal features of negative interactions in the community, we use complex network methods to analyze patterns in the timing and configuration of reverts of article edits to Wikipedia. We investigate how often and how fast pairs of reverts occur compared to a null model in order to control for patterns that are natural to the content production or are due to the internal rules of Wikipedia. Our results suggest that Wikipedia editors systematically revert the same person, revert back their reverter, and come to defend a reverted editor. We further relate these interactions to the status of the involved editors. Even though the individual reverts might not necessarily be negative social interactions, our analysis points to the existence of certain patterns of negative social dynamics within the community of editors. Some of these patterns have not been previously explored and carry implications for the knowledge collection practice conducted on Wikipedia. Our method can be applied to other large-scale temporal collaboration networks to identify the existence of negative social interactions and other social processes.

  1. Scale invariance properties of intracerebral EEG improve seizure prediction in mesial temporal lobe epilepsy.

    Kais Gadhoumi

    Full Text Available Although treatment for epilepsy is available and effective for nearly 70 percent of patients, many remain in need of new therapeutic approaches. Predicting impending seizures in these patients could significantly enhance their quality of life if the prediction performance is clinically practical. In this study, we investigate improving the performance of a seizure prediction algorithm in 17 patients with mesial temporal lobe epilepsy by means of a novel measure. Scale-free dynamics of the intracerebral EEG are quantified through robust estimates of the scaling exponents--the first cumulants--derived from a wavelet-leader and bootstrap-based multifractal analysis. The cumulants are investigated for their ability to discriminate between preictal and interictal epochs. The performance of our recently published patient-specific seizure prediction algorithm is then tested out-of-sample on long-duration data using combinations of the cumulants and previously introduced state similarity measures. Using the first cumulant in combination with state similarity measures, up to 13 of 17 patients had seizures predicted above chance with clinically practical levels of sensitivity (80.5%) and specificity (25.1% of total time under warning) for prediction horizons above 25 min. These results indicate that the scale-free dynamics of the preictal state are different from those of the interictal state. Quantifiers of these dynamics may carry a predictive power that can be used to improve seizure prediction performance.

  2. Optimization of the temporal pattern of applied dose for a single fraction of radiation: Implications for radiation therapy

    Altman, Michael B.

    The increasing prevalence of intensity modulated radiation therapy (IMRT) as a treatment modality has led to a renewed interest in the potential for interaction between prolonged treatment time, as frequently associated with IMRT, and the underlying radiobiology of the irradiated tissue. A particularly relevant aspect of radiobiology is cell repair capacity, which influences cell survival, and thus directly relates to the ability to control tumors and spare normal tissues. For a single fraction of radiation, the linear quadratic (LQ) model is commonly used to relate the radiation dose to the fraction of cells surviving. The LQ model implies a dependence on two time-related factors which correlate to radiobiological effects: the duration of radiation application, and the functional form of how the dose is applied over that time (the "temporal pattern of applied dose"). Although the former has been well studied, the latter has not. Thus, the goal of this research is to investigate the impact of the temporal pattern of applied dose on the survival of human cells and to explore how the manipulation of this temporal dose pattern may be incorporated into an IMRT-based radiation therapy treatment planning scheme. The hypothesis is that the temporal pattern of applied dose in a single fraction of radiation can be optimized to maximize or minimize cell kill. Furthermore, techniques which utilize this effect could have clinical ramifications. In situations where increased cell kill is desirable, such as tumor control, or limiting the degree of cell kill is important, such as the sparing of normal tissue, temporal sequences of dose which maximize or minimize cell kill (temporally "optimized" sequences) may provide greater benefit than current clinically used radiation patterns. In the first part of this work, an LQ-based modeling analysis of effects of the temporal pattern of dose on cell kill is performed. Through this, patterns are identified for maximizing cell kill for a
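
For reference, the LQ model underlying this work gives the surviving fraction as S = exp[-(alpha D + G beta D^2)], where the Lea-Catcheside protraction factor G <= 1 encodes how the temporal pattern of dose delivery, combined with sublethal-damage repair, reduces the quadratic term. The sketch below implements the standard textbook case of a constant dose rate over a total time T with mono-exponential repair, not the dissertation's optimized temporal sequences; the alpha, beta, and repair parameters are illustrative.

```python
# Linear-quadratic (LQ) cell survival with the Lea-Catcheside dose-protraction factor G
# for a constant dose rate over duration T and mono-exponential sublethal-damage repair.
# Standard textbook formulation; parameter values are illustrative only.
import numpy as np

def lea_catcheside_constant_rate(mu, T):
    """G for uniform delivery over time T (h) with repair rate mu (1/h); G -> 1 as T -> 0."""
    x = mu * T
    if x < 1e-8:
        return 1.0
    return 2.0 / x**2 * (x - 1.0 + np.exp(-x))

def surviving_fraction(D, alpha, beta, mu, T):
    G = lea_catcheside_constant_rate(mu, T)
    return np.exp(-(alpha * D + G * beta * D**2))

# Acute vs. protracted delivery of 8 Gy with illustrative alpha/beta and a ~30 min repair half-time.
mu = np.log(2) / 0.5
print(surviving_fraction(8, alpha=0.15, beta=0.05, mu=mu, T=0.01))  # near-acute delivery
print(surviving_fraction(8, alpha=0.15, beta=0.05, mu=mu, T=1.0))   # protracted over 1 h
```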

  3. Evaluation of Aquarius Version-5 Sea Surface Salinity on various spatial and temporal scales

    Lee, T.

    2017-12-01

    Sea surface salinity (SSS) products from Aquarius have had three public releases with progressive improvement in data quality: Versions 2, 3, and 4, with the last one being released in October 2015. A systematic assessment of the Version-4, Level-3 Aquarius SSS product was performed on various spatial and temporal scales by comparing it with gridded Argo products (Lee 2016, Geophys. Res. Lett.). The comparison showed that the consistency of Aquarius Version-4 SSS with gridded Argo products is comparable to that between two different gridded Argo products. However, significant seasonal biases remain in high-latitude oceans. Further improvements are being made by the Aquarius team. Aquarius Version 5.0 SSS is scheduled to be released in October 2017 as the final version of the Aquarius Project. This presentation provides a similar evaluation of Version-5 SSS as reported by Lee (2016) and contrasts it with the current Version-4 SSS.

  4. A Grouping Particle Swarm Optimizer with Personal-Best-Position Guidance for Large Scale Optimization.

    Guo, Weian; Si, Chengyong; Xue, Yu; Mao, Yanfen; Wang, Lei; Wu, Qidi

    2017-05-04

    Particle Swarm Optimization (PSO) is a popular algorithm that has been widely investigated and implemented in many areas. However, the canonical PSO does not perform well in maintaining population diversity, which usually leads to premature convergence or local optima. To address this issue, we propose a variant of PSO named Grouping PSO with Personal-Best-Position (Pbest) Guidance (GPSO-PG), which maintains population diversity by preserving the diversity of exemplars. On the one hand, we adopt a uniform random allocation strategy to assign particles to different groups, and in each group the losers learn from the winner. On the other hand, we employ the personal historical best position of each particle in social learning rather than the current global best particle. In this way, the diversity of exemplars increases and the effect of the global best particle is eliminated. We test the proposed algorithm on the CEC 2008 and CEC 2010 benchmarks, which concern large-scale optimization problems (LSOPs). Compared with several current peer algorithms, GPSO-PG exhibits competitive performance in maintaining population diversity and obtains satisfactory performance on these problems.
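
A simplified sketch of the two ingredients described in the abstract, random grouping with winner-loser learning and guidance by personal best positions instead of a global best, follows; it is an interpretation for illustration, not the exact GPSO-PG update equations.

```python
# Simplified grouping PSO with personal-best guidance: particles are randomly paired,
# the loser in each pair learns from the winner, and the social term uses the winner's
# personal best rather than a global best. Interpretation of the abstract, not the
# authors' exact algorithm.
import numpy as np

def sphere(x):                      # toy objective
    return np.sum(x**2, axis=-1)

rng = np.random.default_rng(0)
n, dim, iters = 40, 10, 200
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_val = sphere(pbest)

for _ in range(iters):
    order = rng.permutation(n)                         # random grouping into pairs
    for i, j in order.reshape(-1, 2):
        winner, loser = (i, j) if pbest_val[i] <= pbest_val[j] else (j, i)
        r1, r2 = rng.random(dim), rng.random(dim)
        vel[loser] = (0.7 * vel[loser]
                      + 1.5 * r1 * (pbest[loser] - pos[loser])     # cognitive term
                      + 1.5 * r2 * (pbest[winner] - pos[loser]))   # pbest-guided social term
        pos[loser] += vel[loser]
    vals = sphere(pos)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]

print("best value found:", pbest_val.min())
```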

  5. Risk assessment of flood disaster and forewarning model at different spatial-temporal scales

    Zhao, Jun; Jin, Juliang; Xu, Jinchao; Guo, Qizhong; Hang, Qingfeng; Chen, Yaqian

    2018-05-01

    Aiming to reduce losses from flood disasters, a risk assessment and forewarning model for flood disaster is studied. The model is built upon risk indices of the flood disaster system, proceeding from the whole structure and its parts at different spatial-temporal scales. On the one hand, the study establishes a long-term forewarning model for the surface area with three levels: prediction, evaluation, and forewarning. A structure-adaptive back-propagation neural network for peak identification is used to simulate the indices in the prediction sub-model. Set pair analysis is employed to calculate the connection degrees of a single index, the comprehensive index, and the systematic risk through the multivariate connection number, and the comprehensive assessment is made with assessment matrices in the evaluation sub-model. A comparison-judging method is adopted in the forewarning sub-model to assign warning degrees of flood disaster from the comprehensive risk assessment index and forewarning standards, yielding the long-term local conditions for proposing planning schemes. On the other hand, the study sets up a real-time forewarning model for the spot, which introduces a real-time Kalman filter correction technique based on a hydrological model with a forewarning index, yielding the real-time local conditions for presenting an emergency plan. The study takes the Tunxi area, Huangshan City, China, as an example. After establishing the risk assessment and forewarning model and applying it to actual and simulated data for flood disaster at different spatial-temporal scales from 1989 to 2008, the forewarning results show that flood disaster risk declines on the whole from 2009 to 2013, despite a rise in 2011. At the macroscopic level, structural and non-structural measures are proposed, while at the microscopic level, the time, place, and method are listed. This suggests that the proposed model is feasible in theory and application, thus
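
The real-time correction technique mentioned is, generically, a Kalman filter update of a model forecast with an incoming observation. A scalar sketch (the state, observation, and variances are illustrative, not those of the study's hydrological model):

```python
# Scalar Kalman-filter correction of a model forecast with a new observation, as used
# generically for real-time updating of hydrological forecasts. Values are illustrative.
def kalman_update(x_forecast, P_forecast, z_obs, R_obs):
    """Return corrected state and variance given forecast (x, P) and observation (z, R)."""
    K = P_forecast / (P_forecast + R_obs)        # Kalman gain (observation operator H = 1)
    x_updated = x_forecast + K * (z_obs - x_forecast)
    P_updated = (1.0 - K) * P_forecast
    return x_updated, P_updated

# Forecast river stage 3.2 m (variance 0.25); the gauge reads 3.6 m (variance 0.04).
print(kalman_update(3.2, 0.25, 3.6, 0.04))       # corrected estimate pulled toward the gauge
```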

  6. Large-scale brain networks are distinctly affected in right and left mesial temporal lobe epilepsy.

    de Campos, Brunno Machado; Coan, Ana Carolina; Lin Yasuda, Clarissa; Casseb, Raphael Fernandes; Cendes, Fernando

    2016-09-01

    Mesial temporal lobe epilepsy (MTLE) with hippocampus sclerosis (HS) is associated with functional and structural alterations extending beyond the temporal regions and abnormal pattern of brain resting state networks (RSNs) connectivity. We hypothesized that the interaction of large-scale RSNs is differently affected in patients with right- and left-MTLE with HS compared to controls. We aimed to determine and characterize these alterations through the analysis of 12 RSNs, functionally parceled in 70 regions of interest (ROIs), from resting-state functional-MRIs of 99 subjects (52 controls, 26 right- and 21 left-MTLE patients with HS). Image preprocessing and statistical analysis were performed using the UF²C toolbox, which provided ROI-wise results for intranetwork and internetwork connectivity. Intranetwork abnormalities were observed in the dorsal default mode network (DMN) in both groups of patients and in the posterior salience network in right-MTLE. Both groups showed abnormal correlation between the dorsal-DMN and the posterior salience, as well as between the dorsal-DMN and the executive-control network. Patients with left-MTLE also showed reduced correlation between the dorsal-DMN and visuospatial network and increased correlation between bilateral thalamus and the posterior salience network. The ipsilateral hippocampus stood out as a central area of abnormalities. Alterations on left-MTLE expressed a low cluster coefficient, whereas the altered connections on right-MTLE showed low cluster coefficient in the DMN but high in the posterior salience regions. Both right- and left-MTLE patients with HS have widespread abnormal interactions of large-scale brain networks; however, all parameters evaluated indicate that left-MTLE has a more intricate bihemispheric dysfunction compared to right-MTLE. Hum Brain Mapp 37:3137-3152, 2016. © 2016 The Authors. Human Brain Mapping published by Wiley Periodicals, Inc.

  7. Evaluating complementary networks of restoration plantings for landscape-scale occurrence of temporally dynamic species.

    Ikin, Karen; Tulloch, Ayesha; Gibbons, Philip; Ansell, Dean; Seddon, Julian; Lindenmayer, David

    2016-10-01

    Multibillion dollar investments in land restoration make it critical that conservation goals are achieved cost-effectively. Approaches developed for systematic conservation planning offer opportunities to evaluate landscape-scale, temporally dynamic biodiversity outcomes from restoration and improve on traditional approaches that focus on the most species-rich plantings. We investigated whether it is possible to apply a complementarity-based approach to evaluate the extent to which an existing network of restoration plantings meets representation targets. Using a case study of woodland birds of conservation concern in southeastern Australia, we compared complementarity-based selections of plantings based on temporally dynamic species occurrences with selections based on static species occurrences and selections based on ranking plantings by species richness. The dynamic complementarity approach, which incorporated species occurrences over 5 years, resulted in higher species occurrences and proportion of targets met compared with the static complementarity approach, in which species occurrences were taken at a single point in time. For equivalent cost, the dynamic complementarity approach also always resulted in higher average minimum percent occurrence of species maintained through time and a higher proportion of the bird community meeting representation targets compared with the species-richness approach. Plantings selected under the complementarity approaches represented the full range of planting attributes, whereas those selected under the species-richness approach were larger in size. Our results suggest that future restoration policy should not attempt to achieve all conservation goals within individual plantings, but should instead capitalize on restoration opportunities as they arise to achieve collective value of multiple plantings across the landscape. Networks of restoration plantings with complementary attributes of age, size, vegetation structure, and

  8. Rainfall Erosivity Database on the European Scale (REDES): A product of a high temporal resolution rainfall data collection in Europe

    Panagos, Panos; Ballabio, Cristiano; Borrelli, Pasquale; Meusburger, Katrin; Alewell, Christine

    2016-04-01

    Mediterranean area. This spatio-temporal analysis of rainfall erosivity at the European scale is very important for policy makers and farmers for soil conservation, optimization of agricultural land use, and natural hazard prediction. REDES is also used in combination with future rainfall data from WorldClim to run climate change scenarios. Projecting REDES under climate change scenarios (HADGEM2, RCP4.5) with a robust geo-statistical model resulted in a 10-20% increase of the R-factor in Europe until 2050.

  9. Optimizing Distributed Machine Learning for Large Scale EEG Data Set

    M Bilal Shaikh

    2017-06-01

    Full Text Available Distributed Machine Learning (DML) has become more important than ever in this era of Big Data. Scaling machine learning techniques on distributed platforms poses many challenges. When it comes to scalability, improving processor technology for high-level computation of data is reaching its limit, whereas increasing the number of machine nodes and distributing data along with computation appears to be a viable solution. Different frameworks and platforms are available to solve DML problems. These platforms provide automated random distribution of datasets, which misses the power of user-defined, intelligent data partitioning based on domain knowledge. We conducted an empirical study using an EEG data set collected through the P300 Speller component of an ERP (Event Related Potential) paradigm, which is widely used in BCI problems; it helps in translating the intention of a subject while performing a cognitive task. EEG data contain noise due to waves generated by other activities in the brain, which contaminates the true P300 Speller signal. Machine learning techniques could help in detecting errors made by the P300 Speller. We solve this classification problem by partitioning data into different chunks and preparing distributed models using an Elastic CV Classifier. To present a case for optimizing distributed machine learning, we propose an intelligent, user-defined data partitioning approach that can improve the accuracy of distributed machine learners on average. Our results show a better average AUC compared to the average AUC obtained after applying random data partitioning, which gives the user no control over data partitioning. The approach improves the average accuracy of the distributed learner due to domain-specific intelligent partitioning by the user. Our customized approach achieves 0.66 AUC on individual sessions and 0.75 AUC on mixed sessions, whereas random/uncontrolled data distribution records 0.63 AUC.

  10. Describing temporal variability of the mean Estonian precipitation series in climate time scale

    Post, P.; Kärner, O.

    2009-04-01

    Applicability of random-walk-type models to represent the temporal variability of various atmospheric temperature series has been successfully demonstrated recently (e.g. Kärner, 2002). The main problem in temperature modelling is the scale break in the generally self-similar air temperature anomaly series (Kärner, 2005). The break separates short-range strong non-stationarity from a nearly stationary longer-range variability region. This indicates that several geophysical time series show short-range non-stationary behaviour and stationary behaviour over longer ranges (Davis et al., 1996). To model series like these, the choice of time step is crucial. To characterize the long-range variability we can neglect the short-range non-stationary fluctuations, provided that we are able to model the long-range tendencies properly. The structure function (Monin and Yaglom, 1975) was used to determine an approximate segregation line between the short and the long scale in terms of modelling. The longer scale can be called the climate scale, because such models are applicable at scales over some decades. To get rid of the short-range fluctuations in daily series, the variability can be examined using a sufficiently long time step. In the present paper, we show that the same philosophy is useful for finding a model to represent the climate-scale temporal variability of the Estonian daily mean precipitation amount series over 45 years (1961-2005). Temporal variability of the obtained time series is examined by means of an autoregressive integrated moving average (ARIMA) model of type (0,1,1). This model is applicable for simulating daily precipitation if an appropriate time step is selected that enables us to neglect the short-range non-stationary fluctuations. A considerably longer time step than one day (30 days) is used in the current paper to model the precipitation time series variability. Each ARIMA (0
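
An ARIMA(0,1,1) fit of this kind is easy to reproduce with statsmodels; the sketch below uses a synthetic series of 30-day precipitation totals in place of the Estonian data:

```python
# Fit an ARIMA(0,1,1) model to a synthetic series of 30-day precipitation totals,
# standing in for the aggregated Estonian daily series used in the study.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# 45 years of roughly twelve 30-day totals per year of fake precipitation sums (mm)
precip_30d = 60 + 0.1 * np.cumsum(rng.normal(0, 2, 540)) + rng.normal(0, 15, 540)

model = ARIMA(precip_30d, order=(0, 1, 1)).fit()
print(model.summary().tables[1])    # the MA(1) coefficient of the differenced series
print(model.forecast(steps=3))      # next three 30-day totals
```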

  11. Mathematical programming methods for large-scale topology optimization problems

    Rojas Labanda, Susana

    for mechanical problems, but has rapidly extended to many other disciplines, such as fluid dynamics and biomechanical problems. However, the novelty and improvement of optimization methods have been very limited. It is, indeed, necessary to develop new optimization methods to improve the final designs..., and at the same time, reduce the number of function evaluations. Nonlinear optimization methods, such as sequential quadratic programming and interior point solvers, have barely been embraced by the topology optimization community. Thus, this work focuses on the introduction of this kind of second-order method... for the classical minimum compliance problem. Two state-of-the-art optimization algorithms are investigated and implemented for this structural topology optimization problem. A sequential quadratic programming method (TopSQP) and an interior point method (TopIP) are developed, exploiting the specific mathematical...
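
For reference, the classical minimum compliance problem mentioned above is usually stated in density-based form as below; this is the standard textbook formulation, included for context rather than taken from the thesis.

```latex
\[
\begin{aligned}
\min_{\mathbf{x}} \quad & c(\mathbf{x}) = \mathbf{U}^{\mathsf{T}} \mathbf{K}(\mathbf{x})\,\mathbf{U} \\
\text{s.t.} \quad & \mathbf{K}(\mathbf{x})\,\mathbf{U} = \mathbf{F}, \\
& V(\mathbf{x})/V_0 \le f, \\
& 0 < x_{\min} \le x_e \le 1, \quad e = 1,\dots,N,
\end{aligned}
\]
```

where x collects the element densities, K(x) is the (e.g. SIMP-penalized) stiffness matrix, U the displacements, F the loads, and f the prescribed volume fraction.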

  12. Assessment of economically optimal water management and geospatial potential for large-scale water storage

    Weerasinghe, Harshi; Schneider, Uwe A.

    2010-05-01

    Water is an essential but limited and vulnerable resource for all socio-economic development and for maintaining healthy ecosystems. Water scarcity, accelerated by population expansion, improved living standards, and rapid growth in economic activities, has profound environmental and social implications. These include severe environmental degradation, declining groundwater levels, and increasing problems of water conflicts. Water scarcity is predicted to be one of the key factors limiting development in the 21st century. Climate scientists have projected spatial and temporal changes in precipitation and changes in the probability of intense floods and droughts in the future. As scarcity of accessible and usable water increases, demand for efficient water management and adaptation strategies increases as well. Addressing water scarcity requires an intersectoral and multidisciplinary approach to managing water resources. This would in turn keep social welfare and economic benefit in optimal balance without compromising the sustainability of ecosystems. This paper presents a geographically explicit method to assess the potential for water storage with reservoirs and a dynamic model that identifies the dimensions and material requirements under an economically optimal water management plan. The methodology is applied to the Elbe and Nile river basins. Input data for geospatial analysis at the watershed level are taken from global data repositories and include data on elevation, rainfall, soil texture, soil depth, drainage, land use and land cover, which are then downscaled to 1 km spatial resolution. Runoff potential for different combinations of land use and hydrologic soil groups and for mean annual precipitation levels is derived by the SCS-CN method. Using the overlay and decision tree algorithms
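
The SCS-CN step mentioned above maps event precipitation and a curve number to direct runoff; a standard metric-form implementation with the usual initial-abstraction ratio of 0.2 is sketched below (curve numbers per land-use/soil-group combination come from the published lookup tables).

```python
# SCS Curve Number runoff (metric form): S = 25400/CN - 254 (mm), Ia = 0.2*S,
# Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0. Standard formulation.
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = ia_ratio * s                 # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_cn_runoff(p_mm=60.0, cn=75))   # runoff depth in mm for a 60 mm event
```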

  13. Assessments of Drought Impacts on Vegetation in China with the Optimal Time Scales of the Climatic Drought Index

    Zheng Li

    2015-07-01

    Full Text Available Drought is expected to increase in frequency and severity due to global warming, and its impacts on vegetation are typically evaluated with climatic drought indices, such as the multi-scalar Standardized Precipitation Evapotranspiration Index (SPEI). We analyzed the covariation between the SPEIs of various time scales and the anomalies of the normalized difference vegetation index (NDVI), from which the vegetation type-related optimal time scales were retrieved. The results indicated that the optimal time scales of needle-leaved forest, broadleaf forest and shrubland were between 10 and 12 months, which were considerably longer than those of grassland, meadow and cultivated vegetation (2 to 4 months). When the optimal vegetation type-related time scales were used, the SPEI could better reflect the vegetation's responses to water conditions, with the correlation coefficients between SPEIs and NDVI anomalies increasing by 5.88% to 28.4%. We investigated the spatio-temporal characteristics of drought and quantified the different responses of vegetation growth to drought during the growing season (April–October). The results revealed that the frequency of drought has increased in the 21st century, with the drying trend occurring in most of China. These results are useful for ecological assessment and for adapting management to mitigate the impact of drought on vegetation. They also help in using water resources more efficiently and in reducing potential damage to human health caused by water shortages.

  14. Optimal unit sizing for small-scale integrated energy systems using multi-objective interval optimization and evidential reasoning approach

    Wei, F.; Wu, Q.H.; Jing, Z.X.; Chen, J.J.; Zhou, X.X.

    2016-01-01

    This paper proposes a comprehensive framework including a multi-objective interval optimization model and an evidential reasoning (ER) approach to solve the unit sizing problem of small-scale integrated energy systems with uncertain wind and solar energies integrated. In the multi-objective interval optimization model, interval variables are introduced to tackle the uncertainties of the optimization problem. To consider simultaneously the cost and risk of a business investment, the average and the deviation of the life cycle cost (LCC) of the integrated energy system are formulated. In order to solve the problem, a novel multi-objective optimization algorithm, MGSOACC (multi-objective group search optimizer with adaptive covariance matrix and chaotic search), is developed, employing an adaptive covariance matrix to make the search strategy adaptive and applying chaotic search to maintain the diversity of the group. Furthermore, the ER approach is applied to deal with the multiple interests of an investor at the business decision-making stage and to determine the final unit sizing solution from the Pareto-optimal solutions. This paper reports on the simulation results obtained using a small-scale direct district heating system (DH) and a small-scale district heating and cooling system (DHC) optimized by the proposed framework. The results demonstrate the superiority of the multi-objective interval optimization model and the ER approach in tackling the unit sizing problem of integrated energy systems considering the integration of uncertain wind and solar energies. - Highlights: • Cost and risk of investment in small-scale integrated energy systems are considered. • A multi-objective interval optimization model is presented. • A novel multi-objective optimization algorithm (MGSOACC) is proposed. • The evidential reasoning (ER) approach is used to obtain the final optimal solution. • The MGSOACC and ER can tackle the unit sizing problem efficiently.

  15. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modeling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. To address these issues, we propose an effective and efficient modeling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step, and solve each local sub-problem through very fast linear network programming algorithms, and (c) the substantial

  16. Use of soil moisture dynamics and patterns at different spatio-temporal scales for the investigation of subsurface flow processes

    T. Blume

    2009-07-01

    Full Text Available Spatial patterns as well as temporal dynamics of soil moisture have a major influence on runoff generation. The investigation of these dynamics and patterns can thus yield valuable information on hydrological processes, especially in data-scarce or previously ungauged catchments. The combination of spatially scarce but temporally high-resolution soil moisture profiles with episodic and thus temporally scarce moisture profiles at additional locations provides information on spatial as well as temporal patterns of soil moisture at the hillslope transect scale. This approach is better suited to difficult terrain (dense forest, steep slopes) than geophysical techniques and at the same time less cost-intensive than a high-resolution grid of continuously measuring sensors. Rainfall simulation experiments with dye tracers, while continuously monitoring soil moisture response, allow for visualization of flow processes in the unsaturated zone at these locations. Data were analyzed at different spatio-temporal scales using various graphical methods, such as space-time colour maps (for the event and plot scale) and binary indicator maps (for the long-term and hillslope scale). Annual dynamics of soil moisture and decimeter-scale variability were also investigated. The proposed approach proved to be successful in the investigation of flow processes in the unsaturated zone and showed the importance of preferential flow in the Malalcahuello Catchment, a data-scarce catchment in the Andes of Southern Chile. Fast response times of stream flow indicate that preferential flow observed at the plot scale might also be of importance at the hillslope or catchment scale. Flow patterns were highly variable in space but persistent in time. The most likely explanation for preferential flow in this catchment is a combination of hydrophobicity, small-scale heterogeneity in rainfall due to redistribution in the canopy and strong gradients in unsaturated conductivities leading to

  17. Human-induced C erosion and burial across spatial and temporal scales. (Invited)

    van oost, K.

    2013-12-01

    Anthropogenic land cover change and soil erosion are tightly coupled: accelerated erosion and deposition of soil are inevitable consequences of the removal of vegetative cover and increased exposure of the soil surface to erosion. A significant portion of the earth's surface has now been disturbed, and this has locally accelerated erosion 10- to 100-fold. Although there is now growing awareness that the erosion-induced C flux may be an important factor determining global and regional net terrestrial ecosystem C balances, the significance of this disturbance for the past, present and future C cycle remains uncertain. In this paper, we argue that the significance for both past and present C budgets remains poorly quantified due to uncertainty about the contribution of biotic versus erosion-induced C fluxes, because of their intrinsically different space and time scales. Carbon erosion research in agro-ecosystems has traditionally focused on short-term processes, i.e., single events to a few decades, and longer-term observations of C and sediment dynamics are therefore rare. Likewise, C cycling is typically studied at the profile scale, while erosion processes operate over various spatial scales, and whole-watershed approaches are therefore needed. We address this issue here by synthesizing three case studies in which we report results of a measurement campaign to characterize the erosional control on vertical carbon fluxes from degraded land. First, using signatures in soil sedimentary archives, we integrate the effects of accelerated C erosion across point, hillslope and catchment scales for a temperate river catchment over the period of agriculture to demonstrate that accounting for the non-steady-state C dynamics in geomorphically active systems is pertinent to understanding both past and future anthropogenic global change. Second, we report year-round soil respiration measurements with high temporal resolution along an erosion gradient on cultivated sloping land in the Chinese Loess

  18. Spatial and temporal distribution of onroad CO2 emissions at the Urban spatial scale

    Song, Y.; Gurney, K. R.; Zhou, Y.; Mendoza, D. L.

    2011-12-01

    The Hestia Project is a multi-disciplinary effort to help better understand the spatial and temporal distribution of fossil fuel carbon dioxide (CO2) emissions at the urban scale. Onroad transportation is an essential source of CO2 emissions. This study examines two urban domains, Marion County (Indianapolis) and Los Angeles County, and explores the methods and results associated with the spatial and temporal distribution of local urban onroad CO2 emissions. We utilize a bottom-up approach and spatially distribute county emissions based on the Annual Average Daily Traffic (AADT) counts provided by the local Department of Transportation. The total amount of CO2 emissions is calculated by the National Mobile Inventory Model (NMIM) for Marion County and the EMission FACtors (EMFAC) model for Los Angeles County. The NMIM model provides CO2 emissions based on vehicle miles traveled (VMT) data at the county level from the national county database (NCD). The EMFAC model provides CO2 emissions for the State of California based on vehicle activities, including VMT, vehicle population and fuel types. A GIS road atlas is retrieved from the US Census Bureau. Further spatial analysis and integration are performed in GIS software to distribute onroad CO2 emissions according to traffic volume. The temporal allocation of onroad CO2 emissions is based on hourly traffic data obtained from the Metropolitan Planning Organizations (MPO) for Marion County and the Department of Transportation for Los Angeles County. The annual CO2 emissions are distributed according to each hourly fraction of traffic counts. Because automatic traffic recorder (ATR) stations are unevenly distributed in space, we create Thiessen polygons such that each road segment is linked to the nearest neighboring ATR station. The hourly profile for each individual station is then combined to create a "climatology" of CO2 emissions in time on each road segment. We find that for Marion County in the year 2002, urban interstate and arterial roads have
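
    The temporal-allocation step described above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example (not the Hestia code): it assigns each road segment to its nearest traffic-count station, a point-wise stand-in for the Thiessen-polygon step, and spreads an annual emission total over the hours of a day in proportion to that station's hourly traffic fractions. Station coordinates, segment centroids and all data values are invented for illustration.

```python
import numpy as np

# Hypothetical inputs (illustrative only, not Hestia data)
stations = np.array([[0.0, 0.0], [5.0, 1.0], [2.0, 6.0]])               # ATR station coordinates
hourly_counts = np.random.default_rng(1).integers(50, 500, (3, 24))     # traffic counts per station, per hour
segments = np.array([[1.0, 1.0], [4.5, 0.5], [2.5, 5.0]])               # road-segment centroids
annual_co2 = np.array([1200.0, 800.0, 950.0])                           # annual segment emissions (t CO2/yr)

# 1) Nearest-station assignment (point-wise equivalent of Thiessen-polygon membership)
d = np.linalg.norm(segments[:, None, :] - stations[None, :, :], axis=2)
nearest = d.argmin(axis=1)

# 2) Hourly fractions of daily traffic at each station
fractions = hourly_counts / hourly_counts.sum(axis=1, keepdims=True)

# 3) Distribute each segment's annual total over hours using its station's profile
hourly_co2 = annual_co2[:, None] * fractions[nearest] / 365.0           # t CO2 per hour of an average day

print(hourly_co2.shape)        # (3 segments, 24 hours)
print(hourly_co2.sum(axis=1))  # equals annual_co2 / 365
```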

  19. Temporal integration and 1/f power scaling in a circuit model of cerebellar interneurons.

    Maex, Reinoud; Gutkin, Boris

    2017-07-01

    Inhibitory interneurons interconnected via electrical and chemical (GABA-A receptor) synapses form extensive circuits in several brain regions. They are thought to be involved in timing and synchronization through fast feedforward control of principal neurons. Theoretical studies have shown, however, that whereas self-inhibition does indeed reduce response duration, lateral inhibition, in contrast, may generate slow response components through a process of gradual disinhibition. Here we simulated a circuit of interneurons (stellate and basket cells) of the molecular layer of the cerebellar cortex and observed circuit time constants that could rise, depending on parameter values, to >1 s. The integration time scaled both with the strength of inhibition, vanishing completely when inhibition was blocked, and with the average connection distance, which determined the balance between lateral and self-inhibition. Electrical synapses could further enhance the integration time by limiting heterogeneity among the interneurons and by introducing a slow capacitive current. The model can explain several observations, such as the slow time course of OFF-beam inhibition, the phase lag of interneurons during vestibular rotation, or the phase lead of Purkinje cells. Interestingly, the interneuron spike trains displayed power that scaled approximately as 1/f at low frequencies. In conclusion, stellate and basket cells in cerebellar cortex, and interneuron circuits in general, may not only provide fast inhibition to principal cells but also act as temporal integrators that build a very short-term memory. NEW & NOTEWORTHY The most common function attributed to inhibitory interneurons is feedforward control of principal neurons. In many brain regions, however, the interneurons are densely interconnected via both chemical and electrical synapses but the function of this coupling is largely unknown. Based on large-scale simulations of an interneuron circuit of cerebellar cortex, we

  20. Usefulness of medial temporal lobe atrophy visual rating scale in detecting Alzheimer's disease: Preliminary study

    Jae-Hyeok Heo

    2013-01-01

    Full Text Available Background: The Korean version of the Mini-Mental Status Examination (K-MMSE) and the Korean version of the Addenbrooke Cognitive Examination (K-ACE) have been validated as quick neuropsychological tests for screening dementia in various clinical settings. Medial temporal atrophy (MTA) is an early pathological characteristic of Alzheimer's disease (AD). We aimed to assess the diagnostic validity of the fusion of a neuropsychological test and a visual rating scale (VRS) of MTA in AD. Materials and Methods: A total of fifty subjects (25 AD, 25 controls) were included. The neuropsychological tests used were the K-MMSE and the K-ACE. A T1 axial imaging visual rating scale (VRS) was applied to assess the grade of MTA. We calculated the fusion score as the difference between the neuropsychological test score and the VRS of MTA. The receiver operating characteristic (ROC) curve was used to determine the optimal cut-off score, sensitivity and specificity of the fusion scores in screening AD. Results: No significant differences in age, gender and education were found between the AD and control groups. The values of K-MMSE, K-ACE, CDR, VRS and cognitive function test minus VRS were significantly lower in the AD group than in the control group. The AUC (area under the curve), sensitivity and specificity were 0.857, 84% and 80% for K-MMSE minus VRS and 0.884, 80% and 88% for K-ACE minus VRS, respectively. Those for K-MMSE only were 0.842, 76% and 72%, and for K-ACE only, 0.868, 80% and 88%, respectively. Conclusions: The fusion of a neuropsychological test and the VRS suggested clinical usefulness, given its ease of use and apparent superiority over the neuropsychological test alone. However, this study failed to find a statistically significant difference. This may be because of the small number of subjects or because there is no true difference.
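
    The ROC-based choice of an optimal cut-off described in this record is a standard procedure. Below is a minimal sketch, assuming scikit-learn is available and using invented scores and labels purely for illustration; Youden's J is used as one common cut-off criterion, although the study above does not state which criterion it applied.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Invented fusion scores: lower values for the AD group, higher for controls
scores = np.concatenate([rng.normal(14, 4, 25), rng.normal(22, 4, 25)])
labels = np.concatenate([np.ones(25), np.zeros(25)])   # 1 = AD, 0 = control

# Low scores indicate disease, so feed the negated score as the "evidence for AD"
fpr, tpr, thresholds = roc_curve(labels, -scores)
auc = roc_auc_score(labels, -scores)

# Youden's J = sensitivity + specificity - 1; pick the threshold that maximizes it
j = tpr - fpr
best = j.argmax()
print(f"AUC={auc:.3f}  cut-off={-thresholds[best]:.1f}  "
      f"sensitivity={tpr[best]:.2f}  specificity={1 - fpr[best]:.2f}")
```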

  1. Length scale and manufacturability in density-based topology optimization

    Lazarov, Boyan Stefanov; Wang, Fengwen; Sigmund, Ole

    2016-01-01

    Since its original introduction in structural design, density-based topology optimization has been applied to a number of other fields such as microelectromechanical systems, photonics, acoustics and fluid mechanics. The methodology has been well accepted in industrial design processes where it can provide competitive designs in terms of cost, materials and functionality under a wide set of constraints. However, the optimized topologies are often considered as conceptual due to loosely defined topologies and the need of postprocessing. Subsequent amendments can affect the optimized design...

  2. Optimal Experimental Design for Large-Scale Bayesian Inverse Problems

    Ghattas, Omar

    2014-01-01

    We develop a Bayesian framework for the optimal experimental design of the shock tube experiments which are being carried out at the KAUST Clean Combustion Research Center. The unknown parameters are the pre-exponential parameters and the activation energies in the reaction rate expressions.

  3. Microservice scaling optimization based on metric collection in Kubernetes

    Blažej, Aljaž

    2017-01-01

    As web applications become more complex and the number of internet users rises, so does the need to optimize the use of hardware supporting these applications. Optimization can be achieved with microservices, as they offer several advantages compared to the monolithic approach, such as better utilization of resources, scalability and isolation of different parts of an application. Another important part is collecting metrics, since they can be used for analysis and debugging as well as the ba...

  4. Spatio-temporal spike train analysis for large scale networks using the maximum entropy principle and Monte Carlo method

    Nasser, Hassan; Cessac, Bruno; Marre, Olivier

    2013-01-01

    Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a modelling of the recorded activity that reproduces the main statistics of the data is required. In the first part, we present a review on recent results dealing with spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte Carlo sampling which is suited for the fitting of large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential to fit MaxEnt spatio-temporal models to large neural ensembles. (paper)
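
    As a rough illustration of the kind of fitting the authors describe, the sketch below fits a pairwise (Ising-like) maximum entropy model to synthetic binary spike patterns, estimating the model moments by Metropolis sampling and updating the parameters by gradient ascent on the log-likelihood. It is a generic toy example assuming only numpy, not the authors' algorithm, and it ignores the temporal (Markovian) extension discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 5, 2000                      # neurons, time bins

# Synthetic "recorded" spike words (binary patterns); correlations induced by a common drive
drive = rng.random(T) < 0.3
data = (rng.random((T, N)) < np.where(drive[:, None], 0.6, 0.1)).astype(float)

emp_mean = data.mean(axis=0)        # <s_i>_data
emp_corr = data.T @ data / T        # <s_i s_j>_data

def sample_model(h, J, n_sweeps=2000, burn=500):
    """Metropolis sampling of P(s) proportional to exp(h.s + s.J.s/2) over binary words s in {0,1}^N."""
    s = rng.integers(0, 2, N).astype(float)
    m1, m2, count = np.zeros(N), np.zeros((N, N)), 0
    for sweep in range(n_sweeps):
        for i in range(N):
            dE = (1 - 2 * s[i]) * (h[i] + J[i] @ s)   # energy change of flipping s_i (J has zero diagonal)
            if np.log(rng.random()) < dE:
                s[i] = 1 - s[i]
        if sweep >= burn:
            m1 += s
            m2 += np.outer(s, s)
            count += 1
    return m1 / count, m2 / count

# Gradient ascent on the log-likelihood: dL/dh_i = <s_i>_data - <s_i>_model, likewise for J_ij
h = np.zeros(N)
J = np.zeros((N, N))
for it in range(30):
    mod_mean, mod_corr = sample_model(h, J)
    h += 0.5 * (emp_mean - mod_mean)
    dJ = 0.5 * (emp_corr - mod_corr)
    np.fill_diagonal(dJ, 0.0)        # diagonal moments are handled by h
    J += dJ                          # emp_corr and mod_corr are symmetric, so J stays symmetric

print("data means :", np.round(emp_mean, 2))
print("model means:", np.round(mod_mean, 2))
```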

  5. Minimum scale controlled topology optimization and experimental test of a micro thermal actuator

    Heo, S.; Yoon, Gil Ho; Kim, Y.Y.

    2008-01-01

    This paper is concerned with the optimal topology design, fabrication and test of a micro thermal actuator. Because the minimum scale was controlled during the design optimization process, the production yield rate of the actuator was improved considerably; alternatively, the optimization design without scale control resulted in a very low yield rate. Using the minimum scale controlling topology design method developed earlier by the authors, micro thermal actuators were designed and fabricated through a MEMS process. Moreover, both their performance and production yield were experimentally tested. The test showed that control over the minimum length scale in the design process greatly improves the yield rate and reduces the performance deviation.

  6. The Tunneling Method for Global Optimization in Multidimensional Scaling.

    Groenen, Patrick J. F.; Heiser, Willem J.

    1996-01-01

    A tunneling method for global minimization in multidimensional scaling is introduced and adjusted for multidimensional scaling with general Minkowski distances. The method alternates a local search step with a tunneling step in which a different configuration is sought with the same STRESS value. (SLD)
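
    For readers unfamiliar with the objective being tunneled, the sketch below computes raw STRESS for a candidate configuration under general Minkowski distances. It is a generic illustration with made-up dissimilarities, assuming only numpy; it is not the tunneling algorithm itself.

```python
import numpy as np

def minkowski_dist(X, p=2.0):
    """Pairwise Minkowski distances between the rows of configuration X."""
    diff = np.abs(X[:, None, :] - X[None, :, :])
    return (diff ** p).sum(axis=2) ** (1.0 / p)

def raw_stress(X, delta, p=2.0):
    """Raw STRESS: sum of squared differences between dissimilarities and configuration distances."""
    d = minkowski_dist(X, p)
    iu = np.triu_indices_from(delta, k=1)
    return ((delta[iu] - d[iu]) ** 2).sum()

rng = np.random.default_rng(2)
n = 6
delta = rng.random((n, n))
delta = (delta + delta.T) / 2            # toy symmetric dissimilarities
np.fill_diagonal(delta, 0.0)
X = rng.normal(size=(n, 2))              # candidate 2-D configuration
print(raw_stress(X, delta, p=1.0), raw_stress(X, delta, p=2.0))
```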

  7. Extreme value statistics for annual minimum and trough-under-threshold precipitation at different spatio-temporal scales

    Booij, Martijn J.; de Wit, Marcel J.M.

    2010-01-01

    The aim of this paper is to quantify meteorological droughts and assign return periods to these droughts. Moreover, the relation between meteorological and hydrological droughts is explored. This has been done for the River Meuse basin in Western Europe at different spatial and temporal scales to

  8. Large temporal scale and capacity subsurface bulk energy storage with CO2

    Saar, M. O.; Fleming, M. R.; Adams, B. M.; Ogland-Hand, J.; Nelson, E. S.; Randolph, J.; Sioshansi, R.; Kuehn, T. H.; Buscheck, T. A.; Bielicki, J. M.

    2017-12-01

    Decarbonizing energy systems by increasing the penetration of variable renewable energy (VRE) technologies requires efficient and short- to long-term energy storage. Very large amounts of energy can be stored in the subsurface as heat and/or pressure energy in order to provide both short- and long-term (seasonal) storage, depending on the implementation. This energy storage approach can be quite efficient, especially where geothermal energy is naturally added to the system. Here, we present subsurface heat and/or pressure energy storage with supercritical carbon dioxide (CO2) and discuss the system's efficiency, deployment options, as well as its advantages and disadvantages, compared to several other energy storage options. CO2-based subsurface bulk energy storage has the potential to be particularly efficient and large-scale, both temporally (i.e., seasonal) and spatially. The latter refers to the amount of energy that can be stored underground, using CO2, at a geologically conducive location, potentially enabling storing excess power from a substantial portion of the power grid. The implication is that it would be possible to employ centralized energy storage for (a substantial part of) the power grid, where the geology enables CO2-based bulk subsurface energy storage, whereas the VRE technologies (solar, wind) are located on that same power grid, where (solar, wind) conditions are ideal. However, this may require reinforcing the power grid's transmission lines in certain parts of the grid to enable high-load power transmission from/to a few locations.

  9. Small scale temporal variability in the phytoplankton of Independencia Bay, Pisco, Perú

    Noemí Ochoa

    2013-06-01

    Full Text Available Temporal variations at small scale of the coastal marine phytoplankton assemblages were studied. Water samples were collected at a fixed station in Bahia Independencia (Pisco, Peru). Sampling took place in the morning (08:00 h) and afternoon (15:00 h) over a period of 29 days (March 28 to April 25, 1988). Surface temperatures were also recorded, fluctuating from 15.4 °C to 17.2 °C. Diatoms were the principal component of the phytoplankton community and were most closely related to total phytoplankton abundance. Other groups, such as dinoflagellates, coccolithophorids, silicoflagellates and small flagellates, were present but less important. Skeletonema costatum was the dominant species during the first nine days of sampling, after which it was replaced by Thalassionema nitzschioides, which remained dominant until the end of the study. Small variations in species composition but large fluctuations in phytoplankton density were recorded over periods of a few hours. Small increases in temperature influenced the phytoplankton assemblages.

  10. Dual Temporal Scale Convolutional Neural Network for Micro-Expression Recognition.

    Peng, Min; Wang, Chongyang; Chen, Tong; Liu, Guangyuan; Fu, Xiaolan

    2017-01-01

    Facial micro-expression is a brief involuntary facial movement and can reveal the genuine emotion that people try to conceal. Traditional methods of spontaneous micro-expression recognition rely excessively on sophisticated hand-crafted feature design, and the recognition rate is not high enough for practical application. In this paper, we propose a Dual Temporal Scale Convolutional Neural Network (DTSCNN) for spontaneous micro-expression recognition. The DTSCNN is a two-stream network. The different streams of the DTSCNN are used to adapt to the different frame rates of micro-expression video clips. Each stream of the DTSCNN consists of an independent shallow network to avoid the overfitting problem. Meanwhile, we feed the networks with optical-flow sequences to ensure that the shallow networks can further acquire higher-level features. Experimental results on spontaneous micro-expression databases (CASME I/II) showed that our method can achieve a recognition rate almost 10% higher than what some state-of-the-art methods can achieve.

  11. Plant reproductive allocation predicts herbivore dynamics across spatial and temporal scales.

    Miller, Tom E X; Tyre, Andrew J; Louda, Svata M

    2006-11-01

    Life-history theory suggests that iteroparous plants should be flexible in their allocation of resources toward growth and reproduction. Such plasticity could have consequences for herbivores that prefer or specialize on vegetative versus reproductive structures. To test this prediction, we studied the response of the cactus bug (Narnia pallidicornis) to meristem allocation by tree cholla cactus (Opuntia imbricata). We evaluated the explanatory power of demographic models that incorporated variation in cactus relative reproductive effort (RRE; the proportion of meristems allocated toward reproduction). Field data provided strong support for a single model that defined herbivore fecundity as a time-varying, increasing function of host RRE. High-RRE plants were predicted to support larger insect populations, and this effect was strongest late in the season. Independent field data provided strong support for these qualitative predictions and suggested that plant allocation effects extend across temporal and spatial scales. Specifically, late-season insect abundance was positively associated with interannual changes in cactus RRE over 3 years. Spatial variation in insect abundance was correlated with variation in RRE among five cactus populations across New Mexico. We conclude that plant allocation can be a critical component of resource quality for insect herbivores and, thus, an important mechanism underlying variation in herbivore abundance across time and space.

  12. Dual Temporal Scale Convolutional Neural Network for Micro-Expression Recognition

    Peng, Min; Wang, Chongyang; Chen, Tong; Liu, Guangyuan; Fu, Xiaolan

    2017-01-01

    Facial micro-expression is a brief involuntary facial movement and can reveal the genuine emotion that people try to conceal. Traditional methods of spontaneous micro-expression recognition rely excessively on sophisticated hand-crafted feature design, and the recognition rate is not high enough for practical application. In this paper, we propose a Dual Temporal Scale Convolutional Neural Network (DTSCNN) for spontaneous micro-expression recognition. The DTSCNN is a two-stream network. The different streams of the DTSCNN are used to adapt to the different frame rates of micro-expression video clips. Each stream of the DTSCNN consists of an independent shallow network to avoid the overfitting problem. Meanwhile, we feed the networks with optical-flow sequences to ensure that the shallow networks can further acquire higher-level features. Experimental results on spontaneous micro-expression databases (CASME I/II) showed that our method can achieve a recognition rate almost 10% higher than what some state-of-the-art methods can achieve. PMID:29081753

  13. Dual Temporal Scale Convolutional Neural Network for Micro-Expression Recognition

    Min Peng

    2017-10-01

    Full Text Available Facial micro-expression is a brief involuntary facial movement and can reveal the genuine emotion that people try to conceal. Traditional methods of spontaneous micro-expression recognition rely excessively on sophisticated hand-crafted feature design, and the recognition rate is not high enough for practical application. In this paper, we propose a Dual Temporal Scale Convolutional Neural Network (DTSCNN) for spontaneous micro-expression recognition. The DTSCNN is a two-stream network. The different streams of the DTSCNN are used to adapt to the different frame rates of micro-expression video clips. Each stream of the DTSCNN consists of an independent shallow network to avoid the overfitting problem. Meanwhile, we feed the networks with optical-flow sequences to ensure that the shallow networks can further acquire higher-level features. Experimental results on spontaneous micro-expression databases (CASME I/II) showed that our method can achieve a recognition rate almost 10% higher than what some state-of-the-art methods can achieve.

  14. Measuring and modeling the temporal dynamics of nitrogen balance in an experimental-scale paddy field

    Tseng, C.; Lin, Y.

    2013-12-01

    Nitrogen balance involves many mechanisms and plays an important role in maintaining the functioning of natural systems. Fertilizer application in agricultural activity is a common and significant nitrogen input to the environment. Improper fertilizer application on paddy fields can result in large amounts of various types of nitrogen losses. Hence, it is essential to understand and quantify nitrogen dynamics in paddy fields for fertilizer management and pollution control. In this study, we develop a model which considers the major transformation processes of nitrogen (e.g. volatilization, nitrification, denitrification and plant uptake). In addition, we measured different types of nitrogen in plants, soil and water at successive plant growth stages in an experimental-scale paddy field in Taiwan. The measurements include total nitrogen in plants and soil, and ammonium-N (NH4+-N), nitrate-N (NO3--N) and organic nitrogen in water. The measured data were used to calibrate the model parameters and validate the model for nitrogen balance simulation. The results showed that the model can accurately estimate the temporal dynamics of the nitrogen balance in the paddy field during the whole growth stage. This model may be useful for future fertilizer management and pollution control in paddy fields.
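
    A minimal sketch of the kind of lumped nitrogen-balance model described here is given below. The pools, first-order rate constants and fertilizer input are all invented for illustration and are not the processes or parameter values calibrated in the study; scipy is assumed for the integration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical first-order rate constants (1/day)
k_vol, k_nit, k_den, k_up_nh4, k_up_no3, k_min = 0.05, 0.20, 0.08, 0.10, 0.12, 0.02

def n_balance(t, y):
    """y = [NH4-N, NO3-N, organic N] in the ponded water / topsoil (kg N/ha)."""
    nh4, no3, org = y
    fert = 30.0 if 10 <= t <= 11 else 0.0        # one fertilizer split application (kg N/ha/day)
    d_nh4 = fert + k_min * org - (k_vol + k_nit + k_up_nh4) * nh4   # mineralization in, losses/uptake out
    d_no3 = k_nit * nh4 - (k_den + k_up_no3) * no3                  # nitrification in, denitrification/uptake out
    d_org = -k_min * org                                            # slow mineralization of the organic pool
    return [d_nh4, d_no3, d_org]

sol = solve_ivp(n_balance, (0, 120), [5.0, 2.0, 40.0], dense_output=True)
t = np.linspace(0, 120, 7)
print(np.round(sol.sol(t), 2))   # pool sizes every 20 days over the growth season
```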

  15. Temporal scaling law and intrinsic characteristic of laser induced damage on the dielectric coating

    Zhou, Li; Jiang, Youen; Wang, Chao; Wei, Hui; Zhang, Peng; Fan, Wei; Li, Xuechun

    2018-01-01

    High power lasers are essential for optical manipulation and fabrication. When the laser travels through the optics and finally reaches the target, irreversible damage to the dielectric coating is always a risk, without the law and principle of laser induced damage being known. Here, an experimental study of the laser induced damage threshold (LIDT) Fth of dielectric coatings under different pulse durations t is implemented. We observe that the temporal scaling laws for square pulses at 1064 nm are Fth = 9.53·t^0.47 for the high-reflectivity (HR) coating and Fth = 6.43·t^0.28 for the anti-reflectivity (AR) coating. Moreover, the intrinsic LIDT of the HR coating is 62.7 J/cm2, the fluence at which the coating is just 100% damaged when gradually increasing the fluence of a 5 ns duration pulse, which is much higher than the actual LIDT of 18.6 J/cm2. Thus, a more robust and reliable high power laser system will become a reality, even when working at very high fluence, if measures are taken to improve the actual LIDT to a level near the intrinsic value.
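
    The reported power-law fits are easy to evaluate directly. The snippet below simply plugs pulse durations into Fth = 9.53·t^0.47 (HR) and Fth = 6.43·t^0.28 (AR), assuming, as the abstract's 5 ns example suggests, that t is expressed in nanoseconds and Fth in J/cm²; the record does not state the units of the fit explicitly.

```python
# Evaluate the reported temporal scaling laws (assuming t in nanoseconds, Fth in J/cm^2)
def lidt_hr(t_ns: float) -> float:
    return 9.53 * t_ns ** 0.47   # high-reflectivity coating

def lidt_ar(t_ns: float) -> float:
    return 6.43 * t_ns ** 0.28   # anti-reflectivity coating

for t in (1, 3, 5, 10):
    print(f"t = {t:2d} ns  HR ~ {lidt_hr(t):5.1f} J/cm^2   AR ~ {lidt_ar(t):5.1f} J/cm^2")
```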

  16. Scale-up and optimization of biohydrogen production reactor from laboratory-scale to industrial-scale on the basis of computational fluid dynamics simulation

    Wang, Xu; Ding, Jie; Guo, Wan-Qian; Ren, Nan-Qi [State Key Laboratory of Urban Water Resource and Environment, Harbin Institute of Technology, 202 Haihe Road, Nangang District, Harbin, Heilongjiang 150090 (China)

    2010-10-15

    The objective of conducting experiments in a laboratory is to gain data that helps in designing and operating large-scale biological processes. However, the scale-up and design of industrial-scale biohydrogen production reactors is still uncertain. In this paper, an established and proven Eulerian-Eulerian computational fluid dynamics (CFD) model was employed to perform hydrodynamics assessments of an industrial-scale continuous stirred-tank reactor (CSTR) for biohydrogen production. The merits of the laboratory-scale CSTR and industrial-scale CSTR were compared and analyzed on the basis of CFD simulation. The outcomes demonstrated that there are many parameters that need to be optimized in the industrial-scale reactor, such as the velocity field and stagnation zone. According to the results of hydrodynamics evaluation, the structure of industrial-scale CSTR was optimized and the results are positive in terms of advancing the industrialization of biohydrogen production. (author)

  17. Prediction accuracy and stability of regression with optimal scaling transformations

    Kooij, van der Anita J.

    2007-01-01

    The central topic of this thesis is the CATREG approach to nonlinear regression. This approach finds optimal quantifications for categorical variables and/or nonlinear transformations for numerical variables in regression analysis. (CATREG is implemented in SPSS Categories by the author of the

  18. Temporal scales, ecosystem dynamics, stakeholders and the valuation of ecosystems services

    Hein, Lars; Koppen, van C.S.A.K.; Ierland, van Ekko C.; Leidekker, Jakob

    2016-01-01

    Temporal dimensions are highly relevant to the analysis of ecosystem services and their economic value. In this paper, we provide a framework that can be used for analyzing temporal dimensions of ecosystem services, we present a case study including an analysis of the supply of three ecosystem

  19. Optimal Experimental Design for Large-Scale Bayesian Inverse Problems

    Ghattas, Omar

    2014-01-06

    We develop a Bayesian framework for the optimal experimental design of the shock tube experiments which are being carried out at the KAUST Clean Combustion Research Center. The unknown parameters are the pre-exponential parameters and the activation energies in the reaction rate expressions. The control parameters are the initial mixture composition and the temperature. The approach is based on first building a polynomial based surrogate model for the observables relevant to the shock tube experiments. Based on these surrogates, a novel MAP based approach is used to estimate the expected information gain in the proposed experiments, and to select the best experimental set-ups yielding the optimal expected information gains. The validity of the approach is tested using synthetic data generated by sampling the PC surrogate. We finally outline a methodology for validation using actual laboratory experiments, and extending experimental design methodology to the cases where the control parameters are noisy.

  20. Multi-scale temporal and spatial variation in genotypic composition of Cladophora-borne Escherichia coli populations in Lake Michigan.

    Badgley, Brian D; Ferguson, John; Vanden Heuvel, Amy; Kleinheinz, Gregory T; McDermott, Colleen M; Sandrin, Todd R; Kinzelman, Julie; Junion, Emily A; Byappanahalli, Muruleedhara N; Whitman, Richard L; Sadowsky, Michael J

    2011-01-01

    High concentrations of Escherichia coli in mats of Cladophora in the Great Lakes have raised concern over the continued use of this bacterium as an indicator of microbial water quality. Determining the impacts of these environmentally abundant E. coli, however, necessitates a better understanding of their ecology. In this study, the population structure of 4285 Cladophora-borne E. coli isolates, obtained over multiple three day periods from Lake Michigan Cladophora mats in 2007-2009, was examined by using DNA fingerprint analyses. In contrast to previous studies that have been done using isolates from attached Cladophora obtained over large time scales and distances, the extensive sampling done here on free-floating mats over successive days at multiple sites provided a large dataset that allowed for a detailed examination of changes in population structure over a wide range of spatial and temporal scales. While Cladophora-borne E. coli populations were highly diverse and consisted of many unique isolates, multiple clonal groups were also present and accounted for approximately 33% of all isolates examined. Patterns in population structure were also evident. At the broadest scales, E. coli populations showed some temporal clustering when examined by year, but did not show good spatial distinction among sites. E. coli population structure also showed significant patterns at much finer temporal scales. Populations were distinct on an individual mat basis at a given site, and on individual days within a single mat. Results of these studies indicate that Cladophora-borne E. coli populations consist of a mixture of stable, and possibly naturalized, strains that persist during the life of the mat, and more unique, transient strains that can change over rapid time scales. It is clear that further study of microbial processes at fine spatial and temporal scales is needed, and that caution must be taken when interpolating short term microbial dynamics from results obtained

  1. Green smartphone GPUs: Optimizing energy consumption using GPUFreq scaling governors

    Ahmad, Enas M.

    2015-10-19

    Modern smartphones are limited by their short battery life. The advancement of the graphical performance is considered as one of the main reasons behind the massive battery drainage in smartphones. In this paper we present a novel implementation of the GPUFreq Scaling Governors, a Dynamic Voltage and Frequency Scaling (DVFS) model implemented in the Android Linux kernel for dynamically scaling smartphone Graphical Processing Units (GPUs). The GPUFreq governors offer users multiple variations and alternatives in controlling the power consumption and performance of their GPUs. We implemented and evaluated our model on a smartphone GPU and measured the energy performance using an external power monitor. The results show that the energy consumption of smartphone GPUs can be significantly reduced with a minor effect on the GPU performance.

  2. Spatial patterns and temporal dynamics of global scale climate-groundwater interactions

    Cuthbert, M. O.; Gleeson, T. P.; Moosdorf, N.; Schneider, A. C.; Hartmann, J.; Befus, K. M.; Lehner, B.

    2017-12-01

    The interactions between groundwater and climate are important to resolve in both space and time as they influence mass and energy transfers at Earth's land surface. Despite the significance of these processes, little is known about the spatio-temporal distribution of such interactions globally, and many large-scale climate, hydrological and land surface models oversimplify groundwater or exclude it completely. In this study we bring together diverse global geomatic data sets to map spatial patterns in the sensitivity and degree of connectedness between the water table and the land surface, and use the output from a global groundwater model to assess the locations where the lateral import or export of groundwater is significant. We also quantify the groundwater response time, the characteristic time for groundwater systems to respond to a change in boundary conditions, and map its distribution globally to assess the likely dynamics of groundwater's interaction with climate. We find that more than half of the global land surface significantly exports or imports groundwater laterally. Nearly 40% of Earth's landmass has water tables that are strongly coupled to topography with water tables shallow enough to enable a bi-directional exchange of moisture with the climate system. However, only a small proportion (around 12%) of such regions have groundwater response times of 100 years or less and have groundwater fluxes that would significantly respond to rapid environmental changes over this timescale. We last explore fundamental relationships between aridity, groundwater response times and groundwater turnover times. Our results have wide ranging implications for understanding and modelling changes in Earth's water and energy balance and for informing robust future water management and security decisions.

  3. [Spatial and temporal variations of hydrological characteristics at the landscape zone scale in an alpine cold region].

    Yang, Yong-Gang; Hu, Jin-Fei; Xiao, Hong-Lang; Zou, Song-Bing; Yin, Zhen-Liang

    2013-10-01

    There are currently few studies of hydrological characteristics at the landscape zone scale in alpine cold regions. This paper aimed to identify the spatial and temporal variations in the origin and composition of the runoff, and to reveal the hydrological characteristics in each zone, based on the isotopic analysis of glacier, snow, frozen soil, groundwater, etc. The results showed that during the wet season, heavy precipitation and high temperature in the Mafengou River basin caused secondary evaporation, which led to isotope fractionation effects. Therefore, the isotope values remained high. Temperature effects were significant. During the dry season, the temperature was low. Precipitation was in the solid state during the cold season and evaporation was weak. Water vapor came from the evaporation of local water bodies. Therefore, less secondary evaporation and water vapor exchange occurred, leading to negative values of δ18O and δD. The δ18O and δD values of precipitation and the various water bodies exhibited strong seasonal variations. Precipitation exhibited altitude effects: δ18O = -0.0052·H - 8.951 and δD = -0.0185·H - 34.873, where H is the altitude. The other water bodies did not show altitude effects in the wet season and dry season, because the runoff was not only recharged by precipitation but also influenced by the freezing and thawing of the glacier, snow and frozen soil. The mutual transformation of precipitation, melt water, surface water and groundwater led to variations in isotopic composition. Therefore, homogenization and the evaporation effect are the main control factors of isotope variations.
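
    As a quick illustration, the reported altitude-effect regressions can be evaluated directly. The snippet below assumes H is the altitude (the record does not state its unit) and simply applies the two fitted lines; the values printed are illustrative only.

```python
# Reported altitude-effect regressions for precipitation isotopes
# (H assumed to be the altitude; its unit is not stated in the record)
def delta_18o(H: float) -> float:
    return -0.0052 * H - 8.951

def delta_d(H: float) -> float:
    return -0.0185 * H - 34.873

for H in (0, 500, 1000, 2000):
    print(f"H = {H:5}: d18O = {delta_18o(H):7.2f} per mil, dD = {delta_d(H):7.2f} per mil")
```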

  4. Large-Scale Systems Control Design via LMI Optimization

    Rehák, Branislav

    2015-01-01

    Vol. 44, No. 3 (2015), pp. 247-253. ISSN 1392-124X. Institutional support: RVO:67985556. Keywords: combinatorial linear matrix inequalities * large-scale system * decentralized control. Subject RIV: BC - Control Systems Theory. Impact factor: 0.633, year: 2015

  5. Watershed Scale Optimization to Meet Sustainable Cellulosic Energy Crop Demand

    Chaubey, Indrajeet [Purdue Univ., West Lafayette, IN (United States); Cibin, Raj [Purdue Univ., West Lafayette, IN (United States); Bowling, Laura [Purdue Univ., West Lafayette, IN (United States); Brouder, Sylvie [Purdue Univ., West Lafayette, IN (United States); Cherkauer, Keith [Purdue Univ., West Lafayette, IN (United States); Engel, Bernard [Purdue Univ., West Lafayette, IN (United States); Frankenberger, Jane [Purdue Univ., West Lafayette, IN (United States); Goforth, Reuben [Purdue Univ., West Lafayette, IN (United States); Gramig, Benjamin [Purdue Univ., West Lafayette, IN (United States); Volenec, Jeffrey [Purdue Univ., West Lafayette, IN (United States)

    2017-03-24

    The overall goal of this project was to conduct a watershed-scale sustainability assessment of multiple species of energy crops and removal of crop residues within two watersheds (Wildcat Creek, and St. Joseph River) representative of conditions in the Upper Midwest. The sustainability assessment included bioenergy feedstock production impacts on environmental quality, economic costs of production, and ecosystem services.

  6. Dynamic Modeling, Optimization, and Advanced Control for Large Scale Biorefineries

    Prunescu, Remus Mihail

    … with a complex conversion route. Computational fluid dynamics is used to model transport phenomena in large reactors, capturing tank profiles and delays due to plug flows. This work publishes, for the first time, demonstration-scale real data for validation, showing that the model library is suitable...

  7. Optimal control strategy to reduce the temporal wavefront error in AO systems

    Doelman, N.J.; Hinnen, K.J.G.; Stoffelen, F.J.G.; Verhaegen, M.H.

    2004-01-01

    An Adaptive Optics (AO) system for astronomy is analysed from a control point of view. The focus is put on the temporal error. The AO controller is identified as a feedback regulator system, operating in closed-loop with the aim of rejecting wavefront disturbances. Limitations on the performance of

  8. Optimization in the nuclear fuel cycle I: Temporal variation of dose rate

    Pereira, W.S.; Silva, A.X.; Lopes, J.M.; Carmo, A.S.; Fernandes, T.S.; Mello, C.R.; Kelecom, A.

    2017-01-01

    Radioprotection aims to protect man and the environment from the harmful effects of radiation. Radioprotection is based on three fundamental principles: justification, dose limitation and optimization. Optimization is a complementary principle to dose limitation and should be applied in all phases of development, and even in unregulated situations. The aim of this work is to use the exposure rate as a tool to optimize radioprotection. The exposure rate at a nuclear facility was monitored at 15 points for one year, and statistical tools for data analysis were proposed as auxiliary tools for the process of optimizing the dose rates measured at the facility. A total of 9,125 exposure-rate measurements were performed during 2014. The monthly averages were organized by sampling point and by month of the year. No statistically significant difference was observed in the monthly variation of the dose rate. Therefore, this variable cannot be used in the optimization process at this nuclear installation.
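
    The check for a monthly effect described above is, in essence, a comparison of monthly groups of dose-rate readings. A minimal sketch of such a test is shown below, using scipy's one-way ANOVA on invented data; the record does not state which statistical test was actually applied, so this is only one plausible choice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Invented exposure-rate readings (uSv/h) grouped by month: identical mean, random scatter
monthly = [rng.normal(0.25, 0.04, 60) for _ in range(12)]

f_stat, p_value = stats.f_oneway(*monthly)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A large p-value would support the record's conclusion that month-to-month
# variation in the dose rate is not statistically significant at the facility.
```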

  9. Cost optimization of biofuel production – The impact of scale, integration, transport and supply chain configurations

    de Jong, S.A.; Hoefnagels, E.T.A.; Wetterlund, Elisabeth; Pettersson, Karin; Faaij, André; Junginger, H.M.

    2017-01-01

    This study uses a geographically-explicit cost optimization model to analyze the impact of and interrelation between four cost reduction strategies for biofuel production: economies of scale, intermodal transport, integration with existing industries, and distributed supply chain configurations

  10. Optimal sampling designs for large-scale fishery sample surveys in Greece

    G. BAZIGOS

    2007-12-01

    The paper deals with the optimization of the following three large scale sample surveys: the biological sample survey of commercial landings (BSCL), the experimental fishing sample survey (EFSS), and the commercial landings and effort sample survey (CLES).

  11. Off-Policy Reinforcement Learning: Optimal Operational Control for Two-Time-Scale Industrial Processes.

    Li, Jinna; Kiumarsi, Bahare; Chai, Tianyou; Lewis, Frank L; Fan, Jialu

    2017-12-01

    Industrial flow lines are composed of unit processes operating on a fast time scale and performance measurements known as operational indices measured at a slower time scale. This paper presents a model-free optimal solution to a class of two time-scale industrial processes using off-policy reinforcement learning (RL). First, the lower-layer unit process control loop with a fast sampling period and the upper-layer operational index dynamics at a slow time scale are modeled. Second, a general optimal operational control problem is formulated to optimally prescribe the set-points for the unit industrial process. Then, a zero-sum game off-policy RL algorithm is developed to find the optimal set-points by using data measured in real-time. Finally, a simulation experiment is employed for an industrial flotation process to show the effectiveness of the proposed method.

  12. Reliability analysis of large scaled structures by optimization technique

    Ishikawa, N.; Mihara, T.; Iizuka, M.

    1987-01-01

    This paper presents a reliability analysis based on the optimization technique using PNET (Probabilistic Network Evaluation Technique) method for the highly redundant structures having a large number of collapse modes. This approach makes the best use of the merit of the optimization technique in which the idea of PNET method is used. The analytical process involves the minimization of safety index of the representative mode, subjected to satisfaction of the mechanism condition and of the positive external work. The procedure entails the sequential performance of a series of the NLP (Nonlinear Programming) problems, where the correlation condition as the idea of PNET method pertaining to the representative mode is taken as an additional constraint to the next analysis. Upon succeeding iterations, the final analysis is achieved when a collapse probability at the subsequent mode is extremely less than the value at the 1st mode. The approximate collapse probability of the structure is defined as the sum of the collapse probabilities of the representative modes classified by the extent of correlation. Then, in order to confirm the validity of the proposed method, the conventional Monte Carlo simulation is also revised by using the collapse load analysis. Finally, two fairly large structures were analyzed to illustrate the scope and application of the approach. (orig./HP)

  13. Optimized evaporation technique for leachate treatment: Small scale implementation.

    Benyoucef, Fatima; Makan, Abdelhadi; El Ghmari, Abderrahman; Ouatmane, Aziz

    2016-04-01

    This paper introduces an optimized evaporation technique for leachate treatment. For this purpose and in order to study the feasibility and measure the effectiveness of the forced evaporation, three cuboidal steel tubs were designed and implemented. The first control-tub was installed at the ground level to monitor natural evaporation. Similarly, the second and the third tub, models under investigation, were installed respectively at the ground level (equipped-tub 1) and out of the ground level (equipped-tub 2), and provided with special equipment to accelerate the evaporation process. The obtained results showed that the evaporation rate at the equipped-tubs was much accelerated with respect to the control-tub. It was accelerated five times in the winter period, where the evaporation rate was increased from a value of 0.37 mm/day to reach a value of 1.50 mm/day. In the summer period, the evaporation rate was accelerated more than three times and it increased from a value of 3.06 mm/day to reach a value of 10.25 mm/day. Overall, the optimized evaporation technique can be applied effectively either under electric or solar energy supply, and will accelerate the evaporation rate from three to five times whatever the season temperature. Copyright © 2016. Published by Elsevier Ltd.

  14. Spatial and temporal analysis of drought variability at several time scales in Syria during 1961-2012

    Mathbout, Shifa; Lopez-Bustins, Joan A.; Martin-Vide, Javier; Bech, Joan; Rodrigo, Fernando S.

    2018-02-01

    This paper analyses the observed spatiotemporal characteristics of the drought phenomenon in Syria using the Standardised Precipitation Index (SPI) and the Standardised Precipitation Evapotranspiration Index (SPEI). Temporal variability of drought is calculated for various time scales (3, 6, 9, 12, and 24 months) for 20 weather stations over the 1961-2012 period. The spatial patterns of drought were identified by applying a Principal Component Analysis (PCA) to the SPI and SPEI values at different time scales. The results revealed three heterogeneous and spatially well-defined regions with different temporal evolution of droughts: 1) Northeastern (inland desert); 2) Southern (mountainous landscape); 3) Northwestern (Mediterranean coast). The evolutionary characteristics of drought during 1961-2012 were analysed, including the spatial and temporal variability of SPI and SPEI, the frequency distribution, and the drought duration. The results of the non-parametric Mann-Kendall test applied to the SPI and SPEI series indicate prevailing significant negative trends (drought) at all stations. Both drought indices were correlated on both spatial and temporal scales and are highly comparable, especially over 12- and 24-month accumulation periods. We concluded that the temporal and spatial characteristics of the SPI and SPEI can be used for developing a drought intensity-areal extent-frequency curve that assesses the variability of regional droughts in Syria. The analysis of both indices suggests that all three regions had a severe drought in the 1990s, which had never been observed before in the country. Furthermore, the 2007-2010 drought was the driest period in the instrumental record, happening just before the onset of the recent conflict in Syria.
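
    The SPI values analysed in this record follow a standard construction: accumulate precipitation over the chosen time scale, fit a distribution (commonly a gamma), and transform the cumulative probabilities to standard normal deviates. The sketch below is a simplified illustration of that recipe on synthetic monthly precipitation; it assumes scipy and deliberately ignores the zero-precipitation correction and the month-by-month fitting that operational SPI implementations use.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
precip = rng.gamma(shape=2.0, scale=30.0, size=52 * 12)   # synthetic monthly precipitation (mm)

def spi(monthly_precip, scale=3):
    """Simplified SPI: accumulate over `scale` months, gamma-fit, map to standard normal."""
    kernel = np.ones(scale)
    acc = np.convolve(monthly_precip, kernel, mode="valid")  # running k-month totals
    a, loc, b = stats.gamma.fit(acc, floc=0)                 # fit a gamma with location fixed at 0
    cdf = stats.gamma.cdf(acc, a, loc=loc, scale=b)
    return stats.norm.ppf(cdf)                               # standardized index

spi3 = spi(precip, scale=3)
spi12 = spi(precip, scale=12)
print(f"SPI-3 : mean {spi3.mean():.2f}, share of months below -1.0: {(spi3 < -1).mean():.2%}")
print(f"SPI-12: mean {spi12.mean():.2f}, share of months below -1.0: {(spi12 < -1).mean():.2%}")
```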

  15. Optimal control for power-off landing of a small-scale helicopter : a pseudospectral approach

    Taamallah, S.; Bombois, X.; Hof, Van den P.M.J.

    2012-01-01

    We derive optimal power-off landing trajectories, for the case of a small-scale helicopter UAV. These open-loop optimal trajectories represent the solution to the minimization of a cost objective, given system dynamics, controls and states equality and inequality constraints. The plant dynamics

  16. Optimization of large scale food production using Lean Manufacturing principles

    Engelund, Eva Høy; Friis, Alan; Breum, Gitte

    2009-01-01

    This paper discusses how the production principles of Lean Manufacturing (Lean) can be applied in large-scale meal production. Lean principles are briefly presented, followed by a field study of how a kitchen at a Danish hospital has implemented Lean in its daily production. In the kitchen … not be negatively affected by the rationalisation of production procedures. The field study shows that Lean principles can be applied in meal production and can result in increased production efficiency and systematic improvement of product quality without negative effects on the working environment. The results show that Lean can be applied and used to manage the production of meals in the kitchen....

  17. Design Optimization of Radionuclide Nano-Scale Batteries

    Schoenfeld, D.W.; Tulenko, J.S.; Wang, J.; Smith, B.

    2004-01-01

    Radioisotopes have been used as power sources in heart pacemakers and space applications dating back to the 1950s. Two key properties of radioisotope power sources are high energy density and long half-life compared to chemical batteries. The tritium battery used in heart pacemakers exceeds 500 mW·hr and is being evaluated by the University of Florida for feasibility as a MEMS (MicroElectroMechanical Systems) power source. Conversion of radioisotope sources into electrical power within the constraints of nano-scale dimensions requires cutting-edge technologies and novel approaches. Some advances evolving in the III-V and II-VI semiconductor families have led to a broader consideration of radioisotopes rather free of radiation damage limitations. Their properties can lead to novel battery configurations designed to convert externally located emissions from a highly radioactive environment. This paper presents results for the analytical, computationally assisted design and modeling of semiconductor prototype nano-scale radioisotope nuclear batteries using the MCNP and EGS programs. The analysis evaluated proposed designs and was used to guide the selection of appropriate geometries, material properties, and specific activities to attain the power requirements of the MEMS batteries. Plans utilizing high specific activity radioisotopes were assessed in the investigation of designs employing multiple conversion cells and graded junctions with varying band gap properties. Voltage increases, sought through the serial combination of open-circuit voltages (VOC), are proposed to overcome some of the limitations of a low power density. The power density is directly dependent on the total active areas

  18. Supra-threshold scaling, temporal summation, and after-sensation: relationships to each other and anxiety/fear

    Robinson, Michael E; Bialosky, Joel E; Bishop, Mark D; Price, Donald D; George, Steven Z

    2010-01-01

    This study investigated the relationship of thermal pain testing from three types of quantitative sensory testing (ie, supra-threshold stimulus response scaling, temporal summation, and after-sensation) at three anatomical sites (ie, upper extremity, lower extremity, and trunk). Pain ratings from these procedures were also compared with common psychological measures previously shown to be related to experimental pain responses and consistent with fear-avoidance models of pain. Results indicat...

  19. The two-parametric scaling and new temporal asymptotic of survival probability of diffusing particle in the medium with traps.

    Arkhincheev, V E

    2017-03-01

    The new asymptotic behavior of the survival probability of particles in a medium with absorbing traps in an electric field has been established in two ways: by using the scaling approach and by the direct solution of the diffusion equation in the field. It has been shown that at long times, this drift mechanism leads to a new temporal behavior of the survival probability of particles in a medium with absorbing traps.

  20. Spatio-temporal variability of soil water content on the local scale in a Mediterranean mountain area (Vallcebre, North Eastern Spain). How different spatio-temporal scales reflect mean soil water content

    Molina, Antonio J.; Latron, Jérôme; Rubio, Carles M.; Gallart, Francesc; Llorens, Pilar

    2014-08-01

    As a result of complex human-land interactions and topographic variability, many Mediterranean mountain catchments are covered by agricultural terraces that have locally modified the soil water content dynamic. Understanding these local-scale dynamics helps us better grasp how hydrology behaves at the catchment scale. Thus, this study examined soil water content variability in the upper 30 cm of the soil on an abandoned Mediterranean terrace in north-east Spain. Using a dataset of high spatial (regular grid of 128 automatic TDR probes at 2.5 m intervals) and temporal (20-min time step) resolution, gathered throughout an 84-day period, the spatio-temporal variability of soil water content at the local scale and the way that different spatio-temporal scales reflect the mean soil water content were investigated. Soil water content spatial variability and its relation to wetness conditions were examined, along with the spatial structuring of the soil water content within the terrace. Then, the ability of single probes and of different combinations of spatial measurements (transects and grids) to provide a good estimate of mean soil water content on the terrace scale was explored by means of temporal stability analyses. Finally, the effect of monitoring frequency on the magnitude of detectable daily soil water content variations was studied. Results showed that soil water content spatial variability followed a bimodal pattern of increasing absolute variability with increasing soil water content. In addition, a linear trend of decreasing soil water content as the distance from the inner part of the terrace increased was identified. Once this trend was subtracted, the resulting semi-variograms suggested that the spatial resolution examined was too high to appreciate spatial structuring in the data. Thus, the spatial pattern should be considered as random. Of all the spatial designs tested, the 10 × 10 m mesh grid (9 probes) was considered the most suitable option for a good
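
    The temporal stability analysis mentioned here is typically based on the mean relative difference between each probe and the field average over time. The sketch below illustrates that calculation on synthetic data and ranks probes by how well they track the terrace mean; it is a generic illustration of the idea, not the authors' exact procedure, and all data are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n_probes, n_times = 16, 200
field = 0.25 + 0.05 * np.sin(np.linspace(0, 6 * np.pi, n_times))        # synthetic terrace-mean dynamics
offsets = rng.normal(0, 0.03, n_probes)                                  # persistent wet/dry bias per probe
theta = field[None, :] + offsets[:, None] + rng.normal(0, 0.01, (n_probes, n_times))

# Relative difference of each probe from the spatial mean at each time step
spatial_mean = theta.mean(axis=0)
rel_diff = (theta - spatial_mean) / spatial_mean

mrd = rel_diff.mean(axis=1)          # mean relative difference per probe
sdrd = rel_diff.std(axis=1)          # temporal standard deviation of the relative difference

# Probes with MRD near zero and small SDRD best represent the terrace-mean soil water content
rank = np.argsort(np.abs(mrd) + sdrd)
print("best representative probes:", rank[:3])
print("their MRD:", np.round(mrd[rank[:3]], 3))
```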

  1. A common and optimized age scale for Antarctic ice cores

    Parrenin, F.; Veres, D.; Landais, A.; Bazin, L.; Lemieux-Dudon, B.; Toye Mahamadou Kele, H.; Wolff, E.; Martinerie, P.

    2012-04-01

    Dating ice cores is a complex problem because 1) there is an age shift between the gas bubbles and the surrounding ice; 2) there are many different ice cores which can be synchronized with various proxies; and 3) there are many methods to date the ice and the gas bubbles, each with advantages and drawbacks. These methods fall into the following categories: 1) ice flow modelling (for the ice) and firn densification modelling (for the gas bubbles); 2) comparison of ice core proxies with insolation variations (so-called orbital tuning methods); 3) comparison of ice core proxies with other well-dated archives; 4) identification of well-dated horizons, such as tephra layers or geomagnetic anomalies. Recently, a new dating tool has been developed (DATICE, Lemieux-Dudon et al., 2010) that takes all the different dating information into account and produces a common and optimal chronology for ice cores with estimated confidence intervals. In this talk we will review the different dating information for Antarctic ice cores and show how the DATICE tool can be applied.

  2. Optimization of large-scale fabrication of dielectric elastomer transducers

    Hassouneh, Suzan Sager

    Dielectric elastomers (DEs) have gained substantial ground in many different applications, such as wave energy harvesting, valves and loudspeakers. For DE technology to be commercially viable, it is necessary that any large-scale production operation is nondestructive, efficient and cheap. Danfoss … -strength laminates to perform as monolithic elements. For the front-to-back and front-to-front configurations, conductive elastomers were utilised. One approach involved adding the cheap and conductive filler exfoliated graphite (EG) to a PDMS matrix to increase dielectric permittivity. The results showed that even … as conductive adhesives were rejected. Dielectric properties below the percolation threshold were subsequently investigated in order to conclude the study. In order to avoid destroying the network structure, carbon nanotubes (CNTs) were used as fillers during the preparation of the conductive elastomers...

  3. Optimal Selection of AC Cables for Large Scale Offshore Wind Farms

    Hou, Peng; Hu, Weihao; Chen, Zhe

    2014-01-01

    The investment in large scale offshore wind farms is high, and the electrical system makes a significant contribution to the total cost. As one of the key components, the connection cables strongly affect the initial investment. The development of cable manufacturing provides a vast … and systematic way for the optimal selection of cables in large scale offshore wind farms....

  4. How to develop a customer satisfaction scale with optimal construct validity

    Terpstra, M.J.; Kuijlen, A.A.A.; Sijtsma, K.

    2014-01-01

    In this article, we investigate how to construct a customer satisfaction (CS) scale which yields optimally valid measurements of the construct of interest. For this purpose we compare three alternative methodologies for scale development and construct validation. Furthermore, we discuss a

  5. SWANS: A Prototypic SCALE Criticality Sequence for Automated Optimization Using the SWAN Methodology

    Greenspan, E.

    2001-01-01

    SWANS is a new prototypic analysis sequence that provides an intelligent, semi-automatic search for the maximum keff of a given amount of specified fissile material, or of the minimum critical mass. It combines the optimization strategy of the SWAN code with the composition-dependent resonance self-shielded cross sections of the SCALE package. For a given system composition arrived at during the iterative optimization process, the value of keff is as accurate and reliable as obtained using the CSAS1X Sequence of SCALE-4.4. This report describes how SWAN is integrated within the SCALE system to form the new prototypic optimization sequence, describes the optimization procedure, provides a user guide for SWANS, and illustrates its application to five different types of problems. In addition, the report illustrates that resonance self-shielding might have a significant effect on the maximum keff value a given fissile material mass can have

  6. Optimal Information Extraction of Laser Scanning Dataset by Scale-Adaptive Reduction

    Zang, Y.; Yang, B.

    2018-04-01

    3D laser technology is widely used to collect the surface information of objects. For various applications, we need to extract a point cloud of good perceptual quality from the scanned points. To solve this problem, most existing methods extract important points based on a fixed scale. However, the geometric features of a 3D object come from various geometric scales. We propose a multi-scale construction method based on radial basis functions. For each scale, important points are extracted from the point cloud based on their importance. We apply the perception metric Just-Noticeable-Difference to measure the degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments are undertaken to evaluate the effectiveness of the proposed method, suggesting a reliable solution for optimal information extraction of objects.

  7. A reduced scale two loop PWR core designed with particle swarm optimization technique

    Lima Junior, Carlos A. Souza; Pereira, Claudio M.N.A; Lapa, Celso M.F.; Cunha, Joao J.; Alvim, Antonio C.M.

    2007-01-01

    Reduced scale experiments are often employed in engineering projects because they are much cheaper than real scale testing. Unfortunately, designing a reduced scale thermal-hydraulic circuit or piece of equipment with the capability of reproducing, both accurately and simultaneously, all physical phenomena that occur at real scale and at operating conditions is a difficult task. To solve this problem, advanced optimization techniques, such as Genetic Algorithms, have been applied. Following this research line, we have performed investigations, using the Particle Swarm Optimization (PSO) technique, to design a reduced scale two loop Pressurized Water Reactor (PWR) core, considering 100% of nominal power and non-accidental operating conditions. The obtained results show that the proposed methodology is a promising approach for forced flow reduced scale experiments. (author)
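
    Particle Swarm Optimization itself is straightforward to sketch. The toy example below minimizes a generic objective, with the Rosenbrock function as a stand-in since the actual reduced-scale design objective and constraints are not given in the record, using the standard inertia/cognitive/social velocity update; only numpy is assumed.

```python
import numpy as np

def rosenbrock(x):
    """Stand-in objective; the real application would score a candidate reduced-scale design."""
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def pso(obj, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))                   # particle positions
    v = np.zeros_like(x)                                          # particle velocities
    pbest, pbest_val = x.copy(), np.array([obj(p) for p in x])    # personal bests
    gbest = pbest[pbest_val.argmin()].copy()                      # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([obj(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_f = pso(rosenbrock, (np.array([-2.0, -2.0]), np.array([2.0, 2.0])))
print(best_x, best_f)   # should approach (1, 1) with an objective value near 0
```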

  8. OPTIMAL INFORMATION EXTRACTION OF LASER SCANNING DATASET BY SCALE-ADAPTIVE REDUCTION

    Y. Zang

    2018-04-01

    3D laser scanning is widely used to collect the surface information of objects. For many applications, we need to extract a point cloud of good perceptual quality from the scanned points. Most existing methods extract important points based on a fixed scale; however, the geometric features of a 3D object come from various geometric scales. We propose a multi-scale construction method based on radial basis functions. For each scale, important points are extracted from the point cloud based on their importance. We apply the perception metric Just-Noticeable-Difference to measure the degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments are undertaken to evaluate the effectiveness of the proposed method, suggesting a reliable solution for optimal information extraction from objects.

  9. Optimization of Classification Strategies of Acetowhite Temporal Patterns towards Improving Diagnostic Performance of Colposcopy

    Karina Gutiérrez-Fragoso

    2017-01-01

    Efforts have been made to improve the diagnostic performance of colposcopy, trying to help better diagnose cervical cancer, particularly in developing countries. However, improvements are still necessary in a number of areas, such as the time it takes to process the full digital image of the cervix, the performance of the computing systems used to identify different kinds of tissue, and biopsy sampling. In this paper, we explore three well-known automatic classification methods (k-Nearest Neighbors, Naïve Bayes, and C4.5), in addition to different data models that take full advantage of this information and improve the diagnostic performance of colposcopy based on acetowhite temporal patterns. Based on the ROC and PRC area scores, k-Nearest Neighbors with the discrete PLA representation performed better than the other methods. The values of sensitivity, specificity, and accuracy reached using this method were 60% (95% CI 50–70), 79% (95% CI 71–86), and 70% (95% CI 60–80), respectively. The acetowhitening phenomenon is not exclusive to high-grade lesions, and we have found acetowhite temporal patterns of epithelial changes that are not precancerous lesions but that are similar to positive ones. These findings need to be considered when developing more robust computing systems in the future.
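
    As a rough illustration of the kind of classifier comparison the record describes, the sketch below evaluates k-Nearest Neighbors, Naïve Bayes, and a decision tree (used here only as a stand-in for C4.5) with cross-validated ROC-AUC and PRC-area scores in scikit-learn; the synthetic feature matrix is an assumption replacing the real acetowhite temporal-pattern data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier  # stand-in for C4.5

# Synthetic stand-in for acetowhite temporal-pattern features (binary outcome).
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           weights=[0.6, 0.4], random_state=0)

classifiers = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "NaiveBayes": GaussianNB(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, clf in classifiers.items():
    auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
    ap = cross_val_score(clf, X, y, cv=cv, scoring="average_precision")  # PRC-area proxy
    print(f"{name}: ROC-AUC {auc.mean():.3f}  PRC-area {ap.mean():.3f}")
```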

  10. Creating an Optimal 3D Printed Model for Temporal Bone Dissection Training.

    Takahashi, Kuniyuki; Morita, Yuka; Ohshima, Shinsuke; Izumi, Shuji; Kubota, Yamato; Yamamoto, Yutaka; Takahashi, Sugata; Horii, Arata

    2017-07-01

    Making a 3-dimensional (3D) temporal bone model is simple using a plaster powder bed and an inkjet printer. However, it is difficult to reproduce air-containing spaces and precise middle ear structures. The objective of this study was to overcome these problems and create a temporal bone model that would be useful both as a training tool and for preoperative simulation. Drainage holes were made to remove excess materials from air-containing spaces, ossicle ligaments were manually changed to bony structures, and small and/or soft tissue structures were colored differently while designing the 3D models. The outcomes were evaluated by 3 procedures: macroscopic and endoscopic inspection of the model, comparison of computed tomography (CT) images of the model to the original CT, and assessment of tactile sensation and reproducibility by 20 surgeons performing surgery on the model. Macroscopic and endoscopic inspection, CT images, and assessment by surgeons were in agreement in terms of reproducibility of model structures. Most structures could be reproduced, but the stapes, tympanic sinus, and mastoid air cells were unsatisfactory. Perioperative tactile sensation of the model was excellent. Although this model still does not embody perfect reproducibility, it proved sufficiently practical for use in surgical training.

  11. MODELLING TEMPORAL SCHEDULE OF URBAN TRAINS USING AGENT-BASED SIMULATION AND NSGA2-BASED MULTIOBJECTIVE OPTIMIZATION APPROACHES

    M. Sahelgozin

    2015-12-01

    Increasing distances between locations of residence and services lead to a large number of daily commutes in urban areas. Developing subway systems has been taken into consideration by transportation managers as a response to this huge travel demand. In the development of subway infrastructure, producing a temporal schedule for trains is an important task, because an appropriately designed timetable decreases total passenger travel time, total operating cost, and the energy consumption of trains. Since these variables are not positively correlated, subway scheduling is considered a multi-criteria optimization problem, and proposing a proper solution for subway scheduling has always been a controversial issue. On the other hand, research on a phenomenon requires a summarized representation of the real world, known as a model. In this study, we attempt to model the temporal schedule of urban trains in a form that can be applied to Multi-Criteria Subway Schedule Optimization (MCSSO) problems. At first, a conceptual framework is presented for MCSSO. Then, an agent-based simulation environment is implemented to perform the sensitivity analysis (SA) used to extract the interrelations between the framework components. These interrelations are then taken into account in constructing the proposed model. In order to evaluate the performance of the model on MCSSO problems, Tehran subway line no. 1 is considered as the case study. Results show that the model was able to generate an acceptable distribution of Pareto-optimal solutions that are applicable in real situations when solving an MCSSO problem is the goal. The accuracy of the model in representing the operation of subway systems was also significant.
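
    The NSGA-II machinery itself is not given in the record; the sketch below only illustrates the Pareto-dominance test and non-dominated front extraction that underlie such multi-objective timetable optimization, with randomly generated candidate schedules scored on three assumed objectives (travel time, operating cost, energy use).

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all <=, at least one <)."""
    return np.all(a <= b) and np.any(a < b)

def pareto_front(objectives):
    """Return indices of non-dominated solutions (minimization of all objectives)."""
    objectives = np.asarray(objectives, dtype=float)
    front = []
    for i, a in enumerate(objectives):
        if not any(dominates(objectives[j], a)
                   for j in range(len(objectives)) if j != i):
            front.append(i)
    return front

# Candidate timetables scored on (total travel time, operating cost, energy use).
rng = np.random.default_rng(1)
scores = rng.uniform(0, 1, size=(50, 3))
print("Pareto-optimal candidates:", pareto_front(scores))
```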

  12. Joint sensor location/power rating optimization for temporally-correlated source estimation

    Bushnaq, Osama M.; Chaaban, Anas; Al-Naffouri, Tareq Y.

    2017-01-01

    via wireless AWGN channel. In addition to selecting the optimal sensing location, the sensor type to be placed in these locations is selected from a pool of T sensor types such that different sensor types have different power ratings and costs

  13. Robustness of movement models: can models bridge the gap between temporal scales of data sets and behavioural processes?

    Schlägel, Ulrike E; Lewis, Mark A

    2016-12-01

    Discrete-time random walks and their extensions are common tools for analyzing animal movement data. In these analyses, the resolution of temporal discretization is a critical feature. Ideally, a model both mirrors the relevant temporal scale of the biological process of interest and matches the data sampling rate. Challenges arise when the resolution of the data is too coarse due to technological constraints, or when we wish to extrapolate results or compare results obtained from data with different resolutions. Drawing loosely on the concept of robustness in statistics, we propose a rigorous mathematical framework for studying movement models' robustness against changes in temporal resolution. In this framework, we define varying levels of robustness as formal model properties, focusing on random walk models with a spatially explicit component. With the new framework, we can investigate whether models can validly be applied to data across varying temporal resolutions and how we can account for these different resolutions in statistical inference results. We apply the new framework to movement-based resource selection models, demonstrating both analytical and numerical calculations, as well as a Monte Carlo simulation approach. While exact robustness is rare, the concept of approximate robustness provides a promising new direction for analyzing movement models.

  14. Temporal integration of loudness measured using categorical loudness scaling and matching procedures

    Valente, Daniel L.; Joshi, Suyash Narendra; Jesteadt, Walt

    2011-01-01

    integration of loudness and previously reported nonmonotonic behavior observed at mid-sound pressure level levels is replicated with this procedure. Stimuli that are assigned to the same category are effectively matched in loudness, allowing the measurement of temporal integration with CLS without curve...

  15. Monitoring scale-specific and temporal variation in electromagnetic conductivity images

    In the semi-arid and arid landscapes of southwest USA, irrigation sustains agricultural activity; however, there are increasing demands on water resources. As such spatial temporal variation of soil moisture needs to be monitored. One way to do this is to use electromagnetic (EM) induction instrumen...

  16. New scale-down methodology from commercial to lab scale to optimize plant-derived soft gel capsule formulations on a commercial scale.

    Oishi, Sana; Kimura, Shin-Ichiro; Noguchi, Shuji; Kondo, Mio; Kondo, Yosuke; Shimokawa, Yoshiyuki; Iwao, Yasunori; Itai, Shigeru

    2018-01-15

    A new scale-down methodology from commercial rotary die scale to laboratory scale was developed to optimize a plant-derived soft gel capsule formulation and eventually manufacture superior soft gel capsules on a commercial scale, in order to reduce the time and cost for formulation development. Animal-derived and plant-derived soft gel film sheets were prepared using an applicator on a laboratory scale and their physicochemical properties, such as tensile strength, Young's modulus, and adhesive strength, were evaluated. The tensile strength of the animal-derived and plant-derived soft gel film sheets was 11.7 MPa and 4.41 MPa, respectively. The Young's modulus of the animal-derived and plant-derived soft gel film sheets was 169 MPa and 17.8 MPa, respectively, and both sheets showed a similar adhesion strength of approximately 4.5-10 MPa. Using a D-optimal mixture design, plant-derived soft gel film sheets were prepared and optimized by varying their composition, including variations in the mass of κ-carrageenan, ι-carrageenan, oxidized starch and heat-treated starch. The physicochemical properties of the sheets were evaluated to determine the optimal formulation. Finally, plant-derived soft gel capsules were manufactured using the rotary die method and the prepared soft gel capsules showed equivalent or superior physical properties compared with pre-existing soft gel capsules. Therefore, we successfully developed a new scale-down methodology to optimize the formulation of plant-derived soft gel capsules on a commercial scale. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Marine ecosystem acoustics (MEA): Quantifying processes in the sea at the spatio-temporal scales on which they occur

    Godøl, Olav Rune

    2014-07-22

    Sustainable management of fisheries resources requires quantitative knowledge and understanding of species distribution, abundance, and productivity-determining processes. Conventional sampling by physical capture is inconsistent with the spatial and temporal scales on which many of these processes occur. In contrast, acoustic observations can be obtained on spatial scales from centimetres to ocean basins, and temporal scales from seconds to seasons. The concept of marine ecosystem acoustics (MEA) is founded on the basic capability of acoustics to detect, classify, and quantify organisms and biological and physical heterogeneities in the water column. Acoustic observations integrate operational technologies, platforms, and models and can generate information by taxon at the relevant scales. The gaps between single-species assessment and ecosystem-based management, as well as between fisheries oceanography and ecology, are thereby bridged. The MEA concept combines state-of-the-art acoustic technology with advanced operational capabilities and tailored modelling, integrated into a flexible tool for ecosystem research and monitoring. Case studies are presented to illustrate application of the MEA concept in quantification of biophysical coupling, patchiness of organisms, predator-prey interactions, and fish stock recruitment processes. Widespread implementation of MEA will have a large impact on marine monitoring and assessment practices, and it is to be hoped that it will also promote and facilitate interaction among disciplines within the marine sciences.

  18. A novel way to detect correlations on multi-time scales, with temporal evolution and for multi-variables

    Yuan, Naiming; Xoplaki, Elena; Zhu, Congwen; Luterbacher, Juerg

    2016-06-01

    In this paper, two new methods, Temporal evolution of Detrended Cross-Correlation Analysis (TDCCA) and Temporal evolution of Detrended Partial-Cross-Correlation Analysis (TDPCCA), are proposed by generalizing DCCA and DPCCA. Applying TDCCA/TDPCCA, it is possible to study correlations on multiple time scales and over different periods. To illustrate their properties, we used two climatological examples: i) Global Sea Level (GSL) versus North Atlantic Oscillation (NAO); and ii) Summer Rainfall over the Yangtze River (SRYR) versus the previous winter's Pacific Decadal Oscillation (PDO). We find significant correlations between GSL and NAO on time scales of 60 to 140 years, but the correlations are non-significant between 1865-1875. As for SRYR and PDO, significant correlations are found on time scales of 30 to 35 years, but the correlations are more pronounced during the most recent 30 years. By combining TDCCA/TDPCCA and DCCA/DPCCA, we propose a new correlation-detection system which, compared to traditional methods, can objectively show how two time series are related (on which time scale, and during which time period). This is important not only for the diagnosis of complex systems, but also for better design of prediction models. The new methods therefore offer new opportunities for applications in the natural sciences, such as ecology, economy, sociology and other research fields.
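
    TDCCA/TDPCCA build on detrended cross-correlation analysis; as a point of reference, the sketch below implements plain DCCA (the cross-correlation coefficient at a single time scale) on synthetic series, which is only the building block the record generalizes. The linear detrending order, window handling, and test data are conventional illustrative choices, not taken from the paper.

```python
import numpy as np

def dcca_coefficient(x, y, scale):
    """Detrended cross-correlation coefficient rho_DCCA at one time scale."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    X = np.cumsum(x - x.mean())          # integrated profiles
    Y = np.cumsum(y - y.mean())
    n_win = len(X) // scale
    t = np.arange(scale)
    f_xy, f_xx, f_yy = [], [], []
    for k in range(n_win):
        xs = X[k * scale:(k + 1) * scale]
        ys = Y[k * scale:(k + 1) * scale]
        xd = xs - np.polyval(np.polyfit(t, xs, 1), t)   # remove linear trend
        yd = ys - np.polyval(np.polyfit(t, ys, 1), t)
        f_xy.append(np.mean(xd * yd))
        f_xx.append(np.mean(xd ** 2))
        f_yy.append(np.mean(yd ** 2))
    return np.mean(f_xy) / np.sqrt(np.mean(f_xx) * np.mean(f_yy))

# Two correlated synthetic series standing in for, e.g., GSL and NAO records.
rng = np.random.default_rng(0)
common = np.cumsum(rng.normal(size=2000))
x = common + rng.normal(scale=5, size=2000)
y = common + rng.normal(scale=5, size=2000)
print([round(dcca_coefficient(x, y, s), 2) for s in (16, 64, 256)])
```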

  19. Scale economies and optimal size in the Swiss gas distribution sector

    Alaeifar, Mozhgan; Farsi, Mehdi; Filippini, Massimo

    2014-01-01

    This paper studies the cost structure of Swiss gas distribution utilities. Several econometric models are applied to a panel of 26 companies over 1996–2000. Our main objective is to estimate the optimal size and scale economies of the industry and to study their possible variation with respect to network characteristics. The results indicate the presence of unexploited scale economies. However, very large companies in the sample and companies with a disproportionate mixture of output and density present an exception. Furthermore, the estimated optimal size for the majority of companies in the sample is far greater than their actual size, suggesting remarkable efficiency gains through reorganization of the industry. The results also highlight the effect of customer density on optimal size: networks with higher density or greater complexity have a lower optimal size. Highlights: presence of unexploited scale economies for small and medium-sized companies; scale economies vary considerably with customer density; higher density or greater complexity is associated with lower optimal size; optimal size varies across companies through unobserved heterogeneity; firms with low density can gain more from expanding firm size.

  20. Spatio-temporal model based optimization framework to design future hydrogen infrastructure networks

    Konda, N.V.S.; Shah, N.; Brandon, N.P.

    2009-01-01

    A mixed integer programming (MIP) spatio-temporal model was used to design hydrogen infrastructure networks for the Netherlands. The detailed economic analysis was conducted using a multi-echelon model of the entire hydrogen supply chain, including feed, production, storage, and transmission-distribution systems. The study considered various near-future and commercially available technologies. A multi-period model was used to design evolutionary hydrogen supply networks in coherence with growing demand. A scenario-based analysis was conducted in order to account for uncertainties in future demand. The study showed that competitive hydrogen networks can be designed for any conceivable scenario. It was concluded that the multi-period model presented significant advantages in relation to decision-making over long time-horizons
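
    The multi-echelon, multi-period MIP described in the record is far richer than anything that fits here; the sketch below is a toy single-echelon PuLP model with binary plant-opening decisions and per-period demand satisfaction, meant only to show the flavour of such a spatio-temporal formulation. Regions, periods, costs, capacities, and demands are invented placeholders.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

regions = ["R1", "R2", "R3"]
periods = [2020, 2030, 2040]
demand = {("R1", 2020): 5, ("R1", 2030): 10, ("R1", 2040): 20,
          ("R2", 2020): 3, ("R2", 2030): 8,  ("R2", 2040): 15,
          ("R3", 2020): 2, ("R3", 2030): 6,  ("R3", 2040): 12}
capacity, capex, opex_per_unit = 25, 100.0, 2.0

prob = LpProblem("hydrogen_network_sketch", LpMinimize)
build = {(r, t): LpVariable(f"build_{r}_{t}", cat=LpBinary)
         for r in regions for t in periods}
prod = {(r, t): LpVariable(f"prod_{r}_{t}", lowBound=0)
        for r in regions for t in periods}

# Objective: capital cost of opened plants plus production cost.
prob += lpSum(capex * build[r, t] + opex_per_unit * prod[r, t]
              for r in regions for t in periods)

for t in periods:
    # Total production must meet total demand in each period (no transport here).
    prob += lpSum(prod[r, t] for r in regions) >= sum(demand[r, t] for r in regions)
    for r in regions:
        # A region can only produce if a plant has been built there by period t.
        prob += prod[r, t] <= capacity * lpSum(build[r, s] for s in periods if s <= t)

prob.solve()
print(value(prob.objective),
      {rt: int(v.value()) for rt, v in build.items() if v.value() > 0.5})
```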

  1. Joint sensor location/power rating optimization for temporally-correlated source estimation

    Bushnaq, Osama M.

    2017-12-22

    The optimal sensor selection for scalar state parameter estimation in wireless sensor networks is studied in the paper. A subset of N candidate sensing locations is selected to measure a state parameter and send the observation to a fusion center via wireless AWGN channel. In addition to selecting the optimal sensing location, the sensor type to be placed in these locations is selected from a pool of T sensor types such that different sensor types have different power ratings and costs. The sensor transmission power is limited based on the amount of energy harvested at the sensing location and the type of the sensor. The Kalman filter is used to efficiently obtain the MMSE estimator at the fusion center. Sensors are selected such that the MMSE estimator error is minimized subject to a prescribed system budget. This goal is achieved using convex relaxation and greedy algorithm approaches.
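
    The convex-relaxation step and the Kalman filtering of the temporally correlated source are not reproduced here; the sketch below only illustrates the greedy flavour of the selection problem for a static scalar parameter, where sensors with assumed noise variances and costs are added as long as they give the best variance reduction per unit cost within a budget.

```python
import numpy as np

def greedy_sensor_selection(noise_var, cost, budget, prior_var=1.0):
    """Greedily pick sensors that most reduce posterior variance per unit cost."""
    selected, remaining = [], list(range(len(noise_var)))
    info = 1.0 / prior_var                     # accumulated Fisher information
    spent = 0.0
    while True:
        best, best_gain = None, 0.0
        for i in remaining:
            if spent + cost[i] > budget:
                continue
            new_var = 1.0 / (info + 1.0 / noise_var[i])
            gain = (1.0 / info - new_var) / cost[i]   # variance drop per cost
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            break
        selected.append(best)
        remaining.remove(best)
        info += 1.0 / noise_var[best]
        spent += cost[best]
    return selected, 1.0 / info                # chosen sensors, posterior variance

# Two hypothetical sensor "types" at five candidate locations: cheap/noisy vs costly/precise.
noise_var = np.array([1.0, 1.0, 0.2, 0.2, 0.5])
cost      = np.array([1.0, 1.0, 3.0, 3.0, 2.0])
print(greedy_sensor_selection(noise_var, cost, budget=6.0))
```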

  2. Improving fire season definition by optimized temporal modelling of daily human-caused ignitions.

    Costafreda-Aumedes, S; Vega-Garcia, C; Comas, C

    2018-07-01

    Wildfire suppression management is usually based on fast control of all ignitions, especially in highly populated countries with pervasive values-at-risk. To minimize values-at-risk loss by improving the response time of suppression resources, it is necessary to anticipate ignitions, which are mainly caused by people. Previous studies have found that human-ignition patterns change spatially and temporally depending on socio-economic activities; hence, the deployment of suppression resources along the year should consider these patterns. However, full suppression capacity is operational only within legally established fire seasons, driven by past events and budgets, which limits response capacity and increases damage outside those seasons. The aim of this study was to assess the temporal definition of fire seasons from the perspective of human-ignition patterns for the case study of Spain, where people cause over 95% of fires. Humans engage in activities that use fire as a tool in certain periods within a year, and in locations linked to specific spatial factors. Geographic variables (population, infrastructures, physiography and land uses) were used as explanatory variables for human-ignition patterns. The changing influence of these geographic variables on occurrence along the year was analysed with day-by-day logistic regression models. Daily models were built for all the municipal units in the two climatic regions of Spain (Atlantic and Mediterranean Spain) from 2002 to 2014, and similar models were grouped within continuous periods, designated as ignition-based seasons. We found three ignition-based seasons in the Mediterranean region and five in the Atlantic zones, not coincidental with calendar seasons, but with a high degree of agreement with current legally designated operational fire seasons. Our results suggest that an additional late-winter-early-spring fire season in the Mediterranean area and the extension of this same season in the Atlantic zone should be re

  3. Spatial and temporal distribution of pore gas concentrations during mainstream large-scale trough composting in China.

    Zeng, Jianfei; Shen, Xiuli; Sun, Xiaoxi; Liu, Ning; Han, Lujia; Huang, Guangqun

    2018-05-01

    With the advantages of high treatment capacity and low operational cost, large-scale trough composting has become one of the mainstream composting patterns in composting plants in China. This study measured concentrations of O2, CO2, CH4 and NH3 on-site to investigate the spatial and temporal distribution of pore gas concentrations during mainstream large-scale trough composting in China. The results showed that the temperature in the center of the pile was obviously higher than that at the side of the pile. Pore O2 concentration rapidly decreased and maintained the composting process during large-scale trough composting when the pile was naturally aerated, which will contribute to improving the current undesirable atmospheric environment in China. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Optimizing rice yields while minimizing yield-scaled global warming potential.

    Pittelkow, Cameron M; Adviento-Borbe, Maria A; van Kessel, Chris; Hill, James E; Linquist, Bruce A

    2014-05-01

    To meet growing global food demand with limited land and reduced environmental impact, agricultural greenhouse gas (GHG) emissions are increasingly evaluated with respect to crop productivity, i.e., on a yield-scaled as opposed to area basis. Here, we compiled available field data on CH4 and N2O emissions from rice production systems to test the hypothesis that in response to fertilizer nitrogen (N) addition, yield-scaled global warming potential (GWP) will be minimized at N rates that maximize yields. Within each study, yield N surplus was calculated to estimate deficit or excess N application rates with respect to the optimal N rate (defined as the N rate at which maximum yield was achieved). Relationships between yield N surplus and GHG emissions were assessed using linear and nonlinear mixed-effects models. Results indicate that yields increased in response to increasing N surplus when moving from deficit to optimal N rates. At N rates contributing to a yield N surplus, N2O and yield-scaled N2O emissions increased exponentially. In contrast, CH4 emissions were not impacted by N inputs. Accordingly, yield-scaled CH4 emissions decreased with N addition. Overall, yield-scaled GWP was minimized at optimal N rates, decreasing by 21% compared to treatments without N addition. These results are unique compared to aerobic cropping systems in which N2O emissions are the primary contributor to GWP, meaning yield-scaled GWP may not necessarily decrease for aerobic crops when yields are optimized by N fertilizer addition. Balancing gains in agricultural productivity with climate change concerns, this work supports the concept that high rice yields can be achieved with minimal yield-scaled GWP through optimal N application rates. Moreover, additional improvements in N use efficiency may further reduce yield-scaled GWP, thereby strengthening the economic and environmental sustainability of rice systems. © 2013 John Wiley & Sons Ltd.

  5. Supra-threshold scaling, temporal summation, and after-sensation: relationships to each other and anxiety/fear

    Robinson, Michael E; Bialosky, Joel E; Bishop, Mark D; Price, Donald D; George, Steven Z

    2010-03-01

    This study investigated the relationship of thermal pain testing from three types of quantitative sensory testing (ie, supra-threshold stimulus response scaling, temporal summation, and after-sensation) at three anatomical sites (ie, upper extremity, lower extremity, and trunk). Pain ratings from these procedures were also compared with common psychological measures previously shown to be related to experimental pain responses and consistent with fear-avoidance models of pain. Results indicated that supra-threshold stimulus response scaling, temporal summation, and after-sensation were significantly related to each other. The site of stimulation was also an important factor, with the trunk site showing the highest sensitivity in all three quantitative sensory testing procedures. Supra-threshold response measures were highly related to measures of fear of pain and anxiety sensitivity for all stimulation sites. For temporal summation and after-sensation, only the trunk site was significantly related to anxiety sensitivity and fear of pain, respectively. Results suggest the importance of considering the site of stimulation when designing and comparing studies. Furthermore, psychological influence on quantitative sensory testing is also of importance when designing and comparing studies. Although there was some variation by site of stimulation, fear of pain and anxiety sensitivity had consistent influences on pain ratings. Keywords: experimental pain, temporal summation, after-sensation, fear/avoidance, anxiety

  6. Assessing future climatic changes of rainfall extremes at small spatio-temporal scales

    Gregersen, Ida Bülow; Sørup, Hjalte Jomo Danielsen; Madsen, Henrik

    2013-01-01

    Climate change is expected to influence the occurrence and magnitude of rainfall extremes and hence the flood risks in cities. Major impacts of an increased pluvial flood risk are expected to occur at hourly and sub-hourly resolutions. This makes convective storms the dominant rainfall type...... in relation to urban flooding. The present study focuses on high-resolution regional climate model (RCM) skill in simulating sub-daily rainfall extremes. Temporal and spatial characteristics of output from three different RCM simulations with 25 km resolution are compared to point rainfall extremes estimated...... from observed data. The applied RCM data sets represent two different models and two different types of forcing. Temporal changes in observed extreme point rainfall are partly reproduced by the RCM RACMO when forced by ERA40 re-analysis data. Two ECHAM forced simulations show similar increases...

  7. A temporal and spatial scaling method for quantifying daily photosynthesis using remote sensing data

    Liu, J.; Chen, W.; Sarich, M. [Intermap Technologies Ltd., Nepean, ON (Canada); Cihlar, J. [Canada Centre for Remote Sensing, Ottawa, ON (Canada); Goulden, M. [California Univ., Irvine, CA (United States)

    1998-06-01

    Remote sensing to monitor the behaviour of terrestrial ecosystems over large areas was discussed. For this type of application the boreal ecosystem productivity simulator (BEPS) was developed, with the subsequent incorporation of the more advanced photosynthetic model. The new model improves the methodology through analytical spatial and temporal integration of canopy photosynthesis processes, and is suitable for regional remote sensing applications at moderate resolutions of 250 to 1000 m. 10 refs., 1 tab., 3 figs.

  8. Spatio-temporal optimization of sampling for bluetongue vectors (Culicoides) near grazing livestock

    Kirkeby, Carsten; Stockmarr, Anders; Bødker, Rene

    2013-01-01

    BACKGROUND: Estimating the abundance of Culicoides using light traps is influenced by a large variation in abundance in time and place. This study investigates the optimal trapping strategy to estimate the abundance or presence/absence of Culicoides on a field with grazing animals. We used 45 light...... absence of vectors on the field. The variation in the estimated abundance decreased steeply when using up to six traps, and was less pronounced when using more traps, although no clear cutoff was found. CONCLUSIONS: Despite spatial clustering in vector abundance, we found no effect of increasing...... monitoring programmes on fields with grazing animals....

  9. Optimal spatio-temporal filter for the reduction of crosstalk in surface electromyogram

    Mesin, Luca

    2018-02-01

    Objective. Crosstalk can pose limitations to the applications of surface electromyogram (EMG). Its reduction can help in the identification of the activity of specific muscles. The selectivity of different spatial filters was tested in the literature both in simulations and experiments: their performances are affected by many factors (e.g. anatomy, conduction properties of the tissues and dimension/location of the electrodes); moreover, they reduce crosstalk by decreasing the detection volume, recording data that represent only the activity of a small portion of the muscle of interest. In this study, an alternative idea is proposed, based on a spatio-temporal filter. Approach. An adaptive method is applied, which filters both in time and among different channels, providing a signal that maximally preserves the energy of the EMG of interest and discards that of nearby muscles (increasing the signal to crosstalk ratio, SCR). Main results. Tests with simulations and experimental data show an average increase of the SCR of about 2 dB with respect to the single or double differential data processed by the filter. This makes it possible to reduce the bias induced by crosstalk in conduction velocity and force estimation. Significance. The method can be applied to few channels, so that it is useful in applicative studies (e.g. clinics, gait analysis, rehabilitation protocols with EMG biofeedback and prosthesis control) where limited and not very selective information is usually available.
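
    The adaptive spatio-temporal filter itself is not specified in the record; as a simplified analogue, the sketch below learns a purely spatial filter that maximizes a signal-to-crosstalk energy ratio by solving a generalized eigenvalue problem on covariance matrices estimated from target-dominated and crosstalk-dominated training segments. The synthetic data, channel count, and the omission of the temporal dimension are all assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def max_scr_filter(target_segments, crosstalk_segments):
    """Spatial filter w maximizing w'Cs w / w'Cc w (signal-to-crosstalk ratio)."""
    Cs = np.cov(np.hstack(target_segments))        # channels x channels
    Cc = np.cov(np.hstack(crosstalk_segments))
    Cc += 1e-6 * np.trace(Cc) / Cc.shape[0] * np.eye(Cc.shape[0])  # regularize
    vals, vecs = eigh(Cs, Cc)                      # generalized eigenproblem
    w = vecs[:, -1]                                # eigenvector with largest ratio
    return w / np.linalg.norm(w)

# Synthetic 4-channel recordings: segments dominated by the muscle of interest
# versus segments dominated by crosstalk from a nearby muscle.
rng = np.random.default_rng(0)
mix_target = rng.normal(size=(4, 1))
mix_cross = rng.normal(size=(4, 1))
target = [mix_target @ rng.normal(size=(1, 500)) + 0.1 * rng.normal(size=(4, 500))]
cross = [mix_cross @ rng.normal(size=(1, 500)) + 0.1 * rng.normal(size=(4, 500))]
w = max_scr_filter(target, cross)
print("filter weights:", np.round(w, 3))
```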

  10. Modelling temporal and large-scale spatial variability of soil respiration from soil water availability, temperature and vegetation productivity indices

    Reichstein, M.; Rey, A.; Freibauer, A.; Tenhunen, J.; Valentini, R.; Soil Respiration Synthesis Team

    2003-04-01

    Field-chamber measurements of soil respiration from 17 different forest and shrubland sites in Europe and North America were summarized and analyzed with the goal to develop a model describing seasonal, inter-annual and spatial variability of soil respiration as affected by water availability, temperature and site properties. The analysis was performed at a daily and at a monthly time step. With the daily time step, the relative soil water content in the upper soil layer expressed as a fraction of field capacity was a good predictor of soil respiration at all sites. Among the site variables tested, those related to site productivity (e.g. leaf area index) correlated significantly with soil respiration, while carbon pool variables like standing biomass or the litter and soil carbon stocks did not show a clear relationship with soil respiration. Furthermore, it was evidenced that the effect of precipitation on soil respiration stretched beyond its direct effect via soil moisture. A general statistical non-linear regression model was developed to describe soil respiration as dependent on soil temperature, soil water content and site-specific maximum leaf area index. The model explained nearly two thirds of the temporal and inter-site variability of soil respiration with a mean absolute error of 0.82 µmol m-2 s-1. The parameterised model exhibits the following principal properties: 1) At a relative amount of upper-layer soil water of 16% of field capacity half-maximal soil respiration rates are reached. 2) The apparent temperature sensitivity of soil respiration measured as Q10 varies between 1 and 5 depending on soil temperature and water content. 3) Soil respiration under reference moisture and temperature conditions is linearly related to maximum site leaf area index. At a monthly time-scale we employed the approach by Raich et al. (2002, Global Change Biol. 8, 800-812) that used monthly precipitation and air temperature to globally predict soil respiration (T

  11. Spatial and Temporal Variability in Biogenic Gas Accumulation and Release in The Greater Everglades at Multiple Scales of Measurement

    McClellan, M. D.; Cornett, C.; Schaffer, L.; Comas, X.

    2017-12-01

    Wetlands play a critical role in the carbon (C) cycle by producing and releasing significant amounts of biogenic greenhouse gases (CO2, CH4) into the atmosphere. Wetlands in tropical and subtropical climates (such as the Florida Everglades) have become of great interest in the past two decades as they account for more than 20% of the global peatland C stock and are located in climates that favor year-round C emissions. Despite the increase in research involving C emission from these types of wetlands, the spatial and temporal variability involving C production, accumulation and release is still highly uncertain, and is the focus of this research at multiple scales of measurement (i.e. lab, field and landscape). Spatial variability in biogenic gas content, build-up and release, at both the lab and field scales, was estimated using a series of ground penetrating radar (GPR) surveys constrained with gas traps fitted with time-lapse cameras. Variability in gas content was estimated at the sub-meter scale (lab scale) within two extracted monoliths from different wetland ecosystems at the Disney Wilderness Preserve (DWP) and the Blue Cypress Preserve (BCP) using high frequency GPR (1.2 GHz) transects across the monoliths. At the field scale (> 10 m) changes in biogenic gas content were estimated using 160 MHz GPR surveys collected within 4 different emergent wetlands at the DWP. Additionally, biogenic gas content from the extracted monoliths was used to develop a landscape comparison of C accumulation and emissions for each different wetland ecosystem. Changes in gas content over time were estimated at the lab scale at high temporal resolution (i.e. sub-hourly) in monoliths from the BCP and Water Conservation Area 1-A. An autonomous rail system was constructed to estimate biogenic gas content variability within the wetland soil matrix using a series of continuous, uninterrupted 1.2 GHz GPR transects along the samples. Measurements were again constrained with an array

  12. [Temporal and spatial heterogeneity analysis of optimal value of sensitive parameters in ecological process model: The BIOME-BGC model as an example].

    Li, Yi Zhe; Zhang, Ting Long; Liu, Qiu Yu; Li, Ying

    2018-01-01

    Ecological process models are powerful tools for studying the terrestrial ecosystem water and carbon cycles. However, these models have many parameters, and whether reasonable values are chosen for them has an important impact on the simulation results. The sensitivity and optimization of model parameters have been analyzed and discussed in many studies, but the temporal and spatial heterogeneity of the optimal parameters has received less attention. In this paper, the BIOME-BGC model was used as an example. For evergreen broad-leaved forest, deciduous broad-leaved forest and C3 grassland, the sensitive parameters of the model were selected by constructing a sensitivity judgment index, with two experimental sites selected under each vegetation type. The objective function was constructed by using the simulated annealing algorithm combined with flux data to obtain the monthly optimal values of the sensitive parameters at each site. We then constructed a temporal heterogeneity judgment index, a spatial heterogeneity judgment index and a temporal-spatial heterogeneity judgment index to quantitatively analyze the temporal and spatial heterogeneity of the optimal values of the model's sensitive parameters. The results showed that the sensitivity of the BIOME-BGC model parameters differed among vegetation types, but the selected sensitive parameters were mostly consistent. The optimal values of the sensitive parameters mostly presented temporal and spatial heterogeneity to different degrees, varying with vegetation type. The sensitive parameters related to vegetation physiology and ecology had relatively little temporal and spatial heterogeneity, while those related to environment and phenology generally had larger temporal and spatial heterogeneity. In addition, the temporal heterogeneity of the optimal values of the model's sensitive parameters showed a significant linear correlation
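
    As a schematic of the calibration loop described in the record, the sketch below uses SciPy's dual_annealing to minimize a sum-of-squared-errors objective between a toy two-parameter model and synthetic "flux" observations for one month; the model, parameter bounds, and data are placeholders for the actual BIOME-BGC runs and eddy-covariance fluxes.

```python
import numpy as np
from scipy.optimize import dual_annealing

# Toy stand-in for a monthly model run with two "sensitive parameters" p = (a, b).
def toy_model(p, drivers):
    a, b = p
    return a * drivers + b

rng = np.random.default_rng(0)
drivers = rng.uniform(0, 1, size=30)                  # e.g. daily forcing within one month
observed_flux = toy_model((2.0, 0.5), drivers) + rng.normal(scale=0.05, size=30)

def objective(p):
    """Sum of squared errors between simulated and observed flux."""
    return float(np.sum((toy_model(p, drivers) - observed_flux) ** 2))

result = dual_annealing(objective, bounds=[(0.0, 5.0), (-1.0, 1.0)], seed=1)
print("monthly optimal parameters:", np.round(result.x, 3))
```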

  13. Correlated continuous time random walks: combining scale-invariance with long-range memory for spatial and temporal dynamics

    Schulz, Johannes H P; Chechkin, Aleksei V; Metzler, Ralf

    2013-01-01

    Standard continuous time random walk (CTRW) models are renewal processes in the sense that at each jump a new, independent pair of jump length and waiting time are chosen. Globally, anomalous diffusion emerges through scale-free forms of the jump length and/or waiting time distributions by virtue of the generalized central limit theorem. Here we present a modified version of recently proposed correlated CTRW processes, where we incorporate a power-law correlated noise on the level of both jump length and waiting time dynamics. We obtain a very general stochastic model, that encompasses key features of several paradigmatic models of anomalous diffusion: discontinuous, scale-free displacements as in Lévy flights, scale-free waiting times as in subdiffusive CTRWs, and the long-range temporal correlations of fractional Brownian motion (FBM). We derive the exact solutions for the single-time probability density functions and extract the scaling behaviours. Interestingly, we find that different combinations of the model parameters lead to indistinguishable shapes of the emerging probability density functions and identical scaling laws. Our model will be useful for describing recent experimental single particle tracking data that feature a combination of CTRW and FBM properties. (paper)

  14. Process Inference from High Frequency Temporal Variations in Dissolved Organic Carbon (DOC) Dynamics Across Nested Spatial Scales

    Tunaley, C.; Tetzlaff, D.; Lessels, J. S.; Soulsby, C.

    2014-12-01

    In order to understand aquatic ecosystem functioning it is critical to understand the processes that control the spatial and temporal variations in DOC. DOC concentrations are highly dynamic, however, our understanding at short, high frequency timescales is still limited. Optical sensors which act as a proxy for DOC provide the opportunity to investigate near-continuous DOC variations in order to understand the hydrological and biogeochemical processes that control concentrations at short temporal scales. Here we present inferred 15 minute stream water DOC data for a 12 month period at three nested scales (1km2, 3km2 and 31km2) for the Bruntland Burn, a headwater catchment in NE Scotland. High frequency data were measured using FDOM and CDOM probes which work by measuring the fluorescent component and coloured component, respectively, of DOC when exposed to ultraviolet light. Both FDOM and CDOM were strongly correlated (r2 >0.8) with DOC allowing high frequency estimations. Results show the close coupling of DOC with discharge throughout the sampling period at all three spatial scales. However, analysis at the event scale highlights anticlockwise hysteresis relationships between DOC and discharge due to the delay in DOC being flushed from the increasingly large areas of peaty soils as saturation zones expand and increase hydrological connectivity. Lag times vary between events dependent on antecedent conditions. During a 10 year drought period in late summer 2013 it was apparent that very small changes in discharge on a 15 minute timescale result in high increases in DOC. This suggests transport limitation during this period where DOC builds up in the soil and is not flushed regularly, therefore any subsequent increase in discharge results in large DOC peaks. The high frequency sensors also reveal diurnal variability during summer months related to the photo-oxidation, evaporative and biological influences of DOC during the day. This relationship is less

  15. Adapting crop management practices to climate change: Modeling optimal solutions at the field scale

    Lehmann, N.; Finger, R.; Klein, T.; Calanca, P.; Walter, A.

    2013-01-01

    Climate change will alter the environmental conditions for crop growth and require adjustments in management practices at the field scale. In this paper, we analyzed the impacts of two different climate change scenarios on optimal field management practices in winterwheat and grain maize production

  16. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    Hasanov, Khalid; Quintin, Jean-Noë l; Lastovetsky, Alexey

    2014-01-01

    -scale parallelism in mind. Indeed, while in 1990s a system with few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel

  17. Homogeneity analysis with k sets of variables: An alternating least squares method with optimal scaling features

    van der Burg, Eeke; de Leeuw, Jan; Verdegaal, Renée

    1988-01-01

    Homogeneity analysis, or multiple correspondence analysis, is usually applied tok separate variables. In this paper we apply it to sets of variables by using sums within sets. The resulting technique is called OVERALS. It uses the notion of optimal scaling, with transformations that can be multiple

  18. Temporal locality optimizations for stencil operations for parallel object-oriented scientific frameworks on cache-based architectures

    Bassetti, F.; Davis, K.; Quinlan, D.

    1998-12-01

    High-performance scientific computing relies increasingly on high-level large-scale object-oriented software frameworks to manage both algorithmic complexity and the complexities of parallelism: distributed data management, process management, inter-process communication, and load balancing. This encapsulation of data management, together with the prescribed semantics of a typical fundamental component of such object-oriented frameworks--a parallel or serial array-class library--provides an opportunity for increasingly sophisticated compile-time optimization techniques. This paper describes a technique for introducing cache blocking suitable for certain classes of numerical algorithms, demonstrates and analyzes the resulting performance gains, and indicates how this optimization transformation is being automated.

  19. Clinical utility of the Wechsler Memory Scale - Fourth Edition (WMS-IV) in patients with intractable temporal lobe epilepsy.

    Bouman, Zita; Elhorst, Didi; Hendriks, Marc P H; Kessels, Roy P C; Aldenkamp, Albert P

    2016-02-01

    The Wechsler Memory Scale (WMS) is one of the most widely used test batteries to assess memory functions in patients with brain dysfunctions of different etiologies. This study examined the clinical validation of the Dutch Wechsler Memory Scale - Fourth Edition (WMS-IV-NL) in patients with temporal lobe epilepsy (TLE). The sample consisted of 75 patients with intractable TLE, who were eligible for epilepsy surgery, and 77 demographically matched healthy controls. All participants were examined with the WMS-IV-NL. Patients with TLE performed significantly worse than healthy controls on all WMS-IV-NL indices and subtests (p<.01), with the exception of the Visual Working Memory Index including its contributing subtests, as well as the subtests Logical Memory I, Verbal Paired Associates I, and Designs II. In addition, patients with mesiotemporal abnormalities performed significantly worse than patients with lateral temporal abnormalities on the subtests Logical Memory I and Designs II and all the indices (p<.05), with the exception of the Auditory Memory Index and Visual Working Memory Index. Patients with either a left or a right temporal focus performed equally on all WMS-IV-NL indices and subtests (F(15, 50)=.70, p=.78), as well as the Auditory-Visual discrepancy score (t(64)=-1.40, p=.17). The WMS-IV-NL is capable of detecting memory problems in patients with TLE, indicating that it is a sufficiently valid memory battery. Furthermore, the findings support previous research showing that the WMS-IV has limited value in identifying material-specific memory deficits in presurgical patients with TLE. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Simultaneous temporally resolved DPIV and pressure measurements of symmetric oscillations in a scaled-up vocal fold model

    Ringenberg, Hunter; Rogers, Dylan; Wei, Nathaniel; Krane, Michael; Wei, Timothy

    2017-11-01

    The objective of this study is to apply experimental data to theoretical framework of Krane (2013) in which the principal aeroacoustic source is expressed in terms of vocal fold drag, glottal jet dynamic head, and glottal exit volume flow, reconciling formal theoretical aeroacoustic descriptions of phonation with more traditional lumped-element descriptions. These quantities appear in the integral equations of motion for phonatory flow. In this way time resolved velocity field measurements can be used to compute time-resolved estimates of the relevant terms in the integral equations of motion, including phonation aeroacoustic source strength. A simplified 10x scale vocal fold model from Krane, et al. (2007) was used to examine symmetric, i.e. `healthy', oscillatory motion of the vocal folds. By using water as the working fluid, very high spatial and temporal resolution was achieved. Temporal variation of transglottal pressure was simultaneously measured with flow on the vocal fold model mid-height. Experiments were dynamically scaled to examine a range of frequencies corresponding to male and female voice. The simultaneity of the pressure and flow provides new insights into the aeroacoustics associated with vocal fold oscillations. Supported by NIH Grant No. 2R01 DC005642-11.

  1. SVC Planning in Large–scale Power Systems via a Hybrid Optimization Method

    Yang, Guang ya; Majumder, Rajat; Xu, Zhao

    2009-01-01

    The research on allocation of FACTS devices has attracted considerable interest from various perspectives. In this paper, a hybrid model is proposed to optimise the number, locations and parameter settings of static Var compensators (SVCs) deployed in large-scale power systems. The model...... utilises the result of vulnerability assessment for determining the candidate locations. A hybrid optimisation method comprising two stages is proposed to find the optimal SVC solution in the large-scale planning problem. In the first stage, a conventional genetic algorithm (GA) is exploited to generate...... a candidate solution pool. Then, in the second stage, the candidates are presented to a linear planning model to investigate the system's optimal loadability, so that the optimal solution for SVC planning can be achieved. The method is demonstrated on the IEEE 300-bus system....

  2. Optimizing fusion PIC code performance at scale on Cori Phase 2

    Koskela, T. S.; Deslippe, J.

    2017-07-23

    In this paper we present the results of optimizing the performance of the gyrokinetic full-f fusion PIC code XGC1 on the Cori Phase Two Knights Landing system. The code has undergone substantial development to enable the use of vector instructions in its most expensive kernels within the NERSC Exascale Science Applications Program. We study the single-node performance of the code on an absolute scale using the roofline methodology to guide optimization efforts. We have obtained 2x speedups in single node performance due to enabling vectorization and performing memory layout optimizations. On multiple nodes, the code is shown to scale well up to 4000 nodes, near half the size of the machine. We discuss some communication bottlenecks that were identified and resolved during the work.

  3. Temporal fractals in seabird foraging behaviour: diving through the scales of time

    Macintosh, Andrew J. J.; Pelletier, Laure; Chiaradia, Andre; Kato, Akiko; Ropert-Coudert, Yan

    2013-05-01

    Animal behaviour exhibits fractal structure in space and time. Fractal properties in animal space-use have been explored extensively under the Lévy flight foraging hypothesis, but studies of behaviour change itself through time are rarer, have typically used shorter sequences generated in the laboratory, and generally lack critical assessment of their results. We thus performed an in-depth analysis of fractal time in binary dive sequences collected via bio-logging from free-ranging little penguins (Eudyptula minor) across full-day foraging trips (2^16 data points; 4 orders of temporal magnitude). Results from 4 fractal methods show that dive sequences are long-range dependent and persistent across ca. 2 orders of magnitude. This fractal structure correlated with trip length and time spent underwater, but individual traits had little effect. Fractal time is a fundamental characteristic of penguin foraging behaviour, and its investigation is thus a promising avenue for research on interactions between animals and their environments.
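
    The record applies several fractal estimators to binary dive sequences; the sketch below shows one standard choice, detrended fluctuation analysis (DFA), on a synthetic binary sequence. The window sizes and the uncorrelated test data are illustrative; a persistent sequence of the kind reported would yield an exponent above 0.5.

```python
import numpy as np

def dfa_alpha(series, scales):
    """Estimate the DFA scaling exponent alpha of a 1-D sequence."""
    x = np.asarray(series, float)
    profile = np.cumsum(x - x.mean())
    flucts = []
    for s in scales:
        n_win = len(profile) // s
        t = np.arange(s)
        ms = []
        for k in range(n_win):
            seg = profile[k * s:(k + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
            ms.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(ms)))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope   # ~0.5 uncorrelated, >0.5 persistent long-range dependence

# Binary "dive / no-dive" sequence: here an uncorrelated toy example.
rng = np.random.default_rng(0)
dives = rng.integers(0, 2, size=2 ** 14)
print(round(dfa_alpha(dives, scales=[16, 32, 64, 128, 256, 512]), 2))
```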

  4. The Spatio-Temporal Distribution of Particulate Matter during Natural Dust Episodes at an Urban Scale.

    Helena Krasnov

    Dust storms are a common phenomenon in arid and semi-arid areas, and their impacts on both physical and human environments are of great interest. A number of studies have associated atmospheric PM pollution in urban environments with an origin in natural soil/dust, but fewer have evaluated the spatial patterns of dust over a city. We aimed to analyze the spatial-temporal behavior of PM concentrations over the city of Beer Sheva, in southern Israel, where dust storms are quite frequent. PM data were recorded during the peak of each dust episode simultaneously at 23 predetermined fixed points around the city. Data were analyzed for both dust days and non-dust days (background). The database was constructed using a Geographic Information System and includes distributions of PM derived using inverse distance weighted (IDW) interpolation. The results show that the daily averages of atmospheric PM10 concentrations during the background period lie within a narrow range of 31 to 48 μg m-3 with low variation. During dust days, however, the temporal variations are significant: hourly PM10 concentrations can range from 100 μg m-3 to more than 1280 μg m-3 during strong storms. The IDW analysis demonstrates that during the peak of a storm, the spatial variations in PM between locations in the city can reach 400 μg m-3. An analysis of site and storm contributions to total PM concentration revealed that higher concentrations are found in parts of the city that are close to dust sources. The results improve the understanding of the dynamics of natural PM and its dependence on wind direction. This may have implications for environmental and health outcomes.
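
    A minimal version of the inverse distance weighted interpolation used in the record is sketched below; the 23 station coordinates, PM10 values, grid, and power parameter are invented for illustration.

```python
import numpy as np

def idw(sample_xy, sample_values, query_xy, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of point measurements."""
    d = np.linalg.norm(query_xy[:, None, :] - sample_xy[None, :, :], axis=-1)
    w = 1.0 / (d ** power + eps)           # eps avoids division by zero at stations
    return (w * sample_values).sum(axis=1) / w.sum(axis=1)

# 23 hypothetical monitoring points with PM10 readings during a dust episode.
rng = np.random.default_rng(0)
stations = rng.uniform(0, 10, size=(23, 2))          # km coordinates over the city
pm10 = rng.uniform(100, 1280, size=23)               # ug/m3 near the storm peak
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
field = idw(stations, pm10, grid).reshape(gx.shape)
print(field.min().round(1), field.max().round(1))
```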

  5. Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes

    Stein, Michael [Univ. of Chicago, IL (United States)

    2017-03-13

    Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic. The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the
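
    As a small illustration of the class of models discussed, the sketch below computes the predictive mean of a zero-mean Gaussian process with a separable squared-exponential space-time covariance (simple kriging); the covariance form, length scales, nugget, and data are assumptions, not the covariance classes developed in the project.

```python
import numpy as np

def sqexp_cov(a, b, space_scale=1.0, time_scale=1.0, variance=1.0):
    """Separable squared-exponential covariance over (x, y, t) points."""
    ds = np.linalg.norm(a[:, None, :2] - b[None, :, :2], axis=-1)
    dt = np.abs(a[:, None, 2] - b[None, :, 2])
    return variance * np.exp(-0.5 * (ds / space_scale) ** 2
                             - 0.5 * (dt / time_scale) ** 2)

rng = np.random.default_rng(0)
obs = rng.uniform(0, 1, size=(100, 3))              # (x, y, t) of observations
z = np.sin(4 * obs[:, 0]) + 0.1 * rng.normal(size=100)
new = rng.uniform(0, 1, size=(5, 3))                # prediction locations/times

K = sqexp_cov(obs, obs) + 1e-4 * np.eye(len(obs))   # nugget for numerical stability
k_star = sqexp_cov(new, obs)
weights = np.linalg.solve(K, z)
pred_mean = k_star @ weights                        # zero-mean GP predictive mean
print(np.round(pred_mean, 2))
```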

  6. Designing Optimal LNG Station Network for U.S. Heavy-Duty Freight Trucks using Temporally and Spatially Explicit Supply Chain Optimization

    Lee, Allen

    The recent natural gas boom has opened much discussion about the potential of natural gas and specifically Liquefied Natural Gas (LNG) in the United States transportation sector. The switch from diesel to natural gas vehicles would reduce foreign dependence on oil, spur domestic economic growth, and potentially reduce greenhouse gas emissions. LNG provides the most potential for the medium to heavy-duty vehicle market partially due to unstable oil prices and stagnant natural gas prices. As long as the abundance of unconventional gas in the United States remains cheap, fuel switching to natural gas could provide significant cost savings for long haul freight industry. Amid a growing LNG station network and ever increasing demand for freight movement, LNG heavy-duty truck sales are less than anticipated and the industry as a whole is less economic than expected. In spite of much existing and mature natural gas infrastructure, the supply chain for LNG is different and requires explicit and careful planning. This thesis proposes research to explore the claim that the largest obstacle to widespread LNG market penetration is sub-optimal infrastructure planning. No other study we are aware of has explicitly explored the LNG transportation fuel supply chain for heavy-duty freight trucks. This thesis presents a novel methodology that links a network infrastructure optimization model (represents supply side) with a vehicle stock and economic payback model (represents demand side). The model characterizes both a temporal and spatial optimization model of future LNG transportation fuel supply chains in the United States. The principal research goal is to assess the economic feasibility of the current LNG transportation fuel industry and to determine an optimal pathway to achieve ubiquitous commercialization of LNG vehicles in the heavy-duty transport sector. The results indicate that LNG is not economic as a heavy-duty truck fuel until 2030 under current market conditions

  7. Temporal stability and rates of post-depositional change in geochemical signatures of brown trout Salmo trutta scales.

    Ryan, D; Shephard, S; Kelly, F L

    2016-09-01

    This study investigates temporal stability in the scale microchemistry of brown trout Salmo trutta in feeder streams of a large heterogeneous lake catchment and rates of change after migration into the lake. Laser-ablation inductively coupled plasma mass spectrometry was used to quantify the elemental concentrations of Na, Mg, Mn, Cu, Zn, Ba and Sr in archived (1997-2002) scales of juvenile S. trutta collected from six major feeder streams of Lough Mask, County Mayo, Ireland. Water-element Ca ratios within these streams were determined for the fish sampling period and for a later period (2013-2015). Salmo trutta scale Sr and Ba concentrations were significantly (P < 0·05) correlated with stream water sample Sr:Ca and Ba:Ca ratios respectively from both periods, indicating multi-annual stability in scale and water-elemental signatures. Discriminant analysis of scale chemistries correctly classified 91% of sampled juvenile S. trutta to their stream of origin using a cross-validated classification model. This model was used to test whether assumed post-depositional change in scale element concentrations reduced correct natal stream classification of S. trutta in successive years after migration into Lough Mask. Fish residing in the lake for 1-3 years could be reliably classified to their most likely natal stream, but the probability of correct classification diminished strongly with longer lake residence. Use of scale chemistry to identify natal streams of lake S. trutta should focus on recent migrants, but may not require contemporary water chemistry data. © 2016 The Fisheries Society of the British Isles.

  8. THE IMPORTANCE OF LIMIT SOLUTIONS & TEMPORAL AND SPATIAL SCALES IN THE TEACHING OF TRANSPORT PHENOMENA

    SÁVIO LEANDRO BERTOLI

    2016-07-01

    In engineering courses the field of Transport Phenomena is of significant importance and appears in several disciplines relating to Fluid Mechanics, Heat and Mass Transfer. In these disciplines, problems involving these phenomena are mathematically formulated and analytical solutions are obtained whenever possible. The aim of this paper is to emphasize the possibility of extending the teaching-learning process in this area through a method based on time scales and limit solutions. Aspects of the phenomenology naturally arise during the definition of the scales and/or the determination of the limit solutions. Aspects concerning the phenomenology of the limit problems are easily incorporated into the proposed development, which contributes significantly to the understanding of the physics inherent in the mathematical modeling of each limiting case studied. Finally, the study aims to disseminate the use of limit solutions and time scales in the general fields of engineering.
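
    As a small illustration of the time-scale reasoning promoted here, the sketch below compares the characteristic diffusive and convective time scales of a generic transport problem; the property values are arbitrary examples, not taken from the paper.

```python
# Characteristic time scales for a generic transport problem (illustrative values only).
L = 0.01          # characteristic length, m
alpha = 1.4e-7    # thermal diffusivity, m^2/s (order of magnitude for water)
u = 0.05          # characteristic velocity, m/s

tau_diffusion = L**2 / alpha      # time for heat to diffuse across L
tau_convection = L / u            # time for the flow to traverse L

print(f"diffusion time ~ {tau_diffusion:.0f} s, convection time ~ {tau_convection:.2f} s")
# The smaller time scale dominates the short-time (limit) behaviour of the solution.
```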

  9. Financial Security and Optimal Scale of Foreign Exchange Reserve in China

    Guangyou Zhou

    2018-05-01

    The study of how foreign exchange reserves maintain financial security is of vital significance. This paper provides simulations and estimations of the optimal scale of foreign exchange reserves against the background of possible shocks to China’s economy due to the further opening of China’s financial market and a sudden stop of capital inflows. Focusing on the perspective of financial security, this article tentatively constructs an optimal-scale analysis framework based on utility maximization of the foreign exchange reserve, and selects relevant data to simulate the optimal scale of China’s foreign exchange reserves. The results show that: (1) the main reason for the fast growth of China’s foreign exchange reserve is the structural problem of its double international payment surplus, which creates long-term appreciation expectations for the exchange rate and makes it difficult for international capital inflows and excess foreign exchange reserves to enter the real economic growth mechanism under China’s export-driven growth model; (2) the average optimal scale of the foreign exchange reserve in case of a sudden stop of capital inflows was calculated, through parameter estimation and numerical simulation, to be 13.53% of China’s gross domestic product (GDP) between 1994 and 2017; (3) with the function of the foreign exchange reserves changing from meeting basic transaction demands to meeting financial security demands, the role of the foreign exchange reserve in maintaining the state’s financial security is becoming more and more obvious. Therefore, the structure of foreign exchange reserve assets in China should be optimized, giving full play to the special role of foreign exchange reserves in safeguarding the country’s financial security.

  10. Model-based plant-wide optimization of large-scale lignocellulosic bioethanol plants

    Prunescu, Remus Mihail; Blanke, Mogens; Jakobsen, Jon Geest

    2017-01-01

    Second generation biorefineries transform lignocellulosic biomass into chemicals with higher added value following a conversion mechanism that consists of: pretreatment, enzymatic hydrolysis, fermentation and purification. The objective of this study is to identify the optimal operational point...... with respect to maximum economic profit of a large scale biorefinery plant using a systematic model-based plantwide optimization methodology. The following key process parameters are identified as decision variables: pretreatment temperature, enzyme dosage in enzymatic hydrolysis, and yeast loading per batch...... in fermentation. The plant is treated in an integrated manner taking into account the interactions and trade-offs between the conversion steps. A sensitivity and uncertainty analysis follows at the optimal solution considering both model and feed parameters. It is found that the optimal point is more sensitive...
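
    The plant-wide formulation can be sketched as a bounded optimization over the three decision variables named above; the profit surrogate, bounds, and starting point below are invented for illustration and stand in for the detailed process models used in the study.

```python
# Sketch of plant-wide profit maximization over the three decision variables named
# in the abstract. The profit surrogate and bounds are made up for illustration;
# the actual study uses detailed models of each conversion step.
import numpy as np
from scipy.optimize import minimize

def negative_profit(x):
    T, enzyme, yeast = x                      # pretreatment temp (C), enzyme dose, yeast load
    yield_ = np.exp(-((T - 185) / 15) ** 2) * (1 - np.exp(-3 * enzyme)) * (1 - np.exp(-2 * yeast))
    revenue = 100.0 * yield_                  # ethanol revenue (arbitrary units)
    cost = 0.05 * T + 20.0 * enzyme + 8.0 * yeast
    return -(revenue - cost)                  # minimize the negative profit

result = minimize(negative_profit, x0=[180.0, 0.5, 0.5],
                  bounds=[(150, 210), (0.1, 2.0), (0.1, 2.0)])
print("optimal (T, enzyme, yeast):", result.x.round(2), "profit:", round(-result.fun, 2))
```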

  11. Optimization of large-scale heterogeneous system-of-systems models.

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  12. Modeling temporal and large-scale spatial variability of soil respiration from soil water availability, temperature and vegetation productivity indices

    Reichstein, Markus; Rey, Ana; Freibauer, Annette; Tenhunen, John; Valentini, Riccardo; Banza, Joao; Casals, Pere; Cheng, Yufu; Grünzweig, Jose M.; Irvine, James; Joffre, Richard; Law, Beverly E.; Loustau, Denis; Miglietta, Franco; Oechel, Walter; Ourcival, Jean-Marc; Pereira, Joao S.; Peressotti, Alessandro; Ponti, Francesca; Qi, Ye; Rambal, Serge; Rayment, Mark; Romanya, Joan; Rossi, Federica; Tedeschi, Vanessa; Tirone, Giampiero; Xu, Ming; Yakir, Dan

    2003-12-01

    explain some of the month-to-month variability of soil respiration, it failed to capture the intersite variability, regardless of whether the original or a new optimized model parameterization was used. In both cases, the residuals were strongly related to maximum site leaf area index. Thus, for a monthly timescale, we developed a simple T&P&LAI model that includes leaf area index as an additional predictor of soil respiration. This extended but still simple model performed nearly as well as the more detailed time step model and explained 50% of the overall and 65% of the site-to-site variability. Consequently, better estimates of globally distributed soil respiration should be obtained with the new model driven by satellite estimates of leaf area index. Before application at the continental or global scale, this approach should be further tested in boreal, cold-temperate, and tropical biomes as well as for non-woody vegetation.
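
    A minimal sketch of a respiration model in the spirit of the T&P&LAI approach is given below; the Q10 temperature response and the linear soil-water and LAI scalers are assumptions made only for this illustration, not the parameterization fitted in the study.

```python
# Illustrative monthly soil respiration model driven by soil temperature, water
# availability and leaf area index. Functional forms and parameters are assumed
# for this sketch; they are not those of the published T&P&LAI model.
import numpy as np

def soil_respiration(t_soil, w_rel, lai, r_ref=2.0, q10=2.0, t_ref=15.0):
    """Soil respiration from soil temperature (C), relative water availability (0-1), and LAI."""
    f_temp = q10 ** ((t_soil - t_ref) / 10.0)   # Q10 temperature response (assumed)
    f_water = np.clip(w_rel, 0.0, 1.0)          # linear water limitation (assumed)
    f_lai = 0.5 + 0.5 * lai / 6.0               # simple LAI scaler (assumed)
    return r_ref * f_temp * f_water * f_lai     # e.g. umol CO2 m-2 s-1

print(round(soil_respiration(t_soil=20.0, w_rel=0.6, lai=4.0), 2))
```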

  13. Temporal changes in vegetation of a virgin beech woodland remnant: stand-scale stability with intensive fine-scale dynamics governed by stand dynamic events

    Tibor Standovár

    2017-03-01

    The aim of this resurvey study is to check if herbaceous vegetation on the forest floor exhibits overall stability at the stand-scale in spite of intensive dynamics at the scale of individual plots and stand dynamic events (driven by natural fine-scale canopy gap dynamics). In 1996, we sampled a 1.5 ha patch using 0.25 m² plots placed along a 5 m × 5 m grid in the best remnant of central European montane beech woods in Hungary. All species in the herbaceous layer and their cover estimates were recorded. Five patches representing different stand developmental situations (SDS) were selected for resurvey. In 2013, 306 plots were resurveyed by using blocks of four 0.25 m² plots to test the effects of imperfect relocation. We found very intensive fine-scale dynamics in the herbaceous layer with high species turnover and sharp changes in ground layer cover at the local scale (< 1 m²). A decrease in species richness and herbaceous layer cover, as well as high species turnover, characterized the closing gaps. Colonization events and increasing species richness and herbaceous layer cover prevailed in the two newly created gaps. A pronounced decrease in the total cover, but low species turnover and survival of the majority of the closed forest specialists was detected by the resurvey at the stand-scale. The test aiming at assessing the effect of relocation showed a higher time effect than the effect of imprecise relocation. The very intensive fine-scale dynamics of the studied beech forest are profoundly determined by natural stand dynamics. Extinction and colonisation episodes even out at the stand-scale, implying an overall compositional stability of the herbaceous vegetation at the given spatial and temporal scale. We argue that fine-scale gap dynamics, driven by natural processes or applied as a management method, can warrant the survival of many closed forest specialist species in the long-run. Nomenclature: Flora Europaea (Tutin et al. 2010) for

  14. Deconstructing the deconstruction of Appalachia: Mountaintop mining effects on hydrology across temporal and spatial scales

    Nippgen, F.; Ross, M. R. V.; Bernhardt, E. S.; McGlynn, B. L.

    2017-12-01

    Mountaintop mining (MTM) is an especially destructive form of surface coal mining. It is widespread in Central Appalachia and is practiced around the world. In the process of accessing coal seams up to several hundred meters below the surface, mountaintops and ridges are removed via explosives and heavy machinery, and the resulting overburden is pushed into nearby valleys. This broken-up rock and soil material represents a largely unquantified amount of storage for incoming precipitation and facilitates enhanced chemical weathering rates and increased dissolved-solids export to streams. However, assessing the independent impact of MTM can be difficult in the presence of other forms of mining, especially underground mining. Here, we evaluate the effect of MTM on water quantity and quality on annual, seasonal, and event time scales in two sets of paired watersheds in southwestern West Virginia impacted by MTM. On an annual timescale, the mined watersheds sustained baseflow throughout the year, while the first-order watersheds ceased flowing during the latter parts of the growing season. In fractionally mined watersheds that continued to flow, the water in the stream was generated exclusively from mined portions of the watersheds, leading to elevated total dissolved solids in the stream water. On the event time scale, we analyzed 50 storm events over a water year for a range of hydrologic response metrics. The mined watersheds exhibited smaller runoff ratios and longer response times during the wet dormant season, but responded similarly to rainfall events during the growing season or even exceeded the runoff magnitude of the reference watersheds. Our research demonstrates clear differences in hydrologic response between mined and unmined watersheds during the growing season and the dormant season that are detectable at annual, seasonal, and event time scales. For larger spatial scales (up to 2,000 km²) the effect of MTM on water quantity is not as easily detectable. At
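
    One of the event-scale response metrics mentioned above, the runoff ratio, is simply event runoff depth divided by event rainfall depth; the sketch below uses placeholder event totals and omits the baseflow separation that would precede this step in practice.

```python
# Event runoff ratio: event runoff depth divided by event rainfall depth.
# The arrays below are placeholder event totals (mm); in practice, event runoff
# would first be separated from baseflow, which this sketch omits.
import numpy as np

rain_mm   = np.array([12.0, 35.0, 8.0, 50.0])   # event rainfall depths
runoff_mm = np.array([ 1.5, 10.0, 0.4, 22.0])   # corresponding event runoff depths

runoff_ratio = runoff_mm / rain_mm
print(runoff_ratio.round(2))   # comparing mined vs reference watersheds event by event
```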

  15. Temporal and Spatial Scales Matter: Circannual Habitat Selection by Bird Communities in Vineyards.

    Claire Guyot

    Vineyards are likely to be regionally important for wildlife, but we lack biodiversity studies in this agroecosystem, which is undergoing a rapid management revolution. As vine cultivation is restricted to arid and warm climatic regions, biodiversity-friendly management would promote species typical of southern biomes. Vineyards are often intensively cultivated, mostly surrounded by few natural features and offering a fairly mineral appearance with little ground vegetation cover. Ground vegetation cover and composition may further strongly vary with respect to season, influencing patterns of habitat selection by ecological communities. We investigated season-specific bird-habitat associations to highlight the importance of semi-natural habitat features and vineyard ground vegetation cover throughout the year. Given that avian habitat selection varies according to taxa, guilds and spatial scale, we modelled bird-habitat associations in all months at two spatial scales using mixed effects regression models. At the landscape scale, birds were recorded along 10 1-km long transects in Southwestern Switzerland (February 2014 - January 2015). At the field scale, we compared the characteristics of visited and unvisited vineyard fields (hereafter called parcels). Bird abundance in vineyards tripled in winter compared to summer. Vineyards surrounded by a greater amount of hedges and small woods harboured higher bird abundance, species richness and diversity, especially during the winter season. Regarding ground vegetation, birds showed a season-specific habitat selection pattern, notably a marked preference for ground-vegetated parcels in winter and for intermediate vegetation cover in spring and summer. These season-specific preferences might be related to species-specific life histories: more insectivorous, ground-foraging species occur during the breeding season whereas granivores predominate in winter. These results highlight the importance of

  16. Spatial and temporal constraints on regional-scale groundwater flow in the Pampa del Tamarugal Basin, Atacama Desert, Chile

    Jayne, Richard S.; Pollyea, Ryan M.; Dodd, Justin P.; Olson, Elizabeth J.; Swanson, Susan K.

    2016-12-01

    Aquifers within the Pampa del Tamarugal Basin (Atacama Desert, northern Chile) are the sole source of water for the coastal city of Iquique and the economically important mining industry. Despite this, the regional groundwater system remains poorly understood. Although it is widely accepted that aquifer recharge originates as precipitation in the Altiplano and Andean Cordillera to the east, there remains debate on whether recharge is driven primarily by near-surface groundwater flow in response to periodic flood events or by basal groundwater flux through deep-seated basin fractures. In addressing this debate, the present study quantifies spatial and temporal variability in regional-scale groundwater flow paths at 20.5°S latitude by combining a two-dimensional model of groundwater and heat flow with field observations and δ¹⁸O isotope values in surface water and groundwater. Results suggest that both previously proposed aquifer recharge mechanisms are likely influencing aquifers within the Pampa del Tamarugal Basin; however, each mechanism is operating on different spatial and temporal scales. Storm-driven flood events in the Altiplano readily transmit groundwater to the eastern Pampa del Tamarugal Basin through near-surface groundwater flow on short time scales, e.g., 10⁰-10¹ years, but these effects are likely isolated to aquifers in the eastern third of the basin. In addition, this study illustrates a physical mechanism for groundwater originating in the eastern highlands to recharge aquifers and salars in the western Pampa del Tamarugal Basin over timescales of 10⁴-10⁵ years.

  17. Thermal infrared imagery as a tool for analysing the variability of surface saturated areas at various temporal and spatial scales

    Glaser, Barbara; Antonelli, Marta; Pfister, Laurent; Klaus, Julian

    2017-04-01

    Surface saturated areas are important for the on- and offset of hydrological connectivity within the hillslope-riparian-stream continuum. This is reflected in concepts such as variable contributing areas or critical source areas. However, we still lack a standardized method for areal mapping of surface saturation and for observing its spatiotemporal variability. Proof-of-concept studies in recent years have shown the potential of thermal infrared (TIR) imagery to record surface saturation dynamics at various temporal and spatial scales. Thermal infrared imagery is thus a promising alternative to conventional approaches, such as the squishy boot method or the mapping of vegetation. In this study we use TIR images to investigate the variability of surface saturated areas at different temporal and spatial scales in the forested Weierbach catchment (0.45 km²) in western Luxembourg. We took TIR images of the riparian zone with a hand-held FLIR infrared camera at fortnightly intervals over 18 months at nine different locations distributed over the catchment. Not all of the acquired images were suitable for a derivation of the surface saturated areas, as various factors influence the usability of the TIR images (e.g. temperature contrasts, shadows, fog). Nonetheless, we obtained a large number of usable images that provided a good insight into the dynamic behaviour of surface saturated areas at different scales. The images revealed how diverse the evolution of surface saturated areas can be throughout the hydrologic year. For some locations with similar morphology or topography we identified diverging saturation dynamics, while other locations with different morphology/topography showed more similar behaviour. Moreover, we were able to assess the variability of the dynamics of expansion/contraction of saturated areas within individual locations, which can help to better understand the mechanisms behind surface saturation development.

  18. SWANS: A Prototypic SCALE Criticality Sequence for Automated Optimization Using the SWAN Methodology

    Greenspan, E.

    2001-01-11

    SWANS is a new prototypic analysis sequence that provides an intelligent, semi-automatic search for the maximum k_eff of a given amount of specified fissile material, or of the minimum critical mass. It combines the optimization strategy of the SWAN code with the composition-dependent resonance self-shielded cross sections of the SCALE package. For a given system composition arrived at during the iterative optimization process, the value of k_eff is as accurate and reliable as obtained using the CSAS1X Sequence of SCALE-4.4. This report describes how SWAN is integrated within the SCALE system to form the new prototypic optimization sequence, describes the optimization procedure, provides a user guide for SWANS, and illustrates its application to five different types of problems. In addition, the report illustrates that resonance self-shielding might have a significant effect on the maximum k_eff value a given fissile material mass can have.

  19. Large-Scale Spatio-Temporal Patterns of Mediterranean Cephalopod Diversity.

    Stefanie Keller

    Species diversity is widely recognized as an important trait of ecosystems' functioning and resilience. Understanding the causes of diversity patterns and their interaction with the environmental conditions is essential in order to effectively assess and preserve existing diversity. While diversity patterns of most recurrent groups such as fish are commonly studied, other important taxa such as cephalopods have received less attention. In this work we present spatio-temporal trends of cephalopod diversity across the entire Mediterranean Sea during the last 19 years, analysing data from the annual bottom trawl survey MEDITS conducted by 5 different Mediterranean countries using standardized gears and sampling protocols. The influence of local and regional environmental variability in different Mediterranean regions is analysed applying generalized additive models, using species richness and the Shannon-Wiener index as diversity descriptors. While the western basin showed a high diversity, our analyses do not support a steady eastward decrease of diversity as proposed in some previous studies. Instead, high Shannon diversity was also found in the Adriatic and Aegean Seas, and high species richness in the eastern Ionian Sea. Overall diversity did not show any consistent trend over the last two decades. Except in the Adriatic Sea, diversity showed a hump-shaped trend with depth in all regions, being highest between 200-400 m depth. Our results indicate that high Chlorophyll a concentrations and warmer temperatures seem to enhance species diversity, and the influence of these parameters is stronger for richness than for Shannon diversity.
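
    The two diversity descriptors used above are computed directly from species abundances per haul; a minimal sketch with placeholder counts is given below.

```python
# Shannon-Wiener diversity index and species richness from a vector of species
# abundances in one haul (counts are placeholders).
import numpy as np

counts = np.array([120, 45, 8, 3, 1, 0, 14])   # individuals per cephalopod species
counts = counts[counts > 0]                    # drop species not observed in this haul
p = counts / counts.sum()                      # relative abundances

richness = counts.size
shannon = -(p * np.log(p)).sum()
print(f"richness = {richness}, Shannon H' = {shannon:.2f}")
```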

  20. Lead Pollution Remanence in an Urban River System: A multi-scale temporal and spatial study

    Ayrault S.

    2013-04-01

    This work aims at studying the fate of sediments contaminated with tetraethyl Pb from leaded gasoline using a two-dimensional upscaling approach, from a small urban subcatchment, the Orge River (900 km²), to the whole Seine River basin (64,700 km²), in France. In France, the reduction of leaded gasoline started in 1986 and leaded gasoline was completely banned after 2000. This work assesses whether the ban of leaded gasoline is related to changes in the Pb contamination sources of river suspended sediment particles (SPM) and bed sediment. Sediment cores and samples collected in the course of previous research projects on Seine River contamination were used as temporal archives. The study of lead isotopic ratios showed a fast decrease, between 2000 and 2011, in the contamination of urban river suspended particulate matter due to the “gasoline” lead source. This source mostly disappeared in the SPM from the Seine River basin, which includes urban areas but also agricultural and industrial activities. Nevertheless, it is still present in the small urban catchment of the Orge River. The results on bed sediments showed a different pattern, where the “gasoline” source is still active in densely populated areas, either in the Seine River within the 20 km downstream of Paris, or along the Orge River.

  1. The Resilience of Coral Reefs Across a Hierarchy of Spatial and Temporal scales

    Mumby, P. J.

    2016-02-01

    Resilience is a dynamical property of ecosystems that integrates processes of recovery, disturbance and internal dynamics, including reinforcing feedbacks. As such, resilience is a useful framework to consider how ecosystems respond to multiple drivers occurring over multiple scales. Many insights have emerged recently including the way in which stressors can combine synergistically to deplete resilience. However, while recent advances have mapped resilience across seascapes, most studies have not captured emergent spatial dependencies and dynamics across the seascape (e.g., independent box models are run across the seascape in isolation). Here, we explore the dynamics that emerge when the seascape is 'wired up' using data on larval dispersal, thereby giving a fully spatially-realistic model. We then consider how dynamics change across even larger, biogeographic scales, posing the question, 'are there robust and global "rules of thumb" for the resilience of a single ecosystem?'. Answers to this question will help managers tailor their interventions and research needs for their own jurisdiction.

  2. Disentangling woodland caribou movements in response to clearcuts and roads across temporal scales.

    David Beauchesne

    Although prey species typically respond to the most limiting factors at coarse spatiotemporal scales while addressing biological requirements at finer scales, such behaviour may become challenging for species inhabiting human altered landscapes. We investigated how woodland caribou, a threatened species inhabiting North-American boreal forests, modified their fine-scale movements when confronted with forest management features (i.e. clearcuts and roads). We used GPS telemetry data collected between 2004 and 2010 on 49 female caribou in a managed area in Québec, Canada. Movements were studied using a use-availability design contrasting observed steps (i.e. line connecting two consecutive locations) with random steps (i.e. proxy of immediate habitat availability). Although caribou mostly avoided disturbances, individuals nonetheless modulated their fine-scale response to disturbances on a daily and annual basis, potentially compromising between risk avoidance in periods of higher vulnerability (i.e. calving, early and late winter) during the day and foraging activities in periods of higher energy requirements (i.e. spring, summer and rut) during dusk/dawn and at night. The local context in which females moved was shown to influence their decision to cross clearcut edges and roads. Indeed, although females typically avoided crossing clearcut edges and roads at low densities, crossing rates were found to rapidly increase in greater disturbance densities. In some instances, however, females were less likely to cross edges and roads as densities increased. Females may then be trapped and forced to use disturbed habitats, known to be associated with higher predation risk. We believe that further increases in anthropogenic disturbances could exacerbate such behavioural responses and ultimately lead to population level consequences.

  3. Anthropogenic Effects on Forest Ecosystems at Various Spatio-Temporal Scales

    Michael Bredemeier

    2002-01-01

    The focus in this review of long-term effects on forest ecosystems is on human impact. As a classification of this differentiated and complex matter, three domains of long-term effects with different scales in space and time are distinguished: (1) exploitation and conversion history of forests in areas of extended human settlement; (2) long-range air pollution and acid deposition in industrialized regions; and (3) current global loss of forests and soil degradation.

  4. Temporal and Latitudinal Variations of the Length-Scales and Relative Intensities of the Chromospheric Network

    Raju, K. P.

    2018-05-01

    The Calcium K spectroheliograms of the Sun from Kodaikanal have a data span of about 100 years and cover over 9 solar cycles. The Ca line is a strong chromospheric line dominated by the chromospheric network and plages, which are good indicators of solar activity. Length-scales and relative intensities of the chromospheric network have been obtained for solar latitudes from 50° N to 50° S from the spectroheliograms. The length-scale was obtained from the half-width of the two-dimensional autocorrelation of the latitude strip, which gives a measure of the width of the network boundary. As reported earlier for the transition region extreme ultraviolet (EUV) network, the relative intensity and width of the chromospheric network boundary are found to be dependent on the solar cycle. A varying phase difference has been noticed in the quantities in different solar latitudes. A cross-correlation analysis of the quantities from other latitudes with the ±30° latitudes revealed an interesting phase difference pattern indicating flux transfer. Evidence of equatorward flux transfer has been observed. The average equatorward flux transfer was estimated to be 5.8 m s⁻¹. Possible reasons for the drift include meridional circulation, torsional oscillations, or bright point migration. Cross-correlation of intensity and length-scale from the same latitude showed increasing phase difference with increasing latitude. We have also obtained the cross-correlation of the quantities across the equator to see the possible phase lags in the two hemispheres. Signatures of lags are seen in the length scales of the southern hemisphere near the equatorial latitudes, but no such lags in the intensity are observed. The results have important implications for the flux transfer over the solar surface and hence for solar activity and the dynamo.

  5. Disentangling woodland caribou movements in response to clearcuts and roads across temporal scales.

    Beauchesne, David; Jaeger, Jochen Ag; St-Laurent, Martin-Hugues

    2013-01-01

    Although prey species typically respond to the most limiting factors at coarse spatiotemporal scales while addressing biological requirements at finer scales, such behaviour may become challenging for species inhabiting human altered landscapes. We investigated how woodland caribou, a threatened species inhabiting North-American boreal forests, modified their fine-scale movements when confronted with forest management features (i.e. clearcuts and roads). We used GPS telemetry data collected between 2004 and 2010 on 49 female caribou in a managed area in Québec, Canada. Movements were studied using a use-availability design contrasting observed steps (i.e. line connecting two consecutive locations) with random steps (i.e. proxy of immediate habitat availability). Although caribou mostly avoided disturbances, individuals nonetheless modulated their fine-scale response to disturbances on a daily and annual basis, potentially compromising between risk avoidance in periods of higher vulnerability (i.e. calving, early and late winter) during the day and foraging activities in periods of higher energy requirements (i.e. spring, summer and rut) during dusk/dawn and at night. The local context in which females moved was shown to influence their decision to cross clearcut edges and roads. Indeed, although females typically avoided crossing clearcut edges and roads at low densities, crossing rates were found to rapidly increase in greater disturbance densities. In some instances, however, females were less likely to cross edges and roads as densities increased. Females may then be trapped and forced to use disturbed habitats, known to be associated with higher predation risk. We believe that further increases in anthropogenic disturbances could exacerbate such behavioural responses and ultimately lead to population level consequences.
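
    The use-availability step design described above can be sketched with a simplified (unconditional) logistic contrast of observed versus random steps; the covariates and data below are random placeholders, and the study itself accounts for the pairing of steps, which this sketch does not.

```python
# Simplified sketch of a use-availability analysis: observed steps (used = 1) are
# contrasted with random steps (used = 0) using covariates such as distance to the
# nearest road and clearcut density. Data are random placeholders; the published
# analysis uses models that respect the pairing of observed and random steps.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
used = np.repeat([1, 0], n)                                   # observed vs random steps
dist_road = np.concatenate([rng.exponential(800, n),          # used steps tend to be farther from roads
                            rng.exponential(500, n)])
cut_density = np.concatenate([rng.uniform(0, 0.3, n),         # and in less disturbed surroundings
                              rng.uniform(0, 0.6, n)])
X = np.column_stack([dist_road, cut_density])

model = LogisticRegression(max_iter=1000).fit(X, used)
print("coefficients (distance to road, clearcut density):", model.coef_.round(4))
```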

  6. Temporal and spatial scaling of the genetic structure of a vector-borne plant pathogen.

    Coletta-Filho, Helvécio D; Francisco, Carolina S; Almeida, Rodrigo P P

    2014-02-01

    The ecology of plant pathogens of perennial crops is affected by the long-lived nature of their immobile hosts. In addition, changes to the genetic structure of pathogen populations may affect disease epidemiology and management practices; examples include local adaptation of more fit genotypes or introduction of novel genotypes from geographically distant areas via human movement of infected plant material or insect vectors. We studied the genetic structure of Xylella fastidiosa populations causing disease in sweet orange plants in Brazil at multiple scales using fast-evolving molecular markers (simple-sequence DNA repeats). Results show that populations of X. fastidiosa were regionally isolated, and that isolation was maintained for populations analyzed a decade apart from each other. However, despite such geographic isolation, local populations present in year 2000 were largely replaced by novel genotypes in 2009 but not as a result of migration. At a smaller spatial scale (individual trees), results suggest that isolates within plants originated from a shared common ancestor. In summary, new insights on the ecology of this economically important plant pathogen were obtained by sampling populations at different spatial scales and two different time points.

  7. Xanthophyll Cycle In Chromophyte Algae: Variations Over Different Temporal and Space Scales and Their Ecological Implications.

    Brunet, C.

    As a response to excess light, algae present photoprotective reactions, resulting in a reduction of the light harvesting efficiency. One of these reactions involves the so-called xanthophyll cycle between diadinoxanthin (Dd) and diatoxanthin (Dt) pigments in chl c-containing brown algae, the latter acting as a photoprotectant avoiding photooxidation of the LHC. Presence and concentrations of these two xanthophylls are valuable indicators of the light history of algae in the natural environment and can be used to obtain ecological information at different time and space scales. Data are presented from the Mediterranean Sea and the English Channel. At mesoscale, significant relationships between Dt and Dd and physical (light, salinity) or biological (Fv/Fm ratio) data can be drawn, suggesting that they strictly reflect water mass characteristics and behavior. In the Gulf of Naples (Med. Sea), from vertical profiles of the photoadaptative index (ratio between Dt and Dd), we can estimate a mixing rate of 0.07 cm s⁻¹ in the upper layer. From this velocity, we are able to infer kinetic coefficients for different photophysiological parameters reacting over different time scales within the mixed layer. At the diel scale, this photoadaptative index follows significant oscillations in the upper water column, and equations are found expressing them as a function of light and time. Also in this case, mixing rates are estimated, lying around 0.05 cm s⁻¹.

  8. Combining agreement and frequency rating scales to optimize psychometrics in measuring behavioral health functioning.

    Marfeo, Elizabeth E; Ni, Pengsheng; Chan, Leighton; Rasch, Elizabeth K; Jette, Alan M

    2014-07-01

    The goal of this article was to investigate the optimal use of frequency vs. agreement rating scales in two subdomains of the newly developed Work Disability Functional Assessment Battery: the Mood & Emotions and Behavioral Control scales. A psychometric study compared rating scale performance embedded in a cross-sectional survey used for developing a new instrument to measure behavioral health functioning among adults applying for disability benefits in the United States. Within the sample of 1,017 respondents, the range of response category endorsement was similar for both frequency and agreement item types for both scales. There were fewer missing values in the frequency items than the agreement items. Both frequency and agreement items showed acceptable reliability. The frequency items demonstrated optimal effectiveness around the mean ± 1-2 standard deviation score range; the agreement items performed better at the extreme score ranges. Findings suggest an optimal response format requires a mix of both agreement-based and frequency-based items. Frequency items perform better in the normal range of responses, capturing specific behaviors, reactions, or situations that may elicit a specific response. Agreement items do better for those whose scores are more extreme and capture subjective content related to general attitudes, behaviors, or feelings of work-related behavioral health functioning. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Meta-Heuristics in Short Scale Construction: Ant Colony Optimization and Genetic Algorithm.

    Schroeders, Ulrich; Wilhelm, Oliver; Olaru, Gabriel

    2016-01-01

    The advent of large-scale assessment, but also the more frequent use of longitudinal and multivariate approaches to measurement in psychological, educational, and sociological research, caused an increased demand for psychometrically sound short scales. Shortening scales economizes on valuable administration time, but might result in inadequate measures because reducing an item set could: a) change the internal structure of the measure, b) result in poorer reliability and measurement precision, c) deliver measures that cannot effectively discriminate between persons on the intended ability spectrum, and d) reduce test-criterion relations. Different approaches to abbreviate measures fare differently with respect to the above-mentioned problems. Therefore, we compare the quality and efficiency of three item selection strategies to derive short scales from an existing long version: a Stepwise COnfirmatory Factor Analytical approach (SCOFA) that maximizes factor loadings and two metaheuristics, specifically an Ant Colony Optimization (ACO) with a tailored user-defined optimization function and a Genetic Algorithm (GA) with an unspecific cost-reduction function. SCOFA compiled short versions were highly reliable, but had poor validity. In contrast, both metaheuristics outperformed SCOFA and produced efficient and psychometrically sound short versions (unidimensional, reliable, sensitive, and valid). We discuss under which circumstances ACO and GA produce equivalent results and provide recommendations for conditions in which it is advisable to use a metaheuristic with an unspecific out-of-the-box optimization function.
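
    A very small genetic algorithm in the spirit described above is sketched below: it selects a fixed number of items so that Cronbach's alpha of the short form is maximized, on random placeholder data. The study's metaheuristics use richer, multi-criteria fitness functions (structure, validity, discrimination); optimizing alpha alone is a simplification for illustration.

```python
# Minimal genetic algorithm for short-scale construction: select k of 30 items so
# that Cronbach's alpha of the short form is maximized. Placeholder data with one
# common factor; the published ACO/GA approaches optimize richer criteria.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(300, 30)) + rng.normal(size=(300, 1))  # 300 persons x 30 items
k, pop_size, generations = 8, 40, 100

def cronbach_alpha(items):
    n = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return n / (n - 1) * (1 - item_vars / total_var)

def fitness(subset):
    return cronbach_alpha(data[:, subset])

population = [rng.choice(30, size=k, replace=False) for _ in range(pop_size)]
for _ in range(generations):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[: pop_size // 2]                          # truncation selection
    children = []
    for _ in range(pop_size - len(parents)):
        a, b = rng.choice(len(parents), size=2, replace=False)
        pool = np.union1d(parents[a], parents[b])              # crossover: merge parent item sets
        child = rng.choice(pool, size=k, replace=False)
        if rng.random() < 0.2:                                 # mutation: swap in a random item
            child[rng.integers(k)] = rng.integers(30)
        child = np.unique(child)
        if child.size < k:                                     # repair duplicates from mutation
            extra = rng.choice(np.setdiff1d(np.arange(30), child), size=k - child.size, replace=False)
            child = np.concatenate([child, extra])
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print("selected items:", sorted(best.tolist()), "alpha:", round(float(fitness(best)), 3))
```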

  10. A Dynamic Optimization Strategy for the Operation of Large Scale Seawater Reverses Osmosis System

    Aipeng Jiang

    2014-01-01

    In this work, an efficient strategy was proposed for the solution of the dynamic model of an SWRO system. Since the dynamic model is formulated as a set of differential-algebraic equations, simultaneous strategies based on collocation on finite elements were used to transform the DAOP into a large-scale nonlinear programming problem named Opt2. Then, simulation of the RO process and storage tanks was carried out element by element and step by step with fixed control variables. All the obtained values of these variables were then used as the initial values for the optimal solution of the SWRO system. Finally, in order to accelerate computing efficiency while keeping sufficient accuracy in the solution of Opt2, a simple but efficient finite-element refinement rule was used to reduce the scale of Opt2. The proposed strategy was applied to a large-scale SWRO system with 8 RO plants and 4 storage tanks as a case study. Computing results show that the proposed strategy is quite effective for optimal operation of the large-scale SWRO system; the optimization problem can be successfully solved within tens of iterations and several minutes even when load and other operating parameters fluctuate.

  11. Building a multi-scaled geospatial temporal ecology database from disparate data sources: Fostering open science through data reuse

    Soranno, Patricia A.; Bissell, E.G.; Cheruvelil, Kendra S.; Christel, Samuel T.; Collins, Sarah M.; Fergus, C. Emi; Filstrup, Christopher T.; Lapierre, Jean-Francois; Lottig, Noah R.; Oliver, Samantha K.; Scott, Caren E.; Smith, Nicole J.; Stopyak, Scott; Yuan, Shuai; Bremigan, Mary Tate; Downing, John A.; Gries, Corinna; Henry, Emily N.; Skaff, Nick K.; Stanley, Emily H.; Stow, Craig A.; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E.

    2015-01-01

    Although there are considerable site-based data for individual or groups of ecosystems, these datasets are widely scattered, have different data formats and conventions, and often have limited accessibility. At the broader scale, national datasets exist for a large number of geospatial features of land, water, and air that are needed to fully understand variation among these ecosystems. However, such datasets originate from different sources and have different spatial and temporal resolutions. By taking an open-science perspective and by combining site-based ecosystem datasets and national geospatial datasets, science gains the ability to ask important research questions related to grand environmental challenges that operate at broad scales. Documentation of such complicated database integration efforts, through peer-reviewed papers, is recommended to foster reproducibility and future use of the integrated database. Here, we describe the major steps, challenges, and considerations in building an integrated database of lake ecosystems, called LAGOS (LAke multi-scaled GeOSpatial and temporal database), that was developed at the sub-continental study extent of 17 US states (1,800,000 km2). LAGOS includes two modules: LAGOSGEO, with geospatial data on every lake with surface area larger than 4 ha in the study extent (~50,000 lakes), including climate, atmospheric deposition, land use/cover, hydrology, geology, and topography measured across a range of spatial and temporal extents; and LAGOSLIMNO, with lake water quality data compiled from ~100 individual datasets for a subset of lakes in the study extent (~10,000 lakes). Procedures for the integration of datasets included: creating a flexible database design; authoring and integrating metadata; documenting data provenance; quantifying spatial measures of geographic data; quality-controlling integrated and derived data; and extensively documenting the database. Our procedures make a large, complex, and integrated

  12. Building a multi-scaled geospatial temporal ecology database from disparate data sources: fostering open science and data reuse.

    Soranno, Patricia A; Bissell, Edward G; Cheruvelil, Kendra S; Christel, Samuel T; Collins, Sarah M; Fergus, C Emi; Filstrup, Christopher T; Lapierre, Jean-Francois; Lottig, Noah R; Oliver, Samantha K; Scott, Caren E; Smith, Nicole J; Stopyak, Scott; Yuan, Shuai; Bremigan, Mary Tate; Downing, John A; Gries, Corinna; Henry, Emily N; Skaff, Nick K; Stanley, Emily H; Stow, Craig A; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E

    2015-01-01

    Although there are considerable site-based data for individual or groups of ecosystems, these datasets are widely scattered, have different data formats and conventions, and often have limited accessibility. At the broader scale, national datasets exist for a large number of geospatial features of land, water, and air that are needed to fully understand variation among these ecosystems. However, such datasets originate from different sources and have different spatial and temporal resolutions. By taking an open-science perspective and by combining site-based ecosystem datasets and national geospatial datasets, science gains the ability to ask important research questions related to grand environmental challenges that operate at broad scales. Documentation of such complicated database integration efforts, through peer-reviewed papers, is recommended to foster reproducibility and future use of the integrated database. Here, we describe the major steps, challenges, and considerations in building an integrated database of lake ecosystems, called LAGOS (LAke multi-scaled GeOSpatial and temporal database), that was developed at the sub-continental study extent of 17 US states (1,800,000 km²). LAGOS includes two modules: LAGOSGEO, with geospatial data on every lake with surface area larger than 4 ha in the study extent (~50,000 lakes), including climate, atmospheric deposition, land use/cover, hydrology, geology, and topography measured across a range of spatial and temporal extents; and LAGOSLIMNO, with lake water quality data compiled from ~100 individual datasets for a subset of lakes in the study extent (~10,000 lakes). Procedures for the integration of datasets included: creating a flexible database design; authoring and integrating metadata; documenting data provenance; quantifying spatial measures of geographic data; quality-controlling integrated and derived data; and extensively documenting the database. Our procedures make a large, complex, and integrated

  13. Large Scale Functional Brain Networks Underlying Temporal Integration of Audio-Visual Speech Perception: An EEG Study.

    Kumar, G Vinodh; Halder, Tamesh; Jaiswal, Amit K; Mukherjee, Abhishek; Roy, Dipanjan; Banerjee, Arpan

    2016-01-01

    Observable lip movements of the speaker influence perception of auditory speech. A classical example of this influence is reported by listeners who perceive an illusory (cross-modal) speech sound (McGurk-effect) when presented with incongruent audio-visual (AV) speech stimuli. Recent neuroimaging studies of AV speech perception accentuate the role of frontal, parietal, and the integrative brain sites in the vicinity of the superior temporal sulcus (STS) for multisensory speech perception. However, if and how the network across the whole brain participates in multisensory perception processing remains an open question. We posit that large-scale functional connectivity among neural populations situated in distributed brain sites may provide valuable insights into the processing and fusing of AV speech. Varying the psychophysical parameters in tandem with electroencephalogram (EEG) recordings, we exploited the trial-by-trial perceptual variability of incongruent audio-visual (AV) speech stimuli to identify the characteristics of the large-scale cortical network that facilitates multisensory perception during synchronous and asynchronous AV speech. We evaluated the spectral landscape of EEG signals during multisensory speech perception at varying AV lags. Functional connectivity dynamics for all sensor pairs were computed using the time-frequency global coherence, the vector sum of pairwise coherence changes over time. During synchronous AV speech, we observed enhanced global gamma-band coherence and decreased alpha and beta-band coherence underlying cross-modal (illusory) perception compared to unisensory perception around a temporal window of 300-600 ms following onset of stimuli. During asynchronous speech stimuli, a global broadband coherence was observed during cross-modal perception at earlier times along with pre-stimulus decreases of lower frequency power, e.g., alpha rhythms for positive AV lags and theta rhythms for negative AV lags. Thus, our

  14. A fast and optimized dynamic economic load dispatch for large scale power systems

    Musse Mohamud Ahmed; Mohd Ruddin Ab Ghani; Ismail Hassan

    2000-01-01

    This paper presents a Lagrangian Multiplier (LM) and Linear Programming (LP) based dynamic economic load dispatch (DELD) solution for large-scale power system operations. The objective is to minimize the operating cost of power generation units subject to the considered constraints. After individual generator units are economically loaded and periodically dispatched, a fast and optimized DELD is achieved. DELD with period intervals has been taken into consideration. The results found from the algorithm based on LM and LP techniques appear to be modest in both optimizing the operation cost and achieving fast computation. (author)
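
    The LP part of such a scheme can be illustrated for a single dispatch interval: minimize total fuel cost subject to the demand balance and generator limits. The costs, limits, and demand below are hypothetical, and the dynamic problem would repeat this per interval with ramp-rate coupling between intervals, which this sketch omits.

```python
# One interval of economic load dispatch posed as a linear program: minimize total
# fuel cost subject to the demand balance and generator limits (hypothetical data).
import numpy as np
from scipy.optimize import linprog

cost = np.array([20.0, 25.0, 32.0])        # $/MWh for three units
p_min = np.array([50.0, 40.0, 30.0])       # MW lower limits
p_max = np.array([300.0, 250.0, 200.0])    # MW upper limits
demand = 500.0                             # MW load in this interval

res = linprog(c=cost,
              A_eq=np.ones((1, 3)), b_eq=[demand],      # total generation = demand
              bounds=list(zip(p_min, p_max)))
print("dispatch (MW):", res.x.round(1), "cost ($/h):", round(res.fun, 1))
```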

  15. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and
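
    The distribution-function idea can be illustrated with an exponential within-day intensity distribution, an assumption made only for this sketch: the fraction of daily rain falling above the infiltration capacity then has the closed form exp(-Ic/mu), where mu is the mean intensity while raining. The daily total, mean intensity, and infiltration capacity below are placeholders.

```python
# Distribution-function approach illustrated with an assumed exponential within-day
# rainfall intensity distribution. For that choice, the fraction of daily rain falling
# at intensities above the infiltration capacity Ic is exp(-Ic/mu).
import numpy as np

daily_rain_mm = 40.0        # daily rainfall total (placeholder)
mean_intensity = 8.0        # mean intensity while raining, mm/h (placeholder)
infil_capacity = 12.0       # infiltration capacity, mm/h (placeholder)

excess_fraction = np.exp(-infil_capacity / mean_intensity)
infiltration_excess_mm = daily_rain_mm * excess_fraction
print(f"fraction of rain above capacity: {excess_fraction:.2f}, "
      f"infiltration-excess runoff: {infiltration_excess_mm:.1f} mm")
```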

  16. Optimizing spatial and temporal constraints for cropland canopy water content retrieval through coupled radiative transfer model inversion

    Boren, E. J.; Boschetti, L.; Johnson, D.

    2017-12-01

    Water plays a critical role in all plant physiological processes, including transpiration, photosynthesis, nutrient transportation, and maintenance of proper plant cell functions. Deficits in water content cause drought-induced stress conditions, such as constrained plant growth and cellular metabolism, while overabundance of water causes anoxic conditions which limit plant physiological processes and promote disease. Vegetation water content maps can provide agricultural producers with key knowledge for improving production capacity and resiliency in agricultural systems while facilitating the ability to pinpoint, monitor, and resolve water scarcity issues. Radiative transfer model (RTM) inversion has been successfully applied to remotely sensed data to retrieve biophysical and canopy parameter estimates, including water content. The successful launch of the Landsat 8 Operational Land Imager (OLI) in 2013 and the Sentinel 2A Multispectral Instrument (MSI) in 2015, followed by Sentinel 2B in 2017, together with the systematic acquisition schedule and free data distribution policy, provides the opportunity for water content estimation at a spatial and temporal scale that can meet the demands of potential operational users: combined, these polar-orbiting systems provide 10 m to 30 m multi-spectral global coverage up to every 3 days. The goal of the present research is to prototype the generation of a cropland canopy water content product, obtained from the newly developed Landsat 8 and Sentinel 2 atmospherically corrected HLS product, through the inversion of the leaf and canopy model PROSAIL5B. We assess the impact of a novel spatial and temporal stratification, in which some parameters of the model are constrained by crop type and phenological phase, based on ancillary biophysical data collected from various crop species grown in a controlled setting and under different water stress conditions. Canopy-level data, collected coincident with satellite overpasses during four summer field campaigns
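
    A common way to invert PROSAIL-type models is a look-up-table (LUT) search; the sketch below matches an observed multispectral reflectance to the LUT entries with minimum RMSE. The LUT spectra and water contents here are random placeholders standing in for actual PROSAIL5B simulations constrained by crop type and phenological phase.

```python
# Sketch of look-up-table (LUT) inversion for canopy water content. In practice the
# LUT would be simulated with PROSAIL5B over stratified parameter ranges; here the
# LUT spectra and water contents are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_entries, n_bands = 5000, 6                        # e.g. 6 multispectral bands
lut_spectra = rng.uniform(0.0, 0.6, size=(n_entries, n_bands))
lut_cwc = rng.uniform(0.0, 0.08, size=n_entries)    # canopy water content, g/cm^2

observed = rng.uniform(0.0, 0.6, size=n_bands)      # atmospherically corrected reflectance

rmse = np.sqrt(((lut_spectra - observed) ** 2).mean(axis=1))
best = np.argsort(rmse)[:50]                        # average the 50 best matches to stabilize
print(f"retrieved canopy water content: {lut_cwc[best].mean():.3f} g/cm^2")
```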

  17. Radiation dose optimization in pediatric temporal bone computed tomography: influence of tube tension on image contrast and image quality.

    Nauer, Claude Bertrand; Zubler, Christoph; Weisstanner, Christian; Stieger, Christof; Senn, Pascal; Arnold, Andreas

    2012-03-01

    The purpose of this experimental study was to investigate the effect of tube tension reduction on image contrast and image quality in pediatric temporal bone computed tomography (CT). Seven lamb heads with infant-equivalent sizes were scanned repeatedly, using four tube tensions from 140 to 80 kV while the CT-Dose Index (CTDI) was held constant. Scanning was repeated with four CTDI values from 30 to 3 mGy. Image contrast was calculated for the middle ear as the Hounsfield unit (HU) difference between bone and air and for the inner ear as the HU difference between bone and fluid. The influence of tube tension on high-contrast detail delineation was evaluated using a phantom. The subjective image quality of eight middle and inner ear structures was assessed using a 4-point scale (scores 1-2 = insufficient; scores 3-4 = sufficient). Middle and inner ear contrast showed a near linear increase with tube tension reduction (r = -0.94/-0.88) and was highest at 80 kV. Tube tension had no influence on spatial resolution. Subjective image quality analysis showed significantly better scoring at lower tube tensions, with highest image quality at 80 kV. However, image quality improvement was most relevant for low-dose scans. Image contrast in the temporal bone is significantly higher at low tube tensions, leading to a better subjective image quality. Highest contrast and best quality were found at 80 kV. This image quality improvement might be utilized to further reduce the radiation dose in pediatric low-dose CT protocols.

  18. Radiation dose optimization in pediatric temporal bone computed tomography: influence of tube tension on image contrast and image quality

    Nauer, Claude Bertrand; Zubler, Christoph; Weisstanner, Christian; Stieger, Christof; Senn, Pascal; Arnold, Andreas

    2012-01-01

    The purpose of this experimental study was to investigate the effect of tube tension reduction on image contrast and image quality in pediatric temporal bone computed tomography (CT). Seven lamb heads with infant-equivalent sizes were scanned repeatedly, using four tube tensions from 140 to 80 kV while the CT-Dose Index (CTDI) was held constant. Scanning was repeated with four CTDI values from 30 to 3 mGy. Image contrast was calculated for the middle ear as the Hounsfield unit (HU) difference between bone and air and for the inner ear as the HU difference between bone and fluid. The influence of tube tension on high-contrast detail delineation was evaluated using a phantom. The subjective image quality of eight middle and inner ear structures was assessed using a 4-point scale (scores 1-2 = insufficient; scores 3-4 = sufficient). Middle and inner ear contrast showed a near linear increase with tube tension reduction (r = -0.94/-0.88) and was highest at 80 kV. Tube tension had no influence on spatial resolution. Subjective image quality analysis showed significantly better scoring at lower tube tensions, with highest image quality at 80 kV. However, image quality improvement was most relevant for low-dose scans. Image contrast in the temporal bone is significantly higher at low tube tensions, leading to a better subjective image quality. Highest contrast and best quality were found at 80 kV. This image quality improvement might be utilized to further reduce the radiation dose in pediatric low-dose CT protocols. (orig.)

  19. Optimization and scale up of microfluidic nanolipomer production method for preclinical and potential clinical trials.

    Gdowski, Andrew; Johnson, Kaitlyn; Shah, Sunil; Gryczynski, Ignacy; Vishwanatha, Jamboor; Ranjan, Amalendu

    2018-02-12

    The process of optimization and fabrication of nanoparticle synthesis for preclinical studies can be challenging and time consuming. Traditional small-scale laboratory synthesis techniques suffer from batch-to-batch variability. Additionally, the parameters used in the original formulation must be re-optimized due to differences in fabrication techniques for clinical production. Several low-flow microfluidic synthesis processes have been reported in recent years for developing nanoparticles that are a hybrid between polymeric nanoparticles and liposomes. However, use of high-flow microfluidic synthetic techniques has not been described for this type of nanoparticle system, which we will term nanolipomer. In this manuscript, we describe the successful optimization and functional assessment of nanolipomers fabricated using a microfluidic synthesis method under high-flow parameters. The optimal total flow rate for synthesis of these nanolipomers was found to be 12 ml/min with a flow rate ratio of 1:1 (organic phase:aqueous phase). A PLGA polymer concentration of 10 mg/ml and a DSPE-PEG lipid concentration of 10% w/v provided optimal size, PDI and stability. Drug loading and encapsulation of a representative hydrophobic small-molecule drug, curcumin, were optimized; a high encapsulation efficiency of 58.8% and a drug loading of 4.4% were achieved at a 7.5% w/w initial concentration of curcumin/PLGA polymer. The final size and polydispersity index of the optimized nanolipomer were 102.11 nm and 0.126, respectively. Functional assessment of uptake of the nanolipomers in C4-2B prostate cancer cells showed uptake at 1 h and increased uptake at 24 h. The nanolipomer was more effective in the cell viability assay compared to free drug. Finally, assessment of in vivo retention of these nanolipomers in mice revealed retention for up to 2 h and complete clearance at 24 h. In this study, we have demonstrated that a nanolipomer formulation can be successfully
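
    The reported encapsulation efficiency and drug loading follow from the standard definitions; a minimal sketch is given below, assuming drug loading is expressed relative to the polymer mass (conventions differ between papers), which reproduces the ~4.4% figure from the 7.5% w/w feed and 58.8% encapsulation efficiency.

```python
# Encapsulation efficiency (EE) and drug loading (DL) from the standard definitions,
# using the abstract's figures: 7.5% w/w initial curcumin/PLGA and 58.8% EE.
# DL is computed here relative to the polymer mass (an assumed convention).
plga_mg = 10.0
curcumin_added_mg = 0.075 * plga_mg            # 7.5% w/w initial drug-to-polymer ratio
ee = 0.588                                     # reported encapsulation efficiency

curcumin_encapsulated_mg = ee * curcumin_added_mg
dl_percent = 100 * curcumin_encapsulated_mg / plga_mg
print(f"encapsulated: {curcumin_encapsulated_mg:.3f} mg, drug loading: {dl_percent:.1f}%")
```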

  20. Ice nucleating particles from a large-scale sampling network: insight into geographic and temporal variability

    Schrod, Jann; Weber, Daniel; Thomson, Erik S.; Pöhlker, Christopher; Saturno, Jorge; Artaxo, Paulo; Curtius, Joachim; Bingemer, Heinz

    2017-04-01

    The number concentration of ice nucleating particles (INP) is an important, yet under quantified atmospheric parameter. The temporal and geographic extent of observations worldwide remains relatively small, with many regions of the world (even whole continents and oceans), almost completely unrepresented by observational data. Measurements at pristine sites are particularly rare, but all the more valuable because such observations are necessary to estimate the pre-industrial baseline of aerosol and cloud related parameters that are needed to better understand the climate system and forecast future scenarios. As a partner of BACCHUS we began in September 2014 to operate an INP measurement network of four sampling stations, with a global geographic distribution. The stations are located at unique sites reaching from the Arctic to the equator: the Amazonian Tall Tower Observatory ATTO in Brazil, the Observatoire Volcanologique et Sismologique on the island of Martinique in the Caribbean Sea, the Zeppelin Observatory at Svalbard in the Norwegian Arctic and the Taunus Observatory near Frankfurt, Germany. Since 2014 samples were collected regularly by electrostatic precipitation of aerosol particles onto silicon substrates. The INP on the substrate are activated and analyzed in the isothermal static diffusion chamber FRIDGE at temperatures between -20°C and -30°C and relative humidity with respect to ice from 115 to 135%. Here we present data from the years 2015 and 2016 from this novel INP network and from selected campaign-based measurements from remote sites, including the Mt. Kenya GAW station. Acknowledgements The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) project BACCHUS under grant agreement No 603445 and the Deutsche Forschungsgemeinschaft (DFG) under the Research Unit FOR 1525 (INUIT).

  1. Small scale temporal distribution of radiocesium in undisturbed coniferous forest soil: Radiocesium depth distribution profiles.

    Teramage, Mengistu T; Onda, Yuichi; Kato, Hiroaki

    2016-04-01

    The depth distribution of pre-Fukushima and Fukushima-derived (137)Cs in undisturbed coniferous forest soil was investigated at four sampling dates, from nine months to 18 months after the Fukushima nuclear power plant accident. The migration rate and the short-term temporal variability among the sampling profiles were evaluated. Based on the time elapsed since the peak deposition of pre-Fukushima (137)Cs and the median depth of its peak, the downward displacement rate ranged from 0.15 to 0.67 mm yr(-1), with a mean of 0.46 ± 0.25 mm yr(-1). In each examined profile, a considerable fraction of the Fukushima-derived (137)Cs (51%-92%) was found in the organic layer. At this point, the effect of elapsed time on the downward distribution of Fukushima-derived (137)Cs is not yet visible, as most of it is still found in the layers where organic matter is maximal. This indicates that organic matter is the primary and preferential sorbent of radiocesium, which may be associated with physical blockage of the exchange sites by organic-rich dust acting as a buffer against downward propagation, implying that radiocesium will remain in the root zone for a considerable time period. As a result, this soil section can be a potential source of radiation dose, largely because of its high radiocesium concentration coupled with its low density. Such information will be useful for establishing a dynamic, safety-focused decision support system to ease and assist management actions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Global-Scale Associations of Vegetation Phenology with Rainfall and Temperature at a High Spatio-Temporal Resolution

    Nicholas Clinton

    2014-08-01

    Phenology response to climatic variables is a vital indicator for understanding changes in biosphere processes as related to possible climate change. We investigated global phenology relationships to precipitation and land surface temperature (LST) at high spatial and temporal resolution for calendar years 2008–2011. We used cross-correlation between MODIS Enhanced Vegetation Index (EVI), MODIS LST and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) gridded rainfall to map phenology relationships at 1-km spatial resolution and weekly temporal resolution. We show these data to be rich in spatiotemporal information, illustrating distinct phenology patterns as a result of complex overlapping gradients of climate, ecosystem and land use/land cover. The data are consistent with broad-scale, coarse-resolution modeled ecosystem limitations to moisture, temperature and irradiance. We suggest that high-resolution phenology data are useful as both an input and complement to land use/land cover classifiers and for understanding climate change vulnerability in natural and anthropogenic landscapes.
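
    The core operation described, cross-correlating an EVI time series against lagged rainfall (or LST) pixel by pixel, can be sketched as below for a single pixel. The weekly series are synthetic and the simple lag search is only a stand-in for the authors' processing chain:

```python
import numpy as np

def lagged_correlation(evi, rain, max_lag_weeks=12):
    """Return (best_lag, best_r): the lag (in samples) at which rainfall leading
    EVI gives the highest Pearson correlation."""
    best_lag, best_r = 0, -np.inf
    for lag in range(0, max_lag_weeks + 1):
        if lag == 0:
            x, y = rain, evi
        else:
            x, y = rain[:-lag], evi[lag:]        # rainfall 'lag' weeks before EVI
        r = np.corrcoef(x, y)[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r

# Synthetic weekly series: EVI responds to rainfall with a ~4-week delay.
rng = np.random.default_rng(1)
weeks = np.arange(4 * 52)
rain = 50 + 40 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 10, weeks.size)
evi = 0.3 + 0.004 * np.roll(rain, 4) + rng.normal(0, 0.02, weeks.size)

print(lagged_correlation(evi, rain))   # expect a lag near 4 weeks
```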

  3. A new hybrid meta-heuristic algorithm for optimal design of large-scale dome structures

    Kaveh, A.; Ilchi Ghazaan, M.

    2018-02-01

    In this article a hybrid algorithm based on a vibrating particles system (VPS) algorithm, multi-design variable configuration (Multi-DVC) cascade optimization, and an upper bound strategy (UBS) is presented for global optimization of large-scale dome truss structures. The new algorithm is called MDVC-UVPS in which the VPS algorithm acts as the main engine of the algorithm. The VPS algorithm is one of the most recent multi-agent meta-heuristic algorithms mimicking the mechanisms of damped free vibration of single degree of freedom systems. In order to handle a large number of variables, cascade sizing optimization utilizing a series of DVCs is used. Moreover, the UBS is utilized to reduce the computational time. Various dome truss examples are studied to demonstrate the effectiveness and robustness of the proposed method, as compared to some existing structural optimization techniques. The results indicate that the MDVC-UVPS technique is a powerful search and optimization method for optimizing structural engineering problems.

  4. Modelling of spatio-temporal precipitation relevant for urban hydrology with focus on scales, extremes and climate change

    Sørup, Hjalte Jomo Danielsen

    …correlation lengths for sub-daily extreme precipitation, besides having too low intensities. Especially the wrong spatial correlation structure is disturbing from an urban hydrological point of view, as short-term extremes will cover too much ground if derived directly from bias-corrected regional climate model output … of precipitation are compared and used to rank climate models with respect to performance metrics. The four different observational data sets themselves are compared at the daily temporal scale with respect to climate indices for mean and extreme precipitation. Data density seems to be a crucial parameter for good … happening in summer and most of the daily extremes in fall. This behaviour is in good accordance with reality, where short-term extremes originate in convective precipitation cells that occur when it is very warm and longer-term extremes originate in frontal systems that dominate the fall and winter seasons…

  5. Examining diseased states in a scaled-up vocal fold model using simultaneous temporally resolved DPIV and pressure measurements

    Rogers, Dylan; Wei, Nathaniel; Ringenberg, Hunter; Krane, Michael; Wei, Timothy

    2017-11-01

    This study builds on the parallel presentation of Ringenberg, et al. (APS-DFD 2017) involving simultaneous, temporally and spatially resolved flow and pressure measurements in a scaled-up vocal fold model. In this talk, data from experiments replicating characteristics of diseased vocal folds are presented. This begins with vocal folds that do not fully close and continues with asymmetric oscillations. Data are compared to symmetric, i.e. `healthy', oscillatory motions presented in the companion talk. Having pressure and flow data for individual as well as phase averaged oscillations for these diseased cases highlights the potential for aeroacoustic analysis in this complex system. Supported by NIH Grant No. 2R01 DC005642-11.

  6. Local-scaling density-functional method: Intraorbit and interorbit density optimizations

    Koga, T.; Yamamoto, Y.; Ludena, E.V.

    1991-01-01

    The recently proposed local-scaling density-functional theory provides us with a practical method for the direct variational determination of the electron density function ρ(r). The structure of ''orbits,'' which ensures the one-to-one correspondence between the electron density ρ(r) and the N-electron wave function Ψ({r k }), is studied in detail. For the realization of the local-scaling density-functional calculations, procedures for intraorbit and interorbit optimizations of the electron density function are proposed. These procedures are numerically illustrated for the helium atom in its ground state at the beyond-Hartree-Fock level

  7. Spatio-temporal Characteristics of Land Use Land Cover Change Driven by Large Scale Land Transactions in Cambodia

    Ghosh, A.; Smith, J. C.; Hijmans, R. J.

    2017-12-01

    Since the mid-1990s, the Cambodian government has granted nearly 300 `Economic Land Concessions' (ELCs), occupying approximately 2.3 million ha, to foreign and domestic organizations (primarily agribusinesses). The majority of Cambodian ELC deals have been issued in areas of both relatively low population density and low agricultural productivity, dominated by smallholder production. These regions often contain highly biodiverse areas, thereby increasing the ecological cost associated with land clearing for extractive purposes. These large-scale land transactions have also resulted in substantial and rapid changes in land-use patterns and agricultural practices by smallholder farmers. In this study, we investigated the spatio-temporal characteristics of land use change associated with large-scale land transactions across Cambodia using multi-temporal, multi-resolution remote sensing data. We identified major regions of deforestation during the last two decades using the Landsat archive, global forest change data (2000-2014) and a georeferenced database of ELC deals. We then mapped the deforestation and land clearing within ELC boundaries as well as areas bordering or near ELCs to quantify the impact of ELCs on local communities. Using time series from MODIS Vegetation Indices products for the study period, we also estimated the time period over which any particular ELC deal initiated its proposed activity. We found evidence of similar patterns of land use change outside the boundaries of ELC deals, which may be associated with i) illegal land encroachment by ELCs and/or ii) new agricultural practices adopted by local farmers near ELC boundaries. We also detected significant time gaps between ELC deal granting dates and the initiation of land clearing for ELC purposes. Interestingly, we also found that not all designated ELC areas were put into effect, indicating the possible proliferation of speculative land deals. This study demonstrates the potential of remote sensing techniques…

  8. Comparison of HSPF and PRMS model simulated flows using different temporal and spatial scales in the Black Hills, South Dakota

    Chalise, D. R.; Haj, Adel E.; Fontaine, T.A.

    2018-01-01

    The Hydrological Simulation Program Fortran (HSPF) [Hydrological Simulation Program Fortran version 12.2 (Computer software). USEPA, Washington, DC] and the Precipitation Runoff Modeling System (PRMS) [Precipitation Runoff Modeling System version 4.0 (Computer software). USGS, Reston, VA] models are semidistributed, deterministic hydrological tools for simulating the impacts of precipitation, land use, and climate on basin hydrology and streamflow. Both models have been applied independently to many watersheds across the United States. This paper reports the statistical results assessing various temporal (daily, monthly, and annual) and spatial (small versus large watershed) scale biases in HSPF and PRMS simulations using two watersheds in the Black Hills, South Dakota. The Nash-Sutcliffe efficiency (NSE), Pearson correlation coefficient (r), and coefficient of determination (R2) statistics for the daily, monthly, and annual flows were used to evaluate the models’ performance. Results from the HSPF models showed that the HSPF consistently simulated the annual flows for both large and small basins better than the monthly and daily flows, and the simulated flows for the small watershed better than flows for the large watershed. In comparison, the PRMS model results show that the PRMS simulated the monthly flows for both the large and small watersheds better than the daily and annual flows, and the range of statistical error in the PRMS models was greater than that in the HSPF models. Moreover, it can be concluded that the statistical error in the HSPF and the PRMS daily, monthly, and annual flow estimates for watersheds in the Black Hills was influenced by both temporal and spatial scale variability.
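
    The three skill statistics used in this comparison have standard definitions. A minimal sketch computing them for a synthetic pair of observed and simulated daily flow series (not the Black Hills data):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pearson_r(obs, sim):
    return float(np.corrcoef(obs, sim)[0, 1])

def r_squared(obs, sim):
    return pearson_r(obs, sim) ** 2

# Synthetic daily flows for one year (observed vs. simulated).
rng = np.random.default_rng(2)
obs = np.clip(5 + 3 * np.sin(np.linspace(0, 2 * np.pi, 365)) + rng.normal(0, 1, 365), 0.1, None)
sim = obs * rng.normal(1.0, 0.15, 365)    # a model with roughly 15% multiplicative error

for name, fn in [("NSE", nse), ("r", pearson_r), ("R2", r_squared)]:
    print(f"{name}: {fn(obs, sim):.3f}")
```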

  9. Cost efficiency and optimal scale of electricity distribution firms in Taiwan: An application of metafrontier analysis

    Huang, Y.-J.; Chen, K.-H.; Yang, C.-H.

    2010-01-01

    This paper analyzes the cost efficiency and optimal scale of Taiwan's electricity distribution industry. Due to the substantial difference in network density, firms may differ widely in production technology. We employ the stochastic metafrontier approach to estimate the cost efficiency of 24 distribution units during the period 1997-2002. Empirical results find that the average cost efficiency is overestimated using the traditional stochastic frontier model, especially for low density regions. The average cost efficiency of the high density group is significantly higher than that of the low density group as it benefits from network economies. This study also calculates both short-term and long-term optimal scales of electricity distribution firms, lending policy implications for the deregulation of the electricity distribution industry.

  10. Optimizing basin-scale coupled water quantity and water quality management with stochastic dynamic programming

    Davidsen, Claus; Liu, Suxia; Mo, Xingguo

    2015-01-01

    Few studies address water quality in hydro-economic models, which often focus primarily on optimal allocation of water quantities. Water quality and water quantity are closely coupled, and optimal management with focus solely on either quantity or quality may cause large costs in terms of the other component. In this study, we couple water quality and water quantity in a joint hydro-economic catchment-scale optimization problem. Stochastic dynamic programming (SDP) is used to minimize the basin-wide total costs arising from water allocation, water curtailment and water treatment. The simple water quality module can handle conservative pollutants, first order depletion and non-linear reactions. For demonstration purposes, we model pollutant releases as biochemical oxygen demand (BOD) and use the Streeter-Phelps equation for oxygen deficit to compute the resulting minimum dissolved oxygen…
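
    The Streeter-Phelps oxygen-sag model referred to above is, in its usual textbook form (with D the oxygen deficit below saturation, L0 the initial BOD, kd and ka the deoxygenation and reaeration rates, and D0 the initial deficit):

```latex
% Classical Streeter-Phelps oxygen-sag relations (standard textbook form):
D(t) = \frac{k_d L_0}{k_a - k_d}\left(e^{-k_d t} - e^{-k_a t}\right) + D_0\, e^{-k_a t},
\qquad \mathrm{DO}(t) = \mathrm{DO}_{\mathrm{sat}} - D(t)

% The dissolved-oxygen minimum (maximum deficit) occurs at the critical time
t_c = \frac{1}{k_a - k_d}\,
      \ln\!\left[\frac{k_a}{k_d}\left(1 - \frac{D_0\,(k_a - k_d)}{k_d\,L_0}\right)\right]
```

    The "minimum dissolved oxygen" mentioned in the abstract corresponds to the maximum deficit, reached at the critical time t_c.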

  11. Topology-oblivious optimization of MPI broadcast algorithms on extreme-scale platforms

    Hasanov, Khalid

    2015-11-01

    © 2015 Elsevier B.V. All rights reserved. Significant research has been conducted in collective communication operations, in particular in MPI broadcast, on distributed memory platforms. Most of the research efforts aim to optimize the collective operations for particular architectures by taking into account either their topology or platform parameters. In this work we propose a simple but general approach to optimization of the legacy MPI broadcast algorithms, which are widely used in MPICH and Open MPI. The proposed optimization technique is designed to address the challenge of extreme scale of future HPC platforms. It is based on hierarchical transformation of the traditionally flat logical arrangement of communicating processors. Theoretical analysis and experimental results on IBM BlueGene/P and a cluster of the Grid'5000 platform are presented.
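
    The hierarchical transformation described, replacing one flat broadcast by a broadcast among group leaders followed by broadcasts inside each group, can be sketched with mpi4py as below. The group size and the two-level structure are illustrative, not the paper's implementation:

```python
# A minimal sketch of a two-level (hierarchical) broadcast built on top of MPI,
# assuming mpi4py is available; run e.g.:  mpiexec -n 8 python hier_bcast.py
from mpi4py import MPI

def hierarchical_bcast(obj, comm, group_size=4):
    """Broadcast 'obj' from global rank 0 by splitting the flat communicator into
    groups: first broadcast among the group leaders, then inside each group."""
    rank = comm.Get_rank()
    color = rank // group_size                       # which group this rank belongs to
    group = comm.Split(color=color, key=rank)        # intra-group communicator
    is_leader = (group.Get_rank() == 0)
    leaders = comm.Split(color=0 if is_leader else MPI.UNDEFINED, key=rank)

    if leaders != MPI.COMM_NULL:                     # level 1: root -> group leaders
        obj = leaders.bcast(obj, root=0)
        leaders.Free()
    obj = group.bcast(obj, root=0)                   # level 2: leader -> group members
    group.Free()
    return obj

if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    data = {"payload": list(range(10))} if comm.Get_rank() == 0 else None
    data = hierarchical_bcast(data, comm)
    print(f"rank {comm.Get_rank()} received {data['payload'][:3]}...")
```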

  12. Estimating temporal changes in soil carbon stocks at ecoregional scale in Madagascar using remote-sensing

    Grinand, C.; Maire, G. Le; Vieilledent, G.; Razakamanarivo, H.; Razafimbelo, T.; Bernoux, M.

    2017-02-01

    Soil organic carbon (SOC) plays an important role in climate change regulation, notably through the release of CO2 following land use change such as deforestation, but data on stock change levels are lacking. This study aims to empirically assess SOC stock changes between 1991 and 2011 at the landscape scale using easy-to-access, spatially explicit environmental factors. The study area was located in southeast Madagascar, in a region that exhibits very high rates of deforestation and is characterized by both humid and dry climates. We estimated SOC stock on 0.1 ha plots at 95 different locations in a 43,000 ha reference area covering both dry and humid conditions and representing different land covers including natural forest, cropland, pasture and fallows. We used the Random Forest algorithm to identify the environmental factors explaining the spatial distribution of SOC. We then predicted SOC stocks for two soil layers, at 30 cm and 100 cm, over a wider area of 395,000 ha. By changing the soil and vegetation indices derived from remote sensing images, we were able to produce SOC maps for 1991 and 2011. Those estimates and their related uncertainties were combined in a post-processing step to map estimates of significant SOC variations, and we finally compared the SOC change map with published deforestation maps. Results show that the geologic variables, precipitation, temperature, and soil-vegetation status were strong predictors of SOC distribution at regional scale. We estimated an average net loss of 10.7% and 5.2% for the 30 cm and 100 cm layers, respectively, for deforested areas in the humid region. Our results also suggest that these losses occur within the first five years following deforestation. No significant variations were observed for the dry region. This study provides new solutions and knowledge for a better integration of soil threats and opportunities in land management policies.
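
    A minimal sketch of the mapping step: fit a random forest to plot-level SOC against environmental covariates with scikit-learn, then re-predict after changing the time-varying (remote sensing) covariates. The covariates, values and magnitudes below are hypothetical, not those of the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Hypothetical training table: 95 plots x covariates
# (precipitation, temperature, a vegetation index, a coded geology class).
n = 95
X = np.column_stack([
    rng.uniform(800, 2500, n),      # mean annual precipitation (mm)
    rng.uniform(18, 26, n),         # mean annual temperature (deg C)
    rng.uniform(0.1, 0.9, n),       # remote-sensing vegetation index
    rng.integers(0, 4, n),          # geology class (coded)
])
soc = 30 + 0.02 * X[:, 0] - 1.5 * X[:, 1] + 40 * X[:, 2] + rng.normal(0, 5, n)  # t C/ha, synthetic

rf = RandomForestRegressor(n_estimators=500, random_state=0)
print(f"CV R2: {cross_val_score(rf, X, soc, cv=5, scoring='r2').mean():.2f}")

rf.fit(X, soc)
# Predicting for two dates only requires swapping the time-varying covariates
# (here the vegetation index column) while keeping the static ones fixed.
X_2011 = X.copy(); X_2011[:, 2] *= 0.8        # illustrative vegetation change
delta_soc = rf.predict(X_2011) - rf.predict(X)
print(f"mean predicted SOC change (t C/ha): {delta_soc.mean():.2f}")
```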

  13. Chinese version of the Optimism and Pessimism Scale: Psychometric properties in mainland China and development of a short form.

    Xia, Jie; Wu, Daxing; Zhang, Jibiao; Xu, Yuanchao; Xu, Yunxuan

    2016-06-01

    This study aimed to validate the Chinese version of the Optimism and Pessimism Scale in a sample of 730 adult Chinese individuals. Confirmatory factor analyses confirmed the bidimensionality of the scale with two factors, optimism and pessimism. The total scale and optimism and pessimism factors demonstrated satisfactory reliability and validity. Population-based normative data and mean values for gender, age, and education were determined. Furthermore, we developed a 20-item short form of the Chinese version of the Optimism and Pessimism Scale with structural validity comparable to the full form. In summary, the Chinese version of the Optimism and Pessimism Scale is an appropriate and practical tool for epidemiological research in mainland China. © The Author(s) 2014.

  14. Optimizing fermentation process miscanthus-to-ethanol biorefinery scale under uncertain conditions

    Bomberg, Matthew; Sanchez, Daniel L; Lipman, Timothy E

    2014-01-01

    Ethanol produced from cellulosic feedstocks has garnered significant interest for greenhouse gas abatement and energy security promotion. One outstanding question in the development of a mature cellulosic ethanol industry is the optimal scale of biorefining activities. This question is important for companies and entrepreneurs seeking to construct and operate cellulosic ethanol biorefineries as it determines the size of investment needed and the amount of feedstock for which they must contract. The question also has important implications for the nature and location of lifecycle environmental impacts from cellulosic ethanol. We use an optimization framework similar to previous studies, but add richer details by treating many of these critical parameters as random variables and incorporating a stochastic sub-model for land conversion. We then use Monte Carlo simulation to obtain a probability distribution for the optimal scale of a biorefinery using a fermentation process and miscanthus feedstock. We find a bimodal distribution with a high peak at around 10–30 MMgal yr-1 (representing circumstances where a relatively low percentage of farmers elect to participate in miscanthus cultivation) and a lower and flatter peak between 150 and 250 MMgal yr-1 (representing more typically assumed land-conversion conditions). This distribution leads to useful insights; in particular, the asymmetry of the distribution—with significantly more mass on the low side—indicates that developers of cellulosic ethanol biorefineries may wish to exercise caution in scale-up. (letters)
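
    The Monte Carlo procedure can be sketched as below: draw uncertain parameters, find the cost-minimizing capacity for each draw, and inspect the resulting distribution. The cost model here is a deliberately crude stand-in (capital economies of scale versus feedstock haul cost growing with the collection radius), not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(4)

def unit_cost(q_mmgal, capital_coef, scale_exp, transport_coef, participation):
    """Very simplified unit cost ($/gal) at capacity q: a capital term with
    economies of scale plus a feedstock haul term that grows with the collection
    radius, which scales with sqrt(capacity / participation rate)."""
    capital = capital_coef * q_mmgal ** (scale_exp - 1.0)
    transport = transport_coef * np.sqrt(q_mmgal / participation)
    return capital + transport

def optimal_scale(params, grid=np.linspace(5, 400, 1000)):
    return grid[np.argmin(unit_cost(grid, *params))]

samples = []
for _ in range(5000):
    params = (
        rng.uniform(1.0, 3.0),        # capital cost coefficient
        rng.uniform(0.6, 0.8),        # capital scale exponent (<1: economies of scale)
        rng.uniform(0.01, 0.05),      # transport cost coefficient
        rng.uniform(0.05, 0.6),       # share of farmers converting land to miscanthus
    )
    samples.append(optimal_scale(params))

samples = np.array(samples)
print("median optimal scale: %.0f MMgal/yr" % np.median(samples))
print("5th-95th percentile: %.0f-%.0f MMgal/yr" % tuple(np.percentile(samples, [5, 95])))
```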

  15. Optimizing fermentation process miscanthus-to-ethanol biorefinery scale under uncertain conditions

    Bomberg, Matthew; Sanchez, Daniel L.; Lipman, Timothy E.

    2014-05-01

    Ethanol produced from cellulosic feedstocks has garnered significant interest for greenhouse gas abatement and energy security promotion. One outstanding question in the development of a mature cellulosic ethanol industry is the optimal scale of biorefining activities. This question is important for companies and entrepreneurs seeking to construct and operate cellulosic ethanol biorefineries as it determines the size of investment needed and the amount of feedstock for which they must contract. The question also has important implications for the nature and location of lifecycle environmental impacts from cellulosic ethanol. We use an optimization framework similar to previous studies, but add richer details by treating many of these critical parameters as random variables and incorporating a stochastic sub-model for land conversion. We then use Monte Carlo simulation to obtain a probability distribution for the optimal scale of a biorefinery using a fermentation process and miscanthus feedstock. We find a bimodal distribution with a high peak at around 10-30 MMgal yr-1 (representing circumstances where a relatively low percentage of farmers elect to participate in miscanthus cultivation) and a lower and flatter peak between 150 and 250 MMgal yr-1 (representing more typically assumed land-conversion conditions). This distribution leads to useful insights; in particular, the asymmetry of the distribution—with significantly more mass on the low side—indicates that developers of cellulosic ethanol biorefineries may wish to exercise caution in scale-up.

  16. Fault diagnosis of rolling element bearing using a new optimal scale morphology analysis method.

    Yan, Xiaoan; Jia, Minping; Zhang, Wan; Zhu, Lin

    2018-02-01

    Periodic transient impulses are key indicators of rolling element bearing defects. Efficient extraction of the impact impulses associated with the defects is therefore essential for precise detection of bearing faults. However, the transient features of rolling element bearings are generally immersed in stochastic noise and harmonic interference. Therefore, in this paper, a new optimal scale morphology analysis method, named adaptive multiscale combination morphological filter-hat transform (AMCMFH), is proposed for rolling element bearing fault diagnosis, which can both reduce stochastic noise and preserve signal details. In this method, firstly, an adaptive selection strategy based on the feature energy factor (FEF) is introduced to determine the optimal structuring element (SE) scale of the multiscale combination morphological filter-hat transform (MCMFH). Subsequently, MCMFH with the optimal SE scale is applied to obtain the impulse components from the bearing vibration signal. Finally, bearing fault types are confirmed by extracting the defect frequency from the envelope spectrum of the impulse components. The validity of the proposed method is verified through simulated analysis and bearing vibration data from a laboratory bench. Results indicate that the proposed method has a good capability to recognize localized faults on rolling element bearings from vibration signals. The study supplies a novel technique for the detection of faulty bearings. Copyright © 2018. Published by Elsevier Ltd.
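
    The scale-selection idea, filtering at several structuring-element scales, keeping the one that maximizes an impulsiveness criterion, and then reading the fault frequency off the envelope spectrum, can be sketched as below. The criterion used here (kurtosis of a white top-hat output) is only a stand-in for the paper's feature energy factor, and the top-hat is a stand-in for the AMCMFH operator; the signal is synthetic:

```python
import numpy as np
from scipy.ndimage import white_tophat
from scipy.stats import kurtosis

def impulsiveness(signal, se_length):
    """Kurtosis of the white top-hat output for a flat structuring element of the
    given length -- a simple stand-in for the paper's feature energy factor."""
    return float(kurtosis(white_tophat(signal, size=se_length)))

def best_scale(signal, scales):
    return max(scales, key=lambda s: impulsiveness(signal, s))

# Synthetic bearing-like signal: periodic impulses + harmonic interference + noise.
fs = 10_000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(5)
signal = 0.5 * np.sin(2 * np.pi * 50 * t) + rng.normal(0, 0.2, t.size)
signal[::125] += 3.0                                 # impulses at an 80 Hz "fault frequency"

scale = best_scale(signal, scales=[3, 5, 9, 15, 25, 41])
envelope = white_tophat(signal, size=scale)
spectrum = np.abs(np.fft.rfft(envelope))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)

fault_bin = int(np.argmin(np.abs(freqs - 80.0)))
print("selected SE length:", scale)
print("envelope spectrum at 80 Hz vs. median level: %.1fx"
      % (spectrum[fault_bin] / np.median(spectrum)))
```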

  17. Design Optimization for a Truncated Catenary Mooring System for Scale Model Test

    Climent Molins

    2015-11-01

    One of the main challenges when testing floating offshore platforms is the scaled mooring system, particularly at the increased depths for which such platforms are intended. The paper proposes the use of truncated mooring systems to emulate the real mooring system by solving an optimization problem. This approach could be an interesting option when the existing testing facilities do not have enough available space. As part of the development of a new spar platform made of concrete for Floating Offshore Wind Turbines (FOWTs), called Windcrete, a station-keeping system with catenary-shaped lines was selected. The test facility available for the planned experiments had a significant width constraint. An algorithm was therefore developed to optimize the design of the scaled truncated mooring system using lines of different weights. The optimization process matches the quasi-static behavior of the scaled mooring system to that of the real mooring system as closely as possible within its expected maximum displacement range, where the catenary line provides the restoring forces through its suspended line length.

  18. A modular approach to large-scale design optimization of aerospace systems

    Hwang, John T.

    Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft

  19. Synthesis of multi-wavelength temporal phase-shifting algorithms optimized for high signal-to-noise ratio and high detuning robustness using the frequency transfer function

    Servin, Manuel; Padilla, Moises; Garnica, Guillermo

    2016-01-01

    Synthesis of single-wavelength temporal phase-shifting algorithms (PSA) for interferometry is well-known and firmly based on the frequency transfer function (FTF) paradigm. Here we extend the single-wavelength FTF-theory to dual and multi-wavelength PSA-synthesis when several simultaneous laser-colors are present. The FTF-based synthesis for dual-wavelength PSA (DW-PSA) is optimized for high signal-to-noise ratio and minimum number of temporal phase-shifted interferograms. The DW-PSA synthesi...

  20. A New Perspective on Changing Arctic Marine Ecosystems: Panarchy Adaptive Cycles in Pan-Arctic Spatial and Temporal Scales

    Wiese, F. K.; Huntington, H. P.; Carmack, E.; Wassmann, P. F. J.; Leu, E. S.; Gradinger, R.

    2016-02-01

    Changes in the physical/biological interactions in the Arctic are occurring across a variety of spatial and temporal scales and may be mitigated or strengthened based on varying rates of evolutionary adaptation. A novel way to view these interactions and their social relevance is through the systems theory perspective of "Panarchy" proposed by Gunderson and Holling. Panarchy is an interdisciplinary approach in which structures, scales and linkages of complex-adaptive systems, including those of nature (e.g. ocean), humans (e.g. economics), and combined social-ecological systems (e.g. institutions that govern natural resource use), are mapped across multiple space and time scales in continual and interactive adaptive cycles of growth, accumulation, restructuring and renewal. In complex-adaptive systems the dynamics at a given scale are generally dominated by a small number of key internal variables that are forced by one or more external variables. The stability of such a system is characterized by its resilience, i.e. its capacity to absorb disturbance and re-organize while undergoing change, so as to retain essentially similar function, structure, identity and feedbacks. It is in the capacity of a system to cope with pressures and adversities such as exploitation, warming, governance restrictions, competition, etc. that resilience embraces human and natural systems as complex entities continually adapting through cycles of change. In this paper we explore processes at four linked spatial domains in the Arctic Ocean and link it to ecosystem resilience and re-organization characteristics. From this we derive a series of hypotheses concerning the biological responses to future physical changes and suggest ways how Panarchy theory can be applied to observational strategies to help detect early signs of environmental shifts affecting marine system services and functions. We close by discussing possible implications of the Panarchy framework for policy and governance.

  1. Natural spatial and temporal variations in groundwater chemistry in fractured, sedimentary rocks: scale and implications for solute transport

    Hoven, Stephen J. van der; Kip Solomon, D.; Moline, Gerilynn R.

    2005-01-01

    Natural tracers (major ions, δ18O, and O2) were monitored to evaluate groundwater flow and transport to a depth of 20 m below the surface in fractured sedimentary (primarily shale and limestone) rocks. Large temporal variations in these tracers were noted in the soil zone and the saprolite, and are driven primarily by individual storm events. During nonstorm periods, an upward flow brings water with high TDS, constant δ18O, and low dissolved O2 to the water table. During storm events, low TDS, variable δ18O, and high dissolved O2 water recharges through the unsaturated zone. These oscillating signals are rapidly transmitted along fracture pathways in the saprolite, with changes occurring on spatial scales of several meters and on a time scale of hours. The variations decreased markedly below the boundary between the saprolite and less weathered bedrock. Variations in the bedrock units occurred on time scales of days and spatial scales of at least 20 m. The oscillations of chemical conditions in the shallow groundwater are hypothesized to have significant implications for solute transport. Solutes and colloids that adsorb onto aquifer solids can be released into solution by decreases in ionic strength and pH. The decreases in ionic strength also cause thermodynamic undersaturation of the groundwater with respect to some mineral species and may result in mineral dissolution. Redox conditions are also changing and may result in mineral dissolution/precipitation. The net result of these chemical variations is episodic transport of a wide range of dissolved solutes or suspended particles, a phenomenon rarely considered in contaminant transport studies.

  2. Groundwater Variability Across Temporal and Spatial Scales in the Central and Northeastern U.S.

    Li, Bailing; Rodell, Matthew; Famiglietti, James S.

    2015-01-01

    Depth-to-water measurements from 181 monitoring wells in unconfined or semi-confined aquifers in nine regions of the central and northeastern U.S. were analyzed. Groundwater storage exhibited strong seasonal variations in all regions, with peaks in spring and lows in autumn, and its interannual variability was nearly unbounded, such that the impacts of droughts, floods, and excessive pumping could persist for many years. We found that the spatial variability of groundwater storage anomalies (deviations from the long term mean) increases as a power function of extent scale (square root of area). That relationship, which is linear on a log-log graph, is common to other hydrological variables but had never before been shown with groundwater data. We describe how the derived power function can be used to determine the number of wells needed to estimate regional mean groundwater storage anomalies with a desired level of accuracy, or to assess uncertainty in regional mean estimates from a set number of observations. We found that the spatial variability of groundwater storage anomalies within a region often increases with the absolute value of the regional mean anomaly, the opposite of the relationship between soil moisture spatial variability and mean. Recharge (drainage from the lowest model soil layer) simulated by the Variable Infiltration Capacity (VIC) model was compatible with observed monthly groundwater storage anomalies and month-to-month changes in groundwater storage.
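
    The power-function relation between spatial variability and extent scale is a straight line on a log-log plot, so it can be fitted by linear regression of the logs. The sketch below also illustrates the stated use of such a fit for choosing a number of wells; the values are hypothetical and the wells are assumed roughly independent:

```python
import numpy as np

# Hypothetical table: extent scale L (km, square root of region area) vs. the
# spatial standard deviation of groundwater storage anomalies within that extent.
L = np.array([10, 20, 50, 100, 200, 400, 800], dtype=float)
sigma = np.array([1.1, 1.6, 2.7, 3.9, 5.6, 8.1, 11.9])   # cm equivalent water height

# A power law sigma = c * L^b is linear on a log-log graph: log sigma = log c + b log L.
b, log_c = np.polyfit(np.log(L), np.log(sigma), 1)
c = np.exp(log_c)
print(f"fitted power law: sigma ~ {c:.2f} * L^{b:.2f}")

# Wells needed so that the standard error of the regional mean anomaly stays
# below a target accuracy, assuming roughly independent wells: n ~ (sigma/target)^2.
target = 1.0   # cm
for extent in (100, 400):
    s = c * extent ** b
    print(f"extent {extent:>3d} km: sigma ~ {s:.1f} cm -> n ~ {int(np.ceil((s / target) ** 2))} wells")
```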

  3. Adaptive Spot Detection With Optimal Scale Selection in Fluorescence Microscopy Images.

    Basset, Antoine; Boulanger, Jérôme; Salamero, Jean; Bouthemy, Patrick; Kervrann, Charles

    2015-11-01

    Accurately detecting subcellular particles in fluorescence microscopy is of primary interest for further quantitative analysis such as counting, tracking, or classification. Our primary goal is to segment vesicles likely to share nearly the same size in fluorescence microscopy images. Our method termed adaptive thresholding of Laplacian of Gaussian (LoG) images with autoselected scale (ATLAS) automatically selects the optimal scale corresponding to the most frequent spot size in the image. Four criteria are proposed and compared to determine the optimal scale in a scale-space framework. Then, the segmentation stage amounts to thresholding the LoG of the intensity image. In contrast to other methods, the threshold is locally adapted given a probability of false alarm (PFA) specified by the user for the whole set of images to be processed. The local threshold is automatically derived from the PFA value and local image statistics estimated in a window whose size is not a critical parameter. We also propose a new data set for benchmarking, consisting of six collections of one hundred images each, which exploits backgrounds extracted from real microscopy images. We have carried out an extensive comparative evaluation on several data sets with ground-truth, which demonstrates that ATLAS outperforms existing methods. ATLAS does not need any fine parameter tuning and requires very low computation time. Convincing results are also reported on real total internal reflection fluorescence microscopy images.
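
    A rough re-creation of the two ingredients described, automatic selection of the LoG scale and a local threshold derived from a user-specified probability of false alarm, is sketched below. The scale-selection criterion and the Gaussian-noise assumption are simplified stand-ins for ATLAS, and the image is synthetic:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, uniform_filter
from scipy.stats import norm

def log_response(image, sigma):
    # Scale-normalized negative LoG: bright blobs of radius ~ sigma give strong peaks.
    return -sigma ** 2 * gaussian_laplace(image.astype(float), sigma)

def select_scale(image, sigmas):
    """Pick the scale whose LoG response has the largest 99.9th percentile
    (a simple stand-in for ATLAS's four scale-selection criteria)."""
    scores = [np.percentile(log_response(image, s), 99.9) for s in sigmas]
    return sigmas[int(np.argmax(scores))]

def detect_spots(image, pfa=1e-3, sigmas=(1, 2, 3, 4, 6), win=31):
    sigma = select_scale(image, sigmas)
    resp = log_response(image, sigma)
    # Local mean/std in a sliding window; threshold derived from the PFA
    # assuming locally Gaussian background statistics.
    mean = uniform_filter(resp, win)
    sq_mean = uniform_filter(resp ** 2, win)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 1e-12))
    k = norm.isf(pfa)                       # e.g. ~3.09 for pfa = 1e-3
    return resp > mean + k * std, sigma

# Synthetic image: Gaussian-like spots of radius ~3 px on a noisy background.
rng = np.random.default_rng(6)
img = rng.normal(100, 5, (256, 256))
yy, xx = np.mgrid[0:256, 0:256]
for cy, cx in rng.integers(20, 236, size=(30, 2)):
    img += 40 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 3.0 ** 2))

mask, sigma = detect_spots(img)
print("selected sigma:", sigma, "| detected pixels:", int(mask.sum()))
```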

  4. Optimization and large scale computation of an entropy-based moment closure

    Kristopher Garrett, C.; Hauck, Cory; Hill, Judith

    2015-12-01

    We present computational advances and results in the implementation of an entropy-based moment closure, M_N, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as P_N, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the M_N algorithm that do not appear for the P_N algorithm. We also observe that in weak scaling tests, the ratio in time to solution of M_N to P_N decreases.

  5. Safer operating conditions and optimal scaling-up process for cyclohexanone peroxide reaction

    Zang, Na; Qian, Xin-Ming; Liu, Zhen-Yi; Shu, Chi-Min

    2015-01-01

    Highlights: • Thermal hazard of the cyclohexanone peroxide reaction was measured by experimental techniques. • The Levenberg–Marquardt algorithm was adopted to evaluate kinetic parameters. • Safer operating conditions at laboratory scale were acquired by BDs and TDs. • The verified safer operating conditions were used to obtain the optimal scale-up parameters applied in industrial plants. - Abstract: The cyclohexanone peroxide reaction process, one of the eighteen hazardous chemical processes identified in China, is performed in indirectly cooled semibatch reactors. Cyclohexanone is added to a mixture of hydrogen peroxide and nitric acid, which forms a heterogeneous liquid–liquid system. A simple and general procedure for building boundary and temperature diagrams of the peroxide process is given here to account for the overall kinetic expressions. Such a procedure has been validated by comparison with experimental data. Thermally safer operating parameters were obtained at laboratory scale, and the scale-up procedure was performed to give the minimum dosing time in an industrial plant, which helps maximize industrial reactor productivity. The results are of great significance for keeping the peroxide reaction process away from the thermal runaway region. They also greatly aid in optimizing operating parameters in industrial plants.

  6. Spatio-temporal characteristics of large scale motions in a turbulent boundary layer from direct wall shear stress measurement

    Pabon, Rommel; Barnard, Casey; Ukeiley, Lawrence; Sheplak, Mark

    2016-11-01

    Particle image velocimetry (PIV) and fluctuating wall shear stress experiments were performed on a flat plate turbulent boundary layer (TBL) under zero pressure gradient conditions. The fluctuating wall shear stress was measured using a microelectromechanical 1mm × 1mm floating element capacitive shear stress sensor (CSSS) developed at the University of Florida. The experiments elucidated the imprint of the organized motions in a TBL on the wall shear stress through its direct measurement. Spatial autocorrelation of the streamwise velocity from the PIV snapshots revealed large scale motions that scale on the order of boundary layer thickness. However, the captured inclination angle was lower than that determined using the classic method by means of wall shear stress and hot-wire anemometry (HWA) temporal cross-correlations and a frozen field hypothesis using a convection velocity. The current study suggests the large size of these motions begins to degrade the applicability of the frozen field hypothesis for the time resolved HWA experiments. The simultaneous PIV and CSSS measurements are also used for spatial reconstruction of the velocity field during conditionally sampled intense wall shear stress events. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138.

  7. Point process models for spatio-temporal distance sampling data from a large-scale survey of blue whales

    Yuan, Yuan; Bachl, Fabian E.; Lindgren, Finn; Borchers, David L.; Illian, Janine B.; Buckland, Stephen T.; Rue, Haavard; Gerrodette, Tim

    2017-01-01

    Distance sampling is a widely used method for estimating wildlife population abundance. The fact that conventional distance sampling methods are partly design-based constrains the spatial resolution at which animal density can be estimated using these methods. Estimates are usually obtained at survey stratum level. For an endangered species such as the blue whale, it is desirable to estimate density and abundance at a finer spatial scale than stratum. Temporal variation in the spatial structure is also important. We formulate the process generating distance sampling data as a thinned spatial point process and propose model-based inference using a spatial log-Gaussian Cox process. The method adopts a flexible stochastic partial differential equation (SPDE) approach to model spatial structure in density that is not accounted for by explanatory variables, and integrated nested Laplace approximation (INLA) for Bayesian inference. It allows simultaneous fitting of detection and density models and permits prediction of density at an arbitrarily fine scale. We estimate blue whale density in the Eastern Tropical Pacific Ocean from thirteen shipboard surveys conducted over 22 years. We find that higher blue whale density is associated with colder sea surface temperatures in space, and although there is some positive association between density and mean annual temperature, our estimates are consistent with no trend in density across years. Our analysis also indicates that there is substantial spatially structured variation in density that is not explained by available covariates.

  8. Point process models for spatio-temporal distance sampling data from a large-scale survey of blue whales

    Yuan, Yuan

    2017-12-28

    Distance sampling is a widely used method for estimating wildlife population abundance. The fact that conventional distance sampling methods are partly design-based constrains the spatial resolution at which animal density can be estimated using these methods. Estimates are usually obtained at survey stratum level. For an endangered species such as the blue whale, it is desirable to estimate density and abundance at a finer spatial scale than stratum. Temporal variation in the spatial structure is also important. We formulate the process generating distance sampling data as a thinned spatial point process and propose model-based inference using a spatial log-Gaussian Cox process. The method adopts a flexible stochastic partial differential equation (SPDE) approach to model spatial structure in density that is not accounted for by explanatory variables, and integrated nested Laplace approximation (INLA) for Bayesian inference. It allows simultaneous fitting of detection and density models and permits prediction of density at an arbitrarily fine scale. We estimate blue whale density in the Eastern Tropical Pacific Ocean from thirteen shipboard surveys conducted over 22 years. We find that higher blue whale density is associated with colder sea surface temperatures in space, and although there is some positive association between density and mean annual temperature, our estimates are consistent with no trend in density across years. Our analysis also indicates that there is substantial spatially structured variation in density that is not explained by available covariates.

  9. Coordinated learning of grid cell and place cell spatial and temporal properties: multiple scales, attention and oscillations.

    Grossberg, Stephen; Pilly, Praveen K

    2014-02-05

    A neural model proposes how entorhinal grid cells and hippocampal place cells may develop as spatial categories in a hierarchy of self-organizing maps (SOMs). The model responds to realistic rat navigational trajectories by learning both grid cells with hexagonal grid firing fields of multiple spatial scales, and place cells with one or more firing fields, that match neurophysiological data about their development in juvenile rats. Both grid and place cells can develop by detecting, learning and remembering the most frequent and energetic co-occurrences of their inputs. The model's parsimonious properties include: similar ring attractor mechanisms process linear and angular path integration inputs that drive map learning; the same SOM mechanisms can learn grid cell and place cell receptive fields; and the learning of the dorsoventral organization of multiple spatial scale modules through medial entorhinal cortex to hippocampus (HC) may use mechanisms homologous to those for temporal learning through lateral entorhinal cortex to HC ('neural relativity'). The model clarifies how top-down HC-to-entorhinal attentional mechanisms may stabilize map learning, simulates how hippocampal inactivation may disrupt grid cells, and explains data about theta, beta and gamma oscillations. The article also compares the three main types of grid cell models in the light of recent data.

  10. Decision-making by a soaring bird: time, energy and risk considerations at different spatio-temporal scales

    Fluhr, Julie; Horvitz, Nir; Sarrazin, François; Hatzofe, Ohad

    2016-01-01

    Natural selection theory suggests that mobile animals trade off time, energy and risk costs with food, safety and other pay-offs obtained by movement. We examined how birds make movement decisions by integrating aspects of flight biomechanics, movement ecology and behaviour in a hierarchical framework investigating flight track variation across several spatio-temporal scales. Using extensive global positioning system and accelerometer data from Eurasian griffon vultures (Gyps fulvus) in Israel and France, we examined soaring–gliding decision-making by comparing inbound versus outbound flights (to or from a central roost, respectively), and these (and other) home-range foraging movements (up to 300 km) versus long-range movements (longer than 300 km). We found that long-range movements and inbound flights have similar features compared with their counterparts: individuals reduced journey time by performing more efficient soaring–gliding flight, reduced energy expenditure by flapping less and were more risk-prone by gliding more steeply between thermals. Age, breeding status, wind conditions and flight altitude (but not sex) affected time and energy prioritization during flights. We therefore suggest that individuals facing time, energy and risk trade-offs during movements make similar decisions across a broad range of ecological contexts and spatial scales, presumably owing to similarity in the uncertainty about movement outcomes. This article is part of the themed issue ‘Moving in a moving medium: new perspectives on flight’. PMID:27528787

  11. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    Hasanov, Khalid

    2014-03-04

    © 2014, Springer Science+Business Media New York. Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in 1990s a system with few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems. The idea is to reduce the communication cost by introducing hierarchy and hence more parallelism in the communication scheme. We apply this approach to SUMMA, the state-of-the-art parallel algorithm for matrix–matrix multiplication, and demonstrate both theoretically and experimentally that the modified Hierarchical SUMMA significantly improves the communication cost and the overall performance on large-scale platforms.

  12. Optimization in the scale of nuclear power generation and the economy of nuclear power

    Suzuki, Toshiharu

    1983-01-01

    In the not too distant future, the economics of nuclear power will have to be re-examined. Various conditions and circumstances supporting the economic advantage of nuclear power tend to change, such as the decrease in power demand and supply and the diversification of base-load supply sources, and the fragility of this advantage may thus be revealed. Against this background, and on the basis of the future outlook for the scale of nuclear power generation, that is, a further reduction of the current nuclear power program and the corresponding supply and demand of nuclear fuel cycle quantities, the economic advantage of nuclear power was examined for the purpose of optimizing the future scale of nuclear power generation (the downward revision of the scale, the establishment of the nuclear fuel cycle schedule, the stagnation of power demand, and nuclear power generation costs). (Mori, K.)

  13. Economies of scale and optimal size of hospitals: Empirical results for Danish public hospitals

    Kristensen, Troels

    …the current configuration of Danish hospitals is subject to scale economies that may justify such plans and to estimate an optimal hospital size. Methods: We estimate cost functions using panel data on total costs, DRG-weighted casemix, and number of beds for three years from 2004-2006. A short-run cost function is used to derive estimates of long-run scale economies by applying the envelope condition. Results: We identify moderate to significant long-run economies of scale when applying two alternative … The number of beds per hospital is estimated to be 275 beds per site. Sensitivity analysis to partial changes in model parameters yields a joint 95% confidence interval in the range 130 - 585 beds per site. Conclusions: The results indicate that it may be appropriate to consolidate the production of small…

  14. Extreme events in total ozone: Spatio-temporal analysis from local to global scale

    Rieder, Harald E.; Staehelin, Johannes; Maeder, Jörg A.; Ribatet, Mathieu; di Rocco, Stefania; Jancso, Leonhardt M.; Peter, Thomas; Davison, Anthony C.

    2010-05-01

    dynamics (NAO, ENSO) on total ozone is a global feature in the northern mid-latitudes (Rieder et al., 2010c). In a next step frequency distributions of extreme events are analyzed on global scale (northern and southern mid-latitudes). A specific focus here is whether findings gained through analysis of long-term European ground based stations can be clearly identified as a global phenomenon. By showing results from these three types of studies an overview of extreme events in total ozone (and the dynamical and chemical features leading to those) will be presented from local to global scales. References: Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN:1852334592, Springer, Berlin, 2001. Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and A.D., Davison (2010): Extreme events in total ozone over Arosa - Part I: Application of extreme value theory, to be submitted to ACPD. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and A.D., Davison (2010): Extreme events in total ozone over Arosa - Part II: Fingerprints of atmospheric dynamics and chemistry and effects on mean values and long-term changes, to be submitted to ACPD. Rieder, H.E., Jancso, L., Staehelin, J., Maeder, J.A., Ribatet, Peter, T., and A.D., Davison (2010): Extreme events in total ozone over the northern mid-latitudes: A case study based on long-term data sets from 5 ground-based stations, in preparation. Staehelin, J., Renaud, A., Bader, J., McPeters, R., Viatte, P., Hoegger, B., Bugnion, V., Giroud, M., and Schill, H.: Total ozone series at Arosa (Switzerland): Homogenization and data comparison, J. Geophys. Res., 103(D5), 5827-5842, doi:10.1029/97JD02402, 1998a. Staehelin, J., Kegel, R., and Harris, N. R.: Trend analysis of the homogenized total ozone series of Arosa
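
    The peaks-over-threshold analysis underlying these studies (Coles, 2001; the POT package cited above) models exceedances over a high threshold with a generalized Pareto distribution. A minimal sketch with scipy on a synthetic total-ozone series, including approximate return levels; none of the numbers below come from the Arosa data:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(7)

# Synthetic daily total-ozone series (Dobson units) with heavy-ish tails.
ozone = 330 + 35 * rng.standard_t(df=6, size=15_000)

# Peaks over threshold: model exceedances above a high quantile with a GPD.
threshold = np.quantile(ozone, 0.97)                # "extreme high" events
excess = ozone[ozone > threshold] - threshold

shape, loc, scale = genpareto.fit(excess, floc=0)   # fix location at 0 for exceedances
print(f"threshold = {threshold:.1f} DU, n_exceedances = {excess.size}")
print(f"GPD shape = {shape:.3f}, scale = {scale:.1f}")

# Return level: the value exceeded on average once every N days,
# using the exceedance rate lambda = P(X > threshold).
lam = excess.size / ozone.size
for n_days in (365, 3650):
    p = 1.0 / (n_days * lam)                        # tail probability within the GPD
    level = threshold + genpareto.ppf(1 - p, shape, loc=0, scale=scale)
    print(f"{n_days:>5d}-day return level ~ {level:.1f} DU")
```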

  15. A Hybrid Approach Combining the Multi-Temporal Scale Spatio-Temporal Network with the Continuous Triangular Model for Exploring Dynamic Interactions in Movement Data: A Case Study of Football

    Pengdong Zhang

    2018-01-01

    Benefiting from recent advances in location-aware technologies, movement data are becoming ubiquitous. Hence, numerous research topics with respect to movement data have been undertaken. Yet, research on dynamic interactions in movement data is still in its infancy. In this paper, we propose a hybrid approach combining the multi-temporal scale spatio-temporal network (MTSSTN) and the continuous triangular model (CTM) for exploring dynamic interactions in movement data. The approach mainly includes four steps: first, the relative trajectory calculus (RTC) is used to derive three types of interaction patterns; second, for each interaction pattern, a corresponding MTSSTN is generated; third, for each MTSSTN, the interaction intensity measures and three centrality measures (i.e., degree, betweenness and closeness) are calculated; finally, the results are visualized at multiple temporal scales using the CTM and analyzed based on the generated CTM diagrams. Based on the proposed approach, three distinctive aims can be achieved for each interaction pattern at multiple temporal scales: (1) exploring the interaction intensities between any two individuals; (2) exploring the interaction intensities among multiple individuals; and (3) exploring the importance of each individual and identifying the most important individuals. Movement data obtained from a real football match are used as a case study to validate the effectiveness of the proposed approach. The results demonstrate that the proposed approach is useful in exploring dynamic interactions in football movement data and discovering insightful information.
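
    The last two steps, aggregating pairwise interaction intensities over a temporal window into a weighted graph and computing the three centralities, can be sketched with networkx as below. The per-minute intensity matrix is synthetic, and the MTSSTN/CTM construction itself is not reproduced:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(8)
players = [f"P{i}" for i in range(1, 12)]          # 11 players of one team
n_minutes = 90

# Hypothetical per-minute interaction intensity between each pair of players.
intensity = rng.random((n_minutes, len(players), len(players)))
intensity = (intensity + intensity.transpose(0, 2, 1)) / 2      # make it symmetric

def window_graph(start, length):
    """Aggregate interaction intensities over one temporal window into a weighted graph."""
    w = intensity[start:start + length].sum(axis=0)
    g = nx.Graph()
    for i, a in enumerate(players):
        for j in range(i + 1, len(players)):
            # 'weight' = interaction intensity; 'distance' = its inverse for path-based measures.
            g.add_edge(a, players[j], weight=float(w[i, j]), distance=1.0 / float(w[i, j]))
    return g

for scale in (5, 15, 45):                          # three temporal scales (minutes)
    g = window_graph(0, scale)
    strength = dict(g.degree(weight="weight"))     # weighted degree ("interaction intensity")
    bet = nx.betweenness_centrality(g, weight="distance")
    clo = nx.closeness_centrality(g, distance="distance")
    key = max(strength, key=strength.get)
    print(f"scale {scale:>2d} min: key player = {key}, "
          f"betweenness = {bet[key]:.3f}, closeness = {clo[key]:.3f}")
```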

  16. Temporal networks

    Saramäki, Jari

    2013-01-01

    The concept of temporal networks is an extension of complex networks as a modeling framework to include information on when interactions between nodes happen. Many studies of the last decade examine how the static network structure affect dynamic systems on the network. In this traditional approach  the temporal aspects are pre-encoded in the dynamic system model. Temporal-network methods, on the other hand, lift the temporal information from the level of system dynamics to the mathematical representation of the contact network itself. This framework becomes particularly useful for cases where there is a lot of structure and heterogeneity both in the timings of interaction events and the network topology. The advantage compared to common static network approaches is the ability to design more accurate models in order to explain and predict large-scale dynamic phenomena (such as, e.g., epidemic outbreaks and other spreading phenomena). On the other hand, temporal network methods are mathematically and concept...

  17. Spatial and temporal accuracy of asynchrony-tolerant finite difference schemes for partial differential equations at extreme scales

    Kumari, Komal; Donzis, Diego

    2017-11-01

    Highly resolved computational simulations on massively parallel machines are critical for understanding the physics of a vast number of complex phenomena in nature governed by partial differential equations. Simulations at extreme levels of parallelism present many challenges, with communication between processing elements (PEs) being a major bottleneck. In order to fully exploit the computational power of exascale machines, one needs to devise numerical schemes that relax global synchronizations across PEs. Such asynchronous computation, however, degrades the accuracy of standard numerical schemes. We have developed asynchrony-tolerant (AT) schemes that maintain their order of accuracy despite relaxed communications. We show, analytically and numerically, that these schemes retain their numerical properties when combined with multi-step, higher-order temporal Runge-Kutta schemes. We also show that, for a range of optimized parameters, the computation time and error of AT schemes are less than those of their synchronous counterparts. The stability of the AT schemes, which depends on the history and random nature of the delays, is also discussed. Support from NSF is gratefully acknowledged.

  18. Optimization of design and operating parameters in a pilot scale Jameson cell for slime coal cleaning

    Hacifazlioglu, Hasan; Toroglu, Ihsan [Department of Mining Engineering, University of Karaelmas, 67100 (Turkey)

    2007-07-15

    The Jameson flotation cell has been commonly used to treat a variety of ores (lead, zinc, copper etc.), coal and industrial minerals at commercial scale since 1989. It is especially known to be highly efficient at fine and ultrafine coal recovery. However, although the Jameson cell has quite a simple structure, it may be largely inefficient if the design and operating parameters chosen are not appropriate. In this study, the design and operating parameters of a pilot scale Jameson cell were optimized to obtain the desired metallurgical performance in slime coal flotation. The optimized design parameters are the nozzle type, the height of the nozzle above the pulp level, the downcomer diameter and the immersion depth of the downcomer. Among the operating parameters optimized are the collector dosage, the frother dosage, the percentage of solids and the froth height. Under the optimum conditions, a clean coal with an ash content of 14.90% was obtained from the sample slime having 45.30% ash, with a combustible recovery of 74.20%. In addition, a new type of nozzle was developed for the Jameson cell, which led to an increase of about 9% in the combustible recovery value.

  19. Design of optimal input–output scaling factors based fuzzy PSS using bat algorithm

    D.K. Sambariya

    2016-06-01

    Full Text Available In this article, a fuzzy logic based power system stabilizer (FPSS) is designed by tuning its input–output scaling factors. Two input signals to the FPSS are considered, the change of speed and the change in power, and the output signal is a correcting voltage signal. The normalizing factors of these signals are treated as the optimization variables, with minimization of the integral of square error in single-machine and multi-machine power systems as the objective. These factors are optimally determined with the bat algorithm (BA) and used as the scaling factors of the FPSS. The performance of the power system with such a designed BA-based FPSS (BA-FPSS) is compared to that of the response with FPSS, Harmony Search Algorithm based FPSS (HSA-FPSS) and Particle Swarm Optimization based FPSS (PSO-FPSS). The systems considered are a single machine connected to an infinite bus, a two-area 4-machine 10-bus system and the IEEE New England 10-machine 39-bus power system, used for evaluating the performance of BA-FPSS. The comparison is carried out in terms of the integral of time-weighted absolute error (ITAE), integral of absolute error (IAE) and integral of square error (ISE) of the speed response for systems with FPSS, HSA-FPSS and BA-FPSS. The superior performance of systems with BA-FPSS is established considering eight plant conditions for each system, which represent a wide range of operating conditions.
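    For readers unfamiliar with the error indices used in the comparison above, the sketch below shows one way ITAE, IAE and ISE could be computed from a sampled speed-deviation signal; the signal, time step and horizon are invented placeholders rather than data from the study.

```python
# Minimal sketch (illustrative only): ITAE, IAE and ISE performance indices
# computed from a sampled speed-deviation signal e(t).
import numpy as np

dt = 0.01                                    # sampling step in seconds (assumed)
t = np.arange(0.0, 10.0, dt)                 # simulation horizon (assumed)
e = 0.02 * np.exp(-0.5 * t) * np.sin(5 * t)  # hypothetical speed deviation

iae = np.trapz(np.abs(e), t)        # integral of absolute error
ise = np.trapz(e**2, t)             # integral of square error
itae = np.trapz(t * np.abs(e), t)   # integral of time-weighted absolute error

print(f"IAE={iae:.4e}  ISE={ise:.4e}  ITAE={itae:.4e}")
```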

  20. Optimizing Prediction Using Bayesian Model Averaging: Examples Using Large-Scale Educational Assessments.

    Kaplan, David; Lee, Chansoon

    2018-01-01

    This article provides a review of Bayesian model averaging as a means of optimizing the predictive performance of common statistical models applied to large-scale educational assessments. The Bayesian framework recognizes that in addition to parameter uncertainty, there is uncertainty in the choice of models themselves. A Bayesian approach to addressing the problem of model uncertainty is the method of Bayesian model averaging. Bayesian model averaging searches the space of possible models for a set of submodels that satisfy certain scientific principles and then averages the coefficients across these submodels weighted by each model's posterior model probability (PMP). Using the weighted coefficients for prediction has been shown to yield optimal predictive performance according to certain scoring rules. We demonstrate the utility of Bayesian model averaging for prediction in education research with three examples: Bayesian regression analysis, Bayesian logistic regression, and a recently developed approach for Bayesian structural equation modeling. In each case, the model-averaged estimates are shown to yield better prediction of the outcome of interest than any submodel based on predictive coverage and the log-score rule. Implications for the design of large-scale assessments when the goal is optimal prediction in a policy context are discussed.
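    As a hedged illustration of the averaging step described above, the following sketch forms a model-averaged prediction by weighting each submodel's predictions with its posterior model probability; the submodels, probabilities and predictions are invented for illustration and are not taken from the article.

```python
# Minimal sketch (illustrative numbers only): Bayesian model averaging of
# predictions, weighting each submodel by its posterior model probability (PMP).
import numpy as np

# Hypothetical posterior model probabilities for three submodels (sum to 1).
pmp = np.array([0.55, 0.30, 0.15])

# Hypothetical predictions of the same three outcomes from the three submodels.
predictions = np.array([
    [2.1, 2.0, 2.3],   # submodel 1
    [1.9, 2.2, 2.4],   # submodel 2
    [2.5, 1.8, 2.2],   # submodel 3
])

# Model-averaged prediction: PMP-weighted average across submodels.
bma_prediction = pmp @ predictions
print(bma_prediction)
```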

  1. Design specific joint optimization of masks and sources on a very large scale

    Lai, K.; Gabrani, M.; Demaris, D.; Casati, N.; Torres, A.; Sarkar, S.; Strenski, P.; Bagheri, S.; Scarpazza, D.; Rosenbluth, A. E.; Melville, D. O.; Wächter, A.; Lee, J.; Austel, V.; Szeto-Millstone, M.; Tian, K.; Barahona, F.; Inoue, T.; Sakamoto, M.

    2011-04-01

    Joint optimization (JO) of source and mask together is known to produce better SMO solutions than sequential optimization of the source and the mask. However, large scale JO problems are very difficult to solve because the global impact of the source variables causes an enormous number of mask variables to be coupled together. This work presents innovations that minimize this runtime bottleneck. The proposed SMO parallelization algorithm allows separate mask regions to be processed efficiently across multiple CPUs in a high performance computing (HPC) environment, despite the fact that a truly joint optimization is being carried out with source variables that interact across the entire mask. Building on this engine, a progressive deletion (PD) method was developed that can directly compute "binding constructs" for the optimization, i.e. our method can essentially determine the particular feature content which limits the process window attainable by the optimum source. This method allows us to minimize the uncertainty inherent to different clustering/ranking methods in seeking an overall optimum source that results from the use of heuristic metrics. An objective benchmarking of the effectiveness of different pattern sampling methods was performed during post-optimization analysis. The PD serves as a gold standard for us to develop optimum pattern clustering/ranking algorithms. With this work, it is shown that it is not necessary to exhaustively optimize the entire mask together with the source in order to identify these binding clips. If the number of clips to be optimized exceeds the practical limit of the parallel SMO engine, one can start with a pattern selection step to achieve high clip count compression before SMO. With this LSSO capability one can address the challenging problem of layout-specific design, or improve the technology source as cell layouts and sample layouts replace lithography test structures in the development cycle.

  2. Synthesis of multi-wavelength temporal phase-shifting algorithms optimized for high signal-to-noise ratio and high detuning robustness using the frequency transfer function.

    Servin, Manuel; Padilla, Moises; Garnica, Guillermo

    2016-05-02

    Synthesis of single-wavelength temporal phase-shifting algorithms (PSA) for interferometry is well-known and firmly based on the frequency transfer function (FTF) paradigm. Here we extend the single-wavelength FTF theory to dual- and multi-wavelength PSA synthesis when several simultaneous laser colors are present. The FTF-based synthesis for dual-wavelength (DW) PSA is optimized for high signal-to-noise ratio and a minimum number of temporal phase-shifted interferograms. The DW-PSA synthesis herein presented may be used for interferometric contouring of discontinuous industrial objects. Also, DW-PSA may be useful for DW shop-testing of deep free-form aspheres. As shown here, using the FTF-based synthesis one may easily find explicit DW-PSA formulae optimized for high signal-to-noise ratio and high detuning robustness. To this date, no general synthesis and analysis for temporal DW-PSAs has been given; only ad hoc DW-PSA formulas have been reported. Consequently, no explicit formulae for their spectra, their signal-to-noise ratio, or their detuning and harmonic robustness have been given. Here, for the first time, a fully general procedure for designing DW-PSAs (or triple-wavelength PSAs) with the desired spectrum, signal-to-noise ratio and detuning robustness is given. We finally generalize the DW-PSA to temporal PSAs with a higher number of wavelengths.

  3. A Novel Consensus-Based Particle Swarm Optimization-Assisted Trust-Tech Methodology for Large-Scale Global Optimization.

    Zhang, Yong-Feng; Chiang, Hsiao-Dong

    2017-09-01

    A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.

  4. Large-Scale Portfolio Optimization Using Multiobjective Evolutionary Algorithms and Preselection Methods

    B. Y. Qu

    2017-01-01

    Full Text Available Portfolio optimization problems involve the selection of different assets to invest in, in order to maximize the overall return and minimize the overall risk simultaneously. The complexity of the optimal asset allocation problem increases with the number of assets available to select from for investing. The optimization problem becomes computationally challenging when there are more than a few hundred assets to select from. To reduce the complexity of large-scale portfolio optimization, two asset preselection procedures that consider the return and risk of individual assets and pairwise correlations, to remove assets that may not potentially be selected into any portfolio, are proposed in this paper. With these asset preselection methods, the number of assets considered for inclusion in a portfolio can be increased to thousands. To test the effectiveness of the proposed methods, a Normalized Multiobjective Evolutionary Algorithm based on Decomposition (NMOEA/D) algorithm and several other commonly used multiobjective evolutionary algorithms are applied and compared. Six experiments with different settings are carried out. The experimental results show that with the proposed methods the simulation time is reduced while return-risk trade-off performances are significantly improved. Meanwhile, the NMOEA/D is able to outperform the other compared algorithms on all experiments according to the comparative analysis.
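    A sketch of the kind of preselection filter described above is given below: assets dominated on both return and risk, or nearly duplicated by a highly correlated asset with a higher return, are dropped before the evolutionary optimization runs. The thresholds, rules and synthetic data are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch (illustrative rules only): preselect assets by individual
# return/risk and pairwise correlation before running a portfolio optimizer.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(250, 200))   # 250 days x 200 synthetic assets

mean_ret = returns.mean(axis=0)
risk = returns.std(axis=0)
corr = np.corrcoef(returns, rowvar=False)

# Assumed rule 1: drop assets with below-median return and above-median risk.
keep = ~((mean_ret < np.median(mean_ret)) & (risk > np.median(risk)))

# Assumed rule 2: for pairs with correlation above 0.95, keep the higher-return asset.
n = len(mean_ret)
for i in range(n):
    for j in range(i + 1, n):
        if keep[i] and keep[j] and corr[i, j] > 0.95:
            if mean_ret[j] <= mean_ret[i]:
                keep[j] = False
            else:
                keep[i] = False

print(f"{keep.sum()} of {n} assets passed preselection")
```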

  5. Task driven optimal leg trajectories in insect-scale legged microrobots

    Doshi, Neel; Goldberg, Benjamin; Jayaram, Kaushik; Wood, Robert

    Origami-inspired layered manufacturing techniques and 3D printing have enabled the development of highly articulated legged robots at the insect scale, including the 1.43 g Harvard Ambulatory MicroRobot (HAMR). Research on these platforms has expanded its focus from manufacturing aspects to include design optimization and control for application-driven tasks. Consequently, the choice of gait selection, body morphology, leg trajectory, foot design, etc. have become areas of active research. HAMR has two controlled degrees-of-freedom per leg, making it an ideal candidate for exploring leg trajectory. We will discuss our work towards optimizing HAMR's leg trajectories for two different tasks: climbing using electroadhesives and level-ground running (5-10 BL/s). These tasks demonstrate the ability of a single platform to adapt to vastly different locomotive scenarios: quasi-static climbing with controlled ground contact, and dynamic running with uncontrolled ground contact. We will utilize trajectory optimization methods informed by existing models and experimental studies to determine leg trajectories for each task. We also plan to discuss how task specifications and the choice of objective function have contributed to the shape of these optimal leg trajectories.

  6. Process optimization of large-scale production of recombinant adeno-associated vectors using dielectric spectroscopy.

    Negrete, Alejandro; Esteban, Geoffrey; Kotin, Robert M

    2007-09-01

    A well-characterized manufacturing process for the large-scale production of recombinant adeno-associated vectors (rAAV) for gene therapy applications is required to meet current and future demands for pre-clinical and clinical studies and potential commercialization. Economic considerations argue in favor of suspension culture-based production. Currently, the only feasible method for large-scale rAAV production utilizes baculovirus expression vectors and insect cells in suspension cultures. To maximize yields and achieve reproducibility between batches, online monitoring of various metabolic and physical parameters is useful for characterizing early stages of baculovirus-infected insect cells. In this study, rAAVs were produced at 40-l scale, yielding ~1 x 10^15 particles. During the process, dielectric spectroscopy was performed by real-time scanning at radio frequencies between 300 kHz and 10 MHz. The corresponding permittivity values were correlated with rAAV production. Both infected and uninfected cell cultures reached a maximum permittivity value; however, only the permittivity profile of infected cell cultures reached a second maximum. This effect was correlated with the optimal harvest time for rAAV production. Analysis of rAAV indicated that harvesting at around 48 h post-infection (hpi) and at 72 hpi produced similar quantities of biologically active rAAV. Thus, if operated continuously, the 24-h reduction in the rAAV production process gives sufficient time for an additional 18 runs a year, corresponding to an extra production of ~2 x 10^16 particles. As part of large-scale optimization studies, this new finding will facilitate the bioprocessing scale-up of rAAV and other bioproducts.

  7. Optimal dynamic voltage scaling for wireless sensor nodes with real-time constraints

    Cassandras, Christos G.; Zhuang, Shixin

    2005-11-01

    Sensors are increasingly embedded in manufacturing systems and wirelessly networked to monitor and manage operations ranging from process and inventory control to tracking equipment and even post-manufacturing product monitoring. In building such sensor networks, a critical issue is the limited and hard to replenish energy in the devices involved. Dynamic voltage scaling is a technique that controls the operating voltage of a processor to provide desired performance while conserving energy and prolonging the overall network's lifetime. We consider such power-limited devices processing time-critical tasks which are non-preemptive, aperiodic and have uncertain arrival times. We treat voltage scaling as a dynamic optimization problem whose objective is to minimize energy consumption subject to hard or soft real-time execution constraints. In the case of hard constraints, we build on prior work (which engages a voltage scaling controller at task completion times) by developing an intra-task controller that acts at all arrival times of incoming tasks. We show that this optimization problem can be decomposed into two simpler ones whose solution leads to an algorithm that does not actually require solving any nonlinear programming problems. In the case of soft constraints, this decomposition must be partly relaxed, but it still leads to a scalable (linear in the number of tasks) algorithm. Simulation results are provided to illustrate performance improvements in systems with intra-task controllers compared to uncontrolled systems or those using inter-task control.

  8. Basic investigation of particle swarm optimization performance in a reduced scale PWR passive safety system design

    Cunha, Joao J. da; Lapa, Celso Marcelo F.; Alvim, Antonio Carlos M.; Lima, Carlos A. Souza; Pereira, Claudio Marcio do N.A.

    2010-01-01

    This work presents a methodology to investigate the viability of using the particle swarm optimization technique to obtain the best combination of physical and operational parameters that leads to the best-adjusted dimensionless groups, calculated by similarity laws, that are able to simulate the most relevant physical phenomena in single-phase flow under natural circulation, and to offer an appropriate alternative reduced scale design for reactor primary loops with these flow characteristics. A PWR reactor core under natural circulation, based on the LOFT test facility, was used as the case study. The particle swarm optimization technique was applied to a problem with these thermo-hydraulic conditions and the results demonstrated the viability and adequacy of the method to design similar systems with these characteristics.

  9. Basic investigation of particle swarm optimization performance in a reduced scale PWR passive safety system design

    Cunha, Joao J. da [Eletronuclear Eletrobras Termonuclear, Gerencia de Analise de Seguranca Nuclear, Rua da Candelaria, 65, 7o andar. Centro, Rio de Janeiro 20091-906 (Brazil); Lapa, Celso Marcelo F., E-mail: lapa@ien.gov.b [Instituto de Engenharia Nuclear, Divisao de Reatores/PPGIEN, P.O. Box 68550, Rua Helio de Almeida 75 Cidade Universitaria, Ilha do Fundao, Rio de Janeiro 21941-972 (Brazil); Instituto Nacional de Ciencia e Tecnologia de Reatores Nucleares Inovadores (Brazil); Alvim, Antonio Carlos M. [Universidade Federal do Rio de Janeiro, COPPE/Nuclear, P.O. Box 68509, Cidade Universitaria, Ilha do Fundao s/n, Rio de Janeiro 21945-970 (Brazil); Instituto Nacional de Ciencia e Tecnologia de Reatores Nucleares Inovadores (Brazil); Lima, Carlos A. Souza [Instituto de Engenharia Nuclear, Divisao de Reatores/PPGIEN, P.O. Box 68550, Rua Helio de Almeida 75 Cidade Universitaria, Ilha do Fundao, Rio de Janeiro 21941-972 (Brazil); Instituto Politecnico, Universidade do Estado do Rio de Janeiro, Pos-Graduacao em Modelagem Computacional, Rua Alberto Rangel, s/n, Vila Nova, Nova Friburgo 28630-050 (Brazil); Pereira, Claudio Marcio do N.A. [Instituto de Engenharia Nuclear, Divisao de Reatores/PPGIEN, P.O. Box 68550, Rua Helio de Almeida 75 Cidade Universitaria, Ilha do Fundao, Rio de Janeiro 21941-972 (Brazil); Instituto Nacional de Ciencia e Tecnologia de Reatores Nucleares Inovadores (Brazil)

    2010-03-15

    This work presents a methodology to investigate the viability of using the particle swarm optimization technique to obtain the best combination of physical and operational parameters that leads to the best-adjusted dimensionless groups, calculated by similarity laws, that are able to simulate the most relevant physical phenomena in single-phase flow under natural circulation, and to offer an appropriate alternative reduced scale design for reactor primary loops with these flow characteristics. A PWR reactor core under natural circulation, based on the LOFT test facility, was used as the case study. The particle swarm optimization technique was applied to a problem with these thermo-hydraulic conditions and the results demonstrated the viability and adequacy of the method to design similar systems with these characteristics.

  10. Optimization-based methodology for wastewater treatment plant synthesis – a full scale retrofitting case study

    Bozkurt, Hande; Gernaey, Krist; Sin, Gürkan

    2015-01-01

    Existing wastewater treatment plants (WWTP) need retrofitting in order to better handle changes in the wastewater flow and composition, reduce operational costs as well as meet newer and stricter regulatory standards on the effluent discharge limits. In this study, we use an optimization based framework to manage the multi-criteria WWTP design/retrofit problem for domestic wastewater treatment. The design space (i.e. alternative treatment technologies) is represented in a superstructure, which is coupled with a database containing data for both performance and economics of the novel alternative technologies. The superstructure optimization problem is formulated as a Mixed Integer (non)Linear Programming problem and solved for different scenarios - represented by different objective functions and constraint definitions. A full-scale domestic wastewater treatment plant (265,000 PE) is used as a case...

  11. Comparison of particle swarm optimization and dynamic programming for large scale hydro unit load dispatch

    Cheng Chuntian; Liao Shengli; Tang Zitian; Zhao Mingyan

    2009-01-01

    Dynamic programming (DP) is one of the classic and sophisticated optimization methods that have successfully been applied to solve the problem of hydro unit load dispatch (HULD). However, DP faces the curse of dimensionality as the number of units and the installed generating capacity of a hydropower station increase. With the appearance of huge hydropower stations similar to the Three Gorges, with 26 generators of 700 MW, it is hard to apply DP to the large scale HULD problem. It is crucial to seek other optimization techniques in order to improve the operation quality and efficiency. Different from most of the literature on power generation scheduling, which focuses on comparing novel PSO algorithms with other techniques, this paper emphasizes a comparative study of PSO and DP based on a case hydropower station. The objective of the study is to seek an effective and feasible method for the large scale hydropower stations of the present and future in China. This paper first compares the performance of PSO and DP using a sample load curve of the Wujiangdu hydropower plant, located in the upper stream of the Yangtze River in China and containing five units with an installed capacity of 1250 MW. Next, the effect of different load intervals and unit numbers on the optimal results and the efficiency of the two methods has also been investigated. The comparison results show that PSO is feasible for HULD. Furthermore, we simulated the effect of the magnitude of the unit number and load capacity on the optimal results and computation time. The simulation comparisons show that PSO has a great advantage over DP in efficiency and will be one of the effective methods for the HULD problem of huge hydropower stations.
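    As a hedged illustration of the dynamic programming formulation discussed above, the sketch below dispatches a total plant load among a few units over a discretized load grid so as to minimize total water discharge; the quadratic discharge curves, unit limits and grid step are invented placeholders rather than data from the Wujiangdu case study.

```python
# Minimal sketch (illustrative only): dynamic programming for hydro unit load
# dispatch, allocating a total plant load among units to minimize total water
# discharge, with hypothetical quadratic discharge curves q(p) = a + b*p + c*p^2.
STEP = 10          # load discretization step in MW (assumed)
TOTAL_LOAD = 900   # plant load to dispatch in MW (assumed)
units = [          # hypothetical discharge-curve coefficients and unit limits
    {"a": 50.0, "b": 0.80, "c": 0.0006, "pmin": 100, "pmax": 250},
    {"a": 45.0, "b": 0.85, "c": 0.0005, "pmin": 100, "pmax": 250},
    {"a": 55.0, "b": 0.78, "c": 0.0007, "pmin": 100, "pmax": 250},
    {"a": 48.0, "b": 0.82, "c": 0.0006, "pmin": 100, "pmax": 250},
]

# best[d] after processing k units = (minimum discharge, allocation) to supply load d.
best = {0: (0.0, [])}
for u in units:
    new_best = {}
    for d, (cost, alloc) in best.items():
        for p in range(u["pmin"], u["pmax"] + STEP, STEP):
            q = u["a"] + u["b"] * p + u["c"] * p * p
            load = d + p
            if load <= TOTAL_LOAD and (load not in new_best or cost + q < new_best[load][0]):
                new_best[load] = (cost + q, alloc + [p])
    best = new_best

total_q, allocation = best[TOTAL_LOAD]
print("allocation (MW):", allocation, "total discharge:", round(total_q, 1))
```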

  12. Comparison of particle swarm optimization and dynamic programming for large scale hydro unit load dispatch

    Cheng Chuntian, E-mail: ctcheng@dlut.edu.c [Department of Civil and Hydraulic Engineering, Dalian University of Technology, 116024 Dalian (China); Liao Shengli; Tang Zitian [Department of Civil and Hydraulic Engineering, Dalian University of Technology, 116024 Dalian (China); Zhao Mingyan [Department of Environmental Science and Engineering, Tsinghua University, 100084 Beijing (China)

    2009-12-15

    Dynamic programming (DP) is one of the classic and sophisticated optimization methods that have successfully been applied to solve the problem of hydro unit load dispatch (HULD). However, DP faces the curse of dimensionality as the number of units and the installed generating capacity of a hydropower station increase. With the appearance of huge hydropower stations similar to the Three Gorges, with 26 generators of 700 MW, it is hard to apply DP to the large scale HULD problem. It is crucial to seek other optimization techniques in order to improve the operation quality and efficiency. Different from most of the literature on power generation scheduling, which focuses on comparing novel PSO algorithms with other techniques, this paper emphasizes a comparative study of PSO and DP based on a case hydropower station. The objective of the study is to seek an effective and feasible method for the large scale hydropower stations of the present and future in China. This paper first compares the performance of PSO and DP using a sample load curve of the Wujiangdu hydropower plant, located in the upper stream of the Yangtze River in China and containing five units with an installed capacity of 1250 MW. Next, the effect of different load intervals and unit numbers on the optimal results and the efficiency of the two methods has also been investigated. The comparison results show that PSO is feasible for HULD. Furthermore, we simulated the effect of the magnitude of the unit number and load capacity on the optimal results and computation time. The simulation comparisons show that PSO has a great advantage over DP in efficiency and will be one of the effective methods for the HULD problem of huge hydropower stations.

  13. Comparison of particle swarm optimization and dynamic programming for large scale hydro unit load dispatch

    Chun-tian Cheng; Sheng-li Liao; Zi-Tian Tang [Dept. of Civil and Hydraulic Engineering, Dalian Univ. of Technology, 116024 Dalian (China); Ming-yan Zhao [Dept. of Environmental Science and Engineering, Tsinghua Univ., 100084 Beijing (China)

    2009-12-15

    Dynamic programming (DP) is one of the classic and sophisticated optimization methods that have successfully been applied to solve the problem of hydro unit load dispatch (HULD). However, DP faces the curse of dimensionality as the number of units and the installed generating capacity of a hydropower station increase. With the appearance of huge hydropower stations similar to the Three Gorges, with 26 generators of 700 MW, it is hard to apply DP to the large scale HULD problem. It is crucial to seek other optimization techniques in order to improve the operation quality and efficiency. Different from most of the literature on power generation scheduling, which focuses on comparing novel PSO algorithms with other techniques, this paper emphasizes a comparative study of PSO and DP based on a case hydropower station. The objective of the study is to seek an effective and feasible method for the large scale hydropower stations of the present and future in China. This paper first compares the performance of PSO and DP using a sample load curve of the Wujiangdu hydropower plant, located in the upper stream of the Yangtze River in China and containing five units with an installed capacity of 1250 MW. Next, the effect of different load intervals and unit numbers on the optimal results and the efficiency of the two methods has also been investigated. The comparison results show that PSO is feasible for HULD. Furthermore, we simulated the effect of the magnitude of the unit number and load capacity on the optimal results and computation time. The simulation comparisons show that PSO has a great advantage over DP in efficiency and will be one of the effective methods for the HULD problem of huge hydropower stations. (author)

  14. Renormalization group invariance and optimal QCD renormalization scale-setting: a key issues review

    Wu, Xing-Gang; Ma, Yang; Wang, Sheng-Quan; Fu, Hai-Bing; Ma, Hong-Hao; Brodsky, Stanley J.; Mojaza, Matin

    2015-12-01

    A valid prediction for a physical observable from quantum field theory should be independent of the choice of renormalization scheme—this is the primary requirement of renormalization group invariance (RGI). Satisfying scheme invariance is a challenging problem for perturbative QCD (pQCD), since a truncated perturbation series does not automatically satisfy the requirements of the renormalization group. In a previous review, we provided a general introduction to the various scale setting approaches suggested in the literature. As a step forward, in the present review, we present an in-depth discussion of two well-established scale-setting methods based on RGI. One is the ‘principle of maximum conformality’ (PMC), in which the terms associated with the β-function are absorbed into the scale of the running coupling at each perturbative order; its predictions are scheme and scale independent at every finite order. The other approach is the ‘principle of minimum sensitivity’ (PMS), which is based on local RGI; the PMS approach determines the optimal renormalization scale by requiring the slope of the approximant of an observable to vanish. In this paper, we present a detailed comparison of the PMC and PMS procedures by analyzing two physical observables, R_{e+e-} and Γ(H → bb̄), up to four-loop order in pQCD. At the four-loop level, the PMC and PMS predictions for both observables agree within small errors with those of conventional scale setting assuming a physically-motivated scale, and each prediction shows small scale dependence. However, the convergence of the pQCD series at high orders behaves quite differently: the PMC displays the best pQCD convergence since it eliminates divergent renormalon terms; in contrast, the convergence of the PMS prediction is questionable, often even worse than the conventional prediction based on an arbitrary guess for the renormalization scale. PMC predictions also have the property that any residual dependence on

  15. Exploring the potential of Sentinel-1 data for regional scale slope instability detection using multi-temporal interferometry

    Wasowski, Janusz; Bovenga, Fabio; Nutricato, Raffaele; Nitti, Davide Oscar; Chiaradia, Maria Teresa; Refice, Alberto; Pasquariello, Guido

    2016-04-01

    Launched in 2014, the European Space Agency (ESA) Sentinel-1 satellite, carrying a medium resolution (20 m) C-Band Synthetic Aperture Radar (SAR) sensor, holds much promise for new applications of multi-temporal interferometry (MTI) in landslide assessment. Specifically, the regularity of acquisitions, timeliness of data delivery, shorter repeat cycle (currently 12 days with the Sentinel-1A sensor), and flexible incidence angle geometry all imply better practical utility of MTI relying on Sentinel-1 compared with MTI based on data from ESA's earlier satellite radar C-band sensors (ERS1/2, ENVISAT). Furthermore, the upcoming launch of Sentinel-1B will cut down the repeat cycle to 6 days, thereby further improving temporal coherence and the quality and coverage of MTI products. Taking advantage of the Interferometric Wide (IW) Swath acquisition mode of Sentinel-1 (images covering a 250 km swath on the ground), in this work we test the potential of such data for regional scale slope instability detection through MTI. Our test area includes the landslide-prone Apennine Mountains of Southern Italy. We rely on over 30 Sentinel-1 images, most of which were acquired in 2015, and MTI processing through the SPINUA algorithm (Stable Points INterferometry in Un-urbanized Areas). The potential of MTI results based on Sentinel-1 data is assessed by comparing the detected ground surface displacements with the MTI results obtained for the same test area using the C-Band data acquired by ERS1/2 and ENVISAT in the 1990s and 2000s. Although the initial results are encouraging, it seems evident that longer-term (few years) acquisitions of Sentinel-1 are necessary to reliably detect some extremely slow movements, which were observed in the last two decades and are likely to be still present in peri-urban areas of many hilltop towns in the Apennine Mts. The MTI results obtained from Sentinel-1 data are also locally compared with the MTI outcomes based on the high resolution (3 m) TerraSAR-X imagery

  16. Isometric Scaling in Developing Long Bones Is Achieved by an Optimal Epiphyseal Growth Balance.

    Stern, Tomer; Aviram, Rona; Rot, Chagai; Galili, Tal; Sharir, Amnon; Kalish Achrai, Noga; Keller, Yosi; Shahar, Ron; Zelzer, Elazar

    2015-08-01

    One of the major challenges that developing organs face is scaling, that is, the adjustment of physical proportions during the massive increase in size. Although organ scaling is fundamental for development and function, little is known about the mechanisms that regulate it. Bone superstructures are projections that typically serve for tendon and ligament insertion or articulation and, therefore, their position along the bone is crucial for musculoskeletal functionality. As bones are rigid structures that elongate only from their ends, it is unclear how superstructure positions are regulated during growth to end up in the right locations. Here, we document the process of longitudinal scaling in developing mouse long bones and uncover the mechanism that regulates it. To that end, we performed a computational analysis of hundreds of three-dimensional micro-CT images, using a newly developed method for recovering the morphogenetic sequence of developing bones. Strikingly, analysis revealed that the relative position of all superstructures along the bone is highly preserved during more than a 5-fold increase in length, indicating isometric scaling. It has been suggested that during development, bone superstructures are continuously reconstructed and relocated along the shaft, a process known as drift. Surprisingly, our results showed that most superstructures did not drift at all. Instead, we identified a novel mechanism for bone scaling, whereby each bone exhibits a specific and unique balance between proximal and distal growth rates, which accurately maintains the relative position of its superstructures. Moreover, we show mathematically that this mechanism minimizes the cumulative drift of all superstructures, thereby optimizing the scaling process. Our study reveals a general mechanism for the scaling of developing bones. More broadly, these findings suggest an evolutionary mechanism that facilitates variability in bone morphology by controlling the activity of

  17. Isometric Scaling in Developing Long Bones Is Achieved by an Optimal Epiphyseal Growth Balance

    Stern, Tomer; Aviram, Rona; Rot, Chagai; Galili, Tal; Sharir, Amnon; Kalish Achrai, Noga; Keller, Yosi; Shahar, Ron; Zelzer, Elazar

    2015-01-01

    One of the major challenges that developing organs face is scaling, that is, the adjustment of physical proportions during the massive increase in size. Although organ scaling is fundamental for development and function, little is known about the mechanisms that regulate it. Bone superstructures are projections that typically serve for tendon and ligament insertion or articulation and, therefore, their position along the bone is crucial for musculoskeletal functionality. As bones are rigid structures that elongate only from their ends, it is unclear how superstructure positions are regulated during growth to end up in the right locations. Here, we document the process of longitudinal scaling in developing mouse long bones and uncover the mechanism that regulates it. To that end, we performed a computational analysis of hundreds of three-dimensional micro-CT images, using a newly developed method for recovering the morphogenetic sequence of developing bones. Strikingly, analysis revealed that the relative position of all superstructures along the bone is highly preserved during more than a 5-fold increase in length, indicating isometric scaling. It has been suggested that during development, bone superstructures are continuously reconstructed and relocated along the shaft, a process known as drift. Surprisingly, our results showed that most superstructures did not drift at all. Instead, we identified a novel mechanism for bone scaling, whereby each bone exhibits a specific and unique balance between proximal and distal growth rates, which accurately maintains the relative position of its superstructures. Moreover, we show mathematically that this mechanism minimizes the cumulative drift of all superstructures, thereby optimizing the scaling process. Our study reveals a general mechanism for the scaling of developing bones. More broadly, these findings suggest an evolutionary mechanism that facilitates variability in bone morphology by controlling the activity of

  18. Fine-scale hydrodynamics influence the spatio-temporal distribution of harbour porpoises at a coastal hotspot

    Jones, A. R.; Hosegood, P.; Wynn, R. B.; De Boer, M. N.; Butler-Cowdry, S.; Embling, C. B.

    2014-11-01

    The coastal Runnelstone Reef, off southwest Cornwall (UK), is characterised by complex topography and strong tidal flows and is a known high-density site for harbour porpoise (Phocoena phocoena), a European protected species. Using a multidisciplinary dataset including porpoise sightings from a multi-year land-based survey, Acoustic Doppler Current Profiling (ADCP), vertical profiling of water properties and high-resolution bathymetry, we investigate how interactions between tidal flow and topography drive the fine-scale porpoise spatio-temporal distribution at the site. Porpoise sightings were distributed non-uniformly within the survey area with highest sighting density recorded in areas with steep slopes and moderate depths. Greater numbers of sightings were recorded during strong westward (ebbing) tidal flows compared to strong eastward (flooding) flows and slack water periods. ADCP and Conductivity Temperature Depth (CTD) data identified fine-scale hydrodynamic features, associated with cross-reef tidal flows, in the sections of the survey area with the highest recorded densities of porpoises. We observed layered, vertically sheared flows that were susceptible to the generation of turbulence by shear instability. Additionally, the intense, oscillatory near-surface currents led to hydraulically controlled flow that transitioned from subcritical to supercritical conditions, indicating that highly turbulent and energetic hydraulic jumps were generated along the eastern and western slopes of the reef. The depression and release of isopycnals in the lee of the reef during cross-reef flows revealed that the flow released lee waves during upslope currents at specific phases of the tidal cycle when the highest sighting rates were recorded. The results of this unique, fine-scale field study provide new insights into specific hydrodynamic features, produced through tidal forcing, that may be important for creating predictable foraging opportunities for porpoises at a

  19. Subspace Barzilai-Borwein Gradient Method for Large-Scale Bound Constrained Optimization

    Xiao Yunhai; Hu Qingjie

    2008-01-01

    An active set subspace Barzilai-Borwein gradient algorithm for large-scale bound constrained optimization is proposed. The active sets are estimated by an identification technique. The search direction consists of two parts: some of the components are simply defined; the other components are determined by the Barzilai-Borwein gradient method. In this work, a nonmonotone line search strategy that guarantees global convergence is used. Preliminary numerical results show that the proposed method is promising, and competitive with the well-known SPG method on a subset of bound constrained problems from the CUTEr collection
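    The core ingredient of the method above is the Barzilai-Borwein step length; the sketch below applies projected Barzilai-Borwein gradient steps to a bound-constrained quadratic as a simplified stand-in (the test problem, safeguards and stopping rule are assumptions, and the active-set identification and nonmonotone line search of the actual algorithm are omitted).

```python
# Minimal sketch (simplified; not the authors' exact algorithm): projected
# Barzilai-Borwein gradient steps for a bound-constrained quadratic.
import numpy as np

def project(x, lower, upper):
    return np.clip(x, lower, upper)

# Hypothetical problem: min 0.5*x'Ax - b'x  subject to  lower <= x <= upper.
rng = np.random.default_rng(1)
n = 50
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive definite
b = rng.normal(size=n)
lower, upper = -np.ones(n), np.ones(n)

grad = lambda x: A @ x - b
x = project(np.zeros(n), lower, upper)
g = grad(x)
alpha = 1.0 / np.linalg.norm(g)      # initial step length (assumed)

for k in range(200):
    x_new = project(x - alpha * g, lower, upper)
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g
    if np.linalg.norm(s) < 1e-10:
        break
    # BB1 step length: alpha = (s's)/(s'y), safeguarded to stay positive and bounded.
    sy = s @ y
    alpha = (s @ s) / sy if sy > 1e-12 else 1.0
    alpha = min(max(alpha, 1e-6), 1e6)
    x, g = x_new, g_new

print("projected gradient norm:", np.linalg.norm(project(x - g, lower, upper) - x))
```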

  20. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use Agent-Based Models (ABM) to optimize large scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper employ either computational algorithms or procedural implementations developed in Matlab to simulate agent-based models in a principal programming language and mathematical theory; the simulations are run on computing clusters, which provide the high-performance parallel computation required. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  1. Multi Scale Multi Temporal Near Real Time Approach for Volcanic Eruptions monitoring, Test Case: Mt Etna eruption 2017

    Buongiorno, M. F.; Silvestri, M.; Musacchio, M.

    2017-12-01

    In this work a complete processing chain for active volcanoes, from the detection of the beginning of an eruption to the estimation of lava flow temperature, using remote sensing data is presented, showing the results for the Mt. Etna eruption of March 2017. The early detection of a new eruption is based on the capabilities of geostationary, very low spatial resolution satellites (3x3 km in nadir view); the hot spot/lava flow evolution is derived from S2 polar medium/high spatial resolution data (20x20 m), while the surface temperature is estimated from polar medium/low spatial resolution sensors such as L8, ASTER and S3 (from 90 m up to 1 km). This approach merges two outcomes: one derived from activities performed for monitoring purposes within INGV R&D, and the results obtained by the ESA-funded Geohazards Exploitation Platform (GEP) project, aimed at the development of a shared platform providing services based on EO data. Because of the variety of phenomena to be analyzed, a multi-temporal, multi-scale approach has been used to implement suitable and robust algorithms for the different sensors. With the exception of Sentinel 2 (MSI) data, for which the algorithm used is based on NIR-SWIR bands, we exploit the MIR-TIR channels of L8, ASTER, S3 and SEVIRI to generate the surface thermal state analysis automatically. The developed procedure produces time series data and allows information to be extracted from each single co-registered pixel, to highlight variations of temperature within specific areas. The final goal is to implement an easy tool which enables scientists and users to extract valuable information from satellite time series at different scales produced by ESA and EUMETSAT in the frame of Europe's Copernicus program and other Earth observation satellite programs such as LANDSAT (USGS) and GOES (NOAA).

  2. Temporal development and chemical efficiency of positive streamers in a large scale wire-plate reactor as a function of voltage waveform parameters

    Winands, G.J.J.; Liu, Zhen; Pemen, A.J.M.; Heesch, van E.J.M.; Yan, K.; Veldhuizen, van E.M.

    2006-01-01

    In this paper a large-scale pulsed corona system is described in which pulse parameters such as pulse rise-time, peak voltage, pulse width and energy per pulse can be varied. The chemical efficiency of the system is determined by measuring ozone production. The temporal and spatial development of

  3. Network-scale spatial and temporal variation in Chinook salmon (Oncorhynchus tshawytscha) redd distributions: patterns inferred from spatially continuous replicate surveys

    Daniel J. Isaak; Russell F. Thurow

    2006-01-01

    Spatially continuous sampling designs, when temporally replicated, provide analytical flexibility and are unmatched in their ability to provide a dynamic system view. We have compiled such a data set by georeferencing the network-scale distribution of Chinook salmon (Oncorhynchus tshawytscha) redds across a large wilderness basin (7330 km2) in...

  4. Collective synchronization of self/non-self discrimination in T cell activation, across multiple spatio-temporal scales

    Altan-Bonnet, Gregoire

    The immune system is a collection of cells whose function is to eradicate pathogenic infections and malignant tumors while protecting healthy tissues. Recent work has delineated key molecular and cellular mechanisms associated with the ability to discriminate self from non-self agents. For example, structural studies have quantified the biophysical characteristics of antigenic molecules (those prone to trigger lymphocyte activation and a subsequent immune response). However, such molecular mechanisms were found to be highly unreliable at the individual cellular level. We will present recent efforts to build experimentally validated computational models of the immune responses at the collective cell level. Such models have become critical to delineate how higher-level integration through nonlinear amplification in signal transduction, dynamic feedback in lymphocyte differentiation and cell-to-cell communication allows the immune system to enforce reliable self/non-self discrimination at the organism level. In particular, we will present recent results demonstrating how T cells tune their antigen discrimination according to cytokine cues, and how competition for cytokine within polyclonal populations of cells shape the repertoire of responding clones. Additionally, we will present recent theoretical and experimental results demonstrating how competition between diffusion and consumption of cytokines determine the range of cell-cell communications within lymphoid organs. Finally, we will discuss how biochemically explicit models, combined with quantitative experimental validation, unravel the relevance of new feedbacks for immune regulations across multiple spatial and temporal scales.

  5. Thermal System Analysis and Optimization of Large-Scale Compressed Air Energy Storage (CAES)

    Zhongguang Fu

    2015-08-01

    Full Text Available As an important solution to issues regarding peak load and renewable energy resources on grids, large-scale compressed air energy storage (CAES) power generation technology has recently become a popular research topic in the area of large-scale industrial energy storage. At present, the combination of high-expansion-ratio turbines with advanced gas turbine technology is an important breakthrough in energy storage technology. In this study, a new gas turbine power generation system is coupled with current CAES technology. Moreover, the thermodynamic cycle is optimized by calculating the parameters of the thermodynamic system. Results show that the thermal efficiency of the new system increases by at least 5% over that of the existing system.

  6. A Precision-Positioning Method for a High-Acceleration Low-Load Mechanism Based on Optimal Spatial and Temporal Distribution of Inertial Energy

    Xin Chen

    2015-09-01

    Full Text Available High-speed and precision positioning are fundamental requirements for high-acceleration low-load mechanisms in integrated circuit (IC) packaging equipment. In this paper, we derive the transient nonlinear dynamic-response equations of high-acceleration mechanisms, which reveal that stiffness, frequency, damping, and driving frequency are the primary factors. Therefore, we propose a new structural optimization and velocity-planning method for the precision positioning of a high-acceleration mechanism based on the optimal spatial and temporal distribution of inertial energy. For structural optimization, we first reviewed the commonly used flexible multibody dynamic optimization based on the equivalent static loads method (ESLM), and then selected the modified ESLM for optimal spatial distribution of inertial energy; hence, not only the stiffness but also the inertia and frequency of the real modal shapes are considered. For velocity planning, we developed a new velocity-planning method based on nonlinear dynamic-response optimization with varying motion conditions. Our method was verified on a high-acceleration die bonder. The amplitude of residual vibration could be decreased by more than 20% via structural optimization, and the positioning time could be reduced by more than 40% via asymmetric variable velocity planning. This method provides effective theoretical support for the precision positioning of high-acceleration low-load mechanisms.

  7. Optimization of the blade trailing edge geometric parameters for a small scale ORC turbine

    Zhang, L.; Zhuge, W. L.; Peng, J.; Liu, S. J.; Zhang, Y. J.

    2013-12-01

    In general, the method proposed by Whitfield and Baines is adopted for the turbine preliminary design. In this design procedure for the turbine blade trailing edge geometry, two assumptions (ideal gas and zero discharge swirl) and two experience values (WR and γ) are used to obtain the three blade trailing edge geometric parameters: the relative exit flow angle β6, the exit tip radius R6t and the hub radius R6h, for the purpose of maximizing the rotor total-to-static isentropic efficiency. The method above is established based on experience and test results using air as the working fluid, so it does not provide a mathematical optimal solution to guide the optimization of the geometric parameters, nor does it consider the real gas effects of the organic working fluid, which must be taken into consideration in the ORC turbine design procedure. In this paper, a new preliminary design and optimization method is established for the purpose of reducing the exit kinetic energy loss to improve the turbine efficiency ηts, and the blade trailing edge geometric parameters for a small scale ORC turbine with working fluid R123 are optimized based on this method. The mathematical optimal solution to minimize the exit kinetic energy is deduced, which can be used to design and optimize the exit shroud/hub radius and exit blade angle. Then, the influence of the blade trailing edge geometric parameters on turbine efficiency ηts is analysed and the optimal working ranges of these parameters for the equations are recommended in consideration of working fluid R123. This method is used to reduce the exit kinetic energy loss of an existing ORC turbine from 11.7% to 7%, which indicates the effectiveness of the method. However, the internal passage loss increases from 7.9% to 9.4%, so the only way to consider the influence of geometric parameters on internal passage loss is to give the empirical ranges of these parameters, such as the recommended ranges that the value of γ is at 0.3 to 0.4, and the value

  8. Optimization of the blade trailing edge geometric parameters for a small scale ORC turbine

    Zhang, L; Zhuge, W L; Liu, S J; Zhang, Y J; Peng, J

    2013-01-01

    In general, the method proposed by Whitfield and Baines is adopted for the turbine preliminary design. In this design procedure for the turbine blade trailing edge geometry, two assumptions (ideal gas and zero discharge swirl) and two experience values (WR and γ) are used to obtain the three blade trailing edge geometric parameters: the relative exit flow angle β6, the exit tip radius R6t and the hub radius R6h, for the purpose of maximizing the rotor total-to-static isentropic efficiency. The method above is established based on experience and test results using air as the working fluid, so it does not provide a mathematical optimal solution to guide the optimization of the geometric parameters, nor does it consider the real gas effects of the organic working fluid, which must be taken into consideration in the ORC turbine design procedure. In this paper, a new preliminary design and optimization method is established for the purpose of reducing the exit kinetic energy loss to improve the turbine efficiency ηts, and the blade trailing edge geometric parameters for a small scale ORC turbine with working fluid R123 are optimized based on this method. The mathematical optimal solution to minimize the exit kinetic energy is deduced, which can be used to design and optimize the exit shroud/hub radius and exit blade angle. Then, the influence of the blade trailing edge geometric parameters on turbine efficiency ηts is analysed and the optimal working ranges of these parameters for the equations are recommended in consideration of working fluid R123. This method is used to reduce the exit kinetic energy loss of an existing ORC turbine from 11.7% to 7%, which indicates the effectiveness of the method. However, the internal passage loss increases from 7.9% to 9.4%, so the only way to consider the influence of geometric parameters on internal passage loss is to give the empirical ranges of these parameters, such as the recommended ranges that the value of γ is at 0.3 to 0.4, and the

  9. Multi-scale approach to the environmental factors effects on spatio-temporal variability of Chironomus salinarius (Diptera: Chironomidae) in a French coastal lagoon

    Cartier, V.; Claret, C.; Garnier, R.; Fayolle, S.; Franquet, E.

    2010-03-01

    The complexity of the relationships between environmental factors and organisms can be revealed by sampling designs which consider the contribution to variability of different temporal and spatial scales, compared to total variability. From a management perspective, a multi-scale approach can lead to time-saving. Identifying environmental patterns that help maintain patchy distribution is fundamental in studying coastal lagoons, transition zones between continental and marine waters characterised by great environmental variability on spatial and temporal scales. They often present organic enrichment inducing decreased species richness and increased densities of opportunist species like Chironomus salinarius, a common species that tends to swarm and thus constitutes a nuisance for human populations. This species is dominant in the Bolmon lagoon, a French Mediterranean coastal lagoon under eutrophication. Our objective was to quantify variability due to both spatial and temporal scales and identify the contribution of different environmental factors to this variability. The population of C. salinarius was sampled from June 2007 to June 2008 every two months at 12 sites located in two areas of the Bolmon lagoon, at two different depths, with three sites per area-depth combination. Environmental factors (temperature, dissolved oxygen both in sediment and under water surface, sediment organic matter content and grain size) and microbial activities (i.e. hydrolase activities) were also considered as explanatory factors of chironomid densities and distribution. ANOVA analysis reveals significant spatial differences regarding the distribution of chironomid larvae for the area and the depth scales and their interaction. The spatial effect is also revealed for dissolved oxygen (water), salinity and fine particles (area scale), and for water column depth. All factors but water column depth show a temporal effect. Spearman's correlations highlight the seasonal effect

  10. A method to generate fully multi-scale optimal interpolation by combining efficient single process analyses, illustrated by a DINEOF analysis spiced with a local optimal interpolation

    J.-M. Beckers

    2014-10-01

    Full Text Available We present a method in which the optimal interpolation of multi-scale processes can be expanded into a succession of simpler interpolations. First, we prove how the optimal analysis of a superposition of two processes can be obtained by different mathematical formulations involving iterations and analysis focusing on a single process. From the different mathematically equivalent formulations, we then select the most efficient ones by analyzing the behavior of the different possibilities in a simple and well-controlled test case. The clear guidelines deduced from this experiment are then applied to a real situation in which we combine large-scale analysis of hourly Spinning Enhanced Visible and Infrared Imager (SEVIRI) satellite images using data interpolating empirical orthogonal functions (DINEOF) with a local optimal interpolation using a Gaussian covariance. It is shown that the optimal combination indeed provides the best reconstruction and can therefore be exploited to extract the maximum amount of useful information from the original data.
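    As a hedged illustration of the local optimal interpolation component mentioned above, the sketch below analyses a few scattered observations onto a one-dimensional grid with a Gaussian background covariance; the length scale, error variances and observations are invented, and the DINEOF large-scale step is not reproduced.

```python
# Minimal sketch (illustrative assumptions): optimal interpolation of scattered
# observations onto a 1-D grid with a Gaussian background covariance.
import numpy as np

L = 0.2            # covariance length scale (assumed)
sigma_b2 = 1.0     # background error variance (assumed)
sigma_o2 = 0.1     # observation error variance (assumed)

grid = np.linspace(0.0, 1.0, 101)          # analysis grid
obs_x = np.array([0.1, 0.35, 0.5, 0.8])    # observation locations (synthetic)
obs_y = np.array([0.9, 0.2, -0.4, 0.6])    # observation anomalies w.r.t. background

def gauss_cov(xa, xb):
    d = xa[:, None] - xb[None, :]
    return sigma_b2 * np.exp(-0.5 * (d / L) ** 2)

B_go = gauss_cov(grid, obs_x)              # grid-to-observation covariances
B_oo = gauss_cov(obs_x, obs_x)             # observation-to-observation covariances
R = sigma_o2 * np.eye(len(obs_x))          # observation error covariance

# OI analysis increment: x_a = x_b + B H' (H B H' + R)^-1 (y - H x_b),
# written here for anomalies (background already subtracted).
weights = np.linalg.solve(B_oo + R, obs_y)
analysis = B_go @ weights

print(analysis[::20])   # analysed anomaly at a few grid points
```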

  11. Characterisation of Hydrological Response to Rainfall at Multi Spatio-Temporal Scales in Savannas of Semi-Arid Australia

    Ben Jarihani

    2017-07-01

    Full Text Available Rainfall is the main driver of hydrological processes in dryland environments and characterising the rainfall variability and the processes of runoff generation is critical for understanding ecosystem function of catchments. Using remote sensing and in situ data sets, we assess the spatial and temporal variability of rainfall, the rainfall–runoff response, and the effects on runoff coefficients of antecedent soil moisture and ground cover at different spatial scales. This analysis was undertaken in the Upper Burdekin catchment, northeast Australia, which is a major contributor of sediment and nutrients to the Great Barrier Reef. The high temporal and spatial variability of rainfall is found to exert significant control on runoff generation processes. Rainfall amount and intensity are the primary runoff controls, and runoff coefficients for wet antecedent conditions were higher than for dry conditions. The majority of runoff occurred via surface runoff generation mechanisms, with subsurface runoff likely contributing little due to the intense nature of rainfall events. MODIS monthly ground cover data showed better results in distinguishing the effects of ground cover on runoff than Landsat-derived seasonal ground cover data. We conclude that in the range of moderate to large catchments (193–36,260 km2) runoff generation processes are sensitive to both antecedent soil moisture and ground cover. A higher runoff–ground cover correlation in drier months with sparse ground cover highlighted the critical role of cover at the onset of the wet season (the driest period) and how runoff generation is more sensitive to cover in drier months than in wetter months. The monthly water balance analysis indicates that runoff generation in wetter months (January and February) is partially influenced by saturation overland flow, most likely confined to saturated soils in riparian corridors, swales, and areas of shallow soil. By March and continuing through October

  12. All-automatic swimmer tracking system based on an optimized scaled composite JTC technique

    Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.

    2016-04-01

    In this paper, an all-automatic optimized JTC-based swimmer tracking system is proposed and evaluated on a real video database drawn from national and international swimming competitions (French National Championship, Limoges 2015; FINA World Championships, Barcelona 2013 and Kazan 2015). First, we proposed to calibrate the swimming pool using the DLT algorithm (Direct Linear Transformation). DLT calculates the homography matrix given a sufficient set of correspondence points between pixel and metric coordinates, i.e. it takes into account the dimensions of the swimming pool and the type of the swim. Once the swimming pool is calibrated, we extract the lane. Then we apply a motion detection approach to detect the swimmer globally in this lane. Next, we apply our optimized Scaled Composite JTC, which consists of creating an adapted input plane that contains the predicted region and the head reference image. The latter is generated using a composite filter of fin images chosen from the database. The dimension of this reference is scaled according to the ratio between the head's dimension and the width of the swimming lane. Finally, applying the proposed approach improves the performance of our previous tracking method by adding a detection module, achieving an all-automatic swimmer tracking system.
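
    The DLT calibration step mentioned above is a standard algorithm; a minimal NumPy sketch of homography estimation from pixel-to-metric point correspondences is given below (the corner coordinates are made up for illustration and the code is not the authors' implementation).

    import numpy as np

    def dlt_homography(pixels, metres):
        # Estimate the 3x3 homography H mapping pixel coordinates to metric pool
        # coordinates from n >= 4 correspondences, via the null space of the DLT matrix.
        rows = []
        for (x, y), (u, v) in zip(pixels, metres):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
        H = vt[-1].reshape(3, 3)        # singular vector of the smallest singular value
        return H / H[2, 2]

    def pixel_to_metric(H, pts):
        pts_h = np.c_[pts, np.ones(len(pts))] @ H.T
        return pts_h[:, :2] / pts_h[:, 2:3]

    # example: four pool-corner correspondences (hypothetical pixels, 50 m x 25 m pool)
    px = np.array([[102, 58], [1180, 64], [1215, 690], [85, 702]], dtype=float)
    metric = np.array([[0, 0], [50, 0], [50, 25], [0, 25]], dtype=float)
    H = dlt_homography(px, metric)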

  13. The SWAN-SCALE code for the optimization of critical systems

    Greenspan, E.; Karni, Y.; Regev, D.; Petrie, L.M.

    1999-01-01

    The SWAN optimization code was recently developed to identify the maximum value of k_eff for a given mass of fissile material when in combination with other specified materials. The optimization process is iterative; in each iteration SWAN varies the zone-dependent concentration of the system constituents. This change is guided by the equal volume replacement effectiveness functions (EVREF) that SWAN generates using first-order perturbation theory. Previously, SWAN did not have provisions to account for the effect of the composition changes on neutron cross-section resonance self-shielding; it used the cross sections corresponding to the initial system composition. In support of the US Department of Energy Nuclear Criticality Safety Program, the authors recently removed the limitation on resonance self-shielding by coupling SWAN with the SCALE code package. The purpose of this paper is to briefly describe the resulting SWAN-SCALE code and to illustrate the effect that neutron cross-section self-shielding could have on the maximum k_eff and on the corresponding system composition.

  14. Cost and optimal feed-in tariff for small scale photovoltaic systems in China

    Rigter, Jasper; Vidican, Georgeta

    2010-01-01

    China has recently become a dominant player in the solar photovoltaic (PV) industry, producing more than one-third of the global supply of solar cells in 2008. However, as of 2008, less than 1% of global installations were based in China. Recently, the government has stated its grand ambitions of expanding the share of electricity derived from solar power. As part of this initiative, policy makers are currently in the process of drafting a feed-in tariff policy to support the development of the solar energy market. In this paper, we aim to calculate what the level of such a tariff should be. We develop a closed form equation for the cost of PV, and use forecasts on prices of solar systems to derive an optimal feed-in tariff, including a degression rate. The focus is on the potential of residential and small scale commercial solar PV installations. We show that the cost of small scale PV in China has decreased rapidly during the period 2005-2009. Our analysis also shows that optimal feed-in tariffs vary widely between regions within China, and that grid parity could be reached in large parts of the country depending on the expected escalation in electricity prices. (author)
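
    As a rough illustration of how a cost-based tariff with a degression rate can be computed, the Python sketch below combines a capital-recovery-factor LCOE with an annual degression schedule; all numbers are placeholders, not the paper's data or its closed-form equation.

    def lcoe(capex_per_kw, opex_frac, lifetime_yr, discount_rate, kwh_per_kw_yr):
        # Levelised cost of electricity per kWh using a capital recovery factor (CRF)
        crf = discount_rate * (1 + discount_rate) ** lifetime_yr / \
              ((1 + discount_rate) ** lifetime_yr - 1)
        return capex_per_kw * (crf + opex_frac) / kwh_per_kw_yr

    def tariff_schedule(start_tariff, degression_rate, years):
        # Feed-in tariff declining by a fixed degression rate each year
        return [start_tariff * (1 - degression_rate) ** t for t in range(years)]

    # illustrative inputs: 20000 RMB/kW system cost, 1% O&M, 25-year life,
    # 8% discount rate, 1300 kWh/kW/yr yield, 8% annual degression
    start = lcoe(20000, 0.01, 25, 0.08, 1300)
    schedule = tariff_schedule(start, 0.08, 10)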

  15. Evolutionary-Optimized Photonic Network Structure in White Beetle Wing Scales.

    Wilts, Bodo D; Sheng, Xiaoyuan; Holler, Mirko; Diaz, Ana; Guizar-Sicairos, Manuel; Raabe, Jörg; Hoppe, Robert; Liu, Shu-Hao; Langford, Richard; Onelli, Olimpia D; Chen, Duyu; Torquato, Salvatore; Steiner, Ullrich; Schroer, Christian G; Vignolini, Silvia; Sepe, Alessandro

    2018-05-01

    Most studies of structural color in nature concern periodic arrays, which through the interference of light create color. The "color" white, however, relies on the multiple scattering of light within a randomly structured medium, which randomizes the direction and phase of incident light. Opaque white materials therefore must be much thicker than periodic structures. It is known that flying insects create "white" in extremely thin layers. This raises the question of whether evolution has optimized the wing scale morphology for white reflection at a minimum material use. This hypothesis is difficult to prove, since it requires detailed knowledge of the scattering morphology combined with a suitable theoretical model. Here, a cryoptychographic X-ray tomography method is employed to obtain a full 3D structural dataset of the network morphology within a white beetle wing scale. By digitally manipulating this 3D representation, this study demonstrates that this morphology indeed provides the highest white retroreflection at the minimum use of material, and hence weight for the organism. Changing any of the network parameters (within the parameter space accessible by biological materials) either increases the weight, increases the thickness, or reduces reflectivity, providing clear evidence for the evolutionary optimization of this morphology. © 2017 The Authors. Published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Optimizing multiple reliable forward contracts for reservoir allocation using multitime scale streamflow forecasts

    Lu, Mengqian; Lall, Upmanu; Robertson, Andrew W.; Cook, Edward

    2017-03-01

    Streamflow forecasts at multiple time scales provide a new opportunity for reservoir management to address competing objectives. Market instruments such as forward contracts with specified reliability are considered as a tool that may help address the perceived risk associated with the use of such forecasts in lieu of traditional operation and allocation strategies. A water allocation process that enables multiple contracts for water supply and hydropower production with different durations, while maintaining a prescribed level of flood risk reduction, is presented. The allocation process is supported by an optimization model that considers multitime scale ensemble forecasts of monthly streamflow and flood volume over the upcoming season and year, the desired reliability and pricing of proposed contracts for hydropower and water supply. It solves for the size of contracts at each reliability level that can be allocated for each future period, while meeting target end of period reservoir storage with a prescribed reliability. The contracts may be insurable, given that their reliability is verified through retrospective modeling. The process can allow reservoir operators to overcome their concerns as to the appropriate skill of probabilistic forecasts, while providing water users with short-term and long-term guarantees as to how much water or energy they may be allocated. An application of the optimization model to the Bhakra Dam, India, provides an illustration of the process. The issues of forecast skill and contract performance are examined. A field engagement of the idea is useful to develop a real-world perspective and needs a suitable institutional environment.
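
    A stripped-down illustration of contract sizing from an ensemble forecast at a prescribed reliability is sketched below in Python; the storage figures, gamma-distributed ensemble and single-period setting are assumptions for illustration only and do not reproduce the paper's optimization model.

    import numpy as np

    def contract_size(inflow_ensemble, initial_storage, target_storage,
                      flood_buffer, reliability):
        # Volume that can be promised with the given reliability: the (1 - reliability)
        # quantile of water available after meeting the storage target and flood buffer.
        available = initial_storage + inflow_ensemble - target_storage - flood_buffer
        available = np.clip(available, 0.0, None)
        return np.quantile(available, 1.0 - reliability)

    # e.g. a 90%-reliable seasonal contract from a 500-member inflow ensemble
    rng = np.random.default_rng(0)
    ensemble = rng.gamma(shape=4.0, scale=50.0, size=500)
    size_90 = contract_size(ensemble, initial_storage=300.0, target_storage=250.0,
                            flood_buffer=40.0, reliability=0.9)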

  17. Spatio-temporal optimization of agricultural practices to achieve a sustainable development at basin level; framework of a case study in Colombia

    Uribe, Natalia; corzo, Gerald; Solomatine, Dimitri

    2016-04-01

    The flood events of recent years in different basins of the Colombian territory have raised questions about the sensitivity of these regions and whether they share common features. Previous studies suggest that important features in the sensitivity of the flood process were land cover change and precipitation anomalies, related to impacts of agricultural management and water management deficiencies, among others. A significant government investment in outreach activities for adopting and promoting the Colombia National Action Plan on Climate Change (NAPCC) is being carried out in different sectors and regions, with the agriculture sector as a priority. However, more local information is still needed in order to assess where the regions have this sensitivity. The continuous change in a region with seasonal agricultural practices has also been pointed out as critical information for optimal sustainable development. This combined spatio-temporal dynamics of crop cycles in relation to climate change (or variations) has an important impact on flooding events at basin areas. This research will develop the assessment and optimization of the aggregated impact of flood events determined by the spatio-temporal dynamics of changes in agricultural management practices. A number of common best agricultural practices have been identified to explore their effect in a spatial hydrological model that will evaluate overall changes. The optimization process consists of evaluating the best performance in agricultural production, without having to change crop activities or move to other regions. To achieve these objectives, a deep analysis of different models combined with current and future climate scenarios has been planned. An algorithm has been formulated to cover the parametric updates such that the optimal temporal identification will be evaluated in different regions of the case study area. Different hydroinformatics

  18. An optimal beam alignment method for large-scale distributed space surveillance radar system

    Huang, Jian; Wang, Dongya; Xia, Shuangzhi

    2018-06-01

    Large-scale distributed space surveillance radar is very important ground-based equipment for maintaining a complete catalogue of Low Earth Orbit (LEO) space debris. However, because the sites of the distributed radar system are separated by thousands of kilometers, optimally aligning the Transmitting/Receiving (T/R) beams over a large volume using narrow beams poses a considerable technical challenge in the space surveillance area. Based on a common coordinate transformation model and the radar beam space model, we present a two-dimensional projection algorithm for the T/R beams using direction angles, which can visually describe and assess the beam alignment performance. Subsequently, the optimal mathematical models for the orientation angle of the antenna array, the site location and the T/R beam coverage are constructed, and the beam alignment parameters are precisely solved. Finally, we conducted optimal beam alignment experiments based on the site parameters of the Air Force Space Surveillance System (AFSSS). The simulation results demonstrate the correctness and effectiveness of our method, which can significantly support the construction of LEO space debris surveillance equipment.
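
    The "common coordinate transformation model" underlying such beam-pointing calculations is typically a geodetic-to-ECEF conversion followed by a local East-North-Up projection and direction angles; a generic Python sketch under WGS-84 assumptions is shown below (it is not the authors' algorithm).

    import numpy as np

    A = 6378137.0                  # WGS-84 semi-major axis (m)
    E2 = 6.69437999014e-3          # WGS-84 first eccentricity squared

    def geodetic_to_ecef(lat_deg, lon_deg, h):
        lat, lon = np.radians(lat_deg), np.radians(lon_deg)
        n = A / np.sqrt(1 - E2 * np.sin(lat) ** 2)
        return np.array([(n + h) * np.cos(lat) * np.cos(lon),
                         (n + h) * np.cos(lat) * np.sin(lon),
                         (n * (1 - E2) + h) * np.sin(lat)])

    def ecef_to_enu(target, site, lat_deg, lon_deg):
        # Vector from site to target expressed in the site's East-North-Up frame
        lat, lon = np.radians(lat_deg), np.radians(lon_deg)
        r = np.array([[-np.sin(lon),                np.cos(lon),               0.0],
                      [-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)],
                      [ np.cos(lat) * np.cos(lon),  np.cos(lat) * np.sin(lon), np.sin(lat)]])
        return r @ (target - site)

    def direction_angles(enu):
        # Azimuth (clockwise from north) and elevation of a pointing vector, in degrees
        e, n, u = enu
        az = np.degrees(np.arctan2(e, n)) % 360.0
        el = np.degrees(np.arcsin(u / np.linalg.norm(enu)))
        return az, el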

  19. Optimal Capacity Allocation of Large-Scale Wind-PV-Battery Units

    Kehe Wu

    2014-01-01

    Full Text Available An optimal capacity allocation of large-scale wind-photovoltaic-battery (wind-PV-battery) units was proposed. First, an output power model was established according to meteorological conditions. Then, a wind-PV-battery unit was connected to the power grid as a power-generation unit with a rated capacity under a fixed coordinated operation strategy. Second, the utilization rate of renewable energy sources and maximum wind-PV complementarity were considered, and the objective function of full life-cycle net present cost (NPC) was calculated through a hybrid iteration/adaptive hybrid genetic algorithm (HIAGA). The optimal capacity ratio among wind generator, PV array, and battery device was also calculated simultaneously. A simulation was conducted based on the wind-PV-battery unit in Zhangbei, China. Results showed that a wind-PV-battery unit could effectively minimize the NPC of power-generation units under a stable grid-connected operation. Finally, the sensitivity analysis of the wind-PV-battery unit demonstrated that the optimization result was closely related to potential wind-solar resources and government support. Regions with rich wind resources and a reasonable government energy policy could improve the economic efficiency of their power-generation units.
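
    The capacity-allocation idea can be caricatured with a generic evolutionary search over installed capacities that minimises a penalised net present cost; the sketch below is not the HIAGA algorithm of the record, and every cost figure, capacity-credit factor and constraint is a made-up placeholder.

    import numpy as np

    rng = np.random.default_rng(0)

    def npc(capacity, unit_cost, om_frac, years=20, rate=0.06):
        # Net present cost of capital plus discounted O&M for one technology
        annuity = sum(1.0 / (1 + rate) ** t for t in range(1, years + 1))
        return capacity * unit_cost * (1.0 + om_frac * annuity)

    def total_npc(x):
        wind, pv, batt = x                       # installed capacities (kW, kW, kWh)
        cost = npc(wind, 9000, 0.03) + npc(pv, 7000, 0.01) + npc(batt, 1500, 0.02)
        # crude penalty if the mix cannot cover an assumed firm output of 1000 kW
        firm = 0.35 * wind + 0.20 * pv + 0.10 * batt
        return cost + 1e7 * max(0.0, 1000.0 - firm)

    # simple (mu + lambda) evolutionary search over the three capacities
    pop = rng.uniform(0, 5000, size=(40, 3))
    for _ in range(200):
        children = np.clip(pop + rng.normal(0, 100, size=pop.shape), 0, None)
        merged = np.vstack([pop, children])
        pop = merged[np.argsort([total_npc(x) for x in merged])][:40]
    best_wind, best_pv, best_batt = pop[0]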

  20. Mitigation of Power frequency Magnetic Fields. Using Scale Invariant and Shape Optimization Methods

    Salinas, Ener; Yueqiang Liu; Daalder, Jaap; Cruz, Pedro; Antunez de Souza, Paulo Roberto Jr; Atalaya, Juan Carlos; Paula Marciano, Fabianna de; Eskinasy, Alexandre

    2006-10-15

    The present report describes the development and application of two novel methods for implementing mitigation techniques for magnetic fields at power frequencies. The first method makes use of scaling rules for electromagnetic quantities, while the second applies a 2D shape optimization algorithm based on gradient methods. Before this project, the first method had already been successfully applied (by some of the authors of this report) to electromagnetic designs involving purely conductive materials (e.g. copper, aluminium), which implied a linear formulation. Here we went beyond this approach and developed a formulation involving ferromagnetic (i.e. non-linear) materials. Surprisingly, we obtained good equivalent replacements for test transformers by varying the input current. Although this equivalence is only valid in regions not too close to the source, the results can still be considered useful, as most field mitigation techniques are developed precisely for reducing the magnetic field in regions relatively far from the sources. The shape optimization method was applied in this project to calculate the optimal geometry of a purely conductive plate to mitigate the magnetic field originating from underground cables. The objective function was a weighted combination of the magnetic energy in the region of interest and the heat dissipated in the shielding material. To our surprise, the applied process produced shapes of complex structure, difficult to interpret (and probably even harder to anticipate). However, the practical implementation (using an approximation of these shapes) gave excellent experimental mitigation factors.

  1. Optimal design and operation of solid oxide fuel cell systems for small-scale stationary applications

    Braun, Robert Joseph

    The advent of maturing fuel cell technologies presents an opportunity to achieve significant improvements in energy conversion efficiencies at many scales, thereby simultaneously extending our finite resources and reducing "harmful" energy-related emissions to levels well below those of near-future regulatory standards. However, before the advantages of fuel cells can be realized, systems-level design issues regarding their application must be addressed. Using modeling and simulation, the present work offers optimal system design and operation strategies for stationary solid oxide fuel cell systems applied to single-family detached dwellings. A one-dimensional, steady-state finite-difference model of a solid oxide fuel cell (SOFC) is generated and verified against other mathematical SOFC models in the literature. Fuel cell system balance-of-plant components and costs are also modeled and used to provide an estimate of system capital and life cycle costs. The models are used to evaluate optimal cell-stack power output, the impact of cell operating and design parameters, fuel type, thermal energy recovery, system process design, and operating strategy on overall system energetic and economic performance. Optimal cell design voltage, fuel utilization, and operating temperature parameters are found by minimizing the life cycle costs. System design evaluations reveal that hydrogen-fueled SOFC systems demonstrate lower system efficiencies than methane-fueled systems. The use of recycled cell exhaust gases in process design in the stack periphery is found to produce the highest system electric and cogeneration efficiencies while achieving the lowest capital costs. Annual simulations reveal that efficiencies of 45% electric (LHV basis), 85% cogenerative, and simple economic paybacks of 5--8 years are feasible for 1--2 kW SOFC systems in residential-scale applications. Design guidelines that offer additional suggestions related to fuel cell

  2. Multivariate Spatio-Temporal Clustering: A Framework for Integrating Disparate Data to Understand Network Representativeness and Scaling Up Sparse Ecosystem Measurements

    Hoffman, F. M.; Kumar, J.; Maddalena, D. M.; Langford, Z.; Hargrove, W. W.

    2014-12-01

    Disparate in situ and remote sensing time series data are being collected to understand the structure and function of ecosystems and how they may be affected by climate change. However, resource and logistical constraints limit the frequency and extent of observations, particularly in the harsh environments of the arctic and the tropics, necessitating the development of a systematic sampling strategy to maximize coverage and objectively represent variability at desired scales. These regions host large areas of potentially vulnerable ecosystems that are poorly represented in Earth system models (ESMs), motivating two new field campaigns, called Next Generation Ecosystem Experiments (NGEE) for the Arctic and Tropics, funded by the U.S. Department of Energy. Multivariate Spatio-Temporal Clustering (MSTC) provides a quantitative methodology for stratifying sampling domains, informing site selection, and determining the representativeness of measurement sites and networks. We applied MSTC to down-scaled general circulation model results and data for the State of Alaska at a 4 km2 resolution to define maps of ecoregions for the present (2000-2009) and future (2090-2099), showing how combinations of 37 bioclimatic characteristics are distributed and how they may shift in the future. Optimal representative sampling locations were identified on present and future ecoregion maps, and representativeness maps for candidate sampling locations were produced. We also applied MSTC to remotely sensed LiDAR measurements and multi-spectral imagery from the WorldView-2 satellite at a resolution of about 5 m2 within the Barrow Environmental Observatory (BEO) in Alaska. At this resolution, polygonal ground features—such as centers, edges, rims, and troughs—can be distinguished. Using these remote sensing data, we up-scaled vegetation distribution data collected on these polygonal ground features to a large area of the BEO to provide distributions of plant functional types that can
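
    MSTC is, at its core, a k-means-style clustering of standardized environmental variables; the scikit-learn sketch below illustrates the idea of deriving ecoregions from many bioclimatic characteristics and picking the cell nearest each centroid as a representative sampling location, using random stand-in data rather than the Alaska or WorldView-2 datasets.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # rows = map cells, columns = bioclimatic characteristics (random stand-in data)
    rng = np.random.default_rng(1)
    cells = rng.normal(size=(10000, 37))

    z = StandardScaler().fit_transform(cells)            # equal weight per variable
    km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(z)
    ecoregion = km.labels_                                # one ecoregion label per cell

    # representative sampling location = cell closest to its ecoregion centroid
    dist = np.linalg.norm(z - km.cluster_centers_[ecoregion], axis=1)
    representatives = [np.flatnonzero(ecoregion == k)[np.argmin(dist[ecoregion == k])]
                       for k in range(km.n_clusters)]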

  3. Design and implementation of an optimal laser pulse front tilting scheme for ultrafast electron diffraction in reflection geometry with high temporal resolution

    Francesco Pennacchio

    2017-07-01

    Full Text Available Ultrafast electron diffraction is a powerful technique to investigate out-of-equilibrium atomic dynamics in solids with high temporal resolution. When diffraction is performed in reflection geometry, the main limitation is the mismatch in group velocity between the overlapping pump light and the electron probe pulses, which affects the overall temporal resolution of the experiment. A solution already available in the literature involved pulse front tilt of the pump beam at the sample, providing a sub-picosecond time resolution. However, in the reported optical scheme, the tilted pulse is characterized by a temporal chirp of about 1 ps at 1 mm away from the centre of the beam, which limits the investigation of surface dynamics in large crystals. In this paper, we propose an optimal tilting scheme designed for a radio-frequency-compressed ultrafast electron diffraction setup working in reflection geometry with 30 keV electron pulses containing up to 10^5 electrons/pulse. To characterize our scheme, we performed optical cross-correlation measurements, obtaining an average temporal width of the tilted pulse lower than 250 fs. The calibration of the electron-laser temporal overlap was obtained by monitoring the spatial profile of the electron beam when interacting with the plasma optically induced at the apex of a copper needle (plasma lensing effect). Finally, we report the first time-resolved results obtained on graphite, where the electron-phonon coupling dynamics is observed, showing an overall temporal resolution in the sub-500 fs regime. The successful implementation of this configuration opens the way to directly probe structural dynamics of low-dimensional systems in the sub-picosecond regime, with pulsed electrons.

  4. Classifying post-stroke fatigue: Optimal cut-off on the Fatigue Assessment Scale.

    Cumming, Toby B; Mead, Gillian

    2017-12-01

    Post-stroke fatigue is common and has debilitating effects on independence and quality of life. The Fatigue Assessment Scale (FAS) is a valid screening tool for fatigue after stroke, but there is no established cut-off. We sought to identify the optimal cut-off for classifying post-stroke fatigue on the FAS. In retrospective analysis of two independent datasets (the '2015' and '2007' studies), we evaluated the predictive validity of FAS score against a case definition of fatigue (the criterion standard). Area under the curve (AUC) and sensitivity and specificity at the optimal cut-off were established in the larger 2015 dataset (n=126), and then independently validated in the 2007 dataset (n=52). In the 2015 dataset, AUC was 0.78 (95% CI 0.70-0.86), with the optimal ≥24 cut-off giving a sensitivity of 0.82 and specificity of 0.66. The 2007 dataset had an AUC of 0.83 (95% CI 0.71-0.94), and applying the ≥24 cut-off gave a sensitivity of 0.84 and specificity of 0.67. Post-hoc analysis of the 2015 dataset revealed that using only the 3 most predictive FAS items together ('FAS-3') also yielded good validity: AUC 0.81 (95% CI 0.73-0.89), with sensitivity of 0.83 and specificity of 0.75 at the optimal ≥8 cut-off. We propose ≥24 as a cut-off for classifying post-stroke fatigue on the FAS. While further validation work is needed, this is a positive step towards a coherent approach to reporting fatigue prevalence using the FAS. Copyright © 2017 Elsevier Inc. All rights reserved.
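
    The cut-off selection above rests on standard sensitivity/specificity analysis; a generic NumPy sketch that computes an approximate AUC and the threshold maximising Youden's J (one common criterion, not necessarily the study's exact procedure) is shown below, with hypothetical score and case-definition arrays.

    import numpy as np

    def roc_and_cutoff(scores, truth):
        # scores: questionnaire totals; truth: 1 if the criterion standard classifies
        # the person as a case, else 0 (hypothetical layout, not the study data).
        scores, truth = np.asarray(scores), np.asarray(truth)
        thresholds = np.unique(scores)
        sens, spec = [], []
        for t in thresholds:
            pred = scores >= t
            sens.append((pred & (truth == 1)).sum() / (truth == 1).sum())
            spec.append((~pred & (truth == 0)).sum() / (truth == 0).sum())
        sens, spec = np.array(sens), np.array(spec)
        order = np.argsort(1 - spec)                      # sort by false-positive rate
        auc = np.trapz(sens[order], (1 - spec)[order])    # approximate AUC (trapezoid rule)
        best_cutoff = thresholds[np.argmax(sens + spec - 1)]   # Youden's J
        return auc, best_cutoff

    # toy usage with made-up data
    rng = np.random.default_rng(0)
    truth = rng.integers(0, 2, size=200)
    scores = 20 + 6 * truth + rng.normal(0, 4, size=200)
    auc, cutoff = roc_and_cutoff(scores, truth)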

  5. Dynamic perfusion CT: Optimizing the temporal resolution for the calculation of perfusion CT parameters in stroke patients

    Kaemena, Andreas [Department of Radiology, Charite-Medical University Berlin, Augustenburger Platz 1, D-13353 Berlin (Germany)], E-mail: andreas.kaemena@charite.de; Streitparth, Florian; Grieser, Christian; Lehmkuhl, Lukas [Department of Radiology, Charite-Medical University Berlin, Augustenburger Platz 1, D-13353 Berlin (Germany); Jamil, Basil [Department of Radiotherapy, Charite-Medical University Berlin, Schumannstr. 20/21, D-10117 Berlin (Germany); Wojtal, Katarzyna; Ricke, Jens; Pech, Maciej [Department of Radiology, Charite-Medical University Berlin, Augustenburger Platz 1, D-13353 Berlin (Germany)

    2007-10-15

    Purpose: To assess the influence of different temporal sampling rates on the accuracy of the results from cerebral perfusion CTs in patients with an acute ischemic stroke. Material and methods: Thirty consecutive patients with acute stroke symptoms received a dynamic perfusion CT (LightSpeed 16, GE). Forty millilitres of iomeprol (Imeron 400) were administered at an injection rate of 4 ml/s. After a scan delay of 7 s, two adjacent 10 mm slices at 80 kV and 190 mA were acquired in a cine mode technique with a cine duration of 49 s. Parametric maps for the blood flow (BF), blood volume (BV) and mean transit time (MTT) were calculated for temporal sampling intervals of 0.5, 1, 2, 3 and 4 s using GE's Perfusion 3 software package. In addition to the quantitative ROI data analysis, a visual perfusion map analysis was performed. Results: The perfusion analysis proved to be technically feasible with all patients. The calculated perfusion values revealed significant differences with regard to the BF, BV and MTT, depending on the employed temporal resolution. The perfusion contrast between ischemic lesions and healthy brain tissue decreased continuously at the lower temporal resolutions. The visual analysis revealed that ischemic lesions were best depicted with sampling intervals of 0.5 and 1 s. Conclusion: We recommend a temporal scan resolution of two images per second for the best detection and depiction of ischemic areas.

  6. [Clinical application and optimization of HEAD-US quantitative ultrasound assessment scale for hemophilic arthropathy].

    Li, J; Guo, X J; Ding, X L; Lyu, B M; Xiao, J; Sun, Q L; Li, D S; Zhang, W F; Zhou, J C; Li, C P; Yang, R C

    2018-02-14

    Objective: To assess the feasibility of the HEAD-US scale in the clinical evaluation of hemophilic arthropathy (HA) and to propose an optimized ultrasound scoring system. Methods: From July 2015 to August 2017, 1 035 ultrasonographic joint examinations were performed in 91 patients. Melchiorre, HEAD-US (Hemophilic Early Arthropathy Detection with UltraSound) and HEAD-US-C (HEAD-US in China) scale scores were used respectively to analyze the results. The correlations between the three ultrasound scales and the Hemophilia Joint Health Score (HJHS) were evaluated, and the sensitivity of the above ultrasound scoring systems in the evaluation of HA was compared. Results: All 91 patients were male, with a median age of 16 (4-55) years, including 86 cases of hemophilia A and 5 cases of hemophilia B. The median (P25, P75) Melchiorre, HEAD-US and HEAD-US-C scores of the 1 035 joints were 2 (0, 6), 1 (0, 5) and 2 (0, 6), respectively, and the correlation coefficients with HJHS were 0.747, 0.762 and 0.765, respectively, all statistically significant. For asymptomatic joints, the positive rates of the Melchiorre, HEAD-US-C and HEAD-US scale scores were 25.0% (95% CI 20.6%-29.6%), 17.0% (95% CI 12.6%-21.1%) and 11.9% (95% CI 8.4%-15.7%), respectively, and the difference was statistically significant. In joints of 40 patients assessed before and after joint bleeding, the difference in the variation amplitude of HEAD-US-C scores and HEAD-US scores was statistically significant (P<0.001). Conclusion: Compared with Melchiorre, HEAD-US and HEAD-US-C showed similarly good correlations with HJHS. The HEAD-US ultrasound scoring system is quick, convenient and simple to use. The optimized HEAD-US-C scale score is more sensitive than HEAD-US, especially for patients with HA in a subclinical state, which makes up for the insufficient sensitivity of the HEAD-US scoring system.

  7. Concentrations of environmental DNA (eDNA) reflect spawning salmon abundance at fine spatial and temporal scales

    Tillotson, Michael D.; Kelly, Ryan P.; Duda, Jeff; Hoy, Marshal S.; Kralj, James; Quinn, Thomas P.

    2018-01-01

    Developing fast, cost-effective assessments of wild animal abundance is an important goal for many researchers, and environmental DNA (eDNA) holds much promise for this purpose. However, the quantitative relationship between species abundance and the amount of DNA present in the environment is likely to vary substantially among taxa and with ecological context. Here, we report a strong quantitative relationship between eDNA concentration and the abundance of spawning sockeye salmon in a small stream in Alaska, USA, where we took temporally- and spatially-replicated samples during the spawning period. This high-resolution dataset suggests that (1) eDNA concentrations vary significantly day-to-day, and likely within hours, in the context of the dynamic biological event of a salmon spawning season; (2) eDNA, as detected by species-specific quantitative PCR probes, seems to be conserved over short distances (tens of meters) in running water, but degrade quickly over larger scales (ca. 1.5 km); and (3) factors other than the mere presence of live, individual fish — such as location within the stream, live/dead ratio, and water temperature — can affect the eDNA-biomass correlation in space or time. A multivariate model incorporating both biotic and abiotic variables accounted for over 75% of the eDNA variance observed, suggesting that where a system is well-characterized, it may be possible to predict species' abundance from eDNA surveys, although we underscore that species- and system-specific variables are likely to limit the generality of any given quantitative model. Nevertheless, these findings provide an important step toward quantitative applications of eDNA in conservation and management.

  8. Ecosystem service value assessment in temporal and spatial scales in Dalian, China: implications for urban development policy

    Xin, Z.; Chen, X.; Fu, G.; Li, C.

    2017-12-01

    Landscapes differ in their capacities to provide ecosystem goods and services, which are the benefits humans obtain from nature. Valuation of ecosystem services is recognized as one effective way to improve the recognition of ecosystem protection and the allocation of land resources. In this context, this study aims to reveal the changes in the provision of ecosystem services induced by land use changes at both temporal and spatial scales in Dalian, China. Land use changes were first characterized based on Landsat TM images from 1984 to 2013. Results showed a severe increase in urban area, with an average increase rate of 39.5%. Dry land occupied the largest portion of the total area and was mainly developed at the expense of forest loss; meanwhile, policies of water-saving irrigation have promoted a conversion of paddy fields to dry land. Other categories, including water, wetland, brush grass and salting, were found to make a relatively small contribution to the total area. Assigning an ecosystem service value (ESV) coefficient to each land use category, changes in the ESV of the study area were assessed. Results indicated that the total ESV decreased by 21 billion from 1984 to 2013. Forest, dry land and water are the primary contributors. As for ecosystem functions, the regulation service is the most prominent, contributing 60% of the total ESV, followed by support, supply and culture services. In addition, ESV changes were found to have spatial variability, with a maximum decreasing rate in the central city and the highest net value in the surrounding islands. The changes and distributions in land use pattern and ESV were further linked with local city landscape planning, providing implications for landscape policy making to sustain the provision of ecosystem services and achieve sustainable development goals.
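
    The valuation step amounts to multiplying each land-use area by a per-area ESV coefficient and summing; the short Python sketch below shows that bookkeeping with placeholder areas and coefficients (not the Dalian study's figures).

    # ESV for one year = sum over land-use categories of (area x value coefficient);
    # all numbers are placeholders, not the study's data
    esv_per_km2 = {"forest": 2.4e6, "dry land": 0.7e6, "paddy": 0.9e6,
                   "water": 5.0e6, "wetland": 6.8e6, "urban": 0.0}
    area_1984 = {"forest": 4100.0, "dry land": 4600.0, "paddy": 1300.0,
                 "water": 520.0, "wetland": 180.0, "urban": 700.0}
    area_2013 = {"forest": 3200.0, "dry land": 5400.0, "paddy": 800.0,
                 "water": 450.0, "wetland": 120.0, "urban": 2100.0}

    def total_esv(area):
        return sum(area[k] * esv_per_km2[k] for k in area)

    esv_change = total_esv(area_2013) - total_esv(area_1984)
    change_by_category = {k: (area_2013[k] - area_1984[k]) * esv_per_km2[k]
                          for k in esv_per_km2}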

  9. Temporal sequencing of throughfall drop generation as revealed by use of a large-scale rainfall simulator

    Nanko, K.; Levia, D. F., Jr.; Iida, S.; SUN, X.; Shinohara, Y.; Sakai, N.

    2017-12-01

    Scientists have been interested in throughfall drop size and its distribution because of its importance to soil erosion and the forest water balance. An indoor experiment was employed to deepen our understanding of throughfall drop generation processes to promote better management of forested ecosystems. The indoor experiment provides a unique opportunity to examine an array of constant rainfall intensities, ideal conditions for isolating the effect of changing intensities that are not found in the field. Throughfall drop generation was examined for three species - Cryptomeria japonica D. Don (Japanese cedar), Chamaecyparis obtusa (Siebold & Zucc.) Endl. (Japanese cypress), and Zelkova serrata Thunb. (Japanese zelkova) - under both leafed and leafless conditions in the large-scale rainfall simulator in the National Research Institute for Earth Science and Disaster Resilience (Tsukuba, Japan) at varying rainfall intensities ranging from 15 to 100 mm h-1. Drop size distributions of the applied rainfall and throughfall were measured simultaneously by 20 laser disdrometers. Utilizing the drop size dataset, throughfall was separated into three components: free throughfall, canopy drip, and splash throughfall. The temporal sequencing of the throughfall components was analyzed at a 1-min interval during each experimental run. The throughfall component percentage and drop size of canopy drip differed among tree species and rainfall intensities and by elapsed time from the beginning of the rainfall event. Preliminary analysis revealed that the time differences to produce branch drip as compared to leaf (or needle) drip were partly due to differential canopy wet-up processes and the disappearance of branch drips due to canopy saturation, leading to dissimilar throughfall drop size distributions beneath the various tree species examined. This research was supported by JSPS Invitation Fellowship for Research in Japan (Grant No.: S16088) and JSPS KAKENHI (Grant No.: JP15H05626).

  10. Scaling analysis of the optimized effective potentials for the multiplet states of multivalent 3d ions

    Hamamoto, N; Satoko, C

    2006-01-01

    We apply the optimized effective potential method (OPM) to the multivalent 3d^n (n = 2, ..., 8) ions M^(ν+) (ν = 2, ..., 8). The total energy functional is approximated by the single-configuration Hartree-Fock. The exchange potential for the average energy configuration is decomposed into the potentials derived from the F^2(3d,3d) and F^4(3d,3d) Slater integrals. To investigate properties of the density-functional potential, we have checked the scaling properties of several physical quantities such as the density, the 3d orbital and these potentials. We find that the potentials of the Slater integrals do not have the scaling property. Instead, the weighted potential V_i(r) of an ion i, which is the potential of the Slater integrals times the 3d-orbital density, satisfies the scaling property q_i^3d V_i(r) ~ q_j^3d λ^4 V_j(λr), where q_i^3d is the occupation number of the 3d orbital R_3d(r) for ion i. Furthermore, the weighted potential can be approximated by the ion-independent functional of the 3d-orbital density c_k R_3d^(8/3)(r)/q_3d, where c_2 = 0.366 and c_4 = 0.223. This suggests that the weighted potential can be expressed as a functional of the 3d-orbital density

  11. Scaled model guidelines for solar coronagraphs' external occulters with an optimized shape.

    Landini, Federico; Baccani, Cristian; Schweitzer, Hagen; Asoubar, Daniel; Romoli, Marco; Taccola, Matteo; Focardi, Mauro; Pancrazzi, Maurizio; Fineschi, Silvano

    2017-12-01

    One of the major challenges faced by externally occulted solar coronagraphs is the suppression of the light diffracted by the occulter edge. It is a contribution to the stray light that overwhelms the coronal signal on the focal plane and must be reduced by modifying the geometrical shape of the occulter. There is a rich literature, mostly experimental, on the appropriate choice of the most suitable shape. The problem arises when huge coronagraphs, such as those in formation flight, shall be tested in a laboratory. A recent contribution [Opt. Lett. 41, 757 (2016); doi:10.1364/OL.41.000757] provides the guidelines for scaling the geometry and replicating in the laboratory the flight diffraction pattern as produced by the whole solar disk and a flight occulter, but leaves the conclusion on the occulter scale law somehow unjustified. This paper provides the numerical support for validating that conclusion and presents the first-ever simulation of the diffraction behind an occulter with an optimized shape along the optical axis with the solar disk as a source. This paper, together with Opt. Lett. 41, 757 (2016), aims at constituting a complete guide for scaling the coronagraphs' geometry.

  12. Is there an optimal pension fund size? A scale-economy analysis of administrative and investment costs

    Bikker, J.A.

    2013-01-01

    This paper investigates scale economies and the optimal scale of pension funds, estimating different cost functions with varying assumptions about the shape of the underlying average cost function: U-shaped versus monotonically declining. Using unique data for Dutch pension funds over 1992-2009, we

  13. Concurrently examining unrealistic absolute and comparative optimism: Temporal shifts, individual-difference and event-specific correlates, and behavioural outcomes.

    Ruthig, Joelle C; Gamblin, Bradlee W; Jones, Kelly; Vanderzanden, Karen; Kehn, Andre

    2017-02-01

    Researchers have spent considerable effort examining unrealistic absolute optimism and unrealistic comparative optimism, yet there is a lack of research exploring them concurrently. This longitudinal study repeatedly assessed unrealistic absolute and comparative optimism within a performance context over several months to identify the degree to which they shift as a function of proximity to performance and performance feedback, their associations with global individual difference and event-specific factors, and their link to subsequent behavioural outcomes. Results showed similar shifts in unrealistic absolute and comparative optimism based on proximity to performance and performance feedback. Moreover, increases in both types of unrealistic optimism were associated with better subsequent performance beyond the effect of prior performance. However, several differences were found between the two forms of unrealistic optimism in their associations with global individual difference factors and event-specific factors, highlighting the distinctiveness of the two constructs. © 2016 The British Psychological Society.

  14. Optimization methods of pulse-to-pulse alignment using femtosecond pulse laser based on temporal coherence function for practical distance measurement

    Liu, Yang; Yang, Linghui; Guo, Yin; Lin, Jiarui; Cui, Pengfei; Zhu, Jigui

    2018-02-01

    An interferometer technique based on the temporal coherence function of femtosecond pulses is demonstrated for practical distance measurement. Here, the pulse-to-pulse alignment is analyzed for large delay distance measurement. Firstly, a temporal coherence function model between two femtosecond pulses is developed in the time domain for the dispersive unbalanced Michelson interferometer. Then, according to this model, the fringe analysis and the envelope extraction process are discussed. Meanwhile, optimization methods of pulse-to-pulse alignment for practical long distance measurement are presented. The order of the curve fitting and the selection of points for envelope extraction are analyzed. Furthermore, an averaging method based on the symmetry of the coherence function is demonstrated. Finally, the performance of the proposed methods is evaluated in the absolute distance measurement of 20 μm with a path length difference of 9 m. The improvement of the standard deviation in the experimental results shows that these approaches have the potential for practical distance measurement.

  15. Dynamics of metallic contaminants at a basin scale--Spatial and temporal reconstruction from four sediment cores (Loire fluvial system, France).

    Dhivert, E; Grosbois, C; Courtin-Nomade, A; Bourrain, X; Desmet, M

    2016-01-15

    Since the 19th century, the Loire basin (France) has hosted potentially polluting activities such as mining and heavy industry. This paper presents the spatio-temporal distribution of trace elements in sediments at the basin scale, based on a comparison of temporal signals archived in four sediment cores. Anthropogenic sources contributing to sediment contamination are also characterized, using geochemical signatures recorded in river bank sediments of the most industrialized tributaries. This study highlights upstream-downstream differences in the recorded contamination phases in terms of spatial influence and the temporality of archiving processes. Such differences were related to (i) the various spatial influences of contamination sources and (ii) the dispersion of polluted sediments controlled by the transport capacity of metal-carrier phases and hydrosedimentary dynamics. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. How spatial and temporal rainfall variability affect runoff across basin scales: insights from field observations in the (semi-)urbanised Charlotte watershed

    Ten Veldhuis, M. C.; Smith, J. A.; Zhou, Z.

    2017-12-01

    Impacts of rainfall variability on runoff response are highly scale-dependent. Sensitivity analyses based on hydrological model simulations have shown that impacts are likely to depend on combinations of storm type, basin versus storm scale, temporal versus spatial rainfall variability. So far, few of these conclusions have been confirmed on observational grounds, since high quality datasets of spatially variable rainfall and runoff over prolonged periods are rare. Here we investigate relationships between rainfall variability and runoff response based on 30 years of radar-rainfall datasets and flow measurements for 16 hydrological basins ranging from 7 to 111 km2. Basins vary not only in scale, but also in their degree of urbanisation. We investigated temporal and spatial variability characteristics of rainfall fields across a range of spatial and temporal scales to identify main drivers for variability in runoff response. We identified 3 ranges of basin size with different temporal versus spatial rainfall variability characteristics. Total rainfall volume proved to be the dominant agent determining runoff response at all basin scales, independent of their degree of urbanisation. Peak rainfall intensity and storm core volume are of secondary importance. This applies to all runoff parameters, including runoff volume, runoff peak, volume-to-peak and lag time. Position and movement of the storm with respect to the basin have a negligible influence on runoff response, with the exception of lag times in some of the larger basins. This highlights the importance of accuracy in rainfall estimation: getting the position right but the volume wrong will inevitably lead to large errors in runoff prediction. Our study helps to identify conditions where rainfall variability matters for correct estimation of the rainfall volume as well as the associated runoff response.

  17. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    Ghattas, Omar [The University of Texas at Austin

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
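
    The "reduce then sample" idea can be caricatured in a few lines: fit a cheap surrogate to a handful of full-model runs, then run a random-walk Metropolis sampler against the surrogate-based posterior. The Python sketch below does exactly that for a toy two-parameter problem; the forward model, surrogate form and noise level are all invented for illustration and have nothing to do with the SAGUARO testbed.

    import numpy as np

    rng = np.random.default_rng(0)

    def full_model(theta):
        # stand-in for an expensive forward simulation
        return np.array([np.sin(theta[0]) + theta[1], theta[0] * theta[1]])

    # "reduce": fit a cheap quadratic surrogate to 50 full-model runs
    train_theta = rng.uniform(-1, 1, size=(50, 2))
    train_out = np.array([full_model(t) for t in train_theta])
    design = np.c_[np.ones(50), train_theta, train_theta**2, train_theta.prod(axis=1)]
    coef, *_ = np.linalg.lstsq(design, train_out, rcond=None)

    def surrogate(theta):
        feats = np.r_[1.0, theta, theta**2, theta.prod()]
        return feats @ coef

    # "sample": random-walk Metropolis on the surrogate-based posterior
    obs, noise = full_model(np.array([0.3, -0.2])), 0.05
    def log_post(theta):
        return -0.5 * np.sum((surrogate(theta) - obs) ** 2) / noise**2

    theta, chain = np.zeros(2), []
    for _ in range(5000):
        proposal = theta + 0.1 * rng.normal(size=2)
        if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
            theta = proposal
        chain.append(theta)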

  18. Reaction monitoring using hyperpolarized NMR with scaling of heteronuclear couplings by optimal tracking

    Zhang, Guannan; Schilling, Franz; Glaser, Steffen J.; Hilty, Christian

    2016-11-01

    Off-resonance decoupling using the method of Scaling of Heteronuclear Couplings by Optimal Tracking (SHOT) enables determination of heteronuclear correlations of chemical shifts in single scan NMR spectra. Through modulation of J-coupling evolution by shaped radio frequency pulses, off-resonance decoupling using SHOT pulses causes a user-defined dependence of the observed J-splitting, such as the splitting of 13C peaks, on the chemical shift offset of coupled nuclei, such as 1H. Because a decoupling experiment requires only a single scan, this method is suitable for characterizing on-going chemical reactions using hyperpolarization by dissolution dynamic nuclear polarization (D-DNP). We demonstrate the calculation of [13C, 1H] chemical shift correlations of the carbanionic active sites from hyperpolarized styrene polymerized using sodium naphthalene as an initiator. While off-resonance decoupling by SHOT pulses does not enhance the resolution in the same way as a 2D NMR spectrum would, the ability to obtain the correlations in single scans makes this method ideal for determination of chemical shifts in on-going reactions on the second time scale. In addition, we present a novel SHOT pulse that allows J-splittings to be scaled 50% larger than the respective J-coupling constant. This feature can be used to enhance the resolution of the indirectly detected chemical shift and reduce peak overlap, as demonstrated in a model reaction between p-anisaldehyde and isobutylamine. For both pulses, the accuracy is evaluated under changing signal-to-noise ratios (SNR) of the peaks from reactants and reaction products, with an overall standard deviation of chemical shift differences compared to reference spectra of 0.02 ppm when measured on a 400 MHz NMR spectrometer. Notably, the appearance of decoupling side-bands, which scale with peak intensity, appears to be of secondary importance.

  19. Optimal Multi-scale Demand-side Management for Continuous Power-Intensive Processes

    Mitra, Sumit

    With the advent of deregulation in electricity markets and an increasing share of intermittent power generation sources, the profitability of industrial consumers that operate power-intensive processes has become directly linked to the variability in energy prices. Thus, for industrial consumers that are able to adjust to the fluctuations, time-sensitive electricity prices (as part of so-called Demand-Side Management (DSM) in the smart grid) offer potential economical incentives. In this thesis, we introduce optimization models and decomposition strategies for the multi-scale Demand-Side Management of continuous power-intensive processes. On an operational level, we derive a mode formulation for scheduling under time-sensitive electricity prices. The formulation is applied to air separation plants and cement plants to minimize the operating cost. We also describe how a mode formulation can be used for industrial combined heat and power plants that are co-located at integrated chemical sites to increase operating profit by adjusting their steam and electricity production according to their inherent flexibility. Furthermore, a robust optimization formulation is developed to address the uncertainty in electricity prices by accounting for correlations and multiple ranges in the realization of the random variables. On a strategic level, we introduce a multi-scale model that provides an understanding of the value of flexibility of the current plant configuration and the value of additional flexibility in terms of retrofits for Demand-Side Management under product demand uncertainty. The integration of multiple time scales leads to large-scale two-stage stochastic programming problems, for which we need to apply decomposition strategies in order to obtain a good solution within a reasonable amount of time. Hence, we describe two decomposition schemes that can be applied to solve two-stage stochastic programming problems: First, a hybrid bi-level decomposition scheme with
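
    In its simplest operational form, demand-side scheduling of a continuous process under time-sensitive prices is a linear program; the SciPy sketch below shifts an hourly production plan toward cheap hours subject to a daily energy requirement and turndown limits. The price profile, limits and daily target are invented for illustration, and the sketch is far simpler than the thesis' mode formulation or its robust and stochastic extensions.

    import numpy as np
    from scipy.optimize import linprog

    hours = 24
    price = 40 + 25 * np.sin(np.linspace(0, 2 * np.pi, hours))   # $/MWh, made-up profile
    p_min, p_max = 20.0, 100.0                                   # MW turndown limits
    daily_energy = 24 * 60.0                                     # MWh that must be produced

    # minimise sum_t price_t * p_t  subject to  sum_t p_t = daily_energy, p_min <= p_t <= p_max
    result = linprog(c=price,
                     A_eq=np.ones((1, hours)), b_eq=[daily_energy],
                     bounds=[(p_min, p_max)] * hours,
                     method="highs")
    schedule = result.x            # hourly production plan that shifts load to cheap hours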

  20. Design and Optimization of Fast Switching Valves for Large Scale Digital Hydraulic Motors

    Roemer, Daniel Beck

    The present thesis is on the design, analysis and optimization of fast switching valves for digital hydraulic motors with high power ratings. The need for such high power motors originates in the potential use of hydrostatic transmissions in wind turbine drive trains, as digital hydraulic machines ... have been shown to improve the overall efficiency and efficient operation range compared to traditional hydraulic machines. Digital hydraulic motors use electronically controlled independent seat valves connected to the pressure chambers, which must be fast acting and exhibit low pressure losses ... to enable efficient operation. These valves are complex components to design, as multiple design aspects are present in these integrated valve units, with conflicting objectives and interdependencies. A preliminary study on a small scale single-cylinder digital hydraulic pump has initially been conducted...

  1. Automatic Optimization for Large-Scale Real-Time Coastal Water Simulation

    Shunli Wang

    2016-01-01

    Full Text Available We introduce an automatic optimization approach for the simulation of large-scale coastal water. To solve the singular problem of water waves obtained with the traditional model, a hybrid deep-shallow-water model is estimated by using an automatic coupling algorithm. It can handle arbitrary water depth and different underwater terrain. As a characteristic feature of coastal terrain, the coastline is detected with collision detection technology. Then, unnecessary water grid cells are simplified by the automatic simplification algorithm according to the depth. Finally, the model is calculated on the Central Processing Unit (CPU) and the simulation is implemented on the Graphics Processing Unit (GPU). We show the effectiveness of our method with various results which achieve real-time rendering on a consumer-level computer.

  2. Fast Bound Methods for Large Scale Simulation with Application for Engineering Optimization

    Patera, Anthony T.; Peraire, Jaime; Zang, Thomas A. (Technical Monitor)

    2002-01-01

    In this work, we have focused on fast bound methods for large scale simulation with application for engineering optimization. The emphasis is on the development of techniques that provide both very fast turnaround and a certificate of fidelity; these attributes ensure that the results are indeed relevant to - and trustworthy within - the engineering context. The bound methodology which underlies this work has many different instantiations: finite element approximation; iterative solution techniques; and reduced-basis (parameter) approximation. In this grant we have, in fact, treated all three, but most of our effort has been concentrated on the first and third. We describe these below briefly - but with a pointer to an Appendix which describes, in some detail, the current "state of the art."

  3. SWAP-Assembler 2: Optimization of De Novo Genome Assembler at Large Scale

    Meng, Jintao; Seo, Sangmin; Balaji, Pavan; Wei, Yanjie; Wang, Bingqiang; Feng, Shengzhong

    2016-08-16

    In this paper, we analyze and optimize the most time-consuming steps of the SWAP-Assembler, a parallel genome assembler, so that it can scale to a large number of cores for huge genomes with the size of sequencing data ranging from terabytes to petabytes. According to the performance analysis results, the most time-consuming steps are input parallelization, k-mer graph construction, and graph simplification (edge merging). For the input parallelization, the input data is divided into virtual fragments with nearly equal size, and the start position and end position of each fragment are automatically separated at the beginning of the reads. In k-mer graph construction, in order to improve the communication efficiency, the message size is kept constant between any two processes by proportionally increasing the number of nucleotides to the number of processes in the input parallelization step for each round. The memory usage is also decreased because only a small part of the input data is processed in each round. With graph simplification, the communication protocol reduces the number of communication loops from four to two loops and decreases the idle communication time. The optimized assembler is denoted as SWAP-Assembler 2 (SWAP2). In our experiments using a 1000 Genomes project dataset of 4 terabytes (the largest dataset ever used for assembling) on the supercomputer Mira, the results show that SWAP2 scales to 131,072 cores with an efficiency of 40%. We also compared our work with both the HipMER assembler and the SWAP-Assembler. On the Yanhuang dataset of 300 gigabytes, SWAP2 shows a 3X speedup and 4X better scalability compared with the HipMer assembler and is 45 times faster than the SWAP-Assembler. The SWAP2 software is available at https://sourceforge.net/projects/swapassembler.
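
    The input-parallelization idea of cutting the input at nearly equal byte offsets and then snapping each cut to the start of a read can be sketched as follows for a FASTA-like file; this is a simplified illustration, not the SWAP2 source code.

    import os

    def fragment_offsets(path, n_fragments):
        # Split a FASTA-like file into byte ranges of nearly equal size whose start
        # positions are advanced to the next record header line (one starting with '>').
        size = os.path.getsize(path)
        cuts = [size * i // n_fragments for i in range(1, n_fragments)]
        starts = [0]
        with open(path, "rb") as f:
            for cut in cuts:
                f.seek(cut)
                f.readline()                      # discard the partial line at the cut
                while True:
                    pos = f.tell()
                    line = f.readline()
                    if not line or line.startswith(b">"):
                        starts.append(pos)        # next record start (or end of file)
                        break
        return list(zip(starts, starts[1:] + [size]))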

  4. Modeling Optimal Cutoffs for the Brazilian Household Food Insecurity Measurement Scale in a Nationwide Representative Sample.

    Interlenghi, Gabriela S; Reichenheim, Michael E; Segall-Corrêa, Ana M; Pérez-Escamilla, Rafael; Moraes, Claudia L; Salles-Costa, Rosana

    2017-07-01

    Background: This is the second part of a model-based approach to examine the suitability of the current cutoffs applied to the raw score of the Brazilian Household Food Insecurity Measurement Scale [Escala Brasileira de Insegurança Alimentar (EBIA)]. The approach allows identification of homogeneous groups who correspond to severity levels of food insecurity (FI) and, by extension, discriminant cutoffs able to accurately distinguish these groups. Objective: This study aims to examine whether the model-based approach for identifying optimal cutoffs first implemented in a local sample is replicated in a countrywide representative sample. Methods: Data were derived from the Brazilian National Household Sample Survey of 2013 ( n = 116,543 households). Latent class factor analysis (LCFA) models from 2 to 5 classes were applied to the scale's items to identify the number of underlying FI latent classes. Next, identification of optimal cutoffs on the overall raw score was ascertained from these identified classes. Analyses were conducted on the aggregate data and by macroregion. Finally, model-based classifications (latent classes and groupings identified thereafter) were contrasted with the traditionally used classification. Results: LCFA identified 4 homogeneous groups with a very high degree of class separation (entropy = 0.934-0.975). The following cutoffs were identified in the aggregate data for households with children and/or adolescents: between 1 and 2 (1/2), 5 and 6 (5/6), and 10 and 11 (10/11); these groupings emerged consistently in all analyses. Conclusions: Nationwide findings corroborate previous local evidence that households with an overall score of 1 are more akin to those scoring negative on all items. These results may contribute to guide experts' and policymakers' decisions on the most appropriate EBIA cutoffs. © 2017 American Society for Nutrition.

  5. Myosin-II sets the optimal response time scale of chemotactic amoeba

    Hsu, Hsin-Fang; Westendorf, Christian; Tarantola, Marco; Bodenschatz, Eberhard; Beta, Carsten

    2014-03-01

    The response dynamics of the actin cytoskeleton to external chemical stimuli plays a fundamental role in numerous cellular functions. One of the key players that governs the dynamics of the actin network is the motor protein myosin-II. Here we investigate the role of myosin-II in the response of the actin system to external stimuli. We used a microfluidic device in combination with a photoactivatable chemoattractant to apply stimuli to individual cells with high temporal resolution. We directly compare the actin dynamics in Dictyostelium discoideum wild type (WT) cells to a knockout mutant that is deficient in myosin-II (MNL). Similar to the WT, a small population of MNL cells showed self-sustained oscillations even in the absence of external stimuli. The actin response of MNL cells to a short pulse of chemoattractant resembles the WT during the first 15 sec but is significantly delayed afterward. The amplitude of the dominant peak in the power spectrum from the response time series of MNL cells to periodic stimuli with varying period showed a clear resonance peak at a forcing period of 36 sec, which is significantly delayed compared to the resonance at 20 sec found for the WT. This shift indicates an important role of myosin-II in setting the response time scale of motile amoebae.

  6. Investigating fine-scale spatio-temporal predator-prey patterns in dynamic marine ecosystems: a functional data analysis approach

    Embling, C.B.; Illian, J.; Armstrong, E.; van der Kooij, J.; Sharples, J.; Camphuysen, K.C.J.; Scott, B.E.

    2012-01-01

    1. Spatial management of marine ecosystems requires detailed knowledge of spatio-temporal mechanisms linking physical and biological processes. Tidal currents, the main driver of ecosystem dynamics in temperate coastal ecosystems, influence predator foraging ecology by affecting prey distribution

  7. Optimization Study of Small-Scale Solar Membrane Distillation Desalination Systems (s-SMDDS)

    Hsuan Chang

    2014-11-01

    Full Text Available Membrane distillation (MD), which can utilize low-grade thermal energy, has been extensively studied for desalination. By incorporating solar thermal energy, the solar membrane distillation desalination system (SMDDS) is a potential technology for resolving energy and water resource problems. Small-scale SMDDS (s-SMDDS) is an attractive and viable option for the production of fresh water for small communities in remote arid areas. The minimum-cost design and operation of s-SMDDS are determined by a systematic method, which involves a pseudo-steady-state approach for equipment sizing and dynamic optimization using overall system mathematical models. Two s-SMDDS employing an air gap membrane distillation module with membrane areas of 11.5 m2 and 23 m2 are analyzed. The lowest water production costs are $5.92/m3 and $5.16/m3 for water production rates of 500 kg/day and 1000 kg/day, respectively. For these two optimal cases, the performance ratios are 0.85 and 0.91, and the recovery ratios are 4.07% and 4.57%. The effect of membrane characteristics on the production cost is investigated. For the commercial membrane employed in this study, increasing the membrane mass transfer coefficient by up to a factor of two is beneficial for cost reduction.

  8. Subgrid-scale scalar flux modelling based on optimal estimation theory and machine-learning procedures

    Vollant, A.; Balarac, G.; Corre, C.

    2017-09-01

    New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from the filtering of direct numerical simulation (DNS) results. This procedure leads to a subgrid-scale model displaying good structural performance, which allows LESs to be performed very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, in which the model functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation in order to control both structural and functional performances. The model derived from this second procedure proves to be more robust. It also provides stable LESs for a turbulent plane jet flow configuration very far from the training database, but over-estimates the mixing process in that case.
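
    As a rough illustration of the first procedure described above (train an ANN on filtered-DNS data to predict a subgrid quantity), the sketch below fits a small regression network to synthetic stand-in data. The feature set and the target are placeholders; in the actual workflow they would come from the optimal-estimator analysis and the filtered DNS database.

```python
# Sketch only: fit an ANN to predict a subgrid quantity from resolved
# (filtered) inputs. Features, target and data are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 20000
X = rng.normal(size=(n, 4))                       # stand-in resolved-field features
y = 0.8 * X[:, 0] * X[:, 2] - 0.3 * X[:, 1] + 0.05 * rng.normal(size=n)  # stand-in subgrid flux

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
ann.fit(X_tr, y_tr)
print("held-out R^2 (structural performance proxy):", round(ann.score(X_te, y_te), 3))
```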

  9. Derivation of Optimal Operating Rules for Large-scale Reservoir Systems Considering Multiple Trade-off

    Zhang, J.; Lei, X.; Liu, P.; Wang, H.; Li, Z.

    2017-12-01

    Flood control operation of multi-reservoir systems such as parallel reservoirs and hybrid reservoirs often suffers from complex interactions and trade-offs among tributaries and the mainstream. The optimization of such systems is computationally intensive due to nonlinear storage curves, numerous constraints and complex hydraulic connections. This paper aims to derive optimal flood control operating rules based on the trade-offs among tributaries and the mainstream using a new algorithm known as the weighted non-dominated sorting genetic algorithm II (WNSGA II). WNSGA II can locate the Pareto frontier in the non-dominated region efficiently due to directed searching with a weighted crowding distance, and the results are compared with those of conventional operating rules (COR) and a single-objective genetic algorithm (GA). The Xijiang River basin in China is selected as a case study, with eight reservoirs and five flood control sections within four tributaries and the mainstream. Furthermore, the effects of inflow uncertainty have been assessed. Results indicate that: (1) WNSGA II locates the non-dominated solutions faster and provides a better Pareto frontier than the traditional non-dominated sorting genetic algorithm II (NSGA II) due to the weighted crowding distance; (2) WNSGA II outperforms COR and GA on flood control in the whole basin; (3) the multi-objective operating rules from WNSGA II deal with inflow uncertainties better than COR. Therefore, WNSGA II can be used to derive stable operating rules for large-scale reservoir systems effectively and efficiently.
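
    The distinguishing ingredient of WNSGA II named above is a weighted crowding distance. The paper's exact formulation is not reproduced here; the sketch below shows one plausible variant in which each objective's contribution to the crowding distance is scaled by a user-supplied weight.

```python
# Illustrative sketch of a weighted crowding distance over a non-dominated
# front; the exact WNSGA II formulation may differ. Boundary points are kept
# by assigning them infinite distance, as in standard NSGA-II.
import numpy as np

def weighted_crowding_distance(front, weights):
    """front: (n_points, n_obj) objective values; weights: (n_obj,)."""
    n, m = front.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(front[:, j])
        span = front[order[-1], j] - front[order[0], j]
        dist[order[0]] = dist[order[-1]] = np.inf      # keep boundary solutions
        if span == 0:
            continue
        gaps = (front[order[2:], j] - front[order[:-2], j]) / span
        dist[order[1:-1]] += weights[j] * gaps         # weighted contribution
    return dist

front = np.array([[1.0, 9.0], [2.0, 6.0], [4.0, 4.0], [7.0, 2.0], [9.0, 1.0]])
print(weighted_crowding_distance(front, weights=np.array([0.7, 0.3])))
```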

  10. A search algorithm to meta-optimize the parameters for an extended Kalman filter to improve classification on hyper-temporal images

    Salmon

    2012-07-01

    Abstract only: "A search algorithm to meta-optimize the parameters for an extended Kalman filter to improve classification on hyper-temporal images", B.P. Salmon et al., presented at the IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22-27 July 2012.

  11. A Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks

    Bottacin-Busolin, A.; Worman, A. L.

    2013-12-01

    A main challenge for the planning and management of water resources is the development of multiobjective strategies for operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year is the cause of a large proliferation of mosquitos, which is a major problem for the people living in the surroundings. Chemical pesticides are currently being used as a preventive countermeasure, which do not provide an effective solution to the problem and have adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance
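
    To make the approximate dynamic programming idea concrete, the following is a deliberately stylized sketch of off-line fitted value iteration for a single reservoir with sampled stochastic inflows and a quadratic value-function approximator. All capacities, rewards and inflow statistics are invented; the actual study uses policy iteration with function approximators over a 13-reservoir, 36-station network.

```python
# Stylized ADP sketch: off-line value iteration over sampled inflows with a
# polynomial value-function approximator. Numbers are invented, not from the study.
import numpy as np

rng = np.random.default_rng(0)
cap, n_samples, gamma = 100.0, 2000, 0.95
releases = np.linspace(0.0, 30.0, 16)                 # candidate release decisions

def reward(storage, release):
    hydro = np.sqrt(release * np.maximum(storage, 1.0))   # crude hydropower proxy
    flood = 5.0 * np.maximum(storage - 0.9 * cap, 0.0)    # flood-risk penalty
    return hydro - flood

def phi(s):                                            # value ~ theta . [1, s, s^2]
    return np.stack([np.ones_like(s), s / cap, (s / cap) ** 2], axis=-1)

theta = np.zeros(3)
for _ in range(50):                                    # fitted value iteration
    s = rng.uniform(0, cap, n_samples)
    inflow = rng.gamma(2.0, 5.0, n_samples)
    # Bellman backup: best release under the current value approximation
    q = np.stack([reward(s, r) + gamma * (phi(np.clip(s + inflow - r, 0, cap)) @ theta)
                  for r in releases])
    targets = q.max(axis=0)
    theta, *_ = np.linalg.lstsq(phi(s), targets, rcond=None)

print("fitted value-function weights:", theta.round(2))
```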

  12. Parallel Quasi Newton Algorithms for Large Scale Non Linear Unconstrained Optimization

    Rahman, M. A.; Basarudin, T.

    1997-01-01

    This paper discusses the Quasi-Newton (QN) method for solving non-linear unconstrained minimization problems. One important aspect of QN methods is the choice of the matrix Hk, which must be positive definite and satisfy the QN condition. Our interest here is in parallel QN methods suited to the solution of large-scale optimization problems. QN methods become less attractive for large-scale problems because of their storage and computational requirements. However, it is often the case that the Hessian is a sparse matrix. In this paper we include a mechanism for reducing the Hessian update while preserving the Hessian properties. One motivation for our research is that a QN method may perform well on certain types of minimization problems, but its efficiency degenerates when it is applied to other categories of problems. For this reason, we use an algorithm containing several direction strategies which are processed in parallel. We attempt to parallelize the algorithm by exploring different search directions generated by various QN updates during the minimization process. Different line search strategies are employed simultaneously in the process of locating the minimum along each direction. The algorithm is implemented in Occam 2 and run on a transputer machine.
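
    For readers unfamiliar with quasi-Newton updates, the sketch below shows the textbook BFGS inverse-Hessian update that methods like the one above build on; it is not the parallel multi-direction algorithm described in the paper, and the quadratic test problem is invented.

```python
# Textbook BFGS inverse-Hessian update applied to a small quadratic problem.
# This is a generic illustration, not the paper's parallel algorithm.
import numpy as np

def bfgs_inverse_update(H, s, y):
    """H: inverse-Hessian approximation; s = x_new - x_old; y = g_new - g_old."""
    rho = 1.0 / (y @ s)
    I = np.eye(H.shape[0])
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# minimize f(x) = 0.5 x^T A x - b^T x with quasi-Newton steps
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, 1.0])
x, H = np.zeros(2), np.eye(2)
for _ in range(10):
    g = A @ x - b
    x_new = x - H @ g
    s, y = x_new - x, (A @ x_new - b) - g
    H = bfgs_inverse_update(H, s, y)
    x = x_new
print("approx minimizer:", x.round(3), " exact:", np.linalg.solve(A, b).round(3))
```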

  13. Optimization and Scale-Up of Coffee Mucilage Fermentation for Ethanol Production

    David Orrego

    2018-03-01

    Full Text Available Coffee, one of the most popular food commodities and beverage ingredients worldwide, is considered as a potential source for food industry and second-generation biofuel due to its various by-products, including mucilage, husk, skin (pericarp, parchment, silver-skin, and pulp, which can be produced during the manufacturing process. A number of research studies have mainly investigated the valuable properties of brewed coffee (namely, beverage, functionalities, and its beneficial effects on cognitive and physical performances; however, other residual by-products of coffee, such as its mucilage, have rarely been studied. In this manuscript, the production of bioethanol from mucilage was performed both in shake flasks and 5 L bio-reactors. The use of coffee mucilage provided adequate fermentable sugars, primarily glucose with additional nutrient components, and it was directly fermented into ethanol using a Saccharomyces cerevisiae strain. The initial tests at the lab scale were evaluated using a two-level factorial experimental design, and the resulting optimal conditions were applied to further tests at the 5 L bio-reactor for scale up. The highest yields of flasks and 5 L bio-reactors were 0.46 g ethanol/g sugars, and 0.47 g ethanol/g sugars after 12 h, respectively, which were equal to 90% and 94% of the theoretically achievable conversion yield of ethanol.
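
    The flask-scale screening mentioned above used a two-level factorial design. The snippet below generates such a design matrix for three hypothetical factors; the factor names and levels are made up for illustration and are not the study's actual settings.

```python
# Quick sketch of a 2^3 two-level factorial design matrix; factors and levels
# are hypothetical, and the measured response (ethanol yield) would be
# recorded for each of the eight runs.
import itertools
import pandas as pd

factors = {"temperature_C": (28, 34),
           "yeast_g_per_L": (0.5, 1.5),
           "initial_pH": (4.0, 5.5)}

design = pd.DataFrame(list(itertools.product(*factors.values())),
                      columns=list(factors))
print(design)    # 2^3 = 8 runs
```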

  14. Temporal stability of soil moisture under different land uses/cover in the Loess Plateau based on a finer spatiotemporal scale

    Zhou, J.; Fu, B. J.; Lü, N.; Gao, G. Y.; Lü, Y. H.; Wang, S.

    2013-01-01

    The temporal stability of soil moisture (TSSM) is an important factor in evaluating the value of available water resources in a water-controlled ecosystem. In this study we used the evapotranspiration-TSSM (ET-TSSM) model and a new sampling design to examine the soil water dynamics and water balance of different land use/cover types in a hilly landscape of the Loess Plateau at a finer spatiotemporal scale. Our primary focus is to examine the difference amo...

  15. Temporal variability of the NPP-GPP ratio at seasonal and interannual time scales in a temperate beech forest

    M. Campioli

    2011-09-01

    Full Text Available The allocation of carbon (C) taken up by the tree canopy to respiration and to the production of tree organs with different construction and maintenance costs, life spans and decomposition rates crucially affects the residence time of C in forests and their C cycling rate. The carbon-use efficiency, or ratio between net primary production (NPP) and gross primary production (GPP), represents a convenient way to analyse C allocation at the stand level. In this study, we extend the current knowledge on the NPP-GPP ratio in forests by assessing the temporal variability of the NPP-GPP ratio at interannual (for 8 years) and seasonal (for 1 year) scales for a young temperate beech stand, reporting dynamics for both leaves and woody organs, in particular stems. NPP was determined with biometric methods/litter traps, whereas GPP was estimated via the eddy covariance micrometeorological technique.

    The interannual variability of the proportion of C allocated to leaf NPP, wood NPP and leaf plus wood NPP (on average 11% yr−1, 29% yr−1 and 39% yr−1, respectively) was significant among years, with up to 12% yr−1 variation in the NPP-GPP ratio. Studies focusing on the comparison of the NPP-GPP ratio among forests and models using fixed allocation schemes should take into account the possibility of such relevant interannual variability. Multiple linear regressions indicated that the NPP-GPP ratios of leaves and wood significantly correlated with environmental conditions. Previous-year drought and air temperature explained about half of the NPP-GPP variability of leaves and wood, respectively, whereas the NPP-GPP ratio was not decreased by severe drought, with a large NPP-GPP ratio in 2003 due mainly to low GPP. During the period between early May and mid June, the majority of GPP was allocated to leaf and stem NPP, whereas these sinks were of little importance later on. Improved estimation of seasonal GPP and of the

  16. Static and dynamic controls on fire activity at moderate spatial and temporal scales in the Alaskan boreal forest

    Barrett, Kirsten; Loboda, Tatiana; McGuire, A. David; Genet, Hélène; Hoy, Elizabeth; Kasischke, Eric

    2016-01-01

    Wildfire, a dominant disturbance in boreal forests, is highly variable in occurrence and behavior at multiple spatiotemporal scales. New data sets provide more detailed spatial and temporal observations of active fires and the post-burn environment in Alaska. In this study, we employ some of these new data to analyze variations in fire activity by developing three explanatory models to examine the occurrence of (1) seasonal periods of elevated fire activity, using the number of MODIS active fire detections (data set MCD14DL) within an 11-day moving window, (2) unburned patches within a burned area, using the Monitoring Trends in Burn Severity fire severity product, and (3) short-to-moderate interval fires, using areas of burned area overlap in the Alaska Large Fire Database. Explanatory variables for these three models included dynamic variables that can change over the course of the fire season, such as weather and burn date, as well as static variables that remain constant over a fire season, such as topography, drainage, vegetation cover, and fire history. We found that seasonal periods of high fire activity are associated with both seasonal timing and aggregated weather conditions, as well as the landscape composition of areas that are burning. Important static inputs to the model of seasonal fire activity indicate that when fire weather conditions are suitable, areas that typically resist fire (e.g., deciduous stands) may become more vulnerable to burning and therefore less effective as fire breaks. The occurrence of short-to-moderate interval fires appears to be primarily driven by weather conditions, as these were the only relevant explanatory variables in the model. The unique importance of weather in explaining short-to-moderate interval fires implies that fire return intervals (FRIs) will be sensitive to projected climate changes in the region. Unburned patches occur most often in younger stands, which may be related to a greater deciduous fraction of

  17. Double optimization of Xe(L) amplifier power scaling at λ ∼ 2.9 Å

    Borisov, Alex B; Song, Xiangyang; Zhang Ping; McCorkindale, John C; Khan, Shahab F; Poopalasingam, Sankar; Zhao Ji; Dai Yang; Rhodes, Charles K

    2007-01-01

    The spectral and spatial characteristics of the Xe(L) amplifier at λ ∼ 2.9 Å determine an optimum for the scaling of the peak power with channel length. The Xe³¹⁺ and Xe³²⁺ (3d → 2p) transition arrays represent two identical spectral optima for amplification, a property stemming from the extremum of spectral components (3245) characteristic of their electron configurations. Adroit matching of the spatial distribution of the intensity characteristic of the propagating 248 nm pulse dynamically generating the self-trapped plasma channel with the intensity required to excite selectively and efficiently the Xe³¹⁺ and Xe³²⁺ arrays can also simultaneously maximize the spatial volume of the excitation. The net outcome of this double maximization is an amplifying channel for the optimal transitions that possesses high gain (∼100 cm⁻¹), low losses (<1 cm⁻¹) and a diameter of 15-20 μm, a size sufficient to produce an x-ray pulse energy of ∼50-100 mJ from a channel having an average xenon density of ∼10²⁰ cm⁻³ and a length of 1 cm. Since previous studies have experimentally demonstrated the ability to produce a saturated bandwidth of ∼60 eV, a magnitude sufficient to support a pulse duration of ∼30 as, peak powers P_x ≫ 1 PW are clearly within the scaling limits of the Xe(L) system. The corresponding peak brightness scaling limit is accordingly bounded from below by P_x/λ² ≅ 10³⁰ W cm⁻² sr⁻¹. (fast track communication)

  18. Pilot scale production, characterization, and optimization of epoxidized vegetable oil-based resins

    Monono, Ewumbua Menyoli

    Novel epoxidized sucrose soyate (ESS) resins perform much better than other vegetable oil-based resins; thus, they are of current interest for commercial-scale production and for a wide range of applications in coatings and polymeric materials. However, no work has been published that successfully scaled up the reaction above a 1 kg batch size. To achieve this goal, canola oil was first epoxidized at a 300 g scale to study the epoxidation rate and thermal profile at different hydrogen peroxide (H2O2) addition rates, bath temperatures, and reaction times. At least 83% conversion of double bonds to oxirane was achieved by 2.5 h, and the reaction temperature was 8-15 °C higher than the water bath temperature within the first 30-40 min of epoxidation. A 38 L stainless steel kettle was modified as a reactor to produce 10 kg of ESS. Twenty 7-10 kg batches of ESS were produced with an overall 87.5% resin yield and > 98% conversion after batch three. The conversion and resin quality were consistent across the batches due to modifications to the reaction that improved mixing and kept the reaction temperature within 55-65 °C. The total production time was reduced from 8 to 4 days due to the fabrication of a 40 L separatory funnel for both washing and filtration. A mathematical model was developed to optimize the epoxidation process. This was done by using the Box-Behnken design to model the conversion at various acetic acid, H2O2, and Amberlite ratios and at various reaction temperatures and times. The model had an adjusted R² of 97.6% and a predicted R² of 96.8%. The model showed that reagent amounts and time can be reduced by 18% without compromising the desired conversion value and quality.

  19. A particle swarm optimized kernel-based clustering method for crop mapping from multi-temporal polarimetric L-band SAR observations

    Tamiminia, Haifa; Homayouni, Saeid; McNairn, Heather; Safari, Abdoreza

    2017-06-01

    Polarimetric Synthetic Aperture Radar (PolSAR) data, thanks to their specific characteristics such as high resolution and weather and daylight independence, have become a valuable source of information for environment monitoring and management. The discrimination capability of observations acquired by these sensors can be used for land cover classification and mapping. The aim of this paper is to propose an optimized kernel-based C-means clustering algorithm for agricultural crop mapping from multi-temporal PolSAR data. First, several polarimetric features are extracted from preprocessed data. These features are the linear polarization intensities and several statistical and physically based decompositions, such as the Cloude-Pottier, Freeman-Durden and Yamaguchi techniques. Then, kernelized versions of the hard and fuzzy C-means clustering algorithms are applied to these polarimetric features in order to identify crop types. The kernel function, unlike conventional partitioning clustering algorithms, maps non-spherical and non-linearly separable data structures so that they can be clustered more easily. In addition, in order to enhance the results, the Particle Swarm Optimization (PSO) algorithm is used to tune the kernel parameters, the cluster centers, and the feature selection. The efficiency of this method was evaluated using multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Manitoba, Canada, during June and July 2012. The results demonstrate more accurate crop maps using the proposed method when compared to classical approaches (e.g., a 12% improvement in general). In addition, when the optimization technique is used, greater improvement is observed in crop classification, e.g., about 5% overall. Furthermore, a strong relationship is observed between the Freeman-Durden volume scattering component, which is related to canopy structure, and phenological growth stages.
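
    A toy version of the clustering idea above is sketched below: a kernelized hard C-means run on synthetic two-dimensional "features", with the RBF kernel width chosen by a crude random search standing in for the PSO tuning used in the paper. The data and the search procedure are stand-ins, not the PolSAR workflow.

```python
# Toy kernelized hard C-means with a randomly searched RBF kernel width.
# Synthetic 2-D data stand in for polarimetric features; the random search
# is a crude proxy for PSO-based parameter tuning.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.metrics import silhouette_score
from sklearn.metrics.pairwise import rbf_kernel

def kernel_cmeans(K, n_clusters, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(n_clusters, size=K.shape[0])
    for _ in range(n_iter):
        dist = np.empty((K.shape[0], n_clusters))
        for c in range(n_clusters):
            idx = labels == c
            if not idx.any():
                dist[:, c] = np.inf
                continue
            # squared distance to the implicit cluster centroid in feature space
            dist[:, c] = (np.diag(K) - 2 * K[:, idx].mean(axis=1)
                          + K[np.ix_(idx, idx)].mean())
        labels = dist.argmin(axis=1)
    return labels

X, _ = make_moons(n_samples=400, noise=0.05, random_state=0)   # stand-in features
best = None
for gamma in np.random.default_rng(1).uniform(0.1, 20.0, size=15):
    labels = kernel_cmeans(rbf_kernel(X, gamma=gamma), n_clusters=2)
    if len(set(labels)) < 2:
        continue
    score = silhouette_score(X, labels)
    if best is None or score > best[0]:
        best = (score, gamma)
print("best (silhouette, gamma):", best)
```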

  20. Optimized Placement of Wind Turbines in Large-Scale Offshore Wind Farm using Particle Swarm Optimization Algorithm

    Hou, Peng; Hu, Weihao; Soltani, Mohsen

    2015-01-01

    With the increasing size of wind farm, the impact of the wake effect on wind farm energy yields become more and more evident. The arrangement of the wind turbines’ (WT) locations will influence the capital investment and contribute to the wake losses which incur the reduction of energy production....... As a consequence, the optimized placement of the wind turbines may be done by considering the wake effect as well as the components cost within the wind farm. In this paper, a mathematical model which includes the variation of both wind direction and wake deficit is proposed. The problem is formulated by using...... Levelized Production Cost (LPC) as the objective function. The optimization procedure is performed by Particle Swarm Optimization (PSO) algorithm with the purpose of maximizing the energy yields while minimizing the total investment. The simulation results indicate that the proposed method is effective...
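
    The following is a generic particle-swarm loop of the kind used for layout optimization. The objective is only a toy proxy that penalizes closely spaced turbines inside a square site; it is not the Levelized Production Cost objective or the wake-deficit model of the paper, and the site size and swarm settings are arbitrary.

```python
# Generic PSO sketch for turbine placement. The "objective" is a crude
# spacing penalty standing in for wake losses / LPC; all numbers are assumed.
import numpy as np

rng = np.random.default_rng(0)
n_turbines, n_particles, dim = 5, 30, 10    # dim = 2 coordinates per turbine
site = 2000.0                               # square site edge [m] (assumed)

def objective(flat_pos):
    pos = flat_pos.reshape(n_turbines, 2)
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.sum(np.exp(-d / 500.0))       # stand-in for wake-induced losses

x = rng.uniform(0, site, size=(n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((2, n_particles, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0, site)
    f = np.array([objective(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best layout objective value:", round(pbest_f.min(), 4))
```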

  1. Evaluation on Optimal Scale of Rural Fixed-asset Investment-Based on Microcosmic Perspective of Farmers’ Income Increase

    Jinqian DENG; Kangkang SHAN; Yan ZHANG

    2014-01-01

    The rural fundamental and productive fixed-asset investment not only actively influences changes in farmers' operational, wage and property income, but it also has an optimal scale range for farmers' income increase. From the perspective of farmers' income increase, this article evaluates the optimal scale of rural fixed-asset investment by setting up a model with statistical data, and the results show that the optimal scale of per capita rural fixed-asset investment is 76.35% of the per capita net income of rural residents, which was reached in China in 2009. Therefore, compared with adding further rural fixed-asset investment, a better income increase effect can be achieved through adjustment of the rural fixed-asset investment structure.

  2. Image Correlation Pattern Optimization for Micro-Scale In-Situ Strain Measurements

    Bomarito, G. F.; Hochhalter, J. D.; Cannon, A. H.

    2016-01-01

    The accuracy and precision of digital image correlation (DIC) is a function of three primary ingredients: image acquisition, image analysis, and the subject of the image. Development of the first two (i.e. image acquisition techniques and image correlation algorithms) has led to widespread use of DIC; however, fewer developments have been focused on the third ingredient. Typically, subjects of DIC images are mechanical specimens with either a natural surface pattern or a pattern applied to the surface. Research in the area of DIC patterns has primarily been aimed at identifying which surface patterns are best suited for DIC, by comparing patterns to each other. Because the easiest and most widespread methods of applying patterns have a high degree of randomness associated with them (e.g., airbrush, spray paint, particle decoration, etc.), less effort has been spent on exact construction of ideal patterns. With the development of patterning techniques such as microstamping and lithography, patterns can be applied to a specimen pixel by pixel from a patterned image. In these cases, especially because the patterns are reused many times, an optimal pattern is sought such that error introduced into DIC from the pattern is minimized. DIC consists of tracking the motion of an array of nodes from a reference image to a deformed image. Every pixel in the images has an associated intensity (grayscale) value, with discretization depending on the bit depth of the image. Because individual pixel matching by intensity value yields a non-unique scale-dependent problem, subsets around each node are used for identification. A correlation criterion is used to find the best match of a particular subset of a reference image within a deformed image. The reader is referred to the references for enumerations of typical correlation criteria. As illustrated by Schreier and Sutton and by Lu and Cary, systematic errors can be introduced by representing the underlying deformation with under

  3. How well do the GCMs/RCMs capture the multi-scale temporal variability of precipitation in the Southwestern United States?

    Jiang, Peng; Gautam, Mahesh R.; Zhu, Jianting; Yu, Zhongbo

    2013-02-01

    Multi-scale temporal variability of precipitation has an established relationship with floods and droughts. In this paper, we present diagnostics on the ability of 16 General Circulation Models (GCMs) from Bias Corrected and Downscaled (BCSD) World Climate Research Program's (WCRP's) Coupled Model Inter-comparison Project Phase 3 (CMIP3) projections and 10 Regional Climate Models (RCMs) that participated in the North American Regional Climate Change Assessment Program (NARCCAP) to represent the multi-scale temporal variability determined from observed station data. Four regions (Los Angeles, Las Vegas, Tucson, and Cimarron) in the Southwest United States are selected, as they represent four different precipitation regions classified by a clustering method. We investigate how storm properties and seasonal, inter-annual, and decadal precipitation variabilities differ between GCMs/RCMs and observed records in these regions. We find that current GCMs/RCMs tend to simulate longer storm durations and lower storm intensities compared to those from observed records. Most GCMs/RCMs fail to produce the high-intensity summer storms caused by local convective heat transport associated with the summer monsoon. Both inter-annual and decadal bands are present in the GCM/RCM-simulated precipitation time series; however, these do not line up with the patterns of large-scale ocean oscillations such as the El Niño/La Niña Southern Oscillation (ENSO) and the Pacific Decadal Oscillation (PDO). Our results show that the studied GCMs/RCMs can capture the long-term monthly mean, as the examined data are bias-corrected and downscaled, but fail to simulate the multi-scale precipitation variability, including flood-generating extreme events, which suggests their inadequacy for studies on floods and droughts that are strongly associated with multi-scale temporal precipitation variability.

  4. Analysis of the historical precipitation in the South East Iberian Peninsula at different spatio-temporal scale. Study of the meteorological drought

    Fernández-Chacón, Francisca; Pulido-Velazquez, David; Jiménez-Sánchez, Jorge; Luque-Espinar, Juan Antonio

    2017-04-01

    Precipitation is a fundamental climate variable that has pronounced spatial and temporal variability at the global scale, as well as at regional and sub-regional scales. Due to its orographic complexity and its latitude, the Iberian Peninsula (IP), located to the west of the Mediterranean Basin between the Atlantic Ocean and the Mediterranean Sea, has a complex climate. Over the peninsula there are strong north-south and east-west gradients, a consequence of different low-frequency atmospheric patterns, and the overlap of these patterns over the year largely determines the variability of the climatic variables. A dry Mediterranean climate dominates the southeast of the Iberian Peninsula, where precipitation is an intermittent and discontinuous variable. In this research, information from the Spain02 v4 database was used to study the South East (SE) IP for the 1971-2010 period with a spatial resolution of 0.11° x 0.11°. We analysed precipitation at different time scales (daily, monthly, seasonal, annual, ...) to study its spatial distribution and temporal trends. The high spatial, intra-annual and inter-annual climatic variability observed makes it necessary to propose a climatic regionalization. In addition, for the identified areas and subareas of homogeneous climate we have analysed the evolution of the meteorological drought for the same period at different time scales. The standardized precipitation index has been used at 12-, 24- and 48-month temporal scales. The climatic complexity of the area determines a high variability in the drought characteristics, duration, intensity and frequency in the different climatic areas. This research has been supported by the GESINHIMPADAPT project (CGL2013-48424-C2-2-R) with Spanish MINECO funds. We would also like to thank the Spain02 project for the data provided for this study.
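
    The standardized precipitation index used above can be sketched in a few lines: accumulate monthly precipitation over a k-month window, fit a gamma distribution to the sums, and map the cumulative probabilities to standard-normal quantiles. The sketch below uses synthetic data and omits the zero-precipitation correction that operational SPI implementations include.

```python
# Hedged SPI sketch at a k-month scale on synthetic data; operational SPI
# codes additionally handle months with zero precipitation.
import numpy as np
from scipy import stats

def spi(monthly_precip, scale_months=12):
    p = np.convolve(monthly_precip, np.ones(scale_months), mode="valid")
    shape, loc, scale = stats.gamma.fit(p, floc=0)        # fit the k-month sums
    cdf = stats.gamma.cdf(p, shape, loc=loc, scale=scale)
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))   # SPI values

rng = np.random.default_rng(0)
precip = rng.gamma(shape=2.0, scale=20.0, size=480)       # 40 yr of synthetic monthly data
print("SPI-12 for the last month:", round(spi(precip, 12)[-1], 2))
```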

  5. Cooperative Coevolution with Formula-Based Variable Grouping for Large-Scale Global Optimization.

    Wang, Yuping; Liu, Haiyan; Wei, Fei; Zong, Tingting; Li, Xiaodong

    2017-08-09

    For a large-scale global optimization (LSGO) problem, divide-and-conquer is usually considered an effective strategy to decompose the problem into smaller subproblems, each of which can then be solved individually. Among these decomposition methods, variable grouping has been shown to be promising in recent years. Existing variable grouping methods usually assume the problem to be black-box (i.e., assuming that an analytical model of the objective function is unknown), and they attempt to learn an appropriate variable grouping that would allow for a better decomposition of the problem. In such cases, these variable grouping methods do not make direct use of the formula of the objective function. However, it can be argued that many real-world problems are white-box problems, that is, the formulas of the objective functions are often known a priori. These formulas provide rich information which can then be used to design an effective variable grouping method. In this article, a formula-based grouping strategy (FBG) for white-box problems is first proposed. It groups variables directly via the formula of an objective function, which usually consists of a finite number of operations (i.e., the four arithmetic operations "+", "−", "×", "÷" and composite operations of basic elementary functions). In FBG, the operations are classified into two classes: one resulting in nonseparable variables, and the other resulting in separable variables. In FBG, variables can be automatically grouped into a suitable number of non-interacting subcomponents, with variables in each subcomponent being interdependent. FBG can easily be applied to any white-box problem and can be integrated into a cooperative coevolution framework. Based on FBG, a novel cooperative coevolution algorithm with formula-based variable grouping (so-called CCF) is proposed in this article for decomposing a large-scale white-box problem
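
    The white-box premise above (variable interactions can be read off the formula itself) can be illustrated with a toy check based on mixed partial derivatives; this is only an illustration of the idea, not the FBG algorithm, and the example objective is invented.

```python
# Toy separability check on a symbolic white-box objective: a nonzero mixed
# partial derivative between two variables indicates they interact and should
# be grouped together. Not the FBG algorithm itself.
import itertools
import sympy as sp

x1, x2, x3, x4 = sp.symbols("x1 x2 x3 x4")
f = x1**2 + sp.sin(x2 * x3) + sp.exp(x4)     # example white-box objective

variables = [x1, x2, x3, x4]
interacting = [
    (a, b) for a, b in itertools.combinations(variables, 2)
    if sp.simplify(sp.diff(f, a, b)) != 0    # nonzero mixed partial => interaction
]
print("interacting pairs:", interacting)      # expect [(x2, x3)]
```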

  6. Parallel Optimization of Polynomials for Large-scale Problems in Stability and Control

    Kamyar, Reza

    In this thesis, we focus on some of the NP-hard problems in control theory. Thanks to the converse Lyapunov theory, these problems can often be modeled as optimization over polynomials. To avoid the problem of intractability, we establish a trade off between accuracy and complexity. In particular, we develop a sequence of tractable optimization problems --- in the form of Linear Programs (LPs) and/or Semi-Definite Programs (SDPs) --- whose solutions converge to the exact solution of the NP-hard problem. However, the computational and memory complexity of these LPs and SDPs grow exponentially with the progress of the sequence - meaning that improving the accuracy of the solutions requires solving SDPs with tens of thousands of decision variables and constraints. Setting up and solving such problems is a significant challenge. The existing optimization algorithms and software are only designed to use desktop computers or small cluster computers --- machines which do not have sufficient memory for solving such large SDPs. Moreover, the speed-up of these algorithms does not scale beyond dozens of processors. This in fact is the reason we seek parallel algorithms for setting-up and solving large SDPs on large cluster- and/or super-computers. We propose parallel algorithms for stability analysis of two classes of systems: 1) Linear systems with a large number of uncertain parameters; 2) Nonlinear systems defined by polynomial vector fields. First, we develop a distributed parallel algorithm which applies Polya's and/or Handelman's theorems to some variants of parameter-dependent Lyapunov inequalities with parameters defined over the standard simplex. The result is a sequence of SDPs which possess a block-diagonal structure. We then develop a parallel SDP solver which exploits this structure in order to map the computation, memory and communication to a distributed parallel environment. Numerical tests on a supercomputer demonstrate the ability of the algorithm to

  7. Trading river services: optimizing dam decisions at the basin scale to improve socio-ecological resilience

    Roy, S. G.; Gold, A.; Uchida, E.; McGreavy, B.; Smith, S. M.; Wilson, K.; Blachly, B.; Newcomb, A.; Hart, D.; Gardner, K.

    2017-12-01

    Dam removal has become a cornerstone of environmental restoration practice in the United States. One outcome of dam removal that has received positive attention is restored access to historic habitat for sea-run fisheries, providing a crucial gain in ecosystem resilience. But dams also provide stakeholders with valuable services, and uncertain socio-ecological outcomes can arise if there is not careful consideration of the basin scale trade offs caused by dam removal. In addition to fisheries, dam removals can significantly affect landscape nutrient flux, municipal water storage, recreational use of lakes and rivers, property values, hydroelectricity generation, the cultural meaning of dams, and many other river-based ecosystem services. We use a production possibility frontiers approach to explore dam decision scenarios and opportunities for trading between ecosystem services that are positively or negatively affected by dam removal in New England. Scenarios that provide efficient trade off potentials are identified using a multiobjective genetic algorithm. Our results suggest that for many river systems, there is a significant potential to increase the value of fisheries and other ecosystem services with minimal dam removals, and further increases are possible by including decisions related to dam operations and physical modifications. Run-of-river dams located near the head of tide are often found to be optimal for removal due to low hydroelectric capacity and high impact on fisheries. Conversely, dams with large impoundments near a river's headwaters can be less optimal for dam removal because their value as nitrogen sinks often outweighs the potential value for fisheries. Hydropower capacity is negatively impacted by dam removal but there are opportunities to meet or exceed lost capacity by upgrading preserved hydropower dams. Improving fish passage facilities for dams that are critical for safety or water storage can also reduce impacts on fisheries. Our

  8. Comparison of IMP-SPECT findings to subtest scores of the Wechsler Adult Intelligence Scale-Revised in temporal lobe epilepsy patients

    Kan, Rumiko; Uejima, Masahiko; Kaneko, Yuko; Miyamoto, Yuriko; Watabe, Manabu; Takahashi, Ruriko; Niwa, Shin-ichi; Shishido, Fumio [Fukushima Medical Coll. (Japan)]

    1998-02-01

    In this study, 40 temporal lobe epilepsy patients were assessed using the Laterality Index (LI) of ROI values in IMP-SPECT findings and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) subtest scores. LIs of the frontal, temporal and occipital lobes were calculated as follows: the ROI values on the right side were subtracted from those on the left, and the result was divided by the sum of the ROI values on the right and left sides. The individual subtest scores on the WAIS-R were standardized by all evaluation scores in order to exclude the influence of differences in intelligence level as much as possible. The results were as follows: there was a positive correlation (r=0.74, p<0.001) between LI values and the performance in Arithmetic in the left temporal lobe hypoperfusion group, and there was a positive correlation (r=0.50, p<0.02) between LI values and the performance in Vocabulary in the left temporal lobe hypoperfusion group. In the right occipital lobe hypoperfusion group, there was a negative correlation (r=-0.44, p

  9. Comparison of IMP-SPECT findings to subtest scores of the Wechsler Adult Intelligence Scale-Revised in temporal lobe epilepsy patients

    Kan, Rumiko; Uejima, Masahiko; Kaneko, Yuko; Miyamoto, Yuriko; Watabe, Manabu; Takahashi, Ruriko; Niwa, Shin-ichi; Shishido, Fumio

    1998-01-01

    In this study, 40 temporal lobe epilepsy patients were assessed using the Laterality Index (LI) of ROI values in IMP-SPECT findings and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) subtest scores. LIs of the frontal, temporal and occipital lobes were calculated as follows: the ROI values on the right side were subtracted from those on the left, and the result was divided by the sum of the ROI values on the right and left sides. The individual subtest scores on the WAIS-R were standardized by all evaluation scores in order to exclude the influence of differences in intelligence level as much as possible. The results were as follows: there was a positive correlation (r=0.74, p<0.001) between LI values and the performance in Arithmetic in the left temporal lobe hypoperfusion group, and there was a positive correlation (r=0.50, p<0.02) between LI values and the performance in Vocabulary in the left temporal lobe hypoperfusion group. In the right occipital lobe hypoperfusion group, there was a negative correlation (r=-0.44, p<0.05) between LI values and the performance in Coding. It is suggested that decreased blood flow areas detected by SPECT might influence brain function. (author)

  10. Participatory Bluetooth Sensing: A Method for Acquiring Spatio-Temporal Data about Participant Mobility and Interactions at Large Scale Events

    Stopczynski, Arkadiusz; Larsen, Jakob Eg; Jørgensen, Sune Lehmann

    2013-01-01

    for collecting spatio-temporal data about participant mobility and social interactions uses the capabilities of Bluetooth capable smartphones carried by participants. As a proof-of-concept we present a field study with deployment of the method in a large music festival with 130 000 participants where a small...

  11. Application of bivariate mapping for hydrological classification and analysis of temporal change and scale effects in Switzerland

    Speich, Matthias J.R.; Bernhard, Luzi; Teuling, Ryan; Zappa, Massimiliano

    2015-01-01

    Hydrological classification schemes are important tools for assessing the impacts of a changing climate on the hydrology of a region. In this paper, we present bivariate mapping as a simple means of classifying hydrological data for a quantitative and qualitative assessment of temporal change.

  12. Optimizing Cr(VI) and Tc(VII) remediation through nano-scale biomineral engineering

    Cutting, R.S.; Coker, V.S.; Telling, N.D.; Kimber, R.L.; Pearce, C.I.; Ellis, B.; Lawson, R; van der Laan, G.; Pattrick, R.A.D.; Vaughan, D.J.; Arenholz, E.; Lloyd, J.R.

    2009-01-01

    To optimize the production of biomagnetite for the bioremediation of metal oxyanion contaminated waters, the reduction of aqueous Cr(VI) to Cr(III) by two biogenic magnetites and a synthetic magnetite was evaluated under batch and continuous flow conditions. Results indicate that nano-scale biogenic magnetite produced by incubating synthetic schwertmannite powder in cell suspensions of Geobacter sulfurreducens is more efficient at reducing Cr(VI) than either biogenic nano-magnetite produced from a suspension of ferrihydrite 'gel' or synthetic nano-scale Fe₃O₄ powder. Although X-ray Photoelectron Spectroscopy (XPS) measurements obtained from post-exposure magnetite samples reveal that both Cr(III) and Cr(VI) are associated with nanoparticle surfaces, X-ray Magnetic Circular Dichroism (XMCD) studies indicate that some Cr(III) has replaced octahedrally coordinated Fe in the lattice of the magnetite. Inductively Coupled Plasma-Atomic Emission Spectrometry (ICP-AES) measurements of total aqueous Cr in the associated solution phase indicated that, although the majority of Cr(III) was incorporated within or adsorbed to the magnetite samples, a proportion (∼10-15%) was released back into solution. Studies of Tc(VII) uptake by magnetites produced via the different synthesis routes also revealed significant differences between them as regards effectiveness for remediation. In addition, column studies using a γ-camera to obtain real time images of a ⁹⁹ᵐTc(VII) radiotracer were performed to visualize directly the relative performances of the magnetite sorbents against ultra-trace concentrations of metal oxyanion contaminants. Again, the magnetite produced from schwertmannite proved capable of retaining more (∼20%) ⁹⁹ᵐTc(VII) than the magnetite produced from ferrihydrite, confirming that biomagnetite production for efficient environmental remediation can be fine-tuned through careful selection of the initial Fe(III) mineral substrate supplied to Fe

  13. Optimizing Cr(VI) and Tc(VII) remediation through nano-scale biomineral engineering

    Cutting, R. S.; Coker, V. S.; Telling, N. D.; Kimber, R. L.; Pearce, C. I.; Ellis, B.; Lawson, R; van der Laan, G.; Pattrick, R.A.D.; Vaughan, D.J.; Arenholz, E.; Lloyd, J. R.

    2009-09-09

    To optimize the production of biomagnetite for the bioremediation of metal oxyanion contaminated waters, the reduction of aqueous Cr(VI) to Cr(III) by two biogenic magnetites and a synthetic magnetite was evaluated under batch and continuous flow conditions. Results indicate that nano-scale biogenic magnetite produced by incubating synthetic schwertmannite powder in cell suspensions of Geobacter sulfurreducens is more efficient at reducing Cr(VI) than either biogenic nano-magnetite produced from a suspension of ferrihydrite 'gel' or synthetic nano-scale Fe₃O₄ powder. Although X-ray Photoelectron Spectroscopy (XPS) measurements obtained from post-exposure magnetite samples reveal that both Cr(III) and Cr(VI) are associated with nanoparticle surfaces, X-ray Magnetic Circular Dichroism (XMCD) studies indicate that some Cr(III) has replaced octahedrally coordinated Fe in the lattice of the magnetite. Inductively Coupled Plasma-Atomic Emission Spectrometry (ICP-AES) measurements of total aqueous Cr in the associated solution phase indicated that, although the majority of Cr(III) was incorporated within or adsorbed to the magnetite samples, a proportion (∼10-15%) was released back into solution. Studies of Tc(VII) uptake by magnetites produced via the different synthesis routes also revealed significant differences between them as regards effectiveness for remediation. In addition, column studies using a γ-camera to obtain real time images of a ⁹⁹ᵐTc(VII) radiotracer were performed to visualize directly the relative performances of the magnetite sorbents against ultra-trace concentrations of metal oxyanion contaminants. Again, the magnetite produced from schwertmannite proved capable of retaining more (∼20%) ⁹⁹ᵐTc(VII) than the magnetite produced from ferrihydrite, confirming that biomagnetite production for efficient environmental remediation can be fine-tuned through careful selection of the initial Fe(III) mineral

  14. Entropy Production of Emerging Turbulent Scales in a Temporal Supercritical n-Heptane/Nitrogen Three-Dimensional Mixing Layer

    Bellan, J.; Okongo, N.

    2000-01-01

    A study of the entropy production of emerging turbulent scales is conducted for a supercritical shear layer as a precursor to the eventual modeling of subgrid scales (from a turbulent state) leading to large eddy simulations.

  15. Meta-optimization of the extended Kalman filter's parameters for improved feature extraction on hyper-temporal images

    Salmon, BP

    2011-07-01

    Full Text Available This paper proposes a meta-optimization approach for setting the parameters of the non-linear Extended Kalman Filter to rapidly and efficiently estimate the features for the pair of triply modulated cosine functions. The approach is based on an unsupervised...
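
    As context for what is being meta-optimized, the sketch below runs a bare-bones extended Kalman filter on a single modulated-cosine observation model (the work cited here uses a pair of triply modulated cosines). The process and observation noise covariances Q and R are exactly the kind of parameters such a meta-optimization would tune; the values shown are arbitrary.

```python
# Minimal EKF sketch for y_t ~ mu_t + alpha_t*cos(omega*t + phi_t), with state
# [mu, alpha, phi]. Q, R and omega are assumptions, not values from the paper.
import numpy as np

omega = 2 * np.pi / 46          # annual cycle at ~46 observations/year (assumed)
Q = np.diag([1e-4, 1e-4, 1e-5]) # process noise (candidate for meta-optimization)
R = 0.05 ** 2                   # observation noise (candidate for meta-optimization)

def ekf(y):
    x = np.array([y.mean(), y.std(), 0.0])   # initial state: [mu, alpha, phi]
    P = np.eye(3)
    for t, yt in enumerate(y):
        P = P + Q                            # predict (random-walk state model)
        c = np.cos(omega * t + x[2])
        s = np.sin(omega * t + x[2])
        H = np.array([1.0, c, -x[1] * s])    # Jacobian of h(x) = mu + alpha*cos(...)
        innov = yt - (x[0] + x[1] * c)
        S = H @ P @ H + R
        K = P @ H / S
        x = x + K * innov
        P = P - np.outer(K, H) @ P
    return x

t = np.arange(200)
truth = 0.4 + 0.2 * np.cos(omega * t + 0.8)
y = truth + 0.05 * np.random.default_rng(0).normal(size=t.size)
print("estimated [mu, alpha, phi]:", ekf(y).round(3))
```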

  16. Optimal spatio-temporal design of water quality monitoring networks for reservoirs: Application of the concept of value of information

    Maymandi, Nahal; Kerachian, Reza; Nikoo, Mohammad Reza

    2018-03-01

    This paper presents a new methodology for optimizing Water Quality Monitoring (WQM) networks of reservoirs and lakes using the concept of the value of information (VOI) and utilizing the results of a calibrated numerical water quality simulation model. With reference to value of information theory, the water quality of every checkpoint with a specific prior probability differs in time. After analyzing water quality samples taken from potential monitoring points, the posterior probabilities are updated using Bayes' theorem, and the VOI of the samples is calculated. In the next step, the stations with maximum VOI are selected as optimal stations. This process is repeated for each sampling interval to obtain optimal monitoring network locations for each interval. The results of the proposed VOI-based methodology are compared with those obtained using an entropy-theoretic approach. As the results of the two methodologies would be partially different, in the next step the results are combined using a weighting method. Finally, the optimal sampling interval and locations of WQM stations are chosen using the Evidential Reasoning (ER) decision-making method. The efficiency and applicability of the methodology are evaluated using available water quantity and quality data of the Karkheh Reservoir in the southwestern part of Iran.
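
    A schematic of the value-of-information calculation described above, with invented numbers: a prior over two water-quality states, a sample with a known error rate, Bayes' rule for the posterior, and VOI as the expected reduction in the cost of acting under uncertainty. It is an illustration of the concept, not the paper's model.

```python
# Schematic VOI calculation: all probabilities and losses are invented.
import numpy as np

prior = np.array([0.7, 0.3])                 # P(compliant), P(polluted)
loss = np.array([[0.0, 10.0],                # loss[action, true state]
                 [2.0,  1.0]])               # action 0: do nothing, 1: intervene
likelihood = np.array([[0.9, 0.2],           # P(sample reads "clean" | state)
                       [0.1, 0.8]])          # P(sample reads "dirty" | state)

cost_without = min(loss @ prior)             # best action using the prior alone

cost_with = 0.0
for z in range(2):                           # possible sample outcomes
    evidence = likelihood[z] @ prior
    posterior = likelihood[z] * prior / evidence
    cost_with += evidence * min(loss @ posterior)   # Bayes update, then act

print("value of information at this station:", round(cost_without - cost_with, 3))
```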

  17. Chemical Reactor Automation as a way to Optimize a Laboratory Scale Polymerization Process

    Cruz-Campa, Jose L.; Saenz de Buruaga, Isabel; Lopez, Raymundo

    2004-10-01

    The automation of the registration and control of variables involved in a chemical reactor improves the reaction process by making it faster, optimized and free of the influence of human error. The objective of this work is to register and control the variables involved (temperatures, reagent flows, weights, etc.) in an emulsion polymerization reaction. The programs and control algorithms were developed in the G language in LabVIEW®. The designed software is able to send and receive RS232-encoded data between the devices (pumps, temperature sensors, mixer, balances, and so on) and a personal computer. The transduction from digital information to movement or measurement actions of the devices is done by electronic components included in the devices. Once the programs were completed and verified, emulsion polymerization reactions were carried out to validate the system. Moreover, some advanced heat-estimation algorithms were implemented in order to determine the heat released by the reaction and to estimate and control chemical variables in-line. All the information obtained from the reaction is stored on the PC. The information is then available and ready to use in any commercial data-processing software. This work is now being used in a research center in order to run emulsion polymerizations under efficient and controlled conditions with reproducible results. The experience obtained from this project may be used in the implementation of chemical estimation algorithms at pilot plant or industrial scale.

  18. Energy harvesting with stacked dielectric elastomer transducers: Nonlinear theory, optimization, and linearized scaling law

    Tutcuoglu, A.; Majidi, C.

    2014-12-01

    Using principles of damped harmonic oscillation with continuous media, we examine electrostatic energy harvesting with a "soft-matter" array of dielectric elastomer (DE) transducers. The array is composed of infinitely thin and deformable electrodes separated by layers of insulating elastomer. During vibration, it deforms longitudinally, resulting in a change in the capacitance and electrical enthalpy of the charged electrodes. Depending on the phase of electrostatic loading, the DE array can function as either an actuator that amplifies small vibrations or a generator that converts these external excitations into electrical power. Both cases are addressed with a comprehensive theory that accounts for the influence of viscoelasticity, dielectric breakdown, and electromechanical coupling induced by Maxwell stress. In the case of a linearized Kelvin-Voigt model of the dielectric, we obtain a closed-form estimate for the electrical power output and a scaling law for DE generator design. For the complete nonlinear model, we obtain the optimal electrostatic voltage input for maximum electrical power output.

  19. High performance Spark best practices for scaling and optimizing Apache Spark

    Karau, Holden

    2017-01-01

    Apache Spark is amazing when everything clicks. But if you haven’t seen the performance improvements you expected, or still don’t feel confident enough to use Spark in production, this practical book is for you. Authors Holden Karau and Rachel Warren demonstrate performance optimizations to help your Spark queries run faster and handle larger data sizes, while using fewer resources. Ideal for software engineers, data engineers, developers, and system administrators working with large-scale data applications, this book describes techniques that can reduce data infrastructure costs and developer hours. Not only will you gain a more comprehensive understanding of Spark, you’ll also learn how to make it sing. With this book, you’ll explore: How Spark SQL’s new interfaces improve performance over SQL’s RDD data structure The choice between data joins in Core Spark and Spark SQL Techniques for getting the most out of standard RDD transformations How to work around performance issues i...

  20. Optimization and scale up of trickling bed bioreactors for degradation of volatile organic substances

    Schindler, I.

    1996-01-01

    For the optimization and scale-up of trickling bed bioreactors used in waste gas cleaning, the following investigations were made: the degradation of toluene was measured in reactors with various volumes and diameter-to-height ratios, and the degradation of toluene was investigated in bioreactors with different carrier materials. It turned out that the increase of the elimination capacity with the height of the reactor depends on the carrier material. At low gas velocities PU-foam allows higher elimination capacities than pall rings, VSP and DINPAC. On the other hand, with PU-foam there is a permanent danger of clogging. The other materials allowed stable operation for several months. Mass transfer of toluene was studied by absorption experiments in a 100 litre plant without microorganisms. The experiments yielded a Henry coefficient of 0.23 (kg/m³)gas/(kg/m³)liquid. Mass transfer coefficients between 3.6 and 5.2 were calculated, depending on the space velocity of the gas and the trickling density of the water phase. The degradation of ethyl acetate, toluene and heptane was investigated considering the different water solubilities of these substances. Furthermore, the degradation of toluene and heptane in several mixtures was investigated. (author)

  1. Optimal Multiuser Diversity in Multi-Cell MIMO Uplink Networks: User Scaling Law and Beamforming Design

    Bang Chul Jung

    2017-07-01

    Full Text Available We introduce a distributed protocol to achieve multiuser diversity in a multicell multiple-input multiple-output (MIMO) uplink network, referred to as a MIMO interfering multiple-access channel (IMAC). Assuming both no information exchange among base stations (BSs) and local channel state information at the transmitters for the MIMO IMAC, we propose a joint beamforming and user scheduling protocol, and then show that the proposed protocol can achieve the optimal multiuser diversity gain, i.e., KM log(SNR log N), as long as the number of mobile stations (MSs) in a cell, N, scales faster than SNR^((KM−L)/(1−ϵ)) for a small constant ϵ > 0, where M, L, K, and SNR denote the number of receive antennas at each BS, the number of transmit antennas at each MS, the number of cells, and the signal-to-noise ratio, respectively. Our result indicates that multiuser diversity can be achieved in the presence of intra-cell and inter-cell interference even in a distributed fashion. As a result, vital information on how to design distributed algorithms in interference-limited cellular environments is provided.

  2. Energy-scales convergence for optimal and robust quantum transport in photosynthetic complexes

    Mohseni, M. [Google Research, Venice, California 90291 (United States); Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Shabani, A. [Department of Chemistry, Princeton University, Princeton, New Jersey 08544 (United States); Department of Chemistry, University of California at Berkeley, Berkeley, California 94720 (United States); Lloyd, S. [Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Rabitz, H. [Department of Chemistry, Princeton University, Princeton, New Jersey 08544 (United States)

    2014-01-21

    Underlying physical principles for the high efficiency of excitation energy transfer in light-harvesting complexes are not fully understood. Notably, the degree of robustness of these systems for transporting energy is not known considering their realistic interactions with vibrational and radiative environments within the surrounding solvent and scaffold proteins. In this work, we employ an efficient technique to estimate energy transfer efficiency of such complex excitonic systems. We observe that the dynamics of the Fenna-Matthews-Olson (FMO) complex leads to optimal and robust energy transport due to a convergence of energy scales among all important internal and external parameters. In particular, we show that the FMO energy transfer efficiency is optimum and stable with respect to important parameters of environmental interactions including reorganization energy λ, bath frequency cutoff γ, temperature T, and bath spatial correlations. We identify the ratio of k_BλT/ℏγg as a single key parameter governing quantum transport efficiency, where g is the average excitonic energy gap.

  3. Pretreatment of wastewater: Optimal coagulant selection using Partial Order Scaling Analysis (POSA)

    Tzfati, Eran; Sein, Maya; Rubinov, Angelika; Raveh, Adi; Bick, Amos

    2011-01-01

    The jar test is a well-known tool for chemical selection for physical-chemical wastewater treatment. Jar test results show the treatment efficiency in terms of suspended matter and organic matter removal. However, in spite of having all these results, coagulant selection is not an easy task, because one coagulant can remove the suspended solids efficiently but at the same time increase the conductivity. This makes the final selection of coagulants very dependent on the relative importance assigned to each measured parameter. In this paper, the use of Partial Order Scaling Analysis (POSA) and multi-criteria decision analysis is proposed to help the selection of the coagulant and its concentration in a sequencing batch reactor (SBR). Therefore, starting from the parameters fixed by the jar-test results, these techniques allow the parameters to be weighted, according to the judgments of wastewater experts, and priorities to be established among coagulants. An evaluation of two commonly used coagulation/flocculation aids (alum and ferric chloride) was conducted and, based on the jar tests and the POSA model, ferric chloride (100 ppm) was the best choice. The results obtained show that POSA and multi-criteria techniques are useful tools to select the optimal chemicals for the physical-chemical treatment.

  4. Energy-scales convergence for optimal and robust quantum transport in photosynthetic complexes

    Mohseni, M.; Shabani, A.; Lloyd, S.; Rabitz, H.

    2014-01-01

    Underlying physical principles for the high efficiency of excitation energy transfer in light-harvesting complexes are not fully understood. Notably, the degree of robustness of these systems for transporting energy is not known considering their realistic interactions with vibrational and radiative environments within the surrounding solvent and scaffold proteins. In this work, we employ an efficient technique to estimate energy transfer efficiency of such complex excitonic systems. We observe that the dynamics of the Fenna-Matthews-Olson (FMO) complex leads to optimal and robust energy transport due to a convergence of energy scales among all important internal and external parameters. In particular, we show that the FMO energy transfer efficiency is optimum and stable with respect to important parameters of environmental interactions including reorganization energy λ, bath frequency cutoff γ, temperature T, and bath spatial correlations. We identify the ratio of k_BλT/ℏγg as a single key parameter governing quantum transport efficiency, where g is the average excitonic energy gap.

  5. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CPs), each with multiple Total Maximum Daily Load (TMDL) targets, were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CPs were met with the lowest possible BMP implementation cost. A Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach a near-optimal solution. The best solution obtained among all the GA executions had a cost of 67.7 million, marginally higher than, but approximately equal to, that of the NIMS solution. The results highlight the utility of the approach for decision making in large-scale watershed simulation-optimization formulations.

  6. Modelling and optimal operation of a small-scale integrated energy based district heating and cooling system

    Jing, Z.X.; Jiang, X.S.; Wu, Q.H.; Tang, W.H.; Hua, B.

    2014-01-01

    This paper presents a comprehensive model of a small-scale integrated energy based district heating and cooling (DHC) system located in a residential area of a hot-summer and cold-winter zone, which makes joint use of wind energy, solar energy, natural gas and electric energy. The model includes an off-grid wind turbine generator, heat producers, chillers, a water supply network and terminal loads. This research also investigates an optimal operating strategy based on a Group Search Optimizer (GSO), through which the daily running cost of the system is optimized in both the heating and cooling modes. The strategy can be used to find the optimal number of operating chillers, optimal outlet water temperature set points of boilers and optimal water flow set points of pumps, taking into account cost functions and various operating constraints. In order to verify the model and the optimal operating strategy, performance tests have been undertaken using MATLAB. The simulation results prove the validity of the model and show that the strategy is able to minimize the system operation cost. The proposed system is evaluated in comparison with a conventional separation production (SP) system. The feasibility of investment for the DHC system is also discussed. The comparative results demonstrate the investment feasibility and the significant energy saving and cost reduction achieved in daily operation in an environment where there are varying heating loads, cooling loads, wind speeds, solar radiation and electricity prices. - Highlights: • A model of a small-scale integrated energy based DHC system is presented. • An off-grid wind generator used for water heating is embedded in the model. • An optimal control strategy is studied to optimize the running cost of the system. • The designed system is proven to be energy efficient and cost effective in operation.

  7. A parameter optimization tool for evaluating the physical consistency of the plot-scale water budget of the integrated eco-hydrological model GEOtop in complex terrain

    Bertoldi, Giacomo; Cordano, Emanuele; Brenner, Johannes; Senoner, Samuel; Della Chiesa, Stefano; Niedrist, Georg

    2017-04-01

    In mountain regions, the plot- and catchment-scale water and energy budgets are controlled by a complex interplay of different abiotic (i.e. topography, geology, climate) and biotic (i.e. vegetation, land management) controlling factors. When integrated, physically-based eco-hydrological models are used in mountain areas, there are a large number of parameters, topographic and boundary conditions that need to be chosen. However, data on soil and land-cover properties are relatively scarce and do not reflect the strong variability at the local scale. For this reason, tools for uncertainty quantification and optimal parameter identification are essential not only to improve model performance, but also to identify the most relevant parameters to be measured in the field and to evaluate the impact of different assumptions for topographic and boundary conditions (surface, lateral and subsurface water and energy fluxes), which are usually unknown. In this contribution, we present the results of a sensitivity analysis exercise for a set of 20 experimental stations located in the Italian Alps, representative of different conditions in terms of topography (elevation, slope, aspect), land use (pastures, meadows, and apple orchards), soil type and groundwater influence. Besides micrometeorological parameters, each station provides soil water content at different depths, and three stations (one for each land cover) provide eddy covariance fluxes. The aims of this work are: (I) to present an approach for improving the calibration of plot-scale soil moisture and evapotranspiration (ET); (II) to identify the most sensitive parameters and the relevant factors controlling temporal and spatial differences among sites; and (III) to identify possible model structural deficiencies or uncertainties in boundary conditions. Simulations have been performed with the GEOtop 2.0 model, which is a physically-based, fully distributed integrated eco-hydrological model that has been specifically designed for mountain

  8. Improved decomposition–coordination and discrete differential dynamic programming for optimization of large-scale hydropower system

    Li, Chunlong; Zhou, Jianzhong; Ouyang, Shuo; Ding, Xiaoling; Chen, Lu

    2014-01-01

    Highlights: • Optimization of a large-scale hydropower system in the Yangtze River basin. • Improved decomposition–coordination and discrete differential dynamic programming. • Generating the initial solution randomly to reduce generation time. • Proposing a relative coefficient for more power generation. • Proposing an adaptive bias corridor technology to enhance convergence speed. - Abstract: With the construction of major hydro plants, more and more large-scale hydropower systems are gradually taking shape, which poses a challenge for optimizing these systems. Optimization of a large-scale hydropower system (OLHS), which is to determine the water discharges or water levels of all hydro plants so as to maximize total power generation while subject to many constraints, is a high-dimensional, nonlinear and coupled complex problem. In order to solve the OLHS problem effectively, an improved decomposition–coordination and discrete differential dynamic programming (IDC–DDDP) method is proposed in this paper. A strategy in which the initial solution is generated randomly is adopted to reduce generation time. Meanwhile, a relative coefficient based on maximum output capacity is proposed for more power generation. Moreover, an adaptive bias corridor technology is proposed to enhance convergence speed. The proposed method is applied to the long-term optimal dispatch of the large-scale hydropower system (LHS) in the Yangtze River basin. Compared to other methods, IDC–DDDP has competitive performance in terms of not only total power generation but also convergence speed, which provides a new method to solve the OLHS problem.

  9. Fenton chemistry-based detemplation of an industrially relevant microcrystalline beta zeolite. Optimization and scaling-up studies

    Ortiz-Iniesta, Maria Jesus; Melian-Cabrera, Ignacio

    A mild template removal of microcrystalline beta zeolite, based on Fenton chemistry, was optimized. Fenton detemplation was studied in terms of applicability conditions window, reaction rate and scale up. TGA and CHN elemental analysis were used to evaluate the detemplation effectiveness, while 'CP,

  10. Numerical Methods for the Optimization of Nonlinear Residual-Based Subgrid-Scale Models Using the Variational Germano Identity

    Maher, G.D.; Hulshoff, S.J.

    2014-01-01

    The Variational Germano Identity [1, 2] is used to optimize the coefficients of residual-based subgrid-scale models that arise from the application of a Variational Multiscale Method [3, 4]. It is demonstrated that numerical iterative methods can be used to solve the Germano relations to obtain

  11. Use of a handheld low-cost sensor to explore the effect of urban design features on local-scale spatial and temporal air quality variability.

    Miskell, Georgia; Salmond, Jennifer A; Williams, David E

    2018-04-01

    Portable low-cost instruments have been validated and used to measure ambient nitrogen dioxide (NO2) at multiple sites over a small urban area with 20 min time resolution. We use these results combined with land use regression (LUR) and rank correlation methods to explore the effects of traffic, urban design features, and local meteorology and atmospheric chemistry on small-scale spatio-temporal variations. We measured NO2 at 45 sites around the downtown area of Vancouver, BC, in spring 2016, and constructed four different models: i) a model based on averaging concentrations observed at each site over the whole measurement period, and separate temporal models for ii) morning, iii) midday, and iv) afternoon. Redesign of the temporal models using the average-model predictors as constants gave three 'hybrid' models that used both spatial and temporal variables. These accounted for approximately 50% of the total variation, with a mean absolute error of ±5 ppb. Ranking sites by concentration and by change in concentration across the day showed a shift of high NO2 concentrations across the central city from morning to afternoon. Some locations could be identified in which the NO2 concentration was determined by the geography of the site, and others in which the concentration changed markedly from morning to afternoon, indicating the importance of temporal controls. Rank correlation results complemented LUR in identifying significant urban design variables that impacted NO2 concentration. High variability across a relatively small space was partially described by predictor variables related to traffic (bus stop density, speed limits, traffic counts, distance to traffic lights), atmospheric chemistry (ozone, dew point), and environment (land use, trees). A high-density network recording continuously would be needed to fully capture local variations. Copyright © 2017 Elsevier B.V. All rights reserved.
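
    A minimal land-use-regression sketch on synthetic data (not the Vancouver measurements) shows the basic model structure implied above: site-mean NO2 regressed on a few traffic and environment predictors, with the mean absolute error reported for comparison with the ±5 ppb figure.

```python
# LUR sketch on simulated data; the predictors mimic the kinds of variables
# listed above (bus stop density, traffic counts, distance to traffic lights,
# ozone), but all values and coefficients are invented placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n_sites = 45                                      # same site count as the study
X = np.column_stack([
    rng.uniform(0, 10, n_sites),                  # bus stop density (per km^2)
    rng.uniform(100, 5000, n_sites),              # traffic count (veh/h)
    rng.uniform(10, 500, n_sites),                # distance to traffic lights (m)
    rng.uniform(10, 40, n_sites),                 # ozone (ppb)
])
no2 = (5 + 1.2 * X[:, 0] + 0.004 * X[:, 1] - 0.01 * X[:, 2] - 0.3 * X[:, 3]
       + rng.normal(0, 4, n_sites))               # synthetic NO2 (ppb)

lur = LinearRegression().fit(X, no2)
pred = lur.predict(X)
print("R^2 =", round(lur.score(X, no2), 2))
print("MAE =", round(mean_absolute_error(no2, pred), 1), "ppb")  # study reports ~5 ppb
```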

  12. Detecting small-scale spatial differences and temporal dynamics of soil organic carbon (SOC) stocks: a comparison between automatic chamber-derived C budgets and repeated soil inventories

    Hoffmann, Mathias; Jurisch, Nicole; Garcia Alba, Juana; Albiac Borraz, Elisa; Schmidt, Marten; Huth, Vytas; Rogasik, Helmut; Rieckh, Helene; Verch, Gernot; Sommer, Michael; Augustin, Jürgen

    2017-04-01

    Carbon (C) sequestration in soils plays a key role in the global C cycle. It is therefore crucial to adequately monitor dynamics in soil organic carbon (ΔSOC) stocks when aiming to reveal underlying processes and potential drivers. However, small-scale spatial and temporal changes in SOC stocks, particularly pronounced on arable lands, are hard to assess. The main reasons for this are limitations of the well-established methods. On the one hand, repeated soil inventories, often used in long-term field trials, reveal spatial patterns and trends in ΔSOC but require a longer observation period and a sufficient number of repetitions. On the other hand, eddy covariance measurements of C fluxes towards a complete C budget of the soil-plant-atmosphere system may help to obtain temporal ΔSOC patterns but lack small-scale spatial resolution. To overcome these limitations, this study presents a reliable method to detect both short-term temporal as well as small-scale spatial dynamics of ΔSOC. Therefore, a combination of automatic chamber (AC) measurements of CO2 exchange and empirically modeled aboveground biomass development (NPPshoot) was used. To verify our method, results were compared with ΔSOC observed by soil resampling. AC measurements were performed from 2010 to 2014 under a silage maize/winter fodder rye/sorghum-Sudan grass hybrid/alfalfa crop rotation at a colluvial depression located in the hummocky ground moraine landscape of NE Germany. Widespread in large areas of the formerly glaciated Northern Hemisphere, this depression type is characterized by a variable groundwater level (GWL) and pronounced small-scale spatial heterogeneity in soil properties, such as SOC and nitrogen (Nt). After monitoring the initial stage during 2010, soil erosion was experimentally simulated by incorporating topsoil material from an eroded midslope soil into the plough layer of the colluvial depression. SOC stocks were quantified before and after soil manipulation and at the end

  13. Detecting small-scale spatial heterogeneity and temporal dynamics of soil organic carbon (SOC) stocks: a comparison between automatic chamber-derived C budgets and repeated soil inventories

    Hoffmann, Mathias; Jurisch, Nicole; Garcia Alba, Juana; Albiac Borraz, Elisa; Schmidt, Marten; Huth, Vytas; Rogasik, Helmut; Rieckh, Helene; Verch, Gernot; Sommer, Michael; Augustin, Jürgen

    2017-03-01

    Carbon (C) sequestration in soils plays a key role in the global C cycle. It is therefore crucial to adequately monitor dynamics in soil organic carbon (ΔSOC) stocks when aiming to reveal underlying processes and potential drivers. However, small-scale spatial (10-30 m) and temporal changes in SOC stocks, particularly pronounced in arable lands, are hard to assess. The main reasons for this are limitations of the well-established methods. On the one hand, repeated soil inventories, often used in long-term field trials, reveal spatial patterns and trends in ΔSOC but require a longer observation period and a sufficient number of repetitions. On the other hand, eddy covariance measurements of C fluxes towards a complete C budget of the soil-plant-atmosphere system may help to obtain temporal ΔSOC patterns but lack small-scale spatial resolution. To overcome these limitations, this study presents a reliable method to detect both short-term temporal dynamics and small-scale spatial differences in ΔSOC using measurements of the net ecosystem carbon balance (NECB) as a proxy. To estimate the NECB, a combination of automatic chamber (AC) measurements of CO2 exchange and empirically modeled aboveground biomass development (NPPshoot) was used. To verify our method, results were compared with ΔSOC observed by soil resampling. Soil resampling and AC measurements were performed from 2010 to 2014 at a colluvial depression located in the hummocky ground moraine landscape of northeastern Germany. The measurement site is characterized by a variable groundwater level (GWL) and pronounced small-scale spatial heterogeneity regarding SOC and nitrogen (Nt) stocks. The tendencies and magnitudes of ΔSOC values derived by AC measurements and repeated soil inventories corresponded well. The period of maximum plant growth was identified as being most important for the development of spatial differences in annual ΔSOC. Hence, we were able to confirm that AC-based C budgets are able
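
    The bookkeeping behind the NECB proxy can be written in a few lines. The sketch below is schematic and is not the authors' code: it assumes the atmospheric sign convention (NEE > 0 means a net CO2 loss to the atmosphere) and uses placeholder numbers for the annual flux integral and the harvest export.

```python
# Schematic NECB bookkeeping (assumed convention: NEE > 0 is a net CO2 loss to
# the atmosphere, so the ecosystem C gain is -NEE). Values are placeholders.
def necb_g_c_m2(annual_nee, harvest_export, manure_import=0.0):
    """NECB = -NEE + imports - exports, all terms in g C m^-2 yr^-1."""
    return -annual_nee + manure_import - harvest_export

# Hypothetical year: the chambers integrate to a net uptake of 650 g C m^-2,
# most of which leaves the field again as harvested silage maize biomass,
# leaving a small positive proxy for the annual change in SOC stocks.
print(necb_g_c_m2(annual_nee=-650.0, harvest_export=600.0))   # ~ +50 g C m^-2 yr^-1
```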

  14. Long-term spatial and temporal microbial community dynamics in a large-scale drinking water distribution system with multiple disinfectant regimes.

    Potgieter, Sarah; Pinto, Ameet; Sigudu, Makhosazana; du Preez, Hein; Ncube, Esper; Venter, Stephanus

    2018-08-01

    Long-term spatial-temporal investigations of microbial dynamics in full-scale drinking water distribution systems are scarce. These investigations can reveal the process, infrastructure, and environmental factors that influence the microbial community, offering opportunities to re-think microbial management in drinking water systems. Often, these insights are missed or are unreliable in short-term studies, which are impacted by stochastic variabilities inherent to large full-scale systems. In this two-year study, we investigated the spatial and temporal dynamics of the microbial community in a large, full-scale South African drinking water distribution system that uses three successive disinfection strategies (i.e. chlorination, chloramination and hypochlorination). Monthly bulk water samples were collected from the outlet of the treatment plant and from 17 points in the distribution system spanning nearly 150 km, and the bacterial community composition was characterised by Illumina MiSeq sequencing of the V4 hypervariable region of the 16S rRNA gene. As in previous studies, Alpha- and Betaproteobacteria dominated the drinking water bacterial communities, with an increase in Betaproteobacteria post-chloramination. In contrast with previous reports, the observed richness, diversity, and evenness of the bacterial communities were higher in the winter months than in the summer months. In addition to temperature effects, the seasonal variations were also likely to be influenced by changes in average water age in the distribution system and corresponding changes in disinfectant residual concentrations. Spatial dynamics of the bacterial communities indicated distance decay, with bacterial communities becoming increasingly dissimilar with increasing distance between sampling locations. These spatial effects dampened the temporal changes in the bulk water community and were the dominant factor when considering the entire distribution system. However

  15. Challenges and opportunities for large landscape-scale management in a shifting climate: The importance of nested adaptation responses across geospatial and temporal scales

    Gary M. Tabor; Anne Carlson; Travis Belote

    2014-01-01

    The Yellowstone to Yukon Conservation Initiative (Y2Y) was established over 20 years ago as an experiment in large landscape conservation. Initially, Y2Y emerged as a response to large scale habitat fragmentation by advancing ecological connectivity. It also laid the foundation for large scale multi-stakeholder conservation collaboration with almost 200 non-...

  16. Bioremediation of endosulfan contaminated soil and water-Optimization of operating conditions in laboratory scale reactors

    Kumar, Mathava; Philip, Ligy

    2006-01-01

    A mixed bacterial culture consisting of Staphylococcus sp., Bacillus circulans-I and -II was enriched from contaminated soil collected from the vicinity of an endosulfan processing industry. The degradation of endosulfan by the mixed bacterial culture was studied under aerobic and facultative anaerobic conditions via batch experiments with an initial endosulfan concentration of 50 mg/L. After 3 weeks of incubation, the mixed bacterial culture was able to degrade 71.58 ± 0.2% and 75.88 ± 0.2% of endosulfan under aerobic and facultative anaerobic conditions, respectively. The addition of external carbon (dextrose) increased endosulfan degradation under both conditions. The optimal dextrose concentration and inoculum size were estimated as 1 g/L and 75 mg/L, respectively. The pH of the system had a significant effect on endosulfan degradation. The degradation of alpha endosulfan was greater than that of beta endosulfan in all the experiments. Endosulfan biodegradation in soil was evaluated using miniature and bench-scale soil reactors. The soils used for the biodegradation experiments were identified as clayey soil (CL, lean clay with sand), red soil (GM, silty gravel with sand), sandy soil (SM, silty sand with gravel) and composted soil (PT, peat) as per ASTM (American Society for Testing and Materials) standards. Endosulfan degradation efficiency in the miniature soil reactors was in the order of sandy soil followed by red soil, composted soil and clayey soil under both aerobic and anaerobic conditions. In the bench-scale soil reactors, more endosulfan degradation was observed in the bottom layers. After 4 weeks, a maximum endosulfan degradation efficiency of 95.48 ± 0.17% was observed in the red soil reactor, whereas in composted soil-I (moisture 38 ± 1%) and composted soil-II (moisture 45 ± 1%) it was 96.03 ± 0.23% and 94.84 ± 0.19%, respectively. The high moisture content in compost soil reactor-II increased the endosulfan concentration in the leachate. Known intermediate metabolites of

  17. Long-term temporal stability of the National Institute of Standards and Technology spectral irradiance scale determined with absolute filter radiometers

    Yoon, Howard W.; Gibson, Charles E.

    2002-01-01

    The temporal stability of the National Institute of Standards and Technology (NIST) spectral irradiance scale as measured with broadband filter radiometers calibrated for absolute spectral irradiance responsivity is described. The working standard FEL lamps and the check standard FEL lamps have been monitored with radiometers in the ultraviolet and the visible wavelength regions. The measurements made with these two radiometers reveal that the NIST spectral irradiance scale, as compared with an absolute thermodynamic scale, has not changed by more than 1.5% in the visible from 1993 to 1999. Similar measurements in the ultraviolet reveal that the corresponding change is less than 1.5% from 1995 to 1999. Furthermore, a check of the spectral irradiance scale by six different filter radiometers calibrated for absolute spectral irradiance responsivity based on the high-accuracy cryogenic radiometer shows that the agreement between the present scale and the detector-based scale is better than 1.3% throughout the visible to the near-infrared wavelength region. These results validate the assigned spectral irradiance of the widely disseminated NIST or NIST-traceable standard sources.

  18. Optimization of compositions of multicomponent fine-grained fiber concretes modified at different scale levels.

    NIZINA Tatyana Anatolevna

    2017-04-01

    Full Text Available The paper deals with prospects for the modification of cement composites at different scale levels (nano-, micro-, macro-). Main types of micro- and nanomodifiers used in modern concrete technology are presented. Advantages of fullerene particles applied in the nanomodification of cement concretes are shown. Use of complex modifiers based on dispersed fibers, mineral additives and nanoparticles is proposed. The basic components of the fine-grained fiber concretes are: cement of class CEM I 42,5R produced by JSC «Mordovcement», river sand of the Novostepanovskogo quarry (Smolny settlement, Ichalkovsky district, Republic of Mordovia), densified condensed microsilica (DCM-85) produced by JSC «Kuznetskie Ferrosplavy» (Novokuznetsk), highly active white metakaolin produced by LLC «D-Meta» (Dneprodzerzhinsk), the waterproofing concrete-mix additive «Penetron Admix» produced by LLC «Waterproofing materials plant «Penetron» (Ekaterinburg), and the polycarboxylate superplasticizer Melflux 1641 F (Construction Polymers BASF, Germany). Dispersed reinforcement of the concretes was provided by injection of fibers of three types: polypropylene multifilament fiber with a cut length of 12 mm, polyacrylonitrile synthetic fiber FibARM Fiber WВ with a cut length of 12 mm and basalt microfiber «Astroflex-MBM» modified by astralene with a length of about 100÷500 microns. Analysis of the results of the study, based on a saturated D-optimal plan, was carried out using polynomial models «mixture I, mixture II, technology – properties» that consider the impact of six variable factors. Optimum fields of variation of the fine-grained modified fiber concrete components were identified by the method of experimental-statistical modeling. Polygons of the distribution levels of the factors of the modified cement fiber concretes were constructed, which allowed tracing changes in compressive strength and tensile strength in bending at the age of 28 days depending on target

  19. Clinical utility of the Wechsler Memory Scale - Fourth Edition (WMS-IV) in patients with intractable temporal lobe epilepsy

    Bouman, Zita; Elhorst, Didi; Hendriks, Marc P H; Kessels, Roy P C; Aldenkamp, Albert P.

    2016-01-01

    Introduction: The Wechsler Memory Scale (WMS) is one of the most widely used test batteries to assess memory functions in patients with brain dysfunctions of different etiologies. This study examined the clinical validation of the Dutch Wechsler Memory Scale - Fourth Edition (WMS-IV-NL) in patients

  20. Clinical utility of the Wechsler Memory Scale - Fourth Edition (WMS-IV) in patients with intractable temporal lobe epilepsy

    Bouman, Z.; Elhorst, D.; Hendriks, M.P.H.; Kessels, R.P.C.; Aldenkamp, A.P.

    2016-01-01

    Introduction: The Wechsler Memory Scale (WMS) is one of the most widely used test batteries to assess memory functions in patients with brain dysfunctions of different etiologies. This study examined the clinical validation of the Dutch Wechsler Memory Scale-Fourth Edition (WMS-IV-NL) in patients

  1. Temporal networks

    Holme, Petter; Saramäki, Jari

    2012-10-01

    A great variety of systems in nature, society and technology, from the web of sexual contacts to the Internet, from the nervous system to power grids, can be modeled as graphs of vertices coupled by edges. The network structure, describing how the graph is wired, helps us understand, predict and optimize the behavior of dynamical systems. In many cases, however, the edges are not continuously active. As an example, in networks of communication via e-mail, text messages, or phone calls, edges represent sequences of instantaneous or practically instantaneous contacts. In some cases, edges are active for non-negligible periods of time: e.g., the proximity patterns of inpatients at hospitals can be represented by a graph where an edge between two individuals is on throughout the time they are at the same ward. Like network topology, the temporal structure of edge activations can affect dynamics of systems interacting through the network, from disease contagion on the network of patients to information diffusion over an e-mail network. In this review, we present the emergent field of temporal networks, and discuss methods for analyzing topological and temporal structure and models for elucidating their relation to the behavior of dynamical systems. In the light of traditional network theory, one can see this framework as moving the information of when things happen from the dynamical system on the network, to the network itself. Since fundamental properties, such as the transitivity of edges, do not necessarily hold in temporal networks, many of these methods need to be quite different from those for static networks. The study of temporal networks is very interdisciplinary in nature. Reflecting this, even the object of study has many names: temporal graphs, evolving graphs, time-varying graphs, time-aggregated graphs, time-stamped graphs, dynamic networks, dynamic graphs, dynamical graphs, and so on. This review covers different fields where temporal graphs are considered

  2. Spatio-temporal modelling of atmospheric pollution based on observations provided by an air quality monitoring network at a regional scale

    Coman, A.

    2008-01-01

    This study is devoted to the spatio-temporal modelling of air pollution at a regional scale using a set of statistical methods to treat the measurements of pollutant concentrations (NO2, O3) provided by an air quality monitoring network (AIRPARIF). The main objective is to improve the mapping of pollutant fields using either interpolation methods based on the spatial or spatio-temporal structure of the data (spatial or spatio-temporal kriging) or algorithms that take the observations into account in order to correct the concentrations simulated by a deterministic model (Ensemble Kalman Filter). The results show that nitrogen dioxide mapping based only on spatial interpolation (kriging) gives the best results, as the spatial distribution of the monitoring sites is good. For ozone mapping, it is sequential data assimilation that leads to a better reconstruction of the plume's form and position for the analyzed cases. Complementary to the pollutant mapping, another objective was to perform a local prediction of ozone concentrations over a 24-hour horizon; this task was performed using Artificial Neural Networks. The performance indices obtained using two types of neural architectures indicate a fair accuracy, especially for the first 8 hours of the prediction horizon. (author)
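
    As a reference point for the assimilation step mentioned above, the following is a generic, textbook-style ensemble Kalman filter analysis step written in NumPy; it is not the study's own implementation. It corrects an ensemble of modelled concentration fields with station observations, and the toy dimensions and observation operator in the demo are assumptions.

```python
# Generic stochastic EnKF analysis step (textbook formulation, not the study's
# code). Shapes: ens is (n_state, n_members), obs is (n_obs,), H maps state to
# observation space, R is the observation-error covariance.
import numpy as np

def enkf_update(ens, obs, H, R, rng=np.random.default_rng(0)):
    n_mem = ens.shape[1]
    Hx = H @ ens                                         # (n_obs, n_members)
    Xp = ens - ens.mean(axis=1, keepdims=True)           # state anomalies
    Yp = Hx - Hx.mean(axis=1, keepdims=True)             # obs-space anomalies
    Pxy = Xp @ Yp.T / (n_mem - 1)                        # state-obs covariance
    Pyy = Yp @ Yp.T / (n_mem - 1) + R                    # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)                         # Kalman gain
    obs_pert = rng.multivariate_normal(obs, R, size=n_mem).T  # perturbed observations
    return ens + K @ (obs_pert - Hx)                     # analysis ensemble

if __name__ == "__main__":
    n_state, n_obs, n_mem = 50, 5, 20
    ens = np.random.default_rng(1).normal(40.0, 10.0, (n_state, n_mem))  # toy ozone fields
    H = np.zeros((n_obs, n_state)); H[np.arange(n_obs), np.arange(n_obs) * 10] = 1.0
    obs, R = np.full(n_obs, 55.0), 4.0 * np.eye(n_obs)
    print(enkf_update(ens, obs, H, R).mean(axis=1)[:5])   # analysis mean at obs points
```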

  3. Optimal Focusing and Scaling Law for Uniform Photo-Polymerization in a Thick Medium Using a Focused UV Laser

    Jui-Teng Lin

    2014-02-01

    Full Text Available We present a modeling study of photoinitiated polymerization in a thick polymer-absorbing medium using a focused UV laser. Transient profiles of the initiator concentration at various focusing conditions are analyzed to define the polymerization boundary. Furthermore, we demonstrate the optimal focusing conditions that yield more uniform polymerization over a larger volume than the collimated or non-optimal cases. Too much focusing, with the focal length f < f* (an optimal focal length), yields a fast process; however, it provides a smaller polymerization volume at a given time than the optimal focusing case. Finally, a scaling law is derived and shows that f* is inversely proportional to the product of the extinction coefficient and the initial initiator concentration. The scaling law provides useful guidance for the prediction of optimal conditions for photoinitiated polymerization under focused UV laser irradiation. The focusing technique also provides a novel and unique means of obtaining uniform photo-polymerization within a limited irradiation time.
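
    The stated scaling law lends itself to a one-line helper. In the sketch below the proportionality constant is an unknown geometric factor and is treated as an assumption, so only relative changes in f* are meaningful.

```python
# Sketch of the stated scaling law: f* is inversely proportional to the product
# of the extinction coefficient and the initial initiator concentration. The
# prefactor c_geom is an assumed, unspecified geometric constant.
def optimal_focal_length(extinction_coeff, initiator_conc, c_geom=1.0):
    """f* ~ c_geom / (epsilon * C0); units follow from the chosen inputs."""
    return c_geom / (extinction_coeff * initiator_conc)

# Doubling the initiator concentration halves f*, all else being equal:
print(optimal_focal_length(0.05, 10.0), optimal_focal_length(0.05, 20.0))
```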

  4. Growth Optimization of Lactobacillus plantarum T5jq301796.1, an Iranian Indigenous Probiotic in Lab Scale Fermenter

    Faranak Noori

    2016-07-01

    Full Text Available Background and Objective: Lactobacillus plantarum is one of the probiotic species used in functional food products. These bacteria or their purified bacteriocins are used as biological preservatives in the food industry. The first step in the production of an array of probiotic products is optimizing production in fermentors. This study aimed to examine factors affecting the in vitro growth optimization of Lactobacillus plantarum T5JQ301796.1 in a lab-scale fermentor. Materials and Methods: Following 24 hours of anaerobic culture of the lactobacillus at 37°C, the pre-culture was ready and was inoculated into a 5-liter fermentor at 37°C and stirred at 40 rpm. Factors affecting lactobacillus growth, including carbon and nitrogen sources and pH, were then studied. The results were interpreted using response surface methodology (RSM), and optimal conditions for the equipment were determined. Results and Conclusion: For optimal growth of Lactobacillus plantarum T5JQ301796.1 in the lab-scale fermentor, the optimal conditions were 25.96 g l⁻¹ of glucose, 1.82% of yeast extract, pH of 7.26, and stirring at 40 rpm at an optimum temperature between 37-40°C. Under these conditions, the maximum viable cell count in the batch fermentation was 1.25×10¹⁰ CFU ml⁻¹. Application of a central composite design for the growth optimization of this bacterium led to a maximum viable cell count of 1.25×10¹⁰ CFU ml⁻¹, so the mentioned features can lead to optimum industrial-scale production and use of this probiotic strain in probiotic products. Conflict of interest: The authors declare that there is no conflict of interest.

  5. [Temporal and spatial variation of the optimal sowing dates of summer maize based on both statistical and process models in Henan Province, China].

    Tan, Mei-xiu; Wang, Jing; Yu, Wei-dong; He, Di; Wang, Na; Dai, Tong; Sun, Yan; Tang, Jian-zhao; Chang, Qing

    2015-12-01

    Sowing date is one of the vital factors determining crop yield. In this study, the temporal and spatial variation of the optimal sowing date of summer maize in Henan Province, China, was analyzed using a statistical model and the APSIM-Maize model. The results showed that the average optimal sowing dates of summer maize ranged from May 30 to June 13 across Henan Province, with earlier sowing before June 8 in the southern part and later sowing from June 4 to June 13 in the northern part. The optimal sowing date in the mountain area of western Henan Province should be around May 30. The late-maturing variety Nongda 108 should be planted at least two days earlier than the middle-maturing variety Danyu 13. Under a warming climate, maize sowing should be postponed by at least 3 days if the maize harvesting date can be delayed by a week. It was proposed that sowing should be delayed by about a week in a year with less precipitation than normal and advanced by about a week in a year with more precipitation than normal. Across Henan Province, the optimal sowing dates of summer maize showed no significant trend over 1971-2010, while the potential sowing period was extended for some regions, such as the area south of Zhumadian, Yichuan, Nei-xiang and Nanyang in the middle part of Henan, Linzhou in northern Henan and Sanmenxia in western Henan, as a result of the advanced maturity of winter wheat due to increasing temperature and winter wheat cultivar change. Optimal sowing dates at 76.7% of the study stations showed no significant difference between the two methods. It was recommended that northern Henan sow maize immediately after any rainfall and replant afterward, while southern Henan should not sow maize until there is sufficient precipitation (3.9 mm and 8.3 mm for the upper south and south parts, respectively) during the sowing period; both require enough precipitation during the key water requirement period and optimal temperature during grain

  6. Assessing the economic value of co-optimized grid-scale energy storage investments in supporting high renewable portfolio standards

    Go, Roderick S.; Munoz, Francisco D.; Watson, Jean-Paul

    2016-01-01

    Highlights: • We present a MILP to co-optimize generation, transmission, and storage investments. • We find significant value in co-optimized storage via investment deferrals. • Operational savings from bulk services are small relative to investment deferrals. • Co-optimized energy storage significantly reduces prices associated with RPS. - Abstract: Worldwide, environmental regulations such as Renewable Portfolio Standards (RPSs) are being broadly adopted to promote renewable energy investments. With corresponding increases in renewable energy deployments, there is growing interest in grid-scale energy storage systems (ESS) to provide the flexibility needed to efficiently deliver renewable power to consumers. Our contribution in this paper is to introduce a unified generation, transmission, and bulk ESS expansion planning model subject to an RPS constraint, formulated as a two-stage stochastic mixed-integer linear program (MILP) optimization model, which we then use to study the impact of co-optimization and evaluate the economic interaction between investments in these three asset classes in achieving high renewable penetrations. We present numerical case studies using the 24-bus IEEE RTS-96 test system considering wind and solar as available renewable energy resources, and demonstrate that up to $180 million/yr in total cost savings can result from the co-optimization of all three assets, relative to a situation in which no ESS investment options are available. Surprisingly, we find that co-optimized bulk ESS investments provide significant economic value through investment deferrals in transmission and generation capacity, but very little savings in operational cost. Finally, we observe that planning transmission and generation infrastructure first and later optimizing ESS investments—as is common in industry—captures at most 1.7% ($3 million/yr) of the savings that result from co-optimizing all assets simultaneously.
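
    To make the co-optimization idea concrete, the sketch below sets up a deliberately tiny deterministic linear program in PuLP. It is far simpler than the two-stage stochastic MILP on RTS-96 described above, and every number in it is invented; it only shows how generation capacity, storage capacity and dispatch can be chosen within a single optimization.

```python
# Toy deterministic co-optimization sketch in PuLP (illustrative only; the
# paper's stochastic MILP on the IEEE RTS-96 system is not reproduced here).
# Over 4 representative hours, thermal capacity and storage energy capacity are
# chosen jointly so that demand is always met, minimizing investment plus fuel
# cost. All numbers are invented.
from pulp import LpMinimize, LpProblem, LpStatus, LpVariable, lpSum

demand = [70, 100, 120, 90]          # MW
wind_avail = [90, 20, 40, 100]       # MW of available wind
cap_cost_gen, cap_cost_ess, fuel = 50.0, 20.0, 30.0   # $/MW, $/MWh, $/MWh
T = range(len(demand))

m = LpProblem("co_optimize_generation_and_storage", LpMinimize)
gen_cap = LpVariable("gen_cap", lowBound=0)                    # thermal MW
ess_cap = LpVariable("ess_cap", lowBound=0)                    # storage MWh
gen  = [LpVariable(f"gen_{t}", lowBound=0) for t in T]
wind = [LpVariable(f"wind_{t}", lowBound=0, upBound=wind_avail[t]) for t in T]
chg  = [LpVariable(f"chg_{t}", lowBound=0) for t in T]
dis  = [LpVariable(f"dis_{t}", lowBound=0) for t in T]
soc  = [LpVariable(f"soc_{t}", lowBound=0) for t in T]

m += cap_cost_gen * gen_cap + cap_cost_ess * ess_cap + fuel * lpSum(gen)
for t in T:
    m += gen[t] + wind[t] + dis[t] - chg[t] == demand[t]       # power balance
    m += gen[t] <= gen_cap                                     # thermal limit
    m += soc[t] <= ess_cap                                     # energy limit
    m += soc[t] == (soc[t - 1] if t else 0) + chg[t] - dis[t]  # storage dynamics

m.solve()
print(LpStatus[m.status], "gen_cap =", gen_cap.value(), "MW, ess_cap =", ess_cap.value(), "MWh")
```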

  7. Optimal Siting and Sizing of Energy Storage System for Power Systems with Large-scale Wind Power Integration

    Zhao, Haoran; Wu, Qiuwei; Huang, Shaojun

    2015-01-01

    This paper proposes algorithms for optimal siting and sizing of an Energy Storage System (ESS) for the operation planning of power systems with large-scale wind power integration. The ESS in this study aims to mitigate the wind power fluctuations during the interval between two rolling Economic Dispatches (EDs) in order to maintain the generation-load balance. The charging and discharging of the ESS are optimized considering the operation cost of conventional generators, the capital cost of the ESS and transmission losses. The statistics from simulated system operations are then coupled to the planning process to determine the...

  8. Multiple time-scale optimization scheduling for islanded microgrids including PV, wind turbine, diesel generator and batteries

    Xiao, Zhao xia; Nan, Jiakai; Guerrero, Josep M.

    2017-01-01

    A multiple time-scale optimization scheduling scheme, including day-ahead and short-term stages, for an islanded microgrid is presented. In this paper, the microgrid under study includes photovoltaics (PV), a wind turbine (WT), a diesel generator (DG), batteries, and shiftable loads. The study considers the maximum-efficiency operation area of the diesel engine and the cost of battery charge/discharge cycle losses. The day-ahead generation scheduling takes the minimum operational cost and the maximum load satisfaction as the objective function. Short-term optimal dispatch is based on minimizing...

  9. Historical Roots of the Spatial, Temporal, and Diversity Scales of Agricultural Decision-Making in Sierra de Santa Marta, Los Tuxtlas

    Negrete-Yankelevich, Simoneta; Porter-Bolland, Luciana; Blanco-Rosas, José Luis; Barois, Isabelle

    2013-07-01

    Land degradation is a serious problem in tropical mountainous areas. Market prices, technological development, and population growth are often invoked as the prime causes. Using historical agrarian documents, literature sources, and historical population data, we (1) provide quantitative and qualitative evidence that the land degradation present at Sierra de Santa Marta (Los Tuxtlas, Mexico) has involved a historical reduction in the temporal, spatial, and diversity scales, in which individual farmers make management decisions, and has resulted in decreased maize productivity; and (2) analyze how these three scalar changes can be linked to policy, population growth, and agrarian history. We conclude that the historical reduction in the scales of land use decision-making and practices constitutes a present threat to indigenous agricultural heritage. The long-term viability of agriculture requires that initiatives consider incentives for co-responsibility with an initial focus on self-sufficiency.

  10. Intelligent Network Flow Optimization (INFLO) prototype : Seattle small-scale demonstration report.

    2015-05-01

    This report describes the performance and results of the INFLO Prototype Small-Scale Demonstration. The purpose of the Small-Scale Demonstration was to deploy the INFLO Prototype System to demonstrate its functionality and performance in an opera...

  11. A primal-dual interior point method for large-scale free material optimization

    Weldeyesus, Alemseged Gebrehiwot; Stolpe, Mathias

    2015-01-01

    Free Material Optimization (FMO) is a branch of structural optimization in which the design variable is the elastic material tensor that is allowed to vary over the design domain. The requirements are that the material tensor is symmetric positive semidefinite with bounded trace. The resulting optimization problem is a nonlinear semidefinite program with many small matrix inequalities for which a special-purpose optimization method should be developed. The objective of this article is to propose an efficient primal-dual interior point method for FMO that can robustly and accurately solve large… of iterations the interior point method requires is modest and increases only marginally with problem size. The computed optimal solutions obtain a higher precision than other available special-purpose methods for FMO. The efficiency and robustness of the method is demonstrated by numerical experiments on a set…

  12. WaveSAX device: design optimization through scale modelling and a PTO strategical control system

    Peviani, Maximo; Danelli, Andrea; Dadone, Gianluca; Dalmasso, Alberto

    2017-04-01

    WaveSAX is an innovative OWC (Oscillating Water Column) device for the generation of electricity from wave power, conceived to be installed in coastal marine structures, such as ports and harbours. The device, especially designed for the typical wave climate of the Mediterranean Sea, is characterized by two important aspects: flexibility to fit different structural configurations and replication in a large number of units. A model of the WaveSAX device at a scale of 1:5 has been built and tested in the ocean tank at Ecole Centrale de Nantes (France). The study aimed to analyse the behaviour of the device, including two Wells turbine configurations (with three and four blades), under regular and irregular wave conditions in the ocean wave tank. The model and the wave basin were equipped with a series of sensors which allowed the following parameters to be measured during the tests: the pressure at different points inside the device, the free water surface displacement inside and outside the device, and the rotational velocity and the torque at the top of the axis. The tests had the objective of optimizing the device design, especially as far as the characteristics of the turbine rotor are concerned. Although the performance of the WaveSAX has been satisfactory under regular wave conditions, the behaviour of the Wells turbines under an irregular wave climate has shown limitations in terms of maintaining the capacity to transform hydraulic energy into mechanical power. To optimize the efficiency of the turbine, an electronic system has been built on the basis of the ocean tank tests. It allows the rotational speed and the torque of the rotor connected with the turbine to be continuously monitored and commanded, and the electrical flow of a motor-generator to be controlled in real time, either absorbing energy as a generator or providing power to the turbine working as a motor. Two strategies, based on velocity control and torque control, have been investigated on the electronic test bench

  13. Enabling Global Observations of Clouds and Precipitation on Fine Spatio-Temporal Scales from CubeSat Constellations: Temporal Experiment for Storms and Tropical Systems Technology Demonstration (TEMPEST-D)

    Reising, S. C.; Todd, G.; Padmanabhan, S.; Lim, B.; Heneghan, C.; Kummerow, C.; Chandra, C. V.; Berg, W. K.; Brown, S. T.; Pallas, M.; Radhakrishnan, C.

    2017-12-01

    The Temporal Experiment for Storms and Tropical Systems (TEMPEST) mission concept consists of a constellation of 5 identical 6U-Class satellites observing storms at 5 millimeter-wave frequencies with 5-10 minute temporal sampling to observe the time evolution of clouds and their transition to precipitation. Such a small satellite mission would enable the first global measurements of clouds and precipitation on the time scale of tens of minutes and the corresponding spatial scale of a few km. TEMPEST is designed to improve the understanding of cloud processes by providing critical information on temporal signatures of precipitation and helping to constrain one of the largest sources of uncertainty in cloud models. TEMPEST millimeter-wave radiometers are able to perform remote observations of the cloud interior to observe microphysical changes as the cloud begins to precipitate or ice accumulates inside the storm. The TEMPEST technology demonstration (TEMPEST-D) mission is in progress to raise the TRL of the instrument and spacecraft systems from 6 to 9 as well as to demonstrate radiometer measurement and differential drag capabilities required to deploy a constellation of 6U-Class satellites in a single orbital plane. The TEMPEST-D millimeter-wave radiometer instrument provides observations at 89, 165, 176, 180 and 182 GHz using a single compact instrument designed for 6U-Class satellites. The direct-detection topology of the radiometer receiver substantially reduces both its power consumption and design complexity compared to heterodyne receivers. The TEMPEST-D instrument performs precise, end-to-end calibration using a cross-track scanning reflector to view an ambient blackbody calibration target and cosmic microwave background every scan period. The TEMPEST-D radiometer instrument has been fabricated and successfully tested under environmental conditions (vibration, thermal cycling and vacuum) expected in low-Earth orbit. TEMPEST-D began in Aug. 2015, with a

  14. Data assimilation in optimizing and integrating soil and water quality water model predictions at different scales

    Relevant data about subsurface water flow and solute transport at the relatively large scales that are of interest to the public are inherently laborious to collect and in most cases simply impossible to obtain. Upscaling, in which fine-scale models and data are used to predict changes at coarser scales, is the...

  15. Optimization Approach for Multi-scale Segmentation of Remotely Sensed Imagery under k-means Clustering Guidance

    WANG Huixian

    2015-05-01

    Full Text Available In order to adapt segmentation to land cover at different scales, an optimized approach for multi-scale segmentation under the guidance of k-means clustering is proposed. First, small-scale segmentation and k-means clustering are used to process the original images; then the result of the k-means clustering is used to guide the object-merging procedure, in which the Otsu threshold method is used to automatically select the impact factor of the k-means clustering; finally, segmentation results applicable to objects of different scales are obtained. The FNEA method is taken as an example and segmentation experiments are carried out using a simulated image and a real remote sensing image from the GeoEye-1 satellite; qualitative and quantitative evaluation demonstrates that the proposed method can obtain high-quality segmentation results.
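
    The two ingredients named above, k-means guidance and an Otsu-selected factor, can be sketched in a few lines. The example below runs on a synthetic image with a toy merging criterion; it is not the FNEA-based algorithm evaluated in the paper.

```python
# Sketch of the guidance idea under stated assumptions: cluster pixels with
# k-means, then apply Otsu's threshold to a per-pixel merging-cost measure to
# pick a cut-off automatically. Synthetic data stand in for GeoEye-1 imagery.
import numpy as np
from skimage.filters import threshold_otsu
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64, 4))                 # stand-in for a 4-band image
pixels = img.reshape(-1, img.shape[-1])

k = 6
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
centres = np.array([pixels[labels == c].mean(axis=0) for c in range(k)])

# Toy merging criterion: spectral distance of each pixel to its cluster centre;
# Otsu on these distances yields an automatic "safe to merge" cut-off.
dist = np.linalg.norm(pixels - centres[labels], axis=1)
print("automatic merge threshold:", round(float(threshold_otsu(dist)), 3))
```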

  16. Spatial and temporal benthic species assemblage responses with a deployed marine tidal energy device: a small scaled study.

    Broadhurst, Melanie; Orme, C David L

    2014-08-01

    The addition of man-made structures to the marine environment is known to increase the physical complexity of the seafloor, which can influence benthic species community patterns and habitat structure. However, knowledge of how deployed tidal energy device structures influence benthic communities is currently lacking. Here we examined species biodiversity, composition and habitat type surrounding a tidal energy device within the European Marine Energy Centre test site, Orkney. Commercial fishing and towed video camera techniques were used over three temporal periods, from 2009 to 2010. Our results showed increased species biodiversity and compositional differences within the device site, compared to a control site. Both sites largely comprised crustacean species, omnivore or predatory feeding regimes and marine tide-swept EUNIS habitat types, which varied over time. We conclude that the device could act as a localised artificial reef structure, but that further in-depth investigations are required. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Three dimensional optimization of small-scale axial turbine for low temperature heat source driven organic Rankine cycle

    Al Jubori, Ayad; Al-Dadah, Raya K.; Mahmoud, Saad; Bahr Ennil, A.S.; Rahbar, Kiyarash

    2017-01-01

    Highlights: • Three-dimensional optimization of an axial turbine stage is presented. • Six organic fluids suitable for low-temperature heat sources are considered. • Three-dimensional optimization has been carried out for each working fluid. • The results highlight the potential of the optimization technique. • The performance of the optimized turbine is improved under off-design conditions. - Abstract: Advances in optimization techniques can be used to enhance the performance of turbines in various applications. However, limited work has been reported on using such optimization techniques to develop small-scale turbines for organic Rankine cycles. This paper investigates the use of a multi-objective genetic algorithm to optimize the stage geometry of a small axial subsonic turbine. This optimization is integrated with organic Rankine cycle analysis using a wide range of high-density organic working fluids like R123, R134a, R141b, R152a, R245fa and isobutane, suitable for low-temperature heat sources (<100 °C) such as solar energy, to achieve the best turbine design and the highest organic Rankine cycle efficiency. The isentropic efficiency of the turbine in most of the reported organic Rankine cycle studies was assumed constant, while the current work allows the turbine isentropic efficiency to change (a dynamic value) with both operating conditions and working fluid. Three-dimensional computational fluid dynamics analysis and multi-objective genetic algorithm optimization were performed using the three-dimensional Reynolds-averaged Navier-Stokes equations with the k-omega shear stress transport turbulence model in ANSYS 17-CFX and design exploration for various working fluids. The optimization was carried out using eight design parameters for the turbine stage geometry, including the stator and rotor numbers of blades, rotor leading edge beta angle, trailing edge beta angle, stagger angle, throat width, trailing half wedge angle and shroud tip clearance. Results showed that

  18. Temporal stability of the Francis Scale of Attitude toward Christianity short-form: test-retest data over one week.

    Lewis, Christopher Alan; Cruise, Sharon Mary; McGuckin, Conor

    2005-04-01

    This study evaluated the test-retest reliability of the Francis Scale of Attitude toward Christianity short-form. Thirty-nine Northern Irish undergraduate students completed the measure on two occasions separated by one week. Stability across the two administrations was high, r = .92, and there was no significant change between Time 1 (M = 25.2, SD = 5.4) and Time 2 (M = 25.7, SD = 6.2). These data support the short-term test-retest reliability of the Francis Scale of Attitude toward Christianity short-form.
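
    The two statistics reported above (the stability coefficient and the test of mean change) correspond to a Pearson correlation and a paired t-test. The sketch below reproduces the computation on placeholder scores, since the study's raw data are not available here.

```python
# Test-retest sketch on placeholder scores (the study's raw data are not
# reproduced here): Pearson r for stability, paired t-test for mean change.
import numpy as np
from scipy import stats

time1 = np.array([24, 27, 22, 30, 25, 26, 21, 28])   # hypothetical Time 1 scores
time2 = np.array([25, 28, 21, 31, 26, 27, 22, 28])   # hypothetical Time 2 scores

r, _ = stats.pearsonr(time1, time2)                  # stability coefficient
t, p = stats.ttest_rel(time1, time2)                 # test of mean change
print(f"test-retest r = {r:.2f}, paired t = {t:.2f}, p = {p:.3f}")
```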

  19. Mathematical methods in material science and large scale optimization workshops: Final report, June 1, 1995-November 30, 1996

    Friedman, A. [Minnesota Univ., Minneapolis, MN (United States). Inst. for Mathematics and Its Applications

    1996-12-01

    The summer program in Large Scale Optimization concentrated largely on process engineering, aerospace engineering, inverse problems and optimal design, and molecular structure and protein folding. The program brought together application people, optimizers, and mathematicians with interest in learning about these topics. Three proceedings volumes are being prepared. The year in Materials Sciences deals with disordered media and percolation, phase transformations, composite materials, microstructure; topological and geometric methods as well as statistical mechanics approach to polymers (included were Monte Carlo simulation for polymers); miscellaneous other topics such as nonlinear optical material, particulate flow, and thin film. All these activities saw strong interaction among material scientists, mathematicians, physicists, and engineers. About 8 proceedings volumes are being prepared.

  20. Scaling up for the industrial production of rifamycin B; optimization of the process conditions in bench-scale fermentor

    Hewaida F. El-Sedawy

    2013-06-01

    Full Text Available Optimization of fermentation process conditions using a gene-amplified variant of Amycolatopsis mediterranei (NCH) was carried out. The use of an aeration level of 1.5 vvm increased the yield by 16.6% (from 13.81 to 16.1 g/l) upon controlling the temperature at 28 °C. Adjustment of the aeration level at 1.5 vvm for 3 days and then controlling the dissolved oxygen (DO) at 30% saturation further increased the yield to 17.8 g/l. The optimum pH was 6.5 for 3 days and then 7 thereafter, when a production yield of 16.1 g/l was recorded using an aeration rate of 1.5 vvm. Controlling the pH at a constant value (6.5 or 7) all through the fermentation process decreased the yield by 5–21%. Controlling the temperature at 30 °C for 3 days and then 28 °C thereafter slightly increased the yield by 5% when using an aeration rate of 1 vvm, while it decreased when using an aeration rate of 1.5 vvm. Integration of the most optimum conditions increased the production yield by 22% from 13.81 to 17.8 g/l.

  1. Topology-oblivious optimization of MPI broadcast algorithms on extreme-scale platforms

    Hasanov, Khalid; Quintin, Jean-Noël; Lastovetsky, Alexey

    2015-01-01

    operations for particular architectures by taking into account either their topology or platform parameters. In this work we propose a simple but general approach to optimization of the legacy MPI broadcast algorithms, which are widely used in MPICH and Open

  2. Scaling Up Optimal Heuristic Search in Dec-POMDPs via Incremental Expansion (extended abstract)

    Spaan, M.T.J.; Oliehoek, F.A.; Amato, C.

    2011-01-01

    We advance the state of the art in optimal solving of decentralized partially observable Markov decision processes (Dec-POMDPs), which provide a formal model for multiagent planning under uncertainty.

  3. Bioresorbable scaffolds for bone tissue engineering: optimal design, fabrication, mechanical testing and scale-size effects analysis.

    Coelho, Pedro G; Hollister, Scott J; Flanagan, Colleen L; Fernandes, Paulo R

    2015-03-01

    Bone scaffolds for tissue regeneration require an optimal trade-off between biological and mechanical criteria. Optimal designs may be obtained using topology optimization (homogenization approach) and prototypes produced using additive manufacturing techniques. However, the process from design to manufacture remains a research challenge and will be a requirement of FDA design controls for engineered scaffolds. This work investigates how the design-to-manufacture chain affects the reproducibility of complex optimized design characteristics in the manufactured product. The design and prototypes are analyzed taking into account the computational assumptions and the final mechanical properties determined through mechanical tests. The scaffold is an assembly of unit cells, and thus scale-size effects on the mechanical response considering finite periodicity are investigated and compared with the predictions from the homogenization method, which assumes in the limit infinitely repeated unit cells. Results show that a limited number of unit cells (3-5 repeated on a side) introduce some scale effects but the discrepancies are below 10%. Higher discrepancies are found when comparing the experimental data to numerical simulations due to differences between the manufactured and designed scaffold feature shapes and sizes as well as micro-porosities introduced by the manufacturing process. However, good regression correlations (R² > 0.85) were found between numerical and experimental values, with slopes close to 1 for 2 out of 3 designs. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  4. Marine ecosystem acoustics (MEA): Quantifying processes in the sea at the spatio-temporal scales on which they occur

    Godø, Olav Rune; Handegard, Nils Olav; Browman, Howard I.; MacAulay, Gavin J.; Kaartvedt, Stein; Giske, Jarl; Ona, Egil; Huse, Geir; Johnsen, Espen

    2014-01-01

    information by taxon at the relevant scales. The gaps between single-species assessment and ecosystem-based management, as well as between fisheries oceanography and ecology, are thereby bridged. The MEA concept combines state-of-the-art acoustic technology

  5. Temporal and Spatial Variation in Peatland Carbon Cycling and Implications for Interpreting Responses of an Ecosystem-Scale Warming Experiment

    Natalie A. Griffiths; Paul J. Hanson; Daniel M. Ricciuto; Colleen M. Iversen; Anna M. Jensen; Avni Malhotra; Karis J. McFarlane; Richard J. Norby; Khachik Sargsyan; Stephen D. Sebestyen; Xiaoying Shi; Anthony P. Walker; Eric J. Ward; Jeffrey M. Warren; David J. Weston

    2017-01-01

    We are conducting a large-scale, long-term climate change response experiment in an ombrotrophic peat bog in Minnesota to evaluate the effects of warming and elevated CO2 on ecosystem processes using empirical and modeling approaches. To better frame future assessments of peatland responses to climate change, we characterized and compared spatial...

  6. Temporal stability of the Dutch version of the Wechsler Memory Scale - Fourth Edition (WMS-IV-NL)

    Bouman, Z.; Hendriks, M.P.H.; Aldenkamp, A.P.; Kessels, R.P.C.

    2015-01-01

    Objective: The Wechsler Memory Scale - Fourth Edition (WMS-IV) is one of the most widely used memory batteries. We examined the test–retest reliability, practice effects, and standardized regression-based (SRB) change norms for the Dutch version of the WMS-IV (WMS-IV-NL) after both short and long

  7. Analysing conflicts around small-scale gold mining in the Amazon : The contribution of a multi-temporal model

    Salman, Ton; de Theije, Marjo

    Conflict is small-scale gold mining's middle name. In only a very few situations do mining operations take place without some sort of conflict accompanying the activity, and often various conflicting stakeholders struggle for their interests simultaneously. Analyses of such conflicts are typically

  8. A study of remitted and treatment-resistant depression using MMPI and including pessimism and optimism scales.

    Suzuki, Masatoshi; Takahashi, Michio; Muneoka, Katsumasa; Sato, Koichi; Hashimoto, Kenji; Shirayama, Yukihiko

    2014-01-01

    The psychological aspects of treatment-resistant and remitted depression are not well documented. We administered the Minnesota Multiphasic Personality Inventory (MMPI) to patients with treatment-resistant depression (n = 34), remitted depression (n = 25), acute depression (n = 21), and healthy controls (n = 64). Pessimism and optimism were also evaluated by MMPI. ANOVA and post-hoc tests demonstrated that patients with treatment-resistant and acute depression showed similarly high scores for the frequent scale (F), hypochondriasis, depression, conversion hysteria, psychopathic deviate, paranoia, psychasthenia and schizophrenia on the MMPI compared with normal controls. Patients with treatment-resistant depression, but not acute depression, scored high on the 'cannot say' scale. Using Student's t-test, patients with remitted depression registered higher on the depression and social introversion scales, compared with normal controls. For pessimism and optimism, patients with treatment-resistant depression showed changes similar to those of acutely depressed patients. Remitted depression patients showed lower optimism than normal controls by Student's t-test, even though these patients were deemed recovered from depression using HAM-D. The patients with remitted depression and treatment-resistant depression showed subtle alterations on the MMPI, which may explain the hidden psychological features in these cohorts.

  9. A study of remitted and treatment-resistant depression using MMPI and including pessimism and optimism scales.

    Masatoshi Suzuki

    Full Text Available The psychological aspects of treatment-resistant and remitted depression are not well documented. We administered the Minnesota Multiphasic Personality Inventory (MMPI) to patients with treatment-resistant depression (n = 34), remitted depression (n = 25), acute depression (n = 21), and healthy controls (n = 64). Pessimism and optimism were also evaluated by MMPI. ANOVA and post-hoc tests demonstrated that patients with treatment-resistant and acute depression showed similarly high scores for the frequent scale (F), hypochondriasis, depression, conversion hysteria, psychopathic deviate, paranoia, psychasthenia and schizophrenia on the MMPI compared with normal controls. Patients with treatment-resistant depression, but not acute depression, scored high on the 'cannot say' scale. Using Student's t-test, patients with remitted depression registered higher on the depression and social introversion scales, compared with normal controls. For pessimism and optimism, patients with treatment-resistant depression showed changes similar to those of acutely depressed patients. Remitted depression patients showed lower optimism than normal controls by Student's t-test, even though these patients were deemed recovered from depression using HAM-D. The patients with remitted depression and treatment-resistant depression showed subtle alterations on the MMPI, which may explain the hidden psychological features in these cohorts.

  10. A Study of Remitted and Treatment-Resistant Depression Using MMPI and Including Pessimism and Optimism Scales

    Suzuki, Masatoshi; Takahashi, Michio; Muneoka, Katsumasa; Sato, Koichi; Hashimoto, Kenji; Shirayama, Yukihiko

    2014-01-01

    Background: The psychological aspects of treatment-resistant and remitted depression are not well documented. Methods: We administered the Minnesota Multiphasic Personality Inventory (MMPI) to patients with treatment-resistant depression (n = 34), remitted depression (n = 25), acute depression (n = 21), and healthy controls (n = 64). Pessimism and optimism were also evaluated by MMPI. Results: ANOVA and post-hoc tests demonstrated that patients with treatment-resistant and acute depression showed similarly high scores for the frequent scale (F), hypochondriasis, depression, conversion hysteria, psychopathic deviate, paranoia, psychasthenia and schizophrenia on the MMPI compared with normal controls. Patients with treatment-resistant depression, but not acute depression, scored high on the 'cannot say' scale. Using Student's t-test, patients with remitted depression registered higher on the depression and social introversion scales, compared with normal controls. For pessimism and optimism, patients with treatment-resistant depression showed changes similar to those of acutely depressed patients. Remitted depression patients showed lower optimism than normal controls by Student's t-test, even though these patients were deemed recovered from depression using HAM-D. Conclusions: The patients with remitted depression and treatment-resistant depression showed subtle alterations on the MMPI, which may explain the hidden psychological features in these cohorts. PMID:25279466

  11. Quantifying small-scale spatio-temporal variability of snow stratigraphy in forests based on high-resolution snow penetrometry

    Teich, M.; Hagenmuller, P.; Bebi, P.; Jenkins, M. J.; Giunta, A. D.; Schneebeli, M.

    2017-12-01

    Snow stratigraphy, the characteristic layering within a seasonal snowpack, has important implications for snow remote sensing, hydrology and avalanches. Forests modify snowpack properties through interception, wind speed reduction, and changes to the energy balance. The lack of snowpack observations in forests limits our ability to understand the evolution of snow stratigraphy and its spatio-temporal variability as a function of forest structure, and to observe snowpack response to changes in forest cover. We examined the snowpack under canopies of a spruce forest in the central Rocky Mountains, USA, using the SnowMicroPen (SMP), a high-resolution digital penetrometer. Penetration force measurements were recorded weekly along 10 m transects every 0.3 m in winter 2015 and bi-weekly along 20 m transects every 0.5 m in 2016, in three study plots beneath canopies of undisturbed, bark beetle-disturbed and harvested forest stands, and in an open meadow. To disentangle layer hardness and depth variabilities, and to quantitatively compare the different SMP profiles, we applied a matching algorithm to our dataset, which combines several profiles by automatically adjusting their layer thicknesses. We linked spatial and temporal variabilities of penetration force and depth, and thus snow stratigraphy, to forest and meteorological conditions. Throughout the season, snow stratigraphy was more heterogeneous beneath both undisturbed and bark beetle-disturbed forests. In contrast, and despite remaining small-diameter trees and woody debris, snow stratigraphy was rather homogeneous at the harvested plot. As expected, layering at the non-forested plot varied only slightly over the small spatial extent sampled. At the open and harvested plots, persistent crusts and ice lenses were clearly present in the snowpack, while such hard layers barely occurred beneath undisturbed and disturbed canopies. Due to settling, hardness significantly increased with depth at
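
    The matching algorithm itself is not detailed in the abstract. As a generic illustration of comparing two penetration-force profiles while allowing layer depths to stretch, the sketch below uses a simple dynamic-programming (DTW-style) alignment on synthetic profiles; this is not the authors' algorithm, and the profiles are invented.

```python
import numpy as np

# Generic dynamic-programming alignment of two penetration-force profiles,
# shown only to illustrate comparing profiles after adjusting layer
# thicknesses (depth stretching); NOT the authors' matching algorithm.
def dtw_distance(p, q):
    n, m = len(p), len(q)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = abs(p[i - 1] - q[j - 1])          # hardness mismatch
            D[i, j] = c + min(D[i - 1, j],        # stretch profile q
                              D[i, j - 1],        # stretch profile p
                              D[i - 1, j - 1])    # keep depths aligned
    return D[n, m]

# Synthetic profiles: the same hard crust appears at different depths.
depth = np.linspace(0, 1, 200)
profile_a = 1 + 4 * (depth > 0.6) + 0.2 * np.random.default_rng(0).random(200)
profile_b = 1 + 4 * (depth > 0.7) + 0.2 * np.random.default_rng(1).random(200)
print(f"alignment cost: {dtw_distance(profile_a, profile_b):.1f}")
```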

  12. Scaling Factor Estimation Using an Optimized Mass Change Strategy, Part 1: Theory

    Aenlle, Manuel López; Fernández, Pelayo Fernández; Brincker, Rune

    2007-01-01

    In natural input modal analysis, only un-scaled mode shapes can be obtained. The mass change method is, in many cases, the simplest way to estimate the scaling factors, which involves repeated modal testing after changing the mass in different points of the structure where the mode shapes are known. The scaling factors are determined using the natural frequencies and mode shapes of both the modified and the unmodified structure. However, the uncertainty on the scaling factor estimation depends on the modal analysis and the mass change strategy (number, magnitude and location of the masses) used to modify...
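
    For orientation only, a commonly used first-order expression for the scaling factor relates the natural frequency before (ω1) and after (ω2) a mass change ΔM to the unscaled mode shape φ; this is background to the snippet above, not necessarily the exact formulation derived in the paper.

```latex
% Background: first-order mass-change estimate of the scaling factor \alpha
% for a mode with unscaled mode shape \phi, natural frequency \omega_1
% before and \omega_2 after adding the mass change \Delta M
% (the paper may use a refined variant of this expression).
\alpha = \sqrt{\dfrac{\omega_{1}^{2} - \omega_{2}^{2}}
                     {\omega_{2}^{2}\,\phi^{\mathsf{T}}\,\Delta M\,\phi}}
```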

  13. Longitudinal positron emission tomography imaging of glial cell activation in a mouse model of mesial temporal lobe epilepsy: Toward identification of optimal treatment windows.

    Nguyen, Duc-Loc; Wimberley, Catriona; Truillet, Charles; Jego, Benoit; Caillé, Fabien; Pottier, Géraldine; Boisgard, Raphaël; Buvat, Irène; Bouilleret, Viviane

    2018-06-01

    Mesiotemporal lobe epilepsy is the most common type of drug-resistant partial epilepsy, with a specific history that often begins with status epilepticus due to various neurological insults followed by a silent period. During this period, before the first seizure occurs, a specific lesion develops, described as unilateral hippocampal sclerosis (HS). It is still challenging to determine which drugs, administered at which time point, will be most effective during the formation of this epileptic process. Neuroinflammation plays an important role in the pathophysiological mechanisms of epilepsy, and therefore brain inflammation biomarkers such as the translocator protein 18 kDa (TSPO) can be potent epilepsy biomarkers. TSPO is associated with reactive astrocytes and microglia. A unilateral intrahippocampal kainate injection mouse model can reproduce the defining features of human temporal lobe epilepsy, with unilateral HS and the pattern of chronic pharmacoresistant temporal seizures. We hypothesized that longitudinal imaging using TSPO positron emission tomography (PET) with 18F-DPA-714 could identify optimal treatment windows in a mouse model during the formation of HS. The model was induced in the right dorsal hippocampus of male C57/Bl6 mice. Micro-PET/computed tomographic scanning was performed before model induction and during the development of HS at 7 days, 14 days, 1 month, and 6 months. In vitro autoradiography and immunohistofluorescence were performed on additional mice at each time point. TSPO PET uptake peaked at 7 days and was mostly related to microglial activation, whereas after 14 days reactive astrocytes were shown to be the main cells expressing TSPO, reflected by continued increased PET uptake. TSPO-targeted PET is a highly potent longitudinal biomarker of epilepsy and could be of interest for determining therapeutic windows in epilepsy and for monitoring response to treatment.

  14. A geospatial database model for the management of remote sensing datasets at multiple spectral, spatial, and temporal scales

    Ifimov, Gabriela; Pigeau, Grace; Arroyo-Mora, J. Pablo; Soffer, Raymond; Leblanc, George

    2017-10-01

    In this study, the development and implementation of a geospatial database model for the management of multiscale datasets encompassing airborne imagery and associated metadata is presented. To develop the multi-source geospatial database, we used a Relational Database Management System (RDBMS) on a Structured Query Language (SQL) server, which was then integrated into ArcGIS and implemented as a geodatabase. The acquired datasets were compiled, standardized, and integrated into the RDBMS, where logical associations between different types of information (e.g. location, date, and instrument) were established. Airborne data at different processing levels (digital numbers through geocorrected reflectance) were implemented in the geospatial database, where the datasets are linked spatially and temporally. An example dataset is presented, consisting of airborne hyperspectral imagery collected for inter- and intra-annual vegetation characterization and detection of potential hydrocarbon seepage events over pipeline areas. Our work provides a model for the management of airborne imagery, which is a challenging aspect of data management in remote sensing, especially when large volumes of data are collected.
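
    As an illustration of the kind of relational linkage the abstract describes (imagery products keyed to acquisition location, date, and instrument), the sketch below sets up a toy schema in SQLite. All table, column, and site names are hypothetical; the study itself used an SQL Server RDBMS integrated with ArcGIS, not SQLite.

```python
import sqlite3

# Minimal sketch of a relational layout linking imagery to acquisition
# metadata by location, date and instrument (hypothetical names).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE instrument (
    instrument_id INTEGER PRIMARY KEY,
    name          TEXT NOT NULL            -- e.g. a hyperspectral sensor
);
CREATE TABLE acquisition (
    acquisition_id INTEGER PRIMARY KEY,
    instrument_id  INTEGER REFERENCES instrument(instrument_id),
    acquired_on    TEXT NOT NULL,          -- ISO date of the flight
    site_name      TEXT NOT NULL,          -- survey location
    lat            REAL, lon REAL          -- footprint centroid
);
CREATE TABLE raster_product (
    product_id     INTEGER PRIMARY KEY,
    acquisition_id INTEGER REFERENCES acquisition(acquisition_id),
    level          TEXT NOT NULL,          -- 'DN', 'radiance', 'reflectance'
    file_path      TEXT NOT NULL
);
""")

# Query pattern: all geocorrected reflectance products for one site,
# ordered in time, to support inter- and intra-annual comparisons.
rows = con.execute("""
SELECT a.acquired_on, i.name, r.file_path
FROM raster_product r
JOIN acquisition a ON a.acquisition_id = r.acquisition_id
JOIN instrument  i ON i.instrument_id  = a.instrument_id
WHERE r.level = 'reflectance' AND a.site_name = ?
ORDER BY a.acquired_on
""", ("pipeline_corridor_A",)).fetchall()
print(rows)
```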

  15. Breaking Computational Barriers: Real-time Analysis and Optimization with Large-scale Nonlinear Models via Model Reduction

    Carlberg, Kevin Thomas [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Drohmann, Martin [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Tuminaro, Raymond S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Computational Mathematics; Boggs, Paul T. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Optimization and Uncertainty Estimation

    2014-10-01

    Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear ROMs to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties, such as energy conservation and symplectic time-evolution maps, are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity, defined as the number of Newton-like iterations performed over the course of the simulation, by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order
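
    The GNAT and structure-preserving techniques in the report are involved; purely as orientation, the projection idea underlying such reduced-order models can be sketched with a plain POD/Galerkin reduction of a linear system integrated with backward Euler. The system, its dimensions, and the basis size below are invented for illustration and do not represent the report's methods or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Full-order linear model x' = A x, integrated with backward Euler.
n = 200                                      # full state dimension (made up)
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
x0 = rng.standard_normal(n)
dt, steps = 0.01, 300

def backward_euler(A, x0, dt, steps):
    I = np.eye(A.shape[0])
    X, x = [x0], x0.copy()
    for _ in range(steps):
        x = np.linalg.solve(I - dt * A, x)   # implicit step
        X.append(x)
    return np.array(X).T                     # snapshots as columns

# 1) Collect full-order snapshots (the "training" data).
snapshots = backward_euler(A, x0, dt, steps)

# 2) POD basis from the dominant left singular vectors.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
k = 10                                       # reduced dimension (made up)
V = U[:, :k]

# 3) Galerkin projection: reduced operator and initial condition.
A_r = V.T @ A @ V
x0_r = V.T @ x0

# 4) Integrate the ROM and lift back to the full space.
X_rom = V @ backward_euler(A_r, x0_r, dt, steps)
err = np.linalg.norm(X_rom - snapshots) / np.linalg.norm(snapshots)
print(f"relative reconstruction error: {err:.2e}")
```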

  16. The Influence of Large-scale Bank Roughness and Floodplain Composition on Spatial and Temporal Variations in Bank Erosion

    Hackney, C. R.; Darby, S. E.; Leyland, J.; Aalto, R. E.; Best, J.; Parsons, D. R.; Nicholas, A. P.

    2016-12-01

    Knowledge of bank erosion processes and rates along the world's largest rivers remains incomplete, primarily due to the difficulties of obtaining data pertaining to the key driving processes (i.e., during the floods that drive most bank retreat). Recently, larger scale bank roughness elements (slump blocks and embayments) have been shown to impact upon rates and locations of bank erosion. However, a complete understanding of the way such features affect rates of bank erosion is currently hindered by the lack of detailed concurrent observations of slump block geometry, embayment geometry and flow at formative discharges in natural environments. Here, we report on high spatial resolution topographic (Terrestrial Laser Scanner and Multibeam Echo Sounder) and flow (Acoustic Doppler Current Profiler) surveys undertaken on the Mekong River, Cambodia, from which we extract the geometric properties of roughness elements across a range of scales. We combine these data with sub-bottom profile data revealing the composition of the surrounding floodplain to link, for the first time, scales of bank roughness to bank material composition. Through the categorisation of a series of cut river banks by roughness geometry, we show how rates and locations of bank erosion are dependent on that roughness and associated bank material changes. We test how observed patterns of bank erosion conform to previously detailed models of embayment development, and provide new insight into processes affecting the retreat of large river banks.

  17. SU-D-BRB-06: Treating Glioblastoma Multiforme (GBM) as a Chronic Disease: Implication of Temporal-Spatial Dose Fractionation Optimization Including Cancer Stem Cell Dynamics

    Yu, V; Nguyen, D; Pajonk, F; Kaprealian, T; Kupelian, P; Steinberg, M; Low, D; Sheng, K [Department of Radiation Oncology, UCLA, Los Angeles, CA (United States)

    2015-06-15

    Purpose: To explore the feasibility of improving GBM treatment outcome with temporal-spatial dose optimization of an ordinary differential equation (ODE) model that describes the differentiation and distinct radiosensitivity of cancer stem cells (CSC) and differentiated cancer cells (DCC). Methods: The ODE was formulated into a non-convex optimization problem with the objective of minimizing the remaining total cancer cells 500 days from the onset of radiotherapy, starting from a total cancer cell number of 3.5×10^7, while maintaining the normal tissue biological effective dose (BED) of 100 Gy resulting from the standard prescription of 2 Gy x 30. Assuming spatially separated CSC and DCC, optimization was also performed to explore the potential benefit of dose-painting the two compartments. Dose escalation to a sub-cell-population in the GTV was also examined, assuming that a 2 cm margin around the GTV allows sufficient dose drop-off to 100 Gy BED. The recurrence time was defined as the time at which the total cancer cell number regrows to 10^9 cells. Results: The recurrence times with variable fractional doses administered once per week, bi-weekly, and monthly for one year were found to be 615, 593 and 570 days, superior to the standard-prescription recurrence time of 418 days. The optimal dose-fraction size progression, for both uniform delivery and dose-painting to the tumor, is low and relatively constant at the beginning and gradually increases to more aggressive fractions at the end of the treatment course. Dose escalation to a BED of 200 Gy to the whole tumor, together with protracted weekly treatment, further delayed recurrence to 733 days. Dose-painting of 200 and 500 Gy BED to CSC on a year-long weekly schedule further extended recurrence to 736 and 1076 days, respectively. Conclusion: GBM treatment outcome can possibly be improved with a chronic treatment approach. Further dose escalation to the entire tumor or CSC-targeted killing is needed to achieve total tumor control. This work
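
    As a purely illustrative sketch of the type of model being optimized, the code below simulates a generic two-compartment stem/differentiated-cell system with linear-quadratic cell kill at each fraction and compares a conventional schedule against a protracted weekly one. All growth rates, radiosensitivities, the CSC/DCC split of the initial 3.5×10^7 cells, and both schedules are invented placeholders; the sketch does not reproduce the abstract's optimization or its recurrence times.

```python
import numpy as np

# Generic two-compartment model: cancer stem cells (S) and differentiated
# cancer cells (D). All parameters below are invented placeholders.
alpha_s, beta_s = 0.05, 0.005   # LQ radiosensitivity of CSC (more resistant)
alpha_d, beta_d = 0.30, 0.030   # LQ radiosensitivity of DCC
r_s, r_d = 0.02, 0.05           # per-day net growth rates
f_diff = 0.30                   # fraction of CSC progeny that differentiate

def grow(S, D, days):
    """Simple daily Euler update: CSC grow and feed the DCC compartment."""
    for _ in range(days):
        dS = r_s * S
        S += (1.0 - f_diff) * dS
        D += f_diff * dS + r_d * D
    return S, D

def irradiate(S, D, dose):
    """Linear-quadratic surviving fractions for one fraction of `dose` Gy."""
    S *= np.exp(-(alpha_s * dose + beta_s * dose**2))
    D *= np.exp(-(alpha_d * dose + beta_d * dose**2))
    return S, D

def simulate(schedule, S0=3.5e6, D0=3.15e7):
    """`schedule` is a list of (gap_days_before_fraction, dose_Gy)."""
    S, D = S0, D0
    for gap, dose in schedule:
        S, D = grow(S, D, gap)
        S, D = irradiate(S, D, dose)
    return S + D

# Toy comparison: daily 2 Gy x 30 versus a protracted weekly course.
conventional = [(1, 2.0)] * 30
weekly = [(7, 5.0)] * 12
print(f"residual cells, conventional: {simulate(conventional):.3e}")
print(f"residual cells, weekly:       {simulate(weekly):.3e}")
```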

  18. Investigating the temporal and spatial scale variations of luminescence sensitivity of loess in the Chinese Loess Plateau since the last interglacial

    Lv, T.; Sun, J.; Gong, Z.

    2017-12-01

    The provenance of the eolian deposits on the Loess Plateau has long been one of the most important issues. Although the luminescence sensitivity of quartz grains from desert sands has been used to trace provenance, and the vertical variation of OSL sensitivity of loess in the central Chinese Loess Plateau (CLP) has been studied, the temporal and spatial variations of the luminescence sensitivity of loess remain uncertain. This paper examines the eolian deposits of the Shimao (SM) section on the northern margin of the Chinese Loess Plateau and of the Luochuan (LC) section in the central Chinese Loess Plateau. Firstly, the temporal variations of luminescence sensitivity of different quartz grain fractions (38-64 and 64-90 μm) from sand/loess of the SM section were studied. Our results indicate that both fractions show a similar trend in the strength of luminescence sensitivity, characterized by lower values in sand/loess beds and higher values in soils. The OSL sensitivity of quartz grains in the sand-loess-soil sequence shows a trend very similar to the magnetic susceptibility fluctuations. Secondly, the spatial variations of luminescence sensitivity of loess in the Chinese Loess Plateau since the last interglacial were studied by comparing the values of the SM and LC sections. The OSL sensitivity of quartz grains from the two sections since the last interglacial changes synchronously. However, the OSL sensitivity values of quartz grains from the same loess/paleosol beds of the LC section are higher than those of the SM section. We suggest that the temporal variation of OSL sensitivity at SM is mainly influenced by the retreat and advance of deserts. The spatial variation of OSL sensitivity is mainly due to different sedimentary histories, comprising repeated erosion, transport and deposition cycles controlled by cyclic climatic change. The higher OSL sensitivity values of quartz grains in the LC section relate to longer transport distance and

  19. Scale up, optimization and stability analysis of Curcumin C3 complex-loaded nanoparticles for cancer therapy

    2012-01-01

    Background: Nanoparticle-based delivery of anticancer drugs has been widely investigated. However, a very important process for research and development in any pharmaceutical industry is scaling up nanoparticle formulation techniques to produce large batches for preclinical and clinical trials. This process is not only critical but also difficult, as it requires several formulation parameters to be modulated within the same process. Methods: In the present study, we formulated curcumin-loaded poly(lactic-co-glycolic acid) nanoparticles (PLGA-CURC). This improved the bioavailability of curcumin, a potent natural anticancer drug, making it suitable for cancer therapy. Post formulation, we optimized our process by Response Surface Methodology (RSM) using a Central Composite Design (CCD) and scaled up the formulation process in four stages, with the final scale-up yielding 5 g of curcumin-loaded nanoparticles within the laboratory setup. The nanoparticles formed after the scale-up process were characterized for particle size, drug loading and encapsulation efficiency, surface morphology, in vitro release kinetics and pharmacokinetics. Stability analysis and gamma sterilization were also carried out. Results: The process scale-up was successfully carried to the 5 g level. The mean nanoparticle size of the scaled-up batch was 158.5 ± 9.8 nm and the drug loading was 10.32 ± 1.4%. The in vitro release study showed a slow, sustained release of about 75% of the drug over a period of 10 days. The pharmacokinetic profile of PLGA-CURC in rats following i.v. administration followed a two-compartment model, with the area under the curve (AUC0-∞) being 6.139 mg/L h. Gamma sterilization showed no significant change in the particle size or drug loading of the nanoparticles. Stability analysis revealed long-term physicochemical stability of the PLGA-CURC formulation. Conclusions: A successful effort towards
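
    To illustrate the response-surface step mentioned above (RSM over a central composite design), the sketch below fits a full quadratic model to a hypothetical two-factor CCD and locates its stationary point. The factors, coded levels, and responses are made up and are not the study's formulation data.

```python
import numpy as np

# Two-factor central composite design in coded units (made-up responses).
a = np.sqrt(2.0)
design = np.array([
    [-1, -1], [1, -1], [-1, 1], [1, 1],   # factorial corners
    [-a, 0], [a, 0], [0, -a], [0, a],     # axial points
    [0, 0], [0, 0], [0, 0],               # center replicates
])
response = np.array(                      # e.g. encapsulation efficiency (%)
    [62, 70, 65, 74, 60, 72, 64, 69, 75, 76, 74], dtype=float)

# Full quadratic model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
x1, x2 = design[:, 0], design[:, 1]
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, response, rcond=None)

# Stationary point of the fitted surface (candidate optimum, coded units).
b = coef[1:3]
B = np.array([[coef[4], coef[3] / 2.0],
              [coef[3] / 2.0, coef[5]]])
x_opt = -0.5 * np.linalg.solve(B, b)
print("fitted coefficients:", np.round(coef, 3))
print("stationary point (coded units):", np.round(x_opt, 3))
```

    In practice the stationary point is checked against the eigenvalues of the quadratic term (a maximum only if they are negative) and then decoded back to the natural factor ranges.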

  20. Optimal power and performance trade-offs for dynamic voltage scaling in power management based wireless sensor node

    Anuradha Pughat

    2016-09-01

    Full Text Available Dynamic voltage scaling contributes a significant amount of power saving, especially in energy-constrained wireless sensor networks (WSNs). Existing dynamic voltage scaling techniques slow the system down and ignore the event miss rate, which degrades system performance when the input workload is non-stationary. Transition overhead between voltage levels and the use of discrete voltage levels are further limitations of available dynamic voltage scaling (DVS) techniques at the sensor node (SN). This paper proposes a workload-dependent, DVS-based MSP430 controller model for the SN. An online gradient estimation technique is used to optimize power and performance trade-offs. The analytical results are validated against simulation results obtained with the simulation tool "SimEvents" and compared with the available AT90S8535 controller. Based on the stochastic workload, the controller's input voltage, operational frequency, utilization, and average wait time of events are obtained.
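
    As a toy illustration of an online gradient-estimation controller for DVS, the sketch below adjusts a voltage/frequency scale to trade a crude quadratic energy proxy against the event miss rate under a fluctuating load. The cost weights, workload model, and update rule are invented and do not represent the paper's MSP430/SimEvents model.

```python
import random

# Toy online gradient-style DVS controller: pick a clock scale s in (0, 1]
# that trades dynamic energy (~ s^2, a crude CMOS proxy) against the event
# miss rate. All constants and the workload model are invented.
random.seed(1)

def miss_rate(s, load):
    """Fraction of events missed when service capacity s falls below load."""
    return max(0.0, (load - s) / load)

def cost(s, load, w_energy=1.0, w_miss=4.0):
    return w_energy * s**2 + w_miss * miss_rate(s, load)

s, eps, lr = 0.5, 0.02, 0.05
for step in range(200):
    load = min(1.0, max(0.05, random.gauss(0.6, 0.15)))   # fluctuating load
    # Finite-difference estimate of the cost gradient w.r.t. the scale s.
    g = (cost(s + eps, load) - cost(s - eps, load)) / (2 * eps)
    s = min(1.0, max(0.05, s - lr * g))                    # gradient step
print(f"settled voltage/frequency scale: {s:.2f}")
```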

  1. Comparing organic farming and land sparing: optimizing yield and butterfly populations at a landscape scale.

    Hodgson, Jenny A; Kunin, William E; Thomas, Chris D; Benton, Tim G; Gabriel, Doreen

    2010-11-01

    Organic farming aims to be wildlife-friendly, but it may not benefit wildlife overall if much greater areas are needed to produce a given quantity of food. We measured the density and species richness of butterflies on organic farms, conventional farms and grassland nature reserves in 16 landscapes. Organic farms supported a higher density of butterflies than conventional farms, but a lower density than reserves. Using our data, we predict the optimal land-use strategy to maintain yield whilst maximizing butterfly abundance under different scenarios. Farming conventionally and sparing land as nature reserves is better for butterflies when the organic yield per hectare falls below 87% of conventional yield. However, if the spared land is simply extra field margins, organic farming is optimal whenever organic yields are over 35% of conventional yields. The optimal balance of land sparing and wildlife-friendly farming to maintain production and biodiversity will differ between landscapes.
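
    The break-even logic behind such comparisons can be shown with a toy calculation: for a fixed food-production target, compare butterfly numbers when the whole landscape is farmed organically against conventional farming on just enough land plus spared reserves. The densities below are placeholders, not the study's measurements, so the resulting break-even point differs from the 87% reported above.

```python
# Toy comparison of two ways to produce the same total yield on a fixed
# landscape: organic farming everywhere vs. conventional farming plus
# spared reserves. Butterfly densities are placeholders, not study data.
d_conv, d_org, d_res = 1.0, 2.0, 10.0   # relative butterfly densities
area = 100.0                            # landscape area (arbitrary units)

def butterflies_organic():
    # All land farmed organically; reserves receive no spared land.
    return area * d_org

def butterflies_sparing(yield_ratio):
    # Conventional farming on just enough land to match the organic
    # scenario's production; the remainder is spared as reserves.
    farmed = area * yield_ratio
    return farmed * d_conv + (area - farmed) * d_res

for yr in (0.5, 0.8, 0.95):
    org, spare = butterflies_organic(), butterflies_sparing(yr)
    better = "sparing" if spare > org else "organic"
    print(f"organic yield = {yr:.0%} of conventional -> {better} wins "
          f"({org:.0f} vs {spare:.0f})")
```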

  2. High-Level Topology-Oblivious Optimization of MPI Broadcast Algorithms on Extreme-Scale Platforms

    Hasanov, Khalid

    2014-01-01

    There has been significant research on collective communication operations, in particular MPI broadcast, on distributed-memory platforms. Most of this work optimizes the collective operations for particular architectures by taking into account either their topology or platform parameters. In this work we propose a simple yet general approach to optimizing legacy MPI broadcast algorithms, which are widely used in MPICH and Open MPI. Theoretical analysis and experimental results on IBM BlueGene/P and a cluster of the Grid’5000 platform are presented.
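
    In the spirit of building a hierarchical scheme on top of a library's legacy broadcast (without claiming to reproduce the authors' algorithm), the sketch below splits the processes into groups, broadcasts among group leaders first, and then within each group, using mpi4py. The group size G is an arbitrary placeholder rather than an optimized parameter.

```python
from mpi4py import MPI

# Two-level broadcast built on top of the library's legacy bcast: ranks are
# split into groups of size G, the message is broadcast among group leaders
# first, then within each group. G is a placeholder, not a tuned value.
# Run with e.g.: mpiexec -n 16 python hier_bcast.py
G = 4

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

group_id = rank // G                      # which group this rank belongs to
is_leader = (rank % G == 0)               # lowest rank in each group leads

# Communicator containing only the group leaders (others get COMM_NULL).
leaders = comm.Split(0 if is_leader else MPI.UNDEFINED, rank)
# Communicator for the ranks within each group.
local = comm.Split(group_id, rank)

data = list(range(8)) if rank == 0 else None

if is_leader:
    data = leaders.bcast(data, root=0)    # phase 1: leader-to-leader
data = local.bcast(data, root=0)          # phase 2: leader-to-group

print(f"rank {rank}: {data}")
```

    On a real machine the group size (and possibly deeper hierarchies) would be tuned to the platform, which is the kind of parameter space such optimization studies explore.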

  3. A realistic large-scale model of the cerebellum granular layer predicts circuit spatio-temporal filtering properties

    Sergio Solinas

    2010-05-01

    Full Text Available The way the cerebellar granular layer transforms incoming mossy f