WorldWideScience

Sample records for scale space approach

  1. Object detection with DoG scale-space: a multiple kernel learning approach.

    Science.gov (United States)

    Nilufar, Sharmin; Ray, Nilanjan; Zhang, Hong

    2012-08-01

    Difference of Gaussians (DoG) scale-space for an image is a significant way to generate features for object detection and classification. While applying DoG scale-space features for object detection/classification, we face two inevitable issues: dealing with high-dimensional data and selecting/weighting proper scales. The scale selection process is mostly ad hoc to date. In this paper, we propose a multiple kernel learning (MKL) method for both DoG scale selection/weighting and dealing with high-dimensional scale-space data. We design a novel shift-invariant kernel function for DoG scale-space. To select only the useful scales in the DoG scale-space, a novel framework of MKL is also proposed. We utilize a 1-norm support vector machine (SVM) in the MKL optimization problem for sparse weighting of scales from the DoG scale-space. The optimized data-dependent kernel accommodates only a few scales that are most discriminatory according to the large margin principle. With a 2-norm SVM, this learned kernel is applied to a challenging detection problem in oil sand mining: detecting large lumps in oil sand videos. We tested our method on several challenging oil sand data sets. Our method yields encouraging results on these difficult-to-process images and compares favorably against other popular multiple kernel methods.
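    A minimal sketch of the DoG scale-space construction the abstract builds on (not the authors' MKL pipeline; the sigma values here are illustrative assumptions):

```python
# Minimal DoG scale-space sketch: blur the image at several Gaussian
# scales and stack the differences of adjacent blurs as feature layers.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_scale_space(image, sigmas=(1, 2, 4, 8, 16)):
    """Return one Difference-of-Gaussians layer per adjacent sigma pair."""
    blurred = [gaussian_filter(image.astype(float), s) for s in sigmas]
    # Each DoG layer is the difference of two adjacent Gaussian blurs.
    return np.stack([blurred[i + 1] - blurred[i] for i in range(len(sigmas) - 1)])

image = np.random.rand(64, 64)
stack = dog_scale_space(image)
print(stack.shape)  # (4, 64, 64): one layer per sigma pair
```

    Scale selection then amounts to weighting these layers; the paper's contribution is learning those weights sparsely with MKL rather than choosing them by hand.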

  2. Scale Space Hierarchy

    NARCIS (Netherlands)

    Kuijper, Arjan; Florack, L.M.J.; Viergever, M.A.

    2001-01-01

    We investigate the deep structure of a scale space image. We concentrate on scale space critical points - points with vanishing gradient with respect to both spatial and scale direction. We show that these points are always saddle points. They turn out to be extremely useful, since the
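    The critical points described above can be illustrated numerically. A toy sketch, assuming a discrete Gaussian scale-space and a crude finite-difference test for a vanishing gradient in both the spatial and the scale direction:

```python
# Toy detection of candidate scale-space critical points: build a
# Gaussian scale-space stack and flag voxels where the joint gradient
# (scale, y, x) is nearly zero. The threshold is an illustrative choice.
import numpy as np
from scipy.ndimage import gaussian_filter

image = np.random.rand(32, 32)
sigmas = np.linspace(1.0, 6.0, 12)
stack = np.stack([gaussian_filter(image, s) for s in sigmas])

ds, dy, dx = np.gradient(stack)          # derivatives along scale, y, x
mag = np.sqrt(ds**2 + dy**2 + dx**2)     # joint gradient magnitude
candidates = np.argwhere(mag < 0.2 * mag.mean())  # near-vanishing gradient
print(candidates.shape[1])  # each row is a (scale, y, x) index triple
```

    A full treatment would verify the saddle-point property via the Hessian at each candidate; this sketch only locates where the joint gradient is small.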

  3. Scale relativity and fractal space-time a new approach to unifying relativity and quantum mechanics

    CERN Document Server

    Nottale, Laurent

    2011-01-01

    This book provides a comprehensive survey of the development of the theory of scale relativity and fractal space-time. It suggests an original solution to the disunified nature of the classical-quantum transition in physical systems, enabling quantum mechanics to be founded on the principle of relativity, provided this principle is extended to scale transformations of the reference system. In the framework of such a newly generalized relativity theory (including position, orientation, motion and now scale transformations), the fundamental laws of physics may be given a general form that unifies

  4. A space and time scale-dependent nonlinear geostatistical approach for downscaling daily precipitation and temperature

    KAUST Repository

    Jha, Sanjeev Kumar

    2015-07-21

    A geostatistical framework is proposed to downscale daily precipitation and temperature. The methodology is based on multiple-point geostatistics (MPS), where a multivariate training image is used to represent the spatial relationship between daily precipitation and daily temperature over several years. Here, the training image consists of daily rainfall and temperature outputs from the Weather Research and Forecasting (WRF) model at 50 km and 10 km resolution for a twenty-year period ranging from 1985 to 2004. The data are used to predict downscaled climate variables for the year 2005. The result, for each downscaled pixel, is a daily time series of precipitation and temperature that is spatially dependent. Comparison of predicted precipitation and temperature against a reference dataset indicates that both the seasonal average climate response and the temporal variability are well reproduced. The explicit inclusion of time dependence is explored by considering the climate properties of the previous day as an additional variable. Comparison of simulations with and without inclusion of time dependence shows that the temporal dependence only slightly improves the daily prediction, because the temporal variability is already well represented in the conditioning data. Overall, the study shows that the multiple-point geostatistics approach is an efficient tool for statistical downscaling to obtain local-scale estimates of precipitation and temperature from General Circulation Models. This article is protected by copyright. All rights reserved.
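    A heavily simplified sketch of the training-image idea (not the full MPS algorithm, which matches multi-point patterns across variables; here we only match single coarse-cell values, and all data are synthetic):

```python
# Toy "training image" downscaling: for each coarse target cell, find
# the most similar coarse cell in a coarse/fine training pair and copy
# its fine-scale block into the output.
import numpy as np

rng = np.random.default_rng(0)
factor = 5                                  # e.g. 50 km -> 10 km
fine_train = rng.random((40, 40))           # fine-scale training image
coarse_train = fine_train.reshape(8, factor, 8, factor).mean(axis=(1, 3))

coarse_target = rng.random((4, 4))          # coarse field to downscale
fine_out = np.zeros((4 * factor, 4 * factor))
for i in range(4):
    for j in range(4):
        # nearest training pattern by coarse-cell value
        k = np.argmin(np.abs(coarse_train - coarse_target[i, j]))
        ti, tj = np.unravel_index(k, coarse_train.shape)
        fine_out[i*factor:(i+1)*factor, j*factor:(j+1)*factor] = \
            fine_train[ti*factor:(ti+1)*factor, tj*factor:(tj+1)*factor]
print(fine_out.shape)  # (20, 20)
```

    Real MPS implementations match neighborhoods of previously simulated values, not isolated cells, which is what preserves spatial dependence in the downscaled field.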

  5. MERIT: A New Approach for A Large Scale Space Infrastructure Based on Resources from Mars

    Science.gov (United States)

    Powell, J.; Maise, G.; Paniagua, J.

    2005-02-01

    A new concept, termed MERIT (Mars-Earth Rapid Interplanetary Transport), that would provide very large amounts of supplies in high Earth orbit is described. These supplies would come from Mars rather than from Earth, and would be produced by compact, lightweight robotic MERIT factory units landed on the North Polar Cap of Mars. CO2 and N2, plus water from the surface ice of the Polar Cap, would provide the raw materials for these supplies, using electric and thermal energy from the compact lightweight nuclear power reactor on the MERIT unit. After accumulating a full load of processed supplies, the MERIT unit would lift off from Mars to deliver the supplies to orbiting tankers, and then return to the Polar Cap to produce a new load. Each MERIT unit would have a compact MITEE nuclear thermal propulsion engine for lift-off and landing on Mars, with a much smaller ΔV than lift-off from Earth. The orbiting tankers would stay in orbit until full, and then transport the supplies to Earth orbit, making a round trip every 2 years. A baseline space infrastructure is described for delivery of 3000 tons of supplies to high Earth orbit per year, including liquid H2 and O2 propellants, water, liquid N2/O2 air, food, and plastics. A total of 14 MERIT factory units, each weighing approximately 4 tons, plus 7 tankers, would be required. A development plan for the MERIT concept is outlined.

  6. SPACE BASED INTERCEPTOR SCALING

    Energy Technology Data Exchange (ETDEWEB)

    G. CANAVAN

    2001-02-01

    Space Based Interceptors (SBI) have ranges that are adequate to address rogue ICBMs. They are not overly sensitive to 30-60 s delay times. Current technologies would support boost-phase intercept with about 150 interceptors. Higher acceleration and velocity could reduce that number by about a factor of 3, at the cost of heavier and more expensive Kinetic Kill Vehicles (KKVs). 6g SBI would reduce optimal constellation costs by about 35%; 8g SBI would reduce them another 20%. Interceptor ranges fall rapidly with theater missile range. Constellations increase significantly for ranges under 3,000 km, even with advanced interceptor technology. For distributed launches, these estimates recover earlier strategic scalings, which demonstrate the improved absentee ratio for larger or multiple launch areas. Constellations increase with the number of missiles and the number of interceptors launched at each. The economic estimates above suggest that two SBI per missile with a modest midcourse underlay is appropriate. The SBI KKV technology would appear to be common to space- and surface-based boost-phase systems, and could have synergisms with improved midcourse intercept and discrimination systems. While advanced technology could be helpful in reducing costs, particularly for short-range theater missiles, current technology appears adequate for pressing rogue ICBM, accidental, and unauthorized launches.

  7. Scaling and Stochastic Differential Equations- a statistical approach to quantitative modelling of multiscale processes in space plasmas.

    Science.gov (United States)

    Chapman, S. C.; Hnat, B.; Rowlands, G.; Watkins, N. W.

    A characteristic property of many time series in solar system plasmas is that they capture phenomenology that is intrinsically multiscale. Insights can often be gained by approaching the data from a statistical rather than an event-by-event point of view and considering measures that test for and quantify scaling properties. Here we discuss two sets of observations, in situ plasma measurements of the turbulent solar wind and geomagnetic indices, which show scaling. We treat the special case of self-affine time series which show statistical intermittency in the sense that the Probability Density Function (PDF) of the differenced variable is heavy tailed. Structure functions can in principle be used to determine the scaling properties of the higher order moments of the PDF, but in practice there are statistical limitations presented by a finite length time series. We consider a method of conditioning the data that overcomes these to recover the underlying self-affine scaling in a finite length time series, and test its applicability using an idealized Lévy flight. Having determined the scaling exponents from the data, we can then derive a Fokker-Planck model along with the associated Langevin equation, a stochastic dynamical equation for the fluctuations in the time series. This formalism has connections to understanding these systems in terms of the statistical mechanics of correlated out-of-equilibrium systems generally.
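    The structure-function estimate mentioned above can be sketched as follows (a synthetic random walk stands in for the plasma time series; the conditioning step is omitted):

```python
# Estimate structure functions S_q(tau) = <|x(t+tau) - x(t)|^q> and a
# scaling exponent zeta(q) from the slope of log S_q vs log tau.
# For an ordinary random walk (H = 0.5), zeta(2) should be close to 1.
import numpy as np

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(20000))   # self-affine test series

taus = np.array([1, 2, 4, 8, 16, 32, 64])
q = 2
S = np.array([np.mean(np.abs(x[t:] - x[:-t])**q) for t in taus])

# slope of log S_q against log tau gives the scaling exponent zeta(q)
zeta, _ = np.polyfit(np.log(taus), np.log(S), 1)
print(zeta)  # close to 1.0 for q = 2, H = 0.5
```

    For heavy-tailed data the higher-order moments converge poorly on a finite series, which is exactly the limitation the authors' conditioning method is designed to address.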

  8. Near-invariance under dynamic scaling for Navier-Stokes equations in critical spaces: a probabilistic approach to regularity problems

    Science.gov (United States)

    Ohkitani, Koji

    2017-01-01

    We make a detailed comparison between the Navier-Stokes equations and their dynamically scaled counterpart, the so-called Leray equations. The Navier-Stokes equations are invariant under static scaling transforms, but are not generally invariant under dynamic scaling transforms. We will study how closely they can be brought together using the critical dependent variables and discuss the implications for the regularity problems. Assuming that the Navier-Stokes equations written in the vector potential have a solution that blows up at t = 1, we derive the Leray equations by dynamic scaling. We observe: (1) the Leray equations have only one extra term on top of those of the Navier-Stokes equations; (2) we can recast the Navier-Stokes equations as a Wiener path integral and the Leray equations as another Ornstein-Uhlenbeck path integral. Using the Maruyama-Girsanov theorem, both equations take the identical form modulo the Maruyama-Girsanov density, which is valid up to t = 2√2 by the Novikov condition; (3) the global solution of the Leray equations is given by a finite-dimensional projection R of a functional of an Ornstein-Uhlenbeck process and a probability measure. If R remains smooth beyond t = 1 under an absolutely continuous change of the probability measure, we can rule out finite-time blowup by contradiction. There are two cases: (A) R given by a finite number of Wiener integrals, and (B) otherwise. Ruling out blowup in (A) is straightforward. For (B), a condition based on a limit passage in the Picard iterations is identified for such a contradiction to come out. The whole argument holds equally in R^d for any d ≥ 2.

  9. On the Use of Space-Environmental Satellite Data for Global Magnetohydrodynamic Simulations. Time-Scale Initialisation Approach

    Science.gov (United States)

    Lorenzo, Maibys Sierra; Domingues, Margarete Oliveira; Mecías, Angela León; Menconi, Varlei Everton; Mendes, Odim

    2016-12-01

    A global magnetohydrodynamic (MHD) model describes the solar-terrestrial system and the physical processes that take place in it. Information obtained from satellites provides input to the MHD model to compose a more realistic initial state for the equations and, therefore, more accurate simulations. However, the use of data with high temporal resolution can produce numerical instabilities that quickly interrupt the simulations. Moreover, satellite time series may have gaps, which can be a problem in this context. To help overcome these challenges, we propose in this work a methodology based on a variant of the continuous wavelet transform to introduce environmental satellite data into the global resistive MHD model originally developed by Prof. Ogino at the University of Nagoya. Our methodology uses a simplified time-scale version of the original data that preserves the most important spectral features of the phenomena of interest. We can then perform a long-term integration with this MHD model without any computational instability, while preserving the main time-scale features of the original data set and even overcoming possible gaps in the satellite data. This methodology also helps keep the physical results realistic.
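    A loose illustration of the preprocessing goal (not the authors' wavelet variant): a normalized-convolution smoother that both removes high-frequency jitter, which can destabilize the integration, and fills data gaps while keeping the large time scales. The series and gap here are synthetic:

```python
# Smooth a gappy satellite-like series with normalized convolution:
# convolve both the gap-filled data and a validity mask with a Gaussian,
# then divide, so gaps are interpolated from surrounding valid samples.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(5)
t = np.linspace(0, 4 * np.pi, 500)
series = np.sin(t) + 0.3 * rng.standard_normal(500)
series[200:230] = np.nan                    # a data gap

valid = np.isfinite(series)
filled = np.where(valid, series, 0.0)
weights = valid.astype(float)
smooth = gaussian_filter1d(filled, 10) / gaussian_filter1d(weights, 10)

print(np.isfinite(smooth).all())  # True: gap filled, large scales kept
```

    A wavelet-based version would instead reconstruct the series from a subset of scales, which gives finer control over which spectral features survive.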

  10. Scaling of program fitness spaces.

    Science.gov (United States)

    Langdon, W B

    1999-01-01

    We investigate the distribution of fitness of programs, concentrating on those represented as parse trees and, particularly, how such distributions scale with respect to changes in the size of the programs. By using a combination of enumeration and Monte Carlo sampling on a large number of problems from three very different areas, we suggest that, in general, once some minimum size threshold has been exceeded, the distribution of performance is approximately independent of program length. We prove this for both linear programs and simple side-effect-free parse trees. We give the density of solutions to the parity problems in program trees which are composed of XOR building blocks. Limited experiments with programs including side effects and iteration suggest a similar result may also hold for this wider class of programs.

  11. Large-scale film structures in space

    Science.gov (United States)

    Simon, Kirill

    Up-to-date space technology calls not only for taking account of, but also for employing, specific attributes of the outer space environment such as weightlessness, centrifugal forces, hard vacuum, and powerful solar radiation. These specific characteristics of outer space allow the use of various structures in space whose development and operation would be impossible or inexpedient on Earth. Currently, interest in large-scale space structures is growing; there are various projects on such multi-body space structures, and experiments are being conducted for their development. Such designs are represented by spacecraft with solar sails, orbiting solar reflectors, solar energy concentrators, low-frequency antennas, and others. This paper examines a large-scale flexible space structure made from thin reflective film used as the working surface of a sunlight reflector or a sailcraft. Specifically, this paper deals with techniques for modeling large-scale space structure attitude motion, numerical calculation of the vibrations which occur in the system after a spatial slew is performed, as well as optimal trajectory computations. Various methods of film structure attitude control and stabilization, including optimal slewing programs, are discussed.

  12. European Space Science Scales New Heights

    Science.gov (United States)

    1995-06-01

    about two years' budget and medium-size projects accounting for one year's budget. It is on the basis of the Horizon 2000 programme that Europe has: launched the Giotto probe, which successfully encountered Comets Halley (1986) and Grigg-Skjellerup (1992); developed the Hipparcos satellite, whose catalogue of 120 000 stars will be published in late 1996; built the Ulysses probe, which has been exploring the third dimension of the solar system since 1992; and contributed at a rate of 20% to the Hubble Space Telescope programme. It is thanks to Horizon 2000 that Europe is now preparing to launch ISO, Soho and Cluster. It is on the basis of the same long-term plan that Europe will build: Huygens, the probe to be launched in 1997, in co-operation with the United States, to explore the organic planet Titan; XMM, the X-ray telescope scheduled for a launch in 1999; Integral, the gamma-ray observatory due to be launched in 2001 in co-operation with Russia; Rosetta, the probe which is to land on Comet Wirtanen in 2012; and FIRST, the submillimetre telescope planned to be in orbit in 2006. After a long and fruitful apprenticeship, European space science therefore now looks set to come into its own. It currently ranks an honourable second place in the world and regularly leads the way in certain specific areas of exploration. Thus Europe is now at the forefront of cometary exploration, fundamental astronomy or "astrometry", solar physics and the physics of interplanetary plasma. So it should also be able to take the lead in infrared astronomy, high-energy astronomy and planetary exploration while continuing to conduct cometary studies with Rosetta. One remarkable fact is that the approach and success of Horizon 2000 have attracted unanimous praise both in and beyond Europe. The programme is being supported by virtually all Europe's scientists. It is drawing on and inspiring increasing numbers of scientists, including many of the younger generation.
Its content and management have

  13. Enabling the 2nd Generation in Space: Building Blocks for Large Scale Space Endeavours

    Science.gov (United States)

    Barnhardt, D.; Garretson, P.; Will, P.

    Today the world operates within a "first generation" space industrial enterprise, i.e. all industry is on Earth, all value from space is from bits (data essentially), and the focus is Earth-centric, with very limited parts of our population and industry participating in space. We are limited in access, manoeuvring, on-orbit servicing, in-space power, in-space manufacturing and assembly. The transition to a "Starship culture" requires the Earth to progress to a "second generation" space industrial base, which implies the need to expand the economic sphere of activity of mankind outside of an Earth-centric zone and into cis-lunar space and beyond, with an equal ability to tap the indigenous resources in space (energy, location, materials) that will contribute to an expanding space economy. Right now, there is no comfortable place for space applications that are not discovery science, exploration, military, or established Earth-bound services. For the most part, space applications leave out -- or at least leave nebulous, unconsolidated, and without a critical mass -- programs and development efforts for infrastructure, industrialization, space resources (survey and process maturation), non-traditional and persistent security situational awareness, and global utilities -- all of which, to a far greater extent than a discovery and exploration program, may help determine the elements of a 2nd generation space capability. We propose a focus to seed the pre-competitive research that will enable global industry to develop the competencies that we currently lack to build large-scale space structures on-orbit, which in turn would lay the foundation for long-duration spacecraft travel (i.e. key technologies in access, manoeuvrability, etc.). This paper will posit a vision-to-reality for a stepwise approach to the types of activities the US and global space providers could embark upon to lay the foundation for the 2nd generation of Earth in space.

  14. Partial Differential Equations A unified Hilbert Space Approach

    CERN Document Server

    Picard, Rainer

    2011-01-01

    This book presents a systematic approach to a solution theory for linear partial differential equations developed in a Hilbert space setting based on a Sobolev lattice structure, a simple extension of the well-established notion of a chain (or scale) of Hilbert spaces. The focus on a Hilbert space setting is a highly adaptable and suitable approach providing a more transparent framework for presenting the main issues in the development of a solution theory for partial differential equations. This global point of view is taken by focussing on the issues involved in determining the appropriate func

  15. An economical approach to space power systems

    Science.gov (United States)

    Teren, F.

    1978-01-01

    Projected energy demands for all NASA, DoD, and civil missions for the time span 1981 to 1995 are illustrated. Typical energy costs range from about $300 to $2000 per kW-hr, with an average of about $800 per kW-hr for long-duration missions. At these levels, the cost of the required energy would be several billion dollars per year by about 1985 and might constrain the number and types of NASA programs to be carried out. NASA is extensively pursuing approaches for reducing nonrecurring costs. Two programs are presented for the development of an economical approach to space power systems: (1) Economical Orbital Power (ECOP), whose objective is to demonstrate the applicability of a commercial approach to the development of a low-cost photovoltaic space power system; and (2) the Space Power Experiment (SPEX), whose objective is to demonstrate the application of industrial hardware for space power systems.

  16. Energy and the Scaling of Animal Space Use.

    Science.gov (United States)

    Tamburello, Natascia; Côté, Isabelle M; Dulvy, Nicholas K

    2015-08-01

    Daily animal movements are usually limited to a discrete home range area that scales allometrically with body size, suggesting that home-range size is shaped by metabolic rates and energy availability across species. However, there is little understanding of the relative importance of the various mechanisms proposed to influence home-range scaling (e.g., differences in realm productivity, thermoregulation, locomotion strategy, dimensionality, trophic guild, and prey size) and whether these extend beyond the commonly studied birds and mammals. We derive new home-range scaling relationships for fishes and reptiles and use a model-selection approach to evaluate the generality of home-range scaling mechanisms across 569 vertebrate species. We find no evidence that home-range allometry varies consistently between aquatic and terrestrial realms or thermoregulation strategies, but we find that locomotion strategy, foraging dimension, trophic guild, and prey size together explain 80% of the variation in home-range size across vertebrates when controlling for phylogeny and tracking method. Within carnivores, smaller relative prey size among gape-limited fishes contributes to shallower scaling relative to other predators. Our study reveals how simple morphological traits and prey-handling ability can profoundly influence individual space use, which underpins broader-scale patterns in the spatial ecology of vertebrates.
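    The allometric scaling at the heart of this study can be illustrated with a standard log-log regression (synthetic data with an assumed exponent, not the paper's 569-species dataset or its phylogenetically controlled model selection):

```python
# Fit an allometric home-range law H = a * M^b by ordinary least squares
# on log-transformed values; the slope of the log-log fit is the
# scaling exponent b.
import numpy as np

rng = np.random.default_rng(2)
mass = 10 ** rng.uniform(-1, 3, 200)            # body mass, kg
true_b = 1.1                                    # assumed scaling exponent
home_range = 0.05 * mass**true_b * 10**rng.normal(0, 0.1, 200)

b, log_a = np.polyfit(np.log10(mass), np.log10(home_range), 1)
print(b)  # recovers an exponent near the assumed 1.1
```

    Comparing such fitted exponents across groups (e.g. locomotion strategy or trophic guild) is what lets the authors test which mechanisms shape home-range scaling.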

  17. Potency-scaled partitioning in descriptor spaces with increasing dimensionality.

    Science.gov (United States)

    Bajorath, Jürgen

    2005-01-01

    Partitioning algorithms are described that operate in chemical reference spaces formed by combinations of binary-transformed molecular descriptors and aim at the identification of potent hits in ligand-based virtual screening. One of these approaches depends on mapping of consensus positions of compound activity sets in descriptor spaces followed by step-wise extension of the dimensionality of these spaces and re-mapping of activity-dependent consensus positions. Dimension extension is carried out to increase the discriminatory power of descriptor combinations and distinguish database compounds from potential hits. This method was originally named Dynamic Mapping of Consensus positions (DMC) and subsequently extended in order to take different potency levels of known active molecules into account and increase the probability of recognizing potent database hits. The extension was accomplished by adding potency scaling to DMC calculations, and the resulting approach was termed POT-DMC. Results of comparisons of DMC and POT-DMC calculations on different classes of active compounds with substantially varying potency levels support the validity of the POT-DMC approach.
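    A rough sketch of the potency-scaled consensus idea (the descriptors, weighting rule, and dimension-extension step of DMC/POT-DMC are greatly simplified; all names and numbers here are illustrative):

```python
# Binary-descriptor consensus with potency weighting: active compounds
# cast potency-weighted votes per descriptor bit; database compounds are
# then ranked by Hamming distance to the consensus position.
import numpy as np

rng = np.random.default_rng(3)
n_bits = 32
actives = rng.integers(0, 2, size=(10, n_bits))
potency = rng.uniform(1, 10, size=10)           # higher = more potent

# potency-scaled consensus: weighted majority vote per descriptor bit
weights = potency / potency.sum()
consensus = (weights @ actives > 0.5).astype(int)

database = rng.integers(0, 2, size=(100, n_bits))
hamming = (database != consensus).sum(axis=1)
ranked = np.argsort(hamming)                     # most consensus-like first
print(ranked[:5])                                # top-ranked candidate hits
```

    In the actual method, the descriptor space is extended stepwise and the consensus re-mapped at each dimensionality, which is what sharpens the discrimination between database compounds and potential hits.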

  18. Cosmonaut Krikalev Views Approaching Space Shuttle Atlantis

    Science.gov (United States)

    2001-01-01

    Cosmonaut Sergei K. Krikalev, flight engineer for Expedition One, is positioned by a porthole aboard the Zvezda Service Module of the International Space Station (ISS) as the Space Shuttle Atlantis approaches for docking to begin several days of joint activities between the two crews. Visible through the window are the crew cabin and forward section of the Shuttle amidst scattered clouds above the Western Pacific. The aft part of the cargo bay stowing the Destiny Laboratory is not visible in this scene.

  19. A Multi-Method Proposal to Study Public Space at the Neighborhood Scale from a Transactional Approach

    Directory of Open Access Journals (Sweden)

    Hector Rodrigo Berroeta Torres

    2012-03-01

    The transactional perspective is very attractive for analyzing and acting on public space at the neighborhood scale; however, methodological and language differences between the various disciplines involved make this quite complex. In this article we present a multi-method qualitative analysis strategy that integrates a graphical spatialization of results, as an attempt to bring together the graphic and textual languages that dominate single-discipline approaches to public space.
    Various techniques were triangulated and subjected to the same analytical process (Grounded Theory), which, supported by the processing software Atlas/ti and ArcGIS, made it possible to link graphic aspects (maps and images) with observations from the researchers and biographical accounts of the participants, in this manner associating specific physical spaces with the development and construction of spatial meanings and uses.

  20. An alternative to scale-space representation for extracting local features in image recognition

    DEFF Research Database (Denmark)

    Andersen, Hans Jørgen; Nguyen, Phuong Giang

    2012-01-01

    In image recognition, the common approach to extracting local features using a scale-space representation usually has three main steps: first, interest points are extracted at different scales; next, from a patch around each interest point the rotation is calculated with corresponding orientation... and compensation; and finally, a descriptor is computed for the derived patch (i.e. the feature of the patch). To avoid the memory- and computation-intensive process of constructing the scale-space, we use a method where no scale-space is required. This is done by dividing the given image into a number of triangles...

  1. Ab interno approach to the suprachoroidal space.

    Science.gov (United States)

    Bailey, Andrew K; Sarkisian, Steven R; Vold, Steven D

    2014-08-01

    Although glaucoma filtration surgery options are generally effective means of surgically lowering intraocular pressure (IOP), complications have driven innovators to develop novel approaches to the lowering of IOP. Using the suprachoroidal space has been one novel way to avoid these complications. This article reviews how recent innovation has exploited this space to lower IOP. Dr. Sarkisian and Dr. Vold have received research support from Transcend Medical, Inc. and Glaukos Corp. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2014 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  2. The space-scale cube : An integrated model for 2D polygonal areas and scale

    NARCIS (Netherlands)

    Meijers, B.M.; Van Oosterom, P.J.M.

    2011-01-01

    This paper introduces the concept of a space-scale partition, which we term the space-scale cube – analogous with the space-time cube (first introduced by Hägerstrand, 1970). We take the view of ‘map generalization is extrusion of 2D data into the third dimension’ (as introduced by Vermeij et al.,

  3. Modeling the uncertainty associated with the observation scale of space/time natural processes

    Science.gov (United States)

    Lee, S.; Serre, M.

    2005-12-01

    In many mapping applications of spatiotemporally distributed hydrological processes, traditional space/time geostatistics approaches have played a significant role in estimating a variable of interest at unsampled locations. Measured values are usually sparsely located over space and time due to the difficulty and cost of obtaining data. In some cases, the data for the hydrological variable of interest may have been collected at different temporal or spatial observation scales. Even though mixing data measured at different space/time scales may alleviate the problem of the sparsity of the available data, it essentially disregards the scale effect on estimation results. The importance of the scale effect must be recognized, since a variable displays different physical properties depending on the spatial or temporal scale at which it is observed. In this study we develop a mathematical framework to derive the conditional Probability Density Function (PDF) of a variable at the local scale given an observation of that variable at a larger spatial or temporal scale, which properly models the uncertainty associated with the different observation scales of space/time natural processes. The developed framework makes it possible to efficiently mix data observed at a variety of scales by accounting for the data uncertainty associated with each observation scale present, and therefore generates soft data that are rigorously assimilated in the Bayesian Maximum Entropy (BME) method of modern geostatistics to increase the accuracy of the map at the scale of interest. We investigate the proposed approach with synthetic case studies involving observations of a space/time process at a variety of temporal and spatial scales. These case studies demonstrate the power of the proposed approach by leading to a set of maps with a noticeable increase in mapping accuracy over classical approaches that do not account for scale effects. Hence the proposed approach will be useful for a wide variety of
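    A worked toy version of the scale effect (assuming joint Gaussianity and illustrative variances, not the paper's general framework): the conditional PDF of a point-scale value given a block-average observation is Gaussian with a shifted mean and reduced variance, which is exactly the kind of soft datum a BME analysis can assimilate.

```python
# Conditional PDF of a point value z given a block average zbar under a
# joint Gaussian model: z | zbar ~ N(mu_c, var_c) by standard
# Gaussian conditioning. All numbers are illustrative.
var_point = 1.0        # point-scale variance
var_block = 0.4        # block-average variance (smaller: averaging smooths)
cov = 0.4              # Cov(z, zbar); for a block containing the point,
                       # Cov(z, zbar) equals Var(zbar) under stationarity
zbar_obs = 1.5         # observed block average (zero-mean field assumed)

mu_c = (cov / var_block) * zbar_obs        # conditional mean
var_c = var_point - cov**2 / var_block     # conditional variance
print(mu_c, round(var_c, 2))  # 1.5 0.6
```

    The nonzero conditional variance is the uncertainty introduced by the observation scale: the coarser the observation relative to the estimation scale, the wider this conditional PDF becomes.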

  4. Algebraic Framework for Linear and Morphological Scale-Spaces

    NARCIS (Netherlands)

    Heijmans, H.J.A.M.; van den Boomgaard, R.

    2002-01-01

    This paper proposes a general algebraic construction technique for image scale-spaces. The basic idea is to first downscale the image by some factor using an invertible scaling, then apply an image operator (linear or morphological) at a unit scale, and finally resize the image to its original
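    The construction described above is concrete enough to transcribe directly (the operator and scale factor here are illustrative choices, and the interpolating resize is only approximately invertible):

```python
# Algebraic scale-space construction sketch: downscale the image, apply
# an operator at unit scale (here a 3x3 morphological dilation), then
# resize back to the original size.
import numpy as np
from scipy.ndimage import zoom, grey_dilation

def scaled_operator(image, scale):
    small = zoom(image, 1.0 / scale, order=1)        # downscale by `scale`
    processed = grey_dilation(small, size=(3, 3))    # unit-scale operator
    return zoom(processed, float(scale), order=1)    # resize to original

image = np.random.rand(64, 64)
out = scaled_operator(image, 4)
print(out.shape)  # (64, 64)
```

    Varying `scale` yields a family of operators, one per scale, which is what makes this a scale-space: swapping `grey_dilation` for a linear convolution gives the linear case the title refers to.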

  5. Algebraic framework for linear and morphological scale-spaces

    NARCIS (Netherlands)

    H.J.A.M. Heijmans (Henk); R. van den Boomgaard

    2000-01-01

    This paper proposes a general algebraic construction technique for image scale-spaces. The basic idea is to first downscale the image by some factor using an invertible scaling, then apply an image operator (linear or morphological) at a unit scale, and finally resize the image to its

  6. An Application Of The Theory Of Scale Of Banach Spaces

    Directory of Open Access Journals (Sweden)

    Dawidowski Łukasz

    2015-09-01

    The abstract Cauchy problem on scales of Banach spaces has been considered by many authors. The goal of this paper is to show that the choice of the space on the scale is significant. We prove a theorem that the selection of the spaces in which the Cauchy problem u_t − Δu = u|u|^s with initial–boundary conditions is considered has an influence on the selection of the index s. For the Cauchy problem connected with the heat equation we study how the change of the base space influences the regularity of the solutions.

  7. Adaptive image denoising using scale and space consistency.

    Science.gov (United States)

    Scharcanski, Jacob; Jung, Cláudio R; Clarke, Robin T

    2002-01-01

    This paper proposes a new method for image denoising with edge preservation, based on image multiresolution decomposition by a redundant wavelet transform. In our approach, edges are implicitly located and preserved in the wavelet domain, whilst image noise is filtered out. At each resolution level, the image edges are estimated by gradient magnitudes (obtained from the wavelet coefficients), which are modeled probabilistically, and a shrinkage function is assembled based on the model obtained. Joint use of space and scale consistency is applied for better preservation of edges. The shrinkage functions are combined to preserve edges that appear simultaneously at several resolutions, and geometric constraints are applied to preserve edges that are not isolated. The proposed technique produces a filtered version of the original image, where homogeneous regions appear separated by well-defined edges. Possible applications include image presegmentation, and image denoising.
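    The shrinkage idea can be illustrated with a one-level orthonormal Haar transform and a plain soft-threshold (the paper instead fits its shrinkage function probabilistically and works with a redundant wavelet transform):

```python
import numpy as np

def haar_soft_denoise(x, thresh):
    """One-level Haar decomposition, soft-threshold the detail
    coefficients, reconstruct. x must have even length. A generic
    shrinkage sketch, not the paper's fitted shrinkage function."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)      # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)      # detail band (edges + noise)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # shrinkage
    y = np.empty_like(x, dtype=float)
    y[0::2] = (a + d) / np.sqrt(2)            # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

    With threshold 0 the transform round-trips exactly; large thresholds suppress detail coefficients and flatten the signal toward local averages, which is where scale and space consistency checks would decide how aggressively to shrink.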

  8. Improved kernel correlation filter tracking with Gaussian scale space

    Science.gov (United States)

    Tan, Shukun; Liu, Yunpeng; Li, Yicui

    2016-10-01

    Recently, the Kernel Correlation Filter (KCF) has attracted great attention in the visual tracking field, since it provides excellent tracking performance at high processing speed. However, how to handle scale variation is still an open problem. In this paper, focusing on this issue, a method based on Gaussian scale space is proposed. First, KCF is used to estimate the location of the target; the context region, which includes the target and its surrounding background, becomes the image to be matched. The Gaussian scale space of this image is obtained by convolving it with Gaussian kernels. From this scale space, target images at different scales are estimated. Combined with the scale parameter of the scale space, each corresponding scale image is resized by bilinear interpolation to simulate target imaging at different scales. Finally, the template is matched against the images at the different scales, using the Mean Absolute Difference (MAD) as the matching criterion. The best match yields the optimal zoom ratio s, from which the target size is estimated. In the experiments, comparisons with CSK, KCF, etc. demonstrate that the proposed method achieves a marked improvement in accuracy and is an efficient algorithm.
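    A stripped-down 1-D version of such a MAD-based scale search might look as follows (nearest-neighbour resampling stands in for bilinear interpolation, and all names are hypothetical, not the paper's implementation):

```python
import numpy as np

def resample(x, n):
    # nearest-neighbour resampling to length n (bilinear in the paper)
    idx = (np.arange(n) * len(x)) // n
    return x[idx]

def best_scale(template, patch, scales):
    """Return the zoom ratio whose centred window of the context patch,
    resized to the template length, minimises the MAD."""
    t, mads = len(template), []
    for s in scales:
        w = min(len(patch), max(1, int(round(t * s))))
        start = (len(patch) - w) // 2
        cand = resample(patch[start:start + w], t)
        mads.append(np.mean(np.abs(cand - template)))
    return scales[int(np.argmin(mads))]

template = np.array([0., 1., 2., 3.])
patch = np.repeat(template, 2)            # target imaged at twice the size
s = best_scale(template, patch, [0.5, 1.0, 2.0])   # -> 2.0
```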

  9. Modifying patch-scale connectivity to initiate landscape change: an experimental approach to link scales

    Science.gov (United States)

    Peters, D. P.; Herrick, J.; Okin, G. S.; Pillsbury, F. C.; Duniway, M.; Vivoni, E. R.; Sala, O.; Havstad, K.; Monger, H. C.; Yao, J.; Anderson, J.

    2011-12-01

    Nonlinear interactions and feedbacks across spatial and temporal scales are common features of biological and physical systems. These emergent behaviors often result in surprises that challenge the ability of scientists to understand and predict system behavior at one scale based on information at finer or broader scales. Changes in ecosystem states under directional changes in climate represent a class of challenging dynamics of particular significance in many terrestrial ecosystems of the world. We are focusing on one system of global relevance and importance (conversion of arid grasslands to degraded shrublands). We are using a novel, multi-scale manipulative experiment to understand the key processes governing state changes, and to test specific hypotheses about how patterns and processes interact across scales to potentially reverse shrublands to grasslands or to other alternative states. We are using this experiment combined with simulation models to address two questions: (1) At what spatial scales do fine-scale processes propagate to exhibit broad-scale impacts? (2) At what spatial scales do broad-scale drivers overwhelm fine-scale processes? In this experiment, we initiate grass-soil feedbacks via the redistribution of resources at the plant and patch scale using Connectivity Modifiers (ConMods). These patterns are expected to propagate through time and space to influence grass dominance at the landscape scale with implications for regional scale land-atmosphere interactions. Initial results show that ConMods are effective in reducing horizontal water redistribution, and increasing local water availability to result in recruitment and growth of grasses and other herbaceous plants. We are integrating this information with a suite of process-based ecosystem-hydrologic-aeolian-atmospheric simulation models to investigate threshold dynamics and feedbacks across scales, and to predict alternative states under climate change. We believe this cross-scale approach

  10. Collective Space-Sensing Coordinates Pattern Scaling in Engineered Bacteria

    National Research Council Canada - National Science Library

    Cao, Yangxiaolu; Ryser, Marc D; Payne, Stephen; Li, Bochong; Rao, Christopher V; You, Lingchong

    2016-01-01

    .... We found that the ring width exhibits perfect scale invariance to the colony size. Our analysis revealed a collective space-sensing mechanism, which entails sequential actions of an integral feedback loop and an incoherent feedforward loop...

  11. Dissipative fragmentation in a phase space approach

    Energy Technology Data Exchange (ETDEWEB)

    Adorno, A.; Di Toro, M.; Bonasera, A.; Gregoire, C.; Gulminelli, F.

    Semi-classical approaches have evidenced the role of one- and two-body dissipation in nucleus-nucleus collisions. On the other hand, substantial energy dissipation and some angular momentum transfer have been observed at moderate energies, where a fragmentation process is the dominant reaction mechanism. In order to analyse the main features of these reactions, we developed a phenomenological model taking into account phase space constraints. The transition between deep inelastic collisions and abrasion-like fragmentation is described, and general agreement with available data is found.

  12. Axiomatic approaches to Stevens' magnitude scaling

    DEFF Research Database (Denmark)

    Zimmer, Karin; Ellermeier, Wolfgang

    2006-01-01

    & Faulhammer, 2000), the authors found commutativity to hold and multiplicativity to fail in the majority of listeners, leading to the conclusion that, while respondents seem to be able to base their judgments on a ratio-scale of sensation strength, the numerals used in the assessments do not correspond...... and experimental tasks on the one hand, and on theories that are relaxing these axioms which are inherent in Stevens’ approach, on the other....

  13. Step scaling in coordinate space. Running of the quark mass

    Energy Technology Data Exchange (ETDEWEB)

    Cichy, Krzysztof [Frankfurt Univ., Frankfurt am Main (Germany). Inst. fuer Theoretische Physik; Adam Mickiewicz Univ., Poznan (Poland). Faculty of Physics; Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Korcyl, Piotr [Regensburg Univ. (Germany). Inst. fuer Theoretische Physik

    2016-12-15

    We perform a benchmark study of the step scaling procedure for the ratios of renormalization constants extracted from position space correlation functions. We work in the quenched approximation and consider the pseudoscalar, scalar, vector and axial vector bilinears. The pseudoscalar/scalar cases allow us to obtain the non-perturbative running of the quark mass over a wide range of energy scales - from around 17 GeV to below 1.5 GeV - which agrees well with the 4-loop prediction of continuum perturbation theory. We find that step scaling is feasible in X-space and we discuss its advantages and potential problems.

  14. Self-aggregation in scaled principal component space

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Chris H.Q.; He, Xiaofeng; Zha, Hongyuan; Simon, Horst D.

    2001-10-05

    Automatic grouping of voluminous data into meaningful structures is a challenging task frequently encountered in broad areas of science, engineering and information processing. These data clustering tasks are frequently performed in Euclidean space or a subspace chosen from principal component analysis (PCA). Here we describe a space obtained by a nonlinear scaling of PCA in which data objects self-aggregate automatically into clusters. Projection into this space gives sharp distinctions among clusters. Gene expression profiles of cancer tissue subtypes, Web hyperlink structure and Internet newsgroups are analyzed to illustrate interesting properties of the space.
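    In spirit, the nonlinear scaling amounts to a degree normalization of a similarity matrix before eigendecomposition; the following is a minimal sketch of that reading, not the authors' exact formulation:

```python
import numpy as np

def scaled_components(W, k):
    """Embed objects via the top-k eigenvectors of D^(-1/2) W D^(-1/2),
    where W is a symmetric similarity matrix and D its degree matrix
    (an illustrative stand-in for the paper's scaled PCA space)."""
    d = W.sum(axis=1)
    Dinv = 1.0 / np.sqrt(d)
    S = Dinv[:, None] * W * Dinv[None, :]   # D^(-1/2) W D^(-1/2)
    _, vecs = np.linalg.eigh(S)             # eigenvalues ascending
    return vecs[:, -k:]                     # rows = embedded objects

# Two disconnected groups of three identical objects
W = np.kron(np.eye(2), np.ones((3, 3)))
E = scaled_components(W, 2)
# rows within a group coincide: each cluster self-aggregates to a point
```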

  15. Space Sustainment: A New Approach for America in Space

    Science.gov (United States)

    2014-12-01

    permeate all aspects of US policy, yet the history of American activity in space seems to indicate otherwise. Accessing and exploiting space involves...highly specialized technologies, astronomically high costs, and considerable risk of failure. In the formative years for space, these...

  16. Multi-scale Dynamical Processes in Space and Astrophysical Plasmas

    CERN Document Server

    Vörös, Zoltán; IAFA 2011 - International Astrophysics Forum 2011 : Frontiers in Space Environment Research

    2012-01-01

    Magnetized plasmas in the universe exhibit complex dynamical behavior over a huge range of scales. The fundamental mechanisms of energy transport, redistribution and conversion occur at multiple scales. The driving mechanisms often include energy accumulation, free-energy-excited relaxation processes, dissipation and self-organization. The plasma processes associated with energy conversion, transport and self-organization, such as magnetic reconnection, instabilities, linear and nonlinear waves, wave-particle interactions, dynamo processes, turbulence, heating, diffusion and convection represent fundamental physical effects. They demonstrate similar dynamical behavior in near-Earth space, on the Sun, in the heliosphere and in astrophysical environments. 'Multi-scale Dynamical Processes in Space and Astrophysical Plasmas' presents the proceedings of the International Astrophysics Forum Alpbach 2011. The contributions discuss the latest advances in the exploration of dynamical behavior in space plasmas environm...

  17. Nonlinear scaling of space use in human hunter–gatherers

    Science.gov (United States)

    Hamilton, Marcus J.; Milne, Bruce T.; Walker, Robert S.; Brown, James H.

    2007-01-01

    Use of space by both humans and other mammals should reflect underlying physiological, ecological, and behavioral processes. In particular, the space used by an individual for its normal activities should reflect the interplay of three constraints: (i) metabolic resource demand, (ii) environmental resource supply, and (iii) social behaviors that determine the extent to which space is used exclusively or shared with other individuals. In wild mammals, there is an allometric scaling relation between the home range of an individual and its body size: Larger mammals require more space per individual, but this relation is additionally modified by productivity of the environment, trophic niche, sociality, and ability to defend a territory [Kelt DA, Van Vuren D (1999) Ecology 80: 337–340; Kelt DA, Van Vuren D (2001) Am Nat 157:637–645; Haskell JP, Ritchie ME, Olff H (2002) Nature 418:527–530; Damuth J (1987) Biol J Linn Soc 31:193–246; Damuth J (1981) Nature 290:699–700; and other previously published work]. In this paper we show how similar factors affect use of space by human hunter–gatherers, resulting in a nonlinear scaling relation between area used per individual and population size. The scaling exponent is less than one, so the area required by an average individual decreases with increasing population size, because social networks of material and information exchange introduce an economy of scale. PMID:17360598
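    The sublinear scaling relation A ∝ N^β with β < 1 can be checked by an ordinary log-log regression; a minimal sketch on synthetic data (the exponent 0.7 here is illustrative, not the paper's estimate):

```python
import numpy as np

def scaling_exponent(population, area):
    """Least-squares slope of log(area) against log(population)."""
    slope, _ = np.polyfit(np.log(population), np.log(area), 1)
    return slope

pop = np.array([25., 100., 400., 1600.])
area = 3.0 * pop ** 0.7            # A = c * N^beta with beta = 0.7 < 1
beta = scaling_exponent(pop, area)
# per-capita area scales as N**(beta - 1): it shrinks as groups grow,
# the economy of scale described in the abstract
```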

  18. Multi-Scale Singularity Trees: Soft-Linked Scale-Space Hierarchies

    DEFF Research Database (Denmark)

    Somchaipeng, Kerawit; Sporring, Jon; Kreiborg, Sven

    2005-01-01

    We consider images as manifolds embedded in a hybrid of a high dimensional space of coordinates and features. Using the proposed energy functional and mathematical landmarks, images are partitioned into segments. The nesting of image segments occurring at catastrophe points in the scale-space is ...

  19. SPACE Approach to Concrete's Space Structure and its Mechanical Properties

    NARCIS (Netherlands)

    Stroeven, P.; Stroeven, M.

    2001-01-01

    Structural properties of particulate materials can be described in densities of the particle packing, more generally denoted as particle composition. Obviously, this global measure does not offer information on the way particles are mutually arranged in space. This is associated with particle

  20. Parametric Approach in Designing Large-Scale Urban Architectural Objects

    Directory of Open Access Journals (Sweden)

    Arne Riekstiņš

    2011-04-01

    When all the disciplines of various science fields converge and develop, new approaches to contemporary architecture arise. The author looks towards approaching digital architecture from a parametric viewpoint, revealing its generative capacity, originating from the fields of the aeronautical, naval, automobile and product-design industries. The author also goes explicitly through his design-cycle workflow for testing the latest methodologies in architectural design. The design process steps involved: extrapolating valuable statistical data about the site into three-dimensional diagrams, defining a certain materiality of what is being produced, ways of presenting structural skin and structure simultaneously, contacting the object with the ground, interior program definition of the building with floors and possible spaces, logic of fabrication, and CNC milling of the prototype. The tool developed by the author and reviewed in this article features enormous performative capacity and is applicable to various architectural design scales.

  1. Real Space Approach to CMB deboosting

    CERN Document Server

    Yoho, Amanda; Starkman, Glenn D.; Pereira, Thiago S.

    2013-01-01

    The effect of our Galaxy's motion through the Cosmic Microwave Background rest frame, which aberrates and Doppler shifts incoming photons measured by current CMB experiments, has been shown to produce mode-mixing in the multipole space temperature coefficients. However, multipole space determinations are subject to many difficulties, and a real-space analysis can provide a straightforward alternative. In this work we describe a numerical method for removing Lorentz-boost effects from real-space temperature maps. We show that to deboost a map so that one can accurately extract the temperature power spectrum requires calculating the boost kernel at a finer pixelization than one might naively expect. In idealized cases that allow for easy comparison to analytic results, we have confirmed that there is indeed mode mixing among the spherical harmonic coefficients of the temperature. We find that using a boost kernel calculated at Nside=8192 leads to a 1% bias in the binned boosted power spectrum at l~2000, while ...

  2. Space Station overall management approach for operations

    Science.gov (United States)

    Paules, G.

    1986-01-01

    An Operations Management Concept developed by NASA for its Space Station Program is discussed. The operational goals, themes, and design principles established during program development are summarized. The major operations functions are described, including: space systems operations, user support operations, prelaunch/postlanding operations, logistics support operations, market research, and cost/financial management. Strategic, tactical, and execution levels of operational decision-making are defined.

  3. A Psychosocial Approach to Understanding Underground Spaces

    Directory of Open Access Journals (Sweden)

    Eun H. Lee

    2017-03-01

    With a growing need for usable land in urban areas, subterranean development has been gaining attention. While construction of large underground complexes is not a new concept, our understanding of the various socio-cultural aspects of staying underground is still at an early stage. With the projected emergence of underground built environments, future populations may spend much more of their working, transit, and recreational time in underground spaces. Therefore, it is essential to understand the challenges and advantages of such environments in order to improve the future welfare of users of underground spaces. The current paper discusses various psycho-social aspects of underground spaces, the impact they can have on the culture shared among the occupants, and possible solutions to overcome some of these challenges.

  4. Scaling Consumers' Purchase Involvement: A New Approach

    Directory of Open Access Journals (Sweden)

    Jörg Kraigher-Krainer

    2012-06-01

    A two-dimensional scale, called the ECID Scale, is presented in this paper. The scale is based on a comprehensive model and captures the two antecedent factors of purchase-related involvement, namely whether motivation is intrinsic or extrinsic and whether risk is perceived as low or high. The procedure of scale development and item selection is described. The scale turns out to perform well in terms of validity, reliability, and objectivity despite the use of a small set of items (four each), allowing for simultaneous measurements of up to ten purchases per respondent. The procedure of administering the scale is described so that it can now easily be applied by both scholars and practitioners. Finally, managerial implications of data received from its application, which provide insights into possible strategic marketing conclusions, are discussed.

  5. Zooming in on Spatial Scaling: Preschool Children and Adults Use Mental Transformations to Scale Spaces

    Science.gov (United States)

    Möhring, Wenke; Newcombe, Nora S.; Frick, Andrea

    2014-01-01

    Spatial scaling is an important prerequisite for many spatial tasks and involves an understanding of how distances in different-sized spaces correspond. Previous studies have found evidence for such an understanding in preschoolers; however, the mental processes involved remain unclear. In the present study, we investigated whether children and…

  6. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    Science.gov (United States)

    Nance, Donald K.; Liever, Peter A.

    2015-01-01

    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center (MSFC). The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  7. A vector space approach to geometry

    CERN Document Server

    Hausner, Melvin

    2010-01-01

    The effects of geometry and linear algebra on each other receive close attention in this examination of geometry's correlation with other branches of math and science. In-depth discussions include a review of systematic geometric motivations in vector space theory and matrix theory; the use of the center of mass in geometry, with an introduction to barycentric coordinates; axiomatic development of determinants in a chapter dealing with area and volume; and a careful consideration of the particle problem. 1965 edition.

  8. A METHODOLOGICAL APPROACH FOR READING URBAN OPEN SPACE.

    Directory of Open Access Journals (Sweden)

    Manal Al-Bishawi

    2011-03-01

    This paper suggests a methodological approach for reading and analyzing urban open spaces, based on the concept of the behavioral setting, which deals with individuals and their behavior as a tool for reading the urban open space. The behavioral setting is defined as the smallest living entity in the physical environment and has three main components: physical (design), social (use) and cultural (rules). Based on that, the urban open space, as a part of the physical environment, can be considered as either one setting or a system of settings, according to the activities that take place within it and the users who occupy it. The proposed methodological approach will be discussed through the theoretical analysis of various studies on the physical form of urban open spaces. The approach is expected to help planners and architects in developing and providing urban spaces that comply with people's needs and values.

  9. An Implementation and Parallelization of the Scale Space Meshing Algorithm

    Directory of Open Access Journals (Sweden)

    Julie Digne

    2015-11-01

    Creating an interpolating mesh from an unorganized set of oriented points is a difficult problem which is often overlooked. Most methods focus indeed on building a watertight smoothed mesh by defining some function whose zero level set is the surface of the object. However in some cases it is crucial to build a mesh that interpolates the points and does not fill the acquisition holes: either because the data are sparse and trying to fill the holes would create spurious artifacts, or because the goal is to explore visually the data exactly as they were acquired without any smoothing process. In this paper we detail a parallel implementation of the Scale-Space Meshing algorithm, which builds on the scale-space framework for reconstructing a high precision mesh from an input oriented point set. This algorithm first smoothes the point set, producing a singularity free shape. It then uses a standard mesh reconstruction technique, the Ball Pivoting Algorithm, to build a mesh from the smoothed point set. The final step consists in back-projecting the mesh built on the smoothed positions onto the original point set. The result of this process is an interpolating, hole-preserving surface mesh reconstruction.

  10. Scaling exponents in space plasmas: a fractional Levy model

    Science.gov (United States)

    Watkins, N. W.; Credgington, D.; Hnat, B.; Chapman, S. C.; Freeman, M. P.; Greenhough, J.

    Mandelbrot introduced the concept of fractals to describe the non-Euclidean shape of many aspects of the natural world. In the time series context he proposed the use of fractional Brownian motion (fBm) to model non-negligible temporal persistence, the "Joseph effect", and Levy flights to quantify large discontinuities, the "Noah effect". In space physics the effects are manifested as intermittency and long-range correlation, well-established features of geomagnetic indices and their solar wind drivers. In order to capture and quantify the Noah and Joseph effects in one compact model, we propose the application of a bridge, fractional Levy motion (fLm), to space physics. We perform an initial evaluation of some previous scaling results in this paradigm and show how fLm can model the previously observed exponents (physics/0509058, in press, Space Science Reviews). We discuss the similarities and differences between fLm and ambivalent processes based on fractional kinetic equations (e.g. Brockmann et al., Nature, 2006), and suggest some new directions for the future.

  11. Large-scale approaches for glycobiology

    OpenAIRE

    Christopher T. Campbell; Yarema, Kevin J.

    2005-01-01

    Glycosylation, the attachment of carbohydrates to proteins and lipids, influences many biological processes. Despite detailed characterization of the cellular components that carry out glycosylation, a complete picture of a cell's glycoconjugates remains elusive because of the challenges inherent in characterizing complex carbohydrates. This article reviews large-scale techniques for accelerating progress in glycobiology.

  12. Robust control of UAVs using the parameter space approach

    NARCIS (Netherlands)

    Abdelmoeti, Samer; Carloni, Raffaella

    2016-01-01

    In this paper a robust PID controller for quadrotor unmanned aerial vehicles is proposed that uses the parameter space approach. Stability and robustness analyses are carried out in the controller parameter space to determine a set of stable controller gains that also guarantee robustness against

  13. Transmission of chirality through space and across length scales

    Science.gov (United States)

    Morrow, Sarah M.; Bissette, Andrew J.; Fletcher, Stephen P.

    2017-05-01

    Chirality is a fundamental property and vital to chemistry, biology, physics and materials science. The ability to use asymmetry to operate molecular-level machines or macroscopically functional devices, or to give novel properties to materials, may address key challenges at the heart of the physical sciences. However, how chirality at one length scale can be translated to asymmetry at a different scale remains poorly understood. In this Review, we discuss systems where chiral information is translated across length scales and through space. A variety of synthetic systems involve the transmission of chiral information between the molecular-, meso- and macroscales. We show how fundamental stereochemical principles may be used to design and understand nanoscale chiral phenomena and highlight important recent advances relevant to nanotechnology. The survey reveals that while the study of stereochemistry on the nanoscale is a rich and dynamic area, our understanding of how to control and harness it and dial up specific properties is still in its infancy. The long-term goal of controlling nanoscale chirality promises to be an exciting journey, revealing insight into biological mechanisms and providing new technologies based on dynamic physical properties.

  14. Distress vs. Non-Distress Approach and the Personal Space of Masculine, Feminine, and Androgynous Subjects.

    Science.gov (United States)

    Glisson, Pamela A.; Thomas, Georgelle

    Examined was the relationship between personal space and sex roles. Feminine females (N=25), androgynous females (N=25) and masculine males (N=25) viewed a film of male and female approaching stimulus persons in distress and non-distress conditions. Subjects marked the Comfortable Interpersonal Distance Scale at the point where they would prefer…

  15. a Web Service Approach for Linking Sensors and Cellular Spaces

    Science.gov (United States)

    Isikdag, U.

    2013-09-01

    More and more devices are starting to be connected to the Internet. In the future the Internet will not only be a communication medium for people; it will in fact be a communication environment for devices. The connected devices, which are also referred to as Things, will have the ability to interact with other devices over the Internet: (i) providing information in interoperable form and (ii) consuming/utilizing such information with the help of sensors embedded in them. This overall concept is known as the Internet-of-Things (IoT). It requires new system architectures to be investigated for establishing relations between spaces and sensors. The research presented in this paper elaborates on an architecture developed with this aim, i.e. linking spaces and sensors using a RESTful approach. The objective is making spaces aware of (sensor-embedded) devices, and making devices aware of spaces, in a loosely coupled way (i.e. a state/usage/function change in the spaces would not have an effect on sensors; similarly, a location/state/usage/function change in sensors would not have any effect on spaces). The proposed architecture also enables the automatic assignment of sensors to spaces depending on space geometry and sensor location.
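    In a RESTful design each link could be exposed as a resource, but the geometric core of such an automatic assignment can be sketched with plain containment tests (a hypothetical sketch; axis-aligned bounding boxes replace real space geometry, and all names are illustrative, not the paper's API):

```python
def assign_sensors(spaces, sensors):
    """Link each sensor to the first space whose footprint contains it.
    spaces: name -> (x0, y0, x1, y1) rectangle; sensors: id -> (x, y).
    Illustrative only: real systems would use actual space geometry."""
    links = {}
    for sensor_id, (x, y) in sensors.items():
        for space_name, (x0, y0, x1, y1) in spaces.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                links[sensor_id] = space_name   # space becomes sensor-aware
                break
    return links

spaces = {"roomA": (0, 0, 5, 5), "roomB": (5, 0, 10, 5)}
sensors = {"s1": (1, 1), "s2": (7, 2)}
links = assign_sensors(spaces, sensors)   # {'s1': 'roomA', 's2': 'roomB'}
```

    Because the mapping is recomputed from geometry and location alone, moving a sensor or redefining a space only changes the derived links, preserving the loose coupling described above.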

  16. A State Space Approach to Canonical Factorization with Applications

    CERN Document Server

    Bart, Harm; Kaashoek, MA; Ran, Andre CM

    2010-01-01

    The present book deals with canonical factorization of matrix and operator functions that appear in state space form or that can be transformed into such a form. A unified geometric approach is used. The main results are all expressed explicitly in terms of matrices or operators, which are parameters of the state space representation. The applications concern different classes of convolution equations. A large part of the book deals with rational matrix functions only.

  17. Turkish validation of the Emotional Approach Coping Scale.

    Science.gov (United States)

    Durak, Mithat; Senol-Durak, Emre

    2011-08-01

    The Emotional Approach Coping Scale is frequently used to assess coping, which consists of emotional processing and emotional expression. The present aim was to examine the psychometric properties of this scale by utilizing two independent samples: university students (n = 481) and community members (n = 284). Based on goodness-of-fit indices in confirmatory factor analysis, a two-factor model yielded significant findings in these samples. The results of multi-group analysis revealed that the theoretical structure of the dispositional Emotional Approach Coping Scale was the same for men and women. In addition to sufficient internal consistency and test-retest reliability, the relationships between the Emotional Approach Coping Scale and five conceptually related measures (coping styles, positive affect, negative affect, depression, and trait anxiety) demonstrated concurrent validity. Furthermore, the present study provides a map of emotional approach coping styles in a non-Western culture.

  18. Structured ecosystem-scale approach to marine water quality management

    CSIR Research Space (South Africa)

    Taljaard, Susan

    2006-10-01

    and implement environmental management programmes. A structured ecosystem-scale approach for the design and implementation of marine water quality management programmes developed by the CSIR (South Africa) in response to recent advances in policies...

  19. Healthy campus by open space design: Approaches and guidelines

    Directory of Open Access Journals (Sweden)

    Stephen Siu Yu Lau

    2014-12-01

This paper examines the architectural and landscape design strategies and intentions for green, open-space facilities targeting stress alleviation in learning environments such as university campuses in a compact urban setting. Literature reviews provide three prevailing perspectives for physical design pedagogical operatives: healing gardens, where greenery and plants produce restorative effects; flexible spaces that accommodate the functional needs of different activities; and green buildings that incorporate open space as a catalyst for an integrated ecosystem. Corresponding design approaches (landscape design, spatial design and green design) are scrutinized by case study. A comparison of two university campuses with different urban contexts is conducted to identify challenges and opportunities for applying these design approaches. For a compact campus, high-density surroundings may limit the size of an open space and may handicap circulation and accessibility; on the other hand, a small open space may provide its users more intimate contact with natural restorative elements and also a more controllable microclimate for physical comfort. A healthy campus should encompass diverse open spaces to satisfy different purposes. Finally, a framework that integrates the three approaches is proposed as a sustainable design rubric.

  20. A groupoid approach to spaces of generalized connections

    Science.gov (United States)

    Velhinho, J. M.

    2002-02-01

The quantum completion Ā of the space of connections in a manifold can be seen as the set of all morphisms from the groupoid of the edges of the manifold to the (compact) gauge group. This algebraic construction generalizes an analogous description of the gauge-invariant quantum configuration space Ā/G of Ashtekar and Isham, clarifying the relation between the two spaces. We present a description of the groupoid approach which brings the gauge-invariant degrees of freedom to the foreground, thus making the action of the gauge group more transparent.

  1. Interpreting large-scale redshift-space distortion measurements

    Science.gov (United States)

    Samushia, L.; Percival, W. J.; Raccanelli, A.

    2012-03-01

The simplest theory describing large-scale redshift-space distortions (RSD), based on linear theory and distant galaxies, depends on the growth of cosmological structure, suggesting that strong tests of general relativity can be constructed from galaxy surveys. As data sets become larger and the expected constraints more precise, the extent to which the RSD follow the simple theory needs to be assessed in order that we do not introduce systematic errors into the tests by introducing inaccurate simplifying assumptions. We study the impact of the sample geometry, non-linear processes and biases induced by our lack of understanding of the radial galaxy distribution on RSD measurements. Using the Large Suite of Dark Matter Simulations of the Sloan Digital Sky Survey II (SDSS-II) luminous red galaxy data, these effects are shown to be important at the level of 20 per cent. Including them, we can accurately model the recovered clustering in these mock catalogues on scales 30-200 h-1 Mpc. Applying this analysis to robustly measure parameters describing the growth history of the Universe from the SDSS-II data gives f(z = 0.25)σ8(z = 0.25) = 0.3512 ± 0.0583 and f(z = 0.37)σ8(z = 0.37) = 0.4602 ± 0.0378 when no prior is imposed on the growth rate, and the background geometry is assumed to follow a Λ cold dark matter (ΛCDM) model with the Wilkinson Microwave Anisotropy Probe (WMAP)+Type Ia supernova priors. The standard WMAP-constrained ΛCDM model with general relativity predicts f(z = 0.25)σ8(z = 0.25) = 0.4260 ± 0.0141 and f(z = 0.37)σ8(z = 0.37) = 0.4367 ± 0.0136, which is fully consistent with these measurements.
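The quoted agreement between the measured and predicted fσ8 values can be checked with a quick calculation. The sketch below treats the measurement and prediction errors as independent Gaussians (our simplifying assumption, not something stated in the abstract):

```python
import math

# Measured f(z)*sigma8(z) from the SDSS-II analysis vs. the WMAP-constrained
# LCDM + general relativity prediction, as quoted in the abstract above.
meas = {0.25: (0.3512, 0.0583), 0.37: (0.4602, 0.0378)}
pred = {0.25: (0.4260, 0.0141), 0.37: (0.4367, 0.0136)}

for z in (0.25, 0.37):
    m, sm = meas[z]
    p, sp = pred[z]
    # Assuming independent Gaussian errors, the tension in units of the
    # combined uncertainty is (meas - pred) / sqrt(sm^2 + sp^2).
    tension = (m - p) / math.hypot(sm, sp)
    print(f"z = {z}: (meas - pred) / sigma = {tension:+.2f}")
```

Both differences come out near or below 1.3σ, in line with the abstract's statement that the measurements are fully consistent with the ΛCDM prediction.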

  2. A Banach Space Regularization Approach for Multifrequency Microwave Imaging

    Directory of Open Access Journals (Sweden)

    Claudio Estatico

    2016-01-01

A method for microwave imaging of dielectric targets is proposed. It is based on a tomographic approach in which the field scattered by an unknown target (and collected in a proper observation domain) is inverted by using an inexact-Newton method developed in Lp Banach spaces. In particular, the extension of the approach to multifrequency data processing is reported. The mathematical formulation of the new method is described, and the results of numerical simulations are reported and discussed, analyzing the behavior of the multifrequency processing technique combined with the Banach-space reconstruction method.

  3. Automatic Measurement in Large-Scale Space with the Laser Theodolite and Vision Guiding Technology

    Directory of Open Access Journals (Sweden)

    Bin Wu

    2013-01-01

The multitheodolite intersection measurement is a traditional approach to coordinate measurement in large-scale space. However, the procedure of manual labeling and aiming results in a low automation level and low measuring efficiency, and the measurement accuracy is easily affected by manual aiming error. Based on traditional theodolite measuring methods, this paper introduces the mechanism of the vision measurement principle and presents a novel automatic measurement method for large-scale space and large workpieces (equipment) that combines laser theodolite measuring and vision guiding technologies. The measuring mark is established on the surface of the measured workpiece by the collimating laser, which is coaxial with the sight-axis of the theodolite, so cooperation targets or manual marks are no longer needed. With the theoretical model data and multiresolution visual imaging and tracking technology, it can realize automatic, quick, and accurate measurement of large workpieces in large-scale space. Meanwhile, the impact of artificial error is reduced and the measuring efficiency is improved. Therefore, this method has significant ramifications for the measurement of large workpieces, such as measuring the geometric appearance characteristics of ships, large aircraft, and spacecraft, and deformation monitoring for large buildings and dams.

  4. A conceptual framework for time and space scale interactions in the climate system

    Energy Technology Data Exchange (ETDEWEB)

    Meehl, G.A. [National Center for Atmospheric Research (United States); Lukas, R. [University of Hawaii (United States); Kiladis, G.N. [NOAA Aeronomy Lab (United States); Weickmann, K.M. [NOAA Climate Diagnostics Center (United States); Matthews, A.J. [University of East Anglia, Norwich (United Kingdom); Wheeler, M. [Bureau of Meteorology Research Centre (Australia)

    2001-07-01

Interactions involving various time and space scales, both within the tropics and between the tropics and midlatitudes, are ubiquitous in the climate system. We propose a conceptual framework for understanding such interactions whereby longer time scales and larger space scales set the base state for processes on shorter time scales and smaller space scales, which in turn have an influence back on the longer time scales and larger space scales in a continuum of process-related interactions. Though not intended to be comprehensive, we do cite examples from the literature to provide evidence for the validity of this framework. Decadal time scale base states of the coupled climate system set the context for the manifestation of interannual time scales (El Niño/Southern Oscillation, ENSO, and tropospheric biennial oscillation, TBO), which are influenced by and interact with the annual cycle and seasonal time scales. Those base states in turn influence the large-scale coupled processes involved with intraseasonal and submonthly time scales, tied to interactions within the tropics and extratropics, and tropical-midlatitude teleconnections. All of these set the base state for processes on the synoptic and mesoscale and regional/local space scales. Events at those relatively short time scales and small space scales may then affect the longer time scale and larger space scale processes in turn, reaching back out to submonthly, intraseasonal, seasonal, annual, TBO, ENSO and decadal. Global coupled models can capture some elements of the decadal, ENSO, TBO, annual and seasonal time scales with the associated global space scales. However, coupled models are less successful at simulating phenomena at subseasonal and shorter time scales with hemispheric and smaller space scales. In the context of the proposed conceptual framework, the synergistic interactions of the time and space scales suggest that a high priority must be placed on improved simulations of all of the time and

  5. Time and space scales for measuring urban growth

    Directory of Open Access Journals (Sweden)

    Denise Pumain

    2002-07-01

After the last two centuries of intense and unprecedented urbanization, we need a clear understanding of the ongoing trends of urban growth for a better insight into their possible future. Two main processes are analysed: first, the inter-urban concentration of population, with its consequence of a relative decline of the smallest towns; second, urban sprawl, including a shift of population density from the central part of cities towards their peripheries. Using recent data on population and employment in French urban functional areas, we show that the spatial and temporal framework in which urban growth is computed may considerably alter the results and their subsequent interpretation. When the city is defined as a continuously built-up area (the French agglomération), the measurement of concentration of urban population does not give the same result as when the spatial scale of functional urban areas (the French aires urbaines) is used as the definition. Similarly, the analysis of the spatial redistribution of population between urban centres, inner suburbs and outer suburbs, when reconsidered over longer durations, gives very different results according to the way of defining cities and spaces.

  6. Hierarchical Dynamics of Ecological Communities: Do Scales of Space and Time Match?

    Science.gov (United States)

    Angeler, David G.; Göthe, Emma; Johnson, Richard K.

    2013-01-01

    Theory posits that community dynamics organize at distinct hierarchical scales of space and time, and that the spatial and temporal patterns at each scale are commensurate. Here we use time series modeling to investigate fluctuation frequencies of species groups within invertebrate metacommunities in 26 boreal lakes over a 20-year period, and variance partitioning analysis to study whether species groups with different fluctuation patterns show spatial signals that are commensurate with the scale-specific fluctuation patterns identified. We identified two groups of invertebrates representing hierarchically organized temporal dynamics: one species group showed temporal variability at decadal scales (slow patterns of change), whilst another group showed fluctuations at 3 to 5-year intervals (faster change). This pattern was consistently found across all lakes studied. A spatial signal was evident in the slow but not faster-changing species groups. As expected, the spatial signal for the slow-changing group coincided with broad-scale spatial patterns that could be explained with historical biogeography (ecoregion delineation, and dispersal limitation assessed through a dispersal trait analysis). In addition to spatial factors, the slow-changing groups correlated with environmental variables, supporting the conjecture that boreal lakes are undergoing environmental change. Taken together our results suggest that regionally distinct sets of taxa, separated by biogeographical boundaries, responded similarly to broad-scale environmental change. Not only does our approach allow testing theory about hierarchically structured space-time patterns; more generally, it allows assessing the relative role of the ability of communities to track environmental change and dispersal constraints limiting community structure and biodiversity at macroecological scales. PMID:23874905

  7. Outlier Detection In Linear Regression Using Standart Parity Space Approach

    Science.gov (United States)

    Mustafa Durdag, Utkan; Hekimoglu, Serif

    2013-04-01

Despite all technological advancements, outliers may occur due to mistakes in engineering measurements. Before estimating the unknown parameters, such outliers must be detected and removed from the measurements. Two main kinds of outlier detection methods are used to identify outliers in a set of measurements: conventional tests based on the least squares approach (e.g., Baarda, Pope) and robust tests (e.g., Huber, Hampel). The standard parity space approach is an important model-based Fault Detection and Isolation (FDI) technique commonly used in control engineering. In this study the standard parity space method is applied to outlier detection in linear regression. Our main goal is to compare, through Monte Carlo simulation, the success of the standard parity space method with that of the conventional tests in linear regression. The least squares estimator is the most common estimator, and it minimizes the sum of squared residuals. In the standard parity space approach, to eliminate the unknown vector, the measurement vector is projected onto the left null space of the coefficient matrix. Thus the orthogonality condition of the parity vector is satisfied and only the effects of the noise vector remain. The residual vector is derived for two cases: the absence of an outlier and the occurrence of an outlier. Its likelihood function is used to determine the detection decision function for the global test. A localization decision function is calculated for each column of the parity matrix, and the measurement with the maximum value is accepted as an outlier. Results were obtained for two different intervals for the outlier generator, between 3σ and 6σ (small outliers) and between 6σ and 12σ (large outliers), when the number of unknown parameters is chosen as 2 and 3. The mean success rates (MSR) of Baarda's method are better than those of the standard parity space method when the confidence intervals are
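The projection step described above can be sketched numerically. The following is a minimal illustration with synthetic data and invented names (the global/local test thresholds of the actual method are omitted): the measurement vector is projected onto the left null space of the design matrix, which eliminates the unknown parameters, and the column of the parity matrix most aligned with the parity vector points at the suspect measurement.

```python
import numpy as np

rng = np.random.default_rng(0)

n, u = 20, 2                      # measurements, unknown parameters
A = np.column_stack([np.ones(n), np.arange(n, dtype=float)])
x_true = np.array([1.0, 0.5])
sigma = 0.1
y = A @ x_true + rng.normal(0.0, sigma, n)
y[7] += 8 * sigma                 # inject one large outlier (~8 sigma)

# Rows of W form an orthonormal basis of the left null space of A: W @ A = 0.
U, _, _ = np.linalg.svd(A, full_matrices=True)
W = U[:, u:].T                    # (n-u) x n parity matrix
p = W @ y                         # parity vector: depends only on noise/outlier

# Localization: score each measurement by how well the corresponding
# column of W explains the parity vector; the maximum flags the outlier.
scores = np.abs(W.T @ p) / np.linalg.norm(W, axis=0)
suspect = int(np.argmax(scores))
print("flagged measurement index:", suspect)   # index 7 carries the outlier
```

Because W annihilates A, the parity vector is unaffected by the unknown regression parameters, which is exactly what makes the decision functions depend on the noise (and any outlier) alone.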

  8. A simple coordinate space approach to three-body problems ...

    Indian Academy of Sciences (India)

    We show how to treat the dynamics of an asymmetric three-body system consisting of one heavy and two identical light particles in a simple coordinate space variational approach. The method is constructive and gives an efficient way of resolving a three-body system to an effective two-body system. It is illustrated by ...

  10. Predicting Space Weather Effects on Close Approach Events

    Science.gov (United States)

    Newman, L.; Besser, R.; Hejduk, M.

The NASA Robotic Conjunction Assessment Risk Analysis (CARA) team sends ephemeris data to the Joint Space Operations Center (JSpOC) for screening against the high accuracy catalog, then assesses the risk posed to protected assets from predicted close approaches. Since most spacecraft supported by the CARA team are located in LEO, atmospheric drag is a primary source of state estimate uncertainty, and drag is directly governed by space weather. At present the actual effect of space weather on atmospheric density cannot be accurately predicted because most atmospheric density models are empirical in nature. The Jacchia-Bowman-HASDM 2009 atmospheric density model used at the JSpOC employs a solar storm active compensation feature that predicts storm sizes and arrival times, and thus the resulting neutral density alterations. With this feature, estimation errors can occur in either direction (i.e., over- or under-estimation of density and thus drag), giving rise to several questions. Does a change in space weather make a close approach safer or riskier? Might performing a maneuver make the approach worse due to uncertainty in predicted location at a given time? What if there are errors in the predicted timing or magnitude of the space weather event? Although the exact effect of a solar storm on atmospheric drag cannot be determined, one can explore the effects of drag perturbations on conjuncting objects' trajectories to determine whether a conjunction can become riskier or less risky. The CARA team has constructed a Space Weather Trade-Space tool that systematically alters the drag coefficient of the conjuncting objects and recalculates the probability of collision for each case to determine the effect this is likely to have on the collision risk. In addition to a review of the theory and the particulars of the tool, all of the observed output will be explained, along with statistics of its frequency.

  11. Nanotechnologies and regenerative medical approaches for space and terrestrial medicine.

    Science.gov (United States)

    Grattoni, Alessandro; Tasciotti, Ennio; Fine, Daniel; Fernandez-Moure, Joseph S; Sakamoto, Jason; Hu, Ye; Weiner, Bradley; Ferrari, Mauro; Parazynski, Scott

    2012-11-01

One purpose of the International Space Station (ISS) is to explore powerful new areas of biomedical science in microgravity. Recent advances in nanotechnology applied to medicine (what we now refer to as nano-medicine) and regenerative medicine have enormous untapped potential for future space and terrestrial medical applications. Novel means for drug delivery and nanoscale screening tools will one day benefit astronauts venturing to Mars and places beyond, while the space laboratory will foster advances in nanotechnologies for diagnostic and therapeutic tools to help our patients here on Earth. Herein we review a series of nanotechnologies and selected regenerative medical approaches and highlight key areas of ongoing and future investigation that will benefit both space and terrestrial medicine. These studies target significant areas of human disease such as osteoporosis, diabetes, radiation injury, and many others.

  12. Alternative approaches to space-based power generation

    Science.gov (United States)

    Gregory, D. L.

    1977-01-01

    Satellite Power Stations (SPS) would generate electrical power in space for terrestrial use. Their geosynchronous orbit location permits continuous microwave power transmission to ground receiving antenna farms. Eight approaches to the generation of the electrical power to be transmitted were investigated. Configurations implementing these approaches were developed through an optimization process intended to yield the lowest cost for each. A complete program was baselined for each approach, identifying required production rates, quantities of launches, required facilities, etc. Each program was costed, including the associated launches, orbital assembly, and maintenance operations. The required electric power charges to amortize these costs were calculated. They range from 26 to 82 mills/kWh (ground busbar).

  13. Bayesian LASSO, scale space and decision making in association genetics.

    Science.gov (United States)

    Pasanen, Leena; Holmström, Lasse; Sillanpää, Mikko J

    2015-01-01

LASSO is a penalized regression method that facilitates model fitting in situations where there are as many explanatory variables as observations, or even more, and only a few variables are relevant in explaining the data. We focus on the Bayesian version of LASSO and consider four problems that need special attention: (i) controlling false positives, (ii) multiple comparisons, (iii) collinearity among explanatory variables, and (iv) the choice of the tuning parameter that controls the amount of shrinkage and the sparsity of the estimates. The particular application considered is association genetics, where LASSO regression can be used to find links between chromosome locations and phenotypic traits in a biological organism. However, the proposed techniques are relevant also in other contexts where LASSO is used for variable selection. We separate the true associations from false positives using the posterior distribution of the effects (regression coefficients) provided by Bayesian LASSO. We propose to solve the multiple comparisons problem by using simultaneous inference based on the joint posterior distribution of the effects. Bayesian LASSO also tends to distribute an effect among collinear variables, making detection of an association difficult. We propose to solve this problem by considering not only individual effects but also their functionals (i.e., sums and differences). Finally, whereas in Bayesian LASSO the tuning parameter is often regarded as a random variable, we adopt a scale space view and instead consider a whole range of fixed tuning parameters. The effect estimates and the associated inference are considered for all tuning parameters in the selected range, and the results are visualized with color maps that provide useful insights into the data and the association problem considered. The methods are illustrated using two sets of artificial data and one real data set, all representing typical settings in association genetics.
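The scale-space idea of scanning a whole range of fixed tuning parameters can be illustrated with a plain frequentist LASSO in place of the authors' Bayesian version. The sketch below (synthetic data, illustrative names) fits a coordinate-descent LASSO over a grid of tuning values and asks which variables persist across scales:

```python
import numpy as np

def lasso_cd(X, y, alpha, n_iter=200):
    """Coordinate descent for min_b 0.5/n * ||y - X b||^2 + alpha * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    resid = y - X @ b
    for _ in range(n_iter):
        for j in range(p):
            rho = X[:, j] @ (resid + X[:, j] * b[j]) / n
            new_bj = np.sign(rho) * max(abs(rho) - alpha, 0.0) / col_sq[j]
            resid += X[:, j] * (b[j] - new_bj)
            b[j] = new_bj
    return b

rng = np.random.default_rng(1)
n, p = 80, 40
X = rng.normal(size=(n, p))
true_idx = [3, 17, 29]                 # only three relevant "loci"
beta = np.zeros(p)
beta[true_idx] = [2.0, -1.5, 1.0]
y = X @ beta + rng.normal(0.0, 0.5, n)

alphas = np.geomspace(0.01, 1.0, 8)    # the scale-space range of tuning values
paths = np.array([lasso_cd(X, y, a) for a in alphas])   # (n_alphas, p)
persistence = (np.abs(paths) > 1e-6).sum(axis=0)
print("most persistent variables:",
      sorted(np.argsort(persistence)[-3:].tolist()))
```

Variables whose effects survive across many tuning values are the stable signals; in the paper this per-scale information is what gets visualized with color maps rather than collapsed to a single chosen tuning parameter.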

  14. The scaling of green space coverage in European cities.

    Science.gov (United States)

    Fuller, Richard A; Gaston, Kevin J

    2009-06-23

Most people on the planet live in dense aggregations, and policy directives emphasize green areas within cities to ameliorate some of the problems of urban living. Benefits of urban green spaces range from physical and psychological health to social cohesion, ecosystem service provision and biodiversity conservation. Green space coverage differs enormously among cities, yet little is known about the correlates or geography of this variation. This is important because urbanization is accelerating and the consequences for green space are unclear. Here, we use standardized major axis regression to explore the relationships between urban green space coverage, city area and population size across 386 European cities. We show that green space coverage increases more rapidly than city area, yet declines only weakly as human population density increases. Thus, green space provision within a city is primarily related to city area rather than the number of inhabitants that it serves, or a simple space-filling effect. Consequently, compact cities (small size and high density) show very low per capita green space allocation. However, at high levels of urbanicity, the green space network is robust to further city compaction. As cities grow, interactions between people and nature depend increasingly on landscape quality outside formal green space networks, such as street plantings, or the size, composition and management of backyards and gardens.
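On log-log axes, the standardized major axis (SMA) slope used for allometric relations like the one above reduces to sign(r) · s_y / s_x. The sketch below applies this to synthetic data generated to scale super-linearly (slope ≈ 1.2), standing in for the 386-city data set, which we do not have:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical log10 city areas and log10 green-space areas for 386 "cities",
# generated so green space grows faster than city area (true slope 1.2).
log_city_area = rng.uniform(0.0, 4.0, 386)
log_green = 1.2 * log_city_area - 1.0 + rng.normal(0.0, 0.15, 386)

# SMA (reduced major axis) fit: slope = sign(r) * sd(y) / sd(x).
r = np.corrcoef(log_city_area, log_green)[0, 1]
slope = np.sign(r) * log_green.std(ddof=1) / log_city_area.std(ddof=1)
intercept = log_green.mean() - slope * log_city_area.mean()
print(f"SMA slope = {slope:.2f} (>1 means green space outpaces city area)")
```

Unlike ordinary least squares, SMA treats both axes symmetrically, which is why it is preferred when neither variable is a controlled predictor, as with city area and green space coverage.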

  15. The Lesbian Internalized Homophobia Scale: a rational/theoretical approach.

    Science.gov (United States)

    Szymanski, D M; Chung, Y B

    2001-01-01

    This article reports the development and psychometric properties of a new scale that measures internalized homophobia in lesbians: the Lesbian Internalized Homophobia Scale (LIHS). This 52-item measure was developed using a rational/theoretical approach of test construction and includes five subscales. Research findings, based on a sample of 303 female participants, supported the reliability and validity of the LIHS in assessing internalized homophobia in lesbians. Implications for research and practice are discussed.

  16. Conceptual Design and Demonstration of Space Scale for Measuring Mass in Microgravity Environment

    Science.gov (United States)

    Kim, Youn-Kyu; Lee, Joo-Hee; Choi, Gi-Hyuk; Choi, Ik-Hyeon

    2015-12-01

In this study, a new idea for a space scale that measures mass in a microgravity environment was proposed, using the inertial force properties of an object to determine its mass. The space scale detected the momentum change of the specimen and reference masses by using a load-cell sensor as the force transducer, based on Newton's laws of motion. It then calculated the specimen mass by comparing the inertial forces of the specimen and reference masses in the same acceleration field. Using this concept, a space scale with a capacity of 3 kg based on the law of momentum conservation was implemented and demonstrated under microgravity conditions onboard the International Space Station (ISS) with an accuracy of ±1 g. The performance analysis verified that an instrument of compact size could be implemented and could measure quickly with reasonable accuracy under microgravity conditions.
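The measurement principle described above can be reduced to a toy calculation: in free fall there is no weight, but mass can still be inferred by comparing the inertial forces of a known reference mass and the specimen in the same acceleration field (F = ma). All numbers below are invented for illustration; the real instrument's calibration and dynamics are far more involved.

```python
m_ref = 1.000          # kg, known reference mass
a = 0.50               # m/s^2, common acceleration imparted by the mechanism
f_ref = m_ref * a      # N, load-cell reading for the reference mass
f_spec = 1.237         # N, simulated load-cell reading for the specimen

# Same acceleration field => masses are in the ratio of the inertial forces.
m_spec = m_ref * f_spec / f_ref
print(f"specimen mass = {m_spec:.3f} kg")
```

Because only force ratios enter, the result is independent of the local gravity level, which is what makes the comparison usable in microgravity.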

  17. Space Power Free-Piston Stirling Engine Scaling Study

    Science.gov (United States)

    Jones, D.

    1989-01-01

The design feasibility study is documented of a single-cylinder, free-piston Stirling engine/linear alternator (FPSE/LA) power module generating 150 kW-electric (kWe), along with the determination of the module's maximum feasible power level. The power module configuration was specified to be a single-cylinder (single piston, single displacer) FPSE/LA, with tuning capacitors if required. The design requirements were as follows: (1) maximum electrical power output; (2) power module thermal efficiency equal to or greater than 20 percent at a specific mass of 5 to 8 kg/kWe; (3) heater wall temperature/cooler wall temperature = 1050 K/525 K; (4) sodium heat-pipe heat transport system, pumped-loop NaK (sodium-potassium eutectic mixture) rejection system; (5) maximum power module vibration amplitude = 0.0038 cm; and (6) design life = 7 years (60,000 hr). The results show that a single-cylinder FPSE/LA is capable of meeting program goals and has attractive scaling attributes over the power range from 25 to 150 kWe. Scaling beyond the 150 kWe power level, the power module efficiency falls and the power module specific mass reaches 10 kg/kWe at a power output of 500 kWe. A discussion of scaling rules for the engine, alternator, and heat transport systems is presented, along with a detailed description of the conceptual design of a 150 kWe power module that meets the requirements. Included is a discussion of the design of a dynamic balance system. A parametric study of power module performance conducted over the power output range of 25 to 150 kWe for temperature ratios of 1.7, 2.0, 2.5, and 3.0 is presented and discussed. The results show that as the temperature ratio decreases, the efficiency falls and specific mass increases. At a temperature ratio of 1.7, the 150 kWe power module cannot satisfy both efficiency and specific mass goals. As the power level increases from 25 to 150 kWe at a fixed temperature ratio, power

  18. Design of diffractive optical elements for the fractional Fourier transform domain: phase-space approach.

    Science.gov (United States)

    Testorf, Markus

    2006-01-01

    Phase-space optics is used to relate the problem of designing diffractive optical elements for any first-order optical system to the corresponding design problem in the Fraunhofer diffraction regime. This, in particular, provides a novel approach for the fractional Fourier transform domain. For fractional Fourier transforms of arbitrary order, the diffractive element is determined as the optimum design computed for a generic Fourier transform system, scaled and modulated with a parabolic lens function. The phase-space description also identifies critical system parameters that limit the performance and applicability of this method. Numerical simulations of paraxial wave propagation are used to validate the method.

  20. A Percolation‐Based Approach to Scaling Infiltration and Evapotranspiration

    Directory of Open Access Journals (Sweden)

    Allen G. Hunt

    2017-02-01

Optimal flow paths obtained from percolation theory provide a powerful tool that can be used to characterize properties associated with flow, such as soil hydraulic conductivity, as well as other properties influenced by flow connectivity and topology. A recently proposed scaling theory for vegetation growth appeals to the tortuosity of optimal paths from percolation theory to define the spatio-temporal scaling of the root radial extent (or, equivalently, plant height). Root radial extent measures the maximum horizontal distance between a plant shoot and the root tips. We apply here the same scaling relationship to unsteady (horizontal) flow associated with plant transpiration. The pore-scale travel time is generated from the maximum flow rate under saturated conditions and a typical pore size. At the field scale, the characteristic time is interpreted as the growing season duration, and the characteristic length is derived from the measured evapotranspiration in that period. We show that the two scaling results are equivalent, and they are each in accord with observed vegetation growth limits, as well as with actual limiting transpiration values. While the conceptual approach addresses transpiration, most accessed data are for evapotranspiration. The equivalence of the two scaling approaches suggests that, if horizontal flow is the dominant pathway in plant transpiration, horizontal unsteady flow follows the same scaling relationship as root growth. We then propose a corresponding scaling relationship for vertical infiltration, a hypothesis which is amenable to testing using the infiltration results of Sharma and co-authors. This alternate treatment of unsteady vertical flow may be an effective alternative to the commonly applied method based on the diffusion of water over a continuum as governed by Richards' equation.

  1. Essential Unidimensionality Examination for Multicomponent Scales: An Interrelationship Decomposition Approach

    Science.gov (United States)

    Raykov, Tenko; Pohl, Steffi

    2013-01-01

    A procedure for examining essential unidimensionality in multicomponent measuring instruments is discussed. The method is based on an application of latent variable modeling and is concerned with the extent to which a common factor for all components of a given scale accounts for their correlations. The approach provides point and interval…

  2. A structured ecosystem-scale approach to marine water quality ...

    African Journals Online (AJOL)

    These, in turn, created the need for holistic and integrated frameworks within which to design and implement environmental management programmes. A structured ecosystem-scale approach for the design and implementation of marine water quality management programmes developed by the CSIR (South Africa) in ...

  3. Evidence of Large-Scale Quantization in Space Plasmas

    Directory of Open Access Journals (Sweden)

    George Livadiotis

    2013-03-01

    In plasmas, Debye screening structures the possible correlations between particles. We identify a phase space minimum h* in non-equilibrium space plasmas that connects the energy of particles in a Debye sphere to an equivalent wave frequency. In particular, while there is no a priori reason to expect a single value of h* across plasmas, we find a very similar value of h* ≈ (7.5 ± 2.4)×10−22 J·s using four independent methods: (1) Ulysses solar wind measurements, (2) space plasmas that typically reside in stationary states out of thermal equilibrium and spanning a broad range of physical properties, (3) an entropic limit emerging from statistical mechanics, and (4) waiting-time distributions of explosive events in space plasmas. Finding a quasi-constant value for the phase space minimum in a variety of different plasmas, similar to the classical Planck constant but 12 orders of magnitude larger, may reveal a new type of quantization in many plasmas and correlated systems more generally.

  4. Recovering a Basic Space from Issue Scales in R

    Directory of Open Access Journals (Sweden)

    Keith T. Poole

    2016-03-01

    basicspace is an R package that conducts Aldrich-McKelvey and Blackbox scaling to recover estimates of the underlying latent dimensions of issue scale data. We illustrate several applications of the package to survey data commonly used in the social sciences. Monte Carlo tests demonstrate that the procedure can recover latent dimensions and reproduce the matrix of responses at moderate levels of error and missing data.

  5. Decoupling Shoreline Behavior Over Variable Time and Space Scales

    Science.gov (United States)

    Hapke, C. J.; Plant, N. G.; Henderson, R.; Schwab, W. C.; Nelson, T. R.

    2016-12-01

    A combination of small-scale sediment transport processes and large-scale geologic, oceanographic, and morphologic processes drives shoreline change on time scales ranging from single storm events to decades. The relative importance of storm processes versus geological control on event response and long-term evolution of barrier islands is largely unknown but is important for understanding decadal-scale evolution. Here, we investigate the primary controls on shoreline response along Fire Island, NY, a 50-km long barrier island, on timescales that resolve storms and decadal variations over a period of 80 years. An empirical orthogonal function (EOF) analysis is applied to a time series of shoreline positions to identify coherent patterns of shoreline variability that can be correlated to oceanographic or geologic framework parameters, helping to identify the controlling short-term or long-term processes. The analysis shows that storm response and recovery dominate the shoreline behavior on the largest spatial scales, in the form of alternating episodes of shoreline retreat and advance with a length scale of 1 km. The shoreline response to and recovery from Hurricane Sandy is included in this EOF analysis and indicates that this historic storm is not notable or distinguishable from several other large storms of the prior decade. This suggests that Fire Island is historically, and continues to be, resilient to severe storms. A secondary mode of the EOF analysis indicates that the framework geology of the barrier island system, represented by known variations in inner shelf bathymetry, sediment availability, beach-shoreface morphology, and long-term rates of shoreline change, controls multi-decadal shoreline evolution. The geologic processes that control the long-term morphodynamics result in the ends of the island responding in opposite phase to the central portion. A third mode reveals an intermediate-scale pattern that persists over both long and short time scales.
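
    The EOF decomposition described above can be sketched as a plain SVD of the demeaned shoreline-position matrix (time × alongshore location). The synthetic data below, a uniform storm-like signal plus an opposite-phase pattern between the island ends and centre, are illustrative only, not the Fire Island survey data.

```python
import numpy as np

def eof_analysis(positions):
    """EOF (PCA) decomposition of a shoreline-position matrix.

    positions: array of shape (n_times, n_locations), e.g. cross-shore
    shoreline position sampled through time at alongshore transects.
    Returns spatial modes (rows of Vt), temporal amplitudes, and the
    fraction of variance explained by each mode.
    """
    anomalies = positions - positions.mean(axis=0)   # remove time-mean shoreline
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    spatial_modes = Vt                               # rows: alongshore patterns
    amplitudes = U * s                               # columns: temporal weights
    variance_fraction = s**2 / np.sum(s**2)
    return spatial_modes, amplitudes, variance_fraction

# Synthetic demo: a storm-driven uniform retreat/advance signal plus an
# opposite-phase pattern between the island ends and its centre.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 120)                 # ~a decade of surveys
x = np.linspace(0, 50, 200)                 # 50 km of alongshore transects
storm = np.outer(np.sin(2 * np.pi * t), np.ones_like(x))
geologic = np.outer(0.3 * t, np.cos(2 * np.pi * x / 50))
shoreline = storm + geologic + 0.05 * rng.standard_normal((t.size, x.size))

modes, amps, varfrac = eof_analysis(shoreline)
print(varfrac[:2])   # the two imposed signals dominate the variance
```

    Because `amps @ modes` reconstructs the anomaly matrix exactly, truncating to the leading modes gives the coherent patterns that can then be correlated against storm or framework-geology parameters.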

  6. Quantitative assessment of thermodynamic constraints on the solution space of genome-scale metabolic models.

    Science.gov (United States)

    Hamilton, Joshua J; Dwivedi, Vivek; Reed, Jennifer L

    2013-07-16

    Constraint-based methods provide powerful computational techniques to allow understanding and prediction of cellular behavior. These methods rely on physiochemical constraints to eliminate infeasible behaviors from the space of available behaviors. One such constraint is thermodynamic feasibility, the requirement that intracellular flux distributions obey the laws of thermodynamics. The past decade has seen several constraint-based methods that interpret this constraint in different ways, including those that are limited to small networks, rely on predefined reaction directions, and/or neglect the relationship between reaction free energies and metabolite concentrations. In this work, we utilize one such approach, thermodynamics-based metabolic flux analysis (TMFA), to make genome-scale, quantitative predictions about metabolite concentrations and reaction free energies in the absence of prior knowledge of reaction directions, while accounting for uncertainties in thermodynamic estimates. We applied TMFA to a genome-scale network reconstruction of Escherichia coli and examined the effect of thermodynamic constraints on the flux space. We also assessed the predictive performance of TMFA against gene essentiality and quantitative metabolomics data, under both aerobic and anaerobic, and optimal and suboptimal growth conditions. Based on these results, we propose that TMFA is a useful tool for validating phenotypes and generating hypotheses, and that additional types of data and constraints can improve predictions of metabolite concentrations. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.
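
    The thermodynamic feasibility test at the heart of TMFA can be illustrated on a toy reaction: a direction may carry flux only if the transformed reaction free energy can be negative for some metabolite concentrations within bounds. The ΔG′° value and concentration ranges below are illustrative assumptions, not the E. coli network data used in the paper.

```python
import numpy as np

R = 8.314e-3   # gas constant, kJ/(mol*K)
T = 298.15     # temperature, K

def dG_range(dG0, stoich, conc_bounds):
    """Feasible range of a reaction's transformed free energy,
    dG = dG0 + RT * sum_i s_i * ln(c_i), over metabolite concentration
    bounds (in M). stoich: {metabolite: coefficient}, negative = substrate.
    The minimum uses low product / high substrate concentrations; the
    maximum uses the opposite corner of the concentration box."""
    lo = dG0 + R * T * sum(
        s * np.log(conc_bounds[m][0] if s > 0 else conc_bounds[m][1])
        for m, s in stoich.items())
    hi = dG0 + R * T * sum(
        s * np.log(conc_bounds[m][1] if s > 0 else conc_bounds[m][0])
        for m, s in stoich.items())
    return lo, hi

# Toy reaction A -> B with an illustrative standard free energy of +5 kJ/mol
# and concentration bounds spanning 1 uM to 10 mM for both metabolites.
bounds = {"A": (1e-6, 1e-2), "B": (1e-6, 1e-2)}
lo, hi = dG_range(5.0, {"A": -1, "B": 1}, bounds)
# Forward flux is thermodynamically admissible only if dG can be negative;
# here both dG < 0 and dG > 0 are reachable, so neither direction is excluded.
print(lo < 0, hi > 0)
```

    In TMFA proper, this feasibility check is embedded in a mixed-integer program with binary direction variables, so that flux directions, metabolite concentrations, and free energies are determined jointly rather than reaction by reaction.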

  8. A Principled Approach to the Specification of System Architectures for Space Missions

    Science.gov (United States)

    McKelvin, Mark L. Jr.; Castillo, Robert; Bonanne, Kevin; Bonnici, Michael; Cox, Brian; Gibson, Corrina; Leon, Juan P.; Gomez-Mustafa, Jose; Jimenez, Alejandro; Madni, Azad

    2015-01-01

    Modern space systems are increasing in complexity and scale at an unprecedented pace. Consequently, innovative methods, processes, and tools are needed to cope with the increasing complexity of architecting these systems. A key systems challenge in practice is the ability to scale processes, methods, and tools used to architect complex space systems. Traditionally, the process for specifying space system architectures has largely relied on capturing the system architecture in informal descriptions that are often embedded within loosely coupled design documents and domain expertise. Such informal descriptions often lead to misunderstandings between design teams, ambiguous specifications, difficulty in maintaining consistency as the architecture evolves throughout the system development life cycle, and costly design iterations. Therefore, traditional methods are becoming increasingly inefficient to cope with ever-increasing system complexity. We apply the principles of component-based design and platform-based design to the development of the system architecture for a practical space system to demonstrate feasibility of our approach using SysML. Our results show that we are able to apply a systematic design method to manage system complexity, thus enabling effective data management, semantic coherence and traceability across different levels of abstraction in the design chain. Just as important, our approach enables interoperability among heterogeneous tools in a concurrent engineering model based design environment.

  9. Conceptual design of jewellery: a space-based aesthetics approach

    Directory of Open Access Journals (Sweden)

    Tzintzi Vaia

    2017-01-01

    Conceptual design is a field that offers various aesthetic approaches to the generation of nature-based product design concepts. Essentially, Conceptual Product Design (CPD) uses similarities based on geometrical forms and functionalities. Furthermore, the CAD-based freehand sketch is a primary conceptual tool in the early stages of the design process. The proposed Conceptual Product Design concept deals with jewellery inspired by space. Specifically, a number of galaxy features, such as galaxy shapes, wormholes and graphical representations of planetary magnetic fields, are used as inspirations. Those space-based design ideas at a conceptual level can lead to further opportunities for research and economic success of the jewellery industry. A number of illustrative case studies are presented, from which new opportunities can be derived for economic success.

  10. Asteroid Redirect Mission Concept: A Bold Approach for Utilizing Space Resources

    Science.gov (United States)

    Mazanek, Daniel D.; Merrill, Raymond G.; Brophy, John R.; Mueller, Robert P.

    2014-01-01

    The utilization of natural resources from asteroids is an idea that is older than the Space Age. The technologies are now available to transform this endeavour from an idea into reality. The Asteroid Redirect Mission (ARM) is a mission concept which includes the goal of robotically returning a small Near-Earth Asteroid (NEA) or a multi-ton boulder from a large NEA to cislunar space in the mid-2020s using an advanced Solar Electric Propulsion (SEP) vehicle and currently available technologies. The paradigm shift enabled by the ARM concept would allow in-situ resource utilization (ISRU) to be used at the human mission departure location (i.e., cislunar space) versus exclusively at the deep-space mission destination. This approach drastically reduces the barriers associated with utilizing ISRU for human deep-space missions. The successful testing of ISRU techniques and associated equipment could enable large-scale commercial ISRU operations to become a reality and enable a future space-based economy utilizing processed asteroidal materials. This paper provides an overview of the ARM concept and discusses the mission objectives, key technologies, and capabilities associated with the mission, as well as how the ARM and associated operations would benefit humanity's quest for the exploration and settlement of space.

  11. Hybrid x-space: a new approach for MPI reconstruction.

    Science.gov (United States)

    Tateo, A; Iurino, A; Settanni, G; Andrisani, A; Stifanelli, P F; Larizza, P; Mazzia, F; Mininni, R M; Tangaro, S; Bellotti, R

    2016-06-07

    Magnetic particle imaging (MPI) is a new medical imaging technique capable of recovering the distribution of superparamagnetic particles from their measured induced signals. In the literature there are two main MPI reconstruction techniques: measurement-based (MB) and x-space (XS). The MB method is expensive because it requires a long calibration procedure as well as a reconstruction phase that can be numerically costly. On the other hand, the XS method is simpler than MB, but exact knowledge of the field free point (FFP) motion is essential for its implementation. Our simulation work focuses on the implementation of a new approach for MPI reconstruction, called hybrid x-space (HXS), which combines the previous methods. Specifically, our approach is based on XS reconstruction because it requires the knowledge of the FFP position and velocity at each time instant. The difference with respect to the original XS formulation is how the FFP velocity is computed: we estimate it from the experimental measurements of the calibration scans, typical of the MB approach. Moreover, a compressive sensing technique is applied in order to reduce the calibration time, using a smaller number of sampling positions. Simulations highlight that the HXS and XS methods give similar results. Furthermore, an appropriate use of compressive sensing is crucial for obtaining a good balance between time reduction and reconstructed image quality. Our proposal is suitable for open geometry configurations of human-size devices, where incidental factors could make the currents, the fields and the FFP trajectory irregular.

  12. Interrelations between the perception of time and space in large-scale environments.

    Science.gov (United States)

    Riemer, Martin; Hölzl, Rupert; Kleinböhl, Dieter

    2014-04-01

    Interactions between perceived temporal and spatial properties of external stimuli (e.g. duration and size) suggest common neural mechanisms underlying the perception of time and space. This conclusion, however, lacks support from studies in large-scale environments, showing that judgements on travelled distances and associated travel times are independent from each other. Here, we used a different approach to test whether the perception of travelled distances is influenced by the perception of time. Unlike previous studies, in which temporal and spatial judgements were related to the same experience of walking, we assessed time and distance perception in analogous, but separate versions of estimation and production tasks. In estimation tasks, participants estimated the duration of a presented sound (time) or the length of a travelled distance (space), and in production tasks, participants terminated a sound after a numerically specified duration (time) or covered a numerically specified distance (space). The results show systematic overestimation of time and underestimation of travelled distance, and the latter reflecting previously reported misperceptions of visual distance. Time and distance judgements were related within individuals for production, but not for estimation tasks. These results suggest that temporal information might constitute a probabilistic cue for path integration.

  13. A brain MRI bias field correction method created in the Gaussian multi-scale space

    Science.gov (United States)

    Chen, Mingsheng; Qin, Mingxin

    2017-07-01

    A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by convolution of the inhomogeneous MR image with a two-dimensional Gaussian function. In this multi-scale space, the method retrieves the image details from the difference between the original image and the convolved image. It then obtains an image whose inhomogeneity is eliminated by the weighted sum of the image details in each layer of the space. Next, the bias field-corrected MR image is retrieved after a γ (gamma) correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
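
    The pipeline described above can be sketched in a few lines: at each scale, the difference between the image and its Gaussian-smoothed version retains detail while discarding the slowly varying bias field; a weighted sum of these detail layers is then gamma-corrected. The scale list, uniform weights and gamma value here are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_bias_field(image, sigmas=(2, 4, 8, 16), weights=None, gamma=0.8):
    """Bias-field correction sketch in a Gaussian multi-scale space.

    Each detail layer is the image minus its Gaussian-smoothed version at
    one scale; the weighted sum of layers suppresses the smooth bias field,
    and a gamma correction restores contrast.  Parameters are illustrative.
    """
    img = image.astype(float)
    if weights is None:
        weights = np.ones(len(sigmas)) / len(sigmas)   # uniform layer weights
    details = [img - gaussian_filter(img, s) for s in sigmas]
    combined = sum(w * d for w, d in zip(weights, details))
    combined -= combined.min()                          # shift to non-negative
    if combined.max() > 0:
        combined /= combined.max()                      # normalise to [0, 1]
    return combined ** gamma                            # gamma correction

# Synthetic demo: a flat disc phantom corrupted by a smooth multiplicative bias.
y, x = np.mgrid[0:64, 0:64]
phantom = np.where((x - 32) ** 2 + (y - 32) ** 2 < 400, 1.0, 0.2)
bias = 1.0 + 0.5 * x / 64                               # left-to-right shading
corrected = correct_bias_field(phantom * bias)
```

    Because the Gaussian smoothing nearly reproduces the slowly varying bias field at each scale, the differencing cancels it, which is the intuition behind using the multi-scale space rather than a single filter width.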

  14. Verification of Space Weather Forecasts using Terrestrial Weather Approaches

    Science.gov (United States)

    Henley, E.; Murray, S.; Pope, E.; Stephenson, D.; Sharpe, M.; Bingham, S.; Jackson, D.

    2015-12-01

    The Met Office Space Weather Operations Centre (MOSWOC) provides a range of 24/7 operational space weather forecasts, alerts, and warnings, which provide valuable information on space weather that can degrade electricity grids, radio communications, and satellite electronics. Forecasts issued include arrival times of coronal mass ejections (CMEs), and probabilistic forecasts for flares, geomagnetic storm indices, and energetic particle fluxes and fluences. These forecasts are produced twice daily using a combination of output from models such as Enlil, near-real-time observations, and forecaster experience. Verification of forecasts is crucial for users, researchers, and forecasters to understand the strengths and limitations of forecasts, and to assess forecaster added value. To this end, the Met Office (in collaboration with Exeter University) has been adapting verification techniques from terrestrial weather, and has been working closely with the International Space Environment Service (ISES) to standardise verification procedures. We will present the results of part of this work, analysing forecast and observed CME arrival times and assessing skill using 2x2 contingency tables. These MOSWOC forecasts can be objectively compared to those produced by the NASA Community Coordinated Modeling Center, a useful benchmark. This approach cannot be taken for the other forecasts, as they are probabilistic and categorical (e.g., geomagnetic storm forecasts give probabilities of exceeding levels from minor to extreme). We will present appropriate verification techniques being developed to address these forecasts, such as the rank probability skill score, and comparisons of forecasts against climatology and persistence benchmarks. As part of this, we will outline the use of discrete-time Markov chains to assess and improve the performance of our geomagnetic storm forecasts.
We will also discuss work to adapt a terrestrial verification visualisation system to space weather, to help
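
    The 2x2 contingency-table assessment mentioned above reduces to a few standard yes/no-event scores. The counts below are hypothetical for illustration, not MOSWOC verification statistics.

```python
def contingency_scores(hits, misses, false_alarms, correct_negatives):
    """Standard skill metrics from a 2x2 forecast contingency table,
    as used for yes/no event forecasts such as CME arrival."""
    n = hits + misses + false_alarms + correct_negatives
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    # Heidke skill score: fraction of correct forecasts beyond random chance.
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_negatives + misses)
                * (correct_negatives + false_alarms)) / n
    hss = (hits + correct_negatives - expected) / (n - expected)
    return pod, far, hss

# Hypothetical verification counts over a set of CME arrival forecasts.
pod, far, hss = contingency_scores(hits=20, misses=5, false_alarms=10,
                                   correct_negatives=65)
print(f"POD={pod:.2f} FAR={far:.2f} HSS={hss:.2f}")
```

    The Heidke score is the natural headline number here because, unlike raw accuracy, it discounts the correct forecasts a no-skill (chance) forecaster would also get.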

  15. Predicting Space Weather Effects on Close Approach Events

    Science.gov (United States)

    Hejduk, Matthew D.; Newman, Lauri K.; Besser, Rebecca L.; Pachura, Daniel A.

    2015-01-01

    The NASA Robotic Conjunction Assessment Risk Analysis (CARA) team sends ephemeris data to the Joint Space Operations Center (JSpOC) for conjunction assessment screening against the JSpOC high accuracy catalog and then assesses risk posed to protected assets from predicted close approaches. Since most spacecraft supported by the CARA team are located in LEO orbits, atmospheric drag is the primary source of state estimate uncertainty. Drag magnitude and uncertainty are directly governed by atmospheric density and thus space weather. At present, the actual effect of space weather on atmospheric density cannot be accurately predicted because most atmospheric density models are empirical in nature and do not perform well in prediction. The Jacchia-Bowman-HASDM 2009 (JBH09) atmospheric density model used at the JSpOC employs a solar storm active compensation feature that predicts storm sizes and arrival times and thus the resulting neutral density alterations. With this feature, estimation errors can occur in either direction (i.e., over- or under-estimation of density and thus drag). Although the exact effect of a solar storm on atmospheric drag cannot be determined, one can explore the effects of JBH09 model error on conjuncting objects' trajectories to determine if a conjunction is likely to become riskier, less risky, or pass unaffected. The CARA team has constructed a Space Weather Trade-Space tool that systematically alters the drag situation for the conjuncting objects and recalculates the probability of collision for each case to determine the range of possible effects on the collision risk. In addition to a review of the theory and the particulars of the tool, the different types of observed output will be explained, along with statistics of their frequency.

  16. Nuclear disassembly time scales using space time correlations

    Energy Technology Data Exchange (ETDEWEB)

    Durand, D.; Colin, J.; Lecolley, J.F.; Meslin, C.; Aboufirassi, M.; Bougault, R.; Brou, R. [Caen Univ., 14 (France). Lab. de Physique Corpusculaire; Bilwes, B.; Cosmo, F. [Strasbourg-1 Univ., 67 (France); Galin, J. [Grand Accelerateur National d`Ions Lourds (GANIL), 14 - Caen (France); and others

    1996-09-01

    The lifetime, τ, with respect to multifragmentation of highly excited nuclei is deduced from the analysis of strongly damped Pb+Au collisions at 29 MeV/u. The method is based on the study of space-time correlations induced by 'proximity' effects between fragments emitted by the two primary products of the reaction, and gives the time between the re-separation of the two primary products and the subsequent multifragment decay of one partner. (author). 2 refs.

  17. Rotating space elevators: Physics of celestial scale spinning strings

    Science.gov (United States)

    Knudsen, Steven; Golubović, Leonardo

    2014-11-01

    We explore classical and statistical mechanics of a novel dynamical system, the Rotating Space Elevator (RSE) (L. Golubović, S. Knudsen, EPL 86, 34001 (2009)). The RSE is a double rotating floppy string reaching extraterrestrial locations. Objects sliding along the RSE string (climbers) do not require internal engines or propulsion to be transported far away from the Earth's surface. The RSE thus solves a major problem in space elevator science, which is how to supply energy to the climbers moving along space elevator strings. The RSE can be made in various shapes that are stabilized by an approximate equilibrium between the gravitational and inertial forces acting in a double rotating frame associated with the RSE. This dynamical equilibrium is achieved by a special ("magical") form of the RSE mass line density derived in this paper. The RSE exhibits a variety of interesting dynamical phenomena explored here by numerical simulations. Thanks to its special design, the RSE exhibits everlasting double rotating motion. Under some conditions, however, we find that the RSE may undergo a morphological transition to a chaotic state reminiscent of fluctuating directed polymers in the realm of the statistical physics of strings and membranes.

  18. The deep structure of Gaussian scale space images

    NARCIS (Netherlands)

    Kuijper, Arjan

    2002-01-01

    In order to be able to deal with the discrete nature of images in a continuous way, one can use results from the mathematical field of 'distribution theory'. Under almost trivial assumptions, like 'we know nothing', one ends up convolving the image with a Gaussian filter. In this manner scale is

  19. Mapping Playgrids for Learning across Space, Time, and Scale

    Science.gov (United States)

    Hollett, Ty; Kalir, Jeremiah H.

    2017-01-01

    In this article, we analyze the production of learner-generated playgrids. Playgrids are produced when learners knit together social media tools to participate across settings and scales, accomplish their goals, pursue interests, and make their learning more enjoyable and personally meaningful. Through case study methodology we examine how two…

  20. Autonomous space target recognition and tracking approach using star sensors based on a Kalman filter.

    Science.gov (United States)

    Ye, Tao; Zhou, Fuqiang

    2015-04-10

    When imaged by detectors, space targets (including satellites and debris) and background stars have similar point-spread functions, and both objects appear to change as detectors track targets. Therefore, traditional tracking methods cannot separate targets from stars and cannot directly recognize targets in 2D images. Consequently, we propose an autonomous space target recognition and tracking approach using a star sensor technique and a Kalman filter (KF). A two-step method for subpixel-scale detection of star objects (including stars and targets) is developed, and the combination of the star sensor technique and a KF is used to track targets. The experimental results show that the proposed method is adequate for autonomously recognizing and tracking space targets.
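
    As a rough illustration of the tracking half of such an approach, a minimal constant-velocity Kalman filter over 2D image coordinates is sketched below. The state model, noise levels, and synthetic drift rate are assumptions for the example, not the authors' sensor parameters; the idea is that a background star keeps near-zero apparent velocity in the sensor frame while a target drifts, so the filtered velocity separates the two.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter for a point object's 2D image position.

    State: [x, y, vx, vy].  measurements: sequence of noisy (x, y) centroids.
    Process noise q and measurement noise r are illustrative values.
    """
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                        # position += velocity * dt
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0  # we observe position only
    Q = q * np.eye(4)
    R = r * np.eye(2)
    x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
    P = np.eye(4)
    track = []
    for z in measurements:
        x = F @ x                                 # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                       # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
        x = x + K @ (np.asarray(z) - H @ x)       # update with measurement
        P = (np.eye(4) - K @ H) @ P
        track.append(x.copy())
    return np.array(track)

# Synthetic target drifting at 0.5 px/frame with 0.5 px centroiding noise.
rng = np.random.default_rng(1)
truth = np.stack([10 + 0.5 * np.arange(50), 20 + 0.5 * np.arange(50)], axis=1)
meas = truth + 0.5 * rng.standard_normal(truth.shape)
est = kalman_track(meas)
print(est[-1, 2:])   # filtered velocity estimate
```

    A star fed through the same filter would settle to a near-zero velocity estimate, giving a simple discriminant between catalog stars and moving targets.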

  1. Small-Scale Mechanical Characterization of Space-Exposed Fluorinated Ethylene Propylene Recovered from the Hubble Space Telescope

    Science.gov (United States)

    Jones, J. S.; Sharon, J. A.; Mohammed, J.; Hemker, K. J.

    2012-01-01

    Multi-layer insulation panels from the Hubble Space Telescope have been recovered after 19.1 years of on-orbit service, and micro-tensile experiments have been performed to characterize the effect of space exposure on the mechanical response of the outermost layer. This outer layer, 127 μm thick fluorinated ethylene propylene with a 100 nm thick vapor-deposited aluminum reflective coating, maintained significant tensile ductility but exhibited a degradation of strength that scales with severity of space exposure. This change in properties is attributed to damage from incident solar flux, atomic oxygen damage, and thermal cycling.

  2. Urban green spaces assessment approach to health, safety and environment

    Directory of Open Access Journals (Sweden)

    B. Akbari Neisiani

    2016-04-01

    The city is a living, dynamic system in which parks and urban green spaces have high strategic importance, helping to improve living conditions. Urban parks serve as visual landscape and provide many benefits, such as reducing stress, reducing air pollution and producing oxygen, creating opportunities for people to participate in physical activities, offering an optimal environment for children, and decreasing noise pollution. Parks are so important that they are discussed as an indicator of urban development. Accordingly, the design and maintenance of urban green spaces requires an integrated management system based on international standards of health, safety and environment. In this study, Nezami Ganjavi Park (District 6 of Tehran) has been analyzed with an integrated management systems approach. To identify the status of the park with respect to the requirements of the management system, a checklist was prepared on the basis of previous studies and the Tehran Municipality's considerations, and completed through a park survey and interviews with green space experts. The results showed that the utility of the health indicators was 92.33 % (the highest), while the environmental and safety indicators were 72 % and 84 %, respectively. According to a SWOT analysis of Nezami Ganjavi Park, strengths include fire extinguishers, first aid boxes and annual testing of drinking water, while an important weakness is the use of unseparated trash bins; as an opportunity, the park offers attractive features for children and parents spending their free time. Finally, the most important threat is park facilities that are unsuitable for the disabled.

  3. Multi-scale modeling of Earth's gravity field in space and time

    Science.gov (United States)

    Wang, Shuo; Panet, Isabelle; Ramillien, Guillaume; Guilloux, Frédéric

    2017-05-01

    Since 2002, the GRACE mission has been providing an unprecedented view of the Earth's gravity field spatial and temporal variations. The gravity field models built from these satellite data are essential in order to study the mass redistributions within the Earth system. Often, they are modelled using spatial functions, such as spherical harmonics, averaged over a fixed time window. However, the satellite sampling naturally leads to a trade-off between the achievable spatial and temporal resolutions. In addition, the gravity variations are made of local components in space and time, reflecting the superimposition of sources. With the final aim to better estimate gravity variations related to local processes at different spatial and temporal scales, and adapt the temporal resolution of the model to its spatial resolution, we present an attempt at 4D gravity field modelling using localized functions in space and time. For that, we develop a four-dimensional wavelet basis, well localized in space and time and orthogonal in time. We then analyze the inverse problem of 4D gravity field estimation from GRACE synthetic inter-satellites potential differences along the orbit, and its regularization in a Bayesian framework, using a prior knowledge on the mass sources. We then test our approach in a simplified synthetic test setting, where only one mass source is present: hydrological mass variations over Africa during the year 2005. Applying a purely regional approach, we are able to reconstruct, regionally, the water height signal with a ≈2.5 cm accuracy at 450 km, 21 days resolution. We test the influence of the geophysical prior on this result, and conclude that it cannot explain alone the residuals between original and reconstructed mass signal. 
In contrast, an ideal test case with a perfect adequacy between the 4D basis and the synthetic data, without approximations or regularization in solving the normal system, leads to a significantly improved reconstruction of

  4. Comparison of two Minkowski-space approaches to heavy quarkonia

    Energy Technology Data Exchange (ETDEWEB)

    Leitao, Sofia; Biernat, Elmar P. [Universidade de Lisboa, CFTP, Instituto Superior Tecnico, Lisbon (Portugal); Li, Yang [Iowa State University, Department of Physics and Astronomy, Ames, IA (United States); College of William and Mary, Department of Physics, Williamsburg, VA (United States); Maris, Pieter; Vary, James P. [Iowa State University, Department of Physics and Astronomy, Ames, IA (United States); Pena, M.T. [Universidade de Lisboa, CFTP, Instituto Superior Tecnico, Lisbon (Portugal); Universidade de Lisboa, Departamento de Fisica, Instituto Superior Tecnico, Lisbon (Portugal); Stadler, Alfred [Universidade de Lisboa, CFTP, Instituto Superior Tecnico, Lisbon (Portugal); Universidade de Evora, Departamento de Fisica, Evora (Portugal)

    2017-10-15

    In this work we compare mass spectra and decay constants obtained from two recent, independent, and fully relativistic approaches to the quarkonium bound-state problem: the Basis Light-Front Quantization approach, where light-front wave functions are naturally formulated, and the Covariant Spectator Theory (CST), based on a reorganization of the Bethe-Salpeter equation. Even though conceptually different, both solutions are obtained in Minkowski space. Comparisons of decay constants for more than ten states of charmonium and bottomonium show favorable agreement between the two approaches as well as with experiment where available. We also apply the Brodsky-Huang-Lepage prescription to convert the CST amplitudes into functions of light-front variables. This provides an ideal opportunity to investigate the similarities and differences at the level of the wave functions. Several qualitative features are observed in remarkable agreement between the two approaches, even for the rarely addressed excited states. Leading-twist distribution amplitudes as well as parton distribution functions of heavy quarkonia are also analyzed. (orig.)

  5. Cancer systems biology and modeling: microscopic scale and multiscale approaches.

    Science.gov (United States)

    Masoudi-Nejad, Ali; Bidkhori, Gholamreza; Hosseini Ashtiani, Saman; Najafi, Ali; Bozorgmehr, Joseph H; Wang, Edwin

    2015-02-01

    Cancer has become known as a complex and systemic disease on macroscopic, mesoscopic and microscopic scales. Systems biology employs state-of-the-art computational theories and high-throughput experimental data to model and simulate complex biological processes such as cancer, which involves genetic and epigenetic factors in addition to complex intracellular and extracellular interaction networks. In this paper, different systems biology modeling techniques, such as systems of differential equations, stochastic methods, Boolean networks, Petri nets, cellular automata and agent-based systems, are concisely discussed. We compare these formalisms and address the span of applicability they bear on emerging cancer modeling and simulation approaches. Different scales of cancer modeling, namely microscopic, mesoscopic and macroscopic, are explained, followed by an illustration of angiogenesis at the microscopic scale. The modeling of cancer cell proliferation and survival is then examined at the microscopic scale, and the modeling of multiscale tumor growth is explained along with its advantages. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. MIC-Large Scale Magnetically Inflated Cable Structures for Space Power, Propulsion, Communications and Observational Applications

    Science.gov (United States)

    Powell, James; Maise, George; Rather, John

    2010-01-01

    A new approach for the erection of rigid large-scale structures in space, MIC (Magnetically Inflated Cable), is described. MIC structures are launched as a compact payload of superconducting cables and attached tethers. After reaching orbit, the superconducting cables are energized with electrical current. The magnetic force interactions between the cables cause them to expand outward into the final large structure. Various structural shapes and applications are described. The MIC structure can be a simple flat disc with a superconducting outer ring that supports a tether network holding a solar cell array, or it can form a curved mirror surface that concentrates light and focuses it on a smaller region, for example a high-flux solar array that generates electric power, a high-temperature receiver that heats H2 propellant for high-Isp propulsion, or a giant primary reflector for a telescope for astronomy and Earth surveillance. Linear dipole and quadrupole MIC structures are also possible; the linear quadrupole structure can be used, for example, for magnetic shielding of astronauts against cosmic radiation. MIC could use lightweight YBCO HTS (High Temperature Superconductor) cables, which can operate with liquid N2 coolant at engineering current densities of ~10^5 A/cm^2. A 1 kilometer length of MIC cable would weigh only 3 metric tons, including superconductor, thermal insulation, coolant circuits, and refrigerator, and would fit within a compact 3 cubic meter package for launch. Four potential MIC applications are described: solar-thermal propulsion using H2 propellant, space-based solar power generation for beaming power to Earth, a large space telescope, and solar electric generation for a manned lunar base. The first three applications use large MIC solar concentrating mirrors, while the fourth uses a surface-based array of solar cells on a magnetically levitated MIC structure to follow the sun. MIC space-based mirrors can be very large and light.
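
The expansion mechanism described above rests on the magnetic force between current-carrying cables. As a back-of-the-envelope illustration only (not the authors' design calculation), the standard force per unit length between two long parallel conductors can be sketched as follows; the 100 kA current and 1 m spacing in the example are hypothetical values:

```python
import math

MU_0 = 4e-7 * math.pi  # vacuum permeability, T*m/A


def force_per_length(i1, i2, d):
    """Magnitude of the force per unit length (N/m) between two long parallel
    conductors carrying currents i1, i2 (A) at separation d (m).  Antiparallel
    current elements repel, which is what pushes an energized cable loop
    outward into its inflated shape."""
    return MU_0 * abs(i1 * i2) / (2 * math.pi * d)


# Textbook sanity check: 1 A wires at 1 m spacing feel 2e-7 N/m.
# Hypothetical MIC-like numbers: two 100 kA cables 1 m apart,
# giving a force on the order of kN per meter of cable.
f = force_per_length(1.0e5, 1.0e5, 1.0)
```

Note the 1/d falloff: as the structure expands and the cables separate, the expansion force per meter drops.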

  7. Modelling the large-scale redshift-space 3-point correlation function of galaxies

    Science.gov (United States)

    Slepian, Zachary; Eisenstein, Daniel J.

    2017-08-01

    We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift-space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the baryon acoustic oscillation method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted Ωm and bias values, the rescaling is a factor of ∼1.8. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.

  8. Space Culture: Innovative Cultural Approaches To Public Engagement With Astronomy, Space Science And Astronautics

    Science.gov (United States)

    Malina, Roger F.

    2012-01-01

    In recent years a number of cultural organizations have established ongoing programs of public engagement with astronomy, space science and astronautics. Many involve elements of citizen science initiatives, artists’ residencies in scientific laboratories and agencies, art and science festivals, and social network projects, as well as more traditional exhibition venues. Recognizing these programs, several agencies and organizations have established mechanisms for facilitating public engagement with astronomy and space science through cultural activities. The International Astronautics Federation has established a Technical Activities Committee for the Cultural Utilization of Space. Over the past year the NSF and NEA have organized disciplinary workshops to develop recommendations relating to art-science interaction and community-building efforts. Rationales for encouraging public engagement via cultural projects range from theories of creativity, innovation and invention to cultural appropriation in the context of `socially robust science’ as advocated by Helga Nowotny of the European Research Council. Public engagement with science, as opposed to science education and outreach initiatives, requires different approaches. Just as organizations have employed education professionals to lead education activities, so they must employ cultural professionals if they wish to develop public engagement projects via arts and culture. One outcome of the NSF and NEA workshops has been the development of a rationale for converting STEM to STEAM by including the arts in STEM methodologies, particularly for K-12, where students can access science via arts and cultural contexts. Often these require new kinds of informal education approaches that exploit locative media, gaming platforms, artists’ projects and citizen science. Incorporating astronomy and space science content in art and cultural projects requires new skills in `cultural translation’ and `trans-mediation’ and new kinds

  9. Effect of Display Technology on Perceived Scale of Space.

    Science.gov (United States)

    Geuss, Michael N; Stefanucci, Jeanine K; Creem-Regehr, Sarah H; Thompson, William B; Mohler, Betty J

    2015-11-01

    Our goal was to evaluate the degree to which display technologies influence the perception of size in an image. Research suggests that factors such as whether an image is displayed stereoscopically, whether a user's viewpoint is tracked, and the field of view of a given display can affect users' perception of scale in the displayed image. Participants directly estimated the size of a gap by matching the distance between their hands to the gap width and judged their ability to pass unimpeded through the gap in one of five common implementations of three display technologies (two head-mounted displays [HMD] and a back-projection screen). Both measures of gap width were similar for the two HMD conditions and the back projection with stereo and tracking. For the displays without tracking, stereo and monocular conditions differed from each other, with monocular viewing showing underestimation of size. Display technologies that are capable of stereoscopic display and tracking of the user's viewpoint are beneficial as perceived size does not differ from real-world estimates. Evaluations of different display technologies are necessary as display conditions vary and the availability of different display technologies continues to grow. The findings are important to those using display technologies for research, commercial, and training purposes when it is important for the displayed image to be perceived at an intended scale. © 2015, Human Factors and Ergonomics Society.

  10. Retention of memory for large-scale spaces.

    Science.gov (United States)

    Ishikawa, Toru

    2013-01-01

    This study empirically examined the retention of large-scale spatial memory, taking different types of spatial knowledge and levels of sense of direction into consideration. A total of 38 participants learned a route from a video and conducted spatial tasks immediately after learning the route and after 2 weeks or 3 months had passed. Results showed that spatial memory decayed over time, at a faster rate for the first 2-week period than for the subsequent period of up to 3 months, although it was not completely forgotten even after 3 months. The rate of forgetting differed depending on the type of knowledge, with landmark and route knowledge deteriorating at a much faster rate than survey knowledge. Sense of direction affected both the acquisition and the retention of survey knowledge. Survey knowledge by people with a good sense of direction was more accurate and decayed much less than that by people with a poor sense of direction.

  11. A multi-time scale approach to remaining useful life prediction in rolling bearing

    Science.gov (United States)

    Qian, Yuning; Yan, Ruqiang; Gao, Robert X.

    2017-01-01

    This paper presents a novel multi-time scale approach to bearing defect tracking and remaining useful life (RUL) prediction, which integrates enhanced phase space warping (PSW) with a modified Paris crack growth model. As a data-driven method, PSW describes the dynamical behavior of the bearing being tested on a fast-time scale, whereas the Paris crack growth model, as a physics-based model, characterizes the bearing's defect propagation on a slow-time scale. Theoretically, PSW constructs a tracking metric by evaluating the phase space trajectory warping of the bearing vibration data, and establishes a correlation between measurement on a fast-time scale and defect growth variables on a slow-time scale. Furthermore, PSW is enhanced by a multi-dimensional auto-regression (AR) model for improved accuracy in defect tracking. Also, the Paris crack growth model is modified by a time-piecewise algorithm for real-time RUL prediction. Case studies performed on two run-to-failure experiments indicate that the developed technique is effective in tracking the evolution of bearing defects and in accurately predicting the bearing RUL, thus contributing to the literature on bearing prognosis.
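
For readers unfamiliar with the slow-time-scale side of such approaches, a Paris-type crack growth law can be integrated numerically until the defect reaches a critical size, giving a crude cycle-count RUL. This is a generic sketch, not the authors' modified time-piecewise model, and every parameter value below is hypothetical:

```python
import math


def remaining_cycles(a0, a_crit, c=1e-11, m=3.0, d_sigma=200.0, y=1.0, dn=1000):
    """Integrate a Paris-type growth law da/dN = c * dK**m with stress-intensity
    range dK = y * d_sigma * sqrt(pi * a), stepping in blocks of dn cycles until
    the defect size a reaches a_crit.  Returns the cycle count, a crude RUL."""
    a, cycles = a0, 0
    while a < a_crit:
        dk = y * d_sigma * math.sqrt(math.pi * a)
        a += c * dk ** m * dn  # forward-Euler step over a block of dn cycles
        cycles += dn
    return cycles


# A larger initial defect leaves fewer cycles of remaining life.
n_healthy = remaining_cycles(a0=1e-3, a_crit=1e-2)
n_damaged = remaining_cycles(a0=2e-3, a_crit=1e-2)
```

In a scheme like the paper's, a fast-time-scale tracking metric would supply the current defect state that a slow-time-scale model of this kind extrapolates forward.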

  12. Receptivity to Kinetic Fluctuations: A Multiple Scales Approach

    Science.gov (United States)

    Edwards, Luke; Tumin, Anatoli

    2017-11-01

    The receptivity of high-speed compressible boundary layers to kinetic fluctuations (KF) is considered within the framework of fluctuating hydrodynamics. The formulation is based on the idea that KF-induced dissipative fluxes may lead to the generation of unstable modes in the boundary layer. Fedorov and Tumin solved the receptivity problem using an asymptotic matching approach which utilized a resonant inner solution in the vicinity of the generation point of the second Mack mode. Here we take a slightly more general approach based on a multiple scales WKB ansatz which requires fewer assumptions about the behavior of the stability spectrum. The approach is modeled after the one taken by Luchini to study low speed incompressible boundary layers over a swept wing. The new framework is used to study examples of high-enthalpy, flat plate boundary layers whose spectra exhibit nuanced behavior near the generation point, such as first mode instabilities and near-neutral evolution over moderate length scales. The configurations considered exhibit supersonic unstable second Mack modes despite the temperature ratio Tw/Te > 1, contrary to prior expectations. Supported by AFOSR and ONR.

  13. Two-scale approach to oscillatory singularly perturbed transport equations

    CERN Document Server

    Frénod, Emmanuel

    2017-01-01

    This book presents the classical results of the two-scale convergence theory and explains, using several figures, why it works. It then shows how to use this theory to homogenize ordinary differential equations with oscillating coefficients as well as oscillatory singularly perturbed ordinary differential equations. In addition, it explores the homogenization of hyperbolic partial differential equations with oscillating coefficients and linear oscillatory singularly perturbed hyperbolic partial differential equations. Further, it introduces readers to the two-scale numerical methods that can be built from the previous approaches to solve oscillatory singularly perturbed transport equations (ODE and hyperbolic PDE) and demonstrates how they can be used efficiently. This book appeals to master’s and PhD students interested in homogenization and numerics, as well as to the ITER community.

  14. Extension of Space Food Shelf Life Through Hurdle Approach

    Science.gov (United States)

    Cooper, M. R.; Sirmons, T. A.; Froio-Blumsack, D.; Mohr, L.; Young, M.; Douglas, G. L.

    2018-01-01

    The processed and prepackaged space food system is the main source of crew nutrition, and hence central to astronaut health and performance. Unfortunately, space food quality and nutrition degrade to unacceptable levels in two to three years with current food stabilization technologies. Future exploration missions will require a food system that remains safe, acceptable and nutritious through five years of storage within vehicle resource constraints. The potential of stabilization technologies (alternative storage temperatures, processing, formulation, ingredient source, packaging, and preparation procedures), when combined in a hurdle approach, to mitigate quality and nutritional degradation is being assessed. Sixteen representative foods from the International Space Station food system were chosen for production and analysis and will be evaluated initially and at one, three, and five years, with potential for analysis at seven years if necessary. Analysis includes changes in color, texture, nutrition, sensory quality, and rehydration ratio when applicable. The food samples will be stored at -20 °C, 4 °C, and 21 °C. Select food samples will also be evaluated at -80 °C to determine the impacts of ultra-cold storage after one and five years. Packaging film barrier properties and mechanical integrity will be assessed before and after processing and storage. At the study's conclusion, if the tested hurdles are adequate, formulation, processing, and storage combinations will be uniquely identified for processed food matrices to achieve a five-year shelf life. This study will provide one of the most comprehensive investigations of long-duration food stability ever completed, and the achievement of extended food system stability will have profound impacts on health and performance for spaceflight crews and for relief efforts and military applications on Earth.

  15. Module Degradation Mechanisms Studied by a Multi-Scale Approach

    Energy Technology Data Exchange (ETDEWEB)

    Johnston, Steve; Al-Jassim, Mowafak; Hacke, Peter; Harvey, Steven P.; Jiang, Chun-Sheng; Gerber, Andreas; Guthrey, Harvey; Moutinho, Helio; Albin, David; To, Bobby; Tynan, Jerry; Moseley, John; Aguiar, Jeffery; Xiao, Chuanxiao; Waddle, John; Nardone, Marco

    2016-11-21

    A key pathway to meeting the Department of Energy SunShot 2020 goals is to reduce financing costs by improving investor confidence through improved photovoltaic (PV) module reliability. A comprehensive approach to further understand and improve PV reliability includes characterization techniques and modeling from module to atomic scale. Imaging techniques, which include photoluminescence, electroluminescence, and lock-in thermography, are used to locate localized defects responsible for module degradation. Small area samples containing such defects are prepared using coring techniques and are then suitable and available for microscopic study and specific defect modeling and analysis.

  16. Morphological hat-transform scale spaces and their use in pattern classification

    NARCIS (Netherlands)

    Jalba, Andrei C.; Wilkinson, Michael H.F.; Roerdink, Jos B.T.M.

    In this paper we present a multi-scale method based on mathematical morphology which can successfully be used in pattern classification tasks. A connected operator similar to the morphological hat-transform is defined, and two scale-space representations are built. The most important features are

  17. Religion and Communication Spaces. A Semio-pragmatic Approach

    Directory of Open Access Journals (Sweden)

    Roger Odin

    2015-11-01

    Following the reflection initiated in his book The Spaces of Communication, Roger Odin suggests a new distinction between physical communication spaces and mental communication spaces (spaces that we have inside us). The suggestion is exemplified by three film analyses dedicated to the relationships between religion and communication.

  18. Testing The Scale-up Approach To Introductory Astronomy

    Science.gov (United States)

    Kregenow, Julia M.; Keller, L.; Rogers, M.; Romero, D.

    2008-09-01

    Ithaca College's physics department has begun transforming our general education astronomy courses from the previous lecture-based format into hands-on, active-learning courses. We are using the SCALE-UP model (Student Centered Activities for Large Enrollment University Programs) pioneered at North Carolina State University. Expanding on the successes of Studio Physics (developed at RPI), which exchanges traditionally separate lecture/recitation/laboratory sessions for one dynamic, active-learning environment for approximately 40 students, SCALE-UP extends this model to accommodate 100+ students by using large round tables that create naturally smaller groups of students. Classes meet three times per week, with each class blending lecture, hands-on activities, group problem solving, and the use of student polling devices. We are testing whether this mode of teaching astronomy will lead to a better understanding of astronomy and the nature of science. Having applied this approach in both a SCALE-UP classroom (90 students) and a traditional lecture classroom (45 students) in spring 2008, we report on our early results and lessons learned after one semester. We also discuss some of our lingering implementation questions and issues, such as: whether to use the same or a different instructor in two parallel sections; requiring textbook reading, reading quizzes, and on-line homework and activities; how much math to include; development of hands-on activities; and culling the typically overpacked intro astronomy syllabus.

  19. A Web Based Approach to Integrate Space Culture and Education

    Science.gov (United States)

    Gerla, F.

    2002-01-01

    , who can use it to prepare their lessons, retrieve information and organize the didactic material in order to support their lessons. We think it important to use a user-centered "psychology" based on UM: we have to know the needs and expectations of the students. Our intent is to use usability tests not just to prove the site's effectiveness and clarity, but also to investigate the aesthetic preferences of children and young people. Physics, mathematics and chemistry are just some of the difficult learning fields connected with space technologies. Space culture is a potentially never-ending field, and our aim will be to lead students by the hand through this universe of knowledge. This paper will present MARS activities in the framework of the above methodologies, aimed at implementing a web-based approach to integrate space culture and education. The activities are already in progress and some results will be presented in the final paper.

  20. Novel Approaches to Cellular Transplantation from the US Space Program

    Science.gov (United States)

    Pellis, Neal R.; Homick, Jerry L. (Technical Monitor)

    1999-01-01

    Research in the treatment of type I diabetes is entering a new era that takes advantage of our knowledge in an ever-increasing variety of scientific disciplines. Some may originate from very diverse sources, one of which is the Space Program at the National Aeronautics and Space Administration (NASA). The Space Program contributes to diabetes-related research in several treatment modalities. As an ongoing effort in the medical monitoring of personnel involved in space exploration activities, NASA and the extramural scientific community investigate strategies for noninvasive estimation of blood glucose levels. Part of the effort in the space protein crystal growth program is high-resolution structural analysis of insulin as a means to better understand its interaction with its receptor and with host immune components, and as a basis for rational design of a "better" insulin molecule. The Space Program is also developing laser technology for potential early cataract detection as well as noninvasive analyses for addressing preclinical diabetic retinopathy. Finally, NASA developed an exciting cell culture system that affords some unique advantages in the propagation and maintenance of mammalian cells in vitro. The cell culture system was originally designed to maintain cell suspensions with a minimum of hydrodynamic and mechanical shear while awaiting launch into microgravity. Currently the commercially available NASA bioreactor (Synthecon, Inc., Houston, TX) is used as a research tool in basic and applied cell biology. In recent years there has been continued strong interest in cellular transplantation as a treatment for type I diabetes. The advantages are the potential for successful long-term amelioration and a minimal risk of morbidity in the event of rejection of the transplanted cells. The pathway to successful application of this strategy is accompanied by several substantial hurdles: (1) isolation and propagation of a suitable uniform donor cell population; (2) management of

  1. An Open and Holistic Approach for Geo and Space Sciences

    Science.gov (United States)

    Ritschel, Bernd; Seelus, Christoph; Neher, Günther; Toshihiko, Iyemori; Yatagai, Akiyo; Koyama, Yukinobu; Murayama, Yasuhiro; King, Todd; Hughes, Steve; Fung, Shing; Galkin, Ivan; Hapgood, Mike; Belehaki, Anna

    2016-04-01

    Geo and space sciences have thus far been very successful, even though an open, cross-domain and holistic approach often did not play an essential role. But this situation is changing rapidly. The research focus is shifting to more complex, non-linear and multi-domain phenomena, such as climate change or the space environment. This kind of phenomenon can only be understood step by step using the holistic idea. So, what is necessary for a successful cross-domain and holistic approach in geo and space sciences? Research and science in general are becoming more and more dependent on a rich fundus of multi-domain data sources, related context information and the use of highly advanced technologies in data processing. Buzzword phrases such as Big Data and Deep Learning reflect this development. Big Data also addresses the real exponential growth of data and information produced by measurements or simulations. Deep Learning technology may help to detect new patterns and relationships in data describing highly sophisticated natural phenomena. Furthermore, we should not forget that science and the humanities are only two sides of the same coin in the continuing human process of knowledge discovery. The concept of Open Data, or in particular the open access to scientific data, addresses the free and open availability of (at least publicly funded and generated) data. The open availability of data covers the free use, reuse and redistribution of data, which was established with the formation of the World Data Centers already more than 50 years ago. We should also not forget that the foundation for open data is the responsibility for a sustainable management of data, from the individual scientist up to the big science institutions and organizations. Other challenges are discovering and collecting the appropriate data, and preferably all of them, or at least the majority of the right data.
Therefore a network of individual or even better institutional catalog-based and at least

  2. Gravitation and Special Relativity from Compton Wave Interactions at the Planck Scale: An Algorithmic Approach

    Science.gov (United States)

    Blackwell, William C., Jr.

    2004-01-01

    In this paper space is modeled as a lattice of Compton wave oscillators (CWOs) of near-Planck size. It is shown that gravitation and special relativity emerge from the interaction between particles' Compton waves. To develop this CWO model an algorithmic approach was taken, incorporating simple rules of interaction at the Planck scale developed using well-known physical laws. This technique naturally leads to Newton's law of gravitation and a new form of doubly special relativity. The model is in apparent agreement with the holographic principle, and it predicts a cutoff energy for ultrahigh-energy cosmic rays that is consistent with observational data.

  3. Fishing for space: fine-scale multi-sector maritime activities influence fisher location choice.

    Directory of Open Access Journals (Sweden)

    Alex N Tidd

    The European Union and other states are moving towards Ecosystem Based Fisheries Management to balance food production and security with wider ecosystem concerns. Fishing is only one of several sectors operating within the ocean environment, competing for renewable and non-renewable resources that overlap in a limited space. Other sectors include marine mining, energy generation, recreation, transport and conservation. Trade-offs of these competing sectors are already part of the process but attempts to detail how the seas are being utilised have been primarily based on compilations of data on human activity at large spatial scales. Advances including satellite and shipping automatic tracking enable investigation of factors influencing fishers' choice of fishing grounds at spatial scales relevant to decision-making, including the presence or avoidance of activities by other sectors. We analyse the determinants of English and Welsh scallop-dredging fleet behaviour, including competing sectors, operating in the eastern English Channel. Results indicate aggregate mining activity, maritime traffic, increased fishing costs, and the English inshore 6 and French 12 nautical mile limits negatively impact fishers' likelihood of fishing in otherwise suitable areas. Past success, net-benefits and fishing within the 12 NM predispose fishers to use areas. Systematic conservation planning has yet to be widely applied in marine systems, and the dynamics of spatial overlap of fishing with other activities have not been studied at scales relevant to fisher decision-making. This study demonstrates fisher decision-making is indeed affected by the real-time presence of other sectors in an area, and therefore trade-offs which need to be accounted for in marine planning. As marine resource extraction demands intensify, governments will need to take a more proactive approach to resolving these trade-offs, and studies such as this will be required as the evidential

  4. Fishing for space: fine-scale multi-sector maritime activities influence fisher location choice.

    Science.gov (United States)

    Tidd, Alex N; Vermard, Youen; Marchal, Paul; Pinnegar, John; Blanchard, Julia L; Milner-Gulland, E J

    2015-01-01

    The European Union and other states are moving towards Ecosystem Based Fisheries Management to balance food production and security with wider ecosystem concerns. Fishing is only one of several sectors operating within the ocean environment, competing for renewable and non-renewable resources that overlap in a limited space. Other sectors include marine mining, energy generation, recreation, transport and conservation. Trade-offs of these competing sectors are already part of the process but attempts to detail how the seas are being utilised have been primarily based on compilations of data on human activity at large spatial scales. Advances including satellite and shipping automatic tracking enable investigation of factors influencing fishers' choice of fishing grounds at spatial scales relevant to decision-making, including the presence or avoidance of activities by other sectors. We analyse the determinants of English and Welsh scallop-dredging fleet behaviour, including competing sectors, operating in the eastern English Channel. Results indicate aggregate mining activity, maritime traffic, increased fishing costs, and the English inshore 6 and French 12 nautical mile limits negatively impact fishers' likelihood of fishing in otherwise suitable areas. Past success, net-benefits and fishing within the 12 NM predispose fishers to use areas. Systematic conservation planning has yet to be widely applied in marine systems, and the dynamics of spatial overlap of fishing with other activities have not been studied at scales relevant to fisher decision-making. This study demonstrates fisher decision-making is indeed affected by the real-time presence of other sectors in an area, and therefore trade-offs which need to be accounted for in marine planning. 
As marine resource extraction demands intensify, governments will need to take a more proactive approach to resolving these trade-offs, and studies such as this will be required as the evidential foundation for future

  5. Log-polar mapping-based scale space tracking with adaptive target response

    Science.gov (United States)

    Li, Dongdong; Wen, Gongjian; Kuai, Yangliu; Zhang, Ximing

    2017-05-01

    Correlation filter-based tracking has exhibited impressive robustness and accuracy in recent years. Standard correlation filter-based trackers are restricted to translation estimation and equipped with a fixed target response, and they perform poorly under significant scale variation or appearance change. We propose a log-polar mapping-based scale-space tracker with an adaptive target response. This tracker transforms a scale variation of the target in Cartesian space into a shift along the logarithmic axis in log-polar space. A one-dimensional scale correlation filter is learned online to estimate the shift along the logarithmic axis. With the log-polar representation, scale estimation is achieved accurately without a multiresolution pyramid. To achieve an adaptive target response, the variance of the Gaussian response function is computed from the response map and updated online with a learning-rate parameter. Our log-polar mapping-based scale correlation filter and adaptive target response can be combined with any correlation filter-based tracker. In addition, the scale correlation filter can be extended to a two-dimensional correlation filter to achieve joint estimation of scale variation and in-plane rotation. Experiments performed on the OTB50 benchmark demonstrate that our tracker achieves superior performance against state-of-the-art trackers.
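
The central idea here (a scale change in Cartesian space becomes a shift along the logarithmic radial axis) can be demonstrated in a one-dimensional toy. This sketch assumes a radially symmetric target and uses a plain FFT cross-correlation rather than the paper's learned correlation filter; the Gaussian ring profile and all grid parameters are hypothetical:

```python
import numpy as np


def log_sampled(f, r_min=1.0, r_max=64.0, n=128):
    """Sample a radial profile f(r) at logarithmically spaced radii."""
    log_r = np.linspace(np.log(r_min), np.log(r_max), n)
    return f(np.exp(log_r)), log_r


def estimate_scale(ref, obs, log_r):
    """Recover the scale factor between two log-sampled profiles from the peak
    of their circular cross-correlation: a scale change by s in r appears as a
    shift of log(s) along the logarithmic axis."""
    n = len(log_r)
    corr = np.fft.ifft(np.fft.fft(obs) * np.conj(np.fft.fft(ref))).real
    lag = int(np.argmax(corr))
    if lag > n // 2:  # wrap lags in the upper half to negative shifts
        lag -= n
    step = log_r[1] - log_r[0]
    return float(np.exp(lag * step))


profile = lambda r: np.exp(-((r - 10.0) ** 2) / 8.0)  # a "ring" of radius 10
ref, log_r = log_sampled(profile)
obs, _ = log_sampled(lambda r: profile(r / 2.0))      # same target, scaled 2x
```

`estimate_scale(ref, obs, log_r)` returns a value close to 2; the residual error comes only from the discrete log-radius grid, with no multiresolution pyramid involved.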

  6. Performance/price estimates for cortex-scale hardware: a design space exploration.

    Science.gov (United States)

    Zaveri, Mazad S; Hammerstrom, Dan

    2011-04-01

    In this paper, we revisit the concept of virtualization. Virtualization is useful for understanding and investigating the performance/price and other trade-offs related to the hardware design space. Moreover, it is perhaps the most important aspect of a hardware design space exploration. Such a design space exploration is a necessary part of the study of hardware architectures for large-scale computational models for intelligent computing, including AI, Bayesian, bio-inspired and neural models. A methodical exploration is needed to identify potentially interesting regions in the design space, and to assess the relative performance/price points of these implementations. As an example, in this paper we investigate the performance/price of (digital and mixed-signal) CMOS and hypothetical CMOL (nanogrid) technology based hardware implementations of human cortex-scale spiking neural systems. Through this analysis, and the resulting performance/price points, we demonstrate, in general, the importance of virtualization, and of doing these kinds of design space explorations. The specific results suggest that hybrid nanotechnology such as CMOL is a promising candidate to implement very large-scale spiking neural systems, providing a more efficient utilization of the density and storage benefits of emerging nano-scale technologies. In general, we believe that the study of such hypothetical designs/architectures will guide the neuromorphic hardware community towards building large-scale systems, and help guide research trends in intelligent computing, and computer engineering. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. Analysis of PFG Anomalous Diffusion via Real-Space and Phase-Space Approaches

    Directory of Open Access Journals (Sweden)

    Guoxing Lin

    2018-01-01

    Full Text Available Pulsed-field gradient (PFG) diffusion experiments can be used to measure anomalous diffusion in many polymer or biological systems. However, it is still complicated to analyze PFG anomalous diffusion, particularly the finite gradient pulse width (FGPW) effect. In practical applications, the FGPW effect may not be negligible, such as in clinical diffusion magnetic resonance imaging (MRI). Here, two significantly different methods are proposed to analyze PFG anomalous diffusion: the effective phase-shift diffusion equation (EPSDE) method and a method based on observing the signal intensity at the origin. The EPSDE method describes the phase evolution in virtual phase space, while the method observing the signal intensity at the origin describes the magnetization evolution in real space. However, these two approaches give the same general PFG signal attenuation, including the FGPW effect, which can be numerically evaluated by a direct integration method. The direct integration method is fast and does not overflow. It is a convenient numerical evaluation method for Mittag-Leffler-function-type PFG signal attenuation. The methods here provide a clear view of spin evolution under a field gradient, and their results will help the analysis of PFG anomalous diffusion.
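The Mittag-Leffler function mentioned above generalizes the exponential that governs ordinary PFG attenuation. For moderate arguments it can be evaluated directly from its power series; this naive sketch is not the paper's direct integration method, which is a different, more robust evaluation:

```python
import math

def mittag_leffler(z, alpha, n_terms=80):
    """Series evaluation E_alpha(z) = sum_k z**k / Gamma(alpha*k + 1),
    adequate for moderate |z|; attenuation arguments are negative."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(n_terms))

normal = mittag_leffler(-1.0, 1.0)        # alpha = 1: ordinary exponential decay
subdiff = mittag_leffler(-1.0, 0.5)       # alpha < 1: anomalous (subdiffusive) decay
```

For alpha = 1 the series reduces to exp(z), and for alpha = 1/2 it satisfies the identity E_{1/2}(z) = exp(z^2) erfc(-z), which provides two independent checks of the implementation.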

  8. A fast Laplace solver approach to pore scale permeability

    Science.gov (United States)

    Arns, Christoph; Adler, Pierre

    2017-04-01

    The permeability of a porous medium can be derived by solving the Stokes equations in the pore space with no slip at the walls. The resulting velocity averaged over the pore volume yields the permeability KS by application of the Darcy law. The Stokes equations can be solved by a number of different techniques such as finite differences, finite volumes or Lattice Boltzmann, but whatever the technique it remains a heavy task since there are four unknowns at each node (the three velocity components and the pressure) which necessitate the solution of four equations (the projection of Newton's law on each axis and mass conservation). By comparison, the Laplace equation is scalar with a single unknown at each node. The objective of this work is to replace the Stokes equations by an elliptic equation with a space-dependent permeability. More precisely, the local permeability k is supposed to be proportional to (r-alpha)**2, where r is the distance of the voxel to the closest wall and alpha a constant; k is zero in the solid phase. The elliptic equation is div(k grad p)=0. A macroscopic pressure gradient is assumed to be exerted on the medium and again the resulting velocity averaged over space yields a permeability KL. In order to validate this method, systematic calculations have been performed. First, elementary shapes (plane channel, circular pipe, rectangular channels) were studied, for which flow occurs along parallel lines, in which case KL is the arithmetic average of the k's. KL was calculated for various discretizations of the pore space and various values of alpha. For alpha=0.5, the agreement with the exact analytical value of KS is excellent for the plane and rectangular channels, while it is only approximate for circular pipes. Second, the permeability KL of channels with sinusoidal walls was calculated and compared with analytical results and numerical ones provided by a Lattice Boltzmann algorithm.
Generally speaking, the discrepancy does not exceed 25% when
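The plane-channel case described above can be checked in a few lines: with flow along parallel lines, KL reduces to the arithmetic mean of k = (r - alpha)**2 (the proportionality constant is taken as 1 here, an assumption for illustration), to be compared with the exact Stokes permeability h**2/12 of a channel of width h:

```python
N = 1000                              # voxels across the channel; voxel size 1, so h = N
alpha = 0.5
k = []
for i in range(N):
    r = min(i + 0.5, N - i - 0.5)     # distance from voxel centre to closest wall
    k.append((r - alpha) ** 2)        # local permeability (zero in the solid phase)
K_L = sum(k) / N                      # arithmetic mean = K_L for parallel flow
K_S = N ** 2 / 12                     # exact plane-Poiseuille permeability
rel_err = abs(K_L - K_S) / K_S        # ~0.3% at this resolution
```

The residual discrepancy shrinks as the discretization is refined, consistent with the "excellent agreement" reported for plane channels at alpha = 0.5.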

  9. Properties of small-scale interfacial turbulence from a novel thermography based approach

    Science.gov (United States)

    Schnieders, Jana; Garbe, Christoph

    2013-04-01

    Oceans cover nearly two thirds of the earth's surface, and exchange processes between the atmosphere and the ocean are of fundamental environmental importance. At the air-sea interface, complex interaction processes take place on a multitude of scales. Turbulence plays a key role in the coupling of momentum, heat and mass transfer [2]. Here we use high resolution infrared imagery to visualize near-surface aqueous turbulence. Thermographic data is analyzed from a range of laboratory facilities and experimental conditions, with wind speeds ranging from 1 m s-1 to 7 m s-1 and various surface conditions. The surface heat pattern is formed by distinct structures on two scales - small-scale, short-lived structures termed fish scales and larger-scale cold streaks that are consistent with the footprints of Langmuir circulations. There are two key characteristics of the observed surface heat patterns: (1) the surface heat patterns show characteristic features on two scales, and (2) the structure of these patterns changes with increasing wind stress and surface conditions. We present a new image-processing-based approach to the analysis of the spacing of cold streaks, based on a machine learning approach [4, 1] to classify the thermal footprints of near-surface turbulence. Our random forest classifier is based on classical features in image processing such as gray value gradients and edge detecting features. The result is a pixel-wise classification of the surface heat pattern with a subsequent analysis of the streak spacing. This approach has been presented in [3] and can be applied to a wide range of experimental data. In spite of entirely different boundary conditions, the spacing of turbulent cells near the air-water interface seems to match the expected turbulent cell size for flow near a no-slip wall. The analysis of the spacing of cold streaks shows consistent behavior in a range of laboratory facilities when expressed as a function of water-sided friction velocity, u*. The scales
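The classical per-pixel features mentioned above (gray-value gradients and edge strength) can be sketched with central differences; a real pipeline would feed many such features into the random-forest classifier, so this is only the feature-extraction step on toy data:

```python
def gradient_features(img):
    """Per-pixel gray-value gradients (central differences, replicated borders)
    and gradient magnitude, for a 2-D image given as a list of rows."""
    h, w = len(img), len(img[0])
    feats = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = (img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]) / 2.0
            gy = (img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]) / 2.0
            feats[y][x] = (gx, gy, (gx * gx + gy * gy) ** 0.5)
    return feats

# Toy surface-heat image: a single cold streak (column of low values)
img = [[1.0, 1.0, 0.2, 1.0, 1.0] for _ in range(4)]
feats = gradient_features(img)
```

Pixels flanking the streak carry large gradient magnitude while the streak centre itself is a symmetry point with zero central difference, which is the kind of contrast the classifier learns to exploit.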

  10. Coarse-to-Fine Segmentation with Shape-Tailored Continuum Scale Spaces

    KAUST Repository

    Khan, Naeemullah

    2017-11-09

    We formulate an energy for segmentation that is designed to have preference for segmenting the coarse over fine structure of the image, without smoothing across boundaries of regions. The energy is formulated by integrating a continuum of scales from a scale space computed from the heat equation within regions. We show that the energy can be optimized without computing a continuum of scales, but instead from a single scale. This makes the method computationally efficient in comparison to energies using a discrete set of scales. We apply our method to texture and motion segmentation. Experiments on benchmark datasets show that a continuum of scales leads to better segmentation accuracy over discrete scales and other competing methods.
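The scale space underlying the energy is generated by the heat equation. A 1-D isotropic sketch (the paper's version is shape-tailored, i.e. diffusion is confined within regions, which this toy version does not do) shows how fine structure disappears as scale increases:

```python
def heat_step(u, dt=0.2):
    """One explicit finite-difference step of u_t = u_xx with replicated ends
    (dt <= 0.5 for stability)."""
    n = len(u)
    return [u[i] + dt * (u[min(i + 1, n - 1)] - 2.0 * u[i] + u[max(i - 1, 0)])
            for i in range(n)]

def total_variation(u):
    return sum(abs(b - a) for a, b in zip(u, u[1:]))

u0 = [0.0] * 8 + [1.0] + [0.0] * 8        # a sharp, fine-scale feature
u = u0
for _ in range(50):                        # longer diffusion time = coarser scale
    u = heat_step(u)
tv0, tv50 = total_variation(u0), total_variation(u)
```

The family of signals at successive diffusion times is the scale space; the monotone drop in total variation illustrates why integrating over a continuum of scales biases the energy towards coarse structure.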

  11. Estimating Brazilian Monthly GDP: a State-Space Approach

    Directory of Open Access Journals (Sweden)

    João Victor Issler

    2016-03-01

    Full Text Available This paper has several contributions. The first is to employ a superior interpolation method that enables us to estimate, nowcast and forecast monthly Brazilian GDP for 1980-2012 in an integrated way - see Bernanke, Gertler, & Watson (1997) [Systematic monetary policy and the effects of oil price shocks (Brookings Papers on Economic Activity No. 1)]. Second, in the spirit of Mariano & Murasawa (2003) [A new coincident index of business cycles based on monthly and quarterly series. Journal of Applied Econometrics, 18(4), 427-443], we propose and test a myriad of interpolation models and interpolation auxiliary series - all coincident with GDP from a business-cycle dating point of view. Based on these results, we finally choose the most appropriate monthly indicator for Brazilian GDP. Third, this monthly GDP estimate is compared to an economic activity indicator widely used by practitioners in Brazil - the Brazilian Economic Activity Index (IBC-Br). We find that our monthly GDP tracks economic activity better than IBC-Br. This happens by construction, since our state-space approach imposes the restriction (discipline) that our monthly estimate must add up to the quarterly observed series in any given quarter, which may not hold for IBC-Br. Moreover, our method has the advantage of being easily implemented: it only requires conditioning on two observed series for estimation, while estimating IBC-Br requires the availability of hundreds of monthly series. Fourth, in a nowcasting and forecasting exercise, we illustrate the advantages of our integrated approach. Finally, we compare the chronology of recessions of our monthly estimate with those done elsewhere.
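The adding-up restriction mentioned above can be illustrated with the simplest pro-rata benchmarking scheme: each quarterly total is distributed over its three months in proportion to a monthly indicator, so the monthly estimate sums to the observed quarterly series by construction. The paper uses a full state-space model rather than this scheme, and all figures below are hypothetical:

```python
def benchmark_monthly(indicator, quarterly):
    """Pro-rata interpolation: split each quarterly total across its three
    months in proportion to a monthly coincident indicator."""
    monthly = []
    for q, total in enumerate(quarterly):
        chunk = indicator[3 * q: 3 * q + 3]
        s = sum(chunk)
        monthly.extend(total * x / s for x in chunk)
    return monthly

indicator = [98.0, 101.0, 103.0, 104.0, 102.0, 106.0]   # hypothetical monthly series
quarterly = [300.0, 310.0]                               # observed quarterly GDP
monthly = benchmark_monthly(indicator, quarterly)
```

Each quarter's three monthly values reproduce the observed quarterly total exactly, which is the discipline that IBC-Br need not satisfy.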

  12. Next Generation Space Interconnect Standard (NGSIS): a modular open standards approach for high performance interconnects for space

    Science.gov (United States)

    Collier, Charles Patrick

    2017-04-01

    The Next Generation Space Interconnect Standard (NGSIS) effort is a Government-Industry collaboration effort to define a set of standards for interconnects between space system components with the goal of cost effectively removing bandwidth as a constraint for future space systems. The NGSIS team has selected the ANSI/VITA 65 OpenVPXTM standard family for the physical baseline. The RapidIO protocol has been selected as the basis for the digital data transport. The NGSIS standards are developed to provide sufficient flexibility to enable users to implement a variety of system configurations, while meeting goals for interoperability and robustness for space. The NGSIS approach and effort represents a radical departure from past approaches to achieve a Modular Open System Architecture (MOSA) for space systems and serves as an exemplar for the civil, commercial, and military Space communities as well as a broader high reliability terrestrial market.

  13. Analysis of Life Histories: A State Space Approach

    Directory of Open Access Journals (Sweden)

    Rajulton, Fernando

    2001-01-01

    Full Text Available English: The computer package LIFEHIST, written by the author, is meant for analyzing life histories through a state-space approach. The basic ideas on which the various programs have been built are described in this paper in non-mathematical language. Users can run various programs for multistate analyses based on Markov and semi-Markov frameworks and on the sequences of transitions implied in life histories. The package is under constant revision, and programs for a few specific models that the author thinks will be useful for analyzing longitudinal data will be incorporated in the near future. French: The LIFEHIST computer package written by the author is designed to analyze life-course events through an approach that tracks states over time. The fundamental ideas underlying the module's various programs are described in non-mathematical language. LIFEHIST can be used for Markov and semi-Markov analyses of sequences of life-course events. The module is under constant revision, and the programs the author intends to add for the analysis of longitudinal data are described.
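The Markov framework underlying such multistate analyses can be sketched by estimating a first-order transition matrix from observed state sequences. The states and histories below are invented for illustration; LIFEHIST itself covers far richer semi-Markov models:

```python
def transition_matrix(histories):
    """Row-normalised counts of observed transitions between states."""
    counts = {}
    for seq in histories:
        for a, b in zip(seq, seq[1:]):
            counts.setdefault(a, {}).setdefault(b, 0)
            counts[a][b] += 1
    return {a: {b: n / sum(row.values()) for b, n in row.items()}
            for a, row in counts.items()}

# Hypothetical life histories: S = single, M = married, D = divorced
P = transition_matrix([["S", "M", "M", "D"], ["S", "S", "M", "M"]])
```

Each row of the estimated matrix is a probability distribution over destination states, which is the basic building block of a multistate life-history analysis.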

  14. Towards synthetic biological approaches to resource utilization on space missions.

    Science.gov (United States)

    Menezes, Amor A; Cumbers, John; Hogan, John A; Arkin, Adam P

    2015-01-06

    This paper demonstrates the significant utility of deploying non-traditional biological techniques to harness available volatiles and waste resources on manned missions to explore the Moon and Mars. Compared with anticipated non-biological approaches, it is determined that for 916-day Martian missions: 205 days of high-quality methane and oxygen bioproduction on Mars with Methanobacterium thermoautotrophicum can reduce the mass of a Martian fuel-manufacture plant by 56%; 496 days of biomass generation with Arthrospira platensis and Arthrospira maxima on Mars can decrease the shipped wet-food mixed-menu mass for a Mars stay and a one-way voyage by 38%; 202 days of Mars polyhydroxybutyrate synthesis with Cupriavidus necator can lower the mass shipped to three-dimensionally print a 120 m3 six-person habitat by 85%; and a few days of acetaminophen production with engineered Synechocystis sp. PCC 6803 can completely replenish expired or irradiated stocks of the pharmaceutical, thereby providing independence from unmanned resupply spacecraft that take up to 210 days to arrive. Analogous outcomes are included for lunar missions. Because of the benign assumptions involved, the results provide a glimpse of the intriguing potential of 'space synthetic biology', and help focus related efforts for immediate, near-term impact.

  15. On the Field Theoretical Approach to the Anomalous Scaling in Turbulence

    CERN Document Server

    Runov, A V

    1999-01-01

    The anomalous scaling problem in the stochastic Navier-Stokes equation is treated in the framework of the field-theoretic approach successfully applied earlier to the Kraichnan rapid-advection model. Two cases, the space dimensions d=2 and d->infinity, which allow essential simplification of the calculations, are analysed. The presence of an infinite set of Galilean-invariant composite operators with negative critical dimensions in the model discussed is proved. As in the Kraichnan model, this justifies the anomalous scaling of the structure functions. The explicit expression for the junior operator of this set, related to the square of the energy dissipation operator, is found in the first order of the epsilon expansion. Its critical dimension is strongly negative in the two-dimensional case and vanishes as d->infinity.

  17. Wikipedia information flow analysis reveals the scale-free architecture of the semantic space.

    Science.gov (United States)

    Masucci, Adolfo Paolo; Kalampokis, Alkiviadis; Eguíluz, Victor Martínez; Hernández-García, Emilio

    2011-02-28

    In this paper we extract the topology of the semantic space in its encyclopedic sense, measuring the semantic flow between the different entries of the largest modern encyclopedia, Wikipedia, and thus creating a directed complex network of semantic flows. Notably, at the percolation threshold the semantic space is characterised by scale-free behaviour at different levels of complexity, and this relates the semantic space to a wide range of biological, social and linguistic phenomena. In particular we find that the cluster size distribution, representing the size of different semantic areas, is scale-free. Moreover the topology of the resulting semantic space is scale-free in the connectivity distribution and displays small-world properties. However, its statistical properties do not allow a classical interpretation via a generative model based on a simple multiplicative process. After giving a detailed description and interpretation of the topological properties of the semantic space, we introduce a stochastic model of a content-based network, based on a copy-and-mutation algorithm and on Heaps' law, that is able to capture the main statistical properties of the analysed semantic space, including Zipf's law for the word frequency distribution.
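Scale-free claims such as Zipf's law are conventionally checked by the slope of a log-log rank-frequency plot. A minimal sketch on an exactly Zipfian input; rigorous analyses use maximum-likelihood exponent estimators rather than this regression:

```python
import math

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) against log(x)."""
    lx, ly = [math.log(x) for x in xs], [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

ranks = list(range(1, 1001))
freqs = [1.0 / r for r in ranks]          # ideal Zipf: frequency ~ 1/rank
slope = loglog_slope(ranks, freqs)        # ~ -1 for a Zipfian distribution
```

An empirical word-frequency list would give a noisy slope near -1; deviations from a straight line on the log-log plot are what rule out or support the power-law hypothesis.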

  18. Scaling up: A guide to high throughput genomic approaches for biodiversity analysis.

    Science.gov (United States)

    Porter, Teresita M; Hajibabaei, Mehrdad

    2018-01-02

    The purpose of this review is to present the most common and emerging DNA-based methods used to generate data for biodiversity and biomonitoring studies. Since environmental assessment and monitoring programs may require biodiversity information at multiple levels, we pay particular attention to the DNA metabarcoding method and discuss a number of bioinformatic tools and considerations for producing DNA-based indicators using operational taxonomic units (OTUs), taxa at a variety of ranks, and community composition. By developing the capacity to harness the advantages provided by the newest technologies, investigators can 'scale-up' by increasing the number of samples and replicates processed, the frequency of sampling over time and space, and even the depth of sampling such as by sequencing more reads per sample or more markers per sample. The ability to scale-up is made possible by the reduced hands-on time and cost per sample provided by the newest kits, platforms, and software tools. Results gleaned from broad-scale monitoring will provide an opportunity to address key scientific questions linked to biodiversity and its dynamics across time and space as well as being more relevant for policy makers, enabling science-based decision making, and provide a greater socio-economic impact. Since genomic approaches are continually evolving, we provide this guide to methods used in biodiversity genomics. This article is protected by copyright. All rights reserved.
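The OTU-table bookkeeping at the heart of a metabarcoding pipeline can be sketched with exact-sequence dereplication. Real pipelines cluster reads at a similarity threshold (e.g. 97%) or denoise them into exact sequence variants; the sequences below are toy data:

```python
from collections import Counter, defaultdict

def otu_table(sample_reads):
    """Per-sample read counts for each unique sequence (100%-identity OTUs)."""
    table = defaultdict(Counter)
    for sample, reads in sample_reads.items():
        for seq in reads:
            table[seq][sample] += 1
    return table

reads = {"site1": ["ACGT", "ACGT", "TTGA"],
         "site2": ["ACGT", "CCGA"]}
table = otu_table(reads)
```

The resulting sequence-by-sample count matrix is the input to the downstream diversity indicators (richness, community composition) the review describes.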

  19. Cosmological special relativity the large scale structure of space, time and velocity

    CERN Document Server

    Carmeli, Moshe

    1997-01-01

    This book deals with special relativity theory and its application to cosmology. It presents Einstein's theory of space and time in detail, and describes the large scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The book will be of interest to cosmologists, astrophysicists, theoretical

  20. Cosmological special relativity the large scale structure of space, time and velocity

    CERN Document Server

    Carmeli, Moshe

    2002-01-01

    This book presents Einstein's theory of space and time in detail, and describes the large-scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The relationship between cosmic velocity, acceleration and distances is given. In the appendices gravitation is added in the form of a cosmological g

  1. Visualising higher-dimensional space-time and space-scale objects as projections to ℝ3

    Directory of Open Access Journals (Sweden)

    Ken Arroyo Ohori

    2017-07-01

    Full Text Available Objects of more than three dimensions can be used to model geographic phenomena that occur in space, time and scale. For instance, a single 4D object can be used to represent the changes in a 3D object's shape across time or all its optimal representations at various levels of detail. In this paper, we look at how such higher-dimensional space-time and space-scale objects can be visualised as projections from ℝ4 to ℝ3. We present three projections that we believe are particularly intuitive for this purpose: (i) a simple 'long axis' projection that puts 3D objects side by side; (ii) the well-known orthographic and perspective projections; and (iii) a projection to a 3-sphere (S3) followed by a stereographic projection to ℝ3, which results in an inwards-outwards fourth axis. Our focus is on using these projections from ℝ4 to ℝ3, but they are formulated from ℝn to ℝn−1 so as to be easily extensible and to incorporate other non-spatial characteristics. We present a prototype interactive visualiser that applies these projections from 4D to 3D in real time using the programmable pipeline and compute shaders of the Metal graphics API.
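The third projection above (normalise to the 3-sphere, then project stereographically) is short enough to sketch. The projection point is taken here as the pole (0, 0, 0, 1), an assumption for illustration:

```python
import math

def normalise(p):
    """Radially project a point of R^4 onto the unit 3-sphere S^3."""
    n = math.sqrt(sum(c * c for c in p))
    return tuple(c / n for c in p)

def stereographic(p):
    """Stereographic projection S^3 -> R^3 from the pole (0, 0, 0, 1):
    (x, y, z, w) -> (x, y, z) / (1 - w)."""
    x, y, z, w = p
    return (x / (1.0 - w), y / (1.0 - w), z / (1.0 - w))

# Points with larger w land farther from the origin in R^3, which is what
# produces the 'inwards-outwards' fourth axis described above.
image = stereographic(normalise((1.0, 2.0, 2.0, 0.5)))
```

Points on the equator w = 0 map to themselves, while points approaching the pole are sent towards infinity; sweeping w therefore moves the projected 3D shape inwards and outwards.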

  2. On the scaling of functional spaces, from smart cities to cloud computing

    OpenAIRE

    Burgess, Mark

    2016-01-01

    The study of spacetime, and its role in understanding functional systems, has received little attention in information science. Recent work on the origin of universal scaling in cities and biological systems provides an intriguing insight into the functional use of space, and its measurable effects. Cities are large information systems, with many similarities to other technological infrastructures, so the results shed new light indirectly on the scaling of the expected behaviour of smart pervas...

  3. A risk-based approach to flammable gas detector spacing.

    Science.gov (United States)

    Defriend, Stephen; Dejmek, Mark; Porter, Leisa; Deshotels, Bob; Natvig, Bernt

    2008-11-15

    Flammable gas detectors allow an operating company to address leaks before they become serious, by automatically alarming and by initiating isolation and safe venting. Without effective gas detection, there is very limited defense against a flammable gas leak developing into a fire or explosion that could cause loss of life or escalate to cascading failures of nearby vessels, piping, and equipment. While it is commonly recognized that some gas detectors are needed in a process plant containing flammable gas or volatile liquids, there is usually a question of how many are needed. The areas that need protection can be determined by dispersion modeling from potential leak sites. Within the areas that must be protected, the spacing of detectors (or alternatively, number of detectors) should be based on risk. Detector design can be characterized by spacing criteria, which is convenient for design - or alternatively by number of detectors, which is convenient for cost reporting. The factors that influence the risk are site-specific, including process conditions, chemical composition, number of potential leak sites, piping design standards, arrangement of plant equipment and structures, design of isolation and depressurization systems, and frequency of detector testing. Site-specific factors such as those just mentioned affect the size of flammable gas cloud that must be detected (within a specified probability) by the gas detection system. A probability of detection must be specified that gives a design with a tolerable risk of fires and explosions. To determine the optimum spacing of detectors, it is important to consider the probability that a detector will fail at some time and be inoperative until replaced or repaired. A cost-effective approach is based on the combined risk from a representative selection of leakage scenarios, rather than a worst-case evaluation. This means that probability and severity of leak consequences must be evaluated together. 
In marine and
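The interaction between detector count and detector failure described above can be sketched with a screening-level calculation: each detector's demand availability is de-rated by the average probability of failure on demand of a periodically proof-tested device, PFD ≈ lambda·tau/2. All numbers below are illustrative, not design values:

```python
def coverage_probability(n_detectors, p_detect_each, failure_rate, test_interval):
    """Probability that at least one detector in the array sees the gas cloud,
    with each detector de-rated by its average unavailability lambda*tau/2."""
    pfd = failure_rate * test_interval / 2.0     # avg. probability of failure on demand
    p_effective = p_detect_each * (1.0 - pfd)
    return 1.0 - (1.0 - p_effective) ** n_detectors

annual = coverage_probability(3, 0.6, failure_rate=0.05, test_interval=1.0)
quarterly = coverage_probability(3, 0.6, failure_rate=0.05, test_interval=0.25)
# More frequent proof testing raises coverage for the same detector spacing.
```

This is the sense in which spacing, detector count and test frequency trade off against one another: a specified probability of detection can be met either by denser spacing or by more frequent testing.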

  4. Space-Wise approach for airborne gravity data modelling

    Science.gov (United States)

    Sampietro, D.; Capponi, M.; Mansi, A. H.; Gatti, A.; Marchetti, P.; Sansò, F.

    2017-05-01

    Regional gravity field modelling by means of the remove-compute-restore procedure is nowadays widely applied in different contexts: it is the most used technique for regional gravimetric geoid determination, and it is also used in exploration geophysics to predict grids of gravity anomalies (Bouguer, free-air, isostatic, etc.), which are useful to understand and map geological structures in a specific region. Considering this last application, due to the required accuracy and resolution, airborne gravity observations are usually adopted. However, due to the relatively high acquisition velocity, presence of atmospheric turbulence, aircraft vibration, instrumental drift, etc., airborne data are usually contaminated by a very high observation error. For this reason, a proper procedure to filter the raw observations in both the low and high frequencies should be applied to recover valuable information. In this work, a software package to filter and grid raw airborne observations is presented: the proposed solution consists of a combination of an along-track Wiener filter and a classical Least Squares Collocation technique. Basically, the proposed procedure is an adaptation to airborne gravimetry of the Space-Wise approach, developed by Politecnico di Milano to process data coming from the ESA satellite mission GOCE. One of the main differences with respect to the satellite application of this approach is that, while in processing GOCE data the stochastic characteristics of the observation error can be considered well known a priori, in airborne gravimetry, due to the complex environment in which the observations are acquired, these characteristics are unknown and must be retrieved from the dataset itself. The presented solution is suited for airborne data analysis, making it possible to quickly filter and grid gravity observations in an easy way. Some innovative theoretical aspects, focusing in particular on the theoretical covariance modelling, are presented, too.
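The along-track filtering step can be illustrated with a textbook frequency-domain Wiener filter, W(f) = S_s/(S_s + S_n). The PSDs here are assumed known, whereas the point of the paper is precisely that for airborne data they must be estimated from the observations themselves; the track, noise level and PSDs below are all invented:

```python
import cmath, math, random

def wiener_filter(track, s_psd, n_psd):
    """Apply W_k = S_k / (S_k + N_k) in the frequency domain via a naive DFT."""
    n = len(track)
    spec = [sum(track[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]
    w = [s_psd[k] / (s_psd[k] + n_psd[k]) for k in range(n)]
    return [sum(w[k] * spec[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

random.seed(0)
n = 64
signal = [math.sin(2 * math.pi * 2 * t / n) for t in range(n)]   # smooth gravity signal
noisy = [s + random.gauss(0.0, 0.5) for s in signal]             # high observation noise
s_psd = [1.0 if k in (2, n - 2) else 1e-6 for k in range(n)]     # assumed signal PSD
n_psd = [0.25] * n                                               # assumed white noise PSD
filtered = wiener_filter(noisy, s_psd, n_psd)
```

The filter passes the frequency bins where the assumed signal power dominates and suppresses the rest, which is why misjudging the error's stochastic characteristics (the airborne difficulty highlighted above) directly degrades the recovered field.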

  5. Navigating in small-scale space: the role of landmarks and resource monitoring in understanding saddleback tamarin travel.

    Science.gov (United States)

    Garber, Paul A; Porter, Leila M

    2014-05-01

    Recent studies of spatial memory in wild nonhuman primates indicate that foragers may rely on a combination of navigational strategies to locate nearby and distant feeding sites. When traveling in large-scale space, tamarins are reported to encode spatial information in the form of a route-based map. However, little is known concerning how wild tamarins navigate in small-scale space (between feeding sites located at a distance of ≤60 m). Therefore, we collected data on range use, diet, and the angle and distance traveled to visit sequential feeding sites in the same group of habituated Bolivian saddleback tamarins (Saguinus fuscicollis weddelli) in 2009 and 2011. For 7-8 hr a day for 54 observation days, we recorded the location of the study group at 10 min intervals using a GPS unit. We then used GIS software to map and analyze the monkeys' movements and travel paths taken between feeding sites. Our results indicate that in small-scale space the tamarins relied on multiple spatial strategies. In 31% of cases travel was route-based. In the remaining 69% of cases, however, the tamarins appeared to attend to the spatial positions of one or more near-to-site landmarks to relocate feeding sites. In doing so they approached the same feeding site from a mean of 4.5 different directions, frequently utilized different arboreal pathways, and traveled approximately 30% longer than the straight-line distance. In addition, the monkeys' use of non-direct travel paths allowed them to monitor insect and fruit availability in areas within close proximity of currently used food patches. We conclude that the use of an integrated spatial strategy (route-based travel and attention to near-to-goal landmarks) provides tamarins with the opportunity to relocate productive feeding sites as well as monitor the availability of nearby resources in small-scale space. © 2013 Wiley Periodicals, Inc.
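Path directness of the kind reported above (travel roughly 30% longer than the straight line) is computed from successive GPS fixes. A minimal sketch with invented coordinates:

```python
import math

def detour_ratio(path):
    """Distance actually travelled divided by the straight-line distance
    between the first and last fixes (1.0 = perfectly direct travel)."""
    dist = lambda a, b: math.hypot(b[0] - a[0], b[1] - a[1])
    travelled = sum(dist(a, b) for a, b in zip(path, path[1:]))
    return travelled / dist(path[0], path[-1])

# Hypothetical GPS fixes (metres) along an indirect route to a feeding site
path = [(0.0, 0.0), (20.0, 15.0), (35.0, 10.0), (50.0, 30.0)]
ratio = detour_ratio(path)                 # ~1.13: 13% longer than straight-line
```

Computed over many site-to-site movements, the distribution of this ratio (together with the spread of approach bearings) distinguishes route-based travel from landmark-guided, non-direct travel.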

  6. Bridging the PSI Knowledge Gap: A Multi-Scale Approach

    Energy Technology Data Exchange (ETDEWEB)

    Wirth, Brian D. [Univ. of Tennessee, Knoxville, TN (United States)

    2015-01-08

    Plasma-surface interactions (PSI) pose an immense scientific hurdle in magnetic confinement fusion, and our present understanding of PSI in confinement environments is highly inadequate; indeed, a recent Fusion Energy Sciences Advisory Committee report found that 4 of its top 5 fusion knowledge gaps were related to PSI. The time is appropriate to develop a concentrated and synergistic science effort that would expand, exploit and integrate the wealth of laboratory ion-beam and plasma research, as well as exciting new computational tools, towards the goal of bridging the PSI knowledge gap. This effort would broadly advance plasma and material sciences, while providing critical knowledge towards progress in fusion PSI. This project involves the development of a Science Center focused on a new approach to PSI science; an approach that exploits access both to state-of-the-art PSI experiments and modeling and to confinement devices. The organizing principle is to develop synergistic experimental and modeling tools that treat the truly coupled multi-scale aspect of the PSI issues in confinement devices. This is motivated by the simple observation that while typical lab experiments and models allow independent manipulation of controlling variables, the confinement PSI environment is essentially self-determined, with few outside controls. This means that processes that may be treated independently in laboratory experiments, because they involve vastly different physical and time scales, will now affect one another in the confinement environment. Also, lab experiments cannot simultaneously match all exposure conditions found in confinement devices, typically forcing a linear extrapolation of lab results. At the same time, programmatic limitations prevent confinement experiments alone from answering many key PSI questions. The resolution to this problem is to usefully exploit access to PSI science in lab devices, while retooling our thinking from a linear and de

  7. A Systems Approach to Developing an Affordable Space Ground Transportation Architecture using a Commonality Approach

    Science.gov (United States)

    Garcia, Jerry L.; McCleskey, Carey M.; Bollo, Timothy R.; Rhodes, Russel E.; Robinson, John W.

    2012-01-01

    This paper presents a structured approach for achieving a compatible Ground System (GS) and Flight System (FS) architecture that is affordable, productive and sustainable. This paper is an extension of the paper titled "Approach to an Affordable and Productive Space Transportation System" by McCleskey et al. This paper integrates systems engineering concepts and operationally efficient propulsion system concepts into a structured framework for achieving GS and FS compatibility in the mid-term and long-term time frames. It also presents a functional and quantitative relationship for assessing system compatibility called the Architecture Complexity Index (ACI). This paper: (1) focuses on systems engineering fundamentals as it applies to improving GS and FS compatibility; (2) establishes mid-term and long-term spaceport goals; (3) presents an overview of transitioning a spaceport to an airport model; (4) establishes a framework for defining a ground system architecture; (5) presents the ACI concept; (6) demonstrates the approach by presenting a comparison of different GS architectures; and (7) presents a discussion on the benefits of using this approach with a focus on commonality.

  8. Biodiversity conservation in agriculture requires a multi-scale approach.

    Science.gov (United States)

    Gonthier, David J; Ennis, Katherine K; Farinas, Serge; Hsieh, Hsun-Yi; Iverson, Aaron L; Batáry, Péter; Rudolphi, Jörgen; Tscharntke, Teja; Cardinale, Bradley J; Perfecto, Ivette

    2014-09-22

    Biodiversity loss--one of the most prominent forms of modern environmental change--has been heavily driven by terrestrial habitat loss and, in particular, the spread and intensification of agriculture. Expanding agricultural land-use has led to the search for strong conservation strategies, with some suggesting that biodiversity conservation in agriculture is best maximized by reducing local management intensity, such as fertilizer and pesticide application. Others highlight the importance of landscape-level approaches that incorporate natural or semi-natural areas in landscapes surrounding farms. Here, we show that both of these practices are valuable to the conservation of biodiversity, and that either local or landscape factors can be most crucial to conservation planning depending on which types of organisms one wishes to save. We performed a quantitative review of 266 observations taken from 31 studies that compared the impacts of localized (within farm) management strategies and landscape complexity (around farms) on the richness and abundance of plant, invertebrate and vertebrate species in agro-ecosystems. While both factors significantly impacted species richness, the richness of sessile plants increased with less-intensive local management, but did not significantly respond to landscape complexity. By contrast, the richness of mobile vertebrates increased with landscape complexity, but did not significantly increase with less-intensive local management. Invertebrate richness and abundance responded to both factors. Our analyses point to clear differences in how various groups of organisms respond to differing scales of management, and suggest that preservation of multiple taxonomic groups will require multiple scales of conservation. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  9. A scale-space curvature matching algorithm for the reconstruction of complex proximal humeral fractures.

    Science.gov (United States)

    Vlachopoulos, Lazaros; Székely, Gábor; Gerber, Christian; Fürnstahl, Philipp

    2018-01-01

    The optimal surgical treatment of complex fractures of the proximal humerus is controversial. It is proven that the best results are obtained if an anatomical reduction of the fragments is achieved and, therefore, computer-assisted methods have been proposed for the reconstruction of the fractures. However, complex fractures of the proximal humerus are commonly accompanied by a relevant displacement of the fragments and, therefore, algorithms relying on the initial position of the fragments might fail. The state-of-the-art algorithm for complex fractures of the proximal humerus requires the acquisition of a CT scan of the (healthy) contralateral anatomy as a reconstruction template to address the displacement of the fragments. Pose-invariant fracture line based reconstruction algorithms have been applied successfully for reassembling broken vessels in archaeology. Nevertheless, the extraction of the fracture lines and the necessary computation of their curvature are susceptible to noise and make the application of previous approaches difficult or even impossible for bone fractures close to the joints, where the cortical layer is thin. We present a novel scale-space representation of the curvature, which permits calculating the correct alignment between bone fragments solely based on corresponding regions of the fracture lines. The fractures of the proximal humerus are automatically reconstructed based on iterative pairwise reduction of the fragments. The validation of the presented method was performed on twelve clinical cases, surgically treated after complex proximal humeral fracture, and by cadaver experiments. The accuracy of our approach was compared to the state-of-the-art algorithm for complex fractures of the proximal humerus. All reconstructions of the clinical cases resulted in an accurate approximation of the pre-traumatic anatomy. The accuracy of the reconstructed cadaver cases outperformed the current state-of-the-art algorithm.
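    The scale-space curvature idea above can be illustrated with a short sketch: the coordinates of a fragment outline are smoothed with Gaussians of increasing width and the curvature is recomputed at each scale, so that corresponding regions of two fracture lines can be matched at a scale where noise is suppressed. The function below is a minimal illustration, not the authors' implementation; all names and parameter values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def curvature_scale_space(x, y, sigmas):
    """Curvature of a closed 2D curve smoothed at several Gaussian scales.
    Returns an array of shape (len(sigmas), len(x)); open fracture lines
    would use mode='nearest' instead of 'wrap'."""
    rows = []
    for s in sigmas:
        # Smooth the coordinates at scale s, then differentiate numerically.
        xs = gaussian_filter1d(np.asarray(x, float), s, mode="wrap")
        ys = gaussian_filter1d(np.asarray(y, float), s, mode="wrap")
        dx, dy = np.gradient(xs), np.gradient(ys)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        # Signed curvature of a parametric curve (parametrization-invariant).
        kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
        rows.append(kappa)
    return np.array(rows)

# Sanity check: a circle of radius 10 has constant curvature 0.1 at any scale.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
css = curvature_scale_space(10 * np.cos(t), 10 * np.sin(t), sigmas=[1, 2, 4])
```

Rows of the returned map correspond to coarser and coarser versions of the curvature signal, which is the representation matched between fragments.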

  10. Visions for Space Exploration: ILS Issues and Approaches

    Science.gov (United States)

    Watson, Kevin

    2005-01-01

    This viewgraph presentation reviews some of the logistic issues that the Vision for Space Exploration will entail. There is a review of the vision and the timeline for the return to the moon that will lead to the first human exploration of Mars. The lessons learned from the International Space Station (ISS) and other such missions are also reviewed.

  11. A Database Approach to Distributed State Space Generation

    NARCIS (Netherlands)

    S.C.C. Blom (Stefan); B. Lisser (Bert); J.C. van de Pol (Jaco); M. Weber (Michael); J.C. van de Pol (Jaco)

    2007-01-01

    We study distributed state space generation on a cluster of workstations. It is explained why state space partitioning by a global hash function is problematic when states contain variables from unbounded domains, such as lists or other recursive datatypes.

  12. A Database Approach to Distributed State Space Generation

    NARCIS (Netherlands)

    Blom, Stefan; Lisser, Bert; van de Pol, Jan Cornelis; Weber, M.; Cerna, I.; Haverkort, Boudewijn R.H.M.

    2008-01-01

    We study distributed state space generation on a cluster of workstations. It is explained why state space partitioning by a global hash function is problematic when states contain variables from unbounded domains, such as lists or other recursive datatypes. Our solution is to introduce a database

  13. A Database Approach to Distributed State Space Generation

    NARCIS (Netherlands)

    Blom, Stefan; Lisser, Bert; van de Pol, Jan Cornelis; Weber, M.

    2007-01-01

    We study distributed state space generation on a cluster of workstations. It is explained why state space partitioning by a global hash function is problematic when states contain variables from unbounded domains, such as lists or other recursive datatypes. Our solution is to introduce a database

  14. A Database Approach to Distributed State-Space Generation

    NARCIS (Netherlands)

    Blom, Stefan; Cerna, I.; Haverkort, Boudewijn R.H.M.; Lisser, Bert; van de Pol, Jan Cornelis; Weber, M.

    2009-01-01

    We study distributed state-space generation on a cluster of workstations. It is explained why state-space partitioning by a global hash function is problematic when states contain variables from unbounded domains, such as lists or other recursive data types. Our solution is to introduce a database

  15. The rigged Hilbert space approach to the Gamow states

    OpenAIRE

    de la Madrid, Rafael

    2012-01-01

    We use the resonances of the spherical shell potential to present a thorough description of the Gamow (quasinormal) states within the rigged Hilbert space. It will be concluded that the natural setting for the Gamow states is a rigged Hilbert space whose test functions fall off at infinity faster than Gaussians.

  16. Geometric approach to evolution problems in metric spaces

    NARCIS (Netherlands)

    Stojković, Igor

    2011-01-01

    This PhD thesis contains four chapters where research material is presented. The second chapter presents the extension of the product formulas for semigroups induced by convex functionals from the classical Hilbert space setting to the setting of general CAT(0) spaces. In the third chapter, the

  17. Evaluating public space pedestrian accessibility: a GIS approach

    NARCIS (Netherlands)

    Morar, T.; Bertolini, L.; Radoslav, R.

    2013-01-01

    Public spaces are sources of quality of life in neighborhoods. Seeking to help professionals and municipalities assess how well a public space can be used by the community it serves, this paper presents a GIS-based methodology for evaluating its pedestrian accessibility. The Romanian city of

  18. A scale space based algorithm for automated segmentation of single shot tagged MRI of shearing deformation.

    Science.gov (United States)

    Sprengers, Andre M J; Caan, Matthan W A; Moerman, Kevin M; Nederveen, Aart J; Lamerichs, Rolf M; Stoker, Jaap

    2013-04-01

    This study proposes a scale space based algorithm for automated segmentation of single-shot tagged images of modest SNR. Furthermore, the algorithm was designed for the analysis of discontinuous or shearing types of motion, i.e. segmentation of broken tag patterns. The proposed algorithm utilises non-linear scale space for automatic segmentation of single-shot tagged images. The algorithm's ability to automatically segment tagged shearing motion was evaluated in a numerical simulation and in vivo. A typical shearing deformation was simulated in a Shepp-Logan phantom, allowing for quantitative evaluation of the algorithm's success rate as a function of both SNR and the amount of deformation. For a qualitative in vivo evaluation, tagged images showing deformations in the calf muscles and eye movement in a healthy volunteer were acquired. Both the numerical simulation and the in vivo tagged data demonstrated the algorithm's ability for automated segmentation of single-shot tagged MR, provided that the SNR of the images is above 10 and the amount of deformation does not exceed the tag spacing. The latter constraint can be met by adjusting the tag delay or the tag spacing. The scale space based algorithm for automatic segmentation of single-shot tagged MR enables the application of tagged MR to complex (shearing) deformation and the processing of datasets with relatively low SNR.
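    The abstract does not spell out its particular non-linear scale space scheme; as a hedged illustration of the general idea, one classical construction is Perona-Malik anisotropic diffusion, which smooths homogeneous regions while suppressing diffusion across strong gradients such as tag lines. All parameter values below are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Non-linear (anisotropic) diffusion, a classical way to build a
    non-linear scale space. dt <= 0.25 keeps the explicit scheme stable."""
    u = img.astype(float)

    def g(d):
        # Edge-stopping conductance: ~1 in flat areas, ~0 across
        # gradients much larger than kappa.
        return np.exp(-(d / kappa) ** 2)

    for _ in range(n_iter):
        # Nearest-neighbour differences (periodic boundaries via roll).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Running more iterations moves the image to coarser levels of the scale space; a constant image is a fixed point of the iteration, while noise within homogeneous regions is averaged away.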

  19. A scale space based algorithm for automated segmentation of single shot tagged MRI of shearing deformation

    NARCIS (Netherlands)

    Sprengers, Andre M. J.; Caan, Matthan W. A.; Moerman, Kevin M.; Nederveen, Aart J.; Lamerichs, Rolf M.; Stoker, Jaap

    2013-01-01

    This study proposes a scale space based algorithm for automated segmentation of single-shot tagged images of modest SNR. Furthermore the algorithm was designed for analysis of discontinuous or shearing types of motion, i.e. segmentation of broken tag patterns. The proposed algorithm utilises

  20. Application of Linear Scale Space and the Spatial Color Model in Microscopy

    NARCIS (Netherlands)

    van Osta, P.; Verdonck, K.; Bols, L.; Geysen, J.; Geusebroek, J.M.; ter Haar Romeny, B.

    2002-01-01

    Structural features and color are used in human vision to distinguish features in light microscopy. Taking these structural features and color into consideration in machine vision often enables a more robust segmentation than one based on intensity thresholding. Linear scale space theory and the spatial

  1. Scale Space Methods for Analysis of Type 2 Diabetes Patients' Blood Glucose Values

    Directory of Open Access Journals (Sweden)

    Stein Olav Skrøvseth

    2011-01-01

    Full Text Available We describe how scale space methods can be used for quantitative analysis of blood glucose concentrations from type 2 diabetes patients. Blood glucose values were recorded voluntarily by the patients over one full year as part of a self-management process, where the time and frequency of the recordings were decided by the patients. This makes the dataset unique in its extent, though with a large variation in the reliability of the recordings. Scale space and frequency space techniques are suited to revealing important features of unevenly sampled data, and are useful for identifying medically relevant features, both for use by patients as part of their self-management process and as information for physicians.
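    Because the recordings are unevenly sampled, a Gaussian kernel smoother evaluated at several bandwidths gives a simple scale-space representation without resampling onto a regular grid. The sketch below illustrates the general technique only, not the authors' code; the data and parameter values are synthetic assumptions.

```python
import numpy as np

def gaussian_scale_space(t, y, t_eval, sigmas):
    """Nadaraya-Watson kernel smoother of the series (t, y), evaluated at
    t_eval for each bandwidth sigma (in the same time units as t); suited
    to irregularly sampled data."""
    out = np.empty((len(sigmas), len(t_eval)))
    for i, s in enumerate(sigmas):
        # Gaussian weights between every evaluation point and every sample.
        w = np.exp(-0.5 * ((t_eval[:, None] - t[None, :]) / s) ** 2)
        out[i] = (w @ y) / w.sum(axis=1)
    return out

# Synthetic "glucose diary": irregular sample times over one year (days),
# a slow seasonal trend plus measurement noise.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 365, 500))
y = 6.0 + np.sin(2 * np.pi * t / 365) + 0.3 * rng.normal(size=t.size)
t_eval = np.linspace(0, 365, 100)
ss = gaussian_scale_space(t, y, t_eval, sigmas=[2.0, 10.0, 40.0])
```

Fine bandwidths track short-lived excursions; coarse bandwidths expose the long-term trend, which is the kind of multi-scale feature the analysis above extracts.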

  2. Exploring Multi-Scale Spatiotemporal Twitter User Mobility Patterns with a Visual-Analytics Approach

    Directory of Open Access Journals (Sweden)

    Junjun Yin

    2016-10-01

    Full Text Available Understanding human mobility patterns is of great importance for urban planning, traffic management, and even marketing campaigns. However, the capability of capturing detailed human movements with fine-grained spatial and temporal granularity is still limited. In this study, we extracted high-resolution mobility data from a collection of over 1.3 billion geo-located Twitter messages. Unlike mobile phone call records, which raise individual privacy concerns and carry access restrictions, the dataset was collected from publicly accessible Twitter data streams. In this paper, we employed a visual-analytics approach to studying multi-scale spatiotemporal Twitter user mobility patterns in the contiguous United States during the year 2014. Our approach included a scalable visual-analytics framework to deliver efficiency and scalability in filtering a large volume of geo-located tweets, modeling and extracting Twitter user movements, generating space-time user trajectories, and summarizing multi-scale spatiotemporal user mobility patterns. We performed a set of statistical analyses to understand Twitter user mobility patterns across multi-level spatial scales and temporal ranges. In particular, Twitter user mobility patterns measured by the displacements and radii of gyration of individuals revealed multi-scale or multi-modal Twitter user mobility patterns. By further studying such mobility patterns in different temporal ranges, we identified both consistency and seasonal fluctuations regarding the distance decay effects in the corresponding mobility patterns. At the same time, our approach provides a geo-visualization unit with an interactive 3D virtual globe web mapping interface for exploratory geo-visual analytics of the multi-level spatiotemporal Twitter user movements.
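    The two mobility measures named above, displacements and the radius of gyration, can be computed directly from a user's location history. A minimal sketch follows (planar coordinates for simplicity; the study itself works with geographic coordinates):

```python
import numpy as np

def radius_of_gyration(points):
    """Radius of gyration of one user's visited locations: the RMS distance
    of the points from their centroid (planar coordinates, e.g. km)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    return float(np.sqrt(((pts - centroid) ** 2).sum(axis=1).mean()))

def displacements(points):
    """Consecutive step lengths along a trajectory."""
    pts = np.asarray(points, dtype=float)
    return np.sqrt((np.diff(pts, axis=0) ** 2).sum(axis=1))

traj = [(0, 0), (3, 4), (0, 0), (3, 4)]   # toy commute between two places
rg = radius_of_gyration(traj)             # -> 2.5
steps = displacements(traj)               # -> [5.0, 5.0, 5.0]
```

Distributions of these two quantities over many users are what reveal the multi-modal mobility patterns and distance decay effects discussed above.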

  3. Space Station Freedom - Approaching the critical design phase

    Science.gov (United States)

    Kohrs, Richard H.; Huckins, Earle, III

    1992-01-01

    The status and future developments of the Space Station Freedom are discussed. To date detailed design drawings are being produced to manufacture SSF hardware. A critical design review (CDR) for the man-tended capability configuration is planned to be performed in 1993 under the SSF program. The main objective of the CDR is to enable the program to make a full commitment to proceed to manufacture parts and assemblies. NASA recently signed a contract with the Russian space company, NPO Energia, to evaluate potential applications of various Russian space hardware for on-going NASA programs.

  4. Phase space picture of quantum mechanics group theoretical approach

    CERN Document Server

    Kim, Y S

    1991-01-01

    This book covers the theory and applications of the Wigner phase space distribution function and its symmetry properties. The book explains why the phase space picture of quantum mechanics is needed, in addition to the conventional Schrödinger or Heisenberg picture. It is shown that the uncertainty relation can be represented more accurately in this picture. In addition, the phase space picture is shown to be the natural representation of quantum mechanics for modern optics and relativistic quantum mechanics of extended objects.

  5. Fractal electrodynamics via non-integer dimensional space approach

    Science.gov (United States)

    Tarasov, Vasily E.

    2015-09-01

    Using the recently suggested vector calculus for non-integer dimensional space, we consider electrodynamics problems in the isotropic case. This calculus allows us to describe fractal media in the framework of continuum models with non-integer dimensional space. We consider electric and magnetic fields of fractal media with charges and currents in the framework of continuum models with non-integer dimensional spaces. Applications of the fractal Gauss's law, the fractal Ampere's circuital law, the fractal Poisson equation for the electric potential, and an equation for the fractal stream of charges are suggested. Lorentz invariance and the speed of light in fractal electrodynamics are discussed. An expression for the effective refractive index of non-integer dimensional space is suggested.
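    For orientation, the following is a standard result of electrostatics in a space of non-integer dimension D, given here as an illustration of the kind of power law such models produce; it is not quoted from the paper. Applying the Gauss law over a (D-1)-sphere around a point charge q gives

```latex
\oint_{S_{D-1}(r)} \mathbf{E}\cdot d\mathbf{S} = \frac{q}{\varepsilon_0}
\quad\Longrightarrow\quad
E(r) = \frac{\Gamma(D/2)}{2\,\pi^{D/2}\,\varepsilon_0}\,\frac{q}{r^{\,D-1}},
```

which reduces to the familiar Coulomb field $E = q/(4\pi\varepsilon_0 r^2)$ at $D = 3$ and acquires a non-integer exponent $D-1$ for a fractal medium.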

  6. Deterring and Dissuading in Space: A Systems Approach

    National Research Council Canada - National Science Library

    Fox, Scott M

    2008-01-01

    Space capabilities have improved life in the United States and around the world, enhanced security, protected lives and the environment, sped information flow, served as an engine for economic growth...

  7. The XML approach to implementing space link extension service management

    Science.gov (United States)

    Tai, W.; Welz, G. A.; Theis, G.; Yamada, T.

    2001-01-01

    A feasibility study has been conducted at JPL, ESOC, and ISAS to assess the possible applications of the eXtensible Mark-up Language (XML) capabilities to the implementation of the CCSDS Space Link Extension (SLE) Service Management function.

  8. A new unified approach to determine geocentre motion using space geodetic and GRACE gravity data

    Science.gov (United States)

    Wu, Xiaoping; Kusche, Jürgen; Landerer, Felix W.

    2017-06-01

    Geocentre motion between the centre-of-mass of the Earth system and the centre-of-figure of the solid Earth surface is a critical signature of degree-1 components of global surface mass transport process that includes sea level rise, ice mass imbalance and continental-scale hydrological change. To complement GRACE data for complete-spectrum mass transport monitoring, geocentre motion needs to be measured accurately. However, current methods of geodetic translational approach and global inversions of various combinations of geodetic deformation, simulated ocean bottom pressure and GRACE data contain substantial biases and systematic errors. Here, we demonstrate a new and more reliable unified approach to geocentre motion determination using a recently formed satellite laser ranging based geocentric displacement time-series of an expanded geodetic network of all four space geodetic techniques and GRACE gravity data. The unified approach exploits both translational and deformational signatures of the displacement data, while the addition of GRACE's near global coverage significantly reduces biases found in the translational approach and spectral aliasing errors in the inversion.

  9. Dimensional reduction in momentum space and scale-invariant cosmological fluctuations

    Science.gov (United States)

    Amelino-Camelia, Giovanni; Arzano, Michele; Gubitosi, Giulia; Magueijo, João

    2013-11-01

    We adopt a framework where quantum gravity’s dynamical dimensional reduction of spacetime at short distances is described in terms of modified dispersion relations. We observe that by subjecting such models to a momentum-space diffeomorphism one obtains a “dual picture” with unmodified dispersion relations, but a modified measure of integration over momenta. We then find that the UV Hausdorff dimension of momentum space which can be inferred from this modified integration measure coincides with the short-distance spectral dimension of spacetime. This result sheds light on why scale-invariant fluctuations are obtained if the original model for two UV spectral dimensions is combined with Einstein gravity. By studying the properties of the inner product we derive the result that it is only in two energy-momentum dimensions that microphysical vacuum fluctuations are scale invariant. This is true ignoring gravity, but then we find that if Einstein gravity is postulated in the original frame, in the dual picture gravity switches off, since all matter becomes conformally coupled. We argue that our findings imply that the following concepts are closely connected: scale invariance of vacuum quantum fluctuations, conformal invariance of the gravitational coupling, UV reduction to spectral dimension two in position space, and UV reduction to Hausdorff dimension two in energy-momentum space.

  10. Docking optimization, variance and promiscuity for large-scale drug-like chemical space using high performance computing architectures.

    Science.gov (United States)

    Trager, Richard E; Giblock, Paul; Soltani, Sherwin; Upadhyay, Amit A; Rekapalli, Bhanu; Peterson, Yuri K

    2016-10-01

    There is a continuing need to hasten and improve protein-ligand docking to facilitate the next generation of drug discovery. As the drug-like chemical space reaches into the billions of molecules, increasingly powerful computer systems are required to probe, as well as tackle, the software engineering challenges needed to adapt existing docking programs to use next-generation massively parallel processing systems. We demonstrate docking setup using the wrapper code approach to optimize the DOCK program for large-scale computation as well as docking analysis using variance and promiscuity as examples. Wrappers provide faster docking speeds when compared with the naive multi-threading system MPI-DOCK, making future endeavors in large-scale docking more feasible; in addition, eliminating highly variant or promiscuous compounds will make databases more useful. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. A new approach to the analysis of the phase space of f(R)-gravity

    CERN Document Server

    Carloni, Sante

    2015-01-01

    We propose a new dynamical system formalism for the analysis of f(R) cosmologies. The new approach eliminates the need for cumbersome inversions to close the dynamical system and allows the analysis of the phase space of f(R)-gravity models which cannot be investigated using the standard technique. Differently from previously proposed techniques, the new method is constructed in such a way as to associate scale factors with the fixed points, which contain four integration constants (i.e. solutions of fourth order differential equations). In this way new light is shed on the physical meaning of the fixed points. We apply this technique to some f(R) Lagrangians relevant for inflationary and dark energy models.

  12. Hierarchical Stereo Matching in Two-Scale Space for Cyber-Physical System.

    Science.gov (United States)

    Choi, Eunah; Lee, Sangyoon; Hong, Hyunki

    2017-07-21

    Dense disparity map estimation from a high-resolution stereo image is a very difficult problem in terms of both matching accuracy and computation efficiency; an exhaustive disparity search at full resolution is computationally prohibitive. In general, examining more pixels in the stereo view results in more ambiguous correspondences. When a high-resolution image is down-sampled, the high-frequency components of the fine-scaled image are at risk of disappearing in the coarse-resolution image. Furthermore, if erroneous disparity estimates caused by missing high-frequency components are propagated across scale space, ultimately, false disparity estimates are obtained. To solve these problems, we introduce an efficient hierarchical stereo matching method in two-scale space. This method applies disparity estimation to the reduced-resolution image, and the disparity result is then up-sampled to the original resolution. The disparity estimation values of the high-frequency (or edge component) regions of the full-resolution image are combined with the up-sampled disparity results. In this study, we extracted the high-frequency areas from the scale-space representation by using a difference of Gaussians (DoG), or found edge components using a Canny operator. Then, edge-aware disparity propagation was used to refine the disparity map. The experimental results show that the proposed algorithm outperforms previous methods.
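    The DoG-based extraction of high-frequency regions and the coarse-to-fine fusion described above can be sketched as follows. The scales, threshold, and the simple replacement rule for fusing the two disparity maps are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def high_frequency_mask(img, sigma1=1.0, sigma2=2.0, thresh=0.02):
    """Difference-of-Gaussians band-pass: flags edge/high-frequency pixels
    whose detail would be lost by down-sampling."""
    dog = gaussian_filter(img, sigma1) - gaussian_filter(img, sigma2)
    return np.abs(dog) > thresh

def fuse_disparity(coarse_disp, fine_disp, mask):
    """Up-sample the half-resolution disparity map and overwrite it with
    the full-resolution estimate inside high-frequency regions."""
    up = zoom(coarse_disp, 2, order=1) * 2  # disparity scales with resolution
    return np.where(mask, fine_disp, up)
```

A flat image yields an empty mask, so the up-sampled coarse disparities are kept everywhere; near edges the full-resolution estimates take over, which is the division of labour the two-scale method relies on.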

  13. Hierarchical Stereo Matching in Two-Scale Space for Cyber-Physical System

    Directory of Open Access Journals (Sweden)

    Eunah Choi

    2017-07-01

    Full Text Available Dense disparity map estimation from a high-resolution stereo image is a very difficult problem in terms of both matching accuracy and computation efficiency; an exhaustive disparity search at full resolution is computationally prohibitive. In general, examining more pixels in the stereo view results in more ambiguous correspondences. When a high-resolution image is down-sampled, the high-frequency components of the fine-scaled image are at risk of disappearing in the coarse-resolution image. Furthermore, if erroneous disparity estimates caused by missing high-frequency components are propagated across scale space, ultimately, false disparity estimates are obtained. To solve these problems, we introduce an efficient hierarchical stereo matching method in two-scale space. This method applies disparity estimation to the reduced-resolution image, and the disparity result is then up-sampled to the original resolution. The disparity estimation values of the high-frequency (or edge component) regions of the full-resolution image are combined with the up-sampled disparity results. In this study, we extracted the high-frequency areas from the scale-space representation by using a difference of Gaussians (DoG), or found edge components using a Canny operator. Then, edge-aware disparity propagation was used to refine the disparity map. The experimental results show that the proposed algorithm outperforms previous methods.

  14. Fractal electrodynamics via non-integer dimensional space approach

    Energy Technology Data Exchange (ETDEWEB)

    Tarasov, Vasily E., E-mail: tarasov@theory.sinp.msu.ru

    2015-09-25

    Using the recently suggested vector calculus for non-integer dimensional space, we consider electrodynamics problems in the isotropic case. This calculus allows us to describe fractal media in the framework of continuum models with non-integer dimensional space. We consider electric and magnetic fields of fractal media with charges and currents in the framework of continuum models with non-integer dimensional spaces. Applications of the fractal Gauss's law, the fractal Ampere's circuital law, the fractal Poisson equation for the electric potential, and an equation for the fractal stream of charges are suggested. Lorentz invariance and the speed of light in fractal electrodynamics are discussed. An expression for the effective refractive index of non-integer dimensional space is suggested. - Highlights: • Electrodynamics of fractal media is described by non-integer dimensional spaces. • Applications of the fractal Gauss's and Ampere's laws are suggested. • The fractal Poisson equation and an equation for the fractal stream of charges are considered.

  15. Large-Scale Demonstration of Liquid Hydrogen Storage with Zero Boiloff for In-Space Applications

    Science.gov (United States)

    Hastings, L. J.; Bryant, C. B.; Flachbart, R. H.; Holt, K. A.; Johnson, E.; Hedayat, A.; Hipp, B.; Plachta, D. W.

    2010-01-01

    Cryocooler and passive insulation technology advances have substantially improved prospects for zero-boiloff cryogenic storage. Therefore, a cooperative effort by NASA's Ames Research Center, Glenn Research Center, and Marshall Space Flight Center (MSFC) was implemented to develop zero-boiloff concepts for in-space cryogenic storage. Described herein is one program element - a large-scale, zero-boiloff demonstration using the MSFC multipurpose hydrogen test bed (MHTB). A commercial cryocooler was interfaced with an existing MHTB spray bar mixer and insulation system in a manner that enabled a balance between incoming and extracted thermal energy.

  16. Lie–Hamilton systems on curved spaces: a geometrical approach

    Science.gov (United States)

    Herranz, Francisco J.; de Lucas, Javier; Tobolski, Mariusz

    2017-12-01

    A Lie–Hamilton system is a nonautonomous system of first-order ordinary differential equations describing the integral curves of a t-dependent vector field taking values in a finite-dimensional Lie algebra, a Vessiot–Guldberg Lie algebra, of Hamiltonian vector fields relative to a Poisson structure. Its general solution can be written as an autonomous function, the superposition rule, of a generic finite family of particular solutions and a set of constants. We pioneer the study of Lie–Hamilton systems on Riemannian spaces (sphere, Euclidean and hyperbolic plane), pseudo-Riemannian spaces (anti-de Sitter, de Sitter, and Minkowski spacetimes) as well as on semi-Riemannian spaces (Newtonian spacetimes). Their corresponding constants of motion and superposition rules are obtained explicitly in a geometric way. This work extends the (graded) contraction of Lie algebras to a contraction procedure for Lie algebras of vector fields, Hamiltonian functions, and related symplectic structures, invariants, and superposition rules.

  17. Correlations and clustering in a scale-free network in Euclidean space

    Indian Academy of Sciences (India)

    Internet has with other networks is that a large part of the Internet is embedded in Euclidean space by the … network, it always gets a connection to the existing local nodes of the Internet network. In fact one would … [6] R. Pastor-Satorras and A. Vespignani, Evolution and Structure of the Internet: A Statistical Physics Approach …

  18. Stochastic dynamics of large-scale inflation in de Sitter space

    Science.gov (United States)

    Buryak, O. E.

    1996-02-01

    In this paper we derive exact quantum Langevin equations for stochastic dynamics of large-scale inflation in de Sitter space. These quantum Langevin equations are the equivalent of the Wigner equation and are described by a system of stochastic differential equations. We present a formula for the calculation of the expectation value of a quantum operator whose Weyl symbol is a function of the large-scale inflation scalar field and its time derivative. The quantum expectation value is calculated as a mathematical expectation value over a stochastic process in an extended phase space, where the additional coordinate plays the role of a stochastic phase. The unique solution is obtained for the Cauchy problem for the Wigner equation for large-scale inflation. The stationary solution for the Wigner equation is found for an arbitrary potential. It is shown that the large-scale inflation scalar field in de Sitter space behaves as a quantum one-dimensional dissipative system, which supports the earlier results of Graziani and of Nakao, Nambu, and Sasaki. But the analogy with a one-dimensional model of the quantum linearly damped anharmonic oscillator is not complete: the difference arises from the new time-dependent commutation relation for the large-scale field and its time derivative. It is found that, for the large-scale inflation scalar field, the large time asymptotics is equal to the "classical limit". For the large time limit the quantum Langevin equations are just the classical stochastic Langevin equations (only the stationary state is defined by the quantum field theory).

  19. Combining local scaling and global methods to detect soil pore space

    Science.gov (United States)

    Martin-Sotoca, Juan Jose; Saa-Requejo, Antonio; Grau, Juan B.; Tarquis, Ana M.

    2017-04-01

The characterization of the spatial distribution of soil pore structures is essential for obtaining parameters that influence several models of water flow and/or microbial growth processes. The first step in pore structure characterization is obtaining soil images that best approximate reality. Over the last decade, major technological advances in X-ray computed tomography (CT) have allowed the investigation and reconstruction of natural porous media architectures at very fine scales. The subsequent step is delimiting the pore structure (pore space) in the CT soil images by applying a threshold. CT-scan images often show low contrast at the solid-void interface, which makes this step difficult. Different delimitation methods can result in different spatial distributions of pores, influencing the parameters used in the models. Recently, a new local segmentation method using local greyscale value (GV) concentration variabilities, based on fractal concepts, was presented. This method creates singularity maps to measure the GV concentration at each point. The C-A method was combined with the singularity map approach (Singularity-CA method) to define local thresholds that can be applied to binarize CT images. Comparing this method with classical methods, such as Otsu and Maximum Entropy, we observed that more pores can be detected, mainly due to its ability to amplify anomalous concentrations. However, it delineated many small pores that were incorrect. In this work, we present an improved version of the Singularity-CA method that avoids this problem, essentially by combining it with the classical global methods. References Martín-Sotoca, J.J., A. Saa-Requejo, J.B. Grau, A.M. Tarquis. New segmentation method based on fractal properties using singularity maps. Geoderma, 287, 40-53, 2017. Martín-Sotoca, J.J, A. Saa-Requejo, J.B. Grau, A.M. Tarquis. Local 3D segmentation of soil pore space based on fractal properties using singularity
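One of the classical global methods the Singularity-CA approach is compared against, Otsu's thresholding, can be sketched in a few lines. This is a generic illustration on hypothetical toy data, not the authors' pipeline or real CT values.

```python
# Otsu's method: pick the grey level that maximises between-class variance.

def otsu_threshold(pixels, levels=256):
    """Return the grey value t that best separates pixels into <=t and >t."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(levels):
        w0 += hist[t]                  # pixels in class 0 (values <= t)
        if w0 == 0:
            continue
        w1 = total - w0                # pixels in class 1 (values > t)
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Hypothetical bimodal data: dark "pores" (~20) in a bright matrix (~200).
image = [20, 25, 22, 200, 210, 205, 21, 198, 202, 23, 207, 199]
t = otsu_threshold(image)
pores = [p for p in image if p <= t]   # the segmented pore space
```

The threshold lands between the two grey-value clusters, which is exactly the behaviour that fails when the solid-void interface has low contrast.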

  20. Quantitative approach to measuring the cerebrospinal fluid space with CT

    Energy Technology Data Exchange (ETDEWEB)

    Zeumer, H.; Hacke, W.; Hartwich, P.

    1982-01-01

A method for measuring the subarachnoid space using an independent CT evaluation unit is described. Normal values have been calculated for patients according to age, and three examples are presented demonstrating a reversible decrease of brain volume in patients suffering from anorexia nervosa and chronic alcoholism.

  1. A Conceptual Approach for Optimising Bus Stop Spacing

    Science.gov (United States)

    Johar, Amita; Jain, S. S.; Garg, P. k.

    2017-06-01

An efficient public transportation system is essential for any country. The growth, development and shape of urban areas are mainly due to the availability of good transportation (Shah et al. in Inst Town Plan India J 5(3):50-59, 1). In developing countries like India, travel by local bus within a city is very common. Accidents, congestion, pollution and the appropriate location of bus stops are major problems arising in metropolitan cities. Among all the metropolitan cities in India, Delhi has the highest growth rate of population and vehicles. It is therefore important to adopt efficient and effective ways to improve mobility in metropolitan cities in order to overcome these problems and to reduce the number of private vehicles on the road. The primary objective of this paper is to present a methodology for developing a model for optimum bus stop spacing (OBSS). It describes the evaluation of an existing urban bus route, data collection, development of a model for optimizing the urban bus route, and application of the model. In this work, the bus passenger generalized cost method is used to optimize the spacing between bus stops. In the first phase, the applicability of the model was evaluated using data for an urban bus route of the Delhi Transport Corporation (DTC) in an Excel sheet; later, it is proposed to implement the model as a C++ program. The developed model is expected to be useful to transport planners for the rational design of bus stop spacing, saving travel time and generalized operating cost. The analysis indicates that the spacing between bus stops should be between 250 and 500 m. The proposed spacing of bus stops also takes into account that stops should not be placed close to metro/rail stations, entries or exits of flyovers, or traffic signals.
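The generalized-cost trade-off behind optimal stop spacing can be sketched as follows: walking time falls as stops get closer together, while in-vehicle delay from stopping grows. All parameter values below are hypothetical illustrations, not the paper's calibrated DTC data.

```python
# Total generalized cost per passenger as a function of stop spacing (metres).

def generalized_cost(spacing_m,
                     walk_speed=1.2,      # m/s, assumed walking speed
                     trip_length=5000.0,  # m ridden per passenger (assumed)
                     dwell_s=20.0,        # delay each stop adds (assumed)
                     value_walk=2.0):     # walking time weighted 2x riding time
    walk_time = (spacing_m / 4.0) / walk_speed   # mean access walk ~ spacing/4
    stops = trip_length / spacing_m              # stops passed during the trip
    delay = stops * dwell_s
    return value_walk * walk_time + delay

# Grid search over candidate spacings, as a stand-in for the paper's model.
best = min(range(100, 1001, 10), key=generalized_cost)
```

With these illustrative numbers the optimum falls at 490 m, i.e. in the same few-hundred-metre range the paper reports; the balance point shifts with the assumed walk speed and dwell time.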

  2. Modelling an industrial anaerobic granular reactor using a multi-scale approach

    DEFF Research Database (Denmark)

    Feldman, Hannah; Flores Alsina, Xavier; Ramin, Pedram

    2017-01-01

The objective of this paper is to show the results of an industrial project dealing with modelling of anaerobic digesters. A multi-scale mathematical approach is developed to describe reactor hydrodynamics, granule growth/distribution and microbial competition/inhibition for substrate/space within ... the biofilm. The main biochemical and physico-chemical processes in the model are based on the Anaerobic Digestion Model No 1 (ADM1) extended with the fate of phosphorus (P), sulfur (S) and ethanol (Et-OH). Wastewater dynamic conditions are reproduced and data frequency increased using the Benchmark ... of methane, carbon dioxide and sulfide and the potential formation of precipitates within the bulk (average deviation between computer simulations and measurements for both #D1, #D2 is around 10%). Model predictions suggest a stratified structure within the granule which is the result of: 1) applied loading ...

  3. Quantum scaling in many-body systems an approach to quantum phase transitions

    CERN Document Server

    Continentino, Mucio

    2017-01-01

Quantum phase transitions are strongly relevant in a number of fields, ranging from condensed matter to cold atom physics and quantum field theory. This book, now in its second edition, approaches the problem of quantum phase transitions from a new and unifying perspective. Topics addressed include the concepts of scale and time invariance and their significance for quantum criticality, as well as brand new chapters on superfluid and superconductor quantum critical points, and quantum first order transitions. The renormalisation group in real and momentum space is also established as the proper language to describe the behaviour of systems close to a quantum phase transition. These phenomena introduce a number of theoretical challenges which are of major importance for driving new experiments. Strongly motivated by and oriented towards understanding experimental results, this is an excellent text for graduates, as well as for theorists, experimentalists and those with an interest in quantum criticality.

  4. An Inquiry-Based Approach to Teaching Space Weather to Undergraduate Non-Science Majors

    Science.gov (United States)

    Cade, W. B., III

    2016-12-01

    Undergraduate Space Weather education is an important component of creating a society that is knowledgeable about space weather and its societal impacts. The space physics community has made great strides in providing academic education for students, typically physics and engineering majors, who are interested in pursuing a career in the space sciences or space weather. What is rarely addressed, however, is providing a broader space weather education to undergraduate students as a whole. To help address this gap, I have created an introductory space weather course for non-science majors, with the idea of expanding exposure to space weather beyond the typical physics and engineering students. The philosophy and methodologies used in this course will be presented, as well as the results of the first attempts to teach it. Using an approach more tailored to the non-scientist, courses such as this can be an effective means of broadening space weather education and outreach.

  5. Beyond consistency test of gravity with redshift-space distortions at quasilinear scales

    OpenAIRE

    Taruya, Atsushi; Koyama, Kazuya; Hiramatsu, Takashi; Oka, Akira

    2014-01-01

    Redshift-space distortions (RSDs) offer an attractive method to measure the growth of cosmic structure on large scales, and combining with the measurement of the cosmic expansion history, they can be used as cosmological tests of gravity. With the advent of future galaxy redshift surveys aiming at precisely measuring the RSD, an accurate modeling of RSD going beyond linear theory is a critical issue in order to detect or disprove small deviations from general relativity (GR). While several im...
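The growth-rate side of such an RSD gravity test can be sketched with the common parameterisation f(z) = Ω_m(z)^γ, where γ ≈ 0.55 reproduces GR, so a measured γ far from 0.55 would signal a deviation. This is a generic illustration (flat ΛCDM with Ω_m0 = 0.3 assumed), not the paper's quasilinear model.

```python
# Linear growth rate in the Omega_m(z)**gamma parameterisation.

def omega_m(z, om0=0.3):
    """Matter density parameter at redshift z for a flat LCDM background."""
    a = 1.0 / (1.0 + z)
    return om0 / (om0 + (1.0 - om0) * a ** 3)

def growth_rate(z, gamma=0.55, om0=0.3):
    """f(z) = Omega_m(z)**gamma; gamma ~ 0.55 corresponds to GR."""
    return omega_m(z, om0) ** gamma

f0, f1 = growth_rate(0.0), growth_rate(1.0)  # growth is faster at higher z
```

Comparing such a prediction against the f(z) inferred from RSD multipoles is the consistency test that the quasilinear modelling in the paper aims to sharpen.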

  6. Persistence Lenses: Segmentation, Simplification, Vectorization, Scale Space and Fractal Analysis of Images

    OpenAIRE

    Brooks, Martin

    2016-01-01

A persistence lens is a hierarchy of disjoint upper and lower level sets of a continuous luminance image's Reeb graph. The boundary components of a persistence lens's interior components are Jordan curves that serve as a hierarchical segmentation of the image, and may be rendered as vector graphics. A persistence lens determines a varilet basis for the luminance image, in which image simplification is realized by subspace projection. Image scale space, and image fractal analysis, result fro...

  7. SparseLeap: Efficient Empty Space Skipping for Large-Scale Volume Rendering

    KAUST Repository

    Hadwiger, Markus

    2017-08-28

    Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activate a different set of objects, which makes it unfeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.

  8. SparseLeap: Efficient Empty Space Skipping for Large-Scale Volume Rendering.

    Science.gov (United States)

    Hadwiger, Markus; Al-Awami, Ali K; Beyer, Johanna; Agus, Marco; Pfister, Hanspeter

    2018-01-01

    Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activate a different set of objects, which makes it unfeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.
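The core SparseLeap idea can be sketched in 1D (our simplification, not the paper's GPU implementation): the [t0, t1] intervals where bounding boxes of non-empty objects intersect a ray are merged once into a ray segment list, and the ray-caster then samples only inside those segments, leaping over empty space without hierarchy traversal.

```python
# Build a per-ray segment list from object bounding-box intervals.

def merge_segments(intervals):
    """Merge overlapping [t0, t1] intervals into a sorted segment list."""
    merged = []
    for t0, t1 in sorted(intervals):
        if merged and t0 <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], t1)   # extend current segment
        else:
            merged.append([t0, t1])                   # start a new segment
    return merged

def sample_positions(segments, step):
    """Positions a ray-caster would actually sample (non-empty space only)."""
    out = []
    for t0, t1 in segments:
        t = t0
        while t <= t1:
            out.append(round(t, 6))
            t += step
    return out

# Three hypothetical object boxes along one ray; two of them overlap.
segs = merge_segments([(0.1, 0.3), (0.25, 0.4), (0.8, 0.9)])
```

Here the ray leaps directly from t = 0.4 to t = 0.8; in the paper this per-pixel segment list is produced by rasterizing the bounding boxes before the ray-casting stage.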

  9. Understanding scaling through history-dependent processes with collapsing sample space.

    Science.gov (United States)

    Corominas-Murtra, Bernat; Hanel, Rudolf; Thurner, Stefan

    2015-04-28

History-dependent processes are ubiquitous in natural and social systems. Many such stochastic processes, especially those that are associated with complex systems, become more constrained as they unfold, meaning that their sample space, or their set of possible outcomes, reduces as they age. We demonstrate that these sample-space-reducing (SSR) processes necessarily lead to Zipf's law in the rank distributions of their outcomes. We show that by adding noise to SSR processes the corresponding rank distributions remain exact power laws, p(x) ∼ x^(−λ), where the exponent directly corresponds to the mixing ratio of the SSR process and noise. This allows us to give a precise meaning to the scaling exponent in terms of the degree to which a given process reduces its sample space as it unfolds. Noisy SSR processes further allow us to explain a wide range of scaling exponents in frequency distributions ranging from α = 2 to ∞. We discuss several applications showing how SSR processes can be used to understand Zipf's law in word frequencies, and how they are related to diffusion processes in directed networks, or to aging processes such as fragmentation. SSR processes provide a new alternative for understanding the origin of scaling in complex systems without recourse to multiplicative, preferential, or self-organized critical processes.
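The Zipf result is easy to reproduce with a minimal simulation (our sketch, not the authors' code): start at a random state, then repeatedly jump to a state drawn uniformly below the current one until state 1 is reached. The visit frequency of state x then scales like 1/x.

```python
import random

# Simulate many runs of a sample-space-reducing process and count visits.

def ssr_visits(n_states=1000, runs=20000, seed=42):
    rng = random.Random(seed)
    visits = [0] * (n_states + 1)
    for _ in range(runs):
        x = rng.randint(1, n_states)     # random starting state
        visits[x] += 1
        while x > 1:
            x = rng.randint(1, x - 1)    # sample space shrinks every step
            visits[x] += 1
    return visits

v = ssr_visits()
# Zipf's law: visits to state x fall off like 1/x, so v[1] ~ 10 * v[10].
```

Every run visits state 1 exactly once, and the visit probability of state x works out to roughly runs/x, i.e. a rank distribution with exponent 1.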

  10. Scale-space for empty catheter segmentation in PCI fluoroscopic images.

    Science.gov (United States)

    Bacchuwar, Ketan; Cousty, Jean; Vaillant, Régis; Najman, Laurent

    2017-07-01

In this article, we present a method for empty guiding catheter segmentation in fluoroscopic X-ray images. The guiding catheter is a commonly visible landmark, and its segmentation is an important and difficult building block for Percutaneous Coronary Intervention (PCI) procedure modeling. In a number of clinical situations, the catheter is empty and appears as a low-contrast structure with two parallel and partially disconnected edges. To segment it, we work on the level-set scale-space of the image, the min tree, to extract curve blobs. We then propose a novel structural scale-space, a hierarchy built on these curve blobs. The deep connected component, i.e. the cluster of curve blobs on this hierarchy, that maximizes the likelihood of being an empty catheter is retained as the final segmentation. We evaluate the performance of the algorithm on a database of 1250 fluoroscopic images from 6 patients. As a result, we obtain very good qualitative and quantitative segmentation performance, with mean precision and recall of 80.48% and 63.04%, respectively. We develop a novel structural scale-space to segment a structured object, the empty catheter, in challenging situations where the information content is very sparse in the images. Fully-automatic empty catheter segmentation in X-ray fluoroscopic images is an important and preliminary step in PCI procedure modeling, as it aids in tagging the arrival and removal locations of other interventional tools.

  11. Embodied Space: a Sensorial Approach to Spatial Experience

    Science.gov (United States)

    Durão, Maria João

    2009-03-01

    A reflection is presented on the significance of the role of the body in the interpretation and future creation of spatial living structures. The paper draws on the body as cartography of sensorial meaning that includes vision, touch, smell, hearing, orientation and movement to discuss possible relationships with psychological and sociological parameters of 'sensorial space'. The complex dynamics of body-space is further explored from the standpoint of perceptual variables such as color, light, materialities, texture and their connections with design, technology, culture and symbology. Finally, the paper discusses the integration of knowledge and experimentation in the design of future habitats where body-sensitive frameworks encompass flexibility, communication, interaction and cognitive-driven solutions.

  12. A multi-spacecraft formation approach to space debris surveillance

    Science.gov (United States)

    Felicetti, Leonard; Emami, M. Reza

    2016-10-01

This paper proposes a new mission concept devoted to the identification and tracking of space debris through observations made by multiple spacecraft. Specifically, a formation of spacecraft has been designed taking into account the characteristics and requirements of the utilized optical sensors as well as the constraints imposed by sun illumination and visibility conditions. The debris observations are then shared among the team of spacecraft, and processed on board a 'hosting leader' to estimate the debris motion by means of Kalman filtering techniques. The primary contribution of this paper resides in the application of a distributed coordination architecture, which provides an autonomous and robust ability to dynamically form spacecraft teams once the target has been detected, and to dynamically build a processing network for the orbit determination of space debris. The team performance, in terms of accuracy, readiness and number of detected objects, is discussed through numerical simulations.
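The estimation step can be illustrated with a 1D constant-velocity Kalman filter (our toy model; the mission uses full orbital dynamics and multi-sensor fusion). The state is (position, velocity) and the measurements are noisy positions from the observers.

```python
# Minimal constant-velocity Kalman filter; all noise levels are assumed.

def kalman_track(measurements, dt=1.0, q=1e-4, r=0.25):
    """Return the final (position, velocity) estimate."""
    x, v = measurements[0], 0.0           # state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]          # estimate covariance
    for z in measurements[1:]:
        # Predict: x' = F x, P' = F P F^T + Q with F = [[1, dt], [0, 1]].
        x, v = x + v * dt, v
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with position measurement z (H = [1, 0], noise variance r).
        s = P[0][0] + r
        k0, k1 = P[0][0] / s, P[1][0] / s
        y = z - x                          # innovation
        x, v = x + k0 * y, v + k1 * y
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x, v

# A target drifting at 2 units per step is recovered from its track.
x_est, v_est = kalman_track([2.0 * k for k in range(30)])
```

With consistent measurements the velocity estimate converges to the true drift rate; in the mission concept the same predict/update cycle runs on the hosting leader over the shared observations.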

  13. The STEREO Mission: A New Approach to Space Weather Research

    Science.gov (United States)

    Kaiser, michael L.

    2006-01-01

With the launch of the twin STEREO spacecraft in July 2006, a new capability will exist for both real-time space weather predictions and for advances in space weather research. Whereas previous spacecraft monitors of the sun such as ACE and SOHO have been essentially on the sun-Earth line, the STEREO spacecraft will be in 1 AU orbits around the sun on either side of Earth and will be viewing the solar activity from distinctly different vantage points. As seen from the sun, the two spacecraft will separate at a rate of 45 degrees per year, with Earth bisecting the angle. The instrument complement on the two spacecraft will consist of a package of optical instruments capable of imaging the sun in the visible and ultraviolet from essentially the surface to 1 AU and beyond, a radio burst receiver capable of tracking solar eruptive events from an altitude of 2-3 Rs to 1 AU, and a comprehensive set of fields and particles instruments capable of measuring in situ solar events such as interplanetary magnetic clouds. In addition to normal daily recorded data transmissions, each spacecraft is equipped with a real-time beacon that will provide 1 to 5 minute snapshots or averages of the data from the various instruments. This beacon data will be received by NOAA and NASA tracking stations and then relayed to the STEREO Science Center located at Goddard Space Flight Center in Maryland where the data will be processed and made available within a goal of 5 minutes of receipt on the ground. With STEREO's instrumentation and unique view geometry, we believe considerable improvement can be made in space weather prediction capability as well as improved understanding of the three dimensional structure of solar transient events.

  14. Learning the Task Management Space of an Aircraft Approach Model

    Science.gov (United States)

    Krall, Joseph; Menzies, Tim; Davies, Misty

    2014-01-01

    Validating models of airspace operations is a particular challenge. These models are often aimed at finding and exploring safety violations, and aim to be accurate representations of real-world behavior. However, the rules governing the behavior are quite complex: nonlinear physics, operational modes, human behavior, and stochastic environmental concerns all determine the responses of the system. In this paper, we present a study on aircraft runway approaches as modeled in Georgia Tech's Work Models that Compute (WMC) simulation. We use a new learner, Genetic-Active Learning for Search-Based Software Engineering (GALE) to discover the Pareto frontiers defined by cognitive structures. These cognitive structures organize the prioritization and assignment of tasks of each pilot during approaches. We discuss the benefits of our approach, and also discuss future work necessary to enable uncertainty quantification.

  15. A Random Finite Set Approach to Space Junk Tracking and Identification

    Science.gov (United States)

    2014-09-03

Final report (31 Jan 2013 - 29 Apr 2014): A Random Finite Set Approach to Space Junk Tracking and Identification. Ba-Ngu Vo, Ba-Tuong Vo, Department of ...

  16. Discrete phase-space approach to mutually orthogonal Latin squares

    Science.gov (United States)

    Gaeta, Mario; Di Matteo, Olivia; Klimov, Andrei B.; de Guise, Hubert

    2014-10-01

    We show there is a natural connection between Latin squares and commutative sets of monomials defining geometric structures in finite phase-space of prime power dimensions. A complete set of such monomials defines a mutually unbiased basis (MUB) and may be associated with a complete set of mutually orthogonal Latin squares (MOLS). We translate some possible operations on the monomial sets into isomorphisms of Latin squares, and find a general form of permutations that map between Latin squares corresponding to unitarily equivalent mutually unbiased sets.
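For prime p, the standard construction L_a(i, j) = (a·i + j) mod p, a = 1, ..., p−1, yields a complete set of p−1 mutually orthogonal Latin squares; the paper's contribution is the phase-space/monomial view of such sets, but the construction itself is classical and easy to verify:

```python
# Build and verify a complete set of MOLS for a prime dimension p.

def latin_square(a, p):
    """L_a(i, j) = (a*i + j) mod p; rows and columns are permutations of 0..p-1."""
    return [[(a * i + j) % p for j in range(p)] for i in range(p)]

def orthogonal(L, M, p):
    """Two squares are orthogonal iff superimposing them yields all p*p pairs."""
    pairs = {(L[i][j], M[i][j]) for i in range(p) for j in range(p)}
    return len(pairs) == p * p

p = 5
squares = [latin_square(a, p) for a in range((1), p)]
```

Orthogonality holds because for a ≠ a' the pair (a·i + j, a'·i + j) determines i from the difference (a − a')·i, and then j, so every pair occurs exactly once.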

  17. EXPERIMENTAL STUDIES ON DIFFICULTY OF EVACUATION FROM UNDERGROUND SPACES UNDER INUNDATED SITUATIONS USING REAL SCALE MODELS

    Science.gov (United States)

    Baba, Yasuyuki; Ishigaki, Taisuke; Toda, Keiichi; Nakagawa, Hajime

Many urbanized cities in Japan are located in alluvial plains, and the vulnerability of urbanized areas to flood disaster is highlighted by floods caused by heavy rainfall or typhoons. Underground spaces located in urbanized areas are flood-prone, and the intrusion of flood water into underground space has inflicted severe damage on urban functions and infrastructures. In a similar way, low-lying areas like "bowl-shaped" depressions and underpasses under highway and railroad bridges are also prone to floods. Underpasses are common sites of accidents involving submerged vehicles, and severe damage, including harm to people, occasionally occurs under flooding conditions. To reduce the damage due to inundation of underground space, early evacuation is, needless to say, one of the most important countermeasures. This paper presents experimental results of evacuation tests from underground spaces under inundated situations. The difficulty of evacuation from underground space has been investigated using real-scale models (door, staircase and vehicle), and the limit for safe evacuation is discussed. From the results, it is found that a water depth of 0.3 - 0.4 m would be a critical situation for evacuation from underground space through staircases and doors, and that a depth of 0.7 - 0.8 m on the ground would also be a critical situation for safe evacuation through the doors of a vehicle. These criteria may vary under different inundation situations, and they are also influenced by individual variation, such as differences in physical strength. This means that the criteria require a cautious stance in use, although they give a rough index of the limits for safe evacuation from underground space.
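A back-of-envelope hydrostatics check (our illustration, not the authors' measurements) shows why even 0.3 - 0.4 m of water makes a door hard to open: the resultant force on the door grows with the square of the water depth.

```python
# Hydrostatic force on the submerged part of a door:
# F = 1/2 * rho * g * h^2 * width  (pressure integrated from surface to depth h).

RHO, G = 1000.0, 9.81              # water density (kg/m^3), gravity (m/s^2)

def door_force(depth_m, door_width_m=0.8):
    """Resultant hydrostatic force (N) pressing a door shut; width is assumed."""
    return 0.5 * RHO * G * depth_m ** 2 * door_width_m

f = door_force(0.4)   # ~628 N, roughly the weight of a 64 kg person
```

Doubling the depth quadruples the force, which is consistent with the experiments finding a sharp transition between manageable and impossible evacuation conditions.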

  18. Advance Approach to Concept and Design Studies for Space Missions

    Science.gov (United States)

    Deutsch, M.; Nichols, J.

    1999-01-01

    Recent automated and advanced techniques developed at JPL have created a streamlined and fast-track approach to initial mission conceptualization and system architecture design, answering the need for rapid turnaround of trade studies for potential proposers, as well as mission and instrument study groups.

  19. Parametric Approach in Designing Large-Scale Urban Architectural Objects

    OpenAIRE

    Arne Riekstiņš

    2011-01-01

When the disciplines of various science fields converge and develop, new approaches to contemporary architecture arise. The author approaches digital architecture from a parametric viewpoint, revealing its generative capacity, which originates from the aeronautical, naval, automobile and product-design industries. The author also goes explicitly through his design-cycle workflow for testing the latest methodologies in architectural design. The design process steps involv...

  20. Scaling production and improving efficiency in DEA: an interactive approach

    Science.gov (United States)

    Rödder, Wilhelm; Kleine, Andreas; Dellnitz, Andreas

    2017-10-01

DEA models help a DMU to detect its (in)efficiency and to improve its activities, if necessary. Efficiency is only one economic aim for a decision-maker, however; up- or downsizing might be a second one. Improving efficiency is the main topic in DEA; the long-term strategy towards the right production size should attract our attention as well. The management of a DMU does not always focus primarily on technical efficiency, but is rather interested in gaining scale effects. In this paper, a formula for returns to scale (RTS) is developed, and this formula is applicable even for interior points of the technology. Technically and scale-inefficient DMUs in particular need sophisticated instruments to improve their situation. Considering RTS as well as efficiency, we give advice for each DMU on finding an economically reliable path from its actual situation to better activities and, finally, perhaps to most productive scale size (mpss). For realizing this path, we propose an interactive algorithm, thus harmonizing the scientific findings and the interests of the management. Small numerical examples illustrate such paths for selected DMUs; an empirical application in theatre management completes the contribution.
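The efficiency side of the story can be sketched in the simplest single-input, single-output case under constant returns to scale (our simplification; the paper's RTS formula and interactive algorithm handle the general multi-factor technology): each DMU's score is its productivity relative to the best-performing DMU.

```python
# CRS efficiency for single-input, single-output DMUs.

def crs_efficiency(dmus):
    """dmus: list of (input, output) pairs. Returns one score in (0, 1] per DMU."""
    best = max(y / x for x, y in dmus)           # highest output/input ratio
    return [(y / x) / best for x, y in dmus]

# Hypothetical DMUs as (input, output); DMU 2 has the best productivity.
dmus = [(2.0, 4.0), (3.0, 9.0), (4.0, 8.0)]
scores = crs_efficiency(dmus)
```

A score below 1 tells a DMU how far it is from the efficient frontier; moving towards mpss is then the scale question the paper's path construction addresses.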

  1. Advanced free space optics (FSO) a systems approach

    CERN Document Server

    Majumdar, Arun K

    2015-01-01

    This book provides a comprehensive, unified tutorial covering the most recent advances in the technology of free-space optics (FSO). It is an all-inclusive source of information on the fundamentals of FSO as well as up-to-date information on the state-of-the-art in technologies available today. This text is intended for graduate students, and will also be useful for research scientists and engineers with an interest in the field. FSO communication is a practical solution for creating a three dimensional global broadband communications grid, offering bandwidths far beyond what is possible in the Radio Frequency (RF) range. However, the attributes of atmospheric turbulence and scattering impose perennial limitations on availability and reliability of FSO links. From a systems point-of-view, this groundbreaking book provides a thorough understanding of channel behavior, which can be used to design and evaluate optimum transmission techniques that operate under realistic atmospheric conditions. Topics addressed...
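The "perennial limitations" of atmospheric attenuation can be made concrete with a toy link-budget fragment (our illustration; parameter values are assumptions, not from the book): received power drops linearly in dB with link length, and the attenuation coefficient swings by orders of magnitude between clear air and fog.

```python
# Simplified FSO link budget in dB: Rx = Tx - attenuation - fixed losses.

def received_power_dbm(p_tx_dbm, link_km, alpha_db_per_km, geo_loss_db=20.0):
    """Received power; geo_loss_db lumps geometric spreading and system losses."""
    return p_tx_dbm - alpha_db_per_km * link_km - geo_loss_db

# Clear air (~0.5 dB/km) vs. moderate fog (~30 dB/km) over a 2 km link.
clear = received_power_dbm(20.0, 2.0, 0.5)
fog = received_power_dbm(20.0, 2.0, 30.0)
```

The 59 dB gap between the two conditions is why availability, rather than raw bandwidth, dominates FSO system design.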

  2. Robust mode space approach for atomistic modeling of realistically large nanowire transistors

    Science.gov (United States)

    Huang, Jun Z.; Ilatikhameneh, Hesameddin; Povolotskyi, Michael; Klimeck, Gerhard

    2018-01-01

Nanoelectronic transistors have reached 3D length scales at which the number of atoms is countable. Truly atomistic device representations are needed to capture the essential functionalities of the devices. Atomistic quantum transport simulations of realistically extended devices are, however, computationally very demanding. The widely used mode space (MS) approach can significantly reduce the numerical cost, but a good MS basis is usually very hard to obtain for atomistic full-band models. In this work, a robust and parallel algorithm is developed to optimize the MS basis for atomistic nanowires. This enables engineering-level, reliable tight-binding non-equilibrium Green's function simulation of a nanowire metal-oxide-semiconductor field-effect transistor (MOSFET) with a realistic cross section of 10 nm × 10 nm using a small computer cluster. This approach is applied to compare the performance of InGaAs and Si nanowire n-type MOSFETs (nMOSFETs) with various channel lengths and cross sections. Simulation results with full-band accuracy indicate that InGaAs nanowire nMOSFETs have no drive current advantage over their Si counterparts for cross sections up to about 10 nm × 10 nm.

  3. Biased Tracers in Redshift Space in the EFT of Large-Scale Structure

    Energy Technology Data Exchange (ETDEWEB)

    Perko, Ashley [Stanford U., Phys. Dept.; Senatore, Leonardo [KIPAC, Menlo Park; Jennings, Elise [Chicago U., KICP; Wechsler, Risa H. [Stanford U., Phys. Dept.

    2016-10-28

The Effective Field Theory of Large-Scale Structure (EFTofLSS) provides a novel formalism that is able to accurately predict the clustering of large-scale structure (LSS) in the mildly non-linear regime. Here we provide the first computation of the power spectrum of biased tracers in redshift space at one loop order, and we make the associated code publicly available. We compare the multipoles $\ell=0,2$ of the redshift-space halo power spectrum, together with the real-space matter and halo power spectra, with data from numerical simulations at $z=0.67$. For the samples we compare to, which have a number density of $\bar n=3.8 \cdot 10^{-2}\,(h\,{\rm Mpc}^{-1})^3$ and $\bar n=3.9 \cdot 10^{-4}\,(h\,{\rm Mpc}^{-1})^3$, we find that the calculation at one-loop order matches numerical measurements to within a few percent up to $k\simeq 0.43\,h\,{\rm Mpc}^{-1}$, a significant improvement with respect to former techniques. By performing the so-called IR-resummation, we find that the Baryon Acoustic Oscillation peak is accurately reproduced. Based on the results presented here, long-wavelength statistics that are routinely observed in LSS surveys can be finally computed in the EFTofLSS. This formalism thus is ready to start to be compared directly to observational data.
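The zeroth-order baseline that the one-loop EFT computation improves upon is the linear-theory Kaiser prediction for the redshift-space multipoles of a biased tracer. These are the standard Kaiser formulas, not the paper's one-loop expressions:

```python
# Kaiser multipoles: P0 and P2 of the redshift-space tracer power spectrum,
# given linear bias b, growth rate f (beta = f/b) and the real-space matter
# power P(k) at one wavenumber.

def kaiser_multipoles(b, f, p_matter):
    beta = f / b
    p0 = (1.0 + 2.0 * beta / 3.0 + beta ** 2 / 5.0) * b ** 2 * p_matter
    p2 = (4.0 * beta / 3.0 + 4.0 * beta ** 2 / 7.0) * b ** 2 * p_matter
    return p0, p2

# Illustrative values (b = 2 halos, f ~ 0.8 at z ~ 0.67, P normalised to 1).
p0, p2 = kaiser_multipoles(b=2.0, f=0.8, p_matter=1.0)
```

The one-loop EFTofLSS calculation adds the mildly non-linear corrections and counterterms that extend the reach of these multipoles to $k\simeq 0.43\,h\,{\rm Mpc}^{-1}$.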

  4. A Novel Approach of Sensitive Infrared Signal Detection for Space Applications Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose an innovative approach to overcome infrared signal detection difficulties. In this investigation, a Periodically Poled MgO Lithium Niobate...

  5. An Approach to Distributed State Space Exploration for Coloured Petri Nets

    DEFF Research Database (Denmark)

    Kristensen, Lars Michael; Petrucci, Laure

    2004-01-01

    We present an approach and associated computer tool support for conducting distributed state space exploration for Coloured Petri Nets (CPNs). The distributed state space exploration is based on the introduction of a coordinating process and a number of worker processes. The worker processes...... Tools. This makes the distributed state space exploration and analysis largely transparent to the analyst. We illustrate the use of the developed tool on an example....
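
The coordinator/worker scheme can be sketched in a few lines. The ownership-by-hash partitioning and the toy two-counter transition system below are illustrative assumptions, not the actual CPN Tools mechanics:

```python
from collections import deque

N_WORKERS = 4
owner = lambda state: hash(state) % N_WORKERS   # each state belongs to one worker

def successors(state):
    """Toy transition relation (a stand-in for CPN transition firing)."""
    x, y = state
    return {(min(x + 1, 3), y), (x, min(y + 1, 3))}

# Each worker keeps its own visited set and work queue; states generated by one
# worker are routed to the worker that owns them.
visited = [set() for _ in range(N_WORKERS)]
queues = [deque() for _ in range(N_WORKERS)]
init = (0, 0)
queues[owner(init)].append(init)

while any(queues):
    for w in range(N_WORKERS):
        while queues[w]:
            s = queues[w].popleft()
            if s in visited[w]:
                continue
            visited[w].add(s)
            for nxt in successors(s):
                queues[owner(nxt)].append(nxt)   # cross-worker routing

total = sum(len(v) for v in visited)             # 16 states of the 4x4 counter grid
```

Because each state has exactly one owner, the per-worker visited sets partition the full state space, which is what makes the exploration distributable.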

  6. Green spaces – a key resource for urban sustainability. The GreenKeys approach for developing green spaces

    Directory of Open Access Journals (Sweden)

    Carlos Smaniotto Costa

    2008-01-01

    Following the principles of sustainable development, all urban development programmes set down ideas and ideals for developing our cities in a more lasting way, especially concerning the environment and the social dimension. These programmes result from a process of searching for, and then offering, a better quality of life. Green spaces assume a key role in improving the quality of urban life - not only because of their ecological and environmental functions but also because of their relevance for public health, societal well-being and the economic benefits they can provide. Urban green spaces are therefore key resources for sustainable cities. Starting from this principle, the GreenKeys Project developed a methodology for achieving better green space quality. If we are unable to demonstrate effectively the value that green spaces deliver, green spaces risk remaining on the periphery of public policy priorities, unable to attract more resources. The article presents and discusses the results and outcomes of the GreenKeys Project. In particular, the GreenKeys proposal for supporting the formulation of an Urban Green Space Strategy is discussed in detail.

  7. Distributed Model Predictive Control over Multiple Groups of Vehicles in Highway Intelligent Space for Large Scale System

    Directory of Open Access Journals (Sweden)

    Tang Xiaofeng

    2014-01-01

    The paper presents three time warning distances for the safe driving of multiple groups of vehicles in a highway tunnel environment, treated as a large-scale system, based on a distributed model predictive control approach. The system includes two parts. First, the vehicles are divided into multiple groups, and the distributed model predictive control approach is used to calculate the information framework of each group. The optimization of each group considers both its local performance and the optimization characteristics of neighboring subgroups, which ensures global optimization performance. Second, the three time warning distances are studied based on the basic principles of highway intelligent space (HIS), and the information framework concept is formulated for the multiple groups of vehicles. A mathematical model is built to avoid chain collisions between vehicles. The results demonstrate that the proposed highway intelligent space method can effectively ensure the driving safety of multiple groups of vehicles in fog, rain, or snow.

  8. Robust scale-space filter using second-order partial differential equations.

    Science.gov (United States)

    Ham, Bumsub; Min, Dongbo; Sohn, Kwanghoon

    2012-09-01

    This paper describes a robust scale-space filter that adaptively changes the amount of flux according to the local topology of the neighborhood. In a manner similar to modeling heat or temperature flow in physics, the robust scale-space filter is derived by coupling Fick's law with a generalized continuity equation in which the source or sink is modeled via a specific heat capacity. The filter plays an essential part in two aspects. First, an evolution step size is adaptively scaled according to the local structure, enabling the proposed filter to be numerically stable. Second, the influence of outliers is reduced by adaptively compensating for the incoming flux. We show that classical diffusion methods represent special cases of the proposed filter. By analyzing the stability condition of the proposed filter, we also verify that its evolution step size in an explicit scheme is larger than that of the diffusion methods. The proposed filter also satisfies the maximum principle in the same manner as the diffusion. Our experimental results show that the proposed filter is less sensitive to the evolution step size, as well as more robust to various outliers, such as Gaussian noise, impulsive noise, or a combination of the two.
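
Classical nonlinear (Perona-Malik-type) diffusion, which the paper recovers as a special case of its filter, can be sketched as follows; the diffusivity, step size, and periodic boundary handling via np.roll are illustrative simplifications:

```python
import numpy as np

def nonlinear_diffusion(img, n_iter=20, dt=0.2, kappa=0.1):
    """Explicit nonlinear diffusion: flux is suppressed across strong edges.
    dt <= 0.25 is the classical 2D explicit-scheme stability bound."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa)**2)   # edge-stopping diffusivity
    for _ in range(n_iter):
        # Differences to the four neighbours (periodic wrap via np.roll).
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, 1, axis=1) - u
        dw = np.roll(u, -1, axis=1) - u
        u += dt * (g(dn)*dn + g(ds)*ds + g(de)*de + g(dw)*dw)
    return u

# Noisy step edge: diffusion smooths the flat regions but preserves the jump.
rng = np.random.default_rng(0)
img = np.zeros((32, 32)); img[:, 16:] = 1.0
noisy = img + 0.05 * rng.standard_normal(img.shape)
out = nonlinear_diffusion(noisy)
```

The abstract's filter goes beyond this by adapting the evolution step size to the local structure and compensating the incoming flux against outliers, which this plain scheme does not do.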

  9. A structure-based distance metric for high-dimensional space exploration with multidimensional scaling.

    Science.gov (United States)

    Lee, Jenny Hyunjung; McDonnell, Kevin T; Zelenyuk, Alla; Imre, Dan; Mueller, Klaus

    2014-03-01

    Although the Euclidean distance does well in measuring data distances within high-dimensional clusters, it does poorly when it comes to gauging intercluster distances. This significantly impacts the quality of global, low-dimensional space embedding procedures such as the popular multidimensional scaling (MDS), where one can often observe nonintuitive layouts. We were inspired by the perceptual processes evoked in the method of parallel coordinates, which enables users to visually aggregate the data by the patterns the polylines exhibit across the dimension axes. We call the path of such a polyline its structure and suggest a metric that captures this structure directly in high-dimensional space. This allows us to better gauge the distances of spatially distant data constellations and so achieve data aggregations in MDS plots that are more cognizant of existing high-dimensional structure similarities. Our biscale framework distinguishes far distances from near distances. The coarser scale uses the structural similarity metric to separate data aggregates obtained by prior classification or clustering, while the finer scale employs the appropriate Euclidean distance.
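
The structure idea can be sketched by treating each point's values across the dimension axes as a polyline profile and comparing profile shapes; using 1 − Pearson correlation below is an illustrative stand-in for the paper's actual structure metric:

```python
import numpy as np

def structural_distance(a, b):
    """Distance between the 'shapes' of two points' polylines across the
    dimension axes: 1 - Pearson correlation of the standardized profiles."""
    za = (a - a.mean()) / a.std()
    zb = (b - b.mean()) / b.std()
    return 1.0 - np.mean(za * zb)

a = np.array([1.0, 2.0, 3.0, 4.0])
b = a + 10.0                          # same profile shape, but far away
c = np.array([4.0, 3.0, 2.0, 1.0])    # reversed profile

d_ab = structural_distance(a, b)      # ~0: identical structure
d_ac = structural_distance(a, c)      # 2: opposite structure
d_euclid = np.linalg.norm(a - b)      # 20: Euclidean distance is large
```

This captures the biscale intuition: b is Euclidean-far from a but structurally identical, so a coarse structural scale would group them while the fine Euclidean scale separates them.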

  10. Hybrid phase-space-Fock-space approach to evolution of a driven nonlinear resonator

    Science.gov (United States)

    Khezri, Mostafa; Korotkov, Alexander N.

    2017-10-01

    We analyze the quantum evolution of a weakly nonlinear resonator due to a classical near-resonant drive and damping. The resonator nonlinearity leads to squeezing and heating of the resonator state. Using a hybrid phase-space-Fock-space representation for the resonator state within the Gaussian approximation, we derive evolution equations for the four parameters characterizing the Gaussian state. Numerical solution of these four ordinary differential equations is much simpler and faster than simulation of the full density matrix evolution, while providing good accuracy for the system analysis during transients and in the steady state. We show that steady-state squeezing of the resonator state is limited by 3 dB; however, this limit can be exceeded during transients.

  11. An embedded boundary approach for the simulation of precipitation and dissolution in sediments at the pore scale

    Science.gov (United States)

    Shen, C.; Molins, S.; Trebotich, D.; Steefel, C.

    2011-12-01

    Precipitation (or dissolution) of mineral grains modifies the geometry of the pore space in subsurface sediment with evolving solid-liquid boundaries. In turn, changes in the pore space alter the groundwater flow through the sediment, which ultimately affects the continuum-scale reaction rates that are relevant for field applications such as carbon sequestration. Modeling provides a unique tool to understand and quantify the feedback processes between mineral precipitation (or dissolution) and flow at the pore scale. However, for modeling to accurately resolve the flow and reactive transport dynamics at the micrometer length scale in real porous media sediments, a method capable of representing complex solid-fluid and fluid-fluid boundaries in a high-performance simulation framework is necessary. Here we present a modeling approach coupling flow and transport at the pore scale with multicomponent geochemistry that utilizes the embedded boundary method to characterize fluid-solid interfaces. The development is based on an adaptive, parallelized flow and transport software package, Chombo, and the geochemical code CrunchFlow, providing powerful simulation capabilities. We demonstrate the approach in simulations of calcite dissolution in complex pore structures reconstructed from synchrotron-based x-ray computed microtomography (CMT) images. We apply high-resolution techniques to track the sharp concentration gradients that typically drive precipitation and dissolution reactions. We show that the approach is consistent with that used for moving fluid-fluid interfaces, thus providing a robust and algorithmically consistent methodology that can be applied to multiphase flow problems. We use the model to examine the interdependence between continuum-scale dissolution/precipitation rates and flow patterns at the pore scale in different porous media geometries by using volume averaging methods.

  12. Mapping social values for urban green spaces using Public Participation GIS: the influence of spatial scale and implications for landscape planning.

    Science.gov (United States)

    Ives, Christopher

    2015-04-01

    Measuring social values for landscapes is an emerging field of research and is critical to the successful management of urban ecosystems. Green open space planning has traditionally relied on rigid standards and metrics without considering the physical requirements of green spaces that are valued for different reasons and by different people. Relating social landscape values to key environmental variables provides a much stronger evidence base for planning landscapes that are both socially desirable and environmentally sustainable. This study spatially quantified residents' values for green space in the Lower Hunter Valley of New South Wales, Australia by enabling participants to mark their values for specific open spaces on interactive paper maps. The survey instrument was designed to evaluate the effect of spatial scale by providing maps of residents' local area at both suburb and municipality scales. The importance of open space values differed depending on whether they were indicated via marker dots or reported on in a general aspatial sense. This suggests that certain open space functions were inadequately provided for in the local area (specifically, cultural significance and health/therapeutic value). Additionally, all value types recorded a greater abundance of marker dots at the finer (suburb) scale compared to the coarser (municipality) scale, but this pattern was more pronounced for some values than others (e.g. physical exercise value). Finally, significant relationships were observed between the abundance of value marker dots in parks and their environmental characteristics (e.g. percentage of vegetation). These results have interesting implications when considering the compatibility between different functions of green spaces and how planners can incorporate information about social values with more traditional approaches to green space planning.

  13. A Simulation Modeling Approach for Optimization of Storage Space Allocation in Container Terminal

    OpenAIRE

    Said, Gamal Abd El-Nasser A.; El-Horbaty, El-Sayed M.

    2015-01-01

    Container handling problems at container terminals are NP-hard problems. This paper presents an approach using discrete-event simulation modeling to optimize the solution of the storage space allocation problem, taking into account the various interrelated container terminal handling activities. The proposed approach is applied to real case study data from the container terminal at Alexandria port. The computational results show the effectiveness of the proposed model for optimization of storage space a...

  14. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.1)

    Energy Technology Data Exchange (ETDEWEB)

    Hukerikar, Saurabh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Engelmann, Christian [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-12-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Therefore, the resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. Also, due to practical limits on power consumption in HPC systems, future systems are likely to embrace innovative architectures, increasing the levels of hardware and software complexity. As a result, the techniques that seek to improve resilience must navigate the complex trade-off space between resilience and the overheads to power consumption and performance. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance & power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience using the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. Each established solution is described in the form of a pattern that

  15. Quantitative approach to small-scale nonequilibrium systems

    DEFF Research Database (Denmark)

    Dreyer, Jakob K; Berg-Sørensen, Kirstine; Oddershede, Lene B

    2006-01-01

    In a nano-scale system out of thermodynamic equilibrium, it is important to account for thermal fluctuations. Typically, the thermal noise contributes fluctuations, e.g., of distances that are substantial in comparison to the size of the system and typical distances measured. If the thermal fluct...... method to obtain knowledge about the underlying energy landscape from a set of experimental measurements. Finally, we perform estimates of the error made if thermal fluctuations are ignored....

  16. An Alternative Surface Measures Construction in Finite-Dimensional Spaces and its Consistency with the Classical Approach

    Directory of Open Access Journals (Sweden)

    Kateryna V. Moravetska

    2017-09-01

    Conclusions. The construction of surface measures suggested for infinite-dimensional spaces is a generalization of the classical approach in finite-dimensional spaces. Therefore further investigation of the described approach seems to be reasonable.

  17. An approach towards problem-based learning in virtual space.

    Science.gov (United States)

    Freudenberg, Lutz S; Bockisch, Andreas; Beyer, Thomas

    2010-01-01

    Problem-based learning (PBL) is an established and efficient approach to sustainable teaching. Here, we describe the translation of PBL into the virtual classroom, thereby offering novel teaching aspects in the field of Nuclear Medicine. Our teaching approach is implemented on a "moodle" platform and consists of 2 modules: complementary seminar teaching materials and a virtual PBL classroom, which can be attended via Skype. Over the course of 4 semesters, 539 students have accessed our teaching platform; 21 students have participated in the PBL seminar (module 2). After resolving some minor technical difficulties, our virtual seminars have evolved into a forum of intense study, whereby the participating students have learned to become more independent in working up the teaching cases. This was reflected in the results of the intra-group presentations and discussions. Quantitative and qualitative evaluation of our moodle-based PBL platform indicates an increasing level of acceptance and enthusiasm among the students. This has initiated discussions about opening our PBL concept to a wider audience within the university and beyond the Nuclear Medicine specialty.

  18. A General Systems Theory for Chaos, Quantum Mechanics and Gravity for Dynamical Systems of all Space-Time Scales

    OpenAIRE

    Selvam, A. M.

    2005-01-01

    Non-local connections, i.e. long-range space-time correlations intrinsic to the observed subatomic dynamics of quantum systems, are also exhibited by macro-scale dynamical systems as self-similar fractal space-time fluctuations and are identified as self-organized criticality. The author has developed a general systems theory for the observed self-organized criticality applicable to dynamical systems of all space-time scales based on the concept that spatial integration of enclosed small-scale f...

  19. Coordination between Subway and Urban Space: A Networked Approach

    Directory of Open Access Journals (Sweden)

    Lei Mao

    2014-05-01

    This paper selects Changsha as a case study and constructs the models of the subway network and the urban spatial network by using planning data. In the network models, the districts of Changsha are regarded as nodes and the connections between each pair of districts are regarded as edges. The method is based on quantitative analysis of the node weights and the edge weights, which are defined in the complex network theory. And the structures of subway and urban space are visualized in the form of networks. Then, through analyzing the discrepancy coefficients of the corresponding nodes and edges, the paper carries out a comparison between the two networks to evaluate the coordination. The results indicate that only 21.4% of districts and 13.2% of district connections have a rational coordination. Finally, the strategies are put forward for optimization, which suggest adjusting subway transit density, regulating land-use intensity and planning new mass transits for the uncoordinated parts.
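
The node/edge comparison behind the coordination figures can be sketched as follows; the 4-district weights and the normalized-difference discrepancy coefficient are made-up illustrations, not Changsha's planning data or the paper's exact definition:

```python
import numpy as np

# Symmetric edge weights between 4 districts in the two networks (illustrative).
subway = np.array([[0, 3, 1, 0],
                   [3, 0, 2, 1],
                   [1, 2, 0, 4],
                   [0, 1, 4, 0]], float)
urban  = np.array([[0, 2, 3, 0],
                   [2, 0, 2, 4],
                   [3, 2, 0, 4],
                   [0, 4, 4, 0]], float)

s = subway / subway.sum()                  # normalize so the networks are comparable
u = urban / urban.sum()
disc = np.abs(s - u) / np.maximum(s + u, 1e-12)   # 0 = coordinated, 1 = mismatch

mask = (subway + urban) > 0                # only connections present in either network
coordinated = (disc[mask] < 0.2).mean()    # share of district connections "in tune"
```

With these toy weights two of the five district connections fall under the 0.2 threshold, mirroring the kind of low coordination share the paper reports.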

  20. Polygonal approximation and scale-space analysis of closed digital curves

    CERN Document Server

    Ray, Kumar S

    2013-01-01

    This book covers the most important topics in the area of pattern recognition, object recognition, computer vision, robot vision, medical computing, computational geometry, and bioinformatics systems. Students and researchers will find a comprehensive treatment of polygonal approximation and its real life applications. The book not only explains the theoretical aspects but also presents applications with detailed design parameters. The systematic development of the concept of polygonal approximation of digital curves and its scale-space analysis are useful and attractive to scholars in many fi

  1. A multi-scale approach to monitor urban carbon-dioxide emissions in the atmosphere over Vancouver, Canada

    Science.gov (United States)

    Christen, A.; Crawford, B.; Ketler, R.; Lee, J. K.; McKendry, I. G.; Nesic, Z.; Caitlin, S.

    2015-12-01

    Measurements of long-lived greenhouse gases in the urban atmosphere are potentially useful to constrain and validate urban emission inventories, or space-borne remote-sensing products. We summarize and compare three different approaches, operating at different scales, that directly or indirectly identify, attribute and quantify emissions (and uptake) of carbon dioxide (CO2) in urban environments. All three approaches are illustrated using in-situ measurements in the atmosphere in and over Vancouver, Canada. Mobile sensing may be a promising way to quantify and map CO2 mixing ratios at fine scales across heterogeneous and complex urban environments. We developed a system for monitoring CO2 mixing ratios at street level using a network of mobile CO2 sensors deployable on vehicles and bikes. A total of 5 prototype sensors were built and simultaneously used in a measurement campaign across a range of urban land use types and densities within a short time frame (3 hours). The dataset is used to aid in fine-scale emission mapping in combination with simultaneous tower-based flux measurements. Overall, calculated CO2 emissions are realistic when compared against a spatially disaggregated emission inventory. The second approach is based on mass flux measurements of CO2 using a tower-based eddy covariance (EC) system. We present a continuous 7-year long dataset of CO2 fluxes measured by EC at the 28 m tall flux tower 'Vancouver-Sunset'. We show how this dataset can be combined with turbulent source area models to quantify and partition different emission processes at the neighborhood scale. The long-term EC measurements are within 10% of a spatially disaggregated emission inventory. Thirdly, at the urban scale, we present a dataset of CO2 mixing ratios measured using a tethered balloon system in the urban boundary layer above Vancouver. Using a simple box model, net city-scale CO2 emissions can be determined from the measured rate of change of CO2 mixing ratios.
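
In its simplest well-mixed form, the box model mentioned last converts the observed rate of change of the CO2 mixing ratio into a net surface flux. All numbers below are illustrative, and advection and entrainment, which matter in practice, are deliberately neglected:

```python
# F = h * rho_mol * d(chi)/dt for a well-mixed boundary-layer box.
R, T, P = 8.314, 288.0, 101325.0   # gas constant (J/mol/K), temperature (K), pressure (Pa)
h = 500.0                          # boundary-layer height (m), illustrative
rho_mol = P / (R * T)              # mol of air per m^3 (ideal gas), ~42 mol/m^3

dchi_dt = 2.0e-6 / 3600.0          # observed rise of 2 ppm per hour, as mole fraction per s
flux_mol = h * rho_mol * dchi_dt   # mol CO2 m^-2 s^-1
flux_mg = flux_mol * 44.01 * 1e3   # mg CO2 m^-2 s^-1 (molar mass of CO2: 44.01 g/mol)
```

For these illustrative numbers the implied net flux is about 0.5 mg CO2 per square metre per second.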

  2. A rank-based approach for correcting systematic biases in spatial disaggregation of coarse-scale climate simulations

    Science.gov (United States)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2017-07-01

    Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation; at finer spatial scales biases are re-introduced by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias especially for both low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank-space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages especially in the simulation of particularly low and high downscaled precipitation amounts.
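
The bias-correction step of BCSD is an empirical quantile mapping between the model and observed climatologies, sketched below on synthetic data (the made-up bias and the gamma "observations" are assumptions; the rank-space anomaly modelling the abstract proposes for the disaggregation step is not reproduced here):

```python
import numpy as np

def quantile_map(model_clim, obs_clim, values):
    """Map each value to the observed quantile matching its model quantile."""
    model_sorted = np.sort(model_clim)
    # Empirical quantile of each value within the model climatology.
    q = np.searchsorted(model_sorted, values, side="right") / len(model_sorted)
    return np.quantile(np.sort(obs_clim), np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 5.0, 5000)    # "observed" precipitation climatology
model = obs * 1.5 + 2.0            # GCM with multiplicative and additive bias
corrected = quantile_map(model, obs, model)
```

After mapping, the distribution of the corrected series matches the observed climatology, which is exactly the property BCSD relies on before the spatial disaggregation step.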

  3. Time asynchronous relative dimension in space method for multi-scale problems in fluid dynamics

    Science.gov (United States)

    Markesteijn, A. P.; Karabasov, S. A.

    2014-02-01

    A novel computational method is presented for solving fluid dynamics equations in the multi-scale framework when the system size is an important parameter of the governing equations. The method (TARDIS) is based on a concurrent transformation of the governing equations in space and time and solving the transformed equations on a uniform Cartesian grid with the corresponding causality conditions at the grid interfaces. For implementation in the framework of TARDIS, the second-order CABARET scheme of Karabasov and Goloviznin [1] is selected for it provides a good combination of numerical accuracy, computational efficiency and simplicity of realisation. Numerical examples are first provided for several isothermal gas dynamics test problems and then for modelling of molecular fluctuations inside a microscopic flow channel and ultrasound wave propagation through a nano-scale region of molecular fluctuations.

  4. Cosmological perturbations in inflationary models with anisotropic space-time scaling in a Lifshitz background

    Science.gov (United States)

    Alishahiha, Mohsen; Firouzjahi, Hassan; Koyama, Kazuya; Namjoo, Mohammad Hossein

    2013-11-01

    Models of inflation in a gravitational background with an anisotropic space-time scaling are studied. The background is a higher-dimensional Lifshitz throat with the anisotropy scaling z≠1. After the dimensional reduction, the four-dimensional general covariance is explicitly broken to a three-dimensional spatial diffeomorphism. As a result, the cosmological perturbation theory for this setup with fewer symmetries has to be formulated. We present the consistent cosmological perturbation theory for this setup. We find that the effective four-dimensional gravitational wave perturbations propagate with a different speed than the higher-dimensional gravitational excitations. Depending on the model parameters, for an observer inside the throat, the four-dimensional gravitational wave propagation can be superluminal. We also find that the Bardeen potential and the Newtonian potential are different. This can have interesting observational consequences for lensing and cosmic microwave background fluctuations. Furthermore, we show that at the linearized level the inflaton field excitations vanish.

  5. A tailored approach to electromagnetic compatibility requirements in space applications

    Science.gov (United States)

    Javor, Ken; Nave, Mark

    1991-01-01

    An approach is outlined which defines the requirements for electromagnetic compatibility (EMC) between NASA and military technologies with attention given to electromagnetic interference (EMI) requirements. In order to minimize the cost and weight impact of the changes needed for compatibility the plan emphasizes the incorporation of off-the-shelf technology with current nonstandard methods. NASA designs are structured to meet EMI requirements rather than processing waivers against military-type specifications. The NASA-wide EMI requirements can be documented in three sections: requirements, test methods, and tailoring guidelines. It is shown that a NASA-wide EMC specification would decrease the costs of achieving compatibility by increasing efficiency and optimizing the relationship between EMC design and performance and cost.

  6. A forest-based feature screening approach for large-scale genome data with complex structures.

    Science.gov (United States)

    Wang, Gang; Fu, Guifang; Corcoran, Christopher

    2015-12-23

    Genome-wide association studies (GWAS) interrogate the whole genome at large scale to characterize the complex genetic architecture of biomedical traits. When the number of SNPs increases dramatically to half a million but the sample size is still limited to thousands, traditional p-value based statistical approaches suffer from unprecedented limitations. Feature screening has proved to be an effective and powerful approach to handle ultrahigh-dimensional data statistically, yet it has not received much attention in GWAS. Feature screening reduces the feature space from millions to hundreds by removing non-informative noise. However, the univariate measures used to rank features are mainly based on individual effects, without considering mutual interactions with other features. In this article, we explore the performance of a random forest (RF) based feature screening procedure that emphasizes SNPs with complex effects on a continuous phenotype. Both simulation and real data analyses are conducted to examine the power of forest-based feature screening. We compare it with five other popular feature screening approaches via simulation and conclude that RF can serve as a decent feature screening tool that accommodates complex genetic effects such as nonlinear, interactive, correlative, and joint effects. Unlike the traditional p-value based Manhattan plot, we use the Permutation Variable Importance Measure (PVIM) to display relative significance and believe that it will provide as much useful information as the traditional plot. Most complex traits are found to be regulated by epistatic and polygenic variants. Forest-based feature screening is proven to be an efficient, easily implemented, and accurate approach to cope with whole-genome data with complex structures. Our explorations should add to a growing body of work enlarging feature screening to better serve the demands of contemporary genome data.
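
The Permutation Variable Importance Measure (PVIM) mentioned above scores a feature by how much prediction error grows when that feature's column is shuffled. The sketch below uses synthetic data and a ridge-regression stand-in for the random forest; as the abstract argues for effect-blind univariate measures, the linear stand-in detects the additive signal (feature 0) but misses the interaction pair, which a forest would also capture:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 400, 50
X = rng.standard_normal((n, p))
y = 2.0*X[:, 0] + X[:, 1]*X[:, 2] + 0.1*rng.standard_normal(n)  # 3 causal features

# Stand-in predictor: ridge regression (closed form).
lam = 1.0
beta = np.linalg.solve(X.T @ X + lam*np.eye(p), X.T @ y)
base_err = np.mean((y - X @ beta)**2)

def pvim(j):
    """Error increase when column j is randomly permuted."""
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    return np.mean((y - Xp @ beta)**2) - base_err

scores = np.array([pvim(j) for j in range(p)])
top = np.argsort(scores)[::-1][:10]   # screened-in candidate features
```

Swapping the ridge model for a fitted random forest turns this into forest-based PVIM screening, at which point the interaction pair (features 1 and 2) would also rise to the top.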

  7. Scaling Task Management in Space and Time: Reducing User Overhead in Ubiquitous-Computing Environments

    Science.gov (United States)

    2005-03-28

    limitation of this approach is that it does not easily scale to large numbers of tasks over extended periods. Busy users may intermittently touch on ... RETSINA framework, with applications in domains such as financial portfolio management, e-commerce and military logistics [88]; and more recently Carnegie ... complex tasks. Examples can be found in the workflow modeling of business processes, and in some agent-based systems, where the description of the

  8. Real-Space Renormalization-Group Approach to the Integer Quantum Hall Effect

    Science.gov (United States)

    Cain, Philipp; Römer, Rudolf A.

    We review recent results based on an application of the real-space renormalization group (RG) approach to a network model for the integer quantum Hall (QH) transition. We demonstrate that this RG approach reproduces the critical distribution of the power transmission coefficients, i.e., two-terminal conductances, P_c(G), with very high accuracy. The RG flow of P(G) at energies away from the transition yields a value of the critical exponent ν that agrees with the most accurate large-size lattice simulations. A description of how to obtain other relevant transport coefficients such as R_L and R_H is given. From the non-trivial fixed point of the RG flow we extract the critical level-spacing distribution (LSD). This distribution is close to, but distinctively different from, that of the earlier large-scale simulations. We find that the LSD obeys scaling behavior around the QH transition with ν = 2.37±0.02. Away from the transition it crosses over towards the Poisson distribution. We next investigate the plateau-to-insulator transition at strong magnetic fields. For a fully quantum coherent situation, we find a quantized Hall insulator with R_H ≈ h/e² up to R_L ≈ 20 h/e² when interpreting the results in terms of the most probable value of the distribution function P(R_H). Upon further increasing R_L → ∞, a Hall insulator with diverging Hall resistance R_H ∝ R_L^κ is seen. The crossover between these two regimes depends on the precise nature of the averaging procedure for the distributions P(R_L) and P(R_H). We also study the effect of long-ranged inhomogeneities on the critical properties of the QH transition. Inhomogeneities are modeled by a smooth random potential with a correlator which falls off with distance as a power law r^(-α). Similar to classical percolation, we observe an enhancement of ν with decreasing α. These results exemplify the surprising fact that a small RG unit, containing only five nodes, accurately captures most of the correlations responsible for the localization.

  9. An Adaptive Control for a Free-Floating Space Robot by Using Inverted Chain Approach

    OpenAIRE

    Abiko, Satoko; Hirzinger, Gerd

    2007-01-01

    In this chapter, we propose an adaptive control for a free-floating space robot using the inverted chain approach, a formulation unique to space robots compared with ground-based manipulator systems. This approach gives an explicit description of the coupled dynamics between the end-effector and the robot arm, and provides the advantage of linearity with respect to the inertial parameters in the operational space formulation. In a free-floating space robot, the dynamic param...

  10. Random Matrix Approach to Fluctuations and Scaling in Complex Systems

    Science.gov (United States)

    Santhanam, M. S.

    The study of fluctuations, self-similarity and scaling in the physical and socioeconomic sciences over the last several years has brought in new insights and new ideas for modelling them. For instance, one of the important empirical results of market dynamics is that the probability distribution of price returns r in a typical market displays a power law, i.e., P(|r| > x) ∼ x^(-α), where α ≈ 3.0 [1]. In fact, this "inverse cube law" is known to hold for the volume of stocks traded in stock exchanges as well, though the exponent in this case is α ≈ 1.5 [1]. Similar power laws appear for the cumulative frequency distribution of earthquake magnitudes, often called the Gutenberg-Richter relation [2]. Indeed, everything from the size distribution of cities to wealth distributions displays power laws. These apparently universal power laws pertain to the distribution of actual values taken by some quantity of interest, say, a stock market index, and these distributions reveal scaling with certain parameters.
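    The tail exponent quoted above can be estimated from data with the Hill estimator, a standard tool for power-law tails. The following is a minimal illustrative sketch (not from the reviewed work), using synthetic Pareto-distributed returns with a true exponent of α = 3 as an assumed stand-in for market data:

```python
import numpy as np

def hill_estimator(samples, tail_fraction=0.05):
    """Hill estimator for the tail exponent alpha in P(|r| > x) ~ x^(-alpha)."""
    x = np.sort(np.abs(samples))[::-1]       # order statistics, descending
    k = max(int(tail_fraction * len(x)), 2)  # size of the tail sample
    tail = x[:k]
    # alpha = 1 / mean of log-excesses over the k-th largest value
    return 1.0 / np.mean(np.log(tail[:-1] / tail[-1]))

# Synthetic Pareto returns with true tail exponent alpha = 3
# (the "inverse cube law"); purely illustrative data
rng = np.random.default_rng(0)
r = rng.pareto(3.0, size=200_000) + 1.0
alpha_hat = hill_estimator(r)
```

    With 200,000 samples the estimate lands close to the true exponent of 3; the choice of tail fraction is the usual bias/variance trade-off of the method.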

  11. A multi-scale approach to simulate the Galaxy

    Science.gov (United States)

    Portegies Zwart, S.

    2013-01-01

    Simulating an entire galaxy, like the Milky Way, is complicated by the enormous range in spatial and temporal scales, but also by the complexity in the wide variety of physics such a simulation will have to cover. Even after solving these numerical difficulties, the Galaxy's enormous scale requires fantastic computer resources. These can in principle be acquired if the simulation can be distributed and efficiently ported to a wide range of architectures. Computationally demanding tasks can then be off-loaded to distant supercomputers whereas embarrassingly parallel tasks can be taken care of by a bunch of PCs or graphical processing units. We designed the Astronomical MUltipurpose Software Environment (AMUSE) to enable such complex simulations. AMUSE is a general-purpose framework for interconnecting scientific simulation programs using a homogeneous, unified software interface. The framework is intrinsically parallel, supports distributed resources and conveniently separates all components in memory. It performs unit conversion between different modules automatically and defines common data structures to communicate across different codes.

  12. Unified Approach to Modeling and Simulation of Space Communication Networks and Systems

    Science.gov (United States)

    Barritt, Brian; Bhasin, Kul; Eddy, Wesley; Matthews, Seth

    2010-01-01

    Network simulator software tools are often used to model the behaviors and interactions of applications, protocols, packets, and data links in terrestrial communication networks. Other software tools that model the physics, orbital dynamics, and RF characteristics of space systems have matured to allow for rapid, detailed analysis of space communication links. However, the absence of a unified toolset that integrates the two modeling approaches has encumbered the systems engineers tasked with the design, architecture, and analysis of complex space communication networks and systems. This paper presents the unified approach and describes the motivation, challenges, and our solution: the customization of the network simulator to integrate with astronautical analysis software tools for high-fidelity end-to-end simulation. Keywords: space; communication; systems; networking; simulation; modeling; QualNet; STK; integration; space networks

  13. Direct Time-Domain-Based Approach for Study of Space-Vector Pulsewidth Modulation

    DEFF Research Database (Denmark)

    Oleschuk, V.; Blaabjerg, Frede; Stankovic, A.M.

    2005-01-01

    A direct time-domain-based approach, characterized by its simplicity and clarity, is proposed for the study and design of space-vector-based methods of pulsewidth modulation (PWM) for standard voltage source inverters for adjustable speed motor drives. This approach is based on the detailed...

  14. A Multi-scale Approach to Urban Thermal Analysis

    Science.gov (United States)

    Gluch, Renne; Quattrochi, Dale A.

    2005-01-01

    An environmental consequence of urbanization is the urban heat island effect, a situation where urban areas are warmer than surrounding rural areas. The urban heat island phenomenon results from the replacement of natural landscapes with impervious surfaces such as concrete and asphalt and is linked to adverse economic and environmental impacts. In order to better understand the urban microclimate, a greater understanding of the urban thermal pattern (UTP), including an analysis of the thermal properties of individual land covers, is needed. This study examines the UTP by means of thermal land cover response for the Salt Lake City, Utah, study area at two scales: 1) the community level, and 2) the regional or valleywide level. Airborne ATLAS (Advanced Thermal Land Applications Sensor) data, a high spatial resolution (10-meter) dataset appropriate for an environment containing a concentration of diverse land covers, are used for both land cover and thermal analysis at the community level. The ATLAS data consist of 15 channels covering the visible, near-IR, mid-IR and thermal-IR wavelengths. At the regional level Landsat TM data are used for land cover analysis while the ATLAS channel 13 data are used for the thermal analysis. Results show that a heat island is evident at both the community and the valleywide level where there is an abundance of impervious surfaces. ATLAS data perform well in community level studies in terms of land cover and thermal exchanges, but other, more coarse-resolution data sets are more appropriate for large-area thermal studies. Thermal response per land cover is consistent at both levels, which suggests potential for urban climate modeling at multiple scales.

  15. Beyond consistency test of gravity with redshift-space distortions at quasilinear scales

    Science.gov (United States)

    Taruya, Atsushi; Koyama, Kazuya; Hiramatsu, Takashi; Oka, Akira

    2014-02-01

    Redshift-space distortions (RSDs) offer an attractive method to measure the growth of cosmic structure on large scales, and combining with the measurement of the cosmic expansion history, they can be used as cosmological tests of gravity. With the advent of future galaxy redshift surveys aiming at precisely measuring the RSD, an accurate modeling of RSD going beyond linear theory is a critical issue in order to detect or disprove small deviations from general relativity (GR). While several improved models of RSD have been recently proposed based on the perturbation theory (PT), the framework of these models heavily relies on GR. Here, we put forward a new PT prescription for RSD in general modified gravity models. As a specific application, we present theoretical predictions of the redshift-space power spectra in the f(R) gravity model, and compare them with N-body simulations. Using the PT template that takes into account the effects of both modifications of gravity and RSD properly, we successfully recover the fiducial model parameter in N-body simulations in an unbiased way. On the other hand, we found it difficult to detect the scale dependence of the growth rate in a model-independent way based on GR templates.

  16. A Strained Space-Time to Explain the Large Scale Properties of the Universe

    Science.gov (United States)

    Tartaglia, Angelo

    2011-06-01

    Space-time can be treated as a four-dimensional material continuum. The corresponding generally curved manifold can be thought of as having been obtained, by continuous deformation, from a flat four-dimensional Euclidean manifold. In an ordinary three-dimensional situation such a deformation process would lead to strain in the manifold. Strain in turn may be read as half the difference between the actual metric tensor and the Euclidean metric tensor of the initial unstrained manifold. On the other hand, we know that an ordinary material would react to the attempt to introduce strain by giving rise to internal stresses, and one would correspondingly have a deformation energy term. Assuming the conditions of linear elasticity hold, the deformation energy is easily written in terms of the strain tensor. The Einstein-Hilbert action is generalized to include the new deformation energy term. The new action for space-time has been applied to a Friedmann-Lemaître-Robertson-Walker universe filled with dust and radiation. The accelerated expansion is recovered, and the theory has then been put through four cosmological tests: primordial isotopic abundances from Big Bang nucleosynthesis; the acoustic scale of the CMB; large scale structure formation; and the luminosity/redshift relation for type Ia supernovae. The result is satisfying and has allowed us to evaluate the parameters of the theory.
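    The construction described above can be sketched schematically. The notation below is an assumption for illustration only (a flat reference metric η_{μν}, Lamé-type parameters λ and μ, strain trace ε), not necessarily the paper's exact symbols or energy functional:

```latex
% Strain as half the difference between the actual metric and the
% flat reference metric of the unstrained manifold:
\varepsilon_{\mu\nu} = \tfrac{1}{2}\left(g_{\mu\nu} - \eta_{\mu\nu}\right),
\qquad \varepsilon = \varepsilon^{\mu}{}_{\mu}.
% Linear-elastic deformation energy (Lame-type parameters assumed):
W = \tfrac{1}{2}\,\lambda\,\varepsilon^{2}
    + \mu\,\varepsilon_{\mu\nu}\,\varepsilon^{\mu\nu}.
% Generalized action: Einstein-Hilbert term plus deformation energy
% plus matter Lagrangian:
S = \int \left(\frac{R}{2\kappa} + W
    + \mathcal{L}_{\mathrm{matter}}\right)\sqrt{-g}\,\mathrm{d}^{4}x.
```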

  17. Tether pointing platform and space elevator mechanisms analysis of the key concepts for SATP and scaled SATP

    Science.gov (United States)

    Turci, E.

    1986-01-01

    The key concepts for a scaled and full model Science and Applications Tethered Platform (SATP) are analyzed. These include a tether pointing platform and a space elevator. Mechanism concepts and technological solutions are given. The idea of the tether pointing platform mechanism is to control and stabilize the attitude of a platform by means of a movable tether. The idea of the space elevator mechanism for a scaled SATP is to drag the tether by gripping it between two rotating wheels.

  18. The Development of Large Scale Cosmic Structure: A Theoretician's Approach

    Science.gov (United States)

    Ostriker, Jeremiah P.

    2001-03-01

    The study of cosmology, the origin, nature and future evolution of structure in the universe, has been totally transformed in the last decade, and computers have played a major role in the change. New theories have arisen from particle physics which make the subject, formerly almost a branch of philosophy, into a quantitative science. Initial, semi-quantitative tests of these theories, either using data on galaxy distributions in the local universe or the cosmic background radiation fluctuations reaching us from the distant universe, indicate rough agreement with the simplest predictions of the theories. But now that fully three-dimensional, time-dependent numerical simulations can be made on modern, parallel-architecture computers, we can examine (using good physical modelling) the detailed quantitative predictions of the various theories that have been proposed to see which, if any, can produce an output consistent with the real world being revealed to us by the latest ground- and space-based instruments. Using these tools, we have been able to reduce to a small number the currently viable options for the correct cosmological theory. At present, the most acceptable model, passing all presently applied tests, is the low density but flat Cold Dark Matter model with Ω_baryon = 0.04, Ω_CDM = 0.26, Ω_Lambda = 0.70. The nature of the "cosmological constant" presents a challenging problem of physics.

  19. Redshift space correlations and scale-dependent stochastic biasing of density peaks

    Science.gov (United States)

    Desjacques, Vincent; Sheth, Ravi K.

    2010-01-01

    We calculate the redshift space correlation function and the power spectrum of density peaks of a Gaussian random field. Our derivation, which is valid on linear scales k ≲ 0.1 h Mpc^-1, is based on the peak biasing relation given by Desjacques [Phys. Rev. D 78, 103503 (2008)]. In linear theory, the redshift space power spectrum is P^s_pk(k,μ) = exp(-f^2 σ_vel^2 k^2 μ^2) [b_pk(k) + b_vel(k) f μ^2]^2 P_δ(k), where μ is the cosine of the angle with respect to the line of sight, σ_vel is the one-dimensional velocity dispersion, f is the growth rate, and b_pk(k) and b_vel(k) are k-dependent linear spatial and velocity bias factors. For peaks, the value of σ_vel depends upon the functional form of b_vel. When the k dependence is absent from the square brackets and b_vel is set to unity, the resulting expression is assumed to describe models where the bias is linear and deterministic, but the velocities are unbiased. The peak model is remarkable because it has unbiased velocities in this same sense (peak motions are driven by dark matter flows) but, in order to achieve this, b_vel must be k dependent. We speculate that this is true in general: k dependence of the spatial bias will lead to k dependence of b_vel even if the biased tracers flow with the dark matter. Because of the k dependence of the linear bias parameters, standard manipulations applied to the peak model will lead to k-dependent estimates of the growth factor that could erroneously be interpreted as a signature of modified dark energy or gravity. We use the Fisher formalism to show that the constraint on the growth rate f is degraded by a factor of 2 if one allows for a k-dependent velocity bias of the peak type. Our analysis also demonstrates that the Gaussian smoothing term is part and parcel of linear theory. We discuss a simple estimate of nonlinear evolution and illustrate the effect of the peak bias on the redshift space multipoles. For k ≲ 0.1 h Mpc^-1, the peak bias is deterministic but k
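    The linear formula above is straightforward to evaluate numerically. In the sketch below, the bias functions and matter spectrum are toy placeholders (assumptions for illustration), not the paper's actual peak bias factors:

```python
import numpy as np

def P_pk_s(k, mu, f, sigma_vel, b_pk, b_vel, P_delta):
    """Linear redshift-space peak power spectrum:
    exp(-f^2 s^2 k^2 mu^2) * [b_pk(k) + b_vel(k) f mu^2]^2 * P_delta(k)."""
    damping = np.exp(-(f * sigma_vel * k * mu) ** 2)   # Gaussian smoothing term
    kaiser = (b_pk(k) + b_vel(k) * f * mu ** 2) ** 2   # k-dependent Kaiser factor
    return damping * kaiser * P_delta(k)

# Toy ingredients (assumptions, not the paper's measured functions)
f = 0.5                                       # linear growth rate
sigma_vel = 5.0                               # 1-D velocity dispersion
b_pk = lambda k: 2.0 + 10.0 * k ** 2          # k-dependent spatial bias
b_vel = lambda k: 1.0 - 5.0 * k ** 2          # k-dependent velocity bias
P_delta = lambda k: 1e4 * k / (1.0 + (k / 0.02) ** 2)  # toy matter spectrum

P_perp = P_pk_s(0.05, 0.0, f, sigma_vel, b_pk, b_vel, P_delta)  # transverse
P_para = P_pk_s(0.05, 1.0, f, sigma_vel, b_pk, b_vel, P_delta)  # line of sight
```

    At μ = 0 the damping and Kaiser boost both drop out of the line-of-sight part, so the transverse power reduces to b_pk(k)^2 P_δ(k), while μ = 1 picks up the full velocity-bias enhancement.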

  20. Modified multidimensional scaling approach to analyze financial markets

    Science.gov (United States)

    Yin, Yi; Shang, Pengjian

    2014-06-01

    Detrended cross-correlation coefficient (σDCCA) and dynamic time warping (DTW) are introduced as dissimilarity measures, while multidimensional scaling (MDS) is employed to map the dissimilarities between daily price returns of 24 stock markets into a low-dimensional space. We first propose MDS based on σDCCA dissimilarity and MDS based on DTW dissimilarity, while MDS based on Euclidean dissimilarity is also employed to provide a reference for comparison. We apply these methods in order to further visualize the clustering between stock markets. Moreover, we compare MDS with an alternative visualization method, the "Unweighed Average" clustering method, applied to the same dissimilarity. Through the results, we find that MDS gives a more intuitive mapping for observing stable or emerging clusters of stock markets with similar behavior, and that the MDS analysis based on σDCCA dissimilarity provides clearer, more detailed, and more accurate information on the classification of the stock markets than the MDS analysis based on Euclidean dissimilarity. The MDS analysis based on DTW dissimilarity reveals particularly interesting additional structure in the correlations between stock markets, and reflects richer results on the clustering of stock markets than the MDS analysis based on Euclidean dissimilarity. In addition, the graphs obtained by applying MDS based on σDCCA and DTW dissimilarities may also guide the construction of multivariate econometric models.
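    As a rough illustration of this pipeline, the sketch below embeds a toy set of return series with classical (Torgerson) MDS; a simple correlation-based dissimilarity stands in for the σDCCA and DTW measures used in the paper, and the six "markets" are synthetic:

```python
import numpy as np

def classical_mds(D, n_components=2):
    """Classical (Torgerson) MDS: coordinates from a dissimilarity matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                    # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:n_components]    # leading eigenpairs
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

# Toy stand-in for daily returns of six "markets": two correlated clusters
rng = np.random.default_rng(1)
a, b = rng.normal(size=500), rng.normal(size=500)
returns = np.stack([a + 0.3 * rng.normal(size=500) for _ in range(3)]
                   + [b + 0.3 * rng.normal(size=500) for _ in range(3)])

# Correlation-based dissimilarity, a simple proxy for sigma_DCCA or DTW
rho = np.corrcoef(returns)
D = np.sqrt(np.clip(2.0 * (1.0 - rho), 0.0, None))
coords = classical_mds(D)   # markets 0-2 and 3-5 should form two clusters
```

    In the resulting 2-D map, the within-cluster distances are much smaller than the between-cluster distances, which is the kind of visual grouping the paper exploits.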

  1. Distribution function approach to redshift space distortions. Part II: N-body simulations

    Science.gov (United States)

    Okumura, Teppei; Seljak, Uroš; McDonald, Patrick; Desjacques, Vincent

    2012-02-01

    Measurement of redshift-space distortions (RSD) offers an attractive method to directly probe the cosmic growth history of density perturbations. A distribution function approach where RSD can be written as a sum over density-weighted velocity moment correlators has recently been developed. In this paper we use results of N-body simulations to investigate the individual contributions and convergence of this expansion for dark matter. If the series is expanded as a function of powers of μ, the cosine of the angle between the Fourier mode and the line of sight, then there are a finite number of terms contributing at each order. We present these terms and investigate their contribution to the total as a function of wavevector k. For μ^2 the correlation between density and momentum dominates on large scales. Higher order corrections, which act as a Finger-of-God (FoG) term, contribute 1% at k ~ 0.015 h Mpc^-1 and 10% at k ~ 0.05 h Mpc^-1 at z = 0, while for k > 0.15 h Mpc^-1 they dominate and make the total negative. These higher order terms are dominated by density-energy density correlations, which contribute negatively to the power, while the contribution from the vorticity part of the momentum density auto-correlation adds to the total power but is an order of magnitude lower. For the μ^4 term the dominant contribution on large scales is the scalar part of the momentum density auto-correlation, while higher order terms dominate for k > 0.15 h Mpc^-1. For μ^6 and μ^8 we find very little power for k < 0.15 h Mpc^-1, shooting up by 2-3 orders of magnitude between k ~ 0.15 h Mpc^-1 and k ~ 0.4 h Mpc^-1. We also compare the expansion to the full 2-d P_ss(k,μ), as well as to the monopole, quadrupole, and hexadecapole integrals of P_ss(k,μ). For these statistics an infinite number of terms contribute and we find that the expansion achieves percent level accuracy for kμ < 0.15 h Mpc^-1 at 6th order, but breaks down on smaller scales because the series is no longer perturbative. We explore resummation of the terms into

  2. Facing the scaling problem: A multi-methodical approach to simulate soil erosion at hillslope and catchment scale

    Science.gov (United States)

    Schmengler, A. C.; Vlek, P. L. G.

    2012-04-01

    Modelling soil erosion requires a holistic understanding of the sediment dynamics in a complex environment. As most erosion models are scale-dependent and their parameterization is spatially limited, their application often requires special care, particularly in data-scarce environments. This study presents a hierarchical approach to overcome the limitations of a single model by using various quantitative methods and soil erosion models to cope with the issues of scale. At hillslope scale, the physically-based Water Erosion Prediction Project (WEPP) model is used to simulate soil loss and deposition processes. Model simulations of soil loss vary between 5 and 50 t ha^-1 yr^-1 depending on the spatial location on the hillslope and have only limited correspondence with the results of the 137Cs technique. These differences in absolute soil loss values could be due either to internal shortcomings of each approach or to external scale-related uncertainties. Pedo-geomorphological soil investigations along a catena confirm that estimations by the 137Cs technique are more appropriate in reflecting both the spatial extent and magnitude of soil erosion at hillslope scale. In order to account for sediment dynamics at a larger scale, the spatially-distributed WaTEM/SEDEM model is used to simulate soil erosion at catchment scale and to predict sediment delivery rates into a small water reservoir. Predicted sediment yield rates are compared with results gained from a bathymetric survey and sediment core analysis. Results show that specific sediment rates of 0.6 t ha^-1 yr^-1 predicted by the model are in close agreement with observed sediment yield calculated from stratigraphical changes and downcore variations in 137Cs concentrations. Sediment erosion rates averaged over the entire catchment of 1 to 2 t ha^-1 yr^-1 are significantly lower than results obtained at hillslope scale, confirming an inverse correlation between the magnitude of erosion rates and the spatial scale of the model. The

  3. Model Scaling Approach for the GOCE End to End Simulator

    Science.gov (United States)

    Catastini, G.; De Sanctis, S.; Dumontel, M.; Parisch, M.

    2007-08-01

    The Gravity field and steady-state Ocean Circulation Explorer (GOCE) is the first core Earth explorer of ESA's Earth observation programme of satellites for research in the Earth sciences. The objective of the mission is to produce high-accuracy, high-resolution, global measurements of the Earth's gravity field, leading to improved geopotential and geoid (the equipotential surface corresponding to the steady-state sea level) models for use in a wide range of geophysical applications. More precisely, the GOCE mission is designed to provide a global reconstruction of the geopotential model and geoid with high spatial resolution (better than 0.1 cm at degree and order l = 50 and better than 1.0 cm at degree and order l = 200). Such a performance scenario requires the computation of at least 200 harmonics of the gravitational field and a simulated time span covering a minimum of 60 days (corresponding to full coverage of the Earth's surface). Thales Alenia Space Italia (TAS-I) is responsible, as Prime Contractor, for the GOCE satellite. The GOCE mission objective is the high-accuracy retrieval of the Earth gravity field. The idea of an End-to-End simulator (E2E) was conceived in the early stages of the GOCE programme as an essential tool for supporting the design and verification activities as well as for assessing the satellite system performance. The simulator in its present form has been developed at TAS-I for ESA since the beginning of Phase B and is currently used for: checking the consistency of spacecraft and payload specifications with the overall system requirements; supporting trade-off, sensitivity and worst-case analyses; supporting design and pre-validation testing of the Drag-Free and Attitude Control (DFAC) laws; preparing and testing the on-ground and in-flight gradiometer calibration concepts; and prototyping the post-processing algorithms, transforming the scientific data from Level 0 (raw telemetry format) to Level 1B (i.e. geo-located gravity

  4. Climatic and physiographic controls on catchment-scale nitrate loss at different spatial scales: insights from a top-down model development approach

    Science.gov (United States)

    Shafii, Mahyar; Basu, Nandita; Schiff, Sherry; Van Cappellen, Philippe

    2017-04-01

    Dramatic increase in nitrogen circulating in the biosphere due to anthropogenic activities has resulted in impairment of water quality in groundwater and surface water causing eutrophication in coastal regions. Understanding the fate and transport of nitrogen from landscape to coastal areas requires exploring the drivers of nitrogen processes in both time and space, as well as the identification of appropriate flow pathways. Conceptual models can be used as diagnostic tools to provide insights into such controls. However, diagnostic evaluation of coupled hydrological-biogeochemical models is challenging. This research proposes a top-down methodology utilizing hydrochemical signatures to develop conceptual models for simulating the integrated streamflow and nitrate responses while taking into account dominant controls on nitrate variability (e.g., climate, soil water content, etc.). Our main objective is to seek appropriate model complexity that sufficiently reproduces multiple hydrological and nitrate signatures. Having developed a suitable conceptual model for a given watershed, we employ it in sensitivity studies to demonstrate the dominant process controls that contribute to the nitrate response at scales of interest. We apply the proposed approach to nitrate simulation in a range of small to large sub-watersheds in the Grand River Watershed (GRW) located in Ontario. Such multi-basin modeling experiment will enable us to address process scaling and investigate the consequences of lumping processes in terms of models' predictive capability. The proposed methodology can be applied to the development of large-scale models that can help decision-making associated with nutrients management at regional scale.

  5. OBJECT-ORIENTED CHANGE DETECTION BASED ON MULTI-SCALE APPROACH

    Directory of Open Access Journals (Sweden)

    Y. Jia

    2016-06-01

    The change detection of remote sensing images means quantitatively analysing the change information and recognizing the change types of the surface coverage data in different time phases. With the appearance of high resolution remote sensing images, object-oriented change detection methods have emerged. In this paper, we investigate a multi-scale approach for high resolution images, which includes multi-scale segmentation, multi-scale feature selection and multi-scale classification. Experimental results show that this method has a clear advantage over the traditional single-scale method for high resolution remote sensing image change detection.

  6. Multi-scale approach for simulating time-delay biochemical reaction systems.

    Science.gov (United States)

    Niu, Yuanling; Burrage, Kevin; Zhang, Chengjian

    2015-02-01

    This study presents a multi-scale approach for simulating time-delay biochemical reaction systems when there are wide ranges of molecular numbers. The authors construct a new efficient approach based on partitioning into slow and fast subsets in conjunction with predictor-corrector methods. This multi-scale approach is shown to be much more efficient than existing methods such as the delay stochastic simulation algorithm and the modified next reaction method. Numerical testing on several important problems in systems biology confirms the accuracy and computational efficiency of this approach.

  7. Hypermodular Self-Assembling Space Solar Power -- Design Option for Mid-Term GEO Utility-Scale Power Plants

    CERN Document Server

    Leitgab, Martin

    2013-01-01

    This paper presents a design for scalable space solar power systems based on free-flying reflectors and module self-assembly. Lower system cost of utility-scale space solar power is achieved by design independence from yet-to-be-built in-space assembly or transportation infrastructure. Using current and expected near-term technology, this study describes a design for mid-term utility-scale power plants in geosynchronous orbits. High-level economic considerations in the context of current and expected future launch costs are given as well.

  8. Spatial scale invariance of aggregated dynamics - Application to crops cycle observed from space

    Science.gov (United States)

    Mangiarotti, S.; Le Jean, F.

    2014-12-01

    Observational data are always associated with specific time and space scales. When the observed area of study is homogeneous, the same dynamics can be expected at the different observed scales; in general, this is not the case, which is a common obstacle when comparing data or products of different resolution. This question is investigated here considering the cycles of rainfed crops observed from space in semi-arid regions. In such a context, the rainfed crops are coupled to the climatic dynamics in a synchronized way, so the observational signal can be seen as an aggregation of phase-synchronized dynamics. In the first part of this work, a case study is implemented. Rössler chaotic systems are used for this purpose as elementary oscillators representing homogeneous behavior. The 'observational' signal is obtained by additively aggregating the signals of several elementary chaotic systems. Analytically, it is found that the aggregated signal can be approximated by the Rössler system itself but with some parameterization changes. This result can be generalized to any system for which a canonical approximation is possible. Using the global modeling technique [1], this theoretical result is then illustrated practically, by showing that an approximation of the Rössler dynamics can be retrieved, without any a priori knowledge, from the aggregated signal. In the second part, the cycle of cereal crops observed from space in semi-arid conditions is investigated from real observational data (the GIMMS product of the Normalized Difference Vegetation Index [2] is used for this purpose). A low-dimensional chaotic model could recently be obtained from a spatially aggregated signal which presents properties never observed from real data before: toroidal and weakly dissipative dynamics [3]. These unusual properties are then retrieved at various places and scales. [1] Mangiarotti S., Coudret R., Drapeau L. & Jarlan L., 2012. Polynomial search and Global modelling: two algorithms for modeling
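    The aggregation experiment described above can be sketched as follows: several slightly detuned Rössler oscillators are integrated independently and their x-components summed. This is an illustrative reconstruction under assumed parameter values, not the authors' code:

```python
import numpy as np

def rossler_rhs(s, a=0.2, b=0.2, c=5.7):
    """Roessler system: dx = -y - z, dy = x + a*y, dz = b + z*(x - c)."""
    x, y, z = s
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def x_series(s, a, steps=5000, dt=0.01):
    """Fixed-step RK4 integration returning the x-component time series."""
    xs = np.empty(steps)
    for i in range(steps):
        k1 = rossler_rhs(s, a)
        k2 = rossler_rhs(s + 0.5 * dt * k1, a)
        k3 = rossler_rhs(s + 0.5 * dt * k2, a)
        k4 = rossler_rhs(s + dt * k3, a)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        xs[i] = s[0]
    return xs

# Aggregate the x-signals of five slightly detuned elementary oscillators
rng = np.random.default_rng(2)
signals = [x_series(rng.normal(size=3), a=0.2 + 0.01 * i) for i in range(5)]
aggregated = np.sum(signals, axis=0)
```

    The `aggregated` series is the kind of signal from which the global modeling technique would then attempt to recover an approximate Rössler dynamics with shifted parameters.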

  9. Space Station Freedom - Configuration management approach to supporting concurrent engineering and total quality management. [for NASA Space Station Freedom Program

    Science.gov (United States)

    Gavert, Raymond B.

    1990-01-01

    Some experiences of NASA configuration management in providing concurrent engineering support to the Space Station Freedom program for the achievement of life cycle benefits and total quality are discussed. Three change decision experiences involving tracing requirements and automated information systems of the electrical power system are described. The potential benefits of concurrent engineering and total quality management include improved operational effectiveness, reduced logistics and support requirements, prevention of schedule slippages, and life cycle cost savings. It is shown how configuration management can influence the benefits attained through disciplined approaches and innovations that compel consideration of all the technical elements of engineering and quality factors that apply to the program development, transition to operations and in operations. Configuration management experiences involving the Space Station program's tiered management structure, the work package contractors, international partners, and the participating NASA centers are discussed.

  10. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.0)

    Energy Technology Data Exchange (ETDEWEB)

    Hukerikar, Saurabh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Engelmann, Christian [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-10-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Practical limits on power consumption in HPC systems will require future systems to embrace innovative architectures, increasing the levels of hardware and software complexity. The resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. These techniques must seek to improve resilience at reasonable overheads to power consumption and performance. While the HPC community has developed various solutions, application-level as well as system-based, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance and power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software ecosystems, which are expected to be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience based on the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. The catalog of resilience design patterns provides designers with reusable design elements. We define a design framework that enhances our understanding of the important

  11. Various approaches to the modelling of large scale 3-dimensional circulation in the Ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Shaji, C.; Bahulayan, N.; Rao, A.D.; Dube, S.K.

    In this paper, the three different approaches to the modelling of large scale 3-dimensional flow in the ocean such as the diagnostic, semi-diagnostic (adaptation) and the prognostic are discussed in detail. Three-dimensional solutions are obtained...

  12. A classification approach for genotyping viral sequences based on multidimensional scaling and linear discriminant analysis.

    Science.gov (United States)

    Kim, Jiwoong; Ahn, Yongju; Lee, Kichan; Park, Sung Hee; Kim, Sangsoo

    2010-08-21

    Accurate classification into genotypes is critical in understanding evolution of divergent viruses. Here we report a new approach, MuLDAS, which classifies a query sequence based on the statistical genotype models learned from the known sequences. Thus, MuLDAS utilizes full spectra of well characterized sequences as references, typically of an order of hundreds, in order to estimate the significance of each genotype assignment. MuLDAS starts by aligning the query sequence to the reference multiple sequence alignment and calculating the subsequent distance matrix among the sequences. They are then mapped to a principal coordinate space by multidimensional scaling, and the coordinates of the reference sequences are used as features in developing linear discriminant models that partition the space by genotype. The genotype of the query is then given as the maximum a posteriori estimate. MuLDAS tests the model confidence by leave-one-out cross-validation and also provides some heuristics for the detection of 'outlier' sequences that fall far outside or in-between genotype clusters. We have tested our method by classifying HIV-1 and HCV nucleotide sequences downloaded from NCBI GenBank, achieving the overall concordance rates of 99.3% and 96.6%, respectively, with the benchmark test dataset retrieved from the respective databases of Los Alamos National Laboratory. The highly accurate genotype assignment coupled with several measures for evaluating the results makes MuLDAS useful in analyzing the sequences of rapidly evolving viruses such as HIV-1 and HCV. A web-based genotype prediction server is available at http://www.muldas.org/MuLDAS/.
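    The MDS-plus-discriminant pipeline described above can be sketched in simplified form: classical multidimensional scaling maps the distance matrix to principal coordinates, and a nearest-genotype-centroid rule stands in for the full linear discriminant step. Function names and the centroid rule are illustrative assumptions, not the MuLDAS implementation:

```python
import numpy as np

def classical_mds(d, k=2):
    """Map a pairwise-distance matrix to k principal coordinates
    via double centering (classical multidimensional scaling)."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                  # Gram matrix of centered points
    vals, vecs = np.linalg.eigh(b)               # eigh returns ascending order
    idx = np.argsort(vals)[::-1][:k]             # keep the k largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

def genotype_of_query(d, labels):
    """Last row/column of d is the query; classify it by the nearest
    genotype centroid in MDS space (a simplified stand-in for LDA)."""
    coords = classical_mds(d, k=2)
    query, refs = coords[-1], coords[:-1]
    labels = np.array(labels)
    centroids = {g: refs[labels == g].mean(axis=0) for g in set(labels)}
    return min(centroids, key=lambda g: np.linalg.norm(query - centroids[g]))
```

A real system would replace the centroid rule with trained discriminant models and a posterior estimate, as the abstract describes.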

  13. Solar chimney: A sustainable approach for ventilation and building space conditioning

    Directory of Open Access Journals (Sweden)

    Lal, S.,

    2013-03-01

    Full Text Available The demand for residential and commercial buildings increases with a rapidly growing population. This leads to the vertical growth of buildings, which require proper ventilation and day-lighting. Natural air ventilation does not work effectively in conventional structures, so fans and air conditioners are needed to provide proper ventilation and space conditioning. Globally, the building sector is the largest consumer of energy, most of it in heating, ventilation and space conditioning. This load can be reduced by applying solar chimneys and integrated approaches in buildings for heating, ventilation and space conditioning, a sustainable approach for these applications. The authors review the concept, various methods of evaluation, modeling and performance of solar chimney variables, applications and integrated approaches.

  14. Semiconductor Nanocrystal Quantum Dot Synthesis Approaches Towards Large-Scale Industrial Production for Energy Applications.

    Science.gov (United States)

    Hu, Michael Z; Zhu, Ting

    2015-12-01

    This paper reviews the experimental synthesis and engineering developments focused on various green approaches and large-scale production routes for quantum dots. Fundamental process engineering principles are illustrated. In contrast to the small-scale hot-injection method, our discussion focuses on the non-injection route, which could be scaled up with engineered stirred-tank reactors. In addition, applications that demand quantum dots as "commodity" chemicals are discussed, including solar cells and solid-state lighting.

  15. Large-Scale Hollow Retroreflectors for Lunar Laser Ranging at Goddard Space Flight Center

    Science.gov (United States)

    Preston, Alix M.

    2012-05-01

    Laser ranging to the retroreflector arrays placed on the lunar surface by the Apollo astronauts and the Soviet Luna missions has dramatically increased our understanding of gravitational physics along with Earth and Moon geophysics, geodesy, and dynamics. Although the precision of the range measurements has historically been limited by ground station capabilities, advances in the APOLLO instrument at the Apache Point facility in New Mexico mean that measurements are beginning to be limited by errors associated with the lunar arrays. We report here on efforts at Goddard Space Flight Center to develop the next generation of lunar retroreflectors. We describe a new facility that is being used to design, assemble, and test large-scale hollow retroreflectors. We also describe results from investigations into various bonding techniques used to assemble the open corner cubes and mirror coatings that have dust mitigation properties.

  16. Space-based Remote Sensing: A Tool for Studying Bird Migration Across Multiple Scales

    Science.gov (United States)

    Smith, James A.

    2005-01-01

    The study of bird migration on a global scale is one of the compelling and challenging problems of modern biology with major implications for human health and conservation biology. Migration and conservation efforts cross national boundaries and are subject to numerous international agreements and treaties. Space based technology offers new opportunities to shed understanding on the distribution and migration of organisms on the planet and their sensitivity to human disturbances and environmental changes. Our working hypothesis is that individual organism biophysical models of energy and water balance, driven by satellite measurements of spatio-temporal gradients in climate and habitat, will help us to explain the variability in avian species richness and distribution. Further, these models provide an ecological forecasting tool for science and application users to visualize the possible consequences of loss of wetlands, flooding, or other natural disasters such as hurricanes on avian biodiversity and bird migration.

  17. Quantum-limited measurement of space-time curvature with scaling beyond the conventional Heisenberg limit

    Science.gov (United States)

    Kish, S. P.; Ralph, T. C.

    2017-10-01

    We study the problem of estimating the phase shift due to general relativistic time dilation in the interference of photons using a nonlinear Mach-Zehnder interferometer setup. By introducing two nonlinear Kerr materials, one in the bottom and one in the top arm, we can measure the nonlinear phase ϕ_NL produced by the space-time curvature and achieve a scaling of the standard deviation with photon number N of 1/N^β where β > 1, which exceeds the conventional Heisenberg limit of a linear interferometer (1/N). The nonlinear phase shift is an effect that is amplified by the intensity of the probe field. In a regime of high photon number, this effect can dominate over the linear phase shift.

  18. Multiscale registration of medical images based on edge preserving scale space with application in image-guided radiation therapy

    Science.gov (United States)

    Li, Dengwang; Li, Hongsheng; Wan, Honglin; Chen, Jinhu; Gong, Guanzhong; Wang, Hongjun; Wang, Liming; Yin, Yong

    2012-08-01

    Mutual information (MI) is a well-accepted similarity measure for image registration in medical systems. However, MI-based registration faces the challenges of high computational complexity and a high likelihood of being trapped into local optima due to an absence of spatial information. In order to solve these problems, multi-scale frameworks can be used to accelerate registration and improve robustness. Traditional Gaussian pyramid representation is one such technique but it suffers from contour diffusion at coarse levels which may lead to unsatisfactory registration results. In this work, a new multi-scale registration framework called edge preserving multiscale registration (EPMR) was proposed based upon an edge preserving total variation L1 norm (TV-L1) scale space representation. TV-L1 scale space is constructed by selecting edges and contours of images according to their size rather than the intensity values of the image features. This ensures more meaningful spatial information with an EPMR framework for MI-based registration. Furthermore, we design an optimal estimation of the TV-L1 parameter in the EPMR framework by training and minimizing the transformation offset between the registered pairs for automated registration in medical systems. We validated our EPMR method on both simulated mono- and multi-modal medical datasets with ground truth and clinical studies from a combined positron emission tomography/computed tomography (PET/CT) scanner. We compared our registration framework with other traditional registration approaches. Our experimental results demonstrated that our method outperformed other methods in terms of the accuracy and robustness for medical images. EPMR can always achieve a small offset value, which is closer to the ground truth both for mono-modality and multi-modality, and the speed can be increased 5-8% for mono-modality and 10-14% for multi-modality registration under the same condition. 
Furthermore, clinical application by adaptive
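    The mutual-information similarity measure that such registration frameworks optimize can be illustrated with a minimal joint-histogram estimator; the bin count and function name are illustrative choices, not EPMR's implementation:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images estimated from their joint
    histogram: MI = sum_xy p(x,y) * log( p(x,y) / (p(x) * p(y)) )."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                       # joint probability
    px = pxy.sum(axis=1, keepdims=True)           # marginal of a
    py = pxy.sum(axis=0, keepdims=True)           # marginal of b
    nz = pxy > 0                                  # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Identical images yield high MI (the marginal entropy), while a statistically independent pair yields MI near zero, which is why MI serves as a registration objective.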

  19. An analytical approach to space charge distortions for time projection chambers

    CERN Document Server

    Rossegger, S; Riegler, W

    2010-01-01

    In a time projection chamber (TPC), the possible ion feedback and also the primary ionization of high multiplicity events result in accumulation of ionic charges inside the gas volume (space charge). This charge introduces electrical field distortions and modifies the cluster trajectory along the drift path, affecting the tracking performance of the detector. In order to calculate the track distortions due to an arbitrary space charge distribution in the TPC, novel representations of the Green's function for a TPC geometry were worked out. This analytical approach finally permits accurate predictions of track distortions due to an arbitrary space charge distribution by solving the Langevin equation.

  20. Cotton trash assessment in radiographic x-ray images with scale-space filtering and stereo analysis

    Science.gov (United States)

    Dogan, Mehmet S.; Sari-Sarraf, Hamed; Hequet, Eric F.

    2005-02-01

    Trash content of raw cotton is a critical quality attribute. Therefore, accurate trash assessment is crucial for evaluating cotton's processing and market value. Current technologies, including gravimetric and surface scanning methods, suffer from various limitations; worldwide, the most commonly used method is still human grading. One of the best alternatives to the aforementioned approaches is 2D x-ray imaging, since it allows a thorough analysis of contaminants in a very precise and quick manner. The segmentation of trash particles in 2D transmission images is difficult since the background cotton is not uniform, and there is considerable overlap between the gray levels of trash and cotton. We dealt with this problem by characterizing and identifying the background cotton via scale-space filtering, followed by a "background normalization" process that removes the background cotton while leaving the trash particles intact. Furthermore, we successfully employed stereo x-ray vision to recover the depth information of the piled trash in controlled samples. Finally, the proposed technique was tested on 280 cotton radiographs with various trash levels, and the results compared favorably to existing systems of cotton trash evaluation. Given that the approach described here provides the trash mass in real time, when realized, it will have a widespread impact on the cotton industry.
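    The background-normalization idea (estimate the slowly varying cotton background at a coarse scale, then divide it out so small trash particles stand out) can be sketched as follows. The Gaussian kernel and scale parameter are illustrative stand-ins for the paper's scale-space filter:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian smoothing: one coarse scale of a scale space."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, r, mode='edge')
    out = np.apply_along_axis(lambda m: np.convolve(m, k, 'valid'), 0, pad)
    out = np.apply_along_axis(lambda m: np.convolve(m, k, 'valid'), 1, out)
    return out

def normalize_background(img, sigma=15.0):
    """Divide out the coarse-scale background so slowly varying cotton
    structure flattens toward 1 while small dark trash particles remain."""
    background = gaussian_blur(img, sigma)
    return img / np.maximum(background, 1e-6)
```

After normalization the background sits near 1.0 regardless of local cotton density, so a single global threshold can isolate trash.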

  1. Phase-space approach to lensless measurements of optical field correlations.

    Science.gov (United States)

    Sharma, Katelynn A; Brown, Thomas G; Alonso, Miguel A

    2016-07-11

    We analyze and test a general approach for efficiently measuring space-variant partially coherent quasi-monochromatic fields using only amplitude masks and free propagation. A phase-space description is presented to analyze approaches of this type and understand their limitations. Three variants of the method are discussed and compared, the first using an aperture mask, the second employing both an obstacle (the exact inverse of the aperture) and a clear mask, and the last combining the previous two. We discuss the advantages and disadvantages of each option.

  2. Full-Scale System for Quantifying Leakage of Docking System Seals for Space Applications

    Science.gov (United States)

    Dunlap, Patrick H., Jr.; Daniels, Christopher C.; Steinetz, Bruce M.; Erker, Arthur H.; Robbie, Malcolm G.; Wasowski, Janice L.; Drlik, Gary J.; Tong, Michael T.; Penney, Nicholas

    2007-01-01

    NASA is developing a new docking and berthing system to support future space exploration missions to low-Earth orbit, the Moon, and Mars. This mechanism, called the Low Impact Docking System, is designed to connect pressurized space vehicles and structures. NASA Glenn Research Center is playing a key role in developing advanced technology for the main interface seal for this new docking system. The baseline system is designed to have a fully androgynous mating interface, thereby requiring a seal-on-seal configuration when two systems mate. These seals will be approximately 147 cm (58 in.) in diameter. NASA Glenn has designed and fabricated a new test fixture which will be used to evaluate the leakage of candidate full-scale seals under simulated thermal, vacuum, and engagement conditions. This includes testing under seal-on-seal or seal-on-plate configurations, temperatures from -50 to 50 C (-58 to 122 F), operational and pre-flight checkout pressure gradients, and vehicle misalignment (plus or minus 0.381 cm (0.150 in.)) and gapping (up to 0.10 cm (0.040 in.)) conditions. This paper describes the main design features of the test rig and techniques used to overcome some of the design challenges.

  3. City and sea margins. Porto’s Marginal as scale and measure of new spaces

    Directory of Open Access Journals (Sweden)

    Giuseppe Parità

    2014-06-01

    Full Text Available The city has always confronted its own end and the beginning of the water system. Among the different kinds of margin areas, those that border cities on their watersides are particularly interesting. These liminal territories are rich in variety and difference and are made up of several elements with different morphologies that should be carefully read and interpreted: the need to re-think the morphological elements that mark an urban edge leads to the identification of several shapes and forms of the water borderlands. Borders, limits, boundaries, edges, margin areas, usually considered an obstacle to the construction of the city, turn out to be possible new "design materials" for building the ambiguous distance between city and sea. The article focuses on the case study of Porto's Marginal, which well illustrates the many ways a city can live with its water edges. On a large scale, it is configured as a strip of 15 kilometers of public space. Within this continuity, the varying distance between city and water leads to reflection on the different types of relationships (and therefore projects) between the end of one side and the beginning of another. For Porto, these are not only urban parts but also different geographical parts (sea, rivers, topography) that distance puts in relation through the design sometimes of the line, at times of the border or of a surface. The analysis of these heterogeneous but continuous projects focuses on the several techniques of urban composition used to build contemporary public spaces. On one hand they give form to a continuous "public figure"; on the other hand each project can be considered part of an "atlas" of liminal places, giving form to public spaces

  4. Spectrum-space-divided spectrum allocation approaches in software-defined elastic optical networks

    Science.gov (United States)

    Chen, Bowen; Yu, Xiaosong; Zhao, Yongli

    2017-08-01

    Recently, the architecture of elastic optical networks (EONs) has been proposed as a candidate solution to accommodate both huge bandwidth requirements and flexible connections in next-generation optical networks. In order to improve spectrum efficiency, we propose different spectrum-space-divided approaches and develop two integer linear programming (ILP) models and several spectrum-space-divided spectrum allocation approaches, with and without dedicated-path protection, in software-defined elastic optical networks (SD-EONs). Simulation results show that the ILP models achieve better performance in terms of the number of frequency slots and hop counts than the proposed spectrum-space-divided spectrum allocation approaches with and without dedicated-path protection under a static scenario of connection requests. Furthermore, under dynamic connection requests, we apply the spectrum-space-divided spectrum allocation approaches, with and without dedicated-path protection, to reduce the blocking probability and improve spectrum efficiency compared to the traditional first-fit spectrum allocation approach in SD-EONs.
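    The first-fit baseline mentioned above admits a compact sketch: scan frequency-slot indices from the bottom and reserve the first run of contiguous slots that is free on every link of the path. The data layout and names are assumptions for illustration, not the paper's implementation:

```python
def first_fit(link_slots, path, demand):
    """First-fit spectrum allocation for an elastic optical network.

    link_slots: dict mapping link name -> list of booleans (True = occupied).
    Returns the starting slot index of the allocated run, or None if the
    request is blocked (no common free run of `demand` contiguous slots).
    """
    n = len(next(iter(link_slots.values())))
    for start in range(n - demand + 1):
        window = range(start, start + demand)
        # spectrum-continuity and -contiguity: free on every link of the path
        if all(not link_slots[l][s] for l in path for s in window):
            for l in path:                      # reserve the slots
                for s in window:
                    link_slots[l][s] = True
            return start
    return None                                 # request blocked
```

First-fit packs requests toward low slot indices, which keeps fragmentation modest but ignores the spectrum-space partitioning that the proposed approaches exploit.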

  5. Multi-scale analysis of hydrologic change in the Japanese megalopolis by using integrated approach

    Science.gov (United States)

    Nakayama, T.; Fujita, T.; Hashimoto, S.; Hamano, H.

    2008-12-01

    We coupled the process-based NIES Integrated Catchment-based Eco-hydrology (NICE) model (Nakayama,2008a,2008b; Nakayama and Watanabe,2004,2006,2008a,2008b; Nakayama et al.,2006,2007) to an urban canopy model and the Regional Atmospheric Modeling System (RAMS) in order to simulate the effect of urban structure and human activity on the change of water and heat cycles in the atmospheric/land and interfacial areas of the Japanese megalopolis. The simulation was conducted at multiple scales: horizontally, from the regional level (130x130 mesh with a grid spacing of 2 km) to the urban area (180x130 mesh with a grid spacing of 200 m), and vertically, across atmosphere-surface-unsaturated-saturated layers. The model accurately reproduced the water and heat budgets, including groundwater level, air temperature, and humidity, in various types of natural/artificial landcover. The simulated results suggest that the latent heat flux has a strong impact on the hydrologic cycle and the cooling temperature in comparison with the heat budget analysis of observation data. Because the water temperature in an aquifer is almost constant throughout the year, it is estimated that the use of groundwater as a heat sink would be very effective for tackling the urban heat island phenomenon, particularly during the summer (Ministry of Environment, 2003). We evaluated the relationship between the effect of groundwater use to ameliorate the heat island phenomenon and the effect of infiltration on the water cycle in the catchment. The procedure of integrating the multi-scale model simulation with political scenarios for the effective selection and use of ecosystem service sites would be a very powerful approach to creating thermally pleasing environments in the megalopolis. References: Ministry of Environment, http://www.env.go.jp/air/report/h15-02/, 2003. Nakayama, ECOMOD, doi:10.1016/j.ecolmodel.2008.02.017, 2008a. Nakayama, FORECO, doi:10.1016/j.foreco.2008.07.017, 2008b. Nakayama and Watanabe, WRR, doi:10

  6. Results of Small-scale Solid Rocket Combustion Simulator testing at Marshall Space Flight Center

    Science.gov (United States)

    Goldberg, Benjamin E.; Cook, Jerry

    1993-01-01

    The Small-scale Solid Rocket Combustion Simulator (SSRCS) program was established at the Marshall Space Flight Center (MSFC), and used a government/industry team consisting of Hercules Aerospace Corporation, Aerotherm Corporation, United Technology Chemical Systems Division, Thiokol Corporation and MSFC personnel to study the feasibility of simulating the combustion species, temperatures and flow fields of a conventional solid rocket motor (SRM) with a versatile simulator system. The SSRCS design is based on hybrid rocket motor principles. The simulator uses a solid fuel and a gaseous oxidizer. Verification of the feasibility of a SSRCS system as a test bed was completed using flow field and system analyses, as well as empirical test data. A total of 27 hot firings of a subscale SSRCS motor were conducted at MSFC. Testing of the Small-scale SSRCS program was completed in October 1992. This paper, a compilation of reports from the above team members and additional analysis of the instrumentation results, will discuss the final results of the analyses and test programs.

  7. An adaptive scale factor based MPPT algorithm for changing solar irradiation levels in outer space

    Science.gov (United States)

    Kwan, Trevor Hocksun; Wu, Xiaofeng

    2017-03-01

    Maximum power point tracking (MPPT) techniques are popularly used for maximizing the output of solar panels by continuously tracking the maximum power point (MPP) of their P-V curves, which depends on both the panel temperature and the input insolation. Various MPPT algorithms have been studied in the literature, including perturb and observe (P&O), hill climbing, incremental conductance, fuzzy logic control and neural networks. This paper presents an algorithm which improves MPP tracking performance by adaptively scaling the DC-DC converter duty cycle. The principle of the proposed algorithm is to detect oscillation by checking the sign (i.e. direction) of the duty cycle perturbation between the current and previous time steps. If the signs differ, an oscillation is present, and the DC-DC converter duty cycle perturbation is subsequently scaled down by a constant factor. By repeating this process, the steady-state oscillations become negligibly small, which allows for a smooth steady-state MPP response. To verify the proposed MPPT algorithm, a simulation involving irradiance levels that are typically encountered in outer space is conducted. Simulation and experimental results show that the proposed algorithm is fast and stable in comparison not only to conventional fixed-step counterparts, but also to previous variable-step-size algorithms.
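    The adaptive scale-factor idea (shrink the duty-cycle perturbation whenever an oscillation around the MPP is detected) can be sketched as a variant of perturb-and-observe. The shrink factor and the detection rule here are simplified assumptions, not the paper's exact algorithm:

```python
def adaptive_mppt_step(duty, prev_duty, power, prev_power,
                       step, shrink=0.5, min_step=1e-4):
    """One perturb-and-observe iteration with an adaptive scale factor.

    When power drops, the last perturbation overshot the MPP, so the
    direction reverses and the step is scaled down by `shrink` -- this
    is the sign-flip / oscillation condition in simplified form.
    Returns (next_duty, next_step).
    """
    direction = 1.0 if duty >= prev_duty else -1.0   # last perturbation sign
    if power < prev_power:                           # overshoot detected
        direction = -direction
        step = max(step * shrink, min_step)          # damp the oscillation
    return duty + direction * step, step
```

Driving this loop against a concave P-V curve shows the duty cycle homing in on the peak while the step, and hence the steady-state ripple, decays geometrically.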

  8. Edge-preserving smoothing and segmentation of 4D images via transversely isotropic scale-space processing and fingerprint analysis

    Science.gov (United States)

    Reutter, Bryan W.; Algazi, V. Ralph; Gullberg, Grant T.; Huesman, Ronald H.

    2004-05-01

    Enhancements are described for an approach that unifies edge preserving smoothing with segmentation of time sequences of volumetric images, based on differential edge detection at multiple spatial and temporal scales. Potential applications of these 4-D methods include segmentation of respiratory gated positron emission tomography (PET) transmission images to improve accuracy of attenuation correction for imaging heart and lung lesions, and segmentation of dynamic cardiac single photon emission computed tomography (SPECT) images to facilitate unbiased estimation of time-activity curves and kinetic parameters for left ventricular volumes of interest. Improved segmentation of lung surfaces in simulated respiratory gated cardiac PET transmission images is achieved with a 4-D edge detection operator composed of edge preserving 1-D operators applied in various spatial and temporal directions. Smoothing along the axis of a 1-D operator is driven by structure separation seen in the scale-space fingerprint, rather than by image contrast. Spurious noise structures are reduced with use of small-scale isotropic smoothing in directions transverse to the 1-D operator axis. Analytic expressions are obtained for directional derivatives of the smoothed, edge preserved image, and the expressions are used to compose a 4-D operator that detects edges as zero-crossings in the second derivative in the direction of the image intensity gradient. Additional improvement in segmentation is anticipated with use of multiscale transversely isotropic smoothing and a novel interpolation method that improves the behavior of the directional derivatives. The interpolation method is demonstrated on a simulated 1-D edge and incorporation of the method into the 4-D algorithm is described.

  9. Edge preserving smoothing and segmentation of 4-D images via transversely isotropic scale-space processing and fingerprint analysis

    Energy Technology Data Exchange (ETDEWEB)

    Reutter, Bryan W.; Algazi, V. Ralph; Gullberg, Grant T; Huesman, Ronald H.

    2004-01-19

    Enhancements are described for an approach that unifies edge preserving smoothing with segmentation of time sequences of volumetric images, based on differential edge detection at multiple spatial and temporal scales. Potential applications of these 4-D methods include segmentation of respiratory gated positron emission tomography (PET) transmission images to improve accuracy of attenuation correction for imaging heart and lung lesions, and segmentation of dynamic cardiac single photon emission computed tomography (SPECT) images to facilitate unbiased estimation of time-activity curves and kinetic parameters for left ventricular volumes of interest. Improved segmentation of lung surfaces in simulated respiratory gated cardiac PET transmission images is achieved with a 4-D edge detection operator composed of edge preserving 1-D operators applied in various spatial and temporal directions. Smoothing along the axis of a 1-D operator is driven by structure separation seen in the scale-space fingerprint, rather than by image contrast. Spurious noise structures are reduced with use of small-scale isotropic smoothing in directions transverse to the 1-D operator axis. Analytic expressions are obtained for directional derivatives of the smoothed, edge preserved image, and the expressions are used to compose a 4-D operator that detects edges as zero-crossings in the second derivative in the direction of the image intensity gradient. Additional improvement in segmentation is anticipated with use of multiscale transversely isotropic smoothing and a novel interpolation method that improves the behavior of the directional derivatives. The interpolation method is demonstrated on a simulated 1-D edge and incorporation of the method into the 4-D algorithm is described.

  10. Comparing Laser Interferometry and Atom Interferometry Approaches to Space-Based Gravitational-Wave Measurement

    Science.gov (United States)

    Baker, John; Thorpe, Ira

    2012-01-01

    Thoroughly studied classic space-based gravitational-wave missions concepts such as the Laser Interferometer Space Antenna (LISA) are based on laser-interferometry techniques. Ongoing developments in atom-interferometry techniques have spurred recently proposed alternative mission concepts. These different approaches can be understood on a common footing. We present an comparative analysis of how each type of instrument responds to some of the noise sources which may limiting gravitational-wave mission concepts. Sensitivity to laser frequency instability is essentially the same for either approach. Spacecraft acceleration reference stability sensitivities are different, allowing smaller spacecraft separations in the atom interferometry approach, but acceleration noise requirements are nonetheless similar. Each approach has distinct additional measurement noise issues.

  11. Urban Multisensory Laboratory, AN Approach to Model Urban Space Human Perception

    Science.gov (United States)

    González, T.; Sol, D.; Saenz, J.; Clavijo, D.; García, H.

    2017-09-01

    An urban sensory lab (USL, or LUS by its Spanish acronym) is a new, avant-garde approach for studying and analyzing a city. This approach allows the development of new methodologies to identify the emotional response of public space users. The laboratory combines qualitative analysis proposed by urbanists with quantitative measures managed by data analysis applications. USL is a new approach that goes beyond the borders of urban knowledge. The design thinking strategy allows us to implement methods to understand the results provided by our technique. In this first approach, the interpretation is made by hand. However, our goal is to combine design thinking and machine learning in order to analyze the qualitative and quantitative data automatically. The results are now being used by students of the Urbanism and Architecture courses to gain a better understanding of public spaces in Puebla, Mexico and their interaction with people.

  12. Rabbit: A novel approach to find data-races during state-space exploration

    OpenAIRE

    Oliveira, João Paulo dos Santos

    2012-01-01

    Data-races are an important kind of error in concurrent shared-memory programs. Software model checking is a popular approach to find them. This research proposes a novel approach to find races that complements model-checking by efficiently reporting precise warnings during state-space exploration (SSE): Rabbit. It uses information obtained across different paths explored during SSE to predict likely racy memory accesses. We evaluated Rabbit on 33 different scenarios of race, i...

  13. Testing gender invariance of the hospital anxiety and depression scale using the classical approach and Bayesian approach.

    Science.gov (United States)

    Fong, Ted C T; Ho, Rainbow T H

    2014-06-01

    Measurement invariance is an important attribute for the Hospital Anxiety and Depression Scale (HADS). Most confirmatory factor analysis studies on the HADS adopt the classical maximum likelihood approach. The restrictive assumptions of exact-zero cross-loadings and residual correlations in the classical approach can lead to inadequate model fit and biased parameter estimates. The present study adopted both the classical approach and the alternative Bayesian approach to examine the measurement and structural invariance of the HADS across gender. A Chinese sample of 326 males and 427 females was used to examine the two-factor model of the HADS across gender. Configural and scalar invariance of the HADS were evaluated using the classical approach with the robust weighted least-squares estimator and the Bayesian approach with zero-mean, small-variance informative priors on cross-loadings and residual correlations. Acceptable and excellent model fits were found for the two-factor model under the classical and Bayesian approaches, respectively. The two-factor model displayed scalar invariance across gender using both approaches. In terms of structural invariance, females showed a significantly higher mean on the anxiety factor than males under both approaches. The HADS demonstrated measurement invariance across gender and appears to be a well-developed instrument for the assessment of anxiety and depression. The Bayesian approach is an alternative and flexible tool that could be used in future invariance studies.

  14. A Multi-Model Stereo Similarity Function Based on Monogenic Signal Analysis in Poisson Scale Space

    Directory of Open Access Journals (Sweden)

    Jinjun Li

    2011-01-01

    Full Text Available A stereo similarity function based on local multi-model monogenic image feature descriptors (LMFD) is proposed to match interest points and estimate the disparity map for stereo images. Local multi-model monogenic image features include the local orientation and instantaneous phase of the gray monogenic signal, the local color phase of the color monogenic signal, and local mean colors in the multiscale color monogenic signal framework. The gray monogenic signal, which is the extension of the analytic signal to gray-level images using the Dirac operator and Laplace equation, consists of the local amplitude, local orientation, and instantaneous phase of the 2D image signal. The color monogenic signal is the extension of the monogenic signal to color images based on Clifford algebras. The local color phase can be estimated by computing the geometric product between the color monogenic signal and a unit reference vector in RGB color space. Experimental results on synthetic and natural stereo images demonstrate the performance of the proposed approach.

  15. Multi-atlas and unsupervised learning approach to perirectal space segmentation in CT images.

    Science.gov (United States)

    Ghose, Soumya; Denham, James W; Ebert, Martin A; Kennedy, Angel; Mitra, Jhimli; Dowling, Jason A

    2016-12-01

    Perirectal space segmentation in computed tomography (CT) images aids in quantifying the radiation dose received by healthy tissues, and hence toxicity, during radiation therapy treatment of the prostate. Radiation dose normalised by tissue volume facilitates predicting outcomes or possible harmful side effects of treatment. Manual segmentation of the perirectal space is time-consuming and may suffer from inter- and intra-observer variability, while automatic or semi-automatic segmentation is challenging due to inter-patient anatomical variability, contrast variability, and imaging artifacts. In the model presented here, a volume of interest is obtained in a multi-atlas based segmentation approach. Unsupervised learning in the volume of interest with a Gaussian-mixture-modelling based clustering approach is adopted to achieve a soft segmentation of the perirectal space. Probabilities from soft clustering are further refined by rigid registration of the multi-atlas mask in a probabilistic domain. A maximum a posteriori approach is adopted to achieve a binary segmentation from the refined probabilities. A mean volume similarity value of 97% and a mean surface difference of 3.06 ± 0.51 mm are achieved in a leave-one-patient-out validation framework with a subset of a clinical trial dataset. Qualitative results show a good approximation of the perirectal space volume compared to the ground truth.
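
    The clustering step can be illustrated in one dimension. The sketch below is a hand-rolled two-component EM on synthetic "intensities" followed by a maximum a posteriori hard label; the multi-atlas registration and probabilistic refinement from the paper are not reproduced, and all names and numbers are mine.

    ```python
    # Hedged sketch: Gaussian-mixture soft labels plus a MAP hard label,
    # on synthetic 1-D data standing in for CT intensities inside a
    # volume of interest.
    import numpy as np

    def gmm_em_1d(x, iters=50):
        """Two-component 1-D Gaussian mixture fitted by EM; returns posteriors."""
        mu = np.array([x.min(), x.max()], dtype=float)   # crude initialisation
        var = np.array([x.var(), x.var()], dtype=float)
        pi = np.array([0.5, 0.5])
        for _ in range(iters):
            # E-step: posterior responsibility of each component per sample
            dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
            resp = pi * dens
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: re-estimate weights, means and variances
            n = resp.sum(axis=0)
            mu = (resp * x[:, None]).sum(axis=0) / n
            var = (resp * (x[:, None] - mu) ** 2) .sum(axis=0) / n
            pi = n / len(x)
        return resp, mu

    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(-50, 10, 500), rng.normal(40, 12, 500)])
    soft, means = gmm_em_1d(x)        # soft segmentation: class posteriors
    hard = soft.argmax(axis=1)        # maximum a posteriori hard labels
    ```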

  16. Activity markers and household space in Swahili urban contexts: An integrated geoarchaeological approach

    DEFF Research Database (Denmark)

    Wynne-Jones, Stephanie; Sulas, Federica

    , this paper draws from recent work at a Swahili urban site to illustrate the potential and challenges of an integrated geoarchaeological approach to the study of household space. The site of Songo Mnara (14th–16thc. AD) thrived as a Swahili stonetown off the coast of Tanzania. Here, our work has concentrated...

  17. Electronic transport through nanowires: a real-space finite-difference approach

    NARCIS (Netherlands)

    Khomyakov, Petr

    2006-01-01

    Nanoelectronics is a fast-developing field, so understanding electronic transport at the nanoscale is currently of great interest. This thesis, "Electronic transport through nanowires: a real-space finite-difference approach", aims at a general theoretical treatment of coherent electronic

  18. Worldview and Social Practice : A discourse-space approach to political text analysis

    NARCIS (Netherlands)

    Kaal, A.R.

    2017-01-01

    Worldview and Social Practice takes a cognitive-discourse approach to semantic analysis of worldview constructions along Space, Time and Attitude (STA) coordinates. It demonstrates how variations in STA schemas shape Dutch political parties’ worldviews and the rationale behind their election

  19. Learning in the Liminal Space: A Semiotic Approach to Threshold Concepts

    Science.gov (United States)

    Land, Ray; Rattray, Julie; Vivian, Peter

    2014-01-01

    The threshold concepts approach to student learning and curriculum design now informs an empirical research base comprising over 170 disciplinary and professional contexts. It draws extensively on the notion of troublesomeness in a "liminal" space of learning. The latter is a transformative state in the process of learning in which there…

  20. Concept of Draft International Standard for a Unified Approach to Space Program Quality Assurance

    Science.gov (United States)

    Stryzhak, Y.; Vasilina, V.; Kurbatov, V.

    2002-01-01

    For want of a unified approach to guaranteeing space project and product quality, implementation of many international space programs has become a challenge. Globalization of the aerospace industry, and the participation of international ventures with diverse quality assurance requirements in large international space programs, urgently requires unified international standards in this field. To ensure successful fulfillment of space missions, aerospace companies should design and produce reliable and safe products with properties complying with or bettering the User's (or Customer's) requirements. The quality of products designed or produced by subcontractors (or other suppliers) should also comply with the main user's (customer's) requirements. Implementation of this involved set of unified requirements will be made possible by creating and approving a system (series) of international standards under the generic title Space Product Quality Assurance, based on a system consensus principle. 
Conceptual features of the baseline standard in this system (series) should comprise: - Procedures for ISO 9000, CEN and ECSS requirements adaptation and introduction into space product creation, design, manufacture, testing and operation; - Procedures for quality assurance at initial (design) phases of space programs, with a decision on the end product made based on the principle of independence; - Procedures to arrange incoming inspection of products delivered by subcontractors (including testing, audit of supplier's procedures, review of supplier's documentation), and space product certification; - Procedures to identify materials and primary products applied; - Procedures for quality system audit at the component part, primary product and materials supplier facilities; - Unified procedures to form a list of basic performances to be under configuration management; - Unified procedures to form a list of critical space product components, and unified

  1. Service Areas of Local Urban Green Spaces: AN Explorative Approach in Arroios, Lisbon

    Science.gov (United States)

    Figueiredo, R.; Gonçalves, A. B.; Ramos, I. L.

    2016-09-01

    The identification of service areas of urban green spaces, and of areas lacking them, is increasingly necessary within city planning and management, as it translates into important indicators for the assessment of quality of life. In this setting, it is important to evaluate attractiveness and accessibility dynamics through a set of attributes, taking into account the local reality of the territory under study. This work presents an operational methodology associated with these dynamics in local urban green spaces, assisting in the planning and management of this type of facility. The methodology is supported firstly on questionnaire surveys and then on network analysis, processing spatial data in a Geographic Information Systems (GIS) environment. In the case study, two local green spaces in Lisbon were selected, in an explorative, local-perspective approach. Through field data, it was possible to identify service areas for both spaces and compare the results with references in the literature. It was also possible to recognise areas lacking such spaces. The difficulty of evaluating the dynamics of real individuals in their choices of urban green spaces and the respective routes is a major challenge to the application of the methodology. In this sense it becomes imperative to develop different instruments and adapt them to other types of urban green spaces.
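
    The network-analysis step can be sketched as a cutoff-limited Dijkstra search: the service area of a green space is the set of street nodes reachable within a walking-distance threshold. The toy edge list and cutoff below are mine; real use would run on a surveyed street network in GIS.

    ```python
    # Hedged sketch: service area of a green space as the set of nodes
    # reachable within a distance cutoff, via Dijkstra on a toy edge list.
    import heapq

    def service_area(edges, source, cutoff):
        graph = {}
        for a, b, w in edges:
            graph.setdefault(a, []).append((b, w))
            graph.setdefault(b, []).append((a, w))
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue                      # stale heap entry
            for nxt, w in graph.get(node, []):
                nd = d + w
                if nd <= cutoff and nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    heapq.heappush(heap, (nd, nxt))
        return set(dist)

    # hypothetical street network: edge weights in metres
    edges = [("park", "a", 150), ("a", "b", 200), ("b", "c", 400), ("park", "d", 500)]
    area = service_area(edges, "park", 400)   # nodes within a 400 m walk
    ```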

  2. A state space approach for the eigenvalue problem of marine risers

    KAUST Repository

    Alfosail, Feras

    2017-10-05

    A numerical state-space approach is proposed to examine the natural frequencies and critical buckling limits of marine risers. A large axial tension in the riser model causes numerical limitations. These limitations are overcome by using the modified Gram–Schmidt orthonormalization process as an intermediate step during the numerical integration process with the fourth-order Runge–Kutta scheme. The obtained results are validated against those obtained with other numerical methods, such as the finite-element, Galerkin, and power-series methods, and are found to be in good agreement. The state-space approach is shown to be computationally more efficient than the other methods. Also, we investigate the effect of a high applied tension, a high apparent weight, and higher-order modes on the accuracy of the numerical scheme. We demonstrate that, by applying the orthonormalization process, the stability and convergence of the approach are significantly improved.
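
    The stabilising ingredient named above can be shown in isolation. The sketch below (mine, not the authors' riser code) applies modified Gram-Schmidt to two nearly parallel solution vectors, the situation a large axial tension produces when independently integrated solutions collapse onto the fastest-growing mode.

    ```python
    # Hedged sketch: modified Gram-Schmidt orthonormalisation, the kind
    # of intermediate step one interleaves with Runge-Kutta integration
    # to keep a basis of solutions numerically independent.
    import numpy as np

    def modified_gram_schmidt(V):
        """Return a column-orthonormal matrix spanning the columns of V."""
        Q = np.array(V, dtype=float)
        for j in range(Q.shape[1]):
            Q[:, j] /= np.linalg.norm(Q[:, j])
            for k in range(j + 1, Q.shape[1]):
                # immediately remove the new direction from later columns
                Q[:, k] -= (Q[:, j] @ Q[:, k]) * Q[:, j]
        return Q

    # two nearly parallel solution vectors
    V = np.array([[1.0, 1.0], [1e-4, 0.0], [0.0, 1e-4]])
    Q = modified_gram_schmidt(V)
    ```

    The modified variant subtracts each new direction immediately, which is numerically more stable than classical Gram-Schmidt when columns are nearly dependent.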

  3. Swamp Works: A New Approach to Develop Space Mining and Resource Extraction Technologies at the National Aeronautics Space Administration (NASA) Kennedy Space Center (KSC)

    Science.gov (United States)

    Mueller, R. P.; Sibille, L.; Leucht, K.; Smith, J. D.; Townsend, I. I.; Nick, A. J.; Schuler, J. M.

    2015-01-01

    environment and methodology, with associated laboratories, that uses lean development methods and creativity-enhancing processes to invent and develop new solutions for space exploration. This paper discusses the Swamp Works approach to developing space mining and resource extraction systems and the vision of space development it serves. The ultimate goal of the Swamp Works is to expand human civilization into the solar system through the use of local resources. By mining and using local resources in situ, it is conceivable that one day the logistics supply train from Earth can be eliminated, enabling the Earth-independence of a space-based community.

  4. Oxygen isotopes as a tool to quantify reservoir-scale CO2 pore-space saturation

    Science.gov (United States)

    Serno, Sascha; Flude, Stephanie; Johnson, Gareth; Mayer, Bernard; Boyce, Adrian; Karolyte, Ruta; Haszeldine, Stuart; Gilfillan, Stuart

    2017-04-01

    Structural and residual trapping of carbon dioxide (CO2) are two key mechanisms of secure CO2 storage, an essential component of Carbon Capture and Storage technology [1]. Estimating the amount of CO2 that is trapped by these two mechanisms is a vital requirement for accurately assessing the secure CO2 storage capacity of a formation, but remains a key challenge. Recent field [2,3] and laboratory [4] studies have shown that simple and relatively inexpensive measurements of oxygen isotope ratios in both the injected CO2 and the produced water can provide an assessment of the amount of CO2 stored by these processes. These oxygen isotope assessments on samples obtained from observation wells provide results which are comparable to other geophysical techniques. In this presentation, based on the first comprehensive review of oxygen isotope ratios measured in reservoir waters and CO2 from global CO2 injection projects, we will outline the advantages and potential limitations of using oxygen isotopes to quantify CO2 pore-space saturation. We will further summarise the currently available information on the oxygen isotope composition of captured CO2. Finally, we identify the potential issues in using oxygen isotope shifts of the reservoir water from baseline conditions to estimate accurate pore-space CO2 saturations, and suggest how these issues can be reduced or avoided to provide reliable reservoir-scale CO2 pore-space saturations in future field experiments. References: [1] Scott et al. (2013) Nature Climate Change, Vol. 3, 105-111, doi:10.1038/nclimate1695. [2] Johnson et al. (2011) Chemical Geology, Vol. 283, 185-193. [3] Serno et al. (2016) IJGGC, Vol. 52, 73-83, doi:10.1016/j.ijggc.2016.06.019. [4] Johnson et al. (2011) Applied Geochemistry, Vol. 26(7), 1184-1191, doi:10.1016/j.apgeochem.2011.04.007.
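
    As a schematic illustration only, not the exact formulation of the cited papers: a two-endmember oxygen mass balance of the kind such studies build on. The fraction X of oxygen in the water-CO2 system contributed by CO2 is read off the shift of the water's δ18O from baseline toward the value it would have in isotopic equilibrium with the injected CO2. Converting X to a volumetric pore-space saturation further requires the molar oxygen densities of the two phases, which this toy omits; all numbers are illustrative.

    ```python
    # Hedged, schematic two-endmember oxygen mass balance (illustrative
    # per-mil values, not data from any injection project).
    def oxygen_fraction_from_co2(d18o_baseline, d18o_measured, d18o_equilibrium):
        """Fraction of system oxygen sourced from CO2 (0 = none, 1 = all)."""
        return (d18o_measured - d18o_baseline) / (d18o_equilibrium - d18o_baseline)

    # water shifted a quarter of the way from baseline toward equilibrium
    X = oxygen_fraction_from_co2(-7.0, -9.0, -15.0)   # -> 0.25
    ```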

  5. Multi-scale Disaster Risk Reduction Systems: Space- and Community-based Experiences over the HKH Region

    Science.gov (United States)

    Gurung, D. R.; Shrestha, M.; Shrestha, N.; Debnath, B.; Jishi, G.; Bajracharya, R.; Dhonju, H. K.; Pradhan, S.

    2014-11-01

    An increasing trend in the recurrence of natural disasters and associated impacts due to floods, glacial lake outbursts, landslides and forest fires is reported over the Hindu Kush Himalayan (HKH) region. Coupled climate-change and anthropogenic factors are identified as primary drivers of this increased vulnerability. The large degree of poverty, lack of infrastructure, poor accessibility and the uncertainties involved in understanding high-altitude land surface and climate dynamics pose serious challenges to reducing disaster vulnerability and mitigating disaster impacts. In this context, effective development of Disaster Risk Reduction (DRR) protocols and mechanisms has been recognised as an urgent need. The paper presents the adoption and experiences of multi-scale DRR systems across Himalayan member countries, ranging from community-based indigenous early warning to space-based emergency response and decision support systems. The establishment of a Regional Flood Information System (HKH-HYCOS) over the Ganges-Brahmaputra-Meghna (GBM) and Indus river basins promoted the timely exchange of flood data and information for the reduction of flood vulnerability within and among the participating countries. Satellite-based forest fire alert systems evoked significant response among diverse stakeholders in fire incidence monitoring and control. Satellite rainfall estimation products, satellite altimetry based flood early warning systems, flood inundation modelling and products, and model-derived hydrological flow products from different global data-sharing networks constitute diverse information to support multi-scale DRR systems. A community-based Flood Early Warning System (FEWS) enabled by wireless technology over the Singara and Jiadhal rivers in Assam also stands as one of the promising examples of minimizing flood risk. A disaster database and information system and decision support tools in Nepal serve as potential tools to support diverse stakeholders.

  6. Study on spatial structure of large scale retail stores based on space syntax: case study in Wuhan

    Science.gov (United States)

    Zhan, Qingming; Zhou, Jingnan; Sliuzas, Richard

    2009-10-01

    This research analyzes the spatial pattern of large-scale stores based on space syntax theory and explores the correlation between variations in syntax accessibility and the spatial pattern of large-scale stores. It develops a framework of spatial topology analysis based on space syntax theory with the following modifications: the traditional long axial-line network of space syntax is broken up, and this partitioned network is used in the topological analysis; the bus route network is analyzed as well. By taking the syntax accessibility of both the road and bus networks into consideration, we delineate urban syntax centers at the city, local and sub-local levels respectively. In the analysis of the retail distribution pattern, city-level, local-level and sub-local-level urban retail centers are identified according to the spatial distributions of the quantity and scale of the retail stores. The spatial distribution pattern of each retail format is studied through spatial correlations between retail locations and the urban syntax centers, based on a case study in Wuhan, China. Space syntax can be a useful tool to explain the allocation logic of urban retail space in large cities. We suggest applying the partitioned transportation network instead of the traditional long axial-line network.
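
    A minimal sketch of the accessibility measure that space syntax builds on (my own toy, not the authors' framework): a closeness-style "integration" of each axial line as the inverse of its mean topological depth to all other lines, computed by breadth-first search on a hypothetical axial-line adjacency graph.

    ```python
    # Hedged sketch: integration of axial lines via BFS mean depth.
    # The adjacency graph is illustrative (line A crosses B and C; B
    # crosses D).
    from collections import deque

    def mean_depth(adj, start):
        """Mean number of topological steps from start to every other line."""
        depth = {start: 0}
        q = deque([start])
        while q:
            node = q.popleft()
            for nxt in adj[node]:
                if nxt not in depth:
                    depth[nxt] = depth[node] + 1
                    q.append(nxt)
        others = [d for n, d in depth.items() if n != start]
        return sum(others) / len(others)

    adj = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
    integration = {n: 1.0 / mean_depth(adj, n) for n in adj}
    ```

    Lines crossing many well-connected lines get shallower mean depth and hence higher integration, the basic signal behind syntax-accessibility centers.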

  7. Collaborative Approaches in Developing Environmental and Safety Management Systems for Commercial Space Transportation

    Science.gov (United States)

    Zee, Stacey; Murray, D.

    2009-01-01

    The Federal Aviation Administration (FAA), Office of Commercial Space Transportation (AST) licenses and permits U.S. commercial space launch and reentry activities, and licenses the operation of non-federal launch and reentry sites. AST's mission is to ensure the protection of the public, property, and the national security and foreign policy interests of the United States during commercial space transportation activities, and to encourage, facilitate, and promote U.S. commercial space transportation. AST faces the unique challenge of ensuring the protection of public health and safety while facilitating and promoting U.S. commercial space transportation. AST has developed an Environmental Management System (EMS) and a Safety Management System (SMS) to help meet its mission. Although the EMS and SMS were developed independently, the systems share similar elements. Both systems follow a Plan-Do-Check-Act model in identifying potential environmental aspects or public safety hazards, assessing significance in terms of severity and likelihood of occurrence, developing approaches to reduce risk, and verifying that the risk is reduced. This paper describes the similarities between AST's EMS and SMS elements and how AST is building a collaborative approach in environmental and safety management to reduce impacts to the environment and risks to the public.

  8. Heat and mass transfer intensification and shape optimization a multi-scale approach

    CERN Document Server

    2013-01-01

    Is the heat and mass transfer intensification defined as a new paradigm of process engineering, or is it just a common and old idea, renamed and given the current taste? Where might intensification occur? How to achieve intensification? How the shape optimization of thermal and fluidic devices leads to intensified heat and mass transfers? To answer these questions, Heat & Mass Transfer Intensification and Shape Optimization: A Multi-scale Approach clarifies  the definition of the intensification by highlighting the potential role of the multi-scale structures, the specific interfacial area, the distribution of driving force, the modes of energy supply and the temporal aspects of processes.   A reflection on the methods of process intensification or heat and mass transfer enhancement in multi-scale structures is provided, including porous media, heat exchangers, fluid distributors, mixers and reactors. A multi-scale approach to achieve intensification and shape optimization is developed and clearly expla...

  9. Tree-space statistics and approximations for large-scale analysis of anatomical trees

    DEFF Research Database (Denmark)

    Feragen, Aasa; Owen, Megan; Petersen, Jens

    2013-01-01

    space of leaf-labeled trees. This tree-space is a geodesic metric space where any two trees are connected by a unique shortest path, which corresponds to a tree deformation. However, tree-space is not a manifold, and the usual strategy of performing statistical analysis in a tangent space and projecting...... onto tree-space is not available. Using tree-space and its shortest paths, a variety of statistical properties, such as mean, principal component, hypothesis testing and linear discriminant analysis can be defined. For some of these properties it is still an open problem how to compute them; others...... parametrize the relevant parts of tree-space well. Using the developed approximate statistics, we illustrate how the structure and geometry of airway trees vary across a population and show that airway trees with Chronic Obstructive Pulmonary Disease come from a different distribution in tree-space than...

  10. On the effects of small scale space-time variability of rainfall on basin flood response

    Science.gov (United States)

    Paschalis, Athanasios; Fatichi, Simone; Molnar, Peter; Rimkus, Stefan; Burlando, Paolo

    2014-06-01

    The spatio-temporal variability of rainfall, especially at fine temporal and spatial scales can significantly affect flood generation, leading to a large variability in the flood response and uncertainty in its prediction. In this study we quantify the impact of rainfall spatial and temporal structure on the catchment hydrological response based on a numerical experiment. Rainfall ensembles generated using a state-of-the-art space-time stochastic model are used as input into a distributed process-based hydrological model. The sensitivity of the hydrograph to several structural characteristics of storm rainfall for three soil moisture initial conditions is numerically assessed at the basin outlet of an Alpine catchment in central Switzerland. The results highlight that the flood response is strongly affected by the temporal correlation of rainfall and to a lesser extent by its spatial variability. Initial soil moisture conditions play a paramount role in mediating the response. We identify the underlying mechanistic explanations in terms of runoff generation and connectivity of saturated areas that determine the sensitivity of flood response to the spatio-temporal variability of rainfall. We show that the element that mostly influences both the flood peak and the time of peak occurrence is the clustering of saturated areas in the catchment which leads to local enhanced runoff.

  11. Statistical inference and visualization in scale-space for spatially dependent images

    KAUST Repository

    Vaughan, Amy

    2012-03-01

    SiZer (SIgnificant ZERo crossing of the derivatives) is a graphical scale-space visualization tool that allows for statistical inferences. In this paper we develop a spatial SiZer for finding significant features and conducting goodness-of-fit tests for spatially dependent images. The spatial SiZer utilizes a family of kernel estimates of the image and provides not only exploratory data analysis but also statistical inference with spatial correlation taken into account. It is also capable of comparing the observed image with a specific null model being tested by adjusting the statistical inference using an assumed covariance structure. Pixel locations having statistically significant differences between the image and a given null model are highlighted by arrows. The spatial SiZer is compared with the existing independent SiZer via the analysis of simulated data with and without signal on both planar and spherical domains. We apply the spatial SiZer method to the decadal temperature change over some regions of the Earth. © 2011 The Korean Statistical Society.
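
    A one-dimensional toy of the underlying SiZer idea, assuming independent noise (precisely the assumption the spatial SiZer above generalises away from): smooth the data at several bandwidths and flag locations whose smoothed derivative exceeds a rough pointwise confidence band. The estimator details below are simplified and mine.

    ```python
    # Hedged 1-D SiZer-style map: +1 significantly increasing, -1
    # significantly decreasing, 0 not significant, one row per bandwidth.
    import numpy as np

    def sizer_map(y, bandwidths, z=1.96):
        x = np.arange(len(y), dtype=float)
        diff = x[None, :] - x[:, None]               # data point minus evaluation point
        sigma = np.std(np.diff(y)) / np.sqrt(2.0)    # crude noise-level estimate
        rows = []
        for h in bandwidths:
            w = np.exp(-0.5 * (diff / h) ** 2)       # Gaussian kernel weights
            wd = w * diff / h ** 2                   # derivative-of-kernel weights
            wsum = w.sum(axis=1)
            deriv = (wd * y).sum(axis=1) / wsum      # crude derivative estimate
            se = sigma * np.sqrt((wd ** 2).sum(axis=1)) / wsum
            rows.append(np.sign(deriv) * (np.abs(deriv) > z * se))
        return np.array(rows)

    rng = np.random.default_rng(2)
    y = np.sin(np.linspace(0.0, 2.0 * np.pi, 200)) + 0.1 * rng.standard_normal(200)
    m = sizer_map(y, bandwidths=[5, 20])
    ```

    On the noisy sine, the coarse-bandwidth row flags the rising first quarter as +1 and the falling middle as -1, the kind of feature map SiZer visualises across scales.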

  12. Papaya Tree Detection with UAV Images Using a GPU-Accelerated Scale-Space Filtering Method

    Directory of Open Access Journals (Sweden)

    Hao Jiang

    2017-07-01

    Full Text Available The use of unmanned aerial vehicles (UAVs) can allow individual tree detection for forest inventories in a cost-effective way. The scale-space filtering (SSF) algorithm is commonly used and has the capability of detecting trees of different crown sizes. In this study, we made two improvements with regard to the existing method and implementations. First, we incorporated SSF with a Lab color transformation to reduce over-detection problems associated with the original luminance image. Second, we ported four of the most time-consuming processes to the graphics processing unit (GPU) to improve computational efficiency. The proposed method was implemented using PyCUDA, which enables access to NVIDIA's compute unified device architecture (CUDA) through high-level scripting in the Python language. Our experiments were conducted using two images captured by a DJI Phantom 3 Professional and a recent NVIDIA GTX 1080 GPU. The resulting accuracy was high, with an F-measure larger than 0.94. The speedup achieved by our parallel implementation was 44.77 and 28.54 for the first and second test images, respectively. For each 4000 × 3000 image, the total runtime was less than 1 s, which is sufficient for real-time performance and interactive applications.
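
    A CPU-only, numpy-only toy of the scale-space filtering idea (no Lab transform, no GPU port, so not the authors' pipeline): build a difference-of-Gaussians stack over increasing sigma and pick the scale with the strongest response at a candidate crown location. The synthetic blob and scale list are mine.

    ```python
    # Hedged sketch: DoG scale selection on a synthetic "crown".
    import numpy as np

    def gaussian_blur(img, sigma):
        """Separable Gaussian blur with a truncated kernel (radius 3*sigma)."""
        r = int(3 * sigma)
        t = np.arange(-r, r + 1)
        k = np.exp(-0.5 * (t / sigma) ** 2)
        k /= k.sum()
        out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
        return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)

    def dog_stack(img, sigmas):
        """Difference-of-Gaussians responses between successive scales."""
        blurred = [gaussian_blur(img, s) for s in sigmas]
        return np.stack([blurred[i + 1] - blurred[i] for i in range(len(sigmas) - 1)])

    # synthetic crown: a Gaussian blob of scale ~3 px centred at (32, 32)
    yy, xx = np.mgrid[0:64, 0:64]
    img = np.exp(-((yy - 32.0) ** 2 + (xx - 32.0) ** 2) / (2 * 3.0 ** 2))
    stack = dog_stack(img, sigmas=[1, 2, 4, 8])
    best_scale = int(np.argmax(np.abs(stack[:, 32, 32])))   # strongest response
    ```

    The blob responds most strongly in the scale pair bracketing its own size, which is how SSF adapts to different crown sizes.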

  13. A continuous scale-space method for the automated placement of spot heights on maps

    Science.gov (United States)

    Rocca, Luigi; Jenny, Bernhard; Puppo, Enrico

    2017-12-01

    Spot heights and soundings explicitly indicate terrain elevation on cartographic maps. Cartographers have developed design principles for the manual selection, placement, labeling, and generalization of spot height locations, but these processes are work-intensive and expensive. Finding an algorithmic criterion that matches cartographers' judgment in ranking the significance of terrain features is a difficult endeavor. This article proposes a method for the automated selection of spot height locations representing natural features such as peaks, saddles and depressions. The lifespan of critical points in a continuous scale-space model is employed as the main measure of feature importance, and an algorithm and a data structure for its computation are described. We also introduce a method for comparing algorithmically computed spot height locations with manually produced reference compilations. The new method is compared with two known techniques from the literature. Results show spot height locations that are closer to reference spot heights produced manually by swisstopo cartographers than those of previous techniques. The introduced method can be applied to elevation models for the creation of topographic and bathymetric maps. It also ranks the importance of extracted spot height locations, which allows the size of symbols and labels to vary with the significance of the represented features. The importance ranking could also be useful for adjusting spot height density in zoomable maps in real time.
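
    The lifespan criterion has a well-known one-dimensional analogue that is easy to state in code: rank the peaks of a terrain profile by topological persistence, the height a peak's component survives before merging into a higher one. The sketch below is a standard union-find formulation of that analogue, not the authors' algorithm or data structure.

    ```python
    # Hedged sketch: persistence (lifespan) of local peaks of a 1-D
    # profile via a descending sweep with union-find.
    def peak_persistence(heights):
        """Map peak index -> lifespan (birth height minus merge height)."""
        order = sorted(range(len(heights)), key=lambda i: -heights[i])
        parent, birth, pers = {}, {}, {}

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]    # path compression
                i = parent[i]
            return i

        for i in order:                          # sweep the water level downward
            parent[i], birth[i] = i, heights[i]
            for j in (i - 1, i + 1):
                if j in parent:
                    ri, rj = find(i), find(j)
                    if ri != rj:
                        # the lower-born peak dies at the current level
                        low, high = (ri, rj) if birth[ri] < birth[rj] else (rj, ri)
                        pers[low] = birth[low] - heights[i]
                        parent[low] = high
        root = find(order[0])
        pers[root] = birth[root] - min(heights)  # the global maximum lives longest
        return pers

    profile = [0, 3, 1, 5, 2, 4, 0]
    p = peak_persistence(profile)   # peaks at indices 1, 3, 5
    ```

    Long-lived peaks are the candidates worth labeling; short-lived ones are noise, mirroring the scale-space lifespan ranking in the article.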

  14. Automated Reconstruction of Building LoDs from Airborne LiDAR Point Clouds Using an Improved Morphological Scale Space

    Directory of Open Access Journals (Sweden)

    Bisheng Yang

    2016-12-01

    Full Text Available Reconstructing building models at different levels of detail (LoDs) from airborne laser scanning point clouds is urgently needed for wide application, as this approach can balance between users' requirements and economic costs. Previous methods reconstruct building LoDs from the finest 3D building models rather than from point clouds, resulting in heavy costs and inflexible adaptivity. The scale space is a sound theory for multi-scale representation of an object from a coarser level to a finer level. Therefore, this paper proposes a novel method to reconstruct buildings at different LoDs from airborne Light Detection and Ranging (LiDAR) point clouds based on an improved morphological scale space. The proposed method first extracts building candidate regions following the separation of ground and non-ground points. For each building candidate region, it generates a scale space by iteratively applying the improved morphological reconstruction with increasing scale, and constructs the corresponding topological relationship graphs (TRGs) across scales. Secondly, the method robustly extracts building points by using features based on the TRG. Finally, it reconstructs each building at different LoDs according to the TRG. The experiments demonstrate that the proposed method robustly extracts buildings with details (e.g., door eaves and roof furniture) and performs well in distinguishing buildings from vegetation or other objects, while automatically reconstructing building LoDs from the finest building points.
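
    A one-dimensional toy of the morphological scale space the method builds on (my own illustration, not the paper's 2-D reconstruction operator): grey-scale opening with a growing structuring element removes progressively larger details, giving the coarse-to-fine sequence from which topological relationship graphs can be derived.

    ```python
    # Hedged sketch: 1-D grey-scale opening at two scales, with erosion
    # and dilation as moving min and max over a window.
    import numpy as np

    def erode(x, size):
        pad = size // 2
        xp = np.pad(x, pad, mode="edge")
        return np.array([xp[i:i + size].min() for i in range(len(x))])

    def dilate(x, size):
        pad = size // 2
        xp = np.pad(x, pad, mode="edge")
        return np.array([xp[i:i + size].max() for i in range(len(x))])

    def opening(x, size):
        return dilate(erode(x, size), size)

    # roof profile: a narrow "chimney" (width 1, height 9) on a wide roof (height 5)
    x = np.array([0, 0, 5, 5, 5, 9, 5, 5, 5, 0, 0])
    fine = opening(x, 3)      # removes the width-1 chimney, keeps the roof
    coarse = opening(x, 9)    # removes the roof block as well
    ```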

  15. Space-time trajectories of wind power generation: Parameterized precision matrices under a Gaussian copula approach

    DEFF Research Database (Denmark)

    Tastu, Julija; Pinson, Pierre; Madsen, Henrik

    2015-01-01

    Emphasis is placed on generating space-time trajectories of wind power generation, consisting of paths sampled from high-dimensional joint predictive densities, describing wind power generation at a number of contiguous locations and successive lead times. A modelling approach taking advantage......-correlations. Estimation is performed in a maximum likelihood framework. Based on a test case application in Denmark, with spatial dependencies over 15 areas and temporal ones for 43 hourly lead times (hence, for a dimension of n = 645), it is shown that accounting for space-time effects is crucial for generating skilful...
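
    A hedged sketch of the trajectory-generation step: sampling from a Gaussian copula and mapping each margin through a quantile function. For simplicity the covariance below is a separable space-time Kronecker product with AR(1)-type factors and the margins are uniform with a toy 5 MW cap; these are my assumptions, whereas the paper parameterises a precision matrix and uses predictive densities.

    ```python
    # Hedged sketch: Gaussian-copula space-time trajectories with a
    # separable (space x time) correlation structure, all values toy.
    import numpy as np
    from math import erf

    def ar1_corr(n, rho):
        idx = np.arange(n)
        return rho ** np.abs(idx[:, None] - idx)   # AR(1)-type correlation

    n_sites, n_leads = 3, 4
    sigma = np.kron(ar1_corr(n_sites, 0.5), ar1_corr(n_leads, 0.8))
    L = np.linalg.cholesky(sigma)

    rng = np.random.default_rng(3)
    z = L @ rng.standard_normal((n_sites * n_leads, 1000))   # latent Gaussian paths
    phi = np.vectorize(lambda t: 0.5 * (1.0 + erf(t / 2 ** 0.5)))
    u = phi(z)                     # copula: uniform margins, Gaussian dependence
    trajectories = 5.0 * u         # toy marginal quantile map (uniform, 5 MW cap)
    ```

    Each column of `trajectories` is one joint space-time path, which is what distinguishes trajectories from independently sampled marginal forecasts.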

  16. Innovative Approaches to Space-Based Manufacturing and Rapid Prototyping of Composite Materials

    Science.gov (United States)

    Hill, Charles S.

    2012-01-01

    The ability to deploy large habitable structures, construct, and service exploration vehicles in low earth orbit will be an enabling capability for continued human exploration of the solar system. It is evident that advanced manufacturing methods to fabricate replacement parts and re-utilize launch vehicle structural mass by converting it to different uses will be necessary to minimize costs and allow flexibility to remote crews engaged in space travel. Recent conceptual developments and the combination of inter-related approaches to low-cost manufacturing of composite materials and structures are described in context leading to the possibility of on-orbit and space-based manufacturing.

  17. A scale-based approach to interdisciplinary research and expertise in sports.

    Science.gov (United States)

    Ibáñez-Gijón, Jorge; Buekers, Martinus; Morice, Antoine; Rao, Guillaume; Mascret, Nicolas; Laurin, Jérome; Montagne, Gilles

    2017-02-01

    After more than 20 years since the introduction of ecological and dynamical approaches in sports research, their promising opportunity for interdisciplinary research has not been fulfilled yet. The complexity of the research process and the theoretical and empirical difficulties associated with an integrated ecological-dynamical approach have been the major factors hindering the generalisation of interdisciplinary projects in sports sciences. To facilitate this generalisation, we integrate the major concepts from the ecological and dynamical approaches to study behaviour as a multi-scale process. Our integration gravitates around the distinction between functional (ecological) and execution (organic) scales, and their reciprocal intra- and inter-scale constraints. We propose an (epistemological) scale-based definition of constraints that accounts for the concept of synergies as emergent coordinative structures. To illustrate how we can operationalise the notion of multi-scale synergies we use an interdisciplinary model of locomotor pointing. To conclude, we show the value of this approach for interdisciplinary research in sport sciences, as we discuss two examples of task-specific dimensionality reduction techniques in the context of an ongoing project that aims to unveil the determinants of expertise in basketball free throw shooting. These techniques provide relevant empirical evidence to help bootstrap the challenging modelling efforts required in sport sciences.

  18. A perturbative approach to the redshift space correlation function: beyond the Standard Model

    Science.gov (United States)

    Bose, Benjamin; Koyama, Kazuya

    2017-08-01

    We extend our previous redshift space power spectrum code to the redshift space correlation function. Here we focus on the Gaussian Streaming Model (GSM). Again, the code accommodates a wide range of modified gravity and dark energy models. For the non-linear real space correlation function used in the GSM we use the Fourier transform of the RegPT 1-loop matter power spectrum. We compare predictions of the GSM for a Vainshtein screened and Chameleon screened model as well as GR. These predictions are compared to the Fourier transform of the Taruya, Nishimichi and Saito (TNS) redshift space power spectrum model which is fit to N-body data. We find very good agreement between the Fourier transform of the TNS model and the GSM predictions, with <= 6% deviations in the first two correlation function multipoles for all models for redshift space separations in 50 Mpc/h <= s <= 180 Mpc/h. Excellent agreement is found in the differences between the modified gravity and GR multipole predictions for both approaches to the redshift space correlation function, highlighting their matched ability in picking up deviations from GR. We elucidate the timeliness of such non-standard templates at the dawn of stage-IV surveys and discuss necessary preparations and extensions needed for upcoming high quality data.
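
    For reference, the GSM prediction discussed above takes the following standard form (my transcription of the widely used streaming-model expression, not material from the paper); the modified-gravity dependence enters only through the real-space correlation function ξ(r), the pairwise infall velocity v12(r), and the scale- and angle-dependent dispersion σ12(r, μ):

    ```latex
    1 + \xi_s(s_\perp, s_\parallel)
      = \int_{-\infty}^{\infty} \mathrm{d}y\,
        \bigl[1 + \xi(r)\bigr]\,
        \frac{1}{\sqrt{2\pi\,\sigma_{12}^{2}(r,\mu)}}
        \exp\!\left[-\frac{\bigl(s_\parallel - y - \mu\,v_{12}(r)\bigr)^{2}}
                          {2\,\sigma_{12}^{2}(r,\mu)}\right],
    \qquad r^{2} = s_\perp^{2} + y^{2},\quad \mu = y/r .
    ```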

  19. A multidisciplinary approach to assess the welfare of weaned pigs during transport at three space allowances.

    Science.gov (United States)

    Sutherland, Mhairi A; Bryer, Pamela J; Davis, Brittany L; McGlone, John J

    2010-01-01

    Transport can be a stressful experience for pigs, especially for pigs simultaneously experiencing weaning stress. The objective of this study was to use a multidisciplinary approach to assess the welfare of weaned pigs during transport at three space allowances. A commercial semitrailer, fitted with compartments, provided 0.05, 0.06, and 0.07 m²/pig. The study recorded frequency of standing, lying, sitting, and standing-rearing on another pig during the entire duration of transport. Blood samples, body weights, and lesion scores were collected from a subset of pigs (n = 48 per space allowance) in each experimental compartment. Transport time for the pigs was 148.0 +/- 10.0 min to the wean-to-finishing site. Total white blood cell counts, cortisol, and several blood chemistry values increased significantly regardless of space allowance. Glucose and body weight decreased significantly regardless of space allowance. Space allowance influenced stand-rearing, sitting, standing, and lying behaviors in pigs. Combining behavioral and physiological measures of stress provides a robust picture of piglet welfare during transport at different space allowances.

  20. Approaches to large scale unsaturated flow in heterogeneous, stratified, and fractured geologic media

    Energy Technology Data Exchange (ETDEWEB)

    Ababou, R.

    1991-08-01

    This report develops a broad review and assessment of quantitative modeling approaches and data requirements for large-scale subsurface flow in a geologic repository for radioactive waste. The data review includes discussions of controlled field experiments, existing contamination sites, and site-specific hydrogeologic conditions at Yucca Mountain. Local-scale constitutive models for the unsaturated hydrodynamic properties of geologic media are analyzed, with particular emphasis on the effect of structural characteristics of the medium. The report further reviews and analyzes large-scale hydrogeologic spatial variability from aquifer data, unsaturated soil data, and fracture network data gathered from the literature. Finally, various modeling strategies toward large-scale flow simulations are assessed, including direct high-resolution simulation and coarse-scale simulation based on auxiliary hydrodynamic models such as the single equivalent continuum and the dual-porosity continuum. The roles of anisotropy, fracturing, and broad-band spatial variability are emphasized. 252 refs.

  1. a Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks

    Science.gov (United States)

    Bottacin-Busolin, A.; Worman, A. L.

    2013-12-01

    A main challenge for the planning and management of water resources is the development of multiobjective strategies for operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year causes a large proliferation of mosquitoes, which is a major problem for the people living in the surroundings. Chemical pesticides, which do not provide an effective solution to the problem and have adverse environmental impacts, are currently used as a preventive countermeasure. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance

  2. Fine-scale population dynamics in a marine fish species inferred from dynamic state-space models.

    Science.gov (United States)

    Rogers, Lauren A; Storvik, Geir O; Knutsen, Halvor; Olsen, Esben M; Stenseth, Nils C

    2017-07-01

    Identifying the spatial scale of population structuring is critical for the conservation of natural populations and for drawing accurate ecological inferences. However, population studies often use spatially aggregated data to draw inferences about population trends and drivers, potentially masking ecologically relevant population sub-structure and dynamics. The goals of this study were to investigate how population dynamics models with and without spatial structure affect inferences on population trends and the identification of intrinsic drivers of population dynamics (e.g. density dependence). Specifically, we developed dynamic, age-structured, state-space models to test different hypotheses regarding the spatial structure of a population complex of coastal Atlantic cod (Gadus morhua). Data were from a 93-year survey of juvenile (age 0 and 1) cod sampled along >200 km of the Norwegian Skagerrak coast. We compared two models: one which assumes all sampled cod belong to one larger population, and a second which assumes that each fjord contains a unique population with locally determined dynamics. Using the best supported model, we then reconstructed the historical spatial and temporal dynamics of Skagerrak coastal cod. Cross-validation showed that the spatially structured model with local dynamics had better predictive ability. Furthermore, posterior predictive checks showed that a model which assumes one homogeneous population failed to capture the spatial correlation pattern present in the survey data. The spatially structured model indicated that population trends differed markedly among fjords, as did estimates of population parameters including density-dependent survival. Recent biomass was estimated to be at a near-record low all along the coast, but the finer scale model indicated that the decline occurred at different times in different regions. 
Warm temperatures were associated with poor recruitment, but local changes in habitat and fishing pressure may
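The backbone of such state-space models is the Kalman recursion on a linear-Gaussian system. Below is a minimal univariate sketch using a Gompertz form on log-abundance (density dependence enters through a < 1), with hypothetical parameters, rather than the authors' age-structured multi-fjord model:

```python
import numpy as np

def kalman_filter(y, a, b, q, r, x0, p0):
    """Kalman filter for x_t = a*x_{t-1} + b + N(0, q), y_t = x_t + N(0, r).
    On log-abundance, a < 1 encodes density-dependent (Gompertz) dynamics.
    Returns one-step predictions and filtered estimates with variances."""
    n = len(y)
    x_pred, p_pred = np.empty(n), np.empty(n)
    x_filt, p_filt = np.empty(n), np.empty(n)
    x, p = x0, p0
    for t in range(n):
        x, p = a * x + b, a * a * p + q           # predict
        x_pred[t], p_pred[t] = x, p
        k_gain = p / (p + r)                      # update with observation y[t]
        x = x + k_gain * (y[t] - x)
        p = (1.0 - k_gain) * p
        x_filt[t], p_filt[t] = x, p
    return x_pred, p_pred, x_filt, p_filt

# Hypothetical parameters: density dependence a = 0.8, equilibrium b/(1-a).
rng = np.random.default_rng(0)
a, b, q, r = 0.8, 0.4, 0.05, 0.1
x, ys = 2.0, []
for _ in range(200):
    x = a * x + b + rng.normal(0.0, np.sqrt(q))
    ys.append(x + rng.normal(0.0, np.sqrt(r)))
ys = np.array(ys)
x_pred, p_pred, x_filt, p_filt = kalman_filter(ys, a, b, q, r, x0=2.0, p0=1.0)
```

The one-step predictions `x_pred` are what a cross-validation of predictive ability (as in the paper's model comparison) would score; a spatially structured version simply runs coupled copies of this recursion per fjord.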

  3. Computer-aided detection of lung nodules via 3D fast radial transform, scale space representation, and Zernike MIP classification.

    Science.gov (United States)

    Riccardi, Alessandro; Petkov, Todor Sergueev; Ferri, Gianluca; Masotti, Matteo; Campanini, Renato

    2011-04-01

    The authors presented a novel system for automated nodule detection in lung CT exams. The approach is based on (1) a lung tissue segmentation preprocessing step, composed of histogram thresholding, seeded region growing, and mathematical morphology; (2) a filtering step, whose aim is the preliminary detection of candidate nodules (via 3D fast radial filtering) and estimation of their geometrical features (via scale space analysis); and (3) a false positive reduction (FPR) step, comprising a heuristic FPR, which applies thresholds based on geometrical features, and a supervised FPR, which is based on support vector machines classification, which, in turn, is enhanced by a feature extraction algorithm based on maximum intensity projection processing and Zernike moments. The system was validated on 154 chest axial CT exams provided by the lung image database consortium public database. The authors obtained correct detection of 71% of nodules marked by all radiologists, with a false positive rate of 6.5 false positives per patient (FP/patient). A higher specificity of 2.5 FP/patient was reached with a sensitivity of 60%. An independent test on the ANODE09 competition database obtained an overall score of 0.310. The system shows a novel approach to the problem of lung nodule detection in CT scans: It relies on filtering techniques, image transforms, and descriptors rather than region growing and nodule segmentation, and the results are comparable to those of other recent systems in the literature and show little dependency on the different types of nodules, which is a good sign of robustness.
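The candidate-detection step rests on scale-space filtering. As a toy stand-in (not the authors' 3D fast radial transform), the 2-D difference-of-Gaussians response below highlights a synthetic bright blob, using a hand-rolled separable Gaussian blur so the sketch needs only numpy:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur built from a truncated numpy 1-D kernel."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kern = np.exp(-x ** 2 / (2 * sigma ** 2))
    kern /= kern.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, kern, mode='same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, kern, mode='same'), 1, out)

def dog_detect(img, sigma1, sigma2):
    """Difference-of-Gaussians response: peaks at bright blobs near sigma1."""
    return gaussian_blur(img, sigma1) - gaussian_blur(img, sigma2)

# Synthetic 'nodule': one Gaussian bright spot on a dark background.
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
img = np.exp(-((yy - 40) ** 2 + (xx - 20) ** 2) / (2 * 4.0 ** 2))
resp = dog_detect(img, 3.0, 6.0)
peak = np.unravel_index(np.argmax(resp), resp.shape)
print(peak)   # near (40, 20)
```

A real pipeline would evaluate the response over a stack of scales, keep local maxima as candidates, and pass their geometric descriptors to the FPR stages.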

  4. A Replica Inference Approach to Unsupervised Multi-Scale Image Segmentation

    OpenAIRE

    Hu, Dandan; Ronhovde, Peter; Nussinov, Zohar

    2011-01-01

    We apply a replica inference based Potts model method to unsupervised image segmentation on multiple scales. This approach was inspired by the statistical mechanics problem of "community detection" and its phase diagram. Specifically, the problem is cast as identifying tightly bound clusters ("communities" or "solutes") against a background or "solvent". Within our multiresolution approach, we compute information theory based correlations among multiple solutions ("replicas") of the same grap...

  5. Classical and statistical mechanics of celestial-scale spinning strings: Rotating space elevators

    Science.gov (United States)

    Golubović, L.; Knudsen, S.

    2009-05-01

    We introduce a novel and unique class of dynamical systems, Rotating Space Elevators (RSEs). The RSEs are multiply rotating systems of strings reaching into outer space. Objects sliding along RSE strings do not require internal engines or propulsion to be transported from the Earth's surface into outer space. The RSEs exhibit interesting nonlinear dynamics and statistical physics phenomena.

  6. The NASA Space Launch System Program Systems Engineering Approach for Affordability

    Science.gov (United States)

    Hutt, John J.; Whitehead, Josh; Hanson, John

    2017-01-01

    The National Aeronautics and Space Administration is currently developing the Space Launch System (SLS) to provide the United States with a capability to launch large payloads into low Earth orbit and deep space. One of the development tenets of the SLS Program is affordability. One initiative to enhance affordability is the SLS approach to requirements definition, verification and system certification. The key aspects of this initiative include: 1) Minimizing the number of requirements, 2) Elimination of explicit verification requirements, 3) Use of certified models of subsystem capability in lieu of requirements when appropriate and 4) Certification of capability beyond minimum required capability. Implementation of each aspect is described and compared to a "typical" systems engineering implementation, including a discussion of relative risk. Examples of each implementation within the SLS Program are provided.

  7. Mentoring SFRM: A New Approach to International Space Station Flight Control Training

    Science.gov (United States)

    Huning, Therese; Barshi, Immanuel; Schmidt, Lacey

    2009-01-01

    The Mission Operations Directorate (MOD) of the Johnson Space Center is responsible for providing continuous operations support for the International Space Station (ISS). Operations support requires flight controllers who are skilled in team performance as well as the technical operations of the ISS. Space Flight Resource Management (SFRM), a NASA-adapted variant of Crew Resource Management (CRM), is the competency model used in the MOD. ISS flight controller certification has evolved to include a balanced focus on development of SFRM and technical expertise. The latest challenge the MOD faces is how to certify an ISS flight controller (Operator) to a basic level of effectiveness in 1 year. SFRM training uses a two-pronged approach to expediting operator certification: 1) embed SFRM skills training into all Operator technical training and 2) use senior flight controllers as mentors. This paper focuses on how the MOD uses senior flight controllers as mentors to train SFRM skills.

  8. Linking point scale process non-linearity, catchment organization and linear system dynamics in a thermodynamic state space

    Science.gov (United States)

    Zehe, Erwin; Loritz, Ralf; Ehret, Uwe; Westhoff, Martijn; Kleidon, Axel; Savenije, Hubert

    2017-04-01

    It is striking that catchment systems often behave almost linearly, despite the strong non-linearity of point-scale soil water characteristics. In the present study we provide evidence that a thermodynamic treatment of environmental system dynamics is the key to understanding how a stronger spatial organization of catchments leads to a more linear rainfall-runoff behavior. Our starting point is that water fluxes in a catchment are associated with fluxes of kinetic and potential energy, while changes in subsurface water stocks go along with changes in potential energy and chemical energy of subsurface water. Steady state/local equilibrium of the entire system can be defined as a state of minimum free energy, reflecting an equilibrium subsurface water storage which is determined by catchment topography, soil water characteristics and water levels in the stream. Dynamics of the entire system, i.e. deviations from equilibrium storage, are 'pseudo' oscillations in a thermodynamic state space: wetting drives the system to an excess of potential energy, and relaxation back to equilibrium requires drainage/water export; drying drives it to an excess of capillary binding energy, and relaxation back to equilibrium requires recharge of the subsurface water stock. While system dynamics is highly non-linear on the 'too dry' branch, it is essentially linear on the 'too wet' branch, in the case of a potential energy excess. A steepened topography, which reflects a stronger spatial organization, reduces the equilibrium storage of the catchment system, thereby increasing the range of states where the system behaves linearly due to an excess in potential energy. In contrast, a shift to finer-textured soils increases the equilibrium storage, which implies that the range of states where the system behaves linearly is reduced.
In this context it is important to note that an increased internal organization of the system due to

  9. Scaling and universality in city space syntax: Between Zipf and Matthew

    Science.gov (United States)

    Volchenkov, D.; Blanchard, Ph.

    2008-04-01

    We report on the universality of rank-integration distributions of open spaces in city space syntax, similar to the famous rank-size distributions of cities (Zipf’s law). We also demonstrate that the degree of choice an open space represents for other spaces directly linked to it in a city follows power-law statistics. Universal statistical behavior of space syntax measures uncovers the universality of the city creation mechanism. We suggest that the observed universality may help to establish the international definition of a city as a specific land use pattern.
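Rank-size exponents of the kind reported here are typically estimated by a log-log fit over ranked values. A minimal sketch on exact power-law data (the data and numbers are illustrative, not the paper's space syntax measures):

```python
import numpy as np

def rank_size_exponent(values):
    """Fit alpha in the rank-size law x_(r) ~ C * r**(-alpha) by log-log
    least squares over ranked values (rank 1 = largest)."""
    x = np.sort(np.asarray(values, dtype=float))[::-1]
    ranks = np.arange(1, len(x) + 1)
    slope, intercept = np.polyfit(np.log(ranks), np.log(x), 1)
    return -slope, np.exp(intercept)

# Exact power-law data recovers the exponent (alpha = 1 is Zipf's law).
sizes = [1000.0 / r for r in range(1, 101)]
alpha, c = rank_size_exponent(sizes)
print(round(alpha, 3), round(c, 1))   # 1.0 1000.0
```

For empirical data, a maximum-likelihood estimator over a fitted lower cutoff is generally preferred to a plain least-squares fit, which is biased by the noisy tail.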

  10. New pharmacologic approaches to the prevention of space/motion sickness

    Science.gov (United States)

    Kohl, Randall L.; Macdonald, Scott

    1991-01-01

    Three fundamental approaches used in the selection of new agents for the evaluation in the prevention of space-motion sickness (SMS) are reviewed, with emphasis on drugs under investigation at the Johnson Space Center. These approaches are: (1) the selection of agents from drug classes that possess pharmacologic properties of established antimotion sickness agents, (2) the selection of drugs that are used to prevent emesis caused by means other than the exposure to motion, and (3) basic research that characterizes individual differences in susceptibility to SMS. In the latter type of studies, it was found that subjects who were more resistant to SMS had higher plasma AVP after severe nausea than subjects with lower resistance. The review details the experimental data collected on AVP and adrenocorticotropin. It is noted that data support interrelated roles for AVP and opioid peptides in SMS.

  11. The Opportunity in Commercial Approaches for Future NASA Deep Space Exploration Elements

    Science.gov (United States)

    Zapata, Edgar

    2017-01-01

    In 2011, NASA released a report assessing the market for commercial crew and cargo services to low Earth orbit (LEO). The report stated that NASA had spent a few hundred million dollars in the Commercial Orbital Transportation Services (COTS) program on the portion related to the development of the Falcon 9 launch vehicle. Yet a NASA cost model predicted the cost would have been significantly more with a non-commercial cost-plus contracting approach. By 2016 a NASA request for information stated it must "maximize the efficiency and sustainability of the Exploration Systems development programs", as "critical to free resources for reinvestment...such as other required deep space exploration capabilities." This work joins the previous two events, showing the potential for commercial, public-private partnerships, modeled on programs like COTS, to reduce the cost to NASA significantly for "...other required deep space exploration capabilities." These other capabilities include landers, stages and more. We mature the concept of "costed baseball cards", adding cost estimates to NASA's space systems "baseball cards." We show some potential costs, including analysis, the basis of estimates, data sources and caveats to address a critical question - based on initial assessment, are significant agency resources justified for more detailed analysis and due diligence to understand and invest in public-private partnerships for human deep space exploration systems? The cost analysis spans commercial to cost-plus contracting approaches, for smaller elements vs. larger, with some variation for lunar or Mars. By extension, we delve briefly into the potentially much broader significance of the individual cost estimates if taken together as a NASA investment portfolio where public-private partnerships are stitched together for deep space exploration.
How might multiple improvements in individual systems add up to NASA human deep space exploration achievements, realistically, affordably

  12. Contaminant ingress into multizone buildings: An analytical state-space approach

    DEFF Research Database (Denmark)

    Parker, Simon; Coffey, Chris; Gravesen, Jens

    2014-01-01

    A state-space approach is adopted to represent the concentration dynamics within multizone buildings. Analysis based on this approach is used to demonstrate that the exposure in every interior location is limited to the exterior exposure in the absence of removal mechanisms. Estimates are also developed for the short-term maximum concentration and exposure in a multizone building in response to a step-change in concentration. These have considerable potential for practical use. The analytical development is demonstrated using a simple two-zone building with an inner zone and a range of existing multizone models...
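The underlying model is linear: writing the zone concentrations as a vector C, dC/dt = A C + B c_ext with an inter-zone exchange matrix A. A sketch of the two-zone case with hypothetical air-exchange rates, integrated by explicit Euler, illustrating the bound that interior exposure cannot exceed the exterior exposure when there are no removal mechanisms:

```python
import numpy as np

# Two-zone building: zone 1 exchanges with the exterior, zone 2 (inner zone)
# exchanges only with zone 1. Air-exchange rates (per hour) are hypothetical.
Q_E1, Q_12 = 2.0, 1.0
A = np.array([[-(Q_E1 + Q_12), Q_12],
              [Q_12,           -Q_12]])
B = np.array([Q_E1, 0.0])

def simulate(c_ext, t_end, dt=1e-3):
    """Explicit-Euler integration of dC/dt = A C + B * c_ext(t), starting
    from clean air; also accumulates the per-zone exposure (integral of C)."""
    steps = int(t_end / dt)
    c = np.zeros(2)
    exposure = np.zeros(2)
    for i in range(steps):
        exposure += c * dt
        c = c + dt * (A @ c + B * c_ext(i * dt))
    return c, exposure

# Step change: exterior concentration jumps to 1 and stays there for 10 h.
c, exposure = simulate(lambda t: 1.0, t_end=10.0)
ext_exposure = 1.0 * 10.0
print(c, exposure, ext_exposure)   # interior exposures stay below 10
```

Both zones relax toward the exterior concentration but never overshoot it, so their time-integrated exposures remain below the exterior exposure, matching the analytical bound quoted in the abstract.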

  13. Formulation space search approach for the teacher/class timetabling problem

    Directory of Open Access Journals (Sweden)

    Kochetov Yuri

    2008-01-01

    We consider the well-known NP-hard teacher/class timetabling problem. Variable neighborhood search and tabu search heuristics are developed based on the idea of the Formulation Space Search approach. Two types of solution representation are used in the heuristics. For each representation we consider two families of neighborhoods. The first family uses swapping of time periods in the teacher (class) timetable. The second family is based on the idea of large Kernighan-Lin neighborhoods. Computational results for difficult random test instances show the high efficiency of the proposed approach.

  14. Infinite-dimensional approach to system identification of Space Control Laboratory Experiment (SCOLE)

    Science.gov (United States)

    Hossain, S. A.; Lee, K. Y.

    1988-01-01

    The identification of a unique set of system parameters in large space structures poses a significant new problem in control technology. Presented is an infinite-dimensional identification scheme to determine system parameters in large flexible structures in space. The method retains the distributed nature of the structure throughout the development of the algorithm and a finite-element approximation is used only to implement the algorithm. This approach eliminates many problems associated with model truncation used in other methods of identification. The identification is formulated in Hilbert space and an optimal control technique is used to minimize weighted least squares of error between the actual and the model data. A variational approach is used to solve the problem. A costate equation, gradients of parameter variations and conditions for optimal estimates are obtained. Computer simulation studies are conducted using a shuttle-attached antenna configuration, more popularly known as the Space Control Laboratory Experiment (SCOLE) as an example. Numerical results show a close match between the estimated and true values of the parameters.

  15. Pedestrian detection in thermal images: An automated scale based region extraction with curvelet space validation

    Science.gov (United States)

    Lakshmi, A.; Faheema, A. G. J.; Deodhare, Dipti

    2016-05-01

    Pedestrian detection is a key problem in night vision processing, with a dozen applications that will positively impact the performance of autonomous systems. Despite significant progress, our study shows that the performance of state-of-the-art thermal image pedestrian detectors still has much room for improvement. The purpose of this paper is to overcome a challenge faced by thermal image pedestrian detectors that employ intensity-based Region Of Interest (ROI) extraction followed by feature-based validation. The most striking disadvantage of the first module, ROI extraction, is the failed detection of cloth-insulated parts. To overcome this setback, this paper employs an algorithm and a principle of region growing pursuit tuned to the scale of the pedestrian. The statistics subtended by the pedestrian vary drastically with scale, and a deviation-from-normality approach facilitates scale detection. Further, the paper offers an adaptive mathematical threshold to resolve the problem of subtracting the background while also extracting cloth-insulated parts. The inherent false positives of the ROI extraction module are limited by the choice of good features in the pedestrian validation step. One such feature is the curvelet feature, which has been used extensively in optical images but as yet has no reported results in thermal images. This has been used to arrive at a pedestrian detector with a reduced false positive rate. This work is the first venture made to scrutinize the utility of curvelets for characterizing pedestrians in thermal images. An attempt has also been made to improve the speed of the curvelet transform computation. The classification task is realized through the use of the well-known methodology of Support Vector Machines (SVMs). 
The proposed method is substantiated with qualified evaluation methodologies that permit us to carry out probing and informative comparisons across state-of-the-art features, including deep learning methods, with six

  16. Investigation of the scaling characteristics of LANDSAT temperature and vegetation data: a wavelet-based approach.

    Science.gov (United States)

    Rathinasamy, Maheswaran; Bindhu, V M; Adamowski, Jan; Narasimhan, Balaji; Khosa, Rakesh

    2017-10-01

    An investigation of the scaling characteristics of vegetation and temperature data derived from LANDSAT data was undertaken for a heterogeneous area in Tamil Nadu, India. A wavelet-based multiresolution technique decomposed the data into large-scale mean vegetation and temperature fields and fluctuations in horizontal, diagonal, and vertical directions at hierarchical spatial resolutions. In this approach, the wavelet coefficients were used to investigate whether the normalized difference vegetation index (NDVI) and land surface temperature (LST) fields exhibited self-similar scaling behaviour. In this study, l-moments were used instead of conventional simple moments to understand scaling behaviour. Using the first six moments of the wavelet coefficients through five levels of dyadic decomposition, the NDVI data were shown to be statistically self-similar, with a slope of approximately -0.45 in each of the horizontal, vertical, and diagonal directions of the image, over scales ranging from 30 to 960 m. The temperature data were also shown to exhibit self-similarity with slopes ranging from -0.25 in the diagonal direction to -0.20 in the vertical direction over the same scales. These findings can help develop appropriate up- and down-scaling schemes of remotely sensed NDVI and LST data for various hydrologic and environmental modelling applications. A sensitivity analysis was also undertaken to understand the effect of mother wavelets on the scaling characteristics of LST and NDVI images.
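The moment-versus-scale analysis described here can be sketched with an orthonormal Haar wavelet (the sketch below uses simple second moments rather than the paper's l-moments, and synthetic data rather than LANDSAT). On white noise the detail energy is flat across levels, so the fitted slope is near zero; a self-similar field would instead give a non-zero straight-line slope:

```python
import numpy as np

def haar2d_level(a):
    """One level of an orthonormal 2-D Haar transform: returns the coarse
    approximation and the three detail sub-bands (naming conventions vary)."""
    lo_r = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)   # low-pass along rows
    hi_r = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)   # high-pass along rows
    ll = (lo_r[0::2] + lo_r[1::2]) / np.sqrt(2)
    lh = (lo_r[0::2] - lo_r[1::2]) / np.sqrt(2)
    hl = (hi_r[0::2] + hi_r[1::2]) / np.sqrt(2)
    hh = (hi_r[0::2] - hi_r[1::2]) / np.sqrt(2)     # diagonal detail
    return ll, lh, hl, hh

def scaling_slopes(img, levels):
    """Slope of log2(mean squared detail coefficient) versus level, per
    sub-band; statistical self-similarity shows up as a straight line."""
    logs = {'lh': [], 'hl': [], 'hh': []}
    a = img
    for _ in range(levels):
        a, lh, hl, hh = haar2d_level(a)
        logs['lh'].append(np.log2(np.mean(lh ** 2)))
        logs['hl'].append(np.log2(np.mean(hl ** 2)))
        logs['hh'].append(np.log2(np.mean(hh ** 2)))
    lv = np.arange(1, levels + 1)
    return {k: np.polyfit(lv, np.array(v), 1)[0] for k, v in logs.items()}

rng = np.random.default_rng(1)
img = rng.normal(size=(512, 512))     # white noise: detail energy is flat
slopes = scaling_slopes(img, 4)
print(slopes)                          # all slopes near 0
```

Replacing the synthetic field with an NDVI or LST raster (and the second moments with l-moments) reproduces the kind of slope estimates reported in the abstract.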

  18. Biocultural approaches to well-being and sustainability indicators across scales

    Science.gov (United States)

    Eleanor J. Sterling; Christopher Filardi; Anne Toomey; Amanda Sigouin; Erin Betley; Nadav Gazit; Jennifer Newell; Simon Albert; Diana Alvira; Nadia Bergamini; Mary Blair; David Boseto; Kate Burrows; Nora Bynum; Sophie Caillon; Jennifer E. Caselle; Joachim Claudet; Georgina Cullman; Rachel Dacks; Pablo B. Eyzaguirre; Steven Gray; James Herrera; Peter Kenilorea; Kealohanuiopuna Kinney; Natalie Kurashima; Suzanne Macey; Cynthia Malone; Senoveva Mauli; Joe McCarter; Heather McMillen; Pua’ala Pascua; Patrick Pikacha; Ana L. Porzecanski; Pascale de Robert; Matthieu Salpeteur; Myknee Sirikolo; Mark H. Stege; Kristina Stege; Tamara Ticktin; Ron Vave; Alaka Wali; Paige West; Kawika B. Winter; Stacy D. Jupiter

    2017-01-01

    Monitoring and evaluation are central to ensuring that innovative, multi-scale, and interdisciplinary approaches to sustainability are effective. The development of relevant indicators for local sustainable management outcomes, and the ability to link these to broader national and international policy targets, are key challenges for resource managers, policymakers, and...

  19. Small-scale chemistry for a hands-on approach to chemistry ...

    African Journals Online (AJOL)

    The purpose of this study was to investigate the possibility of using a small-scale chemistry (SSC) approach as a means of performing chemistry practical activities in Ethiopian secondary schools. A total of eight experiments from two topics, electrolysis and rate of reaction, in the Ethiopian grade 11 chemistry syllabus were ...

  20. Scaling approach to related disordered stochastic and free-fermion models

    Science.gov (United States)

    Harris, R. J.; Stinchcombe, R. B.

    2007-03-01

    Motivated by a mapping from a stochastic system with spatially random rates, we consider disordered nonconserving free-fermion systems using a scaling procedure for the equations of motion. This approach demonstrates disorder-induced localization acting in competition with the asymmetric driving. We discuss the resulting implications for the original stochastic system.

  1. Volume-area scaling approach versus flowline model in glacier volume projections

    NARCIS (Netherlands)

    Radic, V.; Hock, Regine; Oerlemans, J.

    2007-01-01

    Volume–area scaling provides a practical alternative to ice-flow modelling to account for glacier size changes when modelling the future evolution of glaciers; however, uncertainties remain as to the validity of this approach under non-steady conditions. We address these uncertainties by deriving
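Volume-area scaling itself is a one-line relation, V = c * A**gamma. A sketch using the exponent gamma ≈ 1.375 commonly used for mountain glaciers (the coefficient c is a fitted constant and the areas below are made up), showing why a shrinking glacier loses volume faster than area:

```python
# Volume-area scaling V = c * A**gamma. The exponent gamma ~ 1.375 is the
# value commonly used for mountain glaciers; c here is illustrative.
def glacier_volume(area_km2, c=0.034, gamma=1.375):
    """Glacier volume (km^3) estimated from area (km^2) via scaling."""
    return c * area_km2 ** gamma

# Because gamma > 1, the volume ratio under retreat is smaller than the
# area ratio: V1/V0 = (A1/A0)**gamma < A1/A0 when A1 < A0.
a0, a1 = 10.0, 8.0                    # km^2, hypothetical retreat
v0, v1 = glacier_volume(a0), glacier_volume(a1)
print(v1 / v0, a1 / a0)               # volume ratio < area ratio
```

This is exactly the kind of shortcut the abstract compares against a flowline model: cheap, but with no explicit treatment of the transient, non-steady geometry.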

  2. State-Space Approach to Structural Representation of Perturbed Pitch Period Sequences in Voice Signals.

    Science.gov (United States)

    Alzamendi, Gabriel A; Schlotthauer, Gastón; Torres, María E

    2015-11-01

    The aim of this study was to propose a state-space-based approach to model perturbed pitch period sequences (PPSs), extracted from real sustained vowels, combining the principal features of disturbed real PPSs with structural analysis of stochastic time series and state space methods. The PPSs were obtained from a database composed of 53 healthy subjects. State space models were developed taking into account different structures and complexity levels. PPS features such as trend, cycle, and irregular structures were considered. Model parameters were calculated using optimization procedures. For each PPS, state estimates were obtained combining the developed models and diffuse initialization with filtering and smoothing methods. Statistical tests were applied to objectively evaluate the performance of this method. Statistical tests demonstrated that the proposed approach correctly represented more than 75% of the database with a significance value of 0.05. In the analysis, structural estimates suitably characterized the dynamics of the PPSs. Trend estimates proved to properly represent slow long-term dynamics, whereas cycle estimates captured short-term autoregressive dependencies. The present study demonstrated that the proposed approach is suitable for representing and analyzing real perturbed PPSs, also allowing the extraction of further information related to the phonation process. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  3. Using a Space Filling Curve Approach for the Management of Dynamic Point Clouds

    Science.gov (United States)

    Psomadaki, S.; van Oosterom, P. J. M.; Tijssen, T. P. M.; Baart, F.

    2016-10-01

Point cloud usage has increased over the years. The development of low-cost sensors now makes it possible to acquire point cloud measurements frequently, at short time intervals (a day, an hour, a second). Based on requirements coming from the coastal monitoring domain, we have developed, implemented and benchmarked a spatio-temporal point cloud data management solution. Our solution uses the flat model approach (one point per row) in an Index Organised Table within an RDBMS, with an improved spatio-temporal organisation based on a Space Filling Curve approach. Two variants coming from the two extremes of the space-time continuum are also taken into account, along with two treatments of the z dimension: as an attribute or as part of the space filling curve. Through a benchmark we evaluate the performance (loading and querying time) and the storage required by these different approaches. Finally, we validate the correctness and suitability of our method through an out-of-the-box way of managing dynamic point clouds.
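    The space filling curve idea at the core of this organisation can be illustrated with a Morton (Z-order) key, one common choice for such curves (the paper's exact curve and dimension handling may differ): interleaving the bits of the coordinates yields a single integer key on which a B-tree-style structure such as an Index Organised Table can cluster nearby points.

```python
def morton2d(x, y, bits=16):
    """Morton (Z-order) key for two non-negative integer coordinates:
    interleave the bits of x and y so that points that are close in 2-D
    space tend to receive nearby 1-D keys."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)      # bits of x -> even positions
        key |= ((y >> i) & 1) << (2 * i + 1)  # bits of y -> odd positions
    return key
```

    Time (or z) can be handled the same way, either appended as an extra interleaved dimension or stored as a plain attribute, which is exactly the design trade-off the benchmark explores.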

  4. A Programmatic and Engineering Approach to the Development of a Nuclear Thermal Rocket for Space Exploration

    Science.gov (United States)

    Bordelon, Wayne J., Jr.; Ballard, Rick O.; Gerrish, Harold P., Jr.

    2006-01-01

With the announcement of the Vision for Space Exploration on January 14, 2004, there has been a renewed interest in nuclear thermal propulsion. Nuclear thermal propulsion is a leading candidate for in-space propulsion for human Mars missions; however, the cost to develop a nuclear thermal rocket engine system is uncertain. Key to determining the engine development cost will be the engine requirements, the technology used in the development, and the development approach. The engine requirements and technology selection have not yet been defined and await definition of the Mars architecture and vehicles. The paper discusses an engine development approach in light of top-level strategic questions and considerations for nuclear thermal propulsion, and provides a suggested approach based on work conducted at the NASA Marshall Space Flight Center to support planning and requirements for the Prometheus Power and Propulsion Office. This work is intended to help support the development of a comprehensive strategy for nuclear thermal propulsion, to help reduce the uncertainty in the development cost estimate, and to help assess the potential value of and need for nuclear thermal propulsion for a human Mars mission.

  5. Fast live cell imaging at nanometer scale using annihilating filter-based low-rank Hankel matrix approach

    Science.gov (United States)

    Min, Junhong; Carlini, Lina; Unser, Michael; Manley, Suliana; Ye, Jong Chul

    2015-09-01

Localization microscopy techniques such as STORM/PALM can achieve nanometer-scale spatial resolution by iteratively localizing fluorescent molecules. It has been shown that imaging densely activated molecules can improve temporal resolution, which has been considered a major limitation of localization microscopy. However, such higher-density imaging requires advanced localization algorithms to deal with overlapping point spread functions (PSFs). To address these technical challenges, we previously developed a localization algorithm called FALCON, using a quasi-continuous localization model with a sparsity prior in image space, and demonstrated it in both 2D and 3D live cell imaging; however, it leaves several aspects to be further improved. Here, we propose a new localization algorithm using an annihilating filter-based low-rank Hankel structured matrix approach (ALOHA). According to the ALOHA principle, sparsity in the image domain implies the existence of a rank-deficient Hankel structured matrix in Fourier space. Thanks to this fundamental duality, our new algorithm can perform data-adaptive PSF estimation and deconvolution of the Fourier spectrum, followed by truly grid-free localization using spectral estimation techniques. Furthermore, all of these optimizations are conducted in Fourier space only. We validated the performance of the new method with numerical experiments and a live cell imaging experiment. The results confirmed higher localization performance in both experiments in terms of accuracy and detection rate.
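    The duality ALOHA exploits, sparsity in one domain implying a low-rank Hankel matrix in the other, can be seen in a toy 1-D example (illustrative only, not the paper's implementation). A single exponential sequence is "one spike" in its transform domain, and its Hankel matrix has rank one: every row is a scalar multiple of the first.

```python
def hankel(signal, rows):
    """Hankel matrix H[i][j] = signal[i + j] built from a 1-D sequence."""
    cols = len(signal) - rows + 1
    return [[signal[i + j] for j in range(cols)] for i in range(rows)]
```

    With more spikes the rank grows accordingly, so a sparse signal still produces a rank-deficient Hankel matrix, which is what makes low-rank matrix completion in Fourier space possible.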

  6. Review of NASA approach to space radiation risk assessments for Mars exploration.

    Science.gov (United States)

    Cucinotta, Francis A

    2015-02-01

    Long duration space missions present unique radiation protection challenges due to the complexity of the space radiation environment, which includes high charge and energy particles and other highly ionizing radiation such as neutrons. Based on a recommendation by the National Council on Radiation Protection and Measurements, a 3% lifetime risk of exposure-induced death for cancer has been used as a basis for risk limitation by the National Aeronautics and Space Administration (NASA) for low-Earth orbit missions. NASA has developed a risk-based approach to radiation exposure limits that accounts for individual factors (age, gender, and smoking history) and assesses the uncertainties in risk estimates. New radiation quality factors with associated probability distribution functions to represent the quality factor's uncertainty have been developed based on track structure models and recent radiobiology data for high charge and energy particles. The current radiation dose limits are reviewed for spaceflight and the various qualitative and quantitative uncertainties that impact the risk of exposure-induced death estimates using the NASA Space Cancer Risk (NSCR) model. NSCR estimates of the number of "safe days" in deep space to be within exposure limits and risk estimates for a Mars exploration mission are described.

  7. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2014-03-04

Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in the 1990s a system with a few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems. The idea is to reduce the communication cost by introducing hierarchy, and hence more parallelism, in the communication scheme. We apply this approach to SUMMA, the state-of-the-art parallel algorithm for matrix–matrix multiplication, and demonstrate both theoretically and experimentally that the modified Hierarchical SUMMA significantly improves the communication cost and the overall performance on large-scale platforms.
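    SUMMA itself is a distributed-memory algorithm, but the grouping idea behind the hierarchical variant can be hinted at with a serial sketch: processing the matrices tile by tile, so each level of the loop nest works on a contiguous block, is the single-machine analogue of arranging processors into groups to shrink communication volume. This illustrates blocking only, not the published algorithm.

```python
def matmul_blocked(A, B, block=2):
    """Multiply dense matrices A (n×m) and B (m×p), tile by tile.
    The outer three loops walk over blocks; the inner three perform the
    ordinary multiply-accumulate within the current pair of tiles."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for i0 in range(0, n, block):
        for k0 in range(0, m, block):
            for j0 in range(0, p, block):
                for i in range(i0, min(i0 + block, n)):
                    for k in range(k0, min(k0 + block, m)):
                        a = A[i][k]
                        for j in range(j0, min(j0 + block, p)):
                            C[i][j] += a * B[k][j]
    return C
```

    Any block size yields the same product; the block size only controls how much work is grouped together, which in the distributed setting translates into how much communication is batched.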

  8. Ethnographic Approaches to Understanding Social Sustainability in Small-scale Water Systems

    Science.gov (United States)

    Wutich, A.

    2011-12-01

Social sustainability is an important, but often neglected, aspect of determining the success of small-scale water systems. This paper reviews ethnographic approaches for understanding how indigenous knowledge enhances the social sustainability of small-scale water systems, particularly those threatened by water scarcity. After reviewing the literature on common-pool and traditional resource management strategies, the paper focuses on the case of a community-managed small-scale water system in Cochabamba, Bolivia. This study uses ethnographic evidence to demonstrate how indigenous institutions can be used to manage a small-scale urban water system sustainably. Several factors were crucial to the institution's success. First, indigenous residents had previous experience with common management of rural irrigation systems which they were able to adapt for use in an urban environment. Second, institutional rules were designed to prioritize the conservation of the water source. Third, indigenous Andean social values of uniformity, regularity, and transparency ensured that community members perceived the system as legitimate and complied with community rules. Fourth, self-governance enabled community members to quickly adapt to changing environmental conditions, such as seasonal scarcity and groundwater overdraft. The paper concludes with a discussion of the promise and limitations of ethnographic approaches and indigenous knowledge for understanding social sustainability in small-scale water systems.

  9. Innovative Approaches to Development and Ground Testing of Advanced Bimodal Space Power and Propulsion Systems

    Energy Technology Data Exchange (ETDEWEB)

    Hill, Thomas Johnathan; Noble, Cheryl Ann; Noble, C.; Martinell, John Stephen; Borowski, S.

    2000-07-01

The last major development effort for nuclear power and propulsion systems ended in 1993. Currently, there is not an initiative at either the National Aeronautical and Space Administration (NASA) or the U.S. Department of Energy (DOE) that requires the development of new nuclear power and propulsion systems. Studies continue to show nuclear technology as a strong technical candidate to lead the way toward human exploration of adjacent planets or provide power for deep space missions, particularly a 15,000 lbf bimodal nuclear system with 115 kW power capability. The development of nuclear technology for space applications would require technology development in some areas and a major flight qualification program. The last major ground test facility considered for nuclear propulsion qualification was the U.S. Air Force/DOE Space Nuclear Thermal Propulsion Project. Seven years have passed since that effort, and the questions remain the same: how to qualify nuclear power and propulsion systems for future space flight. It can be reasonably assumed that much of the nuclear testing required to qualify a nuclear system for space application will be performed at DOE facilities as demonstrated by the Nuclear Rocket Engine Reactor Experiment (NERVA) and Space Nuclear Thermal Propulsion (SNTP) programs. The nuclear infrastructure to support testing in this country is aging and getting smaller, though facilities still exist to support many of the technology development needs. By renewing efforts, an innovative approach to qualifying these systems through the use of existing facilities either in the U.S. (DOE's Advanced Test Reactor, High Flux Irradiation Facility and the Contained Test Facility) or overseas should be possible.

  10. Innovation Approaches to Development and Ground Testing of Advanced Bimodal Space Power and Propulsion Systems

    Energy Technology Data Exchange (ETDEWEB)

    Hill, T.; Noble, C.; Martinell, J. (INEEL); Borowski, S. (NASA Glenn Research Center)

    2000-07-14

The last major development effort for nuclear power and propulsion systems ended in 1993. Currently, there is not an initiative at either the National Aeronautical and Space Administration (NASA) or the U.S. Department of Energy (DOE) that requires the development of new nuclear power and propulsion systems. Studies continue to show nuclear technology as a strong technical candidate to lead the way toward human exploration of adjacent planets or provide power for deep space missions, particularly a 15,000 lbf bimodal nuclear system with 115 kW power capability. The development of nuclear technology for space applications would require technology development in some areas and a major flight qualification program. The last major ground test facility considered for nuclear propulsion qualification was the U.S. Air Force/DOE Space Nuclear Thermal Propulsion Project. Seven years have passed since that effort, and the questions remain the same: how to qualify nuclear power and propulsion systems for future space flight. It can be reasonably assumed that much of the nuclear testing required to qualify a nuclear system for space application will be performed at DOE facilities as demonstrated by the Nuclear Rocket Engine Reactor Experiment (NERVA) and Space Nuclear Thermal Propulsion (SNTP) programs. The nuclear infrastructure to support testing in this country is aging and getting smaller, though facilities still exist to support many of the technology development needs. By renewing efforts, an innovative approach to qualifying these systems through the use of existing facilities either in the U.S. (DOE's Advanced Test Reactor, High Flux Irradiation Facility and the Contained Test Facility) or overseas should be possible.

  11. Saturated hydraulic conductivity and biofilms: A theoretical approach linking pore and pedon scale

    Science.gov (United States)

    Richter, M.; Moenickes, S.; Richter, O.; Schröder, T.

    2012-04-01

The fate of active substances in the soil environment is shaped by soil physical properties as well as microbial life. Microorganisms degrading those substances occur in soil pores either in suspension or as biofilms on grain surfaces. At the same scale, i.e. the pore scale, the soil physical properties (texture, density, porosity, and water content) have an impact on the transport behaviour of active substances. Macroscopic parameters describe these processes at the pedon scale; e.g. hydraulic conductivity summarizes the effect of the named pore scale parameters. Narsilio et al. [2009] derived a relationship between the saturated hydraulic conductivity and pore scale water velocity fields based on the Navier-Stokes equations for incompressible fluids. However, they did not analyse the influence of heterogeneity and microbial activity, whereas microorganisms, especially biofilms, do have an impact on hydraulic conductivity [Vandevivere and Baveye, 1992]. Biofilms alter the pore geometry while growing. This alteration directly influences the soil water flow field and hence the convective transport of active substances. Here, we present a way to couple the saturated hydraulic conductivity at the macro scale to biomass population dynamics and pore space. The hydraulic conductivity is analysed with regard to heterogeneous soils. The model combining fluid flow, reactive transport, and biofilm dynamics is applied to investigate the degradation and transport behaviour of pesticides in heterogeneous soils.

  12. Responses of Cloud Type Distributions to the Large-Scale Dynamical Circulation: Water Budget-Related Dynamical Phase Space and Dynamical Regimes

    Science.gov (United States)

    Wong, Sun; Del Genio, Anthony; Wang, Tao; Kahn, Brian; Fetzer, Eric J.; L'Ecuyer, Tristan S.

    2015-01-01

    Goals: Water budget-related dynamical phase space; Connect large-scale dynamical conditions to atmospheric water budget (including precipitation); Connect atmospheric water budget to cloud type distributions.

  13. Prediction of Secondary Dendrite Arm Spacing in Squeeze Casting Using Fuzzy Logic Based Approaches

    Directory of Open Access Journals (Sweden)

    Patel M.G.C.

    2015-03-01

Full Text Available The quality of squeeze castings is significantly affected by secondary dendrite arm spacing, which is influenced by the squeeze cast input parameters. The relationships of secondary dendrite arm spacing with the input parameters, namely time delay, pressure duration, squeeze pressure, and pouring and die temperatures, are complex in nature. The present research work focuses on the development of input-output relationships using a fuzzy logic approach, in which squeeze cast process variables are expressed as a function of the input parameters and secondary dendrite arm spacing is expressed as an output parameter. Two fuzzy logic based approaches have been developed for this problem: the first uses a manually constructed Mamdani-based fuzzy system, and the second automatically evolves a Takagi-Sugeno fuzzy system. The performance of the developed models is tested for both linear and non-linear membership functions. In addition, the developed models were compared on ten test cases different from the training data. The developed fuzzy systems eliminate the need for a large number of trials in the selection of the most influential squeeze cast process parameters, reducing the time and cost of trial experimentation. The results showed that all the developed models can be effectively used for making predictions. Further, the present research work will help foundrymen select squeeze casting parameters to obtain the desired casting quality without consuming much time and resources.
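    The zero-order Takagi-Sugeno scheme mentioned above can be sketched for a single input. Everything numeric here is hypothetical (a made-up "squeeze pressure" input and made-up rule outputs for secondary dendrite arm spacing in μm, not values from the paper), chosen only to show the mechanics: fuzzify the input with triangular membership functions, then take the firing-strength-weighted average of the rule constants.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def sugeno_predict(pressure, rules):
    """Zero-order Takagi-Sugeno inference with one input for illustration:
    each rule is ((a, b, c), constant_output); the prediction is the
    firing-strength-weighted average of the rule outputs."""
    num = den = 0.0
    for (a, b, c), out in rules:
        w = tri(pressure, a, b, c)  # firing strength of this rule
        num += w * out
        den += w
    return num / den if den else 0.0
```

    The full system would carry all five input parameters, with rule antecedents combined by a t-norm; in the paper's second approach, the membership parameters and rule outputs are evolved automatically rather than hand-tuned.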

  14. Extreme robustness of scaling in sample space reducing processes explains Zipf’s law in diffusion on directed networks

    Science.gov (United States)

    Corominas-Murtra, Bernat; Hanel, Rudolf; Thurner, Stefan

    2016-09-01

It has been shown recently that a specific class of path-dependent stochastic processes, which reduce their sample space as they unfold, lead to exact scaling laws in frequency and rank distributions. Such sample space reducing processes (SSRPs) offer an alternative new mechanism for understanding the emergence of scaling in countless processes. The corresponding power law exponents were shown to be related to noise levels in the process. Here we show that the emergence of scaling is not limited to the simplest SSRPs, but holds for a huge domain of stochastic processes that are characterised by non-uniform prior distributions. We demonstrate mathematically that in the absence of noise the scaling exponents converge to -1 (Zipf’s law) for almost all prior distributions. As a consequence it becomes possible to fully understand targeted diffusion on weighted directed networks and its associated scaling laws in node visit distributions. The presence of cycles can be properly interpreted as playing the same role as noise in SSRPs and, accordingly, determines the scaling exponents. The result that Zipf’s law emerges as a generic feature of diffusion on networks, regardless of its details, and that the exponent of visiting times is related to the number of cycles in a network could be relevant for a series of applications in traffic, transport, and supply chain management.
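    The simplest noiseless SSRP is easy to simulate and already exhibits the Zipf behaviour: start at state N, repeatedly jump to a state drawn uniformly from those strictly below the current one, and stop at state 1. The visit probability of state i works out to 1/i, i.e. a rank distribution with exponent -1. A minimal sketch (uniform prior, no noise, not the general process treated in the paper):

```python
import random

def ssrp_visits(n_states=10, n_runs=20000, seed=1):
    """Simulate a noiseless sample space reducing process n_runs times and
    return visit counts per state (index 0 unused).  Each run starts at the
    top state and jumps uniformly downward until state 1 is reached."""
    rng = random.Random(seed)
    counts = [0] * (n_states + 1)
    for _ in range(n_runs):
        state = n_states
        while state > 1:
            counts[state] += 1
            state = rng.randrange(1, state)  # uniform on {1, ..., state - 1}
        counts[1] += 1
    return counts
```

    States 1 and N are visited on every run by construction; intermediate state i is visited in roughly a fraction 1/i of runs, which is the Zipf law the abstract refers to.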

  15. Learning in Earth and space science: a review of conceptual change instructional approaches

    Science.gov (United States)

    Mills, Reece; Tomas, Louisa; Lewthwaite, Brian

    2016-03-01

    In response to calls for research into effective instruction in the Earth and space sciences, and to identify directions for future research, this systematic review of the literature explores research into instructional approaches designed to facilitate conceptual change. In total, 52 studies were identified and analyzed. Analysis focused on the general characteristics of the research, the conceptual change instructional approaches that were used, and the methods employed to evaluate the effectiveness of these approaches. The findings of this review support four assertions about the existing research: (1) astronomical phenomena have received greater attention than geological phenomena; (2) most studies have viewed conceptual change from a cognitive perspective only; (3) data about conceptual change were generated pre- and post-intervention only; and (4) the interventions reviewed presented limited opportunities to involve students in the construction and manipulation of multiple representations of the phenomenon being investigated. Based upon these assertions, the authors recommend that new research in the Earth and space science disciplines challenges traditional notions of conceptual change by exploring the role of affective variables on learning, focuses on the learning of geological phenomena through the construction of multiple representations, and employs qualitative data collection throughout the implementation of an instructional approach.

  16. A new scaling approach for the mesoscale simulation of magnetic domain structures using Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Radhakrishnan, B., E-mail: radhakrishnb@ornl.gov; Eisenbach, M.; Burress, T.A.

    2017-06-15

Highlights: • Developed new scaling technique for dipole–dipole interaction energy. • Developed new scaling technique for exchange interaction energy. • Used scaling laws to extend atomistic simulations to micrometer length scale. • Demonstrated transition from mono-domain to vortex magnetic structure. • Simulated domain wall width and transition length scale agree with experiments. - Abstract: A new scaling approach has been proposed for the spin exchange and the dipole–dipole interaction energy as a function of the system size. The computed scaling laws are used in atomistic Monte Carlo simulations of magnetic moment evolution to predict the transition from a single domain to a vortex structure as the system size increases. The width of a 180° domain wall extracted from the simulated structures is in close agreement with experimentally measured values for an Fe–Si alloy. The transition size from a single domain to a vortex structure is also in close agreement with theoretically predicted and experimentally measured values for Fe.
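    The Monte Carlo machinery underlying such simulations is the Metropolis update: propose a spin flip and accept it with probability min(1, exp(-ΔE/T)). The sketch below uses a 1-D Ising chain with a bare exchange coupling J only; the paper's model additionally involves vector magnetic moments and scaled dipole–dipole terms, which are omitted here.

```python
import math
import random

def metropolis_sweep(spins, J, T, rng):
    """One Metropolis sweep over a 1-D Ising chain with periodic boundaries.
    Flipping spin i changes the exchange energy by
    dE = 2 * J * s_i * (s_left + s_right); the flip is accepted with
    probability min(1, exp(-dE / T))."""
    n = len(spins)
    for i in range(n):
        dE = 2.0 * J * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i] = -spins[i]
    return spins
```

    At low temperature an aligned chain stays aligned (a mono-domain analogue); the scaling approach in the paper renormalizes J and the dipolar coupling so the same kernel can be applied on a coarser lattice representing a much larger physical system.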

  17. College students with Internet addiction decrease fewer Behavior Inhibition Scale and Behavior Approach Scale when getting online.

    Science.gov (United States)

    Ko, Chih-Hung; Wang, Peng-Wei; Liu, Tai-Ling; Yen, Cheng-Fang; Chen, Cheng-Sheng; Yen, Ju-Yu

    2015-09-01

The aim of the study was to compare reinforcement sensitivity between online and offline interaction. The effects of gender, Internet addiction, depression, and online gaming on the difference in reinforcement sensitivity between online and offline interaction were also evaluated. The subjects were 2,258 college students (1,066 men and 1,192 women). They completed the Behavior Inhibition Scale and Behavior Approach Scale (BIS/BAS) according to their experience online or offline. Internet addiction, depression, and Internet activity type were evaluated simultaneously. The results showed that reinforcement sensitivity was lower when interacting online than when interacting offline. College students with Internet addiction showed smaller decreases in BIS and BAS scores after getting online than others did. Higher reward and aversion sensitivity were associated with the risk of Internet addiction. Fun seeking online might contribute to the maintenance of Internet addiction. This suggests that reinforcement sensitivity changes after getting online and contributes to the risk and maintenance of Internet addiction. © 2014 Wiley Publishing Asia Pty Ltd.

  18. Mechanics of low-dimensional carbon nanostructures: Atomistic, continuum, and multi-scale approaches

    Science.gov (United States)

    Mahdavi, Arash

A new multiscale modeling technique called the Consistent Atomic-scale Finite Element (CAFE) method is introduced. Unlike traditional approaches for linking the atomic structure to its equivalent continuum, this method directly connects the atomic degrees of freedom to a reduced set of finite element degrees of freedom without passing through an intermediate homogenized continuum. As a result, there is no need to introduce stress and strain measures at the atomic level. The Tersoff-Brenner interatomic potential is used to calculate the consistent tangent stiffness matrix of the structure. In this finite element formulation, all local and non-local interactions between carbon atoms are taken into account using overlapping finite elements. In addition, a consistent hierarchical finite element modeling technique is developed for adaptively coarsening and refining the mesh over different parts of the model. This process is consistent with the underlying atomic structure and, by refining the mesh to the scale of atomic spacing, molecular dynamics results can be recovered. This method is valid across the scales and can be used to concurrently model atomistic and continuum phenomena, so, in contrast with most other multi-scale methods, there is no need to introduce artificial boundaries for coupling atomistic and continuum regions. The effect of the length scale of the nanostructure is also included in the model by building the hierarchy of elements from the bottom up, using a finite-size atom cluster as the building block. To be consistent with the Bravais multi-lattice structure of sp2-bonded carbon, two independent displacement fields are used for reducing the order of the model. The sparse structure of the stiffness matrix of these nanostructures is exploited to reduce the memory requirement and to speed up the formation of the system matrices and the solution of the equilibrium equations. Applicability of the method is shown with several examples of the nonlinear mechanics of carbon

  19. Scale-dependence of processes structuring dung beetle metacommunities using functional diversity and community deconstruction approaches.

    Science.gov (United States)

    Silva, Pedro Giovâni da; Hernández, Malva Isabel Medina

    2015-01-01

Community structure is driven by mechanisms linked to environmental, spatial and temporal processes, which have been successfully addressed using the metacommunity framework. The relative importance of processes shaping community structure can be identified using several different approaches. Two approaches that are increasingly being used are functional diversity and community deconstruction. Functional diversity is measured using various indices that incorporate distinct community attributes. Community deconstruction is a way to disentangle species responses to ecological processes by grouping species with similar traits. We used these two approaches to determine whether they are improvements over traditional measures (e.g., species composition, abundance, biomass) for identification of the main processes driving dung beetle (Scarabaeinae) community structure in a fragmented mainland-island landscape in southern Brazilian Atlantic Forest. We sampled five sites in each of four large forest areas, two on the mainland and two on the island. Sampling was performed in 2012 and 2013. We collected abundance and biomass data from 100 sampling points distributed over 20 sampling sites. We studied environmental, spatial and temporal effects on dung beetle community across three spatial scales, i.e., between sites, between areas and mainland-island. The γ-diversity based on species abundance was mainly attributed to β-diversity as a consequence of the increase in mean α- and β-diversity between areas. Variation partitioning on abundance, biomass and functional diversity showed scale-dependence of processes structuring dung beetle metacommunities. We identified two major groups of responses among 17 functional groups. In general, environmental filters were important at both local and regional scales. Spatial factors were important at the intermediate scale. Our study supports the notion of scale-dependence of environmental, spatial and temporal processes in the distribution

  20. Scale-dependence of processes structuring dung beetle metacommunities using functional diversity and community deconstruction approaches.

    Directory of Open Access Journals (Sweden)

    Pedro Giovâni da Silva

Full Text Available Community structure is driven by mechanisms linked to environmental, spatial and temporal processes, which have been successfully addressed using the metacommunity framework. The relative importance of processes shaping community structure can be identified using several different approaches. Two approaches that are increasingly being used are functional diversity and community deconstruction. Functional diversity is measured using various indices that incorporate distinct community attributes. Community deconstruction is a way to disentangle species responses to ecological processes by grouping species with similar traits. We used these two approaches to determine whether they are improvements over traditional measures (e.g., species composition, abundance, biomass) for identification of the main processes driving dung beetle (Scarabaeinae) community structure in a fragmented mainland-island landscape in southern Brazilian Atlantic Forest. We sampled five sites in each of four large forest areas, two on the mainland and two on the island. Sampling was performed in 2012 and 2013. We collected abundance and biomass data from 100 sampling points distributed over 20 sampling sites. We studied environmental, spatial and temporal effects on dung beetle community across three spatial scales, i.e., between sites, between areas and mainland-island. The γ-diversity based on species abundance was mainly attributed to β-diversity as a consequence of the increase in mean α- and β-diversity between areas. Variation partitioning on abundance, biomass and functional diversity showed scale-dependence of processes structuring dung beetle metacommunities. We identified two major groups of responses among 17 functional groups. In general, environmental filters were important at both local and regional scales. Spatial factors were important at the intermediate scale. Our study supports the notion of scale-dependence of environmental, spatial and temporal processes in

  1. Large-Scale Bi-Level Strain Design Approaches and Mixed-Integer Programming Solution Techniques

    Science.gov (United States)

    Kim, Joonhoon; Reed, Jennifer L.; Maravelias, Christos T.

    2011-01-01

    The use of computational models in metabolic engineering has been increasing as more genome-scale metabolic models and computational approaches become available. Various computational approaches have been developed to predict how genetic perturbations affect metabolic behavior at a systems level, and have been successfully used to engineer microbial strains with improved primary or secondary metabolite production. However, identification of metabolic engineering strategies involving a large number of perturbations is currently limited by computational resources due to the size of genome-scale models and the combinatorial nature of the problem. In this study, we present (i) two new bi-level strain design approaches using mixed-integer programming (MIP), and (ii) general solution techniques that improve the performance of MIP-based bi-level approaches. The first approach (SimOptStrain) simultaneously considers gene deletion and non-native reaction addition, while the second approach (BiMOMA) uses minimization of metabolic adjustment to predict knockout behavior in a MIP-based bi-level problem for the first time. Our general MIP solution techniques significantly reduced the CPU times needed to find optimal strategies when applied to an existing strain design approach (OptORF) (e.g., from ∼10 days to ∼5 minutes for metabolic engineering strategies with 4 gene deletions), and identified strategies for producing compounds where previous studies could not (e.g., malate and serine). Additionally, we found novel strategies using SimOptStrain with higher predicted production levels (for succinate and glycerol) than could have been found using an existing approach that considers network additions and deletions in sequential steps rather than simultaneously. Finally, using BiMOMA we found novel strategies involving large numbers of modifications (for pyruvate and glutamate), which sequential search and genetic algorithms were unable to find. The approaches and solution

  2. Large-scale bi-level strain design approaches and mixed-integer programming solution techniques.

    Science.gov (United States)

    Kim, Joonhoon; Reed, Jennifer L; Maravelias, Christos T

    2011-01-01

    The use of computational models in metabolic engineering has been increasing as more genome-scale metabolic models and computational approaches become available. Various computational approaches have been developed to predict how genetic perturbations affect metabolic behavior at a systems level, and have been successfully used to engineer microbial strains with improved primary or secondary metabolite production. However, identification of metabolic engineering strategies involving a large number of perturbations is currently limited by computational resources due to the size of genome-scale models and the combinatorial nature of the problem. In this study, we present (i) two new bi-level strain design approaches using mixed-integer programming (MIP), and (ii) general solution techniques that improve the performance of MIP-based bi-level approaches. The first approach (SimOptStrain) simultaneously considers gene deletion and non-native reaction addition, while the second approach (BiMOMA) uses minimization of metabolic adjustment to predict knockout behavior in a MIP-based bi-level problem for the first time. Our general MIP solution techniques significantly reduced the CPU times needed to find optimal strategies when applied to an existing strain design approach (OptORF) (e.g., from ∼10 days to ∼5 minutes for metabolic engineering strategies with 4 gene deletions), and identified strategies for producing compounds where previous studies could not (e.g., malate and serine). Additionally, we found novel strategies using SimOptStrain with higher predicted production levels (for succinate and glycerol) than could have been found using an existing approach that considers network additions and deletions in sequential steps rather than simultaneously. Finally, using BiMOMA we found novel strategies involving large numbers of modifications (for pyruvate and glutamate), which sequential search and genetic algorithms were unable to find. The approaches and solution

  3. Large-scale bi-level strain design approaches and mixed-integer programming solution techniques.

    Directory of Open Access Journals (Sweden)

    Joonhoon Kim

    Full Text Available The use of computational models in metabolic engineering has been increasing as more genome-scale metabolic models and computational approaches become available. Various computational approaches have been developed to predict how genetic perturbations affect metabolic behavior at a systems level, and have been successfully used to engineer microbial strains with improved primary or secondary metabolite production. However, identification of metabolic engineering strategies involving a large number of perturbations is currently limited by computational resources due to the size of genome-scale models and the combinatorial nature of the problem. In this study, we present (i) two new bi-level strain design approaches using mixed-integer programming (MIP), and (ii) general solution techniques that improve the performance of MIP-based bi-level approaches. The first approach (SimOptStrain) simultaneously considers gene deletion and non-native reaction addition, while the second approach (BiMOMA) uses minimization of metabolic adjustment to predict knockout behavior in a MIP-based bi-level problem for the first time. Our general MIP solution techniques significantly reduced the CPU times needed to find optimal strategies when applied to an existing strain design approach (OptORF) (e.g., from ∼10 days to ∼5 minutes for metabolic engineering strategies with 4 gene deletions), and identified strategies for producing compounds where previous studies could not (e.g., malate and serine). Additionally, we found novel strategies using SimOptStrain with higher predicted production levels (for succinate and glycerol) than could have been found using an existing approach that considers network additions and deletions in sequential steps rather than simultaneously. Finally, using BiMOMA we found novel strategies involving large numbers of modifications (for pyruvate and glutamate), which sequential search and genetic algorithms were unable to find. The approaches and

  4. Dynamical Effects of the Scale Invariance of the Empty Space: The Fall of Dark Matter?

    Science.gov (United States)

    Maeder, Andre

    2017-11-01

    The hypothesis of the scale invariance of the macroscopic empty space, which intervenes through the cosmological constant, has led to new cosmological models. They show an accelerated cosmic expansion after the initial stages and satisfy several major cosmological tests. No unknown particles are needed. Developing the weak-field approximation, we find that the here-derived equation of motion corresponding to Newton’s equation also contains a small outward acceleration term. Its order of magnitude is about √(ϱ_c/ϱ) × Newton’s gravity (ϱ being the mean density of the system and ϱ_c the usual critical density). The new term is thus particularly significant for very low density systems. A modified virial theorem is derived and applied to clusters of galaxies. For the Coma Cluster and Abell 2029, the dynamical masses are about a factor of 5-10 smaller than in the standard case. This tends to leave no room for dark matter in these clusters. Then, the two-body problem is studied and an equation corresponding to the Binet equation is obtained. It implies some secular variations of the orbital parameters. The results are applied to the rotation curve of the outer layers of the Milky Way. Starting backward from the present rotation curve, we calculate the past evolution of the Galactic rotation and find that, in the early stages, it was steep and Keplerian. Thus, the flat rotation curves of galaxies appear as an age effect, a result consistent with recent observations of distant galaxies by Genzel et al. and Lang et al. Finally, in an appendix we also study the long-standing problem of the increase with age of the vertical velocity dispersion in the Galaxy. The observed increase appears to result from the new small acceleration term in the equation of the harmonic oscillator describing stellar motions around the Galactic plane. Thus, we tend to conclude that neither dark energy nor dark matter seems to be needed in the proposed

  5. Thermoelectric Generators on Satellites—An Approach for Waste Heat Recovery in Space

    Directory of Open Access Journals (Sweden)

    Marian von Lukowicz

    2016-07-01

    Full Text Available Environmental radiation in space (from the Sun, etc.) and operational thermal loads result in heat flows inside the structure of satellites. Today these heat flows remain unused: they are collected, transported to a radiator and emitted to space to prevent the satellite from overheating, but they hold a huge potential to generate electrical power independently of solar panels. Thermoelectric generators are a promising approach for such applications because of their solid-state characteristics. As they do not have any moving parts, they do not cause any vibrations in the satellite. They are said to be maintenance-free and highly reliable. Due to the expected small heat flows, modern devices based on BiTe have to be considered, but these devices have no flight heritage. Furthermore, energy harvesting on space systems is a new approach for increasing efficiency and reliability. In this paper, different system studies and applications are discussed, based on experimental characterisation of the electrical behaviour and its dependence on thermal cycles and vibration.

  6. New approaches to image processing based failure analysis of nano-scale ULSI devices

    CERN Document Server

    Zalevsky, Zeev; Gur, Eran

    2013-01-01

    New Approaches to Image Processing Based Failure Analysis of Nano-Scale ULSI Devices introduces the reader to transmission and scanning microscope image processing for metal and non-metallic microstructures. Engineers and scientists face a pressing problem in ULSI development and quality assurance: microscopy methods can't keep pace with the continuous shrinking of feature size in microelectronics. Nanometer scale sizes are below the resolution of light, and imaging these features is nearly impossible even with electron microscopes, due to image noise. This book presents novel "smart"

  7. A new scaling approach for the mesoscale simulation of magnetic domain structures using Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Radhakrishnan, B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Eisenbach, M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Burress, Timothy A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-01-24

    A new scaling approach has been proposed for the spin exchange and the dipole–dipole interaction energy as a function of the system size. The computed scaling laws are used in atomistic Monte Carlo simulations of magnetic moment evolution to predict the transition from a single domain to a vortex structure as the system size increases. The width of a 180° domain wall extracted from the simulated structures is in close agreement with experimentally measured values for an Fe–Si alloy. In conclusion, the transition size from a single domain to a vortex structure is also in close agreement with theoretically predicted and experimentally measured values for Fe.

  8. Modeling Impact-induced Failure of Polysilicon MEMS: A Multi-scale Approach

    Directory of Open Access Journals (Sweden)

    Stefano Mariani

    2009-01-01

    Full Text Available Failure of packaged polysilicon micro-electro-mechanical systems (MEMS) subjected to impacts involves phenomena occurring at several length-scales. In this paper we present a multi-scale finite element approach to properly allow for: (i) the propagation of stress waves inside the package; (ii) the dynamics of the whole MEMS; (iii) the spreading of micro-cracking in the failing part(s) of the sensor. Through Monte Carlo simulations, some effects of the polysilicon micro-structure on the failure mode are elucidated.

  9. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
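The core VISSA loop described above — sample sub-models via weighted binary matrix sampling, keep the best, update inclusion weights, shrink the variable space — can be sketched as follows. The toy data, sub-model counts, and hold-out evaluation are illustrative stand-ins, not the authors' NIR calibration setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for a calibration problem: 2 informative
# variables out of 20, plus noise.
n, p = 60, 20
X = rng.normal(size=(n, p))
y = X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=n)

def holdout_rmse(cols):
    """RMSE of an ordinary least-squares sub-model on a fixed hold-out half."""
    half = n // 2
    beta, *_ = np.linalg.lstsq(X[:half][:, cols], y[:half], rcond=None)
    resid = y[half:] - X[half:][:, cols] @ beta
    return float(np.sqrt(np.mean(resid ** 2)))

w = np.full(p, 0.5)                    # inclusion weight of each variable
for _ in range(15):
    # Weighted binary matrix sampling: each row defines one random sub-model.
    B = rng.random((200, p)) < w
    scores = np.array([holdout_rmse(np.flatnonzero(b)) if b.any() else np.inf
                       for b in B])
    top = B[np.argsort(scores)[:20]]   # retain the best 10% of sub-models
    w = top.mean(axis=0)               # new weight = frequency among best models

selected = np.flatnonzero(w > 0.5)     # the shrunken variable space
```

Because weights are re-estimated from the best-performing sub-models only, each iteration's variable space tends to outperform the previous one, mirroring the second rule highlighted in the abstract.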

  10. Space-Hotel Early Bird - An Educational and Public Outreach Approach

    Science.gov (United States)

    Amekrane, R.; Holze, C.

    2002-01-01

    education and public outreach can be combined and how a cooperation among an association, the industry and academia can work successfully. Representatives of the DGLR and academia developed a method to spread space-related knowledge in a short time to a motivated working group. The project was a great success, both in involving other disciplines in space-related topics through interdisciplinary work and in terms of public and educational outreach. With more than 2.3 million contacts, the DGLR e.V. promoted space and the vision of living (in) space to the public. The task of the paper is mainly to describe the approach and the experience gained with regard to organization, lectures, financing and outreach, with respect to similar future international outreach activities, which are planned for the 54th International Astronautical Congress in Bremen, Germany. www.spacehotel.org

  11. A probabilistic approach to quantifying spatial patterns of flow regimes and network-scale connectivity

    Science.gov (United States)

    Garbin, Silvia; Alessi Celegon, Elisa; Fanton, Pietro; Botter, Gianluca

    2017-04-01

    The temporal variability of river flow regime is a key feature structuring and controlling fluvial ecological communities and ecosystem processes. In particular, streamflow variability induced by climate/landscape heterogeneities or other anthropogenic factors significantly affects the connectivity between streams, with notable implications for river fragmentation. Hydrologic connectivity is a fundamental property that guarantees species persistence and ecosystem integrity in riverine systems. In riverine landscapes, most ecological transitions are flow-dependent and the structure of flow regimes may affect ecological functions of endemic biota (i.e., fish spawning or grazing of invertebrate species). Therefore, minimum flow thresholds must be guaranteed to support specific ecosystem services, like fish migration, aquatic biodiversity and habitat suitability. In this contribution, we present a probabilistic approach aiming at a spatially-explicit, quantitative assessment of hydrologic connectivity at the network-scale as derived from river flow variability. Dynamics of daily streamflows are estimated based on catchment-scale climatic and morphological features, integrating a stochastic, physically based approach that accounts for the stochasticity of rainfall with a water balance model and a geomorphic recession flow model. The non-exceedance probability of ecologically meaningful flow thresholds is used to evaluate the fragmentation of individual stream reaches, and the ensuing network-scale connectivity metrics. A multi-dimensional Poisson Process for the stochastic generation of rainfall is used to evaluate the impact of climate signature on reach-scale and catchment-scale connectivity. The analysis shows that streamflow patterns and network-scale connectivity are influenced by the topology of the river network and the spatial variability of climatic properties (rainfall, evapotranspiration). The framework offers a robust basis for the prediction of the impact of
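As a toy illustration of the threshold-based fragmentation metric (not the authors' calibrated model chain), daily flows can be generated from Poisson rainfall routed through a linear reservoir, and the non-exceedance probability of a low-flow threshold read off the simulated series. All parameter values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stochastic daily rainfall: Poisson arrivals (frequency lam) with exponential depths.
ndays, lam, alpha = 10_000, 0.3, 10.0      # hypothetical climate parameters (mm/day)
rain = np.where(rng.random(ndays) < lam,
                rng.exponential(alpha, ndays), 0.0)

# Minimal water balance: a linear reservoir releasing Q = k * storage each day.
k, storage = 0.3, 0.0
q = np.empty(ndays)
for t in range(ndays):
    storage += rain[t]
    q[t] = k * storage
    storage -= q[t]

# Non-exceedance probability of an ecologically meaningful low-flow threshold:
# the reach counts as "fragmented" whenever flow drops below it.
q_min = 1.0                                # hypothetical threshold (mm/day)
p_fragmented = float(np.mean(q < q_min))
```

In the full framework this probability would be computed per reach and then aggregated over the river-network topology to obtain network-scale connectivity metrics.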

  12. Information Extraction from Large-scale WSNs: Approaches and Research Issues Part II: Query-Based and Macroprogramming Approaches

    Directory of Open Access Journals (Sweden)

    Tessa DANIEL

    2008-07-01

    Full Text Available Regardless of the application domain and deployment scope, the ability to retrieve information is critical to the successful functioning of any wireless sensor network (WSN) system. In general, information extraction procedures can be categorized into three main approaches: agent-based, query-based and macroprogramming-led. Whilst query-based systems are the most popular, macroprogramming techniques provide a more general-purpose approach to distributed computation. Finally, the agent-based approaches tailor the information extraction mechanism to the type of information needed and the configuration of the network from which it needs to be extracted. This suite of three papers (Parts I-III) offers an extensive survey of the literature in the area of WSN information extraction, covering in Part I and Part II the three main approaches above. Part III highlights the open research questions and issues faced by deployable WSN system designers and discusses the potential benefits of both in-network processing and complex querying for large-scale wireless informational systems.

  13. An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling

    Directory of Open Access Journals (Sweden)

    Theodore W. Manikas

    2011-02-01

    Full Text Available Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. We accomplish the incorporation of disaster tolerance within a system by simulating various threats to the system operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to the large computational complexity requirements. To address this limitation, an axiomatic approach that decomposes a large-scale system into smaller subsystems is developed that allows the subsystems to be independently modeled. This approach is implemented using a data communications network system example. The results indicate that the decomposition approach produces simulation responses that are similar to the full system approach, but with greatly reduced simulation time.

  14. Multidate remote sensing approaches for digital zoning of terroirs at regional scales: case studies revisited and perspectives

    Science.gov (United States)

    Vaudour, Emmanuelle; Carey, Victoria A.; Gilliot, Jean-Marc

    2014-05-01

    Geospatial technologies are proving more and more useful for characterizing terroirs, and not only at the within-field scale: among the innovative technologies revolutionizing approaches for digitally zoning viticultural areas, be they managed by individual or cooperative grape growers or even unions of grape growers, multispectral satellite remote sensing data have been used for 15 years already at either the regional or whole-vineyard scale, progressing from single-date studies to multi-temporal processing. Regional remotely sensed approaches for terroir mapping mostly use multispectral satellite images in conjunction with a set of ancillary morphometric and/or geomorphological and/or legacy soil data and time series data on grape/wine quality and climate. Two prominent case studies of regional terroir mapping using SPOT satellite images with medium spatial resolution (20 m) were carried out in the Southern Rhone Valley (Côtes-du-Rhône controlled Appellation of Origin) in Southern France and in the Stellenbosch-Paarl region (including 5 Wine of Origin wards: Simonsberg-Stellenbosch, Simonsberg-Paarl, Jonkershoek Valley, Banghoek and Papegaaiberg, and portions of two further wards, namely Franschoek and Devon Valley) in the South Western Cape of South Africa. In addition to emphasizing their usefulness for operational land management, our objective was to develop, compare and discuss both approaches in terms of formalization, spatial data handling and processing, sampling design, validation procedures and/or availability of uncertainty information. Both approaches essentially relied on supervised image classifiers based on the selection of reference training areas. For the Southern Rhone Valley, viticultural terroirs were validated using an external sample of 91 vineyards planted with Grenache Noir and Syrah for which grape composition was available over a 17-year period: the validation procedure highlighted a strong vintage effect for each specific terroir. The

  15. A New Approach to Space Situational Awareness using Small Ground-Based Telescopes

    Energy Technology Data Exchange (ETDEWEB)

    Anheier, Norman C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Chen, Cliff S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-12-01

    This report discusses a new SSA approach evaluated by Pacific Northwest National Laboratory (PNNL) that may lead to highly scalable, small telescope observing stations designed to help manage the growing space surveillance burden. Using the methods and observing tools described in this report, the team was able to acquire and track very faint satellites (near Pluto’s apparent brightness). Photometric data was collected and used to correlate object orbital position as a function of atomic clock-derived time. Object apparent brightness was estimated by image analysis and nearby star calibration. The measurement performance was only limited by weather conditions, object brightness, and the sky glow at the observation site. In the future, these new SSA technologies and techniques may be utilized to protect satellite assets, detect and monitor orbiting debris fields, and support Outer Space Treaty monitoring and transparency.
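The nearby-star calibration step amounts to Pogson's relation: the object's apparent magnitude follows from its measured flux ratio to a reference star of known magnitude. A minimal sketch (the counts and magnitudes below are made-up values, not PNNL data):

```python
import math

def apparent_magnitude(flux_obj, flux_ref, mag_ref):
    """Pogson's relation: calibrate an object's magnitude against a
    nearby reference star of known magnitude."""
    return mag_ref - 2.5 * math.log10(flux_obj / flux_ref)

# Hypothetical aperture-photometry counts from a single frame:
# the object is 400x fainter than a 9.5-mag reference star,
# i.e. about 6.5 magnitudes fainter (~16.0 mag, roughly Pluto-like).
m = apparent_magnitude(flux_obj=120.0, flux_ref=48_000.0, mag_ref=9.5)
```

Because the ratio is taken against a star on the same frame, first-order atmospheric extinction and sky-glow variations largely cancel, consistent with the report's claim that performance is limited mainly by weather, object brightness, and site sky glow.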

  16. Large-scale tidal effect on redshift-space power spectrum in a finite-volume survey

    Science.gov (United States)

    Akitsu, Kazuyuki; Takada, Masahiro; Li, Yin

    2017-04-01

    Long-wavelength matter inhomogeneities contain cleaner information on the nature of primordial perturbations as well as the physics of the early Universe. The large-scale coherent overdensity and tidal force, not directly observable for a finite-volume galaxy survey, are both related to the Hessian of large-scale gravitational potential and therefore are of equal importance. We show that the coherent tidal force causes a homogeneous anisotropic distortion of the observed distribution of galaxies in all three directions, perpendicular and parallel to the line-of-sight direction. This effect mimics the redshift-space distortion signal of galaxy peculiar velocities, as well as a distortion by the Alcock-Paczynski effect. We quantify its impact on the redshift-space power spectrum to the leading order, and discuss its importance for ongoing and upcoming galaxy surveys.
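The geometric part of the effect is easy to reproduce numerically: a small homogeneous stretch of an isotropic field along the line of sight induces a nonzero quadrupole in the angular distribution of separation vectors, just as redshift-space distortions do. A self-contained sketch, with an arbitrary stretch amplitude standing in for the super-survey tidal field:

```python
import numpy as np

rng = np.random.default_rng(1)

# Isotropic pair-separation directions (unit vectors).
v = rng.normal(size=(200_000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)

def mean_quadrupole(vec):
    """Mean Legendre P2(mu), with mu the cosine of the angle to the
    line of sight (taken here as the z-axis)."""
    mu2 = (vec[:, 2] / np.linalg.norm(vec, axis=1)) ** 2
    return float(np.mean(1.5 * mu2 - 0.5))

eps = 0.05                                   # hypothetical coherent tidal stretch
stretched = v * np.array([1.0, 1.0, 1.0 + eps])

quad_iso = mean_quadrupole(v)          # ~0 for an isotropic field
quad_tide = mean_quadrupole(stretched) # nonzero: mimics an RSD/AP-like signal
```

To leading order the induced quadrupole scales linearly with the stretch (here about 2ε/5 ≈ 0.02), illustrating why an unobserved large-scale tide can bias inferences from the redshift-space power spectrum.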

  17. A NEW FRAMEWORK FOR OBJECT-BASED IMAGE ANALYSIS BASED ON SEGMENTATION SCALE SPACE AND RANDOM FOREST CLASSIFIER

    Directory of Open Access Journals (Sweden)

    A. Hadavand

    2015-12-01

    Full Text Available In this paper a new object-based framework is developed for automated scale selection in image segmentation. The quality of image objects has an important impact on further analyses. Due to the strong dependency of segmentation results on the scale parameter, choosing the best value of this parameter for each class becomes a main challenge in object-based image analysis. We propose a new framework which employs a pixel-based land cover map to estimate the initial scale dedicated to each class. These scales are used to build a segmentation scale space (SSS), a hierarchy of image objects. Optimization of the SSS with respect to NDVI and DSM values in each super-object is used to get the best scale in local regions of the image scene. Optimized SSS segmentations are finally classified to produce the final land cover map. A very high resolution aerial image and digital surface model provided by the ISPRS 2D semantic labelling dataset are used in our experiments. The results of our proposed method are comparable to those of the ESP tool, a well-known method for estimating the segmentation scale, and marginally improved the overall accuracy of classification from 79% to 80%.

  18. Environmental Remediation Full-Scale Implementation: Back to Simple Microbial Massive Culture Approaches

    Directory of Open Access Journals (Sweden)

    Agung Syakti

    2010-10-01

    Full Text Available Bioaugmentation and biostimulation approaches for the bioremediation of contaminated soil were investigated and implemented at field scale. We combined these approaches by massively culturing petrophilic indigenous microorganisms from chronically contaminated soil enriched with mixed manure. With these methods, bioremediation performance showed promising results in removing petroleum hydrocarbons, compared with approaches based on metabolic by-products such as biosurfactants, specific enzymes and other extracellular products, which are considered difficult to apply and increase costs.

  19. A Bioequivalence Approach for Generic Narrow Therapeutic Index Drugs: Evaluation of the Reference-Scaled Approach and Variability Comparison Criterion.

    Science.gov (United States)

    Jiang, Wenlei; Makhlouf, Fairouz; Schuirmann, Donald J; Zhang, Xinyuan; Zheng, Nan; Conner, Dale; Yu, Lawrence X; Lionberger, Robert

    2015-07-01

    Various health communities have expressed concerns regarding whether average bioequivalence (BE) limits (80.00-125.00%) for the 90% confidence interval of the test-to-reference geometric mean ratio are sufficient to ensure therapeutic equivalence between a generic narrow therapeutic index (NTI) drug and its reference listed drug (RLD). Simulations were conducted to investigate the impact of different BE approaches for NTI drugs on study power, including (1) direct tightening of average BE limits and (2) a scaled average BE approach where BE limits are tightened based on the RLD's within-subject variability. Addition of a variability comparison (using a one-tailed F test) increased the difficulty for generic NTIs more variable than their corresponding RLDs to demonstrate bioequivalence. Based on these results, the authors evaluate the fully replicated, 2-sequence, 2-treatment, 4-period crossover study design for NTI drugs where the test product demonstrates BE based on a scaled average bioequivalence criterion and a within-subject variability comparison criterion.
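The two criteria can be sketched in a few lines. The scaling constant ln(1/0.9)/0.10 and the 80.00-125.00% cap below follow published FDA proposals for NTI drugs, but they are assumptions of this sketch; consult the current guidance before relying on them:

```python
import math

THETA = math.log(1 / 0.9) / 0.10      # assumed regulatory scaling constant (~1.054)

def scaled_be_limits(s_wr):
    """BE limits tightened by the reference product's within-subject SD
    (log scale), never wider than the conventional 80.00-125.00% band."""
    lo = math.exp(-THETA * s_wr)
    hi = math.exp(THETA * s_wr)
    return max(lo, 0.80), min(hi, 1.25)

def passes_scaled_abe(ci_lo, ci_hi, s_wr):
    """The 90% CI of the test/reference geometric mean ratio must fall inside
    the scaled limits (the variability-comparison F test is a separate criterion)."""
    lo, hi = scaled_be_limits(s_wr)
    return lo <= ci_lo and ci_hi <= hi

# Low-variability reference (s_wr = 0.05): limits narrow to roughly 94.9-105.4%,
# so a 90% CI of (0.97, 1.04) passes, while CIs acceptable under plain 80-125
# average BE could fail.
ok = passes_scaled_abe(0.97, 1.04, s_wr=0.05)
```

For a highly variable reference the scaled limits widen until the cap restores the conventional band, which is why the approach selectively tightens requirements only where the RLD itself is reproducible.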

  20. Modeling subjective evaluation of soundscape quality in urban open spaces: An artificial neural network approach.

    Science.gov (United States)

    Yu, Lei; Kang, Jian

    2009-09-01

    This research aims to explore the feasibility of using computer-based models to predict the soundscape quality evaluation of potential users in urban open spaces at the design stage. With the data from large scale field surveys in 19 urban open spaces across Europe and China, the importance of various physical, behavioral, social, demographical, and psychological factors for the soundscape evaluation has been statistically analyzed. Artificial neural network (ANN) models have then been explored at three levels. It has been shown that for both subjective sound level and acoustic comfort evaluation, a general model for all the case study sites is less feasible due to the complex physical and social environments in urban open spaces; models based on individual case study sites perform well but the application range is limited; and specific models for certain types of location/function would be reliable and practical. The performance of acoustic comfort models is considerably better than that of sound level models. Based on the ANN models, soundscape quality maps can be produced and this has been demonstrated with an example.

  1. A holistic approach for large-scale derived flood frequency analysis

    Science.gov (United States)

    Dung Nguyen, Viet; Apel, Heiko; Hundecha, Yeshewatesfa; Guse, Björn; Sergiy, Vorogushyn; Merz, Bruno

    2017-04-01

    Spatial consistency, which has usually been disregarded because of reported methodological difficulties, is increasingly demanded in regional flood hazard (and risk) assessments. This study aims at developing a holistic approach for consistently deriving flood frequencies at large scales. A large-scale two-component model has been established for simulating very long-term, multi-site synthetic meteorological fields and flood flows at many gauged and ungauged locations, hence reflecting the spatially inherent heterogeneity. The model has been applied to a region of nearly half a million km², including Germany and parts of neighbouring countries. The model performance has been examined multi-objectively with a focus on extremes. By this continuous simulation approach, flood quantiles for the studied region have been derived successfully and provide useful input for a comprehensive flood risk study.
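In a derived (continuous-simulation) flood frequency analysis, the T-year flood is read directly from the empirical distribution of very long simulated records of annual maxima, rather than fitted to short observed series. A stand-in sketch, with the weather-plus-hydrology model chain replaced by Gumbel draws (all parameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for the two-component model chain: a very long synthetic record
# of annual maximum flows, here drawn from a Gumbel distribution.
years = 10_000
annual_max = rng.gumbel(loc=100.0, scale=30.0, size=years)   # m3/s, hypothetical

def flood_quantile(amax, return_period):
    """Empirical T-year flood: the (1 - 1/T) quantile of annual maxima."""
    return float(np.quantile(amax, 1.0 - 1.0 / return_period))

q100 = flood_quantile(annual_max, 100)   # derived 100-year flood
q10 = flood_quantile(annual_max, 10)     # derived 10-year flood
```

With thousands of simulated years per site, quantiles up to large return periods come from the empirical distribution itself, which is what makes spatially consistent multi-site estimates feasible.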

  2. The Universal Patient Centredness Questionnaire: scaling approaches to reduce positive skew

    Directory of Open Access Journals (Sweden)

    Bjertnaes O

    2016-11-01

    Full Text Available Oyvind Bjertnaes, Hilde Hestad Iversen, Andrew M Garratt Unit for Patient-Reported Quality, Norwegian Institute of Public Health, Oslo, Norway Purpose: Surveys of patients’ experiences typically show results that are indicative of positive experiences. Unbalanced response scales have reduced positive skew for responses to items within the Universal Patient Centeredness Questionnaire (UPC-Q). The objective of this study was to compare the unbalanced response scale with another unbalanced approach to scaling to assess whether the positive skew might be further reduced. Patients and methods: The UPC-Q was included in a patient experience survey conducted at the ward level at six hospitals in Norway in 2015. The postal survey included two reminders to nonrespondents. For patients in the first month of inclusion, UPC-Q items had standard scaling: poor, fairly good, good, very good, and excellent. For patients in the second month, the scaling was more positive: poor, good, very good, exceptionally good, and excellent. The effect of scaling on UPC-Q scores was tested with independent samples t-tests and multilevel linear regression analysis, the latter controlling for the hierarchical structure of data and known predictors of patient-reported experiences. Results: The response rate was 54.6% (n=4,970). Significantly lower scores were found for all items of the more positively worded scale: UPC-Q total score difference was 7.9 (P<0.001), on a scale from 0 to 100 where 100 is the best possible score. Differences between the four items of the UPC-Q ranged from 7.1 (P<0.001) to 10.4 (P<0.001). Multivariate multilevel regression analysis confirmed the difference between the response groups, after controlling for other background variables; UPC-Q total score difference estimate was 8.3 (P<0.001). Conclusion: The more positively worded scaling significantly lowered the mean scores, potentially increasing the sensitivity of the UPC-Q to identify differences over

  3. Managing Deep Postanal Space Sepsis via an Intersphincteric Approach: Our Early Experience.

    Science.gov (United States)

    Tan, Ker-Kan; Koh, Dean C; Tsang, Charles B

    2013-04-01

    Managing deep postanal (DPA) sepsis often involves multiple procedures over a long time. An intersphincteric approach allows adequate drainage to be performed while tackling the primary pathology at the same sitting. The aim of our study was to evaluate this novel technique in managing DPA sepsis. A retrospective review of all patients who underwent this intersphincteric technique in managing DPA sepsis from February 2008 to October 2010 was performed. All surgeries were performed by the same surgeon. Seventeen patients with a median age of 43 years (range, 32 to 71 years), of whom 94.1% (n = 16) were male, formed the study group. In all patients, an internal opening in the posterior midline with a tract leading to the deep postanal space was identified. This intersphincteric approach was adopted as the primary procedure in 12 patients (70.6%) and was successful in 11 (91.7%). In the only failure, the sepsis recurred, and a successful advancement flap procedure was eventually performed. Five other patients (29.4%) underwent this same procedure as a secondary procedure after an initial drainage operation. Only one was successful. In the remaining four patients, one had a recurrent abscess that required drainage, while the other three patients had a tract between the internal opening and the intersphincteric incision. They subsequently underwent a drainage procedure with seton insertion and advancement flap procedures. Managing DPA space sepsis via an intersphincteric approach was successful in 70.6% of patients. This single-staged technique allows for effective drainage of the sepsis and removal of the primary pathology in the intersphincteric space.

  4. Solution of two nucleon systems using vector variables in momentum space - an innovative approach

    Science.gov (United States)

    Veerasamy, Saravanan

    An alternate formalism that uses vector variables to treat the two-body Lippmann-Schwinger equation for realistic nucleon-nucleon potentials in momentum space is discussed in this thesis. The formalism uses the symmetry properties of the nucleon-nucleon potential and expands the nucleon-nucleon potential in terms of six linearly independent spin operators. The alternate formalism discussed in this thesis brings to light the role of time-odd spin operators. The vector variable formalism's treatment of spin degrees of freedom heavily depends on the analytical computation of hundreds of algebraic expressions. A mathematical framework and computer algorithms for an automated symbolic reduction of algebraic expressions into scalar functions of vector variables are explained in this thesis. The vector variable formalism requires nucleon-nucleon potentials that are in operator form as input. The configuration space nucleon-nucleon potential Argonne V18 is one such potential that can be used for relativistic energies if it can be computed efficiently in momentum space. This thesis develops an efficient numerical technique using Chebyshev approximation to compute the Argonne V18 potential in momentum space. The tools discussed in this thesis, the algebraic system and the efficient computation of the Argonne V18 potential in momentum space, are tested by computing the binding energy and bound state wavefunctions of the deuteron using the vector variable approach. The results were successful, and the first step towards a higher goal of using the vector formalism of the three-body Faddeev equations for intermediate and high energies has been made.
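    The Chebyshev step can be sketched briefly. The toy example below (Python) samples an expensive radial function once at Chebyshev nodes and then evaluates the cheap polynomial approximant everywhere else. The Yukawa-like profile is a hypothetical stand-in for one operator component of a configuration-space potential, not the actual Argonne V18 functions.

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    # Hypothetical radial profile (Yukawa-like shape) standing in for one
    # operator component of a configuration-space potential; NOT Argonne V18.
    def v_r(r):
        return np.exp(-r) / r

    # Sample the expensive function once at Chebyshev nodes mapped onto [a, b].
    a, b = 0.5, 10.0
    n = 64
    k = np.arange(n)
    x_nodes = np.cos(np.pi * (k + 0.5) / n)            # Chebyshev nodes on [-1, 1]
    r_nodes = 0.5 * (b - a) * x_nodes + 0.5 * (b + a)  # mapped to [a, b]
    coeffs = C.chebfit(x_nodes, v_r(r_nodes), deg=n - 1)

    # Evaluate the cheap polynomial approximant anywhere on [a, b].
    r_test = np.linspace(a, b, 1000)
    x_test = (2.0 * r_test - (a + b)) / (b - a)        # inverse map to [-1, 1]
    max_err = np.max(np.abs(C.chebval(x_test, coeffs) - v_r(r_test)))
    ```

    Because the profile is smooth on the interval, the Chebyshev series converges geometrically, so a modest number of coefficients reproduces the function to high accuracy; subsequent momentum-space integrals then only touch the polynomial.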

  5. FEM × DEM: a new efficient multi-scale approach for geotechnical problems with strain localization

    Directory of Open Access Journals (Sweden)

    Nguyen Trung Kien

    2017-01-01

    Full Text Available The paper presents a multi-scale modeling of Boundary Value Problem (BVP) approach involving cohesive-frictional granular materials in the FEM × DEM multi-scale framework. On the DEM side, a 3D model is defined based on the interactions of spherical particles. This DEM model is built through a numerical homogenization process applied to a Volume Element (VE). It is then paired with a Finite Element code. Using this numerical tool that combines two scales within the same framework, we conducted simulations of biaxial and pressuremeter tests on a cohesive-frictional granular medium. In these cases, it is known that strain localization does occur at the macroscopic level, but since FEMs suffer from severe mesh dependency as soon as a shear band starts to develop, the second gradient regularization technique has been used. As a consequence, the objectivity of the computation with respect to mesh dependency is restored.

  6. Approaches to 30 Percent Energy Savings at the Community Scale in the Hot-Humid Climate

    Energy Technology Data Exchange (ETDEWEB)

    Thomas-Rees, S. [Building America Partnership for Improved Residential Construction (BA-PIRC), Cocoa, FL (United States); Beal, D. [Building America Partnership for Improved Residential Construction (BA-PIRC), Cocoa, FL (United States); Martin, E. [Building America Partnership for Improved Residential Construction (BA-PIRC), Cocoa, FL (United States)

    2013-03-01

    BA-PIRC has worked with several community-scale builders within the hot humid climate zone to improve performance of production, or community scale, housing. Tommy Williams Homes (Gainesville, FL), Lifestyle Homes (Melbourne, FL), and Habitat for Humanity (various locations, FL) have all been continuous partners of the Building America program and are the subjects of this report to document achievement of the Building America goal of 30% whole house energy savings packages adopted at the community scale. Key aspects of this research include determining how to evolve existing energy efficiency packages to produce replicable target savings, identifying what builders' technical assistance needs are for implementation and working with them to create sustainable quality assurance mechanisms, and documenting the commercial viability through neutral cost analysis and market acceptance. This report documents certain barriers builders overcame and the approaches they implemented in order to accomplish Building America (BA) Program goals that have not already been documented in previous reports.

  7. FEM × DEM: a new efficient multi-scale approach for geotechnical problems with strain localization

    Science.gov (United States)

    Nguyen, Trung Kien; Claramunt, Albert Argilaga; Caillerie, Denis; Combe, Gaël; Dal Pont, Stefano; Desrues, Jacques; Richefeu, Vincent

    2017-06-01

    The paper presents a multi-scale modeling of Boundary Value Problem (BVP) approach involving cohesive-frictional granular materials in the FEM × DEM multi-scale framework. On the DEM side, a 3D model is defined based on the interactions of spherical particles. This DEM model is built through a numerical homogenization process applied to a Volume Element (VE). It is then paired with a Finite Element code. Using this numerical tool that combines two scales within the same framework, we conducted simulations of biaxial and pressuremeter tests on a cohesive-frictional granular medium. In these cases, it is known that strain localization does occur at the macroscopic level, but since FEMs suffer from severe mesh dependency as soon as a shear band starts to develop, the second gradient regularization technique has been used. As a consequence, the objectivity of the computation with respect to mesh dependency is restored.

  8. Playing the Scales: Regional Transformations and the Differentiation of Rural Space in the Chilean Wine Industry

    Science.gov (United States)

    Overton, John; Murray, Warwick E.

    2011-01-01

    Globalization and industrial restructuring transform rural places in complex and often contradictory ways. These involve both quantitative changes, increasing the size and scope of operation to achieve economies of scale, and qualitative shifts, sometimes leading to a shift up the quality/price scale, towards finer spatial resolution and…

  9. English to Arabic Translation of the Composite Abuse Scale (CAS): A Multi-Method Approach

    OpenAIRE

    Samia Alhabib; Gene Feder; Jeremy Horwood

    2013-01-01

    BACKGROUND: The composite abuse scale (CAS) is a comprehensive tool used to measure intimate partner violence (IPV). The aim of the present study is to translate the CAS from English to Arabic. METHODS: The translation of the CAS was conducted in four stages using a multi-method approach: 1) preliminary forward translation, 2) discussion with a panel of bilingual experts, 3) focus groups discussion, and 4) back-translation of the CAS. The discussion included a linguistic validation by a compa...

  10. Time scales in the approach to equilibrium of macroscopic quantum systems.

    Science.gov (United States)

    Goldstein, Sheldon; Hara, Takashi; Tasaki, Hal

    2013-10-04

    We prove two theorems concerning the time evolution in general isolated quantum systems. The theorems are relevant to the issue of the time scale in the approach to equilibrium. The first theorem shows that there can be pathological situations in which the relaxation takes an extraordinarily long time, while the second theorem shows that one can always choose an equilibrium subspace, the relaxation to which requires only a short time for any initial state.

  11. Latent Integrated Stochastic Volatility, Realized Volatility, and Implied Volatility: A State Space Approach

    DEFF Research Database (Denmark)

    Bach, Christian; Christensen, Bent Jesper

    We include simultaneously both realized volatility measures based on high-frequency asset returns and implied volatilities backed out of individual traded at-the-money option prices in a state space approach to the analysis of true underlying volatility. We model integrated volatility as a latent...... process is downward biased. Implied volatility performs better than any of the alternative realized measures when forecasting future integrated volatility. The results are largely similar across the stock market (S&P 500), bond market (30-year U.S. T-bond), and foreign currency exchange market ($/£).

  12. Truncated Hilbert space approach to the 2d ϕ⁴ theory

    Energy Technology Data Exchange (ETDEWEB)

    Bajnok, Zoltan [MTA Lendület Holographic QFT Group, Wigner Research Centre for Physics,Konkoly Thege Miklós út 29-33, Budapest (Hungary); Lajer, Marton [Roland Eötvös University,Pázmány Péter sétány 1/A, Budapest (Hungary)

    2016-10-11

    We apply the massive analogue of the truncated conformal space approach to study the two dimensional ϕ⁴ theory in finite volume. We focus on the broken phase and determine the finite size spectrum of the model numerically. We interpret the results in terms of the Bethe-Yang spectrum, from which we extract the infinite volume masses and scattering matrices for various couplings. We compare these results against semiclassical analysis and perturbation theory. We also analyze the critical point of the model and confirm that it is in the Ising universality class.

  13. Modern Gemini-Approach to Technology Development for Human Space Exploration

    Science.gov (United States)

    White, Harold

    2010-01-01

    In NASA's plan to put men on the moon, there were three sequential programs: Mercury, Gemini, and Apollo. The Gemini program was used to develop and integrate the technologies that would be necessary for the Apollo program to successfully put men on the moon. We would like to present an analogous modern approach that leverages legacy ISS hardware designs, and integrates developing new technologies into a flexible architecture This new architecture is scalable, sustainable, and can be used to establish human exploration infrastructure beyond low earth orbit and into deep space.

  14. Scaling of stomatal size and density optimizes allocation of leaf epidermal space for gas exchange in angiosperms

    Science.gov (United States)

    de Boer, Hugo Jan; Price, Charles A.; Wagner-Cremer, Friederike; Dekker, Stefan C.; Franks, Peter J.; Veneklaas, Erik J.

    2015-04-01

    Stomata on plant leaves are key traits in the regulation of terrestrial fluxes of water and carbon. The basic morphology of stomata consists of a diffusion pore and two guard cells that regulate the exchange of CO2 and water vapour between the leaf interior and the atmosphere. This morphology is common to nearly all land plants, yet stomatal size (defined as the area of the guard cell pair) and stomatal density (the number of stomata per unit area) range over three orders of magnitude across species. Evolution of stomatal sizes and densities is driven by selection pressure on the anatomical maximum stomatal conductance (gsmax), which determines the operational range of leaf gas exchange. Despite the importance of stomata traits for regulating leaf gas exchange, a quantitative understanding of the relation between adaptation of gsmax and the underlying co-evolution of stomatal sizes and densities is still lacking. Here we develop a theoretical framework for a scaling relationship between stomatal sizes and densities within the constraints set by the allocation of epidermal space and stomatal gas exchange. Our theory predicts an optimal scaling relationship that maximizes gsmax and minimizes epidermal space allocation to stomata. We test whether stomatal sizes and densities reflect this optimal scaling with a global compilation of stomatal trait data on 923 species reflecting most major clades. Our results show optimal scaling between stomatal sizes and densities across all species in the compiled data set. Our results also show optimal stomatal scaling across angiosperm species, but not across gymnosperm and fern species. We propose that the evolutionary flexibility of angiosperms to adjust stomatal sizes underlies their optimal allocation of leaf epidermal space to gas exchange.
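    The anatomical maximum stomatal conductance referred to here is conventionally computed from stomatal density and pore geometry. A widely used expression is the Franks–Beerling form (the abstract does not state its equation, so take the exact form given here as an assumption about the framework):

    ```latex
    g_{s\max} = \frac{D \, d \, a_{\max}}{v \left( l + \dfrac{\pi}{2}\sqrt{a_{\max}/\pi} \right)}
    ```

    where $D$ is the diffusivity of water vapour in air, $v$ the molar volume of air, $d$ the stomatal density, $a_{\max}$ the maximum pore area, and $l$ the pore depth. The size–density trade-off arises because increasing $d$ raises $g_{s\max}$ while larger stomata increase both $a_{\max}$ and the depth term in the denominator.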

  15. Implementation of an Open-Scenario, Long-Term Space Debris Simulation Approach

    Science.gov (United States)

    Nelson, Bron; Yang Yang, Fan; Carlino, Roberto; Dono Perez, Andres; Faber, Nicolas; Henze, Chris; Karacalioglu, Arif Goktug; O'Toole, Conor; Swenson, Jason; Stupl, Jan

    2015-01-01

    This paper provides a status update on the implementation of a flexible, long-term space debris simulation approach. The motivation is to build a tool that can assess the long-term impact of various options for debris-remediation, including the LightForce space debris collision avoidance concept that diverts objects using photon pressure [9]. State-of-the-art simulation approaches that assess the long-term development of the debris environment use either completely statistical approaches, or they rely on large time steps on the order of several days if they simulate the positions of single objects over time. They cannot be easily adapted to investigate the impact of specific collision avoidance schemes or de-orbit schemes, because the efficiency of a collision avoidance maneuver can depend on various input parameters, including ground station positions and orbital and physical parameters of the objects involved in close encounters (conjunctions). Furthermore, maneuvers take place on timescales much smaller than days. For example, LightForce only changes the orbit of a certain object (aiming to reduce the probability of collision), but it does not remove entire objects or groups of objects. In the same sense, it is also not straightforward to compare specific de-orbit methods in regard to potential collision risks during a de-orbit maneuver. To gain flexibility in assessing interactions with objects, we implement a simulation that includes every tracked space object in Low Earth Orbit (LEO) and propagates all objects with high precision and variable time-steps as small as one second. It allows the assessment of the (potential) impact of physical or orbital changes to any object. The final goal is to employ a Monte Carlo approach to assess the debris evolution during the simulation time-frame of 100 years and to compare a baseline scenario to debris remediation scenarios or other scenarios of interest. To populate the initial simulation, we use the entire space

  16. Researcher’s Academic Culture in the Educational Space of the University: Linguo-Axiological Approach

    Directory of Open Access Journals (Sweden)

    Olena Semenog

    2017-06-01

    Full Text Available The article is devoted to the nature of the concepts “classic University”, “cultural and educational space of the University”, “research activity of the future professional”, and “researcher’s academic culture”, and to an approach that treats academic culture as the basis of research culture in a university. The concept of academic culture is complex: it covers the culture of the university as a whole (its values, traditions, norms, and rules of scientific research), the culture of scientific language, the culture of spirituality and morality, the culture of communication between science tutors and students, and the culture of the mentor's unique pedagogical action together with his social and moral responsibility for the results of study. The formation of academic culture, and of one's own research style, is best developed from the standpoints of the personal-activity, competence, axiological, cultural, and acmeological approaches.

  17. The management approach to the NASA space station definition studies at the Manned Spacecraft Center

    Science.gov (United States)

    Heberlig, J. C.

    1972-01-01

    The overall management approach to the NASA Phase B definition studies for space stations, which were initiated in September 1969 and completed in July 1972, is reviewed with particular emphasis placed on the management approach used by the Manned Spacecraft Center. The internal working organizations of the Manned Spacecraft Center and its prime contractor, North American Rockwell, are delineated along with the interfacing techniques used for the joint Government and industry study. Working interfaces with other NASA centers, industry, and Government agencies are briefly highlighted. The controlling documentation for the study (such as guidelines and constraints, bibliography, and key personnel) is reviewed. The historical background and content of the experiment program prepared for use in this Phase B study are outlined and management concepts that may be considered for future programs are proposed.

  18. A Semantic Approach for the Modeling of Trajectories in Space and Time

    Science.gov (United States)

    Zheni, Donia; Frihida, Ali; Ghezala, Henda Ben; Claramunt, Christophe

    The modeling and analysis of trajectories in space and time have been long a domain of social science studies since early developments of Time Geography. Early works have been mainly conceptual, but things are changing with recent advances in telecommunications and ubiquitous computing that allow representation of moving points and trajectories within spatial database systems. These have generated a large amount of research in formal and qualitative modeling of moving points, providing many opportunities to enrich emerging geometrical-based data structures with semantic approaches. This is the objective of the research presented in this paper that introduces a semantic-based model and manipulation language of trajectories. It is based on an algebraic Spatio-Temporal Trajectory data type (STT) endowed with a set of operations designed as a way to cover the syntax and semantics of a trajectory. The approach is formally presented and illustrated by a case study.
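    The idea of an algebraic trajectory data type with operations over it can be illustrated with a minimal sketch. The names and operations below are illustrative only, not the paper's actual STT signature.

    ```python
    from dataclasses import dataclass
    from typing import List, Tuple

    # Minimal sketch of a spatio-temporal trajectory abstract data type:
    # a sequence of (x, y, t) fixes with t strictly increasing, plus a
    # couple of example operations over it.
    @dataclass
    class Trajectory:
        points: List[Tuple[float, float, float]]  # (x, y, t)

        def duration(self) -> float:
            # lifespan of the trajectory in time units
            return self.points[-1][2] - self.points[0][2]

        def position_at(self, t: float) -> Tuple[float, float]:
            # linear interpolation between the two surrounding fixes
            for (x1, y1, t1), (x2, y2, t2) in zip(self.points, self.points[1:]):
                if t1 <= t <= t2:
                    w = (t - t1) / (t2 - t1)
                    return (x1 + w * (x2 - x1), y1 + w * (y2 - y1))
            raise ValueError("t outside trajectory lifespan")

    traj = Trajectory([(0.0, 0.0, 0.0), (10.0, 0.0, 10.0), (10.0, 5.0, 20.0)])
    ```

    In the paper's setting such a type would be embedded in a spatial database and queried through a manipulation language; here `duration` and `position_at` simply stand in for that set of operations.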

  19. An integrative approach to space-flight physiology using systems analysis and mathematical simulation

    Science.gov (United States)

    Leonard, J. I.; White, R. J.; Rummel, J. A.

    1980-01-01

    An approach was developed to aid in the integration of many of the biomedical findings of space flight, using systems analysis. The mathematical tools used in accomplishing this task include an automated data base, a biostatistical and data analysis system, and a wide variety of mathematical simulation models of physiological systems. A keystone of this effort was the evaluation of physiological hypotheses using the simulation models and the prediction of the consequences of these hypotheses on many physiological quantities, some of which were not amenable to direct measurement. This approach led to improvements in the model, refinements of the hypotheses, a tentative integrated hypothesis for adaptation to weightlessness, and specific recommendations for new flight experiments.

  20. Methodological approaches for research of local market of IT services in the information space of region

    Directory of Open Access Journals (Sweden)

    I. V. Antokhonova

    2017-01-01

    Full Text Available The article considers the local market of information-technology services as a part of the information space of the region, encompassing all institutional units and the region's IT infrastructure. This approach is motivated by the rather small contribution of this activity to the gross regional product under conditions of economic crisis, as companies cut costs and large infrastructure projects are frozen. The development of the information society is characterized by the promotion of mobile services, artificial intelligence, socialization, and cloud computing, and is accompanied by the involvement of households, state administration, and business in a common information space. Traditionally, however, market research on IT services and research on the information space have been conducted in isolation, as separate market-oriented and humanitarian studies. The purpose of this article is to develop a single methodology and a system of indicators for analyzing the local market of IT services within the information space of the region. The authors identify the components of the information space in which the local market, with its supply and demand, is configured. The development of the analysis methodology is preceded by a description of the structure of the region's information space as an open system that exchanges information resources with its external environment. Its basic elements are the social and economic system in which the local market of IT services functions, the ICT infrastructure, and information resources. The analysis draws on the system of statistics, whose sources may include official statistics, analytics of Internet resources, and data from sample surveys of demand in the local market. The research methodology is an algorithm for analyzing all components of the information space and the market of IT services; for each stage of the analysis, the authors justify the statistics used to solve a particular task. At the final

  1. A Structure-Based Distance Metric for High-Dimensional Space Exploration with Multi-Dimensional Scaling.

    Science.gov (United States)

    Lee, Jenny Hyunjung; McDonnell, Kevin T; Zelenyuk, Alla; Imre, Dan; Mueller, Klaus

    2013-07-11

    Although the Euclidean distance does well in measuring data distances within high-dimensional clusters, it does poorly when it comes to gauging inter-cluster distances. This significantly impacts the quality of global, low-dimensional space embedding procedures such as the popular multi-dimensional scaling (MDS) where one can often observe non-intuitive layouts. We were inspired by the perceptual processes evoked in the method of parallel coordinates which enables users to visually aggregate the data by the patterns the polylines exhibit across the dimension axes. We call the path of such a polyline its structure and suggest a metric that captures this structure directly in high-dimensional space. This allows us to better gauge the distances of spatially distant data constellations and so achieve data aggregations in MDS plots that are more cognizant of existing high-dimensional structure similarities. Our bi-scale framework distinguishes far-distances from near-distances. The coarser scale uses the structural similarity metric to separate data aggregates obtained by prior classification or clustering, while the finer scale employs the appropriate Euclidean distance.
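    The general idea of comparing points by the shape of their parallel-coordinates polyline rather than by raw Euclidean distance can be illustrated with a toy metric. This is a simplified stand-in for the paper's structural similarity metric, not the authors' exact formulation.

    ```python
    import numpy as np

    # Toy "structure" distance: z-normalize each point's value profile across
    # the dimension axes so only the shape of its polyline matters, then take
    # a scaled Euclidean distance between the normalized profiles.
    def structure_distance(p, q):
        p = (p - p.mean()) / (p.std() + 1e-12)
        q = (q - q.mean()) / (q.std() + 1e-12)
        return np.linalg.norm(p - q) / np.sqrt(len(p))

    a = np.array([1.0, 2.0, 3.0, 4.0])
    b = np.array([10.0, 20.0, 30.0, 40.0])  # same rising shape, far in Euclidean terms
    c = np.array([4.0, 3.0, 2.0, 1.0])      # opposite shape
    ```

    Under this metric `a` and `b` are nearly identical despite their large Euclidean separation, while `a` and `c` are far apart; feeding such distances to MDS groups points by shared structure rather than raw magnitude.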

  2. A Scale-up Approach for Film Coating Process Based on Surface Roughness as the Critical Quality Attribute.

    Science.gov (United States)

    Yoshino, Hiroyuki; Hara, Yuko; Dohi, Masafumi; Yamashita, Kazunari; Hakomori, Tadashi; Kimura, Shin-Ichiro; Iwao, Yasunori; Itai, Shigeru

    2018-01-05

    Scale-up approaches for film coating process have been established for each type of film coating equipment from thermodynamic and mechanical analyses for several decades. The objective of the present study was to establish a versatile scale-up approach for film coating process applicable to commercial production that is based on critical quality attribute (CQA) using the Quality by Design (QbD) approach and is independent of the equipment used. Experiments on a pilot scale using the Design of Experiment (DoE) approach were performed to find a suitable CQA from surface roughness, contact angle, color difference, and coating film properties by terahertz spectroscopy. Surface roughness was determined to be a suitable CQA from a quantitative appearance evaluation. When surface roughness was fixed as the CQA, the water content of the film-coated tablets was determined to be the critical material attribute (CMA), a parameter that does not depend on scale or equipment. Finally, to verify the scale-up approach determined from the pilot scale, experiments on a commercial scale were performed. The good correlation between the surface roughness (CQA) and the water content (CMA) identified at the pilot scale was also retained at the commercial scale, indicating that our proposed method should be useful as a scale-up approach for film coating process.

  3. Approaches to 30% Energy Savings at the Community Scale in the Hot-Humid Climate

    Energy Technology Data Exchange (ETDEWEB)

    Thomas-Rees, S.; Beal, D.; Martin, E.; Fonorow, K.

    2013-03-01

    BA-PIRC has worked with several community-scale builders within the hot humid climate zone to improve performance of production, or community scale, housing. Tommy Williams Homes (Gainesville, FL), Lifestyle Homes (Melbourne, FL), and Habitat for Humanity (various locations, FL) have all been continuous partners of the BA Program and are the subjects of this report to document achievement of the Building America goal of 30% whole house energy savings packages adopted at the community scale. The scope of this report is to demonstrate achievement of these goals though the documentation of production-scale homes built cost-effectively at the community scale, and modeled to reduce whole-house energy use by 30% in the Hot Humid climate region. Key aspects of this research include determining how to evolve existing energy efficiency packages to produce replicable target savings, identifying what builders' technical assistance needs are for implementation and working with them to create sustainable quality assurance mechanisms, and documenting the commercial viability through neutral cost analysis and market acceptance. This report documents certain barriers builders overcame and the approaches they implemented in order to accomplish Building America (BA) Program goals that have not already been documented in previous reports.

  4. Evaluating the Health Impact of Large-Scale Public Policy Changes: Classical and Novel Approaches.

    Science.gov (United States)

    Basu, Sanjay; Meghani, Ankita; Siddiqi, Arjumand

    2017-03-20

    Large-scale public policy changes are often recommended to improve public health. Despite varying widely, from tobacco taxes to poverty-relief programs, such policies present a common dilemma to public health researchers: how to evaluate their health effects when randomized controlled trials are not possible. Here, we review the state of knowledge and experience of public health researchers who rigorously evaluate the health consequences of large-scale public policy changes. We organize our discussion by detailing approaches to address three common challenges of conducting policy evaluations: distinguishing a policy effect from time trends in health outcomes or preexisting differences between policy-affected and -unaffected communities (using difference-in-differences approaches); constructing a comparison population when a policy affects a population for whom a well-matched comparator is not immediately available (using propensity score or synthetic control approaches); and addressing unobserved confounders by utilizing quasi-random variations in policy exposure (using regression discontinuity, instrumental variables, or near-far matching approaches).
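    The difference-in-differences logic mentioned in the abstract reduces to a simple contrast of before/after changes in treated versus untreated groups. The numbers below are made up for illustration.

    ```python
    # Minimal difference-in-differences (DiD) illustration with synthetic
    # group means: outcome in policy-affected ("treated") and unaffected
    # ("control") communities, before and after the policy change.
    treated_before, treated_after = 10.0, 16.0
    control_before, control_after = 9.0, 12.0

    # DiD subtracts the control group's change (the shared time trend)
    # from the treated group's change, removing both the pre-existing
    # level gap between groups and the common trend.
    did = (treated_after - treated_before) - (control_after - control_before)
    ```

    Here the treated group improved by 6.0 but the shared trend accounts for 3.0 of that, so the policy-attributable effect estimate is 3.0; regression implementations recover the same quantity as the coefficient on a treated-by-post interaction term.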

  5. [Optimize dropping process of Ginkgo biloba dropping pills by using design space approach].

    Science.gov (United States)

    Shen, Ji-Chen; Wang, Qing-Qing; Chen, An; Pan, Fang-Lai; Gong, Xing-Chu; Qu, Hai-Bin

    2017-07-01

    In this paper, a design space approach was applied to optimize the dropping process of Ginkgo biloba dropping pills. Firstly, potential critical process parameters and potential process critical quality attributes were determined through literature research and pre-experiments. Secondly, experiments were carried out according to Box-Behnken design. Then the critical process parameters and critical quality attributes were determined based on the experimental results. Thirdly, second-order polynomial models were used to describe the quantitative relationships between critical process parameters and critical quality attributes. Finally, a probability-based design space was calculated and verified. The verification results showed that efficient production of Ginkgo biloba dropping pills can be guaranteed by operating within the design space parameters. The recommended operation ranges for the critical dropping process parameters of Ginkgo biloba dropping pills were as follows: dropping distance of 5.5-6.7 cm, and dropping speed of 59-60 drops per minute, providing a reference for industrial production of Ginkgo biloba dropping pills. Copyright© by the Chinese Pharmaceutical Association.
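    The second-order polynomial models used to link critical process parameters to critical quality attributes can be sketched as an ordinary least-squares fit of a quadratic response surface. The data below are synthetic, not from the study, and the coded variables only loosely stand in for dropping distance and dropping speed.

    ```python
    import numpy as np

    # Fit a second-order response-surface model in two coded factors:
    # y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
    rng = np.random.default_rng(0)
    x1 = rng.uniform(-1, 1, 30)  # e.g. coded dropping distance (illustrative)
    x2 = rng.uniform(-1, 1, 30)  # e.g. coded dropping speed (illustrative)

    # Synthetic responses generated from a known quadratic surface.
    y = 2.0 + 0.5 * x1 - 0.3 * x2 + 0.8 * x1**2 + 0.1 * x1 * x2

    # Design matrix with all second-order terms, solved by least squares.
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    ```

    Once fitted, such a model can be evaluated over a grid of parameter settings, and the design space is then the region where the predicted quality attributes meet their acceptance criteria with the required probability.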

  6. Ab interno approach to the subconjunctival space using a collagen glaucoma stent.

    Science.gov (United States)

    Lewis, Richard A

    2014-08-01

    This review considers a minimally invasive ab interno approach to glaucoma filtration surgery. Glaucoma filtration surgery can be defined as an attempt to lower intraocular pressure (IOP) by the surgical formation of an artificial drainage pathway from the anterior chamber to the subconjunctival space. Subconjunctival drainage of aqueous fluid has been a cornerstone of glaucoma surgery for more than a century. Varying techniques have been deployed to provide access to this space. Yet, despite numerous innovations in filtering surgery to achieve safe IOP reduction, too many short-term and long-term complications are associated with this surgery. This article describes the development of a new, soft, and permanent ab interno collagen implant (XEN gel stent) to optimize aqueous drainage to the subconjunctival space. Specific characteristics are critical in designing such an implant. Determining the optimum size of the device lumen to avoid hypotony while maximizing long-term outflow is crucial. Other topics discussed include material, length, diameter, flexibility, stability, and biocompatibility of the implant. Preclinical and human eye testing shows that the implant does not seem to occlude inside the lumen and the implant material does not appear to cause tissue reaction in the eye. The ab interno placement of the stent offers an alternative for lowering IOP with a minimally invasive procedure, minimum conjunctival tissue disruption, restricted flow to avoid hypotony, and long-term safety. Dr. Lewis received financial support from Aquesys, Inc. as a consultant. Copyright © 2014 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  7. Design Space Approach in Optimization of Fluid Bed Granulation and Tablets Compression Process

    Directory of Open Access Journals (Sweden)

    Jelena Djuriš

    2012-01-01

    Full Text Available The aim of this study was to optimize fluid bed granulation and tablets compression processes using design space approach. Type of diluent, binder concentration, temperature during mixing, granulation and drying, spray rate, and atomization pressure were recognized as critical formulation and process parameters. They were varied in the first set of experiments in order to estimate their influences on critical quality attributes, that is, granules characteristics (size distribution, flowability, bulk density, tapped density, Carr's index, Hausner's ratio, and moisture content) using Plackett-Burman experimental design. Type of diluent and atomization pressure were selected as the most important parameters. In the second set of experiments, design space for process parameters (atomization pressure and compression force) and its influence on tablets characteristics was developed. Percent of paracetamol released and tablets hardness were determined as critical quality attributes. Artificial neural networks (ANNs) were applied in order to determine design space. ANN models showed that atomization pressure mainly influences the dissolution profile, whereas compression force affects mainly the tablets hardness. Based on the obtained ANN models, it is possible to predict tablet hardness and paracetamol release profile for any combination of analyzed factors.

  8. Design space approach in optimization of fluid bed granulation and tablets compression process.

    Science.gov (United States)

    Djuriš, Jelena; Medarević, Djordje; Krstić, Marko; Vasiljević, Ivana; Mašić, Ivana; Ibrić, Svetlana

    2012-01-01

    The aim of this study was to optimize fluid bed granulation and tablets compression processes using a design space approach. Type of diluent, binder concentration, temperature during mixing, granulation and drying, spray rate, and atomization pressure were recognized as critical formulation and process parameters. They were varied in the first set of experiments in order to estimate their influences on critical quality attributes, that is, granules characteristics (size distribution, flowability, bulk density, tapped density, Carr's index, Hausner's ratio, and moisture content) using Plackett-Burman experimental design. Type of diluent and atomization pressure were selected as the most important parameters. In the second set of experiments, a design space for process parameters (atomization pressure and compression force) and its influence on tablets characteristics was developed. Percent of paracetamol released and tablets hardness were determined as critical quality attributes. Artificial neural networks (ANNs) were applied in order to determine the design space. The ANN models showed that atomization pressure mostly influences the dissolution profile, whereas compression force mainly affects tablet hardness. Based on the obtained ANN models, it is possible to predict tablet hardness and the paracetamol release profile for any combination of the analyzed factors.
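
    The design-space idea in this record (scan the factor grid, keep the region where every critical quality attribute meets its specification) can be sketched as follows. The two quadratic response surfaces are hypothetical stand-ins for the study's fitted ANN models, and the coefficients and specification limits are invented for illustration.

```python
# Hypothetical response surfaces standing in for the fitted ANN models;
# coefficients and specification limits are invented for illustration.

def hardness(p_atom, f_comp):
    """Predicted tablet hardness (N) from atomization pressure (bar)
    and compression force (kN) - toy surrogate, not the paper's model."""
    return 40.0 + 0.9 * f_comp - 0.5 * p_atom

def released_30min(p_atom, f_comp):
    """Predicted % paracetamol released at 30 min - toy surrogate."""
    return 95.0 - 2.0 * p_atom - 0.1 * f_comp

def design_space(p_values, f_values, min_hardness=60.0, min_release=75.0):
    """Grid points (pressure, force) where every critical quality attribute
    meets its specification - the operating region called the design space."""
    return [(p, f) for p in p_values for f in f_values
            if hardness(p, f) >= min_hardness
            and released_30min(p, f) >= min_release]

space = design_space([1.0, 2.0, 3.0, 4.0], [20.0, 30.0, 40.0, 50.0])
```

    In practice the surrogate would be the trained ANN; the grid scan itself is unchanged.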

  9. Third International Scientific and Practical Conference «Space Travel is Approaching Reality» (Successful Event in Difficult Times)

    Directory of Open Access Journals (Sweden)

    Matusevych Tetiana

    2015-02-01

    Full Text Available The article analyzes the presentations of participants of the III International Scientific and Practical Conference «Space Travel – approaching reality», held on 6–7 November 2014 in Kharkiv, Ukraine.

  10. A robust ballistic design approach for the Space Shuttle Advanced Solid Rocket Motor

    Science.gov (United States)

    Eagar, M. A.; Jordan, F. W.; Stockham, L. W.

    1993-06-01

    A robust design approach has been developed for the Space Shuttle Advanced Solid Rocket Motor (ASRM) that enhances the potential for program success. This is accomplished by application of state of the art ballistic modelling techniques coupled with an aggressive contingency planning methodology. Application of this approach addresses the design challenges associated with development of the ASRM because of its large size, high length to diameter ratio, and demanding thrust-time trace shape requirements. Advanced ballistic modelling techniques applied include deformed grain modelling, spatial burn rate mapping, erosive burning response characterization, and ballistic/structural/CFD flow-grain interactions. Model fidelity is further improved by utilizing the extensive RSRM experience base for validation purposes. In addition to this modelling approach, development of contingency plans covers the remaining prediction uncertainties, and readily allows fine tuning of the propellant grain configuration to meet design objectives after the first motor firing. This approach promises to produce an optimum flight motor design that meets all performance objectives while accommodating program development uncertainties.

  11. Contaminant ingress into multizone buildings: An analytical state-space approach

    KAUST Repository

    Parker, Simon

    2013-08-13

    The ingress of exterior contaminants into buildings is often assessed by treating the building interior as a single well-mixed space. Multizone modelling provides an alternative way of representing buildings that can estimate concentration time series in different internal locations. A state-space approach is adopted to represent the concentration dynamics within multizone buildings. Analysis based on this approach is used to demonstrate that the exposure in every interior location is limited to the exterior exposure in the absence of removal mechanisms. Estimates are also developed for the short term maximum concentration and exposure in a multizone building in response to a step-change in concentration. These have considerable potential for practical use. The analytical development is demonstrated using a simple two-zone building with an inner zone and a range of existing multizone models of residential buildings. Quantitative measures are provided of the standard deviation of concentration and exposure within a range of residential multizone buildings. Ratios of the maximum short term concentrations and exposures to single zone building estimates are also provided for the same buildings. © 2013 Tsinghua University Press and Springer-Verlag Berlin Heidelberg.
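
    The state-space dynamics described above can be sketched with a minimal two-zone model: an outer zone exchanging air with the exterior and an inner zone exchanging only with the outer zone. The exchange rates below are hypothetical, not taken from the study.

```python
# Two-zone state-space sketch of contaminant ingress: zone 1 exchanges air
# with the exterior, an inner zone 2 exchanges only with zone 1.  The
# exchange rates (per hour) are hypothetical, not taken from the study.

def simulate(u=1.0, a=2.0, b=0.5, c=0.5, dt=0.001, t_end=10.0):
    """Forward-Euler integration of dx/dt = A x + B u for a step change
    in exterior concentration u; returns the two zone concentrations."""
    x1 = x2 = 0.0
    for _ in range(int(t_end / dt)):
        dx1 = a * (u - x1) + b * (x2 - x1)   # exterior + inter-zone exchange
        dx2 = c * (x1 - x2)                  # inner zone sees only zone 1
        x1 += dt * dx1
        x2 += dt * dx2
    return x1, x2

x1, x2 = simulate()
```

    Consistent with the analytical result summarized above, with no removal mechanisms the inner-zone concentration lags the outer zone but never exceeds the exterior step.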

  12. Comparing laser interferometry and atom interferometry approaches to space-based gravitational-wave measurement

    Science.gov (United States)

    Ira Thorpe, James; Jennrich, Oliver; McNamara, Paul; Baker, John G.

    2012-07-01

    The science enabled by a space-based low-frequency gravitational-wave instrument is a high-priority objective of the international astronomy community. Mission concepts based on laser interferometry, such as the Laser Interferometer Space Antenna (LISA), have been thoroughly studied and determined to be capable of delivering significant science returns. Ongoing developments in laboratory atom interferometry techniques have inspired new gravitational-wave mission concepts. We present a comparative analysis of LISA-like light interferometer systems and atom interferometer systems for gravitational-wave detection. Specific attention is paid to the sources of instrumental noise that are most important for light interferometer systems. We find that the response to laser frequency noise is identical in light interferometer and atom interferometer systems and that similar mitigation strategies (e.g. multiple-arm interferometers) must be employed to reach interesting gravitational wave sensitivities. Response to acceleration of the optical platforms is slightly different, allowing smaller spacecraft separations in the atom interferometry approach, but the acceleration noise requirements are similar. Based on this analysis, we find no clear advantage of the atom interferometry approach over traditional laser interferometry.

  13. Mentoring SFRM: A New Approach to International Space Station Flight Controller Training

    Science.gov (United States)

    Huning, Therese; Barshi, Immanuel; Schmidt, Lacey

    2008-01-01

    The Mission Operations Directorate (MOD) of the Johnson Space Center is responsible for providing continuous operations support for the International Space Station (ISS). Operations support requires flight controllers who are skilled in team performance as well as the technical operations of the ISS. Space Flight Resource Management (SFRM), a NASA adapted variant of Crew Resource Management (CRM), is the competency model used in the MOD. ISS flight controller certification has evolved to include a balanced focus on development of SFRM and technical expertise. The latest challenge the MOD faces is how to certify an ISS flight controller (operator) to a basic level of effectiveness in 1 year. SFRM training uses a two-pronged approach to expediting operator certification: 1) embed SFRM skills training into all operator technical training and 2) use senior flight controllers as mentors. This paper focuses on how the MOD uses senior flight controllers as mentors to train SFRM skills. Methods: A mentor works with an operator throughout the training flow. Inserted into the training flow are guided-discussion sessions and on-the-job observation opportunities focusing on specific SFRM skills, including: situational leadership, conflict management, stress management, cross-cultural awareness, self care and team care while on-console, communication, workload management, and situation awareness. The mentor and operator discuss the science and art behind the skills, cultural effects on skills applications, recognition of good and bad skills applications, recognition of how skills application changes subtly in different situations, and individual goals and techniques for improving skills. Discussion: This mentoring program provides an additional means of transferring SFRM knowledge compared to traditional CRM training programs. Our future endeavors in training SFRM skills (as well as other organizations') may benefit from adding team performance skills mentoring.

  14. Linking biogeomorphic feedbacks from ecosystem engineer to landscape scale: a panarchy approach

    Science.gov (United States)

    Eichel, Jana

    2017-04-01

    Scale is a fundamental concept in both ecology and geomorphology. Therefore, scale-based approaches are a valuable tool to bridge the disciplines and improve the understanding of feedbacks between geomorphic processes, landforms, material and organisms and ecological processes in biogeomorphology. Yet, linkages between biogeomorphic feedbacks on different scales, e.g. between ecosystem engineering and landscape scale patterns and dynamics, are not well understood. A panarchy approach sensu Holling et al. (2002) can help to close this research gap and explain how structure and function are created in biogeomorphic ecosystems. Based on results from previous biogeomorphic research in Turtmann glacier foreland (Switzerland; Eichel, 2017; Eichel et al. 2013, 2016), a panarchy concept is presented for lateral moraine slope biogeomorphic ecosystems. It depicts biogeomorphic feedbacks on different spatiotemporal scales as a set of nested adaptive cycles and links them by 'remember' and 'revolt' connections. On a small scale (cm² - m²; seconds to years), the life cycle of the ecosystem engineer Dryas octopetala L. is considered as an adaptive cycle. Biogeomorphic succession within patches created by geomorphic processes represents an intermediate scale adaptive cycle (m² - ha, years to decades), while geomorphic and ecologic pattern development at a landscape scale (ha - km², decades to centuries) can be illustrated by an adaptive cycle of 'biogeomorphic patch dynamics' (Eichel, 2017). In the panarchy, revolt connections link the smaller scale adaptive cycles to larger scale cycles: on lateral moraine slopes, the development of ecosystem engineer biomass and cover controls the engineering threshold of the biogeomorphic feedback window (Eichel et al., 2016) and therefore the onset of the biogeomorphic phase during biogeomorphic succession. In this phase, engineer patches and biogeomorphic structures can be created in the patch mosaic of the landscape. Remember connections

  15. Novel fine-scale aerial mapping approach quantifies grassland weed cover dynamics and response to management.

    Science.gov (United States)

    Malmstrom, Carolyn M; Butterfield, H Scott; Planck, Laura; Long, Christopher W; Eviner, Valerie T

    2017-01-01

    Invasive weeds threaten the biodiversity and forage productivity of grasslands worldwide. However, management of these weeds is constrained by the practical difficulty of detecting small-scale infestations across large landscapes and by limits in understanding of landscape-scale invasion dynamics, including mechanisms that enable patches to expand, contract, or remain stable. While high-end hyperspectral remote sensing systems can effectively map vegetation cover, these systems are currently too costly and limited in availability for most land managers. We demonstrate application of a more accessible and cost-effective remote sensing approach, based on simple aerial imagery, for quantifying weed cover dynamics over time. In California annual grasslands, the target communities of interest include invasive weedy grasses (Aegilops triuncialis and Elymus caput-medusae) and desirable forage grass species (primarily Avena spp. and Bromus spp.). Detecting invasion of annual grasses into an annual-dominated community is particularly challenging, but we were able to consistently characterize these two communities based on their phenological differences in peak growth and senescence using maximum likelihood supervised classification of imagery acquired twice per year (in mid- and end-of-season). This approach permitted us to map weed-dominated cover at a 1-m scale (correctly detecting 93% of weed patches across the landscape) and to evaluate weed cover change over time. We found that weed cover was more pervasive and persistent in management units that had no significant grazing for several years than in those that were grazed, whereas forage cover was more abundant and stable in the grazed units. This application demonstrates the power of this method for assessing fine-scale vegetation transitions across heterogeneous landscapes. It thus provides means for small-scale early detection of invasive species and for testing fundamental questions about landscape dynamics.
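
    The maximum-likelihood classification step can be illustrated with class-conditional Gaussians (diagonal covariance) over two-date greenness features: each pixel is assigned to the class under which it is most likely. The training values below are hypothetical; weedy grasses are assumed to stay green later in the season than forage grasses.

```python
import math

def fit_gaussian(samples):
    """Per-band mean and variance for one class (diagonal covariance)."""
    n = len(samples)
    dims = len(samples[0])
    means = [sum(s[d] for s in samples) / n for d in range(dims)]
    vars_ = [sum((s[d] - means[d]) ** 2 for s in samples) / n + 1e-6
             for d in range(dims)]
    return means, vars_

def log_likelihood(x, means, vars_):
    """Log of the Gaussian density at x, summed over independent bands."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, means, vars_))

def classify(x, models):
    """Maximum-likelihood assignment: pick the class with the highest density."""
    return max(models, key=lambda c: log_likelihood(x, *models[c]))

# Hypothetical two-date greenness (mid-season, end-of-season) training pixels
weed = [(0.6, 0.5), (0.65, 0.45), (0.55, 0.55)]
forage = [(0.7, 0.15), (0.75, 0.1), (0.65, 0.2)]
models = {"weed": fit_gaussian(weed), "forage": fit_gaussian(forage)}
```

    With equal priors this is exactly the maximum-likelihood rule used in supervised image classification; a real workflow would fit one model per mapped class from labelled polygons.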

  16. Task-space separation principle: a force-field approach to motion planning for redundant manipulators.

    Science.gov (United States)

    Tommasino, Paolo; Campolo, Domenico

    2017-02-03

    In this work, we address human-like motor planning in redundant manipulators. Specifically, we want to capture postural synergies such as Donders' law, experimentally observed in humans during kinematically redundant tasks, and infer a minimal set of parameters to implement similar postural synergies in a kinematic model. For the model itself, although the focus of this paper is to solve redundancy by implementing postural strategies derived from experimental data, we also want to ensure that such postural control strategies do not interfere with other possible forms of motion control (in the task-space), i.e. solving the posture/movement problem. The redundancy problem is framed as a constrained optimization problem, traditionally solved via the method of Lagrange multipliers. The posture/movement problem can be tackled via the separation principle which, derived from experimental evidence, posits that the brain processes static torques (i.e. posture-dependent, such as gravitational torques) separately from dynamic torques (i.e. velocity-dependent). The separation principle has traditionally been applied at a joint torque level. Our main contribution is to apply the separation principle to Lagrange multipliers, which act as task-space force fields, leading to a task-space separation principle. In this way, we can separate postural control (implementing Donders' law) from various types of task-space movement planners. As an example, the proposed framework is applied to the (redundant) task of pointing with the human wrist. Nonlinear inverse optimization (NIO) is used to fit the model parameters and to capture motor strategies displayed by six human subjects during pointing tasks. The novelty of our NIO approach is that (i) the fitted motor strategy, rather than raw data, is used to filter and down-sample human behaviours; (ii) our framework is used to efficiently simulate model behaviour iteratively, until it converges towards the experimental human strategies.
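
    The separation of task control from postural control can be sketched with a standard redundancy-resolution scheme: task correction through the Jacobian pseudoinverse, plus a posture term projected into the task null space. This is a common textbook stand-in for the Lagrange-multiplier formulation, not the authors' fitted model; the gains and rest posture are hypothetical.

```python
# Generic redundancy-resolution sketch: a 3-joint planar pointing task where
# the task variable is theta = q1 + q2 + q3, so the Jacobian is J = [1, 1, 1].
# Gains and the rest posture are hypothetical, not the paper's fitted values.

def step(q, theta_goal, q_rest, k_task=1.0, k_post=1.0, dt=0.01):
    """One Euler step: task correction through the pseudoinverse J+ = J^T/3,
    posture attraction projected into the task null space (I - J+ J)."""
    e = theta_goal - sum(q)                          # task-space error
    task = [k_task * e / 3.0] * 3                    # J+ * (k_task * e)
    grad = [k_post * (r - qi) for qi, r in zip(q, q_rest)]
    mean_g = sum(grad) / 3.0
    null = [g - mean_g for g in grad]                # subtracting the mean = null-space projection
    return [qi + dt * (t + n) for qi, t, n in zip(q, task, null)]

q, q_rest = [0.0, 0.0, 0.0], [0.2, 0.3, 0.1]
for _ in range(2000):
    q = step(q, theta_goal=0.9, q_rest=q_rest)
```

    The arm converges to the posture closest to the rest posture among all postures reaching the target, so the postural term never disturbs the task: the two controllers are separated.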

  17. Extending peripersonal space representation without tool-use: evidence from a combined behavioural-computational approach

    Directory of Open Access Journals (Sweden)

    Andrea eSerino

    2015-02-01

    Full Text Available Stimuli from different sensory modalities occurring on or close to the body are integrated in a multisensory representation of the space surrounding the body, i.e. peripersonal space (PPS). PPS dynamically modifies depending on experience, e.g. it extends after using a tool to reach far objects. However, the neural mechanism underlying PPS plasticity after tool use is largely unknown. Here we use a combined computational-behavioural approach to propose and test a possible mechanism accounting for PPS extension. We first present a neural network model simulating audio-tactile representation in the PPS around one hand. Simulation experiments showed that our model reproduced the main property of PPS neurons, i.e. selective multisensory response for stimuli occurring close to the hand. We used the neural network model to simulate the effects of a tool-use training. In terms of sensory inputs, tool use was conceptualized as a concurrent tactile stimulation from the hand, due to holding the tool, and an auditory stimulation from the far space, due to tool-mediated action. Results showed that after exposure to those inputs, PPS neurons responded also to multisensory stimuli far from the hand. The model thus suggests that synchronous pairing of tactile hand stimulation and auditory stimulation from the far space is sufficient to extend PPS, such as after tool-use. This prediction was confirmed by a behavioural experiment, where we used an audio-tactile interaction paradigm to measure the boundaries of PPS representation. We found that PPS extended after synchronous tactile-hand stimulation and auditory-far stimulation in a group of healthy volunteers. Control experiments both in simulation and behavioural settings showed that asynchronous tactile and auditory inputs did not change PPS. We conclude by proposing a biologically plausible model to explain plasticity in PPS representation after tool-use, supported by computational and behavioural data.
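
    The synchronous-pairing mechanism can be caricatured with a single Hebbian weight: a hand-centred multisensory unit is driven by touch, and its connection from a far-space auditory unit grows only when the two inputs co-occur. This toy is far simpler than the paper's network model, and all parameters are hypothetical.

```python
# Toy Hebbian account of PPS extension: the weight from a far-space auditory
# unit onto a hand-centred multisensory neuron grows only under synchronous
# audio-tactile pairing.  Learning rate and trial counts are hypothetical.

def train(synchronous, trials=100, lr=0.05):
    """Return the far-space auditory weight after a training session.
    Asynchronous training alternates the two inputs so they never co-occur."""
    w_far = 0.1
    for t in range(trials):
        touch = 1.0 if (synchronous or t % 2 == 0) else 0.0
        aud_far = 1.0 if (synchronous or t % 2 == 1) else 0.0
        post = touch                                   # tactile drive fires the neuron
        w_far += lr * post * aud_far * (1.0 - w_far)   # bounded Hebbian update
    return w_far

w_sync = train(True)     # pairing, as during tool use: weight saturates
w_async = train(False)   # asynchronous control: weight is unchanged
```

    A large far-space auditory weight means the neuron now responds to far stimuli, i.e. an extended PPS; the asynchronous control leaves the boundary where it was, matching the behavioural result.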

  18. Some applications of nanometer scale structures for current and future X-ray space research

    DEFF Research Database (Denmark)

    Christensen, Finn Erland; Abdali, S; Frederiksen, P K

    1994-01-01

    Institute in collaboration with the FOM Institute for Plasma Physics, Nieuwegein, the Max-Planck-Institut für Extraterrestrische Physik, Aussenstelle Berlin, the Space Research Institute, Russian Academy of Sciences, the Smithsonian Astrophysical Observatory, Ovonics Synthetic Materials Company and Lawrence...

  19. A Multidimensional Scaling Analysis of Own- and Cross-Race Face Spaces

    Science.gov (United States)

    Papesh, Megan H.; Goldinger, Stephen D.

    2010-01-01

    We examined predictions derived from Valentine's (1991) Multidimensional Space (MDS) framework for own- and other-race face processing. A set of 20 computerized faces was generated from a single prototype. Each face was saved as Black and White, changing only skin tone, such that structurally identical faces were represented in both race…

  20. Small-Scale Design Experiments as Working Space for Larger Mobile Communication Challenges

    Science.gov (United States)

    Lowe, Sarah; Stuedahl, Dagny

    2014-01-01

    In this paper, a design experiment using Instagram as a cultural probe is submitted as a method for analyzing the challenges that arise when considering the implementation of social media within a distributed communication space. It outlines how small, iterative investigations can reveal deeper research questions relevant to the education of…

  1. Approaching the exa-scale: a real-world evaluation of rendering extremely large data sets

    Energy Technology Data Exchange (ETDEWEB)

    Patchett, John M [Los Alamos National Laboratory; Ahrens, James P [Los Alamos National Laboratory; Lo, Li-Ta [Los Alamos National Laboratory; Brownlee, Carson S [Los Alamos National Laboratory; Mitchell, Christopher J [Los Alamos National Laboratory; Hansen, Chuck [UNIV OF UTAH]

    2010-10-15

    Extremely large scale analysis is becoming increasingly important as supercomputers and their simulations move from petascale to exascale. The lack of dedicated hardware acceleration for rendering on today's supercomputing platforms motivates our detailed evaluation of the possibility of interactive rendering on the supercomputer. In order to facilitate our understanding of rendering on the supercomputing platform, we focus on scalability of rendering algorithms and architecture envisioned for exascale datasets. To understand tradeoffs for dealing with extremely large datasets, we compare three different rendering algorithms for large polygonal data: software based ray tracing, software based rasterization and hardware accelerated rasterization. We present a case study of strong and weak scaling of rendering extremely large data on both GPU and CPU based parallel supercomputers using ParaView, a parallel visualization tool. We use three different data sets: two synthetic and one from a scientific application. At an extreme scale, algorithmic rendering choices make a difference and should be considered while approaching exascale computing, visualization, and analysis. We find software based ray-tracing offers a viable approach for scalable rendering of the projected future massive data sizes.
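
    The strong- and weak-scaling comparisons in such a study reduce to two standard efficiency ratios; the timings below are hypothetical, for illustration only.

```python
def strong_scaling_efficiency(t1, tn, n):
    """Fixed total work: ideal time on n processes is t1/n,
    so efficiency = t1 / (n * tn)."""
    return t1 / (n * tn)

def weak_scaling_efficiency(t1, tn):
    """Fixed work per process: ideal time stays flat,
    so efficiency = t1 / tn."""
    return t1 / tn

# Hypothetical render timings (seconds) on 1 and 256 processes
strong = strong_scaling_efficiency(t1=100.0, tn=0.5, n=256)   # fixed-size frame
weak = weak_scaling_efficiency(t1=10.0, tn=12.5)              # data grown with n
```

    Efficiencies near 1.0 indicate the renderer scales well; falling efficiency at high process counts is what distinguishes the three algorithms in a study like this one.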

  2. Metaheuristic approaches to solving large-scale Bilevel Uncapacitated Facility Location Problem with clients' preferences

    Directory of Open Access Journals (Sweden)

    Marić Miroslav

    2015-01-01

    Full Text Available In this study, we consider a variant of the Bilevel Uncapacitated Facility Location Problem (BLUFLP), in which the clients choose suppliers based on their own preferences. We propose and compare three metaheuristic approaches for solving this problem: Particle Swarm Optimization (PSO), Simulated Annealing (SA), and a combination of the Reduced and Basic Variable Neighborhood Search Methods (VNS). We used a representation of solutions and an objective function calculation that are adequate for all three proposed methods. An additional strategy is implemented in order to provide significant time savings when evaluating small changes of a solution's code in the improvement parts. Constructive elements of each of the proposed algorithms are adapted to the problem under consideration. The results of broad computational tests on modified problem instances from the literature show good performance of all three proposed methods, even on large problem dimensions. However, the obtained results indicate that the proposed VNS-based method has significantly better performance compared to the SA and PSO approaches, especially when solving large-scale problem instances. Computational experiments on large-scale benchmarks demonstrate that the VNS-based method is fast, competitive, and able to find high-quality solutions, even for large-scale problem instances with up to 2000 clients and 2000 potential facilities within reasonable CPU times.
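
    The bilevel structure (the leader opens facilities, then each client reacts by choosing its most-preferred open facility) can be sketched with a minimal simulated-annealing loop over facility subsets. The instance data, neighborhood, and cooling schedule below are hypothetical; this is one of the three metaheuristics compared, not the paper's tuned implementation.

```python
import math
import random

# Toy bilevel instance (hypothetical): the leader opens facilities; each
# client then chooses its most-preferred OPEN facility, not the cheapest one.
open_cost = [10.0, 8.0, 12.0]                               # per facility
prefs = [[0, 1, 2], [1, 0, 2], [2, 1, 0], [1, 2, 0]]        # client preference order
serve_cost = [[2, 5, 9], [6, 1, 7], [8, 4, 3], [5, 2, 6]]   # leader's serving cost

def total_cost(open_set):
    """Leader's cost once clients have reacted to the set of open facilities."""
    if not open_set:
        return float("inf")
    cost = sum(open_cost[f] for f in open_set)
    for c, pref in enumerate(prefs):
        chosen = next(f for f in pref if f in open_set)   # client's own choice
        cost += serve_cost[c][chosen]
    return cost

def anneal(seed=0, steps=500, t0=5.0, alpha=0.99):
    """Simulated annealing over facility subsets with single-flip moves."""
    rng = random.Random(seed)
    cur = {0, 1, 2}
    cur_cost = total_cost(cur)
    best, best_cost, t = set(cur), cur_cost, t0
    for _ in range(steps):
        cand = cur ^ {rng.randrange(3)}          # open or close one facility
        cc = total_cost(cand)
        if cc <= cur_cost or rng.random() < math.exp((cur_cost - cc) / t):
            cur, cur_cost = cand, cc
        if cur_cost < best_cost:
            best, best_cost = set(cur), cur_cost
        t *= alpha
    return best, best_cost

best, best_cost = anneal()
```

    On this tiny instance the optimum is to open facility 1 alone; the PSO and VNS variants in the paper differ only in how they move through the same subset space.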

  3. Technical and scale efficiency in public and private Irish nursing homes - a bootstrap DEA approach.

    Science.gov (United States)

    Ni Luasa, Shiovan; Dineen, Declan; Zieba, Marta

    2016-10-27

    This article provides methodological and empirical insights into the estimation of technical efficiency in the nursing home sector. Focusing on long-stay care and using primary data, we examine technical and scale efficiency in 39 public and 73 private Irish nursing homes by applying an input-oriented data envelopment analysis (DEA). We employ robust bootstrap methods to validate our nonparametric DEA scores and to integrate the effects of potential determinants in estimating the efficiencies. Both the homogeneous and two-stage double bootstrap procedures are used to obtain confidence intervals for the bias-corrected DEA scores. Importantly, the application of the double bootstrap approach affords true DEA technical efficiency scores after adjusting for the effects of ownership, size, case-mix, and other determinants such as location and quality. Based on our DEA results for variable returns to scale technology, the average technical efficiency score is 62 %, and the mean scale efficiency is 88 %, with nearly all units operating on the increasing returns to scale part of the production frontier. Moreover, based on the double bootstrap results, Irish nursing homes are less technically efficient, and more scale efficient than the conventional DEA estimates suggest. Regarding the efficiency determinants, in terms of ownership, we find that private facilities are less efficient than the public units. Furthermore, the size of the nursing home has a positive effect, and this reinforces our finding that Irish homes operate under increasing returns to scale. Also, notably, we find that a tendency towards quality improvements can lead to poorer technical efficiency performance.
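
    In the simplest special case (one input, one output) the input-oriented constant-returns-to-scale (CCR) technical efficiency used in DEA reduces to each unit's output/input ratio relative to the best observed ratio. The sketch below uses hypothetical nursing-home data; the full study solves a linear program per unit with multiple inputs and outputs, and then bootstraps the scores to correct their bias.

```python
def ccr_efficiency(inputs, outputs):
    """Input-oriented technical efficiency under constant returns to scale
    for a single input and output: theta_k = (y_k/x_k) / max_j (y_j/x_j)."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical homes: x = staff hours (000s), y = resident-days (000s)
staff = [5.0, 8.0, 4.0, 10.0]
days = [10.0, 12.0, 9.0, 15.0]
scores = ccr_efficiency(staff, days)   # the best-ratio home scores 1.0
```

    A score of 1.0 puts a home on the efficient frontier; scale efficiency is then the ratio of the CRS score to the variable-returns-to-scale score, which requires the full LP formulation.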

  4. A multi-scale approach to quantifying non-rainfall water inputs

    Science.gov (United States)

    Agam, Nurit; Florentin, Anat

    2015-04-01

    Non-rainfall water inputs (NRWIs) are a gain of water to the surface soil layer caused by sources other than rainfall, i.e., by fog deposition, dew formation, or water vapor adsorption. These water inputs usually evaporate the following morning, creating a diurnal cycle of water content in the uppermost soil layer, which involves exchange of latent-heat flux (LE) between the soil and the atmosphere. The significance of the formation and evaporation of NRWIs in drylands is widely acknowledged, yet understanding of the environmental conditions controlling their magnitude is still lacking, and their spatial extent has not been studied before. A multi-scale approach to quantifying NRWIs and the corresponding diurnal water cycle in arid regions is presented. The research was conducted over a bare loess soil in the Negev desert (30°51'35.30"N, 34°46'40.97"E) during the dry season (May-September 2014). During this dry period, gain in soil water content is only a result of NRWIs. A micro-lysimeter (ML) with a 20 cm diameter and 50 cm depth, filled with an undisturbed soil sample, was placed on a scale buried in the soil such that the top end of the sample was level with the soil surface, and the sample's mass was continuously monitored. The ML served as a point measurement against which larger-scale micrometeorological methods were compared: an eddy covariance (EC) flux tower (field scale, ~2×10³ m²) and a surface layer scintillometer (field scale, ~8×10³ m²). The ability to obtain spatially distributed NRWIs at the regional scale through mapping changes in land surface emissivity was tested as well. Preliminary results indicate that despite the acknowledged limitations of nighttime measurements, the EC LE followed the micro-lysimeter LE closely, and the sensible heat flux derived by the EC and the scintillometer were in good agreement, demonstrating the feasibility of measuring NRWIs with both methods. This innovative multi-scale approach sheds light on various aspects of the NRWI

  5. Spaces

    Directory of Open Access Journals (Sweden)

    Maziar Nekovee

    2010-01-01

    Full Text Available Cognitive radio is being intensively researched as the enabling technology for license-exempt access to the so-called TV White Spaces (TVWS), large portions of spectrum in the UHF/VHF bands which become available on a geographical basis after digital switchover. Both in the US and, more recently, in the UK, the regulators have given conditional endorsement to this new mode of access. This paper reviews the state-of-the-art in technology, regulation, and standardisation of cognitive access to TVWS. It examines the spectrum opportunity and commercial use cases associated with this form of secondary access.

  6. Digital Cellular Solid Pressure Vessels: A Novel Approach for Human Habitation in Space

    Science.gov (United States)

    Cellucci, Daniel; Jenett, Benjamin; Cheung, Kenneth C.

    2017-01-01

    It is widely assumed that human exploration beyond Earth's orbit will require vehicles capable of providing long-duration habitats that simulate an Earth-like environment - consistent artificial gravity, breathable atmosphere, and sufficient living space - while requiring the minimum possible launch mass. This paper examines how the qualities of digital cellular solids - high performance, repairability, reconfigurability, tunable mechanical response - allow the accomplishment of long-duration habitat objectives at a fraction of the mass required for traditional structural technologies. To illustrate the impact digital cellular solids could make as a replacement for conventional habitat subsystems, we compare recently proposed deep space habitat structural systems with a digital cellular solids pressure vessel design that consists of a carbon fiber reinforced polymer (CFRP) digital cellular solid cylindrical framework lined with an ultra-high molecular weight polyethylene (UHMWPE) skin. We use the analytical treatment of a cellular solid with linear specific-modulus scaling to find the minimum-mass pressure vessel for a structure, and find that, for equivalent habitable volume and appropriate safety factors, the use of digital cellular solids provides clear methods for producing structures that are not only repairable and reconfigurable, but also higher performance than their conventionally manufactured counterparts.
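
    The kind of minimum-mass comparison described above starts from thin-wall pressure vessel sizing: hoop stress sets the wall thickness, t = sf·p·r/σ, and skin mass follows from the surface area. The numbers below (habitat dimensions, allowable stress, density) are hypothetical, not taken from the paper.

```python
import math

def cylinder_skin_mass(p, r, length, sigma_allow, rho, sf=2.0):
    """Minimum skin mass of a thin-walled cylindrical pressure vessel.
    Hoop stress sizes the wall: t = sf * p * r / sigma_allow."""
    t = sf * p * r / sigma_allow
    area = 2 * math.pi * r * length + 2 * math.pi * r ** 2  # wall + flat ends (approx.)
    return rho * t * area

# Hypothetical habitat: 101 kPa internal pressure, r = 2 m, L = 8 m,
# UHMWPE-like skin properties (assumed, not the paper's values)
m = cylinder_skin_mass(p=101e3, r=2.0, length=8.0, sigma_allow=300e6, rho=970.0)
```

    Skin mass scales linearly with pressure and allowable-stress-to-density ratio, which is why the framework-plus-skin division of labour matters: the framework carries stiffness while a high specific-strength skin carries the pressure load.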

  7. A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks.

    Science.gov (United States)

    Röhl, Annika; Bockmayr, Alexander

    2017-01-03

    Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousands of reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks, while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. The minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method by Burgard et al. does not allow specifying the different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks when several exist. This allows identifying common reactions that are present in all subnetworks, and reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice. Thus it may become necessary to reduce the size of the network while keeping important functionalities. We propose an MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not only one, but all minimum subnetworks satisfying the required properties.
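
    The objective here is to minimize the number of active reactions subject to steady state (S·v = 0) and an active biomass reaction; the MILP does this with binary indicator variables. For a toy network with unit fluxes, the same search can be done by plain enumeration, which makes the idea concrete without an MILP solver. The network below is hypothetical.

```python
from itertools import product

# Toy network (hypothetical): R1: -> A, R2: A -> B, R3: A -> B, R4: B -> (biomass)
S = [[1, -1, -1, 0],    # metabolite A balance
     [0,  1,  1, -1]]   # metabolite B balance
BIOMASS = 3

def minimum_subnetworks():
    """Enumerate unit (0/1) flux vectors at steady state with active biomass,
    then keep the minimum-cardinality supports.  An MILP with indicator
    variables y_i and |v_i| <= M*y_i does the same job at genome scale."""
    feasible = []
    for v in product([0, 1], repeat=4):
        if v[BIOMASS] == 1 and all(sum(s * f for s, f in zip(row, v)) == 0
                                   for row in S):
            feasible.append(frozenset(i for i, f in enumerate(v) if f))
    k = min(len(s) for s in feasible)
    return [s for s in feasible if len(s) == k]

subnets = minimum_subnetworks()
core = set.intersection(*map(set, subnets))   # reactions present in every solution
```

    The two minimum subnetworks differ only in which of the alternative A-to-B reactions they use, while the core (uptake and biomass) appears in every solution - exactly the common-versus-alternative distinction the enumeration in the paper provides.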

  8. Finite element modelling of woven composite failure modes at the mesoscopic scale: deterministic versus stochastic approaches

    Science.gov (United States)

    Roirand, Q.; Missoum-Benziane, D.; Thionnet, A.; Laiarinandrasana, L.

    2017-09-01

    Textile composites have a complex 3D architecture. To assess the durability of such engineering structures, the failure mechanisms must be identified. The degradation has been examined by tomography. The present work addresses a numerical damage model dedicated to the simulation of crack initiation and propagation at the scale of the warp yarns. For the 3D woven composites under study, loadings in tension and in combined tension and bending were considered. Based on an erosion procedure for broken elements, the failure mechanisms were modelled on 3D periodic cells by finite element calculations. The breakage of one element was determined using a failure criterion at the mesoscopic scale based on the yarn stress at failure. The results were found to be in good agreement with the experimental data for the two kinds of macroscopic loadings. The deterministic approach assumed a homogeneously distributed stress at failure over all the integration points in the meshes of the woven composites. A stochastic approach was then applied to a simple representative elementary periodic cell. The distribution of the Weibull stress at failure was assigned to the integration points using a Monte Carlo simulation. It was shown that this stochastic approach allowed more realistic failure simulations, avoiding the idealised symmetry due to the deterministic modelling. In particular, the stochastic simulations showed scatter in the stress and strain at failure and in the failure modes of the yarn.
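
    The deterministic-versus-stochastic contrast can be sketched in a few lines: the deterministic model gives every element the same stress at failure, while the Monte Carlo draw assigns each element a strength sampled from a Weibull distribution. The Weibull scale/shape and the applied stress below are hypothetical.

```python
import random

# Deterministic vs stochastic element strengths at the mesoscopic scale.
# Weibull scale/shape and the applied stress are hypothetical values.

def stochastic_strengths(n, scale, shape, seed=42):
    """Monte Carlo draw of per-element failure stresses from a Weibull law."""
    rng = random.Random(seed)
    return [rng.weibullvariate(scale, shape) for _ in range(n)]

def broken_elements(strengths, applied_stress):
    """Indices of elements to erode: sampled strength below the applied stress."""
    return [i for i, s in enumerate(strengths) if s < applied_stress]

n = 100
deterministic = [500.0] * n                        # one homogeneous strength (MPa)
stochastic = stochastic_strengths(n, scale=500.0, shape=10.0)
```

    At an applied stress of 450 MPa the deterministic mesh erodes nothing (and would later fail everywhere at once, the idealised symmetry noted above), whereas the stochastic mesh erodes scattered weak elements, producing the more realistic failure patterns reported.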

  9. A multi-scale spatial approach to address environmental effects of small hydropower development.

    Science.gov (United States)

    McManamay, Ryan A; Samu, Nicole; Kao, Shih-Chieh; Bevelhimer, Mark S; Hetrick, Shelaine C

    2015-01-01

    Hydropower development continues to grow worldwide in developed and developing countries. While the ecological and physical responses to dam construction have been well documented, translating this information into planning for hydropower development is extremely difficult. Very few studies have conducted environmental assessments to guide site-specific or widespread hydropower development. Herein, we propose a spatial approach for estimating the environmental effects of hydropower development at multiple scales, as opposed to individual site-by-site assessments (e.g., environmental impact assessment). Because the complex, process-driven effects of future hydropower development may be uncertain or, at best, limited by available information, we invested considerable effort in describing novel approaches to represent environmental concerns using spatial data and in developing the spatial footprint of hydropower infrastructure. We then use two case studies in the US, one at the scale of the conterminous US and another within two adjoining river basins, to examine how environmental concerns can be identified and related to areas of varying energy capacity. We use combinations of reserve-design planning and multi-metric ranking to visualize tradeoffs among environmental concerns and potential energy capacity. Spatial frameworks, like the one presented, are not meant to replace more in-depth environmental assessments, but to identify information gaps and measure the sustainability of multi-development scenarios so as to inform policy decisions at the basin or national level. Most importantly, the approach should foster discussions among environmental scientists and stakeholders regarding solutions to optimize energy development and environmental sustainability.

  10. Evaluating the importance of climate input and calculation approach for estimating global scale potential evaporation

    Science.gov (United States)

    Müller Schmied, Hannes; Möller, Melanie; Döll, Petra

    2015-04-01

    The concept of potential evapotranspiration (PET) is used widely in hydrological modeling to assess the maximum water demand of the atmosphere from the soil-plant continuum for the subsequent estimation of actual evapotranspiration. The calculation is based on meteorological data like radiation, temperature and wind speed, and there are a large number of proposed equations for the computation of PET. The uncertainty of estimated spatially distributed PET at the global scale is high due to the uncertain climate input and the PET calculation approach. Therefore, we evaluated an ensemble of simulated PET estimates that was derived by combining four state-of-the-art radiation (plus other climate) data sets as well as four calculation approaches that are applicable for modeling impacts of climate change at the global scale. This presentation i) identifies optimal radiation input estimates by comparison to BSRN and GEBA stations, ii) identifies an optimal combination of climate input and PET model by comparison to pan evaporation data, and determines iii) where on the global land surface which type of uncertainty dominates total uncertainty of the PET ensemble - either climate input uncertainty or PET calculation approach uncertainty. The global water availability and use model WaterGAP is used for the analysis.
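    The final step described here, deciding per grid cell whether climate-input or calculation-approach uncertainty dominates the ensemble, can be sketched with a toy 4 x 4 ensemble; the PET values below are invented for illustration:

```python
import numpy as np

# Hypothetical PET estimates (mm/yr) for one grid cell:
# rows = 4 climate input data sets, columns = 4 PET equations.
pet = np.array([[950, 1010, 900,  980],
                [970, 1030, 915,  995],
                [940, 1000, 890,  970],
                [990, 1045, 930, 1015]], dtype=float)

# Spread attributable to the climate input: variance of the row means.
var_input = pet.mean(axis=1).var()
# Spread attributable to the PET equation: variance of the column means.
var_method = pet.mean(axis=0).var()

dominant = "calculation approach" if var_method > var_input else "climate input"
print(f"input var={var_input:.0f}, method var={var_method:.0f} -> {dominant}")
```

    For this invented cell, the spread across PET equations exceeds the spread across climate inputs, so the calculation approach dominates the total uncertainty.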

  11. Comparison for Chinese subordinates as a motivation approach: Scale Development and Psychometric Properties

    Directory of Open Access Journals (Sweden)

    Jingjing Ge

    2015-11-01

    Purpose: Chinese people are motivated by social comparison and temporal comparison. Based on this finding, we conceptualized lateral comparison and vertical comparison as two distinct constructs that represent individual self-enhancement through social comparison with others and temporal comparison with oneself over time. We hypothesized that, as stable individual psychological differences, lateral comparison and vertical comparison would have differential effects on people's working behavior in the Chinese organizational context. Design/methodology/approach: Based on a conceptualization approach to Chinese management research, we conducted three studies to develop and validate a two-factor comparison scale comprising a three-item lateral comparison subscale and a three-item vertical comparison subscale. Findings: Results from qualitative data in Study 1 provide evidence of the convergent and discriminant validity of the scale, while Study 2 demonstrates the scale's predictive validity. Furthermore, in Study 2, a field survey in multiple Chinese organizations showed that lateral comparison and vertical comparison had differential effects on employee task performance and organizational citizenship behavior. Research implications: The theoretical and practical implications of this study are discussed in the working context of Chinese organizations and beyond. Originality/value: This finding integrates insights from previous research on social comparison and temporal comparison into a motivation approach that supervisors use toward subordinates in the Chinese organizational context.

  12. LIDAR-based urban metabolism approach to neighbourhood scale energy and carbon emissions modelling

    Energy Technology Data Exchange (ETDEWEB)

    Christen, A. [British Columbia Univ., Vancouver, BC (Canada). Dept. of Geography; Coops, N. [British Columbia Univ., Vancouver, BC (Canada). Dept. of Forest Sciences; Canada Research Chairs, Ottawa, ON (Canada); Kellet, R. [British Columbia Univ., Vancouver, BC (Canada). School of Architecture and Landscape Architecture

    2010-07-01

    A remote sensing technology was used to model neighbourhood scale energy and carbon emissions in a case study set in Vancouver, British Columbia (BC). The study was used to compile and aggregate atmospheric carbon flux, urban form, and energy and emissions data in a replicable neighbourhood-scale approach. The study illustrated methods of integrating diverse emission and uptake processes on a range of scales and resolutions, and benchmarked comparisons of modelled estimates with measured energy consumption data obtained over a 2-year period from a research tower located in the study area. The study evaluated carbon imports, carbon exports and sequestration, and relevant emissions processes. Fossil fuel emissions produced in the neighbourhood were also estimated. The study demonstrated that remote sensing technologies such as LIDAR and multispectral satellite imagery can be an effective means of generating and extracting urban form and land cover data at fine scales. Data from the study were used to develop several emissions reduction and energy conservation scenarios. 6 refs.

  13. Distance to the scaling law: a useful approach for unveiling relationships between crime and urban metrics

    CERN Document Server

    Alves, Luiz G A; Lenzi, Ervin K; Mendes, Renio S

    2013-01-01

    We report on a quantitative analysis of the relationships between the number of homicides, population size and ten other urban metrics. Using data from Brazilian cities, we show that well-defined average scaling laws with population size emerge when investigating the relations between population and number of homicides as well as between population and urban metrics. We also show that the fluctuations around the scaling laws are log-normally distributed, which enabled us to model these scaling laws by a stochastic-like equation driven by a multiplicative and log-normally distributed noise. Because of the scaling laws, we argue that it is better to employ logarithms in order to describe the number of homicides as a function of the urban metrics via regression analysis. In addition to the regression analysis, we propose an approach to correlate crime and urban metrics via the evaluation of the distance between the actual value of the number of homicides (as well as the value of the urban metrics) and the value that is...
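    The regression step, fitting the scaling law in log space and measuring each city's "distance" to it as a log residual, can be sketched on synthetic data (not the Brazilian city data; the exponent and noise level below are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic city data: Y = A * N^beta with log-normal fluctuations,
# mimicking the scaling-law form reported for urban indicators.
beta_true, logA_true = 1.15, -6.0
N = rng.uniform(1e4, 1e7, 500)                     # population
logY = logA_true + beta_true * np.log(N) + rng.normal(0, 0.3, N.size)

# Ordinary least squares in log-log space recovers the scaling exponent.
X = np.column_stack([np.ones_like(N), np.log(N)])
logA, beta = np.linalg.lstsq(X, logY, rcond=None)[0]

# "Distance to the scaling law": the log residual of each city.
distance = logY - (logA + beta * np.log(N))
print(f"beta = {beta:.3f}")   # close to the true exponent 1.15
```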

  14. A multi-scaled approach to evaluating the fish assemblage structure within southern Appalachian streams USA.

    Science.gov (United States)

    Kirsch, Joseph; Peterson, James T.

    2014-01-01

    There is considerable uncertainty about the relative roles of stream habitat and landscape characteristics in structuring stream-fish assemblages. We evaluated the relative importance of environmental characteristics on fish occupancy at the local and landscape scales within the upper Little Tennessee River basin of Georgia and North Carolina. Fishes were sampled using a quadrat sample design at 525 channel units within 48 study reaches during two consecutive years. We evaluated species–habitat relationships (local and landscape factors) by developing hierarchical, multispecies occupancy models. Modeling results suggested that fish occupancy within the Little Tennessee River basin was primarily influenced by stream topology and topography, urban land coverage, and channel unit types. Landscape scale factors (e.g., urban land coverage and elevation) largely controlled the fish assemblage structure at a stream-reach level, and local-scale factors (i.e., channel unit types) influenced fish distribution within stream reaches. Our study demonstrates the utility of a multi-scaled approach and the need to account for hierarchy and the interscale interactions of factors influencing assemblage structure prior to monitoring fish assemblages, developing biological management plans, or allocating management resources throughout a stream system.

  15. An approach to ground based space surveillance of geostationary on-orbit servicing operations

    Science.gov (United States)

    Scott, Robert (Lauchie); Ellery, Alex

    2015-07-01

    On Orbit Servicing (OOS) is a class of dual-use robotic space missions that could potentially extend the life of orbiting satellites by fuel replenishment, repair, inspection, orbital maintenance or satellite repurposing, and possibly reduce the rate of space debris generation. OOS performed in geostationary orbit poses a unique challenge for the optical space surveillance community. Both satellites would be performing proximity operations in tight formation flight with separations less than 500 m making atmospheric seeing (turbulence) a challenge to resolving a geostationary satellite pair when viewed from the ground. The two objects would appear merged in an image as the resolving power of the telescope and detector, coupled with atmospheric seeing, limits the ability to resolve the two objects. This poses an issue for obtaining orbital data for conjunction flight safety or, in matters pertaining to space security, inferring the intent and trajectory of an unexpected object perched very close to one's satellite asset on orbit. In order to overcome this problem speckle interferometry using a cross spectrum approach is examined as a means to optically resolve the client and servicer's relative positions to enable a means to perform relative orbit determination of the two spacecraft. This paper explores cases where client and servicing satellites are in unforced relative motion flight and examines the observability of the objects. Tools are described that exploit cross-spectrum speckle interferometry to (1) determine the presence of a secondary in the vicinity of the client satellite and (2) estimate the servicing satellite's motion relative to the client. Experimental observations performed with the Mont Mégantic 1.6 m telescope on co-located geostationary satellites (acting as OOS proxy objects) are described. Apparent angular separations between Anik G1 and Anik F1R from 5 to 1 arcsec were observed as the two satellites appeared to graze one another. Data
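    The core idea, that the power/cross spectrum of an unresolved pair of sources carries fringes whose period encodes the separation, can be shown in a 1D toy model; this is a sketch of the principle only, not the cross-spectrum pipeline used with the Mont Mégantic data:

```python
import numpy as np

N, d = 256, 16              # grid size and true pair separation (pixels)

# Two point sources separated by d pixels (client + servicer, unresolved).
scene = np.zeros(N)
scene[0] = scene[d] = 1.0

# Power spectrum of the pair: 2 + 2*cos(2*pi*k*d/N) -> fringes of period N/d.
power = np.abs(np.fft.fft(scene)) ** 2

# The fringe frequency (peak of the spectrum of the mean-subtracted power
# spectrum) recovers the separation d.
fringe = np.abs(np.fft.rfft(power - power.mean()))
d_est = int(np.argmax(fringe))
print(d_est)  # -> 16
```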

  16. Approaches for processing spectral measurements of reflected sunlight for space object detection and identification

    Science.gov (United States)

    Cauquy, Marie-Astrid A.

    2004-09-01

    The proliferation of small, lightweight, 'micro-' and 'nanosatellite' (largest dimension systems. Moreover, some satellites in geo-orbit are just simply too distant to be resolved by ground systems. The core concept of using Non-Imaging Measurements (NIM) to gather information about these objects comes from the fact that after reflection on a satellite surface, the reflected light contains information about the surface materials of the satellite. This approach of using NIM for satellite evaluation is receiving new attention. In this dissertation, the accuracy of using these spectral measurements to match an unknown spectrum to a database containing known spectra and to estimate the fractional composition for materials contained in a synthetic spectrum is discussed. This problem is divided into two parts: a pattern recognition problem and a spectral unmixing problem. Two methods were developed for the pattern recognition problem. The first approach is a distance classifier processing different input features. The second method is an artificial neural network designed to process central moments of real measured spectra. The spectrum database is the Spica database provided by the Maui Space Surveillance Site (MSSS), Hawaii, USA, and consists of spectra from more than 100 different satellites. For the spectral unmixing part, four different approaches were tested. These approaches are based on the ability of spectral signal processing to estimate the fractional composition of materials from the measurement of a single spectrum. Material spectra were provided by the NASA Johnson Space Center (JSC) to create synthetic spectra. A statistical approach based on the Expectation Maximization (EM) algorithm as well as a constrained linear estimator were used to estimate fractional compositions and the presence of materials in a synthetic spectrum. The last two unmixing methods are based on inverse matrices, singular value decomposition and constrained pseudoinverse. The results

  17. FOREWORD: Heterogenous nucleation and microstructure formation—a scale- and system-bridging approach

    Science.gov (United States)

    Emmerich, H.

    2009-11-01

    Scope and aim of this volume. Nucleation and initial microstructure formation play an important role in almost all aspects of materials science [1-5]. The relevance of the prediction and control of nucleation and the subsequent microstructure formation is fully accepted across many areas of modern surface and materials science and technology. One reason is that a large range of material properties, from mechanical ones such as ductility and hardness to electrical and magnetic ones such as electric conductivity and magnetic hardness, depend largely on the specific crystalline structure that forms during nucleation and the subsequent initial microstructure growth. A demonstrative example of the latter is the so-called bamboo structure of interconnects in an integrated circuit, in which grain boundaries aligned perpendicular to the direction of current flow provide resistance against electromigration [6]. Despite the large relevance of predicting and controlling nucleation and the subsequent microstructure formation, and despite significant progress in the experimental analysis of the later stages of crystal growth in line with new theoretical computer simulation concepts [7], details of the initial stages of solidification are still far from being satisfactorily understood. This is particularly true when the nucleation event occurs as heterogeneous nucleation. The Priority Program SPP 1296 'Heterogenous Nucleation and Microstructure Formation—a Scale- and System-Bridging Approach' [8], sponsored by the German Research Foundation (DFG), intends to contribute to this open issue via a six-year research program that enables approximately twenty research groups in Germany to work together interdisciplinarily toward this goal. Moreover, it enables the participants to embed themselves in the international community focused on this issue via internationally open joint workshops, conferences and summer schools. An outline of such activities can be found

  18. Scaling approach to quantum non-equilibrium dynamics of many-body systems

    Energy Technology Data Exchange (ETDEWEB)

    Gritsev, Vladimir; Barmettler, Peter [Physics Department, University of Fribourg, Chemin du Musee 3, 1700 Fribourg (Switzerland); Demler, Eugene, E-mail: vladimir.gritsev@unifr.c [Lyman Laboratory of Physics, Physics Department, Harvard University, 17 Oxford Street, Cambridge, MA 02138 (United States)

    2010-11-15

    Understanding the non-equilibrium quantum dynamics of many-body systems is one of the most challenging problems in modern theoretical physics. While numerous approximate and exact solutions exist for systems in equilibrium, examples of non-equilibrium dynamics of many-body systems that allow reliable theoretical analysis are few and far between. In this paper, we discuss a broad class of time-dependent interacting systems subject to external linear and parabolic potentials, for which the many-body Schroedinger equation can be solved using a scaling transformation. We demonstrate that scaling solutions exist for both local and non-local interactions, and derive appropriate self-consistency equations. We apply this approach to several specific experimentally relevant examples of interacting bosons in one and two dimensions. As an intriguing result, we find that weakly and strongly interacting Bose gases expanding from a parabolic trap can exhibit very similar dynamics.
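    The kind of scaling solution described here can be illustrated, in a schematic one-dimensional single-particle form rather than the paper's exact many-body equations, by the familiar harmonic-trap scaling ansatz:

```latex
% Schematic scaling ansatz for a gas released from a harmonic trap
% (Castin--Dum form); b(t) is the scaling factor, \tau the rescaled time.
\psi(x,t) = \frac{1}{\sqrt{b(t)}}\,
  \exp\!\Big(\frac{i m x^{2}\,\dot{b}(t)}{2\hbar\, b(t)}\Big)\,
  \phi\!\Big(\frac{x}{b(t)},\,\tau(t)\Big),
\qquad
\tau(t) = \int_{0}^{t} \frac{dt'}{b^{2}(t')},
\qquad
\ddot{b} = \frac{\omega_{0}^{2}}{b^{3}} - \omega^{2}(t)\, b .
```

    Substituting the ansatz into the Schroedinger equation with a time-dependent parabolic potential reduces the dynamics to that of the rescaled wavefunction φ, which is the mechanism the abstract exploits for interacting gases.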

  19. Nonlocal multi-scale traffic flow models: analysis beyond vector spaces

    Directory of Open Access Journals (Sweden)

    Peter E. Kloeden

    2016-08-01

    Abstract Realistic models of traffic flow are nonlinear and involve nonlocal effects in balance laws. Flow characteristics of different types of vehicles, such as cars and trucks, need to be described differently. Two alternatives are used here: $L^p$-valued Lebesgue measurable density functions and signed Radon measures. The resulting solution spaces are metric spaces that do not have a linear structure, so the usual convenient methods of functional analysis are no longer applicable. Instead, ideas from mutational analysis will be used; in particular, the method of Euler compactness will be applied to establish the well-posedness of the nonlocal balance laws. This involves the concatenation of solutions of piecewise linear systems on successive time subintervals, obtained by freezing the nonlinear nonlocal coefficients to their values at the start of each subinterval. Various compactness criteria lead to a convergent subsequence. Careful estimates of the linear systems are needed to implement this program.
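    A schematic form of such a nonlocal balance law (illustrative only; the paper's multi-class system couples several vehicle densities) is:

```latex
% Nonlocal balance law for a vehicle density rho: the velocity depends on
% a convolution of the density with a kernel eta (nonlocal look-ahead).
\partial_t \rho(x,t)
  + \partial_x \Big( \rho(x,t)\, V\big( (\eta * \rho)(x,t) \big) \Big) = 0,
\qquad
(\eta * \rho)(x,t) = \int_{\mathbb{R}} \eta(x-y)\, \rho(y,t)\, dy .
```

    The nonlocal term $(\eta * \rho)$ is what gets "frozen" at the start of each time subinterval in the Euler-compactness construction, leaving a linear transport problem on that subinterval.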

  20. Prediction of free air space in initial composting mixtures by a statistical design approach.

    Science.gov (United States)

    Soares, Micaela A R; Quina, Margarida J; Quinta-Ferreira, Rosa

    2013-10-15

    Free air space (FAS) is a physical parameter that can play an important role in composting processes to maintain favourable aerobic conditions. Aiming to predict the FAS of initial composting mixtures, specific material proportions ranging from 0 to 1 were tested for a case study comprising industrial potato peel, which is characterized by low air void volume and thus requires additional components for its composting. The characterization and prediction of FAS for initial mixtures involving potato peel, grass clippings and rice husks (set A) or sawdust (set B) was accomplished by means of an augmented simplex-centroid mixture design approach. The experimental data were fitted to second-order Scheffé polynomials. Synergistic or antagonistic effects of mixture proportions on the FAS response were identified from the surface and response trace plots. Moreover, good agreement was achieved between the model predictions and supplementary experimental data. In addition, theoretical and empirical approaches for estimating FAS available in the literature were compared with the predictions generated by the mixture design approach. This study demonstrated that the mixture design methodology can be a valuable tool to predict the initial FAS of composting mixtures, specifically in making adjustments to improve composting processes containing primarily potato peel. Copyright © 2013 Elsevier Ltd. All rights reserved.
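    A second-order Scheffé polynomial fit over an augmented simplex-centroid design can be sketched as follows; the design points follow the standard three-component layout, but the FAS values are invented for illustration:

```python
import numpy as np

# Augmented simplex-centroid design for a 3-component mixture
# (x1 + x2 + x3 = 1): vertices, binary midpoints, centroid, interior points.
X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [.5, .5, 0], [.5, 0, .5], [0, .5, .5],
              [1/3, 1/3, 1/3],
              [2/3, 1/6, 1/6], [1/6, 2/3, 1/6], [1/6, 1/6, 2/3]])

# Hypothetical FAS measurements at the design points (illustrative only).
fas = np.array([0.30, 0.62, 0.55, 0.48, 0.45, 0.60, 0.52, 0.40, 0.58, 0.53])

# Second-order Scheffe polynomial: FAS = sum(b_i x_i) + sum(b_ij x_i x_j);
# there is no intercept -- it is absorbed by the mixture constraint.
x1, x2, x3 = X.T
A = np.column_stack([x1, x2, x3, x1*x2, x1*x3, x2*x3])
coef, *_ = np.linalg.lstsq(A, fas, rcond=None)

def predict(x):
    """Predict FAS for a new candidate mixture (x1, x2, x3)."""
    x1, x2, x3 = x
    return np.array([x1, x2, x3, x1*x2, x1*x3, x2*x3]) @ coef

print(round(float(predict([0.4, 0.3, 0.3])), 3))
```

    The fitted binary coefficients b_ij directly expose the synergistic (positive) or antagonistic (negative) blending effects the abstract reads off the response trace plots.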

  1. Modal parameter identification by an iterative approach and by the state space model

    Science.gov (United States)

    Lardiès, Joseph

    2017-10-01

    The problem of estimating a spectral representation of exponentially decaying signals from a set of sampled data is of considerable interest in several applications, such as the vibration analysis of mechanical systems. In this paper we present a nonparametric and a parametric method for modal parameter identification of vibrating systems when only output data are available. The nonparametric method uses an iterative adaptive algorithm based on the formation of a two-dimensional grid mesh in both the frequency and damping domains. We formulate the identification problem as an optimization problem in which the signal energy is obtained from each frequency and damping grid point. The modal parameters are then obtained by minimizing the signal energy from all grid points other than the grid point that contains the modal parameters of the system. The parametric approach uses the state space model and properties of the controllability matrix to obtain the state transition matrix, which contains all modal information. We discuss and illustrate the benefits of the proposed algorithms using one numerical test and two experimental tests, and we conclude that the nonparametric approach is very time consuming when a large number of samples is considered and does not outperform the parametric approach.
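    The parametric idea, that the poles of an estimated transition model carry the modal frequencies and damping ratios, can be sketched with a minimal Prony-style fit on a single noise-free mode; this is a stand-in for, not a reproduction of, the paper's controllability-matrix construction:

```python
import numpy as np

# Simulate the free decay of a single vibration mode (no noise):
# y(t) = exp(-zeta*wn*t) * sin(wd*t), sampled at 1 kHz.
fs, wn, zeta = 1000.0, 2 * np.pi * 5.0, 0.02      # 5 Hz mode, 2% damping
T = 1.0 / fs
n = np.arange(2000)
wd = wn * np.sqrt(1 - zeta**2)
y = np.exp(-zeta * wn * n * T) * np.sin(wd * n * T)

# A decaying sinusoid satisfies y[k] = a1*y[k-1] + a2*y[k-2]; fit a1, a2
# by least squares (the eigenvalues of a fitted state transition matrix
# carry the same modal information).
A = np.column_stack([y[1:-1], y[:-2]])
a1, a2 = np.linalg.lstsq(A, y[2:], rcond=None)[0]

# The discrete poles are the roots of z^2 - a1*z - a2; mapping them back
# to continuous time yields the natural frequency and damping ratio.
poles = np.roots([1.0, -a1, -a2])
s = np.log(poles[0]) / T
wn_est, zeta_est = abs(s), -s.real / abs(s)
print(round(wn_est / (2 * np.pi), 3), round(float(zeta_est), 3))  # -> 5.0 0.02
```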

  2. Numerical Approaches for the Optimization of Plasma Sources for Space Thrusters

    Science.gov (United States)

    Melazzi, Davide; Lancellotti, Vito; Cardinali, Alessandro; Manente, Marco; Pavarin, Daniele

    2013-09-01

    The optimization of radiofrequency magnetized plasma sources for space thrusters has focused on power deposition in nonuniform plasmas. However, many researchers have assumed rather than computed the induced current density on the antenna, and have considered a uniform and constant magnetostatic field aligned with the source axis. To overcome these limitations, we propose two methods: (i) a full-wave approach to compute the current distribution on the antenna and (ii) a ray-tracing approach to investigate the influence of realistic magnetostatic fields on wave propagation and power deposition. Plasma density profiles are included in both approaches. In the full-wave method, we derive a surface integral equation for the antenna and a volume integral equation for the plasma by applying the electromagnetic equivalence principles. A comparative study of different antennas will be presented. In the second method, the propagation and absorption of electromagnetic waves are investigated by solving the 3D Maxwell-Vlasov model equations with a WKB asymptotic expansion. Unconventional mode conversions and power deposition profiles are found when realistic confinement magnetic fields are considered.

  3. Space based laser torque applied on LEO satellites of various geometries at satellite’s closest approach

    OpenAIRE

    Khalifa, N.S.

    2013-01-01

    In light of using laser power in space applications, the motivation of this paper is to use a space-based solar-pumped laser to produce a torque on LEO satellites of various shapes. It is assumed that a space station fires a laser beam toward the satellite, so beam spreading due to diffraction is considered to be the dominant effect on the laser beam propagation. The laser torque is calculated at the point of closest approach between the space station and some sun synchronous l...

  4. Automatic diatom identification using contour analysis by morphological curvature scale spaces

    NARCIS (Netherlands)

    Jalba, Andrei C.; Wilkinson, Michael H.F.; Roerdink, Jos B.T.M.; Bayer, Micha M.; Juggins, Stephen

    A method for automatic identification of diatoms (single-celled algae with silica shells) based on extraction of features on the contour of the cells by multi-scale mathematical morphology is presented. After extracting the contour of the cell, it is smoothed adaptively, encoded using Freeman chain

  5. Galactic bulges from Hubble Space Telescope NICMOS observations : Global scaling relations

    NARCIS (Netherlands)

    Balcells, Marc; Graham, Alister W.; Peletier, Reynier F.

    2007-01-01

    We investigate bulge and disk scaling relations using a volume-corrected sample of early-to intermediate-type disk galaxies in which, importantly, the biasing flux from additional nuclear components has been modeled and removed. Structural parameters are obtained from a seeing-convolved, bulge +

  6. Multi-scale approach for predicting fish species distributions across coral reef seascapes.

    Directory of Open Access Journals (Sweden)

    Simon J Pittman

    Two of the major limitations to effective management of coral reef ecosystems are a lack of information on the spatial distribution of marine species and a paucity of data on the interacting environmental variables that drive distributional patterns. Advances in marine remote sensing, together with the novel integration of landscape ecology and advanced niche modelling techniques, provide an unprecedented opportunity to reliably model and map marine species distributions across many kilometres of coral reef ecosystems. We developed a multi-scale approach using three-dimensional seafloor morphology and across-shelf location to predict spatial distributions for five common Caribbean fish species. Seascape topography was quantified from high resolution bathymetry at five spatial scales (5-300 m radii) surrounding fish survey sites. Model performance and map accuracy were assessed for two high performing machine-learning algorithms: Boosted Regression Trees (BRT) and Maximum Entropy Species Distribution Modelling (MaxEnt). The three most important predictors were geographical location across the shelf, followed by a measure of topographic complexity. Predictor contribution differed among species, yet rarely changed across spatial scales. BRT provided 'outstanding' model predictions (AUC > 0.9) for three of five fish species. MaxEnt provided 'outstanding' model predictions for two of five species, with the remaining three models considered 'excellent' (AUC = 0.8-0.9). In contrast, MaxEnt spatial predictions were markedly more accurate (92% map accuracy) than BRT (68% map accuracy). We demonstrate that reliable spatial predictions for a range of key fish species can be achieved by modelling the interaction between the geographical location across the shelf and the topographic heterogeneity of seafloor structure. This multi-scale, analytic approach is an important new cost-effective tool to accurately delineate essential fish habitat and support
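    The AUC criterion used to grade the models ('outstanding' above 0.9) can be computed directly from presence/absence scores via the Mann-Whitney rank identity; the scores below are hypothetical:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney identity: the probability that a randomly
    chosen presence site scores higher than a randomly chosen absence site."""
    pos = np.asarray(scores_pos)[:, None]
    neg = np.asarray(scores_neg)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

# Hypothetical habitat-suitability scores at presence and absence sites.
presence = [0.9, 0.8, 0.85, 0.7, 0.95]
absence = [0.3, 0.6, 0.4, 0.75, 0.2]
print(auc(presence, absence))  # -> 0.96
```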

  7. Multi-level and multi-scale integrative approach to the understanding of human blastocyst implantation.

    Science.gov (United States)

    Sengupta, Jayasree; Ghosh, Debabrata

    2014-01-01

    Implantation is a complex process which results in the fixation of the zona pellucida-free blastocyst to the maternal uterine endometrium. In the human, it involves progesterone-mediated preparation of the endometrium, age- and stage-matched development of the pre-implantation embryo, and interaction between the embryo and the endometrium. In the present essay, we present the case for why it is necessary to undertake a multi-level, multi-scale integrative approach to deconstruct the succession process of endometrial development to the climax of implantation. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Burnout of pulverized biomass particles in large scale boiler – Single particle model approach

    DEFF Research Database (Denmark)

    Saastamoinen, Jaakko; Aho, Martti; Moilanen, Antero

    2010-01-01

    The burning of coal and biomass particles is studied and compared by measurements in an entrained flow reactor and by modelling. The results are applied to study the burning of pulverized biomass in a large scale utility boiler originally planned for coal. A simplified single particle approach, in which the particle combustion model is coupled with the one-dimensional equation of motion of the particle, is applied for the calculation of the burnout in the boiler. The particle size of biomass can be much larger than that of coal and still reach complete burnout, due to the lower density and greater reactivity of biomass. The burner...

  9. Serbian translation of the 20-item Toronto Alexithymia Scale: psychometric properties and the new methodological approach in translating scales.

    Science.gov (United States)

    Trajanović, Nikola N; Djurić, Vladimir; Latas, Milan; Milovanović, Srdjan; Jovanović, Aleksandar A; Djurić, Dusan

    2013-01-01

    Since the inception of the alexithymia construct in the 1970s, there has been a continuous effort to improve both its theoretical postulates and its clinical utility through the development, standardization and validation of assessment scales. The aim of this study was to validate the Serbian translation of the 20-item Toronto Alexithymia Scale (TAS-20) and to propose a new method for translating scales with the property of temporal stability. The scale was expertly translated by bilingual medical professionals and a linguist, and given to a sample of bilingual participants from the general population who completed both the English and the Serbian version of the scale one week apart. The findings showed that the Serbian version of the TAS-20 had good internal consistency reliability for the total scale (alpha=0.86) and acceptable reliability for the three factors (alpha=0.71-0.79). The analysis confirmed the validity and consistency of the Serbian translation of the scale, with an observed weakness of the factorial structure consistent with studies in other languages. The results also showed that the method of utilizing self-controlled bilingual subjects is a useful alternative to the back-translation method, particularly for linguistically and structurally sensitive scales, or where a larger sample is not available. This method, dubbed 'forth-translation', could be used to translate psychometric scales measuring properties that have temporal stability over a period of at least several weeks.

  10. Infinite-mode squeezed coherent states and non-equilibrium statistical mechanics (phase-space-picture approach)

    Science.gov (United States)

    Yeh, Leehwa

    1993-01-01

    The phase-space-picture approach to quantum non-equilibrium statistical mechanics via the characteristic function of infinite-mode squeezed coherent states is introduced. We use quantum Brownian motion as an example to show how this approach provides an interesting geometrical interpretation of quantum non-equilibrium phenomena.

  11. Towards the Application of a River Management Approach Encompassing most Natural Process Drivers : Lessons Learned from Freedom Space for Rivers in Quebec (Canada)

    Science.gov (United States)

    Biron, P.; Buffin-Belanger, T. K.; Massé, S.

    2015-12-01

    The consensus around the need for a shift in river management approaches to include more natural processes is steadily growing amongst scientists, practitioners and governmental agencies. Not only is this a sound way to increase the resilience of fluvial systems and adapt to climate change, but it will likely result in improved water quality and better aquatic habitat. This paper presents the freedom space for rivers approach, which we have developed recently in Quebec (Canada) to combine natural processes related to mobility, flooding and riparian wetland connectivity into a single index. The approach was applied to 3 contrasted rivers (de la Roche, Yamaska Sud-Est and Matane) to produce two main levels of freedom space, operating at two time scales: "short" (in geomorphic terms, i.e. management approach, but also strong inertia and resistance to change, particularly in agricultural watersheds. Our observations reveal that for such a shift in paradigm to operate, hydrogeomorphological concepts must be better understood by those in charge of managing rivers, which is currently not the case in Quebec. The role of integrating scientific knowledge in the implementation of a freedom space for rivers management scheme will be discussed based on case studies in 3 watersheds: rivière du Nord, Coaticook and Mitis/Neigette. The original approach was applied only to the main branches; however, the second phase of the project also aims to determine whether the impact of leaving more space for natural rivers to operate would be more beneficial in headwater tributaries than in higher-order reaches.

  12. A Two-Stage Approach for Medical Supplies Intermodal Transportation in Large-Scale Disaster Responses

    Science.gov (United States)

    Ruan, Junhu; Wang, Xuping; Shi, Yan

    2014-01-01

    We present a two-stage approach for the “helicopters and vehicles” intermodal transportation of medical supplies in large-scale disaster responses. In the first stage, a fuzzy-based method and its heuristic algorithm are developed to select the locations of temporary distribution centers (TDCs) and assign medical aid points (MAPs) to each TDC. In the second stage, an integer-programming model is developed to determine the delivery routes. Numerical experiments verified the effectiveness of the approach and revealed several findings: (i) More TDCs often increase the efficiency and utility of medical supplies; (ii) It is not definitely true that vehicles should load more and more medical supplies in emergency responses; (iii) The more contrasting the traveling speeds of helicopters and vehicles are, the more advantageous the intermodal transportation is. PMID:25350005

  13. A Two-Stage Approach for Medical Supplies Intermodal Transportation in Large-Scale Disaster Responses

    Directory of Open Access Journals (Sweden)

    Junhu Ruan

    2014-10-01

    Full Text Available We present a two-stage approach for the “helicopters and vehicles” intermodal transportation of medical supplies in large-scale disaster responses. In the first stage, a fuzzy-based method and its heuristic algorithm are developed to select the locations of temporary distribution centers (TDCs) and assign medical aid points (MAPs) to each TDC. In the second stage, an integer-programming model is developed to determine the delivery routes. Numerical experiments verified the effectiveness of the approach and revealed several findings: (i) More TDCs often increase the efficiency and utility of medical supplies; (ii) It is not definitely true that vehicles should load more and more medical supplies in emergency responses; (iii) The more contrasting the traveling speeds of helicopters and vehicles are, the more advantageous the intermodal transportation is.
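
    The first-stage siting problem in this record (choosing TDC locations and assigning MAPs to the nearest TDC) is at its core a facility-location problem. The sketch below is a generic brute-force illustration only, not the paper's fuzzy-based heuristic; the function name, coordinates, and Euclidean metric are all invented for the example:

```python
from itertools import combinations

def locate_tdcs(candidates, maps, p):
    """Choose p TDC sites among candidate coordinates so that the total
    Euclidean distance from each medical aid point (MAP) to its nearest
    TDC is minimized (exhaustive search, for illustration only)."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    best_sites, best_cost = None, float("inf")
    for sites in combinations(candidates, p):
        # each MAP is served by its nearest selected TDC
        cost = sum(min(dist(m, s) for s in sites) for m in maps)
        if cost < best_cost:
            best_sites, best_cost = sites, cost
    return best_sites, best_cost

candidates = [(0, 0), (5, 0), (10, 0)]
maps = [(1, 0), (4, 0), (9, 0), (11, 0)]
sites, cost = locate_tdcs(candidates, maps, p=2)
```

    Exhaustive search is only viable for tiny instances; the heuristic in the paper exists precisely because the realistic problem is too large for this.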

  14. New approach to the evaluation of skin color of pigmentary lesions using Skin Tone Color Scale.

    Science.gov (United States)

    Konishi, Natsuko; Kawada, Akira; Morimoto, Yoshinobu; Watake, Asami; Matsuda, Hiromasa; Oiso, Naoki; Kawara, Shigeru

    2007-07-01

    Objective methods of measuring skin color are needed to evaluate pigmentary lesions quantitatively. We have developed a new method of measuring skin color using a plastic bar system called the Skin Tone Color Scale based on Munsell's color space system. We have also evaluated the effectiveness of various therapies using this measurement system. Our system was designed to measure skin color in normal skin, pigmentary lesions of solar lentigo, chloasma and ephelides, and postinflammatory pigmentation. Moreover, the effectiveness of various therapies for these pigmentary lesions was evaluated. The evaluations made with this system were closely related to physician assessment. This method may be useful in measuring skin color and evaluating the effectiveness of therapies for pigmentary diseases.

  15. High-Payoff Space Transportation Design Approach with a Technology Integration Strategy

    Science.gov (United States)

    McCleskey, C. M.; Rhodes, R. E.; Chen, T.; Robinson, J.

    2011-01-01

    A general architectural design sequence is described to create a highly efficient, operable, and supportable design that achieves an affordable, repeatable, and sustainable transportation function. The paper covers the following aspects of this approach in more detail: (1) vehicle architectural concept considerations (including important strategies for greater reusability); (2) vehicle element propulsion system packaging considerations; (3) vehicle element functional definition; (4) external ground servicing and access considerations; and, (5) simplified guidance, navigation, flight control and avionics communications considerations. Additionally, a technology integration strategy is put forward that includes: (a) ground and flight test prior to production commitments; (b) parallel stage propellant storage, such as concentric-nested tanks; (c) high thrust, LOX-rich, LOX-cooled first stage earth-to-orbit main engine; (d) non-toxic, day-of-launch-loaded propellants for upper stages and in-space propulsion; (e) electric propulsion and aero stage control.

  16. A reproducing kernel Hilbert space approach for q-ball imaging.

    Science.gov (United States)

    Kaden, Enrico; Kruggel, Frithjof

    2011-11-01

    Diffusion magnetic resonance (MR) imaging has enabled us to reveal the white matter geometry in the living human brain. The Q-ball technique is widely used nowadays to recover the orientational heterogeneity of the intra-voxel fiber architecture. This article proposes to employ the Funk-Radon transform in a Hilbert space with a reproducing kernel derived from the spherical Laplace-Beltrami operator, thus generalizing previous approaches that assume a bandlimited diffusion signal. The function estimation problem is solved within a Tikhonov regularization framework, while a Gaussian process model allows for the selection of the smoothing parameter and the specification of confidence bands. Shortcomings of Q-ball imaging are discussed.

  17. High-resolution spectral analysis of unevenly spaced data using a regularization approach

    Science.gov (United States)

    Seghouani, N.

    2017-07-01

    The problem of estimating the power spectrum of unevenly spaced time series is considered. Indeed, this problem is quite common in ground-based astronomy, and the use of classical methods leads to aliased frequency detection. Moreover, the spectral resolution is limited by the observation time (Shannon-Nyquist law). We propose a regularized method to estimate the power spectrum of such irregular time series. We also show that with this approach, one can achieve high resolution. The method is described in detail and applied to simulated data as well as helioseismic velocities. A fast algorithm is also presented for the numerical implementation. The results show that accurate frequency estimation can be obtained from short, noisy and irregularly sampled signals.
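
    The regularization idea in this record can be sketched generically: fit a dense sine/cosine dictionary to the irregular samples under a Tikhonov (ridge) penalty, then read the peak of the resulting power estimate. This is a toy stand-in, not the author's algorithm; the signal, frequency grid, and penalty weight are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Irregularly sampled noisy sinusoid at 0.7 Hz over 20 s
t = np.sort(rng.uniform(0.0, 20.0, 120))
y = np.sin(2 * np.pi * 0.7 * t) + 0.1 * rng.standard_normal(t.size)

# Dense frequency grid, much finer than the classical 1/T resolution
freqs = np.linspace(0.05, 2.0, 400)
A = np.hstack([np.cos(2 * np.pi * freqs * t[:, None]),
               np.sin(2 * np.pi * freqs * t[:, None])])

# Tikhonov-regularized least squares: (A^T A + lam*I) x = A^T y
lam = 1.0
x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
power = x[:400] ** 2 + x[400:] ** 2   # per-frequency power estimate
f_peak = freqs[np.argmax(power)]      # recovered dominant frequency
```

    The ridge term keeps the overcomplete dictionary fit well-posed; without it, the normal equations for an irregularly sampled, finely gridded dictionary are severely ill-conditioned.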

  18. Analytic Semigroup Approach to Generalized Navier-Stokes Flows in Besov Spaces

    Science.gov (United States)

    Chen, Zhi-Min

    2017-12-01

    The energy dissipation of the Navier-Stokes equations is controlled by the viscous force defined by the Laplacian -Δ, while that of the generalized Navier-Stokes equations is determined by the fractional Laplacian (-Δ)^α. The existence and uniqueness problem is always solvable in a strong dissipation situation, in the sense of large α, but it becomes complicated as α decreases. In this paper, the well-posedness regarding the unique existence of small-time solutions and small-initial-data solutions is examined in critical homogeneous Besov spaces for α ≥ 1/2. An analytic semigroup approach to the understanding of the generalized Navier-Stokes equations is developed, and the well-posedness of the equations is thus examined in a manner different from earlier investigations.
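
    For reference, the fractional dissipation mentioned above is the Fourier multiplier with symbol |ξ|^{2α}, and the "critical" Besov index follows from the equations' natural rescaling (these are standard facts, not results specific to this paper):

```latex
\widehat{(-\Delta)^{\alpha} u}\,(\xi) = |\xi|^{2\alpha}\,\widehat{u}(\xi),
\qquad
u_{\lambda}(x,t) = \lambda^{2\alpha-1}\, u(\lambda x, \lambda^{2\alpha} t),
\qquad
\|u_{\lambda}(\cdot,0)\|_{\dot{B}^{s}_{p,q}} = \|u(\cdot,0)\|_{\dot{B}^{s}_{p,q}}
\;\Longleftrightarrow\;
s = \frac{d}{p} + 1 - 2\alpha .
```

    For α = 1 and d = 3 this recovers the familiar critical space \(\dot{B}^{-1+3/p}_{p,q}\) of the classical Navier-Stokes equations.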

  19. Solar pumping of solid state lasers for space mission: a novel approach

    Science.gov (United States)

    Boetti, N. G.; Lousteau, J.; Negro, D.; Mura, E.; Scarpignato, G. C.; Perrone, G.; Milanese, D.; Abrate, S.

    2017-11-01

    Solar pumped lasers (SPLs) can find wide application in space missions, especially long-lasting ones. In this paper a new technological approach for the realization of an SPL based on fiber laser technology is proposed. We present a preliminary study, focused on evaluating the performance of the active material, towards the realization of a Nd3+-doped fiber laser made of phosphate glass materials, emitting at 1.06 μm. For this research several Nd3+-doped phosphate glass samples were fabricated, with Nd3+ concentrations of up to 10 mol%. Physical and thermal properties of the glasses were measured and their spectroscopic properties are described. The effect of the Nd3+ doping concentration on emission spectra and lifetimes was investigated in order to study the concentration quenching effect on luminescence performance.

  20. Groups, matrices, and vector spaces a group theoretic approach to linear algebra

    CERN Document Server

    Carrell, James B

    2017-01-01

    This unique text provides a geometric approach to group theory and linear algebra, bringing to light the interesting ways in which these subjects interact. Requiring few prerequisites beyond understanding the notion of a proof, the text aims to give students a strong foundation in both geometry and algebra. Starting with preliminaries (relations, elementary combinatorics, and induction), the book then proceeds to the core topics: the elements of the theory of groups and fields (Lagrange's Theorem, cosets, the complex numbers and the prime fields), matrix theory and matrix groups, determinants, vector spaces, linear mappings, eigentheory and diagonalization, Jordan decomposition and normal form, normal matrices, and quadratic forms. The final two chapters consist of a more intensive look at group theory, emphasizing orbit stabilizer methods, and an introduction to linear algebraic groups, which enriches the notion of a matrix group. Applications involving symmetry groups, determinants, linear coding theory ...

  1. Truncated Hilbert Space Approach for the 1+1D phi^4 Theory

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    (an informal seminar, not a regular string seminar) We used the massive analogue of the truncated conformal space approach to study the broken phase of the 1+1 dimensional scalar phi^4 model in finite volume, similarly to the work by S. Rychkov and L. Vitale. In our work, the finite size spectrum was determined numerically using an effective eigensolver routine, which was followed by a simple extrapolation in the cutoff energy. We analyzed both the periodic and antiperiodic sectors. The results were compared with semiclassical and Bethe-Yang results as well as perturbation theory. We obtained the coupling dependence of the infinite volume breather and kink masses for moderate couplings. The results fit well with semiclassics and perturbative estimations, and confirm the conjecture of Mussardo that at most two neutral excitations can exist in the spectrum. We believe that improving our method with the renormalization procedure of Rychkov et al. makes it possible to measure further interesting quantities such as decay ra...

  2. Robust Speed Control of a Doubly Fed Induction Motor using State-Space Nonlinear Approach

    Directory of Open Access Journals (Sweden)

    Tarik MOHAMMED CHIKOUCHE

    2013-06-01

    Full Text Available This paper presents a comparison between two controllers (fuzzy logic and variable gain PI) on the one hand and the conventional PI on the other, used for speed control with indirect rotor flux orientation of a doubly fed induction motor (DFIM) fed by two PWM inverters with a separate DC bus link. By introducing a new approach for decoupling the motor’s currents in a rotating (d-q) frame, based on the state space input-output decoupling method, we obtain the same transfer function (1/s) for all four decoupled currents. Thereafter, in order to improve the performance of the machine’s control, the VPGI and fuzzy logic controllers with five subsets were used for speed regulation. The results obtained in the Matlab/Simulink environment show the effectiveness of the technique employed for the decoupling and the speed regulation of the machine.
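
    Once the currents are decoupled so that each loop sees the transfer function 1/s, controller tuning reduces to closing a loop around a pure integrator. A minimal discrete-time sketch of a conventional PI loop on that plant (the gains, step size, and horizon are illustrative choices, not values from the paper):

```python
def simulate_pi(kp, ki, ref=1.0, dt=1e-3, steps=5000):
    """Forward-Euler simulation of a PI controller driving an
    integrator plant dy/dt = u (i.e. the decoupled 1/s channel)."""
    y, integ = 0.0, 0.0
    for _ in range(steps):
        e = ref - y
        integ += e * dt
        u = kp * e + ki * integ   # PI control law
        y += u * dt               # plant: dy/dt = u
    return y

# Closed-loop poles of s^2 + kp*s + ki are both in the left half-plane,
# so the output settles on the reference with zero steady-state error.
final = simulate_pi(kp=20.0, ki=50.0)
```

    The integral term is what forces the steady-state error to zero; a pure proportional loop on 1/s would still converge, but any constant disturbance would leave a residual offset.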

  3. A multifractal approach to space-filling recovery for PET quantification

    Energy Technology Data Exchange (ETDEWEB)

    Willaime, Julien M. Y., E-mail: julien.willaime@siemens.com; Aboagye, Eric O. [Comprehensive Cancer Imaging Centre, Imperial College London, Hammersmith Hospital, London W12 0NN (United Kingdom); Tsoumpas, Charalampos [Division of Medical Physics, University of Leeds, LS2 9JT (United Kingdom); Turkheimer, Federico E. [Department of Neuroimaging, Institute of Psychiatry, King’s College London, London SE5 8AF (United Kingdom)

    2014-11-01

    Purpose: A new image-based methodology is developed for estimating the apparent space-filling properties of an object of interest in PET imaging without need for a robust segmentation step and used to recover accurate estimates of total lesion activity (TLA). Methods: A multifractal approach and the fractal dimension are proposed to recover the apparent space-filling index of a lesion (tumor volume, TV) embedded in nonzero background. A practical implementation is proposed, and the index is subsequently used with mean standardized uptake value (SUVmean) to correct TLA estimates obtained from approximate lesion contours. The methodology is illustrated on fractal and synthetic objects contaminated by partial volume effects (PVEs), validated on realistic ¹⁸F-fluorodeoxyglucose PET simulations and tested for its robustness using a clinical ¹⁸F-fluorothymidine PET test–retest dataset. Results: TLA estimates were stable for a range of resolutions typical in PET oncology (4–6 mm). By contrast, the space-filling index and intensity estimates were resolution dependent. TLA was generally recovered within 15% of ground truth on postfiltered PET images affected by PVEs. Volumes were recovered within 15% variability in the repeatability study. Results indicated that TLA is a more robust index than other traditional metrics such as SUVmean or TV measurements across imaging protocols. Conclusions: The fractal procedure reported here is proposed as a simple and effective computational alternative to existing methodologies which require the incorporation of image preprocessing steps (i.e., partial volume correction and automatic segmentation) prior to quantification.
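
    The space-filling index at the heart of this record is a fractal (box-counting) dimension. A minimal 2D sketch of the box-counting estimator, purely generic (the paper works on 3D PET volumes with a multifractal extension; the image, scales, and function name here are invented):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary image:
    count occupied boxes at several scales, then fit the log-log slope."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        # partition the image into s-by-s boxes and count the occupied ones
        boxes = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# A filled square is perfectly space-filling in 2D: dimension ~ 2
filled = np.ones((64, 64), dtype=bool)
d = box_counting_dimension(filled)
```

    For a lesion embedded in background, the same count-versus-scale slope quantifies how completely the object fills its bounding volume, which is what lets the method sidestep a hard segmentation.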

  4. Preparing Laboratory and Real-World EEG Data for Large-Scale Analysis: A Containerized Approach.

    Science.gov (United States)

    Bigdely-Shamlo, Nima; Makeig, Scott; Robbins, Kay A

    2016-01-01

    Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain-computer interface models. However, the absence of standardized vocabularies for annotating events in a machine understandable manner, the welter of collection-specific data organizations, the difficulty in moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a "containerized" approach and freely available tools we have developed to facilitate the process of annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining and (meta-)analysis. The EEG Study Schema (ESS) comprises three data "Levels," each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw (Data Level 1) data to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis of EEG studies. However, ESS can also encapsulate meta-information for other modalities, such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. ESS schema and tools are freely available at www.eegstudy.org and a central catalog of over 850 GB of existing data in ESS format is available at studycatalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org).

  5. Comparing large-scale computational approaches to epidemic modeling: Agent-based versus structured metapopulation models

    Directory of Open Access Journals (Sweden)

    Merler Stefano

    2010-06-01

    Full Text Available Abstract Background In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. Methods We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based model are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. Results The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure of the intra-population contact pattern of the approaches. 
The age

  6. Real-space protein-model completion: an inverse-kinematics approach.

    Science.gov (United States)

    van den Bedem, Henry; Lotan, Itay; Latombe, Jean Claude; Deacon, Ashley M

    2005-01-01

    Rapid protein-structure determination relies greatly on software that can automatically build a protein model into an experimental electron-density map. In favorable circumstances, various software systems are capable of building over 90% of the final model. However, completeness falls off rapidly with the resolution of the diffraction data. Manual completion of these partial models is usually feasible, but is time-consuming and prone to subjective interpretation. Except for the N- and C-termini of the chain, the end points of each missing fragment are known from the initial model. Hence, fitting fragments reduces to an inverse-kinematics problem. A method has been developed that combines fast inverse-kinematics algorithms with a real-space torsion-angle refinement procedure in a two-stage approach to fit missing main-chain fragments into the electron density between two anchor points. The first stage samples a large number of closing conformations, guided by the electron density. These candidates are ranked according to density fit. In a subsequent refinement stage, optimization steps are projected onto a carefully chosen subspace of conformation space to preserve rigid geometry and closure. Experimental results show that fitted fragments are in excellent agreement with the final refined structure for lengths of up to 12-15 residues in areas of weak or ambiguous electron density, even at medium to low resolution.
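
    Fitting a missing fragment between two fixed anchors is a loop-closure inverse-kinematics problem, as the abstract notes. A compact 2D illustration using cyclic coordinate descent, a generic IK scheme (not the authors' density-guided algorithm; the chain lengths and target are invented):

```python
import math

def ccd_close(lengths, target, iters=100):
    """Cyclic coordinate descent: rotate each joint in turn so that the
    chain's end effector approaches the target anchor point (2D toy)."""
    angles = [0.0] * len(lengths)   # relative joint angles

    def forward(angles):
        """Forward kinematics: joint positions for given joint angles."""
        pts, x, y, a = [(0.0, 0.0)], 0.0, 0.0, 0.0
        for length, da in zip(lengths, angles):
            a += da
            x += length * math.cos(a)
            y += length * math.sin(a)
            pts.append((x, y))
        return pts

    for _ in range(iters):
        for j in reversed(range(len(lengths))):
            pts = forward(angles)
            jx, jy = pts[j]          # pivot joint
            ex, ey = pts[-1]         # end effector
            # rotate joint j so the joint->effector ray points at the target
            a_eff = math.atan2(ey - jy, ex - jx)
            a_tgt = math.atan2(target[1] - jy, target[0] - jx)
            angles[j] += a_tgt - a_eff
    ex, ey = forward(angles)[-1]
    return math.hypot(ex - target[0], ey - target[1])

# Three unit-length links closing onto a reachable anchor
err = ccd_close([1.0, 1.0, 1.0], target=(1.5, 1.0))
```

    Real protein loop closure additionally constrains bond geometry and scores candidates against the electron density; the point here is only the kinematic core of the problem.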

  7. A new approach for the evaluation of the effective electrode spacing in spherical ion chambers

    Energy Technology Data Exchange (ETDEWEB)

    Maghraby, Ahmed M., E-mail: maghrabism@yahoo.com [National Institute of Standards (NIS), Ionizing Radiation Metrology Laboratory, Tersa Street 12211, Giza P.O. Box: 136 (Egypt); Shqair, Mohammed [Physics Department, Faculty of Science and Humanities, Sattam Bin Abdul Aziz University, Alkharj (Saudi Arabia)

    2016-10-21

    Proper determination of the effective electrode spacing (d_eff) of an ion chamber ensures proper determination of its collection efficiency in either continuous or pulsed radiation, in addition to the proper evaluation of the transit time. Boag's method for the determination of d_eff assumes a spherical internal electrode, which is true only in some cases; the common shape is cylindrical. The current work provides a new approach for the evaluation of the effective electrode spacing in spherical ion chambers considering the cylindrical shape of the internal electrode. Results indicated that d_eff values obtained through the current work are less than those obtained using Boag's method by factors ranging from 12.1% to 26.9%. The current method also impacts the numerically evaluated collection efficiency (f), where values obtained differ by factors of up to 3% at low potential (V) values, while at high V values minor differences were noticed. Additionally, impacts on the evaluation of the transit time (τ_i) were obtained. It is concluded that approximating the internal electrode as a sphere may result in false values of d_eff, f, and τ_i.

  8. Krylov-space approach to the equilibrium and nonequilibrium single-particle Green's function.

    Science.gov (United States)

    Balzer, Matthias; Gdaniec, Nadine; Potthoff, Michael

    2012-01-25

    The zero-temperature single-particle Green's function of correlated fermion models with moderately large Hilbert-space dimensions can be calculated by means of Krylov-space techniques. The conventional Lanczos approach consists of finding the ground state in a first step, followed by an approximation for the resolvent of the Hamiltonian in a second step. We analyze the character of this approximation and discuss a numerically exact variant of the Lanczos method which is formulated in the time domain. This method is extended to obtain the nonequilibrium single-particle Green's function defined on the Keldysh-Matsubara contour in the complex time plane which describes the system's nonperturbative response to a sudden parameter switch in the Hamiltonian. The proposed method will be important as an exact-diagonalization solver in the context of self-consistent or variational cluster-embedding schemes. For the recently developed nonequilibrium cluster-perturbation theory, we discuss its efficient implementation and demonstrate the feasibility of the Krylov-based solver. The dissipation of a strong local magnetic excitation into a non-interacting bath is considered as an example for applications.
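
    The "first step" this record mentions, finding the ground state with a Krylov-space method, can be sketched with the plain Lanczos recursion: tridiagonalize the Hamiltonian in a small Krylov basis and take the lowest Ritz value. A generic dense-matrix illustration (real many-body solvers work with sparse Hamiltonians and reorthogonalization; the matrix here is a random Hermitian stand-in):

```python
import numpy as np

def lanczos_ground_energy(H, v0, m=80):
    """Plain Lanczos: build an m-step Krylov tridiagonalization of H and
    return the smallest Ritz value (approximate ground-state energy)."""
    v_prev = np.zeros_like(v0, dtype=float)
    v = v0 / np.linalg.norm(v0)
    w = H @ v
    a = v @ w
    alpha, beta = [a], []
    w = w - a * v
    for _ in range(m - 1):
        b = np.linalg.norm(w)
        if b < 1e-12:            # Krylov space exhausted early
            break
        v_prev, v = v, w / b
        w = H @ v - b * v_prev   # three-term recurrence
        a = v @ w
        alpha.append(a)
        beta.append(b)
        w = w - a * v
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)[0]

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200))
H = (A + A.T) / 2                       # a random Hermitian "Hamiltonian"
E0 = lanczos_ground_energy(H, rng.standard_normal(200))
exact = np.linalg.eigvalsh(H)[0]        # dense reference for comparison
```

    Extremal eigenvalues converge after far fewer iterations than the Hilbert-space dimension, which is exactly what makes Lanczos viable for the moderately large many-body Hamiltonians discussed above.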

  9. Modeling solvation effects in real-space and real-time within density functional approaches

    Energy Technology Data Exchange (ETDEWEB)

    Delgado, Alain [Istituto Nanoscienze - CNR, Centro S3, via Campi 213/A, 41125 Modena (Italy); Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear, Calle 30 # 502, 11300 La Habana (Cuba); Corni, Stefano; Pittalis, Stefano; Rozzi, Carlo Andrea [Istituto Nanoscienze - CNR, Centro S3, via Campi 213/A, 41125 Modena (Italy)

    2015-10-14

    The Polarizable Continuum Model (PCM) can be used in conjunction with Density Functional Theory (DFT) and its time-dependent extension (TDDFT) to simulate the electronic and optical properties of molecules and nanoparticles immersed in a dielectric environment, typically liquid solvents. In this contribution, we develop a methodology to account for solvation effects in real-space (and real-time) (TD)DFT calculations. The boundary elements method is used to calculate the solvent reaction potential in terms of the apparent charges that spread over the van der Waals solute surface. In a real-space representation, this potential may exhibit a Coulomb singularity at grid points that are close to the cavity surface. We propose a simple approach to regularize such singularity by using a set of spherical Gaussian functions to distribute the apparent charges. We have implemented the proposed method in the OCTOPUS code and present results for the solvation free energies and solvatochromic shifts for a representative set of organic molecules in water.
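
    The regularization described above, distributing each apparent point charge over a spherical Gaussian, has a closed-form potential that is finite everywhere. A one-charge sketch of that idea (the width and charge are illustrative; this is the textbook Gaussian-smearing formula, not code from OCTOPUS):

```python
import math

def smeared_potential(q, r, sigma):
    """Potential (atomic units) of a point charge q replaced by a spherical
    Gaussian of width sigma: V(r) = q * erf(r / (sqrt(2)*sigma)) / r.
    Unlike the bare Coulomb q/r, it stays finite as r -> 0."""
    if r < 1e-12:
        # analytic limit of erf(x)/x as x -> 0
        return q * math.sqrt(2.0 / math.pi) / sigma
    return q * math.erf(r / (math.sqrt(2.0) * sigma)) / r

v_far = smeared_potential(1.0, 10.0, 0.5)   # recovers bare Coulomb 1/r
v_zero = smeared_potential(1.0, 0.0, 0.5)   # finite at the origin
```

    Far from the charge the error function saturates to 1 and the bare Coulomb tail is recovered, so only grid points near the cavity surface feel the regularization.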

  10. Dynamic analysis of heartbeat rate signals of epileptics using multidimensional phase space reconstruction approach

    Science.gov (United States)

    Su, Zhi-Yuan; Wu, Tzuyin; Yang, Po-Hua; Wang, Yeng-Tseng

    2008-04-01

    The heartbeat rate signal provides an invaluable means of assessing the sympathetic-parasympathetic balance of the human autonomic nervous system and thus represents an ideal diagnostic mechanism for detecting a variety of disorders such as epilepsy, cardiac disease and so forth. The current study analyzes the dynamics of the heartbeat rate signal of known epilepsy sufferers in order to obtain a detailed understanding of the heart rate pattern during a seizure event. In the proposed approach, the ECG signals are converted into heartbeat rate signals and the embedology theorem is then used to construct the corresponding multidimensional phase space. The dynamics of the heartbeat rate signal are then analyzed before, during and after an epileptic seizure by examining the maximum Lyapunov exponent and the correlation dimension of the attractors in the reconstructed phase space. In general, the results reveal that the heartbeat rate signal transits from an aperiodic, highly-complex behaviour before an epileptic seizure to a low dimensional chaotic motion during the seizure event. Following the seizure, the signal trajectories return to a highly-complex state, and the complex signal patterns associated with normal physiological conditions reappear.
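
    The phase-space reconstruction step in this record is the method of delays: a scalar series is turned into vectors of lagged samples, on which the Lyapunov exponent and correlation dimension are then computed. A minimal sketch of the embedding itself (the series, embedding dimension, and lag are illustrative, not the paper's settings):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct a phase-space trajectory from a scalar series by the
    method of delays: vectors (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Embedding a pure sinusoid yields a closed loop (a one-dimensional
# attractor), in contrast to the complex sets expected from heartbeat data.
t = np.linspace(0, 8 * np.pi, 800)
traj = delay_embed(np.sin(t), dim=3, tau=25)
```

    In practice the lag is chosen from the autocorrelation or mutual information of the series, and the embedding dimension from a false-nearest-neighbors test, before any invariants are estimated on the reconstructed attractor.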

  11. Ethical approach to digital skills. Sense and use in virtual educational spaces

    Directory of Open Access Journals (Sweden)

    Juan GARCÍA-GUTIÉRREZ

    2013-12-01

    Full Text Available In the context of technology and cyberspace, should we do everything we can do? The answer usually given to this question is not ethical but political: safety. Safety and security are overshadowing the ethical question about the meaning of technology. Cyberspace imposes a "new logic" and new forms of "ownership". When it comes to children and the Internet, we do not always adopt a logic of accountability toward cyberspace, treating the Internet as a merely technical rather than an ethical space. We talk about a safe Internet, a healthy Internet, an Internet fit for children... why not talk about Internet ethics? In this work we approach digital skills as those skills that help us position ourselves and find our way in cyberspace, something that is not possible without ethical skills as well. In this article we therefore build and propose a model for analyzing virtual learning spaces (and cyberspace in general) based on the categories of "use" and "sense" as different levels of ownership that indicate the types of competences needed to access cyberspace.

  12. A New Approach to Hausdorff Space Theory via the Soft Sets

    Directory of Open Access Journals (Sweden)

    Güzide Şenel

    2016-01-01

    Full Text Available The aim of this paper is to present the concept of the soft bitopological Hausdorff space (SBT Hausdorff space) as an original study. Firstly, I introduce some new concepts in soft bitopological spaces, such as the SBT point, the SBT continuous function, and the SBT homeomorphism. Secondly, I define the SBT Hausdorff space. I analyse whether an SBT space is Hausdorff or not via an SBT homeomorphism defined from an SBT Hausdorff space to the SBT space under study. I end my study by defining the SBT property and hereditary SBT via SBT homeomorphism and investigate the relations between an SBT space and an SBT subspace.

  13. A Conditional Fourier-Feynman Transform and Conditional Convolution Product with Change of Scales on a Function Space II

    Directory of Open Access Journals (Sweden)

    Dong Hyun Cho

    2017-01-01

    Full Text Available Using a simple formula for conditional expectations over continuous paths, we will evaluate conditional expectations which are types of analytic conditional Fourier-Feynman transforms and conditional convolution products of generalized cylinder functions and the functions in a Banach algebra which is the space of generalized Fourier transforms of the measures on the Borel class of L2[0,T]. We will then investigate their relationships. Particularly, we prove that the conditional transform of the conditional convolution product can be expressed by the product of the conditional transforms of each function. Finally we will establish change of scale formulas for the conditional transforms and the conditional convolution products. In these evaluation formulas and change of scale formulas, we use multivariate normal distributions so that the conditioning function does not contain present positions of the paths.

  14. Quantum harmonic Brownian motion in a general environment: A modified phase-space approach

    Energy Technology Data Exchange (ETDEWEB)

    Yeh, Leehwa [Univ. of California, Berkeley, CA (United States). Dept. of Physics

    1993-06-23

    After extensive investigations over three decades, the linear-coupling model and its equivalents have become the standard microscopic models for quantum harmonic Brownian motion, in which a harmonically bound Brownian particle is coupled to a quantum dissipative heat bath of general type modeled by infinitely many harmonic oscillators. The dynamics of these models have been studied by many authors using the quantum Langevin equation, the path-integral approach, quasi-probability distribution functions (e.g., the Wigner function), etc. However, the quantum Langevin equation is only applicable to some special problems, while other approaches all involve complicated calculations due to the inevitable reduction (i.e., contraction) operation for ignoring/eliminating the degrees of freedom of the heat bath. In this dissertation, the author proposes an improved methodology via a modified phase-space approach which employs the characteristic function (the symplectic Fourier transform of the Wigner function) as the representative of the density operator. This representative is claimed to be the most natural one for performing the reduction, not only because of its simplicity but also because of its manifestation of geometric meaning. Accordingly, it is particularly convenient for studying the time evolution of the Brownian particle with an arbitrary initial state. The power of this characteristic function is illuminated through a detailed study of several physically interesting problems, including the environment-induced damping of quantum interference, the exact quantum Fokker-Planck equations, and the relaxation of non-factorizable initial states. All derivations and calculations are shown to be much simplified in comparison with other approaches. In addition to dynamical problems, a novel derivation of the fluctuation-dissipation theorem which is valid for all quantum linear systems is presented.

  15. Design Space Approach for Preservative System Optimization of an Anti-Aging Eye Fluid Emulsion.

    Science.gov (United States)

    Lourenço, Felipe Rebello; Francisco, Fabiane Lacerda; Ferreira, Márcia Regina Spuri; Andreoli, Terezinha De Jesus; Löbenberg, Raimar; Bou-Chacra, Nádia

    2015-01-01

    The use of preservatives must be optimized in order to ensure the efficacy of an antimicrobial system as well as the product safety. Despite the wide variety of preservatives, the synergistic or antagonistic effects of their combinations are not well established, and this remains an issue in the development of pharmaceutical and cosmetic products. The purpose of this paper was to establish a design space using a simplex-centroid approach to achieve the lowest effective concentration of 3 preservatives (methylparaben, propylparaben, and imidazolidinyl urea) and EDTA for an emulsion cosmetic product. Twenty-two formulae of emulsion differing only by imidazolidinyl urea (A: 0.00 to 0.30% w/w), methylparaben (B: 0.00 to 0.20% w/w), propylparaben (C: 0.00 to 0.10% w/w) and EDTA (D: 0.00 to 0.10% w/w) concentrations were prepared. They were tested alone and in binary, ternary and quaternary combinations. Aliquots of these formulae were inoculated with several microorganisms. An electrochemical method was used to determine microbial burden immediately after inoculation and after 2, 4, 8, 12, 24, 48, and 168 h. An optimization strategy was used to obtain the concentrations of preservatives and EDTA resulting in the most effective preservative system against all microorganisms simultaneously. The use of preservatives and EDTA in combination has the advantage of exhibiting a potential synergistic effect against a wider spectrum of microorganisms. Based on graphic and optimization strategies, we proposed a new formula containing a quaternary combination (A: 55%; B: 30%; C: 5% and D: 10% w/w), which complies with the specification of a conventional challenge test. A design space approach was successfully employed in the optimization of concentrations of preservatives and EDTA in an emulsion cosmetic product.
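
    The simplex-centroid design named above has a simple combinatorial structure: one blend for every nonempty subset of the mixture components, with equal proportions within the subset. A generic generator (the paper's 22 formulae include additional runs beyond these canonical points; the component labels here just mirror the abstract's A-D naming):

```python
from itertools import combinations

def simplex_centroid(components):
    """Generate simplex-centroid design points: for every nonempty subset
    of components, a blend with equal shares of the subset members."""
    runs = []
    for k in range(1, len(components) + 1):
        for subset in combinations(components, k):
            runs.append({c: (1.0 / k if c in subset else 0.0)
                         for c in components})
    return runs

# Four mixture components (e.g. three preservatives plus EDTA)
design = simplex_centroid(["A", "B", "C", "D"])   # 2^4 - 1 = 15 runs
```

    Each run's proportions sum to 1 by construction, which is the defining constraint of a mixture design and the reason responses are modeled on the simplex rather than a full factorial cube.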

  16. Combining participatory and socioeconomic approaches to map fishing effort in small-scale fisheries.

    Directory of Open Access Journals (Sweden)

    Lauric Thiault

Full Text Available Mapping the spatial allocation of fishing effort while including key stakeholders in the decision-making process is essential for effective fisheries management but is difficult to implement in complex small-scale fisheries that are diffuse, informal and multifaceted. Here we present a standardized but flexible approach that combines participatory mapping approaches (fishers' spatial preference for fishing grounds, or fishing suitability) with socioeconomic approaches (spatial extrapolation of social surrogates, or fishing capacity) to generate a comprehensive map of predicted fishing effort. Using a real-world case study in Moorea, French Polynesia, we showed that high predicted fishing effort is not simply located in front of, or close to, main fishing villages with high dependence on marine resources; it also occurs where resource dependency is moderate, and generally in near-shore areas and reef passages. The integrated approach we developed can contribute to addressing the recurrent lack of spatial fishing effort data through the participation of key stakeholders (i.e., resource users). It can be tailored to a wide range of social, ecological and data availability contexts, and should help improve place-based management of natural resources.

  17. Evaluation of wind-induced internal pressure in low-rise buildings: A multi scale experimental and numerical approach

    Science.gov (United States)

    Tecle, Amanuel Sebhatu

Hurricanes are among the most destructive and costly natural hazards to the built environment, and their impact on low-rise buildings in particular is beyond acceptable. The major objective of this research was to perform a parametric evaluation of internal pressure (IP) for wind-resistant design of low-rise buildings and wind-driven natural ventilation applications. For this purpose, a multi-scale experimental approach, i.e., full-scale testing at the Wall of Wind (WoW) and small-scale testing at a Boundary Layer Wind Tunnel (BLWT), combined with a Computational Fluid Dynamics (CFD) approach, was adopted. This provided a new capability to assess wind pressures realistically on internal volumes ranging from the small spaces formed between roof tiles and their deck, to attics, to room partitions. Effects of sudden breaching, existing dominant openings on building envelopes, as well as compartmentalization of the building interior on the IP were systematically investigated. Results of this research indicated: (i) for sudden breaching of dominant openings, the transient overshooting response was lower than the subsequent steady-state peak IP, and internal volume correction for low-wind-speed testing facilities was necessary; for example, a building without volume correction experienced a response four times faster and exhibited 30-40% lower mean and peak IP; (ii) for existing openings, vent openings uniformly distributed along the roof alleviated, whereas one-sided openings aggravated, the IP; (iii) larger dominant openings exhibited a higher IP on the building envelope, and an off-center opening on the wall exhibited (30-40%) higher IP than center-located openings; (iv) compartmentalization amplified the intensity of IP; and (v) significant underneath pressure was measured for field tiles, warranting its consideration during net pressure evaluations. The study aimed at wind-driven natural ventilation indicated: (i) the IP due to cross ventilation was 1.5 to 2.5 times higher for Ainlet/Aoutlet>1 compared to cases where Ainlet

  18. Rotating Space Elevator: Classical and Statistical Mechanics of cosmic scale spinning strings

    Science.gov (United States)

    Knudsen, Steven; Golubovic, Leonardo

    2009-03-01

We introduce a novel and unique nonlinear dynamical system, the Rotating Space Elevator (RSE). The RSE is a multiply rotating system of cables (strings) reaching beyond the Earth's geosynchronous satellite orbit. Strikingly, objects sliding along the RSE cable do not require internal engines or propulsion to be transported far away from the Earth's surface. The RSE action employs, in a very fundamental way, basic natural phenomena -- gravitation and inertial forces. The RSE exhibits interesting nonlinear dynamics and statistical physics phenomena. Its kinetic phase diagram involves both chaotic and quasi-periodic states of motion separated by a morphological phase transition that occurs as the RSE angular frequency is varied.

  19. Atomic-Scale Time and Space Resolution of Terahertz Frequency Acoustic Waves

    Science.gov (United States)

    Reed, Evan J.; Armstrong, Michael R.; Kim, Ki-Yong; Glownia, James H.

    2008-07-01

    Using molecular dynamics simulations and analytics, we find that strain waves of terahertz frequencies can coherently generate radiation when they propagate past an interface between materials with different piezoelectric coefficients. By considering AlN/GaN heterostructures, we show that the radiation is of detectable amplitude and contains sufficient information to determine the time dependence of the strain wave with potentially subpicosecond, nearly atomic time and space resolution. We demonstrate this phenomenon within the context of high amplitude terahertz frequency strain waves that spontaneously form at the front of shock waves in GaN crystals.

  20. Prediction of scaling physics laws for proton acceleration with extended parameter space of the NIF ARC

    Science.gov (United States)

    Bhutwala, Krish; Beg, Farhat; Mariscal, Derek; Wilks, Scott; Ma, Tammy

    2017-10-01

The Advanced Radiographic Capability (ARC) laser at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory is the world's most energetic short-pulse laser. It comprises four beamlets, each of substantial energy (~1.5 kJ), extended short-pulse duration (10-30 ps), and large focal spot (>=50% of energy in a 150 µm spot). This allows ARC to achieve proton and light-ion acceleration via the Target Normal Sheath Acceleration (TNSA) mechanism, but it is not yet known how proton beam characteristics scale with ARC-regime laser parameters. As theory has also not yet been validated for laser-generated protons at ARC-regime laser parameters, we attempt to formulate the scaling physics of proton beam characteristics as a function of laser energy, intensity, focal spot size, pulse length, target geometry, etc. through a review of relevant proton acceleration experiments from laser facilities across the world. These predicted scaling laws should then guide target design and future diagnostics for desired proton beam experiments on the NIF ARC. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and funded by the LLNL LDRD program under tracking code 17-ERD-039.

  1. APPROACHING REGIONAL DEVELOPMENT STRATEGIES FROM THE PERSPECTIVE OF THE SYSTEMIC VIEW ON THE LOCAL SPACE

    Directory of Open Access Journals (Sweden)

    Doru BOTEZAT

    2014-10-01

Full Text Available After the first theories of local development (congestion and scale effects), the contemporary socio-economic sciences have refined the concept and added to the equation of regional development a more complex variable: the local system seen as a whole. To be more precise, with this method the analysis regards not only the regional variables as factors which determine or influence a certain trajectory of development (as in the Marshall case), but the variable becomes the local system itself, characterized by unity and its own dynamic. Economics owes this systemic approach to a more general philosophy which emerged within the social sciences. This is why, in order to understand the concept of the local system from the economic point of view, we need to situate it in the wider profile of the systemic approach within the social sciences. This paper attempts to connect the arguments proposed by systemic analysis, a fashionable topic in the mainstream of the 1980s, with the contemporary strategic approach to regional development.

  2. Multi-time scale stream flow predictions: The support vector machines approach

    Science.gov (United States)

    Asefa, Tirusew; Kemblowski, Mariush; McKee, Mac; Khalil, Abedalrazq

    2006-03-01

Effective lead-time stream flow forecasting is one of the key aspects of successful water resources management in arid regions. In this research, we present new data-driven models based on Statistical Learning Theory that were used to forecast flows at two time scales: seasonal flow volumes and hourly stream flows. The models, known as Support Vector Machines, are learning systems that use a hypothesis space of linear functions in a kernel-induced higher-dimensional feature space, and are trained with a learning algorithm from optimization theory. They are based on a principle that aims at minimizing the generalized model error (risk), rather than just the mean square error over a training set. Due to Mercer's condition on the kernels, the corresponding optimization problems are convex and hence have no local minima. Empirical results from these models showed promising performance in solving site-specific, real-time water resources management problems. Stream flow was forecasted using local climatological data, requiring far less input than physical models. In addition, seasonal flow volume predictions were improved by incorporating atmospheric circulation indicators. Specifically, use of the North Pacific Sea Surface Temperature Anomalies (SSTA) improved flow volume predictions.
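
The kernel idea behind these models can be illustrated without a full SVM solver. A minimal sketch, assuming hypothetical data (an SSTA-like predictor and seasonal flow volumes): a Nadaraya-Watson regressor with an RBF kernel, a stand-in for the kernel-induced feature space the abstract describes rather than the authors' actual Support Vector Machines:

```python
import math

def rbf(x, xi, gamma=0.5):
    """Radial basis function (Gaussian) kernel between two scalars."""
    return math.exp(-gamma * (x - xi) ** 2)

def kernel_predict(x, xs, ys, gamma=0.5):
    """Nadaraya-Watson kernel regression: a kernel-weighted average of
    training targets (illustrative stand-in for SVM regression)."""
    w = [rbf(x, xi, gamma) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

# Hypothetical seasonal flow volumes indexed by an SSTA-like predictor
ssta = [-1.0, 0.0, 1.0]
flow = [120.0, 100.0, 80.0]
print(round(kernel_predict(0.0, ssta, flow), 1))  # 100.0 by symmetry
```

Predictions interpolate smoothly between observed seasons; the kernel width `gamma` plays the role of the model's capacity control.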

  3. The ESI scale, an ethical approach to the evaluation of seismic hazards

    Science.gov (United States)

    Porfido, Sabina; Nappi, Rosa; De Lucia, Maddalena; Gaudiosi, Germana; Alessio, Giuliana; Guerrieri, Luca

    2015-04-01

The dissemination of correct information about seismic hazard is an ethical duty of the scientific community worldwide. A proper assessment of an earthquake's severity and impact should not ignore the evaluation of its intensity, taking into account the effects on humans and man-made structures as well as on the natural environment. We illustrate the new macroseismic scale that measures intensity taking into account the effects of earthquakes on the environment: the ESI 2007 (Environmental Seismic Intensity) scale (Michetti et al., 2007), ratified by INQUA (International Union for Quaternary Research) during the XVII Congress in Cairns (Australia). The ESI scale integrates and completes the traditional macroseismic scales, of which it represents the evolution, allowing assessment of the intensity parameter also where buildings are absent or damage-based diagnostic elements saturate. Each degree reflects the corresponding strength of an earthquake and the role of ground effects, evaluating the intensity on the basis of the characteristics and size of primary effects (e.g. surface faulting and tectonic uplift/subsidence) and secondary effects (e.g. ground cracks, slope movements, liquefaction phenomena, hydrological changes, anomalous waves, tsunamis, tree shaking, dust clouds and jumping stones). This approach can be considered "ethical" because it helps to define the real scenario of an earthquake, regardless of a country's socio-economic conditions and level of development. Here lies the value and relevance of macroseismic scales even today, one hundred years after the death of Giuseppe Mercalli, who conceived the homonymous scale for the evaluation of earthquake intensity. For an appropriate mitigation strategy in seismic areas, it is fundamental to consider the role played by seismically induced ground effects, such as active faults (size in length and displacement) and secondary effects (the total area affected). With these perspectives, two different cases

  4. Multi-scale approach in numerical reservoir simulation; Uma abordagem multiescala na simulacao numerica de reservatorios

    Energy Technology Data Exchange (ETDEWEB)

    Guedes, Solange da Silva

    1998-07-01

Advances in petroleum reservoir descriptions have provided an amount of data that cannot be handled directly during numerical simulations. This detailed geological information must be incorporated into a coarser model during multiphase fluid flow simulations by means of some upscaling technique. The most common approach is the use of pseudo relative permeabilities, the most widely used method being that of Kyte and Berry (1975). In this work, a multi-scale computational model for multiphase flow is proposed that implicitly treats the upscaling without using pseudo functions. By solving a sequence of local problems on subdomains of the refined scale, it is possible to achieve results with a coarser grid without the expensive computations of a fine-grid model. The main advantage of this new procedure is that it treats the upscaling step implicitly in the solution process, overcoming some practical difficulties related to the use of traditional pseudo functions. Results of two-dimensional two-phase flow simulations considering homogeneous porous media are presented. Some examples compare the results of this approach with those of the commercial upscaling program PSEUDO, a module of the reservoir simulation software ECLIPSE. (author)

  5. Evaluation of low impact development approach for mitigating flood inundation at a watershed scale in China.

    Science.gov (United States)

    Hu, Maochuan; Sayama, Takahiro; Zhang, Xingqi; Tanaka, Kenji; Takara, Kaoru; Yang, Hong

    2017-05-15

Low impact development (LID) has attracted growing attention as an important approach for urban flood mitigation. Most studies evaluating LID performance for mitigating floods focus on the changes of peak flow and runoff volume. This paper assessed the performance of LID practices for mitigating flood inundation hazards as retrofitting technologies in an urbanized watershed in Nanjing, China. The findings indicate that LID practices are effective for flood inundation mitigation at the watershed scale, and especially for reducing inundated areas with a high flood hazard risk. Various scenarios of LID implementation levels can reduce total inundated areas by 2%-17% and areas with a high flood hazard level by 6%-80%. Permeable pavement shows better performance than rainwater harvesting in mitigating urban waterlogging. The most efficient scenario combined rainwater harvesting on rooftops with a cistern capacity of 78.5 mm and permeable pavement installed on 75% of non-busy roads and other impervious surfaces. Inundation modeling is an effective approach to obtaining the information necessary to guide decision-making for designing LID practices at watershed scales. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.

    Science.gov (United States)

    Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang

    2016-04-01

Graph-based ranking has been extensively studied and frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all of this information to improve ranking performance has become a new and challenging problem. Previous methods utilize only part of such information and attempt to rank graph nodes according to link-based methods, whose ranking performance is severely affected by several well-known issues, e.g., over-fitting or high computational complexity, especially when the scale of the graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit the rich heterogeneous information of the graph to improve ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach to parameterize the derived information within a unified semi-supervised learning framework (SSLF-GR), and then simultaneously optimize the parameters and the ranking scores of graph nodes. Experiments on real-world large-scale graphs demonstrate that our method significantly outperforms algorithms that consider such graph information only partially.
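
The link-based baseline this abstract builds on is classic PageRank, computable by power iteration. A minimal sketch (plain PageRank only, not the paper's semi-supervised SSP method; the toy graph is hypothetical):

```python
def pagerank(adj, d=0.85, tol=1e-10, max_iter=1000):
    """Power-iteration PageRank on an adjacency list {node: [out-neighbors]}."""
    nodes = list(adj)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(max_iter):
        new = {u: (1 - d) / n for u in nodes}
        for u, outs in adj.items():
            if outs:
                share = d * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:  # dangling node: spread its rank uniformly
                for v in nodes:
                    new[v] += d * rank[u] / n
        if sum(abs(new[u] - rank[u]) for u in nodes) < tol:
            rank = new
            break
        rank = new
    return rank

# On a symmetric 3-cycle every node ends up with rank 1/3
r = pagerank({"a": ["b"], "b": ["c"], "c": ["a"]})
```

Methods like SSP replace the fixed teleport and edge weights here with learned, feature-dependent parameters.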

  7. A Modeling Approach Across Length Scales for Progressive Failure Analysis of Woven Composites

    Science.gov (United States)

    Mao, J. Z.; Sun, X. S.; Ridha, M.; Tan, V. B. C.; Tay, T. E.

    2013-06-01

This paper presents a multiscale modeling approach for the progressive failure analysis of carbon-fiber-reinforced woven composite materials. Hierarchical models of woven composites at three different length scales (micro, meso, and macro) were developed according to their unique geometrical and material characteristics. A novel strategy of two-way information transfer is developed for the multiscale analysis of woven composites. In this strategy, the macroscopic effective material properties are obtained from property homogenizations at the micro and meso scales, and the stresses at the three length scales are computed with a stress amplification method from macroscale to microscale. By means of the two-way information transfer, the micro, meso and macro structural characterizations of composites are carried out so that the micromechanisms of damage and their interactions are successfully investigated in a single macro model. In addition, both the nucleation and growth of damage are tracked during the progressive failure analysis. A continuum damage mechanics (CDM) method is used for post-failure modeling. The material stiffness, tensile strength and damage patterns of an open-hole woven composite laminate are predicted with the proposed multiscale method. The predictions are in good agreement with the experimental results.
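
The simplest instance of the micro-scale property homogenization the abstract mentions is the rule of mixtures: Voigt (iso-strain) and Reuss (iso-stress) bounds on the effective modulus of a fiber/matrix blend. A sketch with hypothetical carbon fiber and epoxy values (the paper's homogenization of woven architectures is far more detailed):

```python
def voigt(Ef, Em, vf):
    """Upper (iso-strain) bound on the effective modulus: E = vf*Ef + (1-vf)*Em."""
    return vf * Ef + (1 - vf) * Em

def reuss(Ef, Em, vf):
    """Lower (iso-stress) bound: 1/E = vf/Ef + (1-vf)/Em."""
    return 1.0 / (vf / Ef + (1 - vf) / Em)

# Hypothetical fiber/matrix moduli (GPa) at 60% fiber volume fraction
E_upper = voigt(230.0, 3.5, 0.6)   # ~139.4 GPa, along the fibers
E_lower = reuss(230.0, 3.5, 0.6)   # ~8.6 GPa, transverse direction
```

Any physically admissible homogenized modulus must fall between these two bounds, which is a quick sanity check on more elaborate multiscale schemes.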

  8. Genome-scale modeling of human metabolism - a systems biology approach.

    Science.gov (United States)

    Mardinoglu, Adil; Gatto, Francesco; Nielsen, Jens

    2013-09-01

    Altered metabolism is linked to the appearance of various human diseases and a better understanding of disease-associated metabolic changes may lead to the identification of novel prognostic biomarkers and the development of new therapies. Genome-scale metabolic models (GEMs) have been employed for studying human metabolism in a systematic manner, as well as for understanding complex human diseases. In the past decade, such metabolic models - one of the fundamental aspects of systems biology - have started contributing to the understanding of the mechanistic relationship between genotype and phenotype. In this review, we focus on the construction of the Human Metabolic Reaction database, the generation of healthy cell type- and cancer-specific GEMs using different procedures, and the potential applications of these developments in the study of human metabolism and in the identification of metabolic changes associated with various disorders. We further examine how in silico genome-scale reconstructions can be employed to simulate metabolic flux distributions and how high-throughput omics data can be analyzed in a context-dependent fashion. Insights yielded from this mechanistic modeling approach can be used for identifying new therapeutic agents and drug targets as well as for the discovery of novel biomarkers. Finally, recent advancements in genome-scale modeling and the future challenge of developing a model of whole-body metabolism are presented. The emergent contribution of GEMs to personalized and translational medicine is also discussed. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
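
The flux simulations this abstract refers to are typically done by flux balance analysis: maximize a biomass objective subject to the steady-state constraint S·v = 0 and flux bounds, which is a linear program. A minimal sketch on a hypothetical two-reaction toy network (real GEMs have thousands of reactions and are usually handled with dedicated tools rather than raw `linprog`):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A -> biomass.
# Stoichiometric matrix S (rows: metabolites, columns: [v_uptake, v_biomass]).
S = np.array([[1.0, -1.0]])      # balance of metabolite A
bounds = [(0, 10), (0, 1000)]    # uptake capped at 10 flux units
c = [0.0, -1.0]                  # linprog minimizes, so negate biomass flux

res = linprog(c, A_eq=S, b_eq=[0.0], bounds=bounds, method="highs")
print(res.x)  # biomass flux is limited by the uptake cap
```

At the optimum the biomass flux equals the uptake bound (10), illustrating how bounds derived from omics data shape the predicted flux distribution.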

  9. An Integrated Assessment Approach to Address Artisanal and Small-Scale Gold Mining in Ghana

    Directory of Open Access Journals (Sweden)

    Niladri Basu

    2015-09-01

Full Text Available Artisanal and small-scale gold mining (ASGM) is growing in many regions of the world, including Ghana. The problems in these communities are complex and multi-faceted. To help increase understanding of such problems, to enable consensus-building and effective translation of scientific findings to stakeholders, to help inform policies, and ultimately to improve decision making, we utilized an Integrated Assessment approach to study artisanal and small-scale gold mining activities in Ghana. Though Integrated Assessments have been used in the fields of environmental science and sustainable development, their use in addressing specific matters in public health, and in particular environmental and occupational health, is quite limited despite their many benefits. The aim of the current paper was to describe the specific activities undertaken, how they were organized, and the outputs and outcomes of our activity. In brief, three disciplinary workgroups (Natural Sciences, Human Health, Social Sciences and Economics) were formed, with 26 researchers from a range of Ghanaian institutions plus international experts. The workgroups conducted activities in order to address the following question: What are the causes, consequences and correctives of small-scale gold mining in Ghana? More specifically: What alternatives are available in resource-limited settings in Ghana that allow gold mining to occur in a manner that maintains ecological health and human health without hindering near- and long-term economic prosperity? Several response options were identified and evaluated, and are currently being disseminated to various stakeholders within Ghana and internationally.

  10. Wine consumers’ preferences in Spain: an analysis using the best-worst scaling approach

    Directory of Open Access Journals (Sweden)

    Tiziana de-Magistris

    2014-06-01

Full Text Available Research on wine consumers' preferences has been extensively explored in the academic literature, and the importance of wine attributes has been measured by rating or ranking scales. However, the most recent literature on wine preferences has applied the best-worst scaling approach to avoid the biased outcomes derived from using rating or ranking scales in surveys. This study investigates premium red wine consumers' preferences in Spain by applying the best-worst scaling approach. To achieve this goal, a random parameter logit model is applied to assess the impacts of wine attributes on the probability of choosing premium-quality red wine, using data from an ad-hoc survey conducted in a medium-sized Spanish city. The results suggest that some wine attributes related to past experience (i.e., it matches food), followed by some related to personal knowledge (i.e., the designation of origin), are valued as the most important, whereas other attributes related to the image of the New World (i.e., label or brand name) are perceived as the least important or indifferent.
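
Before fitting a discrete-choice model like the paper's random parameter logit, best-worst data are often summarized with simple count-based scores: for each attribute, the number of times it was chosen as best minus the number of times it was chosen as worst. A sketch with hypothetical choice tasks (the attribute names echo the abstract; the counts are invented):

```python
from collections import Counter

def bw_scores(choices):
    """Count-based best-worst scores: (#times best - #times worst) per attribute."""
    best = Counter(c["best"] for c in choices)
    worst = Counter(c["worst"] for c in choices)
    attrs = set(best) | set(worst)
    return {a: best[a] - worst[a] for a in attrs}

# Hypothetical choice tasks over wine attributes
choices = [
    {"best": "matches food", "worst": "label"},
    {"best": "designation of origin", "worst": "label"},
    {"best": "matches food", "worst": "brand name"},
]
print(bw_scores(choices))
```

Higher scores indicate attributes respondents consistently preferred; the logit model then turns such data into utility estimates with confidence intervals.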

  11. A multi-objective stochastic approach to combinatorial technology space exploration

    Science.gov (United States)

    Patel, Chirag B.

Historically, aerospace development programs have frequently been marked by performance shortfalls, cost growth, and schedule slippage. New technologies included in systems are considered to be one of the major sources of this programmatic risk. Decisions regarding the choice of technologies to include in a design are therefore crucial for a successful development program. This problem of technology selection is a challenging exercise in multi-objective decision making. The complexity of this selection problem is compounded by the geometric growth of the combinatorial space with the number of technologies being considered and the uncertainties inherent in the knowledge of the technological attributes. These problems are not typically addressed in the selection methods employed in common practice. Consequently, a method is desired to aid the selection of technologies for complex systems design with consideration of the combinatorial complexity, multi-dimensionality, and the presence of uncertainties. Several categories of techniques are explored to address the shortcomings of current approaches and to realize the goal of an efficient and effective combinatorial technology space exploration method. For the multi-objective decision making, a posteriori preference articulation is implemented. To realize this, a stochastic algorithm for Pareto optimization is formulated based on the concepts of SPEA2. Techniques to address the uncertain nature of technology impact on the system are also examined. Monte Carlo simulations using the surrogate models are used for uncertainty quantification. The concepts of graph theory are used for modeling and analyzing compatibility constraints among technologies and assessing their impact on the technology combinatorial space. The overall decision making approach is enabled by the application of an uncertainty quantification technique under the framework of an efficient probabilistic Pareto optimization algorithm. As a result, multiple
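
The core of a posteriori multi-objective selection is extracting the non-dominated (Pareto) set from candidate solutions. A minimal sketch for minimization objectives, with hypothetical (cost, risk) pairs (this is the basic dominance filter, not the SPEA2-based stochastic algorithm the work develops):

```python
def pareto_front(points):
    """Return the non-dominated subset for minimization objectives.
    A point is dominated if another point is <= in every objective
    and strictly better in at least one."""
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (cost, risk) outcomes for four technology combinations
techs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(pareto_front(techs))  # (3.0, 4.0) is dominated by (2.0, 3.0)
```

The decision maker then articulates preferences over the surviving trade-off set rather than weighting objectives up front.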

  12. Quantifying Aeolian Flow-Landform Interactions Using Novel Lab-Scale Experimental Approaches

    Science.gov (United States)

    Christensen, K. T.; Bristow, N.; Hamed, A. M.; Kim, T.; Blois, G.; Best, J.

    2016-12-01

Aeolian transport processes are driven by coupled interactions of flow with complex and dynamic topography. The complexity of this coupling inhibits predictions of sediment transport, landscape morphodynamics and concomitant biophysical and geochemical processes. Many of these flows occur in conditions and/or at scales that limit or completely impede access via modern flow diagnostics, due to geometry and/or the coexistence of multiple phases. Given the broad range of scales of such flows, modeling at small scales is required to enable predictive simulations. It is at these scales where experiments can inform model development that accurately reflects the physics of such processes to yield reliable system-scale predictions. This lecture will highlight two specific laboratory studies: turbulent flow associated with interacting barchan dunes, and the flow overlying a model crater representative of those observed on Mars. The evolution of and dynamics associated with barchan dunes involve a strong degree of coupling between sediment transport, morphological change, and flow, the last of which represents the weakest link in our understanding of barchan morphodynamics. Newly available morphological data from high-resolution images acquired by orbiting NASA spacecraft, complemented by on-site observations, are supporting the paleoscientific reconstruction of Mars environmental conditions. Central to this goal is understanding the geomorphology of Mars craters, including the morphological processes that control the evolution of those that host a central mound. The 3D nature of both landforms presents challenges for measuring the full flow field. We therefore utilize a novel refractive index matching (RIM) approach coupled with particle-image velocimetry (PIV) methods to fully interrogate the flow around fixed barchan dune models in tandem and a crater model formed from a DEM of Gale Crater. The barchan and crater models were fabricated from acrylic whose refractive index matches the

  13. Jupiter's auroras during the Juno approach phase as observed by the Hubble Space Telescope

    Science.gov (United States)

    Nichols, J. D.; Clarke, J. T.; Orton, G. S.; Cowley, S. W. H.; Bunce, E. J.; Stallard, T.; Badman, S. V.; Grodent, D. C.; Bonfond, B.; Radioti, K.; Gerard, J. C. M. C.; Gladstone, R.; Bagenal, F.; Connerney, J. E. P.; Valek, P. W.; Ebert, R. W.; McComas, D. J.; Mauk, B.; Clark, G. B.; Kurth, W. S.; Yoshikawa, I.; Kimura, T.; Fujimoto, M.; Tao, C.; Bolton, S. J.

    2016-12-01

We present movies of Hubble Space Telescope (HST) observations of Jupiter's FUV auroras obtained during the Juno approach phase and first capture orbit, and compare them with Juno observations of the interplanetary medium near Jupiter and inside the magnetosphere. Jupiter's FUV auroras indicate the nature of the dynamic processes occurring in Jupiter's magnetosphere, and the approach phase provided a unique opportunity to obtain a full set of interplanetary data near Jupiter at the time of a program of HST observations, along with the first HST observations simultaneous with Juno measurements inside the magnetosphere. The overall goal was to determine the nature of the solar wind effect on Jupiter's magnetosphere. HST observations were obtained with typically one orbit per day over three intervals: 16 May - 7 June, 22-30 June and 11-18 July, i.e. while Juno was in the solar wind, around the bow shock and magnetosphere crossings, and in the mid-latitude middle-outer magnetosphere. We show that these intervals are characterised by particularly dynamic polar auroras and significant variations in the auroral power output caused by, e.g., dawn storms, intense main emission and poleward forms. We compare the variation of these features with Juno observations of interplanetary compression regions and the magnetospheric environment during the intervals of these observations.

  14. A Bayesian state-space approach for damage detection and classification

    Science.gov (United States)

    Dzunic, Zoran; Chen, Justin G.; Mobahi, Hossein; Büyüköztürk, Oral; Fisher, John W.

    2017-11-01

The problem of automatic damage detection in civil structures is complex and requires a system that can interpret collected sensor data into meaningful information. We apply our recently developed switching Bayesian model for dependency analysis to the problems of damage detection and classification. The model relies on a state-space approach that accounts for noisy measurement processes and missing data, and also infers the statistical temporal dependency between measurement locations, signifying the potential flow of information within the structure. A Gibbs sampling algorithm is used to simultaneously infer the latent states, the parameters of the state dynamics, the dependence graph, and any changes in behavior. By employing a fully Bayesian approach, we are able to characterize uncertainty in these variables via their posterior distribution and provide probabilistic estimates of the occurrence of damage or of a specific damage scenario. We also implement a single-class classification method, which is more realistic for most real-world situations, where training data for a damaged structure are not available. We demonstrate the methodology with experimental test data from a laboratory model structure and accelerometer data from a real-world structure under different environmental and excitation conditions.
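
The state-space idea behind such damage detectors can be illustrated with the simplest possible filter: a scalar random-walk Kalman filter whose innovations (measurement minus prediction) spike when the structure's response shifts abruptly. A sketch with a hypothetical sensor trace and threshold (the paper's switching Bayesian model with Gibbs sampling is far richer than this):

```python
def kalman_innovations(zs, q=1e-3, r=0.1):
    """Scalar random-walk Kalman filter; returns the innovation sequence,
    which change-detection schemes monitor for anomalies."""
    x, p = zs[0], 1.0
    innovations = []
    for z in zs:
        p = p + q                  # predict step (process noise q)
        k = p / (p + r)            # Kalman gain (measurement noise r)
        innovations.append(z - x)  # innovation before the update
        x = x + k * (z - x)        # update state estimate
        p = (1 - k) * p
    return innovations

# Hypothetical trace: steady response, then an abrupt shift ("damage")
zs = [0.0] * 20 + [5.0] * 10
innov = kalman_innovations(zs)
flagged = [i for i, v in enumerate(innov) if abs(v) > 1.0]
print(flagged[0])  # first flag lands at the shift, sample 20
```

A fully Bayesian treatment replaces the fixed threshold with a posterior probability of a regime change at each time step.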

  15. "Non-cold" dark matter at small scales: a general approach

    Science.gov (United States)

    Murgia, R.; Merle, A.; Viel, M.; Totzauer, M.; Schneider, A.

    2017-11-01

Structure formation at small cosmological scales provides an important frontier for dark matter (DM) research. Scenarios with small DM particle masses, large momenta or hidden interactions tend to suppress the gravitational clustering at small scales. The details of this suppression depend on the DM particle nature, allowing for a direct link between DM models and astrophysical observations. However, most of the astrophysical constraints obtained so far refer to a very specific shape of the power suppression, corresponding to thermal warm dark matter (WDM), i.e., candidates with a Fermi-Dirac or Bose-Einstein momentum distribution. In this work we introduce a new analytical fitting formula for the power spectrum, which is simple yet flexible enough to reproduce the clustering signal of large classes of non-thermal DM models, which are not at all adequately described by the oversimplified notion of WDM. We show that the formula is able to fully cover the parameter space of sterile neutrinos (whether resonantly produced or from particle decay), mixed cold and warm models, fuzzy dark matter, as well as other models suggested by the effective theory of structure formation (ETHOS). Based on this fitting formula, we perform a large suite of N-body simulations and extract important nonlinear statistics, such as the matter power spectrum and the halo mass function. Finally, we present first preliminary astrophysical constraints, based on linear theory, from both the number of Milky Way satellites and the Lyman-α forest. This paper is a first step towards a general and comprehensive modeling of small-scale departures from the standard cold DM model.
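
A common way to parametrize the small-scale suppression the abstract discusses is a transfer function of the form T(k) = [1 + (αk)^β]^γ relative to cold DM, with the standard thermal-WDM fit recovered for β = 2ν, γ = -5/ν, ν = 1.12 (Viel et al. 2005 form). A sketch; the α value is illustrative (it depends on the particle mass), and the paper's exact conventions may differ:

```python
def transfer(k, alpha, beta, gamma):
    """Generic suppression of sqrt(P_model/P_CDM): T(k) = [1 + (alpha*k)**beta]**gamma."""
    return (1.0 + (alpha * k) ** beta) ** gamma

def wdm_transfer(k, alpha, nu=1.12):
    """Thermal-WDM special case of the generic form above."""
    return transfer(k, alpha, 2.0 * nu, -5.0 / nu)

# Illustrative breaking scale alpha in Mpc/h; suppression grows with k
alpha = 0.05
for k in (0.1, 1.0, 10.0, 100.0):
    print(k, wdm_transfer(k, alpha))
```

Freeing β and γ (rather than fixing them to the thermal values) is what lets a single formula span sterile neutrinos, mixed models, fuzzy DM, and ETHOS-like scenarios.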

  16. Monitoring Urbanization Processes from Space: Using Landsat Imagery to Detect Built-Up Areas at Scale

    Science.gov (United States)

    Goldblatt, R.; You, W.; Hanson, G.; Khandelwal, A. K.

    2016-12-01

    Urbanization is one of the most fundamental trends of the past two centuries and a key force shaping almost all dimensions of modern society. Monitoring the spatial extent of cities and their dynamics by means of remote sensing methods is crucial for many research domains, as well as for city and regional planning and for policy making. Yet the majority of urban research is done at small scales, due in part to computational limitations. With the increasing availability of parallel computing platforms with large storage capacities, such as Google Earth Engine (GEE), researchers can scale up the spatial and temporal units of analysis and investigate urbanization processes over larger areas and longer periods of time. In this study we present a methodology designed to capture temporal changes in the spatial extent of urban areas at the national level. We utilize a large-scale ground-truth dataset containing examples of "built-up" and "not built-up" areas from across India. This dataset, which was collected based on 2016 high-resolution imagery, is used for supervised pixel-based image classification in GEE. We assess different types of classifiers and inputs and demonstrate that with Landsat 8 as the classifier's input, Random Forest achieves a high accuracy rate of around 87%. Although performance with Landsat 8 as the input exceeds that of Landsat 7, with the addition of several per-pixel computed indices to Landsat 7 - NDVI, NDBI, MNDWI and SAVI - the classifier's sensitivity improves by around 10%. We use Landsat 7 to detect temporal changes in the extent of urban areas. The classifier is trained with 2016 imagery as the input - for which ground-truth data are available - and is then used to detect urban areas in the historical imagery. We demonstrate that this classification produces high-quality maps of urban extent over time. We compare the classification result with numerous datasets of urban areas (e.g. MODIS, DMSP-OLS and WorldPop) and
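    The per-pixel indices named above (NDVI, NDBI, MNDWI, SAVI) are standard band-ratio formulas over surface reflectance; a minimal sketch, with the band values supplied by the caller (the reflectances below are made-up examples, not Landsat data):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: high for vegetation."""
    return (nir - red) / (nir + red)

def ndbi(swir, nir):
    """Normalized Difference Built-up Index: high for built-up surfaces."""
    return (swir - nir) / (swir + nir)

def mndwi(green, swir):
    """Modified Normalized Difference Water Index: high for open water."""
    return (green - swir) / (green + swir)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index; L dampens soil-brightness effects."""
    return (nir - red) / (nir + red + L) * (1.0 + L)

# illustrative surface-reflectance values
veg = ndvi(nir=0.5, red=0.1)        # vegetated pixel: strongly positive
built = ndbi(swir=0.4, nir=0.2)     # built-up pixel: positive
water = mndwi(green=0.3, swir=0.1)  # water pixel: positive
```

    Appending such bands to the Landsat 7 input is what gives the Random Forest classifier the extra spectral separation between built-up and bare/vegetated pixels reported above.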

  17. Data Envelopment Analysis of space and terrestrially-based large scale commercial power systems for earth

    Science.gov (United States)

    Criswell, David R.; Thompson, Russell G.

    1992-08-01

    Data Envelopment Analysis (DEA), the detailed quantitative comparison of alternative economic systems, is used to compare the technical efficiency of the large-scale power systems needed to meet the growing energy needs of terrestrial society. The Lunar Power System (LPS) captures sunlight on the moon, converts it to microwaves and beams the power to receivers on earth that output electricity. In terms of benefits versus costs, normalized to the range of 0 to 1, DEA reveals that LPS is at least ten times more efficient than conventional terrestrial solar-thermal and -photovoltaic, fossil, and nuclear systems. LPS is also environmentally benign compared to the conventional systems.

  18. Life-Space Assessment scale to assess mobility: validation in Latin American older women and men.

    Science.gov (United States)

    Curcio, Carmen-Lucia; Alvarado, Beatriz E; Gomez, Fernando; Guerra, Ricardo; Guralnik, Jack; Zunzunegui, Maria Victoria

    2013-10-01

    The Life-Space Assessment (LSA) instrument of the University of Alabama at Birmingham study is a useful and innovative measure of mobility in older populations. The purpose of this article was to assess the reliability and the construct and convergent validity of the LSA in Latin American older populations. In a cross-sectional study, a total of 150 women and 150 men, aged 65-74 years, were recruited from seniors' community centers in Manizales, Colombia, and Natal, Brazil. The LSA questionnaire summarizes where people travel (5 levels, from room to places outside of town), how often, and any assistance needed. Four LSA variables were obtained according to the maximum life space achieved and the level of independence. As correlates of LSA, education, perception of income sufficiency, depression, cognitive function, and functional measures (objectively and subjectively measured) were explored. The possible modifying effect of the city on correlates of LSA was examined. Reliability for the composite LSA score was substantial (ICC = 0.70; 95% CI 0.49-0.83) in Manizales. Average levels of LSA scores were higher in those with better functional performance and in those who reported fewer mobility difficulties. Low levels of education, insufficient income, depressive symptoms, and low scores of cognitive function were all significantly related to lower LSA scores. Women in both cities were more likely to be restricted to their neighborhood and had lower LSA scores. This study provides evidence for the validity of the LSA in two Latin American populations. Our results suggest that the LSA is a good measure of mobility that reflects the interplay of physical functioning with gender and the social and physical environment.
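    The composite LSA score combines, for each life-space level reached, the level number, a weight for independence, and a weight for frequency of travel. A sketch under the usual UAB scoring scheme; treat the exact weights as assumptions here:

```python
def lsa_composite(levels):
    """Life-Space Assessment composite score (0-120).

    `levels` is a list of (reached, independence, frequency) tuples for
    life-space levels 1-5 (room .. outside town), where independence is
    2 (no aid), 1.5 (equipment only) or 1 (personal assistance), and
    frequency runs from 1 (<1x/week) to 4 (daily).
    """
    total = 0.0
    for level, (reached, independence, frequency) in enumerate(levels, start=1):
        if reached:
            total += level * independence * frequency
    return total

# fully independent daily travel at every level -> the maximum score of 120
maximum = lsa_composite([(True, 2, 4)] * 5)
```

    A respondent restricted to the home with assistance scores near the bottom of the range, which is why the composite score separates the restricted-mobility groups described above.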

  19. Gene prediction in metagenomic fragments: A large scale machine learning approach

    Directory of Open Access Journals (Sweden)

    Morgenstern Burkhard

    2008-04-01

    Abstract Background: Metagenomics is an approach to the characterization of microbial genomes via the direct isolation of genomic sequences from the environment without prior cultivation. The amount of metagenomic sequence data is growing fast, while computational methods for metagenome analysis are still in their infancy. In contrast to genomic sequences of single species, which can usually be assembled and analyzed by many available methods, a large proportion of metagenome data remains as unassembled anonymous sequencing reads. One of the aims of all metagenomic sequencing projects is the identification of novel genes. The short length of the fragments (Sanger sequencing, for example, yields 700 bp fragments on average) and the unknown phylogenetic origin of most of them require approaches to gene prediction that differ from the currently available methods for genomes of single species. In particular, the large size of metagenomic samples requires fast and accurate methods that produce small numbers of false positive predictions. Results: We introduce a novel gene prediction algorithm for metagenomic fragments based on a two-stage machine learning approach. In the first stage, we use linear discriminants for monocodon usage, dicodon usage and translation initiation sites to extract features from DNA sequences. In the second stage, an artificial neural network combines these features with open reading frame length and fragment GC-content to compute the probability that the open reading frame encodes a protein. This probability is used for the classification and scoring of gene candidates. With large-scale training, our method provides fast single-fragment predictions with good sensitivity and specificity on artificially fragmented genomic DNA. Additionally, the method predicts translation initiation sites accurately and distinguishes complete from incomplete genes with high reliability. Conclusion: Large-scale machine learning methods are well-suited for gene
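    The second-stage combination of features into a coding probability can be sketched with a single logistic unit standing in for the paper's artificial neural network. The weights, bias, and feature values below are illustrative, not trained:

```python
import math

def gene_probability(features, weights, bias):
    """Combine per-fragment features (e.g. codon-usage discriminant
    scores, a TIS score, ORF length, GC content) into a probability
    that the open reading frame encodes a protein, via a logistic unit."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# monocodon score, dicodon score, TIS score, ORF length (kb), GC fraction
features = [1.2, 0.8, 0.5, 0.7, 0.45]
weights = [1.0, 1.0, 1.0, 0.5, 0.2]
p = gene_probability(features, weights, bias=-2.0)
```

    Thresholding this probability gives the classification, while its magnitude supplies the candidate score mentioned in the abstract.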

  20. A Disciplined Architectural Approach to Scaling Data Analysis for Massive, Scientific Data

    Science.gov (United States)

    Crichton, D. J.; Braverman, A. J.; Cinquini, L.; Turmon, M.; Lee, H.; Law, E.

    2014-12-01

    Data collections across remote sensing and ground-based instruments in astronomy, Earth science, and planetary science are outpacing scientists' ability to analyze them. Furthermore, the distribution, structure, and heterogeneity of the measurements themselves pose challenges that limit the scalability of data analysis using traditional approaches. Methods for developing science data processing pipelines, distributing scientific datasets, and performing analysis will require innovative approaches that integrate cyber-infrastructure, algorithms, and data into more systematic approaches that can more efficiently compute and reduce data, particularly distributed data. This requires the integration of computer science, machine learning, statistics, and domain expertise to identify scalable architectures for data analysis. The size of data returned from Earth science observing satellites and the magnitude of climate model output are predicted to grow into the tens of petabytes, challenging current data analysis paradigms. The same kind of growth is present in astronomy and planetary science data. One of the major challenges in data science and related disciplines is defining new approaches to scaling systems and analysis in order to increase scientific productivity and yield. Specific needs include: 1) identification of optimized system architectures for analyzing massive, distributed data sets; 2) algorithms for systematic analysis of massive data sets in distributed environments; and 3) the development of software infrastructures that are capable of performing massive, distributed data analysis across a comprehensive data science framework. NASA/JPL has begun an initiative in data science to address these challenges. Our goal is to evaluate how scientific productivity can be improved through optimized architectural topologies that identify how to deploy and manage the access, distribution, computation, and reduction of massive, distributed data, while

  1. Solution approach for a large scale personnel transport system for a large company in Latin America

    Directory of Open Access Journals (Sweden)

    Eduardo-Arturo Garzón-Garnica

    2017-10-01

    Purpose: The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico, where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: The system was modelled as a VRP model considering the use of real-world transit times and the fact that routes start at the farthest point from the destination center. Experiments were performed on sets of service points of different sizes. As the size of the instances was increased, the performance of the heuristic method was assessed in comparison with the results of an exact algorithm, and the results of the two remained very close. When the size of the instance was full-scale and the exact algorithm took too much time to solve the problem, the heuristic algorithm still provided a feasible solution. Supported by the validation with smaller-scale instances, where the difference between the two solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of cities in Latin America differs from that of other regions of the world. The general layout of the large cities in this region includes a small, usually antique, town center and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. The use of a heuristic VRP combined with the results of an exact VRP allowed us to obtain an improved routing plan specific to the requirements of the region.
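    The rule that routes begin at the farthest service point from the destination center can be illustrated with a simple nearest-neighbor construction heuristic. This is a sketch of the route-building idea only, not the paper's VRP solver, and the coordinates are invented:

```python
import math

def nearest_neighbor_route(depot, stops):
    """Build one route: start at the stop farthest from the depot,
    repeatedly visit the nearest unvisited stop, and finish at the depot."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    remaining = list(stops)
    current = max(remaining, key=lambda s: dist(depot, s))
    route = [current]
    remaining.remove(current)
    while remaining:
        current = min(remaining, key=lambda s: dist(current, s))
        route.append(current)
        remaining.remove(current)
    route.append(depot)        # the route ends at the destination center
    return route

route = nearest_neighbor_route((0, 0), [(1, 0), (5, 0), (2, 0)])
```

    Such construction heuristics scale to full-size instances where an exact solver times out, which is the trade-off the abstract describes.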

  2. Data-Driven Approach for Analyzing Hydrogeology and Groundwater Quality Across Multiple Scales.

    Science.gov (United States)

    Curtis, Zachary K; Li, Shu-Guang; Liao, Hua-Sheng; Lusch, David

    2017-08-29

    Recent trends of assimilating water well records into statewide databases provide a new opportunity for evaluating the spatial dynamics of groundwater quality and quantity. However, these massive but relatively low-quality datasets are rarely analyzed rigorously to address larger scientific problems. We develop an approach for utilizing well databases to analyze physical and geochemical aspects of groundwater systems, and apply it to a multiscale investigation of the sources and dynamics of chloride (Cl-) in the near-surface groundwater of the Lower Peninsula of Michigan. Nearly 500,000 static water levels (SWLs) were critically evaluated, extracted, and analyzed to delineate long-term, average groundwater flow patterns using a nonstationary kriging technique at the basin scale (i.e., across the entire peninsula). Two regions identified as major basin-scale discharge zones, the Michigan and Saginaw Lowlands, were further analyzed with regional- and local-scale SWL models. Groundwater valleys ("discharge" zones) and mounds ("recharge" zones) were identified for all models, and the proportions of wells with elevated Cl- concentrations in each zone were calculated, visualized, and compared. Concentrations in discharge zones, where groundwater is expected to flow primarily upwards, are consistently and significantly higher than those in recharge zones. A synoptic sampling campaign in the Michigan Lowlands revealed that concentrations generally increase with depth, a trend noted in previous studies of the Saginaw Lowlands. These strong, consistent SWL and Cl- distribution patterns across multiple scales suggest that a deep source (i.e., Michigan brines) is the primary cause of the elevated chloride concentrations observed in discharge areas across the peninsula. © 2017, National Ground Water Association.

  3. Consistency Analysis of Genome-Scale Models of Bacterial Metabolism: A Metamodel Approach

    Science.gov (United States)

    Ponce-de-Leon, Miguel; Calle-Espinosa, Jorge; Peretó, Juli; Montero, Francisco

    2015-01-01

    Genome-scale metabolic models usually contain inconsistencies that manifest as blocked reactions and gap metabolites. With the purpose of detecting recurrent inconsistencies in metabolic models, a large-scale analysis was performed using a previously published dataset of 130 genome-scale models. The results showed that a large number of reactions (~22%) are blocked in all the models where they are present. To unravel the nature of such inconsistencies, a metamodel was constructed by joining the 130 models in a single network. This metamodel was manually curated using the unconnected-modules approach and was then used as a reference network to perform gap-filling on each individual genome-scale model. Finally, a set of 36 models that had not been considered during the construction of the metamodel was used, as a proof of concept, to extend the metamodel with new biochemical information and to assess its impact on gap-filling results. The analysis performed on the metamodel supported the following conclusions: 1) the recurrent inconsistencies found in the models were already present in the metabolic database used during the reconstruction process; 2) the presence of inconsistencies in a metabolic database can be propagated to the reconstructed models; 3) there are reactions, not manifested as blocked, which are active as a consequence of some classes of artifacts; and 4) the results of an automatic gap-filling are highly dependent on the consistency and completeness of the metamodel or metabolic database used as the reference network. In conclusion, consistency analysis should be applied to metabolic databases in order to detect and fill gaps as well as to detect and remove artifacts and redundant information. PMID:26629901
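    A reaction is blocked when it can carry no flux in any steady state of the network. A minimal sketch of that test on a toy network, using linear programming (the toy stoichiometry and bounds are invented; the paper's curation pipeline is far more involved):

```python
import numpy as np
from scipy.optimize import linprog

def blocked_reactions(S, ub):
    """Flag reactions that can carry no steady-state flux (S v = 0,
    0 <= v <= ub) by maximizing each flux in turn."""
    n = S.shape[1]
    blocked = []
    for j in range(n):
        c = np.zeros(n)
        c[j] = -1.0                            # linprog minimizes, so maximize v_j
        res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                      bounds=[(0, u) for u in ub])
        if res.status == 0 and -res.fun < 1e-9:
            blocked.append(j)
    return blocked

# metabolites A, B, C, D; reactions: A uptake, A->B, B secretion, C->D
S = np.array([[ 1, -1,  0,  0],
              [ 0,  1, -1,  0],
              [ 0,  0,  0, -1],
              [ 0,  0,  0,  1]], dtype=float)
flagged = blocked_reactions(S, ub=[10.0, 10.0, 10.0, 10.0])
# reaction index 3 (C->D) is flagged: C has no producer, so its flux must be zero
```

    Gap metabolites such as C are exactly what gap-filling against a reference network tries to reconnect.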

  4. A Multiblock Approach to Pore-Scale Modeling of Reactive Transport with Applications to Carbon Sequestration

    Science.gov (United States)

    mehmani, Y.; Sun, T.; Balhoff, M.; Bryant, S. L.; Eichhubl, P.

    2012-12-01

    In order to safely store CO2 in depleted reservoirs and deep saline aquifers, a better understanding of the storage mechanisms of CO2 is required. Reaction of CO2 with minerals to form precipitate in the subsurface helps to securely store CO2 over geologic time periods, but a concern is the formation of localized channels through which CO2 could travel at large, localized rates. Pore-scale network modeling is an attractive option for modeling and understanding this inherently pore-level process, but the relatively small domains of network models may prevent the capture of any such "emergent phenomena" and, more importantly, their study. Here, we develop a transient, single-phase, reactive pore-network model that includes reduction of throat conductivity as a result of precipitation. The novelty of this work is the implementation of a new Mortar/Transport method for coupling pore networks together at model interfaces that ensures continuity of pressures, species concentrations, and fluxes. Coupled sub-domains are solved separately in parallel, and information is effectively communicated between them via the coupling process. The multiscale method can be further applied to modeling of multi-species/multiphase transport phenomena in highly heterogeneous media arising in various subsurface applications, and may potentially be applied to the seamless inclusion of pore-scale models in continuum simulators. The coupling allows for modeling at larger scales, which may lead to more accurate upscaling approaches. Here, we couple pore-scale models with large variation in permeability and porosity, which results in initial preferential pathways for flow. Our simulation results suggest that the preferential pathways close in time due to precipitation, but are not redirected at late times.

  5. A novel approach for BCR-ABL1 standardization to improve International Scale estimation.

    Science.gov (United States)

    Maes, B; Bakkus, M; Boeckx, N; Boone, E; Cauwelier, B; Denys, B; De Schouwer, P; Devos, T; El Housni, H; Hillen, F; Jacobs, K; Lambert, F; Louagie, H; Maes, M-B; Meeus, P; Moreau, E; Nollet, F; Peeters, K; Saussoy, P; Van Lint, P; Vaerman, J-L; Vaeyens, F; Vandepoele, K; Vannuffel, P; Ver Elst, K; Vermeulen, K; Bruyndonckx, R

    2016-12-01

    Standardization of BCR-ABL1 messenger RNA quantification by real-time PCR on the International Scale (IS) is critical for monitoring therapy response in chronic myelogenous leukaemia. Since 2006, BCR-ABL1 IS standardization has been propagated along reference laboratories by calculating a laboratory-specific conversion factor (CF), co-ordinated in Europe through the European Treatment and Outcome Study project. Although this process has proven successful to some extent, it has not been achievable for all laboratories due to the complexity of the process and the stringent requirements in terms of the numbers of samples to be exchanged. In addition, several BCR-ABL1 IS quantification methods and secondary reference materials became commercially available. However, it was observed that different IS methods generate consistently different results. To overcome these difficulties, we have developed an alternative and simple approach to CF calculation, based on the retrospective analysis of existing external quality assessment (EQA) data. Our approach does not depend on the exchange of samples and is based solely on the mathematical calculation of the CF using EQA results. We have demonstrated by thorough statistical validation that this approach performs well in converting BCR-ABL1 measurements to improve IS estimation. Pending a true gold-standard method for BCR-ABL1 IS quantification, the proposed method is a valuable alternative. © 2016 John Wiley & Sons Ltd.
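    One simple way to derive a conversion factor from paired EQA results is the geometric mean of reference-to-measured ratios, computed in log space. The paper's statistical model is more elaborate, so treat this as a sketch of the idea only, with invented sample values:

```python
import math

def conversion_factor(reference, measured):
    """Laboratory conversion factor as the geometric mean of
    reference/measured BCR-ABL1 ratios across EQA samples.
    Converted result = CF * measured value."""
    logs = [math.log(r / m) for r, m in zip(reference, measured)]
    return math.exp(sum(logs) / len(logs))

# a lab whose measurements run uniformly at half the reference values
cf = conversion_factor([0.10, 1.0, 10.0], [0.05, 0.5, 5.0])
# cf == 2.0: multiplying that lab's results by 2 aligns them with the IS
```

    Working in log space matches the log-normal spread typical of quantitative PCR ratios and keeps a single outlier from dominating the factor.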

  6. A novel scaling approach for sooting laminar coflow flames at elevated pressures

    Science.gov (United States)

    Abdelgadir, Ahmed; Steinmetz, Scott A.; Attili, Antonio; Bisetti, Fabrizio; Roberts, William L.

    2016-11-01

    Laminar coflow diffusion flames are often used to study soot formation at elevated pressures due to their well-characterized configuration. In these experiments, the flames are operated at constant mass flow rate (constant Reynolds number) at increasing pressures. Due to the effect of gravity, the flame shape changes and, as a result, the mixing field changes, which in turn has a great effect on soot formation. In this study, a novel approach for scaling the flame across pressures is proposed. In this approach, both the Reynolds and Grashof numbers are kept constant so that the effect of gravity is the same at all pressures. To keep the Grashof number constant, the diameter of the nozzle is modified as pressure varies. We report both numerical and experimental data proving that this approach guarantees the same nondimensional flow fields over a broad range of pressures. In the range of conditions studied, the Damkoehler number, which varies when both Reynolds and Grashof numbers are kept constant, is shown to play a minor role. Hence, a set of flames suitable for investigating soot formation at elevated pressure is identified. This research made use of the resources of IT Research Computing at King Abdullah University of Science & Technology (KAUST), Saudi Arabia.
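    Assuming ideal-gas density (rho proportional to p) and pressure-independent viscosity, holding Gr ~ rho^2 d^3 constant forces d ~ p^(-2/3), and holding Re ~ rho u d constant then forces u ~ p^(-1/3). A sketch of that scaling (the reference values are illustrative, not the experimental conditions):

```python
def scaled_nozzle(d_ref, u_ref, p_ref, p):
    """Nozzle diameter and exit velocity that keep both Reynolds and
    Grashof numbers constant as pressure rises, assuming rho ~ p and
    pressure-independent viscosity:
        Gr ~ rho^2 d^3 = const  ->  d ~ p^(-2/3)
        Re ~ rho u d   = const  ->  u ~ p^(-1/3)
    """
    ratio = p / p_ref
    return d_ref * ratio ** (-2.0 / 3.0), u_ref * ratio ** (-1.0 / 3.0)

# at 8x the reference pressure the diameter shrinks 4x and velocity 2x
d, u = scaled_nozzle(d_ref=4.0, u_ref=0.3, p_ref=1.0, p=8.0)
```

    Matching both groups simultaneously is what keeps the nondimensional buoyant flow field, and hence the mixing that controls soot, the same at every pressure.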

  7. Scaling of respiratory variables and the breathing pattern in birds: an allometric and phylogenetic approach.

    Science.gov (United States)

    Frappell, P B; Hinds, D S; Boggs, D F

    2001-01-01

    Allometric equations can be useful in comparative physiology in a number of ways, not the least of which include assessing whether a particular species deviates from the norm for its size and phylogenetic group with respect to some specific physiological process, or determining how differences in design among groups may be reflected in differences in function. The allometric equations for respiratory variables in birds were developed 30 yr ago by Lasiewski and Calder and presented as "preliminary" because they were based on a small number of species. With the expanded database now available to reconstruct these allometries, and the call for taking account of the nonindependence of species in this process through a phylogenetically independent contrasts (PIC) approach, we have developed new allometric equations for respiratory variables in birds using both the traditional and PIC approaches. On the whole, the new equations agree with the old ones, with only minor changes in the coefficients, and the primary difference between the traditional and PIC approaches is in the broader confidence intervals given by the latter. We confirm the lower VE/VO2 ratio for birds compared to mammals and observe a common scaling of inspiratory flow and oxygen consumption for birds, as has been reported for mammals. Use of allometrics and comparisons among avian groups are also discussed.
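    Allometric equations of the form y = a * M^b are conventionally fitted by least squares in log-log space, where the power law becomes a straight line with slope b and intercept log(a). A sketch with synthetic data (not the avian dataset):

```python
import numpy as np

def allometric_fit(mass, y):
    """Fit y = a * M^b by linear least squares on log-transformed data;
    returns the coefficient a and the scaling exponent b."""
    b, log_a = np.polyfit(np.log(mass), np.log(y), 1)
    return np.exp(log_a), b

mass = np.array([0.01, 0.1, 1.0, 10.0])   # body mass in kg
vo2 = 5.0 * mass ** 0.72                  # synthetic exact power law
a, b = allometric_fit(mass, vo2)
# recovers a ~ 5.0 and b ~ 0.72
```

    A PIC analysis applies the same regression to phylogenetically independent contrasts rather than raw species values, which is why it mainly widens the confidence intervals rather than shifting the exponents.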

  8. A Large-Scale Design Integration Approach Developed in Conjunction with the Ares Launch Vehicle Program

    Science.gov (United States)

    Redmon, John W.; Shirley, Michael C.; Kinard, Paul S.

    2012-01-01

    This paper presents a method for performing large-scale design integration, taking a classical 2D drawing envelope-and-interface approach and applying it to modern three-dimensional computer-aided design (3D CAD) systems. Today, the paradigm often used when performing design integration with 3D models involves a digital mockup of an overall vehicle in the form of a massive, fully detailed CAD assembly, thereby adding unnecessary burden and overhead to design and product data management processes. While fully detailed data may yield a broad depth of design detail, pertinent integration features are often obscured under the excessive amounts of information, making them difficult to discern. In contrast, the envelope-and-interface method reduces both the amount and the complexity of information necessary for design integration while yielding significant savings in time and effort when applied to today's complex design integration projects. This approach, combining classical and modern methods, proved advantageous during the complex design integration activities of the Ares I vehicle. Downstream processes that benefit from this approach through reduced development and design cycle time include: creation of analysis models for the aerodynamics discipline; vehicle-to-ground interface development; and documentation development for the vehicle assembly.

  9. An Interdisciplinary Approach to Developing Renewable Energy Mixes at the Community Scale

    Science.gov (United States)

    Gormally, Alexandra M.; Whyatt, James D.; Timmis, Roger J.; Pooley, Colin G.

    2013-04-01

    Renewable energy has risen on the global political agenda due to concerns over climate change and energy security. The European Union (EU) currently has a target of 20% renewable energy by the year 2020, and there is increasing focus on the ways in which these targets can be achieved. Here we focus on the UK context, which could be considered to be lagging behind other EU countries in terms of targets and implementation. The UK has a lower overall target of 15% renewable energy by 2020 and in 2011 reached only 3.8% (DUKES, 2012), one of the lowest progressions among EU Member States (European Commission, 2012). The UK's slow progress towards such targets could in part be due to its dependence on its current energy mix and a highly centralised electricity grid system, which does not lend itself easily to the adoption of renewable technologies. Additionally, increasing levels of demand and the need to raise energy awareness are key concerns for achieving energy security in the UK. There is also growing concern from the public about increasing fuel and energy bills. One possible solution to some of these problems could be the adoption of small-scale distributed renewable schemes implemented at the community scale with local ownership or involvement, for example through energy co-operatives. The notion of the energy co-operative is well understood elsewhere in Europe but unfamiliar to many UK residents due to the country's centralised approach to energy provision. There are many benefits associated with engaging in distributed renewable energy systems. In addition to financial benefits, participation may raise energy awareness and can lead to positive responses towards renewable technologies. Here we briefly explore how a mix of small-scale renewables, including wind, hydro-power and solar PV, has been implemented and managed by a small island community in the Scottish Hebrides to achieve over 90% of their electricity needs from renewable

  10. Space-Time Dynamics of Soil Moisture and Temperature: Scale issues

    Science.gov (United States)

    Mohanty, Binayak P.; Miller, Douglas A.; Th.vanGenuchten, M.

    2003-01-01

    The goal of this project is to gain further understanding of soil moisture/temperature dynamics at different spatio-temporal scales and of the physical controls/parameters involved. We created a comprehensive GIS database, which has been accessed extensively by NASA Land Surface Hydrology investigators (and others) and is located at the following URL: http://www.essc.psu.edu/nasalsh. For soil moisture field experiments such as SGP97, SGP99, SMEX02, and SMEX03, cartographic products were designed for multiple applications, both pre- and post-mission. Pre-mission applications included flight-line planning and field operations logistics, as well as general insight into the extent and distribution of soil, vegetation, and topographic properties for the study areas. The cartographic products were created from original spatial information resources that were imported into Adobe Illustrator, where the maps were created and PDF versions were made for distribution and download.

  11. An innovative approach to supplying an environment for the integration and test of the Space Station distributed avionics systems

    Science.gov (United States)

    Barry, Thomas; Scheffer, Terrance; Small, L. R.

    1988-01-01

    This paper describes an innovative approach to supplying an environment for the integration and testing of the Space Station distributed avionics systems. The environment's relationship to the process flow of the Space Station verification from systems development to on-orbit verification is presented. This paper also describes the uses of the environment's hardware implementation called Data Management System (DMS) kits. The way in which this environment allows system developers to independently verify their system's performance, fault detection, and recovery capability is explained.

  12. EMAPS: An Efficient Multiscale Approach to Plasma Systems with Non-MHD Scale Effects

    Energy Technology Data Exchange (ETDEWEB)

    Omelchenko, Yuri A. [Trinum Research, Inc., San Diego, CA (United States)

    2016-08-08

    Global interactions of energetic ions with magnetoplasmas and neutral gases lie at the core of many space and laboratory plasma phenomena ranging from solar wind entry into and transport within planetary magnetospheres and exospheres to fast-ion driven instabilities in fusion devices to astrophysics-in-lab experiments. The ability of computational models to properly account for physical effects that underlie such interactions, namely ion kinetic, ion cyclotron, Hall, collisional and ionization processes is important for the success and planning of experimental research in plasma physics. Understanding the physics of energetic ions, in particular their nonlinear resonance interactions with Alfvén waves, is central to improving the heating performance of magnetically confined plasmas for future energy generation. Fluid models are not adequate for high-beta plasmas as they cannot fully capture ion kinetic and cyclotron physics (e.g., ion behavior in the presence of magnetic nulls, shock structures, plasma interpenetration, etc.). Recent results from global reconnection simulations show that even in a MHD-like regime there may be significant differences between kinetic and MHD simulations. Therefore, kinetic modeling becomes essential for meeting modern day challenges in plasma physics. The hybrid approximation is an intermediate approximation between the fluid and fully kinetic approximations. It eliminates light waves, removes the electron inertial temporal and spatial scales from the problem and enables full-orbit ion kinetics. As a result, hybrid codes have become effective tools for exploring ion-scale driven phenomena associated with ion beams, shocks, reconnection and turbulence that control the large-scale behavior of laboratory and space magnetoplasmas. A number of numerical issues, however, make three-dimensional (3D) large-scale hybrid simulations of inhomogeneous magnetized plasmas prohibitively expensive or even impossible. To resolve these difficulties
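    The ion advance at the heart of most hybrid (kinetic-ion, fluid-electron) codes is the Boris push. A minimal sketch in normalized units, shown as an illustration of the standard algorithm rather than EMAPS code:

```python
import numpy as np

def boris_push(v, E, B, qm, dt):
    """One Boris step for an ion velocity in electric field E and
    magnetic field B (charge-to-mass ratio qm, timestep dt):
    half electric kick, magnetic rotation, half electric kick."""
    v_minus = v + 0.5 * qm * dt * E
    t = 0.5 * qm * dt * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    return v_plus + 0.5 * qm * dt * E

v = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
E = np.zeros(3)
for _ in range(100):
    v = boris_push(v, E, B, qm=1.0, dt=0.1)
# with E = 0 the push is a pure rotation, so the speed |v| is conserved
```

    This exact energy conservation in a pure magnetic field is one reason the Boris scheme remains the workhorse ion pusher for the ion-cyclotron-scale phenomena described above.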

  13. A Paradigm Confronting Reality: The River Basin Approach and Local Water Management Spaces in the Pucara Basin, Bolivia

    Directory of Open Access Journals (Sweden)

    Vladimir Cossío

    2017-02-01

    The current Bolivian water policy incorporates the IWRM paradigm, adopting the river basin as the space for water management in the country. The linkage of water management with communal territories in the Andes challenges the application of the river basin approach, bringing water spaces into the discussion. Considering the example of the Pucara River Basin, the article uses space theory to identify characteristics of local spaces for water management and to contrast them with the river basin concept. The river basin concept is applied by water professionals mostly in terms of the perceived dimension of this space, and sometimes in abstract terms. In contrast, the lived dimension of space is more important in local water management spaces and is not represented in abstract terms. Local water spaces are flexible and strongly related to local organisations, which allows them to respond appropriately to the needs and demands of peasant society in the area, characteristics that cannot be found in the river basin space.

  14. Designing Groundwater Monitoring Networks for Regional-Scale Water Quality Assessment: A Bayesian Approach

    Science.gov (United States)

    Pinto, M. J.; Wagner, B. J.

    2002-12-01

    The design of groundwater monitoring networks is an important concern of regional-scale water-quality assessment programs because of the high cost of data collection. The work presented here addresses regional-scale design issues using ground-water simulation and optimization set within a Bayesian framework. The regional-scale design approach focuses on reducing the uncertainty associated with a fundamental quantity: the proportion of a subsurface water resource which exceeds a specified threshold concentration, such as a mandated maximum contaminant level. This proportion is hereafter referred to as the threshold proportion. The goal is to identify optimal or near-optimal sampling designs that reduce the threshold proportion uncertainty to an acceptable level. In the Bayesian approach, there is a probability density function (pdf) associated with the unknown threshold proportion before sampling. This function is known as the prior pdf. The form of the prior pdf, which is dependent on the information available regarding the distribution of water quality within the aquifer system, controls the amount of sampling needed. In the absence of information, the form of the prior pdf is uniform; however, if a ground-water flow and transport model is available, a Monte Carlo analysis of ground-water flow and transport simulations can be used to generate a prior pdf which is non-uniform and which contains the information available regarding solute sources, pathways and transport. After sampling, the prior pdf is conditioned on the sampling data. The conditional distribution is known as the posterior pdf. In most cases there is a reduction in uncertainty associated with conditioning. The reduction in uncertainty achieved after collecting samples can be explored for different combinations of prior pdf distribution and sampling method. Three scenarios are considered: (i) uniform prior pdf with random sampling; (ii) non-uniform prior pdf with random sampling; and (iii) non
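    The conditioning step described above can be sketched as a weighted Monte Carlo calculation: prior realizations of the threshold proportion are reweighted by the binomial likelihood of the observed exceedances. The prior distribution and the well counts below are invented for illustration and do not come from the study:

```python
import random
random.seed(1)

# Prior pdf for the threshold proportion p, standing in for a Monte Carlo
# flow-and-transport ensemble (the Beta shape is a hypothetical choice).
prior = [random.betavariate(2, 5) for _ in range(20000)]

def condition(prior, k, n):
    """Weight each prior realization by the binomial likelihood of
    observing k exceedances among n randomly sampled wells."""
    w = [p ** k * (1.0 - p) ** (n - k) for p in prior]
    total = sum(w)
    return [wi / total for wi in w]

def moments(samples, weights=None):
    """Weighted mean and variance of a sample set."""
    if weights is None:
        weights = [1.0 / len(samples)] * len(samples)
    mean = sum(w * s for w, s in zip(weights, samples))
    var = sum(w * (s - mean) ** 2 for w, s in zip(weights, samples))
    return mean, var

k, n = 3, 12                     # e.g., 3 of 12 sampled wells exceed the MCL
w = condition(prior, k, n)
m0, v0 = moments(prior)          # prior mean and variance
m1, v1 = moments(prior, w)       # posterior mean and variance
print(round(m0, 3), round(v0, 4))
print(round(m1, 3), round(v1, 4))   # posterior variance < prior variance
```

    The reduction from v0 to v1 is the "reduction in uncertainty associated with conditioning" that the design procedure seeks to maximize per sample collected.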

  15. Real-space grids and the Octopus code as tools for the development of new simulation approaches for electronic systems

    Science.gov (United States)

    Andrade, Xavier; Strubbe, David; De Giovannini, Umberto; Larsen, Ask Hjorth; Oliveira, Micael J. T.; Alberdi-Rodriguez, Joseba; Varas, Alejandro; Theophilou, Iris; Helbig, Nicole; Verstraete, Matthieu J.; Stella, Lorenzo; Nogueira, Fernando; Aspuru-Guzik, Alán; Castro, Alberto; Marques, Miguel A. L.; Rubio, Angel

    Real-space grids are a powerful alternative for the simulation of electronic systems. One of the main advantages of the approach is the flexibility and simplicity of working directly in real space, where the different fields are discretized on a grid, combined with competitive numerical performance and great potential for parallelization. These properties constitute a great advantage when implementing and testing new physical models. Based on our experience with the Octopus code, in this article we discuss how the real-space approach has allowed for the recent development of new ideas for the simulation of electronic systems. Among these applications are approaches to calculate response properties, modeling of photoemission, optimal control of quantum systems, simulation of plasmonic systems, and the exact solution of the Schrödinger equation for low-dimensionality systems.
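    As a minimal illustration of the real-space-grid idea (a sketch, not Octopus itself), the following discretizes a one-dimensional Hamiltonian with finite differences and relaxes a trial state to the ground state by imaginary-time propagation; the grid size, time step, and step count are illustrative:

```python
import math

# 1D harmonic oscillator on a real-space grid (hbar = m = omega = 1).
N, L = 200, 16.0
dx = L / N
x = [-L / 2 + i * dx for i in range(N)]
V = [0.5 * xi * xi for xi in x]
psi = [math.exp(-abs(xi)) for xi in x]       # arbitrary trial state

def H_psi(psi):
    """Apply H = -1/2 d2/dx2 + V with a 3-point Laplacian (Dirichlet edges)."""
    out = []
    for i in range(N):
        left = psi[i - 1] if i > 0 else 0.0
        right = psi[i + 1] if i < N - 1 else 0.0
        lap = (left - 2.0 * psi[i] + right) / dx ** 2
        out.append(-0.5 * lap + V[i] * psi[i])
    return out

dt = 0.002                                    # below the explicit-Euler limit
for _ in range(5000):                         # imaginary-time relaxation
    hp = H_psi(psi)
    psi = [p - dt * h for p, h in zip(psi, hp)]
    norm = math.sqrt(sum(p * p for p in psi) * dx)
    psi = [p / norm for p in psi]

E0 = sum(p * h for p, h in zip(psi, H_psi(psi))) * dx
print(round(E0, 3))                           # → 0.5 (exact value is 1/2)
```

    Everything here is local on the grid, which is the property that makes real-space methods simple to modify and straightforward to domain-decompose for parallelization.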

  16. Summer school in the field of Space Technologies: A novel approach for teenage education

    Science.gov (United States)

    Dolea, Paul; Vladut Dascal, Paul

    2014-05-01

    This paper presents the main practical aspects of organizing a summer school in the field of Space Technologies and Radio Science. The one-week summer school is aimed at educating teenagers between 12 and 16 years of age and has now reached its third edition. During these educational activities, specially designed prototype equipment was used with the main purpose of steering adolescents towards a scientific career in the field of Space Technologies and Radio Science. The main equipment and associated experiments are as follows: 1. A teaching-purpose radio telescope demonstrating the working principle of professional radio telescopes. The experiments focused on scanning the sky to identify the positions of geostationary satellites and the Sun. 2. A weather-satellite reception setup used for downloading real-time APT (Automatic Picture Transmission) data from the NOAA (National Oceanic and Atmospheric Administration) weather satellite fleet. The visual images were used to highlight clouds and cloud systems over Europe. 3. A prototype receiver for electromagnetic waves in the VLF (Very Low Frequency) range, used to analyze the radio-frequency spectrum. The main phenomena observed in the VLF band (3 kHz - 30 kHz) relate to radio transmitters, electrical discharges in the atmosphere (lightning) and electromagnetic pollution. 4. Equipment designed to introduce teenagers to radio communication, used for transmitting and receiving images and sound over a distance of a few kilometers with high-gain directional antennas. 5. Other sets of experiments were undertaken with the main purpose of mapping the countryside area in which the experiments took place; for this activity GPS devices were used. This paper may be considered a practical guideline for those who want to attract young students towards a

  17. Coral growth on three reefs: development of recovery benchmarks using a space for time approach

    Science.gov (United States)

    Done, T. J.; Devantier, L. M.; Turak, E.; Fisk, D. A.; Wakeford, M.; van Woesik, R.

    2010-12-01

    This 14-year study (1989-2003) develops recovery benchmarks based on a period of very strong coral recovery in Acropora-dominated assemblages on the Great Barrier Reef (GBR) following major setbacks from the predatory sea-star Acanthaster planci in the early 1980s. A space for time approach was used in developing the benchmarks, made possible by the choice of three study reefs (Green Island, Feather Reef and Rib Reef), spread along 3 degrees of latitude (300 km) of the GBR. The sea-star outbreaks progressed north to south, causing death of corals that reached maximum levels in the years 1980 (Green), 1982 (Feather) and 1984 (Rib). The reefs were initially surveyed in 1989, 1990, 1993 and 1994, which represent recovery years 5-14 in the space for time protocol. Benchmark trajectories for coral abundance, colony sizes, coral cover and diversity were plotted against nominal recovery time (years 5-14) and defined as non-linear functions. A single survey of the same three reefs was conducted in 2003, when the reefs were nominally 1, 3 and 5 years into a second recovery period, following further Acanthaster impacts and coincident coral bleaching events around the turn of the century. The 2003 coral cover was marginally above the benchmark trajectory, but colony density (colonies m-2) was an order of magnitude lower than the benchmark, and size structure was biased toward larger colonies that survived the turn of the century disturbances. The under-representation of small size classes in 2003 suggests that mass recruitment of corals had been suppressed, reflecting low regional coral abundance and depression of coral fecundity by recent bleaching events. The marginally higher cover and large colonies of 2003 were thus indicative of a depleted and aging assemblage not yet rejuvenated by a strong cohort of recruits.

  18. Simulation of saturated and unsaturated flow in karst systems at catchment scale using a double continuum approach

    Directory of Open Access Journals (Sweden)

    J. Kordilla

    2012-10-01

    Full Text Available The objective of this work is the simulation of saturated and unsaturated flow in a karstified aquifer using a double continuum approach. The HydroGeoSphere code (Therrien et al., 2006) is employed to simulate spring discharge with the Richards equations and van Genuchten parameters to represent flow in the (1) fractured matrix and (2) conduit continuum coupled by a linear exchange term. Rapid vertical small-scale flow processes in the unsaturated conduit continuum are accounted for by applying recharge boundary conditions at the bottom of the saturated model domain. An extensive sensitivity analysis is performed on single parameters as well as parameter combinations. The transient hydraulic response of the karst spring is strongly controlled by the matrix porosity as well as the van Genuchten parameters of the unsaturated matrix, which determine the head dependent inter-continuum water transfer when the conduits are draining the matrix. Sensitivities of parameter combinations partially reveal a non-linear dependence over the parameter space. This can be observed for parameters not belonging to the same continuum as well as combinations, which involve the exchange parameter, showing that results of the double continuum model may depict a certain degree of ambiguity. The application of van Genuchten parameters for simulation of unsaturated flow in karst systems is critically discussed.
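    A conceptual sketch of the double continuum idea, far simpler than the Richards-equation model used in the paper, represents matrix and conduit as two linear stores coupled by a head-dependent linear exchange term; every coefficient below is invented for illustration:

```python
# Two coupled linear stores: h_m (matrix) and h_c (conduit), explicit Euler.
dt = 1.0                                   # time step, e.g. hours
k_m, k_c, a_ex = 0.001, 0.5, 0.01          # matrix outflow, conduit outflow,
                                           # and inter-continuum exchange coeffs
h_m, h_c = 10.0, 2.0                       # initial storages: matrix drains
                                           # towards the conduit
recharge = [5.0] * 10 + [0.0] * 190        # short recharge pulse into conduit

out = []
for r in recharge:
    q_ex = a_ex * (h_m - h_c)              # head-dependent exchange term,
                                           # positive when matrix feeds conduit
    q_spring = k_c * h_c                   # fast conduit response at the spring
    h_m += dt * (-q_ex - k_m * h_m)
    h_c += dt * (r + q_ex - q_spring)
    out.append(q_spring)

print(round(max(out), 2), round(out[-1], 4))
```

    The fast conduit outflow produces the sharp spring response to the recharge pulse, while the slow matrix-to-conduit exchange sustains the long recession, qualitatively reproducing the head-dependent inter-continuum transfer described above.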

  19. An approach to computerized preliminary design procedure of mid-size superyachts from hull modeling to interior space arrangement

    Directory of Open Access Journals (Sweden)

    Jong-Ho Nam

    2010-06-01

    Full Text Available A concept of preliminary design for mid-size superyachts is explored. First, the profile of a superyacht is interactively designed with the help of freeform curve functionality and graphical user interface (GUI based interaction. The hull form is then constructed using major characteristic curves such as design waterline, deck sideline, and sections in addition to the predefined profile curve. After exterior hull modeling is done, the arrangement of significant interior spaces of all decks is carried out. A genetic algorithm is exploited to find a space arrangement by considering space fitness values, space proximity, and stairs connectivity of relevant spaces. A goal of the paper is to offer a step-by-step procedure for superyacht design from scratch or when initial information is not sufficient for complete design. For this purpose, a GUI based superyacht design system is developed. This design approach is expected to help users interactively design mid-size superyachts.
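    A toy version of the genetic-algorithm arrangement step might look as follows; the spaces, preferred slots, and fitness weights are invented for illustration and are much simpler than the paper's space fitness, proximity, and stairs-connectivity terms:

```python
import random
random.seed(0)

# Five spaces to place into five deck slots (a permutation problem).
spaces = ["saloon", "galley", "owner_cabin", "guest_cabin", "stairs"]
preferred = {"saloon": 0, "galley": 1, "owner_cabin": 4,
             "guest_cabin": 3, "stairs": 2}               # space fitness targets
related = [("saloon", "galley"), ("owner_cabin", "stairs")]  # proximity pairs

def fitness(layout):
    """Reward preferred-slot matches plus closeness of related spaces."""
    pos = dict(zip(spaces, layout))
    score = sum(1.0 for s in spaces if pos[s] == preferred[s])
    score += sum(1.0 / (1 + abs(pos[a] - pos[b])) for a, b in related)
    return score

def mutate(layout):
    """Swap two slots, preserving the permutation property."""
    i, j = random.sample(range(len(layout)), 2)
    l = list(layout)
    l[i], l[j] = l[j], l[i]
    return tuple(l)

pop = [tuple(random.sample(range(5), 5)) for _ in range(30)]
for _ in range(200):                        # elitist selection + swap mutation
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]

best = max(pop, key=fitness)
print(best, round(fitness(best), 2))
```

    The permutation encoding with swap mutation keeps every candidate a valid one-space-per-slot arrangement, which is the usual way to handle the hard feasibility constraint in layout problems of this kind.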

  20. Preparing laboratory and real-world EEG data for large-scale analysis: A containerized approach

    Directory of Open Access Journals (Sweden)

    Nima eBigdely-Shamlo

    2016-03-01

    Full Text Available Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain-computer interface (BCI) models. However, the absence of standardized vocabularies for annotating events in a machine-understandable manner, the welter of collection-specific data organizations, the difficulty of moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a containerized approach and freely available tools we have developed to facilitate the process of annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining and (meta-)analysis. The EEG Study Schema (ESS) comprises three data Levels, each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw (Data Level 1) data to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis of EEG studies. However, ESS can also encapsulate meta-information for other modalities, such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. ESS schema and tools are freely available at eegstudy.org, and a central catalog of over 850 GB of existing data in ESS format is available at studycatalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org).