WorldWideScience

Sample records for scale space approach

  1. Physics in space-time with scale-dependent metrics

    Science.gov (United States)

    Balankin, Alexander S.

    2013-10-01

We construct a three-dimensional space Rγ3 with a scale-dependent metric and the corresponding Minkowski space-time Mγ,β4 with scale-dependent fractal (DH) and spectral (DS) dimensions. Local derivatives based on scale-dependent metrics are defined, and differential vector calculus in Rγ3 is developed. We argue that Mγ,β4 provides a unified phenomenological framework for the dimensional flow observed in quite different models of quantum gravity. The main attention, however, is focused on the special case of flat space-time M1/3,14 with a scale-dependent Cantor-dust-like distribution of admissible states, such that DH increases from DH=2 on scales ≪ℓ0 to DH=4 in the infrared limit ≫ℓ0, where ℓ0 is a characteristic length (e.g. the Planck length, or the characteristic size of multi-fractal features in a heterogeneous medium), whereas DS≡4 at all scales. Possible applications of the scale-dependent-metric approach to systems of different nature are briefly discussed.
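A toy numerical sketch of the dimensional flow described in this abstract; the logistic crossover profile in log(scale/ℓ0) is an illustrative assumption, not the paper's actual construction:

```python
import math

def hausdorff_dim(scale, l0=1.0, d_uv=2.0, d_ir=4.0):
    # Toy scale-dependent Hausdorff dimension: d_H -> d_uv for
    # scale << l0 and d_H -> d_ir for scale >> l0. The logistic
    # crossover in log(scale/l0) is an illustrative assumption,
    # not the paper's metric.
    x = math.log(scale / l0)
    return d_uv + (d_ir - d_uv) / (1.0 + math.exp(-x))

d_small = hausdorff_dim(1e-6)   # deep ultraviolet: close to 2
d_large = hausdorff_dim(1e6)    # infrared limit: close to 4
```

At the characteristic length itself the toy profile sits exactly halfway between the two limits.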

  2. Truncated conformal space approach to scaling Lee-Yang model

    International Nuclear Information System (INIS)

    Yurov, V.P.; Zamolodchikov, Al.B.

    1989-01-01

A numerical approach to 2D relativistic field theories is suggested. Treating a field-theory model as an ultraviolet conformal field theory perturbed by a suitable relevant scalar operator, one studies it in finite volume (on a circle). The perturbed Hamiltonian acts in the space of states of the conformal field theory, and its matrix elements can be extracted from the conformal field theory. Truncating the space at a reasonable level reduces the problem to a finite-dimensional one amenable to numerical analysis. The nonunitary field theory whose ultraviolet region is controlled by the minimal conformal theory M(2/5) is studied in detail. 9 refs.; 17 figs
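The truncation recipe can be illustrated on a simpler quantum-mechanical analogue: diagonalise H = H0 + λV in a finite slice of the unperturbed state space and watch the low-lying spectrum converge as the truncation level grows. Here a harmonic oscillator with a quartic perturbation stands in for the conformal theory and its relevant operator; this is an assumed toy model, not the Lee-Yang computation itself:

```python
import numpy as np

def truncated_energy(dim, lam=0.1):
    # Diagonalise H = H0 + lam*V in the first `dim` unperturbed states.
    # H0: harmonic oscillator, V = x^4 (illustrative stand-in for the
    # perturbing operator in the truncated-space recipe).
    n = np.arange(dim)
    h0 = np.diag(n + 0.5)                       # unperturbed spectrum
    x = np.zeros((dim, dim))
    idx = np.arange(dim - 1)
    x[idx, idx + 1] = x[idx + 1, idx] = np.sqrt((idx + 1) / 2.0)
    v = np.linalg.matrix_power(x, 4)            # x^4 in the truncated basis
    return np.linalg.eigvalsh(h0 + lam * v)[0]  # ground-state energy

e20 = truncated_energy(20)   # coarse truncation
e40 = truncated_energy(40)   # finer truncation; energies converge
```

Because the truncated spaces are nested, the Rayleigh-Ritz estimates decrease monotonically toward the exact value as the truncation level grows.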

  3. An Autonomous Sensor Tasking Approach for Large Scale Space Object Cataloging

    Science.gov (United States)

    Linares, R.; Furfaro, R.

The field of Space Situational Awareness (SSA) has progressed over the last few decades with new sensors coming online, the development of new approaches for making observations, and new algorithms for processing them. Although there has been success in the development of new approaches, a missing piece is the translation of SSA goals into sensor and resource allocation, otherwise known as the Sensor Management Problem (SMP). This work solves the SMP using an artificial intelligence approach called Deep Reinforcement Learning (DRL). Stable methods for training DRL approaches based on neural networks exist, but most of these approaches are not suitable for high dimensional systems. The Asynchronous Advantage Actor-Critic (A3C) method is a recently developed and effective approach for high dimensional systems, and this work leverages these results and applies the approach to decision making in SSA. The decision space for SSA problems can be high dimensional, even for the tasking of a single telescope. Since the number of space objects (SOs) is relatively high, each sensor will have a large number of possible actions at a given time. Therefore, efficient DRL approaches are required when solving the SMP for SSA. This work develops an A3C-based method for DRL applied to SSA sensor tasking. One of the key benefits of DRL approaches is their ability to handle high dimensional data. DRL methods have, for example, been applied to image processing for autonomous driving: a 256x256 RGB image has 196,608 input dimensions (256*256*3), which is very high dimensional, yet deep learning approaches routinely take such images as inputs. Therefore, when applied to the whole catalog, the DRL approach offers the ability to solve this high dimensional problem. This work has the potential to, for the first time, solve the non-myopic sensor tasking problem for the whole SO catalog (over 22,000 objects), a truly revolutionary result.

  4. Inverse scale space decomposition

    DEFF Research Database (Denmark)

    Schmidt, Marie Foged; Benning, Martin; Schönlieb, Carola-Bibiane

    2018-01-01

We investigate the inverse scale space flow as a method for decomposing data into generalised singular vectors. We show that the inverse scale space flow, based on convex, even, and positively one-homogeneous regularisation functionals, can decompose data represented...... by the application of a forward operator to a linear combination of generalised singular vectors into its individual singular vectors. We verify that two additional conditions on the singular vectors are sufficient for this decomposition to hold: orthogonality in the data space and inclusion of partial sums...... of the subgradients of the singular vectors in the subdifferential of the regularisation functional at zero. We also address the converse question of when the inverse scale space flow returns a generalised singular vector given that the initial data is arbitrary (and therefore not necessarily in the range...
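The flavour of the inverse scale space flow can be seen in a minimal discrete sketch with the identity forward operator and J(u) = mu*||u||_1: integrating the residual makes large-magnitude components enter the reconstruction first, with finer scales following later. This toy is an assumed stand-in for the paper's general setting with an arbitrary forward operator:

```python
import numpy as np

def shrink(v, mu):
    # Soft thresholding: the resolvent step for the l1 functional.
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def inverse_scale_space(f, mu=1.0, steps=50):
    # Discrete inverse scale space (Bregman) flow for J(u) = mu*||u||_1
    # with the identity forward operator: v integrates the residual,
    # u is recovered by thresholding, so coarse (large) components of f
    # appear first and fine scales enter later.
    v = np.zeros_like(f)    # dual variable in the subdifferential of J
    u = np.zeros_like(f)
    history = []
    for _ in range(steps):
        v = v + (f - u)     # integrate the residual
        u = shrink(v, mu)   # primal update
        history.append(u.copy())
    return u, history

f = np.array([3.0, 1.5, 0.2, -2.0, 0.05])
u_final, hist = inverse_scale_space(f, mu=1.0, steps=50)
coarse = hist[0]            # after one step only components with |f_i| > mu appear
```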

  5. Quantum universe on extremely small space-time scales

    International Nuclear Information System (INIS)

    Kuzmichev, V.E.; Kuzmichev, V.V.

    2010-01-01

The semiclassical approach to the quantum geometrodynamical model is used to describe the properties of the Universe on extremely small space-time scales. Under this approach, the matter in the Universe has two components of quantum nature which behave as antigravitating fluids. The first component does not vanish in the limit ħ → 0 and can be associated with dark energy. The second component is described by an extremely rigid equation of state and goes to zero after the transition to large space-time scales. On small space-time scales, this quantum correction turns out to be significant: it determines the geometry of the Universe near the initial cosmological singularity point. This geometry is conformal to a unit four-sphere embedded in a five-dimensional Euclidean flat space. During the subsequent expansion of the Universe, on reaching the post-Planck era, the geometry of the Universe changes into one conformal to a unit four-hyperboloid in a five-dimensional Lorentz-signatured flat space. This agrees with the hypothesis of a possible change of geometry after the origin of the expanding Universe from the region near the initial singularity point. The origin of the Universe can be interpreted as a quantum transition of the system from a region of phase space forbidden for classical motion, but where a trajectory in imaginary time exists, into a region where the equations of motion have a solution describing the evolution of the Universe in real time. Near the boundary between the two regions, on the real-time side, the Universe undergoes an almost exponential expansion which passes smoothly into the radiation-dominated expansion described by the standard cosmological model.

  6. Construction of Orthonormal Piecewise Polynomial Scaling and Wavelet Bases on Non-Equally Spaced Knots

    Directory of Open Access Journals (Sweden)

    Jean Pierre Astruc

    2007-01-01

This paper investigates the mathematical framework of multiresolution analysis based on an irregularly spaced knot sequence. Our presentation is based on the construction of nested nonuniform spline multiresolution spaces. From these spaces, we present the construction of orthonormal scaling and wavelet basis functions on bounded intervals. For any arbitrary degree of the spline function, we provide an explicit generalization allowing the construction of the scaling and wavelet bases on nontraditional knot sequences. We show that the orthogonal decomposition is implemented using filter banks whose coefficients depend on the location of the knots in the sequence. Examples of orthonormal spline scaling and wavelet bases are provided. This approach can be used to interpolate irregularly sampled signals efficiently, while keeping the multiresolution approach.
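For degree zero the construction is especially transparent: on an arbitrary knot sequence the normalised indicator functions already form an orthonormal scaling basis. A minimal sketch (the paper's higher-degree spline bases need an additional orthogonalisation step, omitted here):

```python
import numpy as np

def scaling_basis(knots):
    # Orthonormal degree-0 (piecewise-constant) scaling functions on an
    # arbitrary knot sequence: phi_i = 1/sqrt(t_{i+1}-t_i) on [t_i, t_{i+1}).
    # Simplest instance of the nonuniform construction.
    knots = np.asarray(knots, dtype=float)
    widths = np.diff(knots)
    def phi(i, x):
        inside = (knots[i] <= x) & (x < knots[i + 1])
        return inside / np.sqrt(widths[i])
    return phi, len(widths)

knots = [0.0, 0.3, 1.1, 1.2, 2.0]            # irregularly spaced knots
phi, n = scaling_basis(knots)
xs = np.linspace(0.0, 2.0, 200001)
dx = xs[1] - xs[0]
B = np.array([phi(i, xs) for i in range(n)])
gram = B @ B.T * dx                          # numerical Gram matrix <phi_i, phi_j>
```

The Gram matrix is (numerically) the identity: disjoint supports give the off-diagonal zeros, and the 1/sqrt(width) normalisation gives unit norms regardless of knot spacing.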

  7. What is at stake in multi-scale approaches

    International Nuclear Information System (INIS)

    Jamet, Didier

    2008-01-01

Multi-scale approaches amount to analyzing physical phenomena at small space and time scales in order to model their effects at larger scales. This approach is very general in physics and engineering; one of the best examples of its success is certainly statistical physics, which makes it possible to recover classical thermodynamics and to determine its limits of application. Getting access to small-scale information aims at reducing the models' uncertainty, but it has a cost: fine-scale models may be more complex than larger-scale models, and their resolution may require the development of specific and possibly expensive methods, numerical simulation techniques and experiments. For instance, in applications related to nuclear engineering, using computational fluid dynamics instead of cruder models is a formidable engineering challenge because it requires resorting to high performance computing. Likewise, in two-phase flow modeling, the techniques of direct numerical simulation, where all interfaces are tracked individually and all turbulence scales are captured, are getting mature enough to be considered for averaged modeling purposes. However, resolving small-scale problems is a necessary step, but it is not sufficient in a multi-scale approach. An important modeling challenge is to determine how to treat small-scale data in order to get relevant information for larger-scale models. For some applications, such as single-phase turbulence or transfers in porous media, this up-scaling approach is known and is now used rather routinely. However, in two-phase flow modeling, the up-scaling approach is not as mature, and specific issues must be addressed that raise fundamental questions. This will be discussed and illustrated. (author)

  8. Generalized probabilistic scale space for image restoration.

    Science.gov (United States)

    Wong, Alexander; Mishra, Akshaya K

    2010-10-01

    A novel generalized sampling-based probabilistic scale space theory is proposed for image restoration. We explore extending the definition of scale space to better account for both noise and observation models, which is important for producing accurately restored images. A new class of scale-space realizations based on sampling and probability theory is introduced to realize this extended definition in the context of image restoration. Experimental results using 2-D images show that generalized sampling-based probabilistic scale-space theory can be used to produce more accurate restored images when compared with state-of-the-art scale-space formulations, particularly under situations characterized by low signal-to-noise ratios and image degradation.
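For contrast, here is a minimal version of the classical, deterministic linear scale space that the probabilistic formulation generalises: a stack of Gaussian blurs of a 1-D signal, with coarser scales progressively suppressing noise:

```python
import numpy as np

def gaussian_scale_space(signal, sigmas):
    # Classical linear scale space of a 1-D signal: one Gaussian blur per
    # scale parameter. The paper extends this deterministic construction
    # with sampling and explicit noise/observation models.
    x = np.arange(-50, 51)
    stack = []
    for s in sigmas:
        k = np.exp(-x**2 / (2.0 * s * s))
        k /= k.sum()                               # unit-mass kernel
        stack.append(np.convolve(signal, k, mode="same"))
    return np.array(stack)

rng = np.random.default_rng(0)
noisy = rng.standard_normal(1000)                  # white-noise test signal
stack = gaussian_scale_space(noisy, [1.0, 2.0, 4.0])
```

On white noise the standard deviation of each level drops as the scale parameter grows, which is the smoothing behaviour the restoration framework builds upon.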

  9. An alternative to scale-space representation for extracting local features in image recognition

    DEFF Research Database (Denmark)

    Andersen, Hans Jørgen; Nguyen, Phuong Giang

    2012-01-01

In image recognition, the common approach for extracting local features using a scale-space representation usually has three main steps: first, interest points are extracted at different scales; next, from a patch around each interest point, the rotation is calculated with corresponding orientation...... and compensation; and finally, a descriptor is computed for the derived patch (i.e. the feature of the patch). To avoid the memory- and computation-intensive process of constructing the scale-space, we use a method where no scale-space is required. This is done by dividing the given image into a number of triangles...... with sizes dependent on the content of the image, at the location of each triangle. In this paper, we demonstrate that by rotation of the interest regions at the triangles it is possible, in grey-scale images, to achieve a recognition precision comparable with that of MOPS. The test of the proposed method...

  10. A scale invariant covariance structure on jet space

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2005-01-01

    This paper considers scale invariance of statistical image models. We study statistical scale invariance of the covariance structure of jet space under scale space blurring and derive the necessary structure and conditions of the jet covariance matrix in order for it to be scale invariant. As par...

  11. Multi-Scale Singularity Trees: Soft-Linked Scale-Space Hierarchies

    DEFF Research Database (Denmark)

    Somchaipeng, Kerawit; Sporring, Jon; Kreiborg, Sven

    2005-01-01

    We consider images as manifolds embedded in a hybrid of a high dimensional space of coordinates and features. Using the proposed energy functional and mathematical landmarks, images are partitioned into segments. The nesting of image segments occurring at catastrophe points in the scale-space is ...

  12. A real-space stochastic density matrix approach for density functional electronic structure.

    Science.gov (United States)

    Beck, Thomas L

    2015-12-21

    The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful considering a future involving increasingly parallel computing architectures. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral as opposed to the extensive matrix operations in traditional approaches.

  13. Space Sustainment: A New Approach for America in Space

    Science.gov (United States)

    2014-12-01

This Schriever Essay (Second Place winner, Air & Space Power Journal, November–December 2014) argues for moving the international community toward promoting market incentives in international space law, which would open up the competitive space for new entrants; its notes cite reporting on a new space situational awareness satellite program and on the growing threat to U.S. space assets (Gruss; McDougall, Heavens and the Earth).

  14. Properties of Brownian Image Models in Scale-Space

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup

    2003-01-01

Brownian images) will be discussed in relation to linear scale-space theory, and it will be shown empirically that the second order statistics of natural images mapped into jet space may, within some scale interval, be modeled by the Brownian image model. This is consistent with the 1/f^2 power spectrum...... law that apparently governs natural images. Furthermore, the distribution of Brownian images mapped into jet space is Gaussian and an analytical expression can be derived for the covariance matrix of Brownian images in jet space. This matrix is also a good approximation of the covariance matrix......In this paper it is argued that the Brownian image model is the least committed, scale invariant, statistical image model which describes the second order statistics of natural images. Various properties of three different types of Gaussian image models (white noise, Brownian and fractional...

  15. A Principled Approach to the Specification of System Architectures for Space Missions

    Science.gov (United States)

    McKelvin, Mark L. Jr.; Castillo, Robert; Bonanne, Kevin; Bonnici, Michael; Cox, Brian; Gibson, Corrina; Leon, Juan P.; Gomez-Mustafa, Jose; Jimenez, Alejandro; Madni, Azad

    2015-01-01

Modern space systems are increasing in complexity and scale at an unprecedented pace. Consequently, innovative methods, processes, and tools are needed to cope with the increasing complexity of architecting these systems. A key systems challenge in practice is the ability to scale the processes, methods, and tools used to architect complex space systems. Traditionally, the process for specifying space system architectures has largely relied on capturing the system architecture in informal descriptions that are often embedded within loosely coupled design documents and domain expertise. Such informal descriptions often lead to misunderstandings between design teams, ambiguous specifications, difficulty in maintaining consistency as the architecture evolves throughout the system development life cycle, and costly design iterations. Therefore, traditional methods are becoming increasingly inefficient in coping with ever-increasing system complexity. We apply the principles of component-based design and platform-based design to the development of the system architecture for a practical space system to demonstrate the feasibility of our approach using SysML. Our results show that we are able to apply a systematic design method to manage system complexity, thus enabling effective data management, semantic coherence, and traceability across different levels of abstraction in the design chain. Just as important, our approach enables interoperability among heterogeneous tools in a concurrent-engineering, model-based design environment.

  16. Statistical distance and the approach to KNO scaling

    International Nuclear Information System (INIS)

    Diosi, L.; Hegyi, S.; Krasznovszky, S.

    1990-05-01

    A new method is proposed for characterizing the approach to KNO scaling. The essence of our method lies in the concept of statistical distance between nearby KNO distributions which reflects their distinguishability in spite of multiplicity fluctuations. It is shown that the geometry induced by the distance function defines a natural metric on the parameter space of a certain family of KNO distributions. Some examples are given in which the energy dependences of distinguishability of neighbouring KNO distributions are compared in nondiffractive hadron-hadron collisions and electron-positron annihilation. (author) 19 refs.; 4 figs
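A hedged sketch of the distinguishability idea, using the Wootters-type statistical distance arccos(sum_i sqrt(p_i q_i)) between discrete distributions. This is an illustrative stand-in for the paper's metric on KNO parameter space, to which such distances are related through the Fisher information metric for infinitesimally close distributions:

```python
import numpy as np

def statistical_distance(p, q):
    # Wootters-type statistical distance between discrete probability
    # distributions: the angle arccos(sum_i sqrt(p_i * q_i)).
    # Distributions that are hard to tell apart under sampling
    # fluctuations are close in this metric.
    p, q = np.asarray(p, float), np.asarray(q, float)
    bc = np.sum(np.sqrt(p * q))              # Bhattacharyya coefficient
    return float(np.arccos(np.clip(bc, -1.0, 1.0)))

p = [0.5, 0.3, 0.2]
d_self = statistical_distance(p, p)                   # indistinguishable: ~0
d_other = statistical_distance(p, [0.2, 0.3, 0.5])    # clearly separated
```

The distance is symmetric and vanishes only for identical distributions, the minimal properties needed to induce a metric on a parameter space of distributions.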

  17. Properties of small-scale interfacial turbulence from a novel thermography based approach

    Science.gov (United States)

    Schnieders, Jana; Garbe, Christoph

    2013-04-01

Oceans cover nearly two thirds of the earth's surface, and exchange processes between the atmosphere and the ocean are of fundamental environmental importance. At the air-sea interface, complex interaction processes take place on a multitude of scales. Turbulence plays a key role in the coupling of momentum, heat and mass transfer [2]. Here we use high resolution infrared imagery to visualize near-surface aqueous turbulence. Thermographic data is analyzed from a range of laboratory facilities and experimental conditions, with wind speeds ranging from 1 m s-1 to 7 m s-1 and various surface conditions. The surface heat pattern is formed by distinct structures on two scales - small-scale, short-lived structures termed fish scales and larger-scale cold streaks that are consistent with the footprints of Langmuir circulations. There are two key characteristics of the observed surface heat patterns: (1) they exhibit characteristic spatial scales; (2) their structure changes with increasing wind stress and with surface conditions. We present a new image processing based approach to the analysis of the spacing of cold streaks, based on a machine learning approach [4, 1] to classify the thermal footprints of near-surface turbulence. Our random forest classifier is based on classical features in image processing, such as gray value gradients and edge detecting features. The result is a pixel-wise classification of the surface heat pattern with a subsequent analysis of the streak spacing. This approach has been presented in [3] and can be applied to a wide range of experimental data. In spite of entirely different boundary conditions, the spacing of turbulent cells near the air-water interface seems to match the expected turbulent cell size for flow near a no-slip wall. The analysis of the spacing of cold streaks shows consistent behavior across a range of laboratory facilities when expressed as a function of the water-sided friction velocity, u*. The scales
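A toy version of the streak-spacing analysis on a synthetic surface-temperature row: a simple threshold-plus-local-minimum detector stands in for the paper's random forest classifier, and the synthetic pattern and threshold are illustrative assumptions:

```python
import numpy as np

def streak_spacing(surface_row, depth=0.5):
    # Toy cold-streak detector on one row of a surface temperature image:
    # streaks are taken as local minima more than `depth` below the row
    # mean (a crude stand-in for the pixel-wise random forest classifier).
    r = np.asarray(surface_row, float)
    cold = r < r.mean() - depth
    idx = [i for i in range(1, len(r) - 1)
           if cold[i] and r[i] <= r[i - 1] and r[i] <= r[i + 1]]
    return np.diff(idx)                     # distances between streaks

x = np.arange(200)
row = np.sin(2 * np.pi * x / 25.0)          # synthetic streaks, period 25 px
spacing = streak_spacing(row, depth=0.5)    # recovered spacing: 25 px
```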

  18. Space-coiling fractal metamaterial with multi-bandgaps on subwavelength scale

    Science.gov (United States)

    Man, Xianfeng; Liu, Tingting; Xia, Baizhan; Luo, Zhen; Xie, Longxiang; Liu, Jian

    2018-06-01

Acoustic metamaterials are remarkably different from conventional materials, as they can flexibly manipulate and control the propagation of sound waves. Unlike the locally resonant metamaterials introduced in earlier studies, we designed an ultraslow artificial structure with a sound speed much lower than that in air. In this paper, the space-coiling approach is proposed for achieving artificial metamaterials for extremely low-frequency airborne sound. In addition, the self-similar fractal technique is utilized to design space-coiling Mie-resonance-based metamaterials (MRMMs) with a band-dispersive spectrum. The band structures of two-dimensional (2D) acoustic metamaterials with different fractal levels are illustrated using the finite element method. The low-frequency bandgap forms easily, and multi-bandgap properties are observed in high-level fractals. Furthermore, the designed MRMMs with higher-order fractal space coiling show good robustness against irregular arrangement. The proposed artificial structure was also found to modify and control the radiation field arbitrarily. Thus, this work provides useful guidelines for the design of acoustic filtering devices and acoustic wavefront shaping applications on the subwavelength scale.

  19. Multi-scale Dynamical Processes in Space and Astrophysical Plasmas

    CERN Document Server

    Vörös, Zoltán; IAFA 2011 - International Astrophysics Forum 2011 : Frontiers in Space Environment Research

    2012-01-01

    Magnetized plasmas in the universe exhibit complex dynamical behavior over a huge range of scales. The fundamental mechanisms of energy transport, redistribution and conversion occur at multiple scales. The driving mechanisms often include energy accumulation, free-energy-excited relaxation processes, dissipation and self-organization. The plasma processes associated with energy conversion, transport and self-organization, such as magnetic reconnection, instabilities, linear and nonlinear waves, wave-particle interactions, dynamo processes, turbulence, heating, diffusion and convection represent fundamental physical effects. They demonstrate similar dynamical behavior in near-Earth space, on the Sun, in the heliosphere and in astrophysical environments. 'Multi-scale Dynamical Processes in Space and Astrophysical Plasmas' presents the proceedings of the International Astrophysics Forum Alpbach 2011. The contributions discuss the latest advances in the exploration of dynamical behavior in space plasmas environm...

  20. A biologically inspired scale-space for illumination invariant feature detection

    International Nuclear Information System (INIS)

    Vonikakis, Vasillios; Chrysostomou, Dimitrios; Kouskouridas, Rigas; Gasteratos, Antonios

    2013-01-01

This paper presents a new illumination invariant operator, combining the nonlinear characteristics of biological center-surround cells with the classic difference of Gaussians operator. It specifically targets underexposed image regions, exhibiting increased sensitivity to low contrast while not affecting performance in correctly exposed ones. The proposed operator can be used to create a scale-space, which in turn can be part of a SIFT-based detector module. The main advantage of this illumination invariant scale-space is that, using just one global threshold, keypoints can be detected in both dark and bright image regions. In order to evaluate the degree of illumination invariance exhibited by the proposed operator, as well as by other existing ones, a new benchmark dataset is introduced. It features a greater variety of imaging conditions, compared to existing databases, containing real scenes under various degrees and combinations of uniform and non-uniform illumination. Experimental results show that the proposed detector extracts a greater number of features, with a high level of repeatability, compared to other approaches, for both uniform and non-uniform illumination. This, along with its simple implementation, renders the proposed feature detector particularly appropriate for outdoor vision systems working in environments under uncontrolled illumination conditions. (paper)

  1. Structural health monitoring using DOG multi-scale space: an approach for analyzing damage characteristics

    Science.gov (United States)

    Guo, Tian; Xu, Zili

    2018-03-01

Measurement noise is inevitable in practice; thus, it is difficult to identify defects, cracks or damage in a structure while simultaneously suppressing noise. In this work, a novel method is introduced to detect multiple instances of damage in noisy environments. Based on multi-scale space analysis for discrete signals, a method for extracting damage characteristics from the measured displacement mode shape is illustrated. Moreover, the proposed method incorporates a data fusion algorithm to further eliminate measurement noise-based interference. The effectiveness of the method is verified numerically and experimentally on different structural types. The results demonstrate two advantages of the proposed method. First, damage features are extracted from the difference of the multi-scale representation, so that noise amplification is avoided. Second, the data fusion technique provides a global decision, which retains the damage features while maximally eliminating the uncertainty. Monte Carlo simulations are utilized to validate that the proposed method has higher accuracy in damage detection.

  2. Automatic Measurement in Large-Scale Space with the Laser Theodolite and Vision Guiding Technology

    Directory of Open Access Journals (Sweden)

    Bin Wu

    2013-01-01

The multi-theodolite intersection measurement is a traditional approach to coordinate measurement in large-scale space. However, the procedure of manual labeling and aiming results in a low level of automation and low measuring efficiency, and the measurement accuracy is easily affected by manual aiming error. Building on traditional theodolite measuring methods, this paper introduces the vision measurement principle and presents a novel automatic measurement method for large-scale space and large workpieces (equipment), combining laser theodolite measuring and vision guiding technologies. The measuring mark is established on the surface of the measured workpiece by a collimating laser which is coaxial with the sight axis of the theodolite, so cooperation targets or manual marks are no longer needed. With the theoretical model data and multiresolution visual imaging and tracking technology, the method realizes automatic, quick, and accurate measurement of large workpieces in large-scale space. Meanwhile, the impact of human error is reduced and the measuring efficiency is improved. Therefore, this method has significant ramifications for the measurement of large workpieces, such as measuring the geometric appearance characteristics of ships, large aircraft, and spacecraft, and deformation monitoring for large buildings and dams.
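The underlying intersection geometry can be sketched in two dimensions: each theodolite sights the target along a known azimuth, and the target coordinates follow from intersecting the two rays (a planar simplification of the full 3-D intersection):

```python
import math

def intersect(p1, a1, p2, a2):
    # Two-theodolite intersection in the horizontal plane: station p_i
    # sights the target along azimuth a_i (radians, in the x-y plane);
    # the target lies where the two sight rays cross.
    d1 = (math.cos(a1), math.sin(a1))
    d2 = (math.cos(a2), math.sin(a2))
    # Solve p1 + t*d1 = p2 + s*d2 for t by Cramer's rule.
    det = d1[0] * (-d2[1]) + d2[0] * d1[1]
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) + d2[0] * ry) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# target at (3, 4) sighted from stations at (0, 0) and (10, 0)
tx, ty = intersect((0.0, 0.0), math.atan2(4.0, 3.0),
                   (10.0, 0.0), math.atan2(4.0, -7.0))
```

Small aiming errors in the azimuths propagate directly into the intersection point, which is why replacing manual aiming with laser marking and vision guidance improves accuracy.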

  3. Subjective assessment of impairment in scale-space-coded images

    NARCIS (Netherlands)

    Ridder, de H.; Majoor, G.M.M.

    1988-01-01

Direct category scaling and a scaling procedure in accordance with Functional Measurement Theory (Anderson, 1982) have been used to assess impairment in scale-space-coded images, displayed on a black-and-white TV monitor. The image of a complex scene was passed through a Gaussian filter of limited

  4. A dynamically adaptive wavelet approach to stochastic computations based on polynomial chaos - capturing all scales of random modes on independent grids

    International Nuclear Information System (INIS)

    Ren Xiaoan; Wu Wenquan; Xanthis, Leonidas S.

    2011-01-01

Highlights: (1) New approach for stochastic computations based on polynomial chaos. (2) Development of a dynamically adaptive wavelet multiscale solver using space refinement. (3) Accurate capture of steep gradients and multiscale features in stochastic problems. (4) All scales of each random mode are captured on independent grids. (5) Numerical examples demonstrate the need for different space resolutions per mode. Abstract: In stochastic computations, or uncertainty quantification methods, the spectral approach based on the polynomial chaos expansion in random space leads to a coupled system of deterministic equations for the coefficients of the expansion. The size of this system increases drastically as the number of independent random variables and/or the order of the polynomial chaos expansion increases. This is invariably the case for large scale simulations and/or problems involving steep gradients and other multiscale features; such features are variously reflected in each solution component or random/uncertainty mode, requiring the development of adaptive methods for their accurate resolution. In this paper we propose a new approach for treating such problems, based on a dynamically adaptive wavelet methodology involving space refinement in physical space that allows all scales of each solution component to be refined independently of the rest. We exemplify this using the convection-diffusion model with random input data and present three numerical examples demonstrating the salient features of the proposed method. Thus we establish a new, elegant and flexible approach for stochastic problems with steep gradients and multiscale features based on polynomial chaos expansions.
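The polynomial chaos expansion at the core of the method can be sketched for a single standard Gaussian variable ξ, with coefficients obtained by Gauss-Hermite quadrature. The paper's setting couples many such random modes, each refined adaptively in physical space; this single-variable sketch only illustrates the expansion itself:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as H

def pce_coeffs(f, order, quad=40):
    # Polynomial chaos coefficients of f(xi), xi ~ N(0,1), in the
    # probabilists' Hermite basis He_k: c_k = E[f(xi) * He_k(xi)] / k!,
    # computed with Gauss-HermiteE quadrature (weight exp(-x^2/2)).
    x, w = H.hermegauss(quad)
    w = w / math.sqrt(2 * math.pi)            # normalise to the Gaussian measure
    return np.array([np.sum(w * f(x) * H.hermeval(x, [0.0] * k + [1.0]))
                     / math.factorial(k) for k in range(order + 1)])

c = pce_coeffs(np.exp, order=10)
mean = c[0]                                    # E[exp(xi)] = sqrt(e)
recon = H.hermeval(1.0, c)                     # truncated expansion at xi = 1
```

For f = exp the exact coefficients are sqrt(e)/k!, so the zeroth coefficient recovers the mean and the truncated series reproduces exp(1) to high accuracy.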

  5. Conceptual Design and Demonstration of Space Scale for Measuring Mass in Microgravity Environment

    Directory of Open Access Journals (Sweden)

    Youn-Kyu Kim

    2015-12-01

In this study, a new concept for a space scale that measures mass in a microgravity environment was proposed, using the inertial force properties of an object to measure its mass. The space scale detects the momentum change of the specimen and reference masses using a load-cell sensor as the force transducer, based on Newton's laws of motion, and calculates the specimen mass by comparing the inertial forces of the specimen and reference masses in the same acceleration field. Using this concept, a space scale with a capacity of 3 kg based on the law of momentum conservation was implemented and demonstrated under microgravity conditions onboard the International Space Station (ISS) with an accuracy of ±1 g. Performance analysis verified that a compact instrument can measure mass quickly and with reasonable accuracy under microgravity conditions.
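The measurement principle reduces to comparing inertial forces under a common acceleration: F = m*a, so m_spec = m_ref * F_spec / F_ref. A minimal sketch with hypothetical load-cell readings (sensor calibration, averaging over the motion cycle, and momentum bookkeeping are omitted):

```python
def specimen_mass(m_ref, f_ref, f_spec):
    # Core idea of the microgravity space scale: drive the reference and
    # the specimen through the same acceleration profile, read the
    # inertial forces from the load cell, and compare: since F = m*a with
    # the same a, the specimen mass is m_ref * F_spec / F_ref.
    return m_ref * f_spec / f_ref

# hypothetical readings: a 1.000 kg reference gives 2.50 N, the specimen 6.25 N
m = specimen_mass(1.000, 2.50, 6.25)   # -> 2.5 kg
```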

  6. Performance/price estimates for cortex-scale hardware: a design space exploration.

    Science.gov (United States)

    Zaveri, Mazad S; Hammerstrom, Dan

    2011-04-01

    In this paper, we revisit the concept of virtualization. Virtualization is useful for understanding and investigating the performance/price and other trade-offs related to the hardware design space. Moreover, it is perhaps the most important aspect of a hardware design space exploration. Such a design space exploration is a necessary part of the study of hardware architectures for large-scale computational models for intelligent computing, including AI, Bayesian, bio-inspired and neural models. A methodical exploration is needed to identify potentially interesting regions in the design space, and to assess the relative performance/price points of these implementations. As an example, in this paper we investigate the performance/price of (digital and mixed-signal) CMOS and hypothetical CMOL (nanogrid) technology based hardware implementations of human cortex-scale spiking neural systems. Through this analysis, and the resulting performance/price points, we demonstrate, in general, the importance of virtualization, and of doing these kinds of design space explorations. The specific results suggest that hybrid nanotechnology such as CMOL is a promising candidate to implement very large-scale spiking neural systems, providing a more efficient utilization of the density and storage benefits of emerging nano-scale technologies. In general, we believe that the study of such hypothetical designs/architectures will guide the neuromorphic hardware community towards building large-scale systems, and help guide research trends in intelligent computing, and computer engineering. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. Fractional Sobolev’s Spaces on Time Scales via Conformable Fractional Calculus and Their Application to a Fractional Differential Equation on Time Scales

    Directory of Open Access Journals (Sweden)

    Yanning Wang

    2016-01-01

    Using conformable fractional calculus on time scales, we first introduce fractional Sobolev spaces on time scales, characterize them, and define weak conformable fractional derivatives. Second, we prove the equivalence of some norms in the introduced spaces and derive their completeness, reflexivity, uniform convexity, and compactness of some imbeddings, which can be regarded as a novelty item. Then, as an application, we present a recent approach via variational methods and critical point theory to obtain the existence of solutions for a p-Laplacian conformable fractional differential equation boundary value problem on a time scale $\mathbb{T}$: $T_\alpha\big(|T_\alpha u|^{p-2} T_\alpha u\big)(t) = \nabla F(\sigma(t), u(\sigma(t)))$, $\Delta$-a.e. $t \in [a,b]_{\mathbb{T}}^{\kappa^2}$, $u(a) - u(b) = 0$, $T_\alpha(u)(a) - T_\alpha(u)(b) = 0$, where $T_\alpha(u)(t)$ denotes the conformable fractional derivative of $u$ of order $\alpha$ at $t$, $\sigma$ is the forward jump operator, $a, b \in \mathbb{T}$, $0 < \alpha \le 1$, $p > 1$, and $F : [0,T]_{\mathbb{T}} \times \mathbb{R}^N \to \mathbb{R}$. By establishing a proper variational setting, we obtain three existence results. Finally, we present two examples to illustrate the feasibility and effectiveness of the existence results.
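For context, the conformable fractional derivative underlying the problem above is commonly defined on the reals as below; the time-scale version in the paper adapts this limit using the forward jump operator:

```latex
% Standard definition of the conformable fractional derivative of order
% \alpha \in (0,1] (Khalil et al.):
T_\alpha(f)(t) = \lim_{\varepsilon \to 0}
  \frac{f\!\left(t + \varepsilon\, t^{\,1-\alpha}\right) - f(t)}{\varepsilon},
\qquad t > 0 .
% For differentiable f this reduces to T_\alpha(f)(t) = t^{1-\alpha} f'(t),
% so \alpha = 1 recovers the ordinary derivative.
```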

  8. Parametric Approach in Designing Large-Scale Urban Architectural Objects

    Directory of Open Access Journals (Sweden)

    Arne Riekstiņš

    2011-04-01

    When the disciplines of various science fields converge and develop, new approaches to contemporary architecture arise. The author approaches digital architecture from a parametric viewpoint, revealing its generative capacity, which originates in the aeronautical, naval, automobile, and product-design industries. The author also goes explicitly through his design-cycle workflow for testing the latest methodologies in architectural design. The design process steps involved: extrapolating valuable statistical data about the site into three-dimensional diagrams, defining the materiality of what is being produced, ways of presenting structural skin and structure simultaneously, contacting the object with the ground, interior program definition of the building with floors and possible spaces, the logic of fabrication, and CNC milling of the prototype. The tool developed by the author and reviewed in this article features enormous performative capacity and is applicable to various architectural design scales.

  9. Lagrangian space consistency relation for large scale structure

    International Nuclear Information System (INIS)

    Horn, Bart; Hui, Lam; Xiao, Xiao

    2015-01-01

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.

  10. SPACE BASED INTERCEPTOR SCALING

    Energy Technology Data Exchange (ETDEWEB)

    G. CANAVAN

    2001-02-01

    Space-Based Interceptors (SBI) have ranges that are adequate to address rogue ICBMs. They are not overly sensitive to 30-60 s delay times. Current technologies would support boost-phase intercept with about 150 interceptors. Higher acceleration and velocity could reduce that number by about a factor of 3, at the cost of heavier and more expensive kinetic kill vehicles (KKVs). 6g SBI would reduce optimal constellation costs by about 35%; 8g SBI would reduce them by another 20%. Interceptor ranges fall rapidly with theater missile range, and constellations increase significantly for ranges under 3,000 km, even with advanced interceptor technology. For distributed launches, these estimates recover earlier strategic scalings, which demonstrate the improved absentee ratio for larger or multiple launch areas. Constellations increase with the number of missiles and the number of interceptors launched at each. The economic estimates above suggest that two SBI per missile with a modest midcourse underlay are appropriate. The SBI KKV technology would appear to be common to space- and surface-based boost-phase systems, and could have synergisms with improved midcourse intercept and discrimination systems. While advanced technology could be helpful in reducing costs, particularly for short-range theater missiles, current technology appears adequate for pressing rogue ICBM, accidental, and unauthorized launches.

  11. Simultaneous approximation in scales of Banach spaces

    International Nuclear Information System (INIS)

    Bramble, J.H.; Scott, R.

    1978-01-01

    The problem of verifying optimal approximation simultaneously in different norms in a Banach scale is reduced to verification of optimal approximation in the highest order norm. The basic tool used is the Banach space interpolation method developed by Lions and Peetre. Applications are given to several problems arising in the theory of finite element methods.

  12. Approach to developing reliable space reactor power systems

    International Nuclear Information System (INIS)

    Mondt, J.F.; Shinbrot, C.H.

    1991-01-01

    The Space Reactor Power System Project is in the engineering development phase of a three-phase program. During Phase II, the Engineering Development Phase, the SP-100 Project has defined and is pursuing a new approach to developing reliable power systems. This paper describes the approach to developing such a system during the early technology phase, along with some preliminary examples to help explain it. Developing reliable components to meet space reactor power system requirements is based on a top-down systems approach, which includes a point design based on a detailed technical specification of a 100 kW power system.

  13. SWIFF: Space weather integrated forecasting framework

    Directory of Open Access Journals (Sweden)

    Frederiksen Jacob Trier

    2013-02-01

    SWIFF is a project funded by the Seventh Framework Programme of the European Commission to study the mathematical-physics models that form the basis for space weather forecasting. The phenomena of space weather span a tremendous range of densities and temperatures, with scales ranging over 10 orders of magnitude in space and time. Additionally, even in local regions there are concurrent processes developing at the electron, ion, and global scales that strongly interact with each other. The fundamental challenge in modelling space weather is the need to address multiple physics and multiple scales. Here we present our approach: taking existing expertise in fluid and kinetic models to produce an integrated mathematical approach and software infrastructure that allows fluid and kinetic processes to be modelled together. SWIFF also aims at using this new infrastructure to model specific coupled processes at the solar corona, in interplanetary space, and in the interaction with the Earth's magnetosphere.

  14. Examining Similarity Structure: Multidimensional Scaling and Related Approaches in Neuroimaging

    Directory of Open Access Journals (Sweden)

    Svetlana V. Shinkareva

    2013-01-01

    This paper covers similarity analyses, a subset of multivariate pattern analysis techniques that are based on similarity spaces defined by multivariate patterns. These techniques offer several advantages and complement other methods for brain data analyses, as they allow for comparison of representational structure across individuals, brain regions, and data acquisition methods. Particular attention is paid to multidimensional scaling and related approaches that yield spatial representations or provide methods for characterizing individual differences. We highlight unique contributions of these methods by reviewing recent applications to functional magnetic resonance imaging data and emphasize areas of caution in applying and interpreting similarity analysis methods.

  15. Impact of large-scale tides on cosmological distortions via redshift-space power spectrum

    Science.gov (United States)

    Akitsu, Kazuyuki; Takada, Masahiro

    2018-03-01

    Although large-scale perturbations beyond a finite-volume survey region are not direct observables, they affect measurements of the clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, compared with the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies in a way that depends on the alignment between the tide, the wave vector of the small-scale modes, and the line-of-sight direction. Using the perturbation theory of structure formation, we derive a response function of the redshift-space power spectrum to the large-scale tide. We then investigate the impact of the large-scale tide on the estimation of cosmological distances and the redshift-space distortion parameter via the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than an additional source of statistical error, and show that the degradation in these parameters is restored if we can employ the prior on the rms tide amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained to an accuracy better than the CDM prediction if the effects up to larger wave numbers in the nonlinear regime can be included.

  16. Geo-spatial Cognition on Human's Social Activity Space Based on Multi-scale Grids

    Directory of Open Access Journals (Sweden)

    ZHAI Weixin

    2016-12-01

    Widely applied location-aware devices, including mobile phones and GPS receivers, have provided great convenience for collecting large volumes of individuals' geographical information, and research on human social activity space has attracted an increasing number of researchers. In our research, based on location-tagged Flickr data in China from 2004 to May 2014, we choose five levels of spatial grids to form a multi-scale framework for investigating the correlation between scale and the geo-spatial cognition of human social activity space. The HT-index, a fractal measure inspired by Alexander, is selected to estimate the maturity of social activity at different scales. The results indicate that the scale characteristics are related to spatial cognition to a certain extent. It is favorable to use the spatial grid as a tool for controlling scale in geo-spatial cognition of human social activity space.
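The HT-index mentioned above (a head/tail-breaks-based hierarchy measure) can be sketched as follows; the 50% "minority" threshold and the sample data are assumptions for illustration (some variants use 40%):

```python
# Minimal sketch of the HT-index: count how many times the "head" (values
# above the mean) remains a minority when we recurse into it. Heavy-tailed
# data (e.g. photos per grid cell) yields more hierarchical levels.

def ht_index(values):
    level = 1
    data = list(values)
    while len(data) > 1:
        mean = sum(data) / len(data)
        head = [v for v in data if v > mean]
        if not head or len(head) / len(data) >= 0.5:
            break               # head no longer a minority: stop
        level += 1              # one more hierarchical level detected
        data = head             # recurse into the head
    return level

# Heavy-tailed counts produce a higher HT-index than uniform data.
heavy = [1] * 60 + [2] * 20 + [8] * 10 + [64] * 5 + [512] * 2
print(ht_index(heavy))      # 3: three hierarchical levels
print(ht_index([5] * 10))   # 1: no hierarchy in uniform data
```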

  17. Constructive approaches to the space NPP designing

    International Nuclear Information System (INIS)

    Eremin, A.G.; Korobkov, L.S.; Matveev, A.V.; Trukhanov, Yu.L.; Pyshko, A.P.

    2000-01-01

    An example of designing a space NPP intended to supply power to a telecommunication satellite is considered. It is shown that a design approach based on introducing a leading criterion and dividing the design problems into two independent groups (the reactor with radiation shield, and the equipment module) makes it possible to develop an optimal space NPP design.

  18. Approaching space-time through velocity in doubly special relativity

    International Nuclear Information System (INIS)

    Aloisio, R.; Galante, A.; Grillo, A.F.; Luzio, E.; Mendez, F.

    2004-01-01

    We discuss the definition of velocity as dE/d|p|, where E and p are the energy and momentum of a particle, in doubly special relativity (DSR). If this definition matches dx/dt appropriate for the space-time sector, then space-time can in principle be built consistently with the existence of an invariant length scale. We show that, within different possible velocity definitions, a space-time compatible with momentum-space DSR principles cannot be derived.

  19. Mapping social values for urban green spaces using Public Participation GIS: the influence of spatial scale and implications for landscape planning.

    Science.gov (United States)

    Ives, Christopher

    2015-04-01

    Measuring social values for landscapes is an emerging field of research and is critical to the successful management of urban ecosystems. Green open space planning has traditionally relied on rigid standards and metrics without considering the physical requirements of green spaces that are valued for different reasons and by different people. Relating social landscape values to key environmental variables provides a much stronger evidence base for planning landscapes that are both socially desirable and environmentally sustainable. This study spatially quantified residents' values for green space in the Lower Hunter Valley of New South Wales, Australia by enabling participants to mark their values for specific open spaces on interactive paper maps. The survey instrument was designed to evaluate the effect of spatial scale by providing maps of residents' local area at both suburb and municipality scales. The importance of open space values differed depending on whether they were indicated via marker dots or reported on in a general aspatial sense. This suggests that certain open space functions were inadequately provided for in the local area (specifically, cultural significance and health/therapeutic value). Additionally, all value types recorded a greater abundance of marker dots at the finer (suburb) scale compared to the coarser (municipality) scale, but this pattern was more pronounced for some values than others (e.g. physical exercise value). Finally, significant relationships were observed between the abundance of value marker dots in parks and their environmental characteristics (e.g. percentage of vegetation). These results have interesting implications when considering the compatibility between different functions of green spaces and how planners can incorporate information about social values with more traditional approaches to green space planning.

  20. Coarse-to-Fine Segmentation with Shape-Tailored Continuum Scale Spaces

    KAUST Repository

    Khan, Naeemullah

    2017-11-09

    We formulate an energy for segmentation that is designed to have preference for segmenting the coarse over fine structure of the image, without smoothing across boundaries of regions. The energy is formulated by integrating a continuum of scales from a scale space computed from the heat equation within regions. We show that the energy can be optimized without computing a continuum of scales, but instead from a single scale. This makes the method computationally efficient in comparison to energies using a discrete set of scales. We apply our method to texture and motion segmentation. Experiments on benchmark datasets show that a continuum of scales leads to better segmentation accuracy over discrete scales and other competing methods.
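A minimal sketch of such a heat-equation (Gaussian) scale space; the image and scale values are synthetic, and the code illustrates the general construction rather than the authors' implementation:

```python
import numpy as np

# A Gaussian scale space is the solution of the heat equation u_t = Δu with
# the image as initial condition; blurring with a Gaussian of standard
# deviation sigma gives the solution at time t = sigma^2 / 2.

def gaussian_kernel(sigma):
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian convolution with reflect padding."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    pad = np.pad(img, r, mode="reflect")
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, "valid"), 1, pad)
    return np.apply_along_axis(lambda col: np.convolve(col, k, "valid"), 0, tmp)

# Stack a few scales of a random test image into a discrete scale space.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
scales = [1.0, 2.0, 4.0]
space = np.stack([blur(img, s) for s in scales])
print(space.shape)                           # (3, 32, 32)
print(np.var(space[0]) > np.var(space[2]))   # True: variance shrinks with scale
```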


  2. Field-theoretic approach to gravity in the flat space-time

    Energy Technology Data Exchange (ETDEWEB)

    Cavalleri, G [Centro Informazioni Studi Esperienze, Milan (Italy); Milan Univ. (Italy). Ist. di Fisica); Spinelli, G [Istituto di Matematica del Politecnico di Milano, Milano (Italy)

    1980-01-01

    This paper discusses how the field-theoretical approach to gravity, starting from flat space-time, is wider than the Einstein approach. The flat approach is able to predict the structure of the observable space as a consequence of the behaviour of the particle proper masses. The field equations are formally equal to Einstein's equations without the cosmological term.

  3. Toward multi-scale simulation of reconnection phenomena in space plasma

    Science.gov (United States)

    Den, M.; Horiuchi, R.; Usami, S.; Tanaka, T.; Ogawa, T.; Ohtani, H.

    2013-12-01

    Magnetic reconnection is considered to play an important role in space phenomena such as substorms in the Earth's magnetosphere. It is well known that magnetic reconnection is controlled by microscopic kinetic mechanisms: the frozen-in condition is broken due to particle kinetic effects, and collisionless reconnection is triggered when the current sheet is compressed to ion kinetic scales under the influence of external driving flow. On the other hand, the magnetic field configuration leading to the formation of the diffusion region is determined on macroscopic scales, and the topological change after reconnection is also expressed on macroscopic scales. Thus magnetic reconnection is a typical multi-scale phenomenon in which microscopic and macroscopic physics are strongly coupled. Recently Horiuchi et al. developed an effective resistivity model based on particle-in-cell (PIC) simulation results obtained in the study of collisionless driven reconnection and applied it to a global magnetohydrodynamics (MHD) simulation of substorms in the Earth's magnetosphere, showing that global three-dimensional MHD simulation reproduces global substorm behavior such as dipolarization and flux rope formation. Usami et al. developed a multi-hierarchy simulation model in which macroscopic and microscopic physics are solved self-consistently and simultaneously. Based on the domain decomposition method, this model consists of three parts: an MHD algorithm for macroscopic global dynamics, a PIC algorithm for microscopic kinetic physics, and an interface algorithm to interlock the macro and micro hierarchies. They verified the interface algorithm by simulation of plasma injection flow. In their latest work, this model was applied to collisionless reconnection in an open system, and magnetic reconnection was successfully reproduced. In this paper, we describe our approach to clarifying multi-scale phenomena and report its current status, including our recent work on extending the MHD domain to the global system.

  4. Hierarchical Stereo Matching in Two-Scale Space for Cyber-Physical System

    Directory of Open Access Journals (Sweden)

    Eunah Choi

    2017-07-01

    Dense disparity map estimation from a high-resolution stereo image is a very difficult problem in terms of both matching accuracy and computation efficiency, because an exhaustive disparity search at full resolution is required. In general, examining more pixels in the stereo view results in more ambiguous correspondences. When a high-resolution image is down-sampled, the high-frequency components of the fine-scaled image are at risk of disappearing in the coarse-resolution image. Furthermore, if erroneous disparity estimates caused by missing high-frequency components are propagated across scale space, ultimately, false disparity estimates are obtained. To solve these problems, we introduce an efficient hierarchical stereo matching method in two-scale space. This method applies disparity estimation to the reduced-resolution image, and the disparity result is then up-sampled to the original resolution. The disparity estimates in the high-frequency (or edge-component) regions of the full-resolution image are combined with the up-sampled disparity results. In this study, we extracted the high-frequency areas from the scale-space representation by using difference of Gaussian (DoG) or found edge components using a Canny operator. Then, edge-aware disparity propagation was used to refine the disparity map. The experimental results show that the proposed algorithm outperforms previous methods.
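The coarse-to-fine combination step described above can be sketched as follows; the blur, threshold values, and disparity maps are placeholder choices for illustration, not the authors' pipeline:

```python
import numpy as np

# Sketch of the two-scale fusion idea: disparities estimated at coarse scale
# are up-sampled, then overwritten in high-frequency (DoG-like) regions with
# full-resolution estimates. Disparity maps here are placeholders; a real
# pipeline would produce them with a stereo matcher.

def box_blur(img, r):
    """Crude separable box blur used as a cheap Gaussian stand-in."""
    k = np.ones(2 * r + 1) / (2 * r + 1)
    pad = np.pad(img, r, mode="edge")
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), 1, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), 0, tmp)

def high_freq_mask(img, r1=1, r2=3, thresh=0.05):
    """Difference-of-blurs as a DoG proxy: large response marks edges/detail."""
    dog = box_blur(img, r1) - box_blur(img, r2)
    return np.abs(dog) > thresh

def fuse(coarse_disp_up, fine_disp, mask):
    """Keep up-sampled coarse disparities except where detail demands fine ones."""
    return np.where(mask, fine_disp, coarse_disp_up)

# A vertical step edge is flagged as high-frequency; flat areas are not.
edge = np.zeros((16, 16)); edge[:, 8:] = 1.0
mask = high_freq_mask(edge)
print(bool(mask[8, 8]), bool(mask[8, 0]))  # True False
```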


  6. A brain MRI bias field correction method created in the Gaussian multi-scale space

    Science.gov (United States)

    Chen, Mingsheng; Qin, Mingxin

    2017-07-01

    A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by convolving the inhomogeneous MR image with a two-dimensional Gaussian function. In this multi-scale space, the method retrieves image details from the difference between the original image and the convolved image, and then obtains an inhomogeneity-free image as the weighted sum of the image details in each layer of the space. Next, the bias-field-corrected MR image is obtained after a γ (gamma) correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated the superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
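A rough sketch of the described pipeline, with made-up sigmas, weights, and gamma value (the paper's actual parameters are not given here):

```python
import numpy as np

# Illustrative sketch: build a Gaussian multi-scale space, take the weighted
# sum of per-layer details (original minus blurred), then apply a gamma
# correction. All parameters below are invented for demonstration.

def gauss_blur(img, sigma):
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2)); k /= k.sum()
    pad = np.pad(img, r, mode="reflect")
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), 1, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), 0, tmp)

def correct_bias(img, sigmas=(2, 4, 8), weights=(0.5, 0.3, 0.2), gamma=0.8):
    details = [img - gauss_blur(img, s) for s in sigmas]   # per-layer detail
    flat = sum(w * d for w, d in zip(weights, details))     # bias removed
    lo, hi = flat.min(), flat.max()
    flat = (flat - lo) / (hi - lo + 1e-12)                  # normalize to [0, 1]
    return flat ** gamma                                    # gamma correction

# Synthetic test: tissue-like detail multiplied by a smooth bias field.
yy, xx = np.mgrid[0:64, 0:64]
bias = 1.0 + 0.5 * xx / 63.0
detail = (np.sin(yy) > 0).astype(float)
corrected = correct_bias(detail * bias)
print(corrected.shape, float(corrected.min()) >= 0.0)  # (64, 64) True
```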

  7. Exploring Multi-Scale Spatiotemporal Twitter User Mobility Patterns with a Visual-Analytics Approach

    Directory of Open Access Journals (Sweden)

    Junjun Yin

    2016-10-01

    Understanding human mobility patterns is of great importance for urban planning, traffic management, and even marketing campaigns. However, the capability of capturing detailed human movements with fine-grained spatial and temporal granularity is still limited. In this study, we extracted high-resolution mobility data from a collection of over 1.3 billion geo-located Twitter messages. Unlike mobile phone call records, whose access is restricted owing to concerns about infringement of individual privacy, the dataset was collected from publicly accessible Twitter data streams. In this paper, we employed a visual-analytics approach to studying multi-scale spatiotemporal Twitter user mobility patterns in the contiguous United States during the year 2014. Our approach included a scalable visual-analytics framework to deliver efficiency and scalability in filtering large volumes of geo-located tweets, modeling and extracting Twitter user movements, generating space-time user trajectories, and summarizing multi-scale spatiotemporal user mobility patterns. We performed a set of statistical analyses to understand Twitter user mobility patterns across multiple spatial scales and temporal ranges. In particular, Twitter user mobility patterns measured by the displacements and radii of gyration of individuals revealed multi-scale or multi-modal Twitter user mobility patterns. By further studying such mobility patterns in different temporal ranges, we identified both consistency and seasonal fluctuations regarding the distance-decay effects in the corresponding mobility patterns. At the same time, our approach provides a geo-visualization unit with an interactive 3D virtual globe web mapping interface for exploratory geo-visual analytics of multi-level spatiotemporal Twitter user movements.
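The radius of gyration used above as a mobility measure has a standard definition; a small sketch with invented planar coordinates (a real analysis would use projected tweet locations or great-circle distances):

```python
import math

# Radius of gyration of one user's visited points: the RMS distance of the
# points from their centroid. Coordinates are synthetic (x, y) positions in km.

def radius_of_gyration(points):
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    msd = sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in points) / n
    return math.sqrt(msd)

home_bound = [(0, 0), (1, 0), (0, 1), (1, 1)]      # stays near home
commuter = [(0, 0), (40, 0), (0, 0), (40, 0)]      # long daily displacement
print(round(radius_of_gyration(home_bound), 3))    # 0.707
print(round(radius_of_gyration(commuter), 1))      # 20.0
```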

  8. Quantitative Assessment of Thermodynamic Constraints on the Solution Space of Genome-Scale Metabolic Models

    Science.gov (United States)

    Hamilton, Joshua J.; Dwivedi, Vivek; Reed, Jennifer L.

    2013-01-01

    Constraint-based methods provide powerful computational techniques to allow understanding and prediction of cellular behavior. These methods rely on physiochemical constraints to eliminate infeasible behaviors from the space of available behaviors. One such constraint is thermodynamic feasibility, the requirement that intracellular flux distributions obey the laws of thermodynamics. The past decade has seen several constraint-based methods that interpret this constraint in different ways, including those that are limited to small networks, rely on predefined reaction directions, and/or neglect the relationship between reaction free energies and metabolite concentrations. In this work, we utilize one such approach, thermodynamics-based metabolic flux analysis (TMFA), to make genome-scale, quantitative predictions about metabolite concentrations and reaction free energies in the absence of prior knowledge of reaction directions, while accounting for uncertainties in thermodynamic estimates. We applied TMFA to a genome-scale network reconstruction of Escherichia coli and examined the effect of thermodynamic constraints on the flux space. We also assessed the predictive performance of TMFA against gene essentiality and quantitative metabolomics data, under both aerobic and anaerobic, and optimal and suboptimal growth conditions. Based on these results, we propose that TMFA is a useful tool for validating phenotypes and generating hypotheses, and that additional types of data and constraints can improve predictions of metabolite concentrations. PMID:23870272
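The central thermodynamic constraint that TMFA-style methods impose can be illustrated in a few lines; the ΔrG'° value and concentrations below are invented for illustration, not values from the E. coli model:

```python
import math

# Core constraint: a reaction can carry forward flux only if its transformed
# Gibbs energy, ΔrG' = ΔrG'° + RT*ln(Q), is negative at the given metabolite
# concentrations (unit stoichiometry assumed for simplicity).

R = 8.314e-3   # gas constant, kJ/(mol*K)
T = 310.15     # physiological temperature, K

def delta_g(dg0_prime, substrates, products):
    """substrates/products: {metabolite: concentration in M}, stoichiometry 1."""
    ln_q = sum(math.log(c) for c in products.values()) \
         - sum(math.log(c) for c in substrates.values())
    return dg0_prime + R * T * ln_q

def feasible_forward(dg0_prime, substrates, products):
    return delta_g(dg0_prime, substrates, products) < 0

# A reaction with ΔrG'° = +5 kJ/mol can still run forward if the product is
# kept scarce relative to the substrate, and cannot if the ratio is reversed.
print(feasible_forward(5.0, {"A": 1e-3}, {"B": 1e-6}))  # True
print(feasible_forward(5.0, {"A": 1e-6}, {"B": 1e-3}))  # False
```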

  9. Optimal Scale Edge Detection Utilizing Noise within Images

    Directory of Open Access Journals (Sweden)

    Adnan Khashman

    2003-04-01

    Edge detection techniques have common problems that include poor edge detection in low-contrast images, speed of recognition, and high computational cost. An efficient solution to the edge detection of objects in low- to high-contrast images is scale-space analysis. However, this approach is time-consuming and computationally expensive. These expenses can be marginally reduced if an optimal scale is found in scale-space edge detection. This paper presents a new approach to detecting objects within images using the noise within the images. The novel idea is based on selecting one optimal scale for the entire image at which scale-space edge detection can be applied. The selection of an ideal scale is based on the hypothesis that "the optimal edge detection scale (ideal scale) depends on the noise within an image". This paper aims at providing experimental evidence on the relationship between the optimal scale and the noise within images.
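One way to realize the stated hypothesis is to map an image-noise estimate to a single smoothing scale. The noise estimator (MAD of a Laplacian response) and the linear noise-to-sigma mapping below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

# Sketch: estimate the image's noise level, then pick one Gaussian smoothing
# scale for edge detection from it. Noisier images get more smoothing.

def estimate_noise(img):
    """Robust noise estimate from the high-frequency (Laplacian) residual."""
    lap = (4 * img[1:-1, 1:-1]
           - img[:-2, 1:-1] - img[2:, 1:-1]
           - img[1:-1, :-2] - img[1:-1, 2:])
    return float(np.median(np.abs(lap))) / 0.6745

def optimal_scale(img, s_min=0.8, s_max=4.0, gain=10.0):
    """Hypothetical linear mapping from noise level to sigma, clamped."""
    return float(np.clip(s_min + gain * estimate_noise(img), s_min, s_max))

rng = np.random.default_rng(1)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
print(optimal_scale(clean) < optimal_scale(noisy))  # True
```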

  10. Scale-space for empty catheter segmentation in PCI fluoroscopic images.

    Science.gov (United States)

    Bacchuwar, Ketan; Cousty, Jean; Vaillant, Régis; Najman, Laurent

    2017-07-01

    In this article, we present a method for empty guiding catheter segmentation in fluoroscopic X-ray images. Since the guiding catheter is a commonly visible landmark, its segmentation is an important and difficult building block for modeling Percutaneous Coronary Intervention (PCI) procedures. In a number of clinical situations, the catheter is empty and appears as a low-contrast structure with two parallel and partially disconnected edges. To segment it, we work on the level-set scale-space of the image, the min tree, to extract curve blobs. We then propose a novel structural scale-space, a hierarchy built on these curve blobs. The deep connected component, i.e. the cluster of curve blobs in this hierarchy, that maximizes the likelihood of being an empty catheter is retained as the final segmentation. We evaluate the performance of the algorithm on a database of 1250 fluoroscopic images from 6 patients and obtain very good qualitative and quantitative segmentation performance, with mean precision and recall of 80.48% and 63.04%, respectively. The novel structural scale-space thus segments a structured object, the empty catheter, in challenging situations where the information content in the images is very sparse. Fully automatic empty catheter segmentation in X-ray fluoroscopic images is an important preliminary step in PCI procedure modeling, as it aids in tagging the arrival and removal of other interventional tools.

  11. Distributed Model Predictive Control over Multiple Groups of Vehicles in Highway Intelligent Space for Large Scale System

    Directory of Open Access Journals (Sweden)

    Tang Xiaofeng

    2014-01-01

    The paper presents three time warning distances for safe driving of a large-scale system of multiple groups of vehicles in a highway tunnel environment, based on a distributed model predictive control approach. Generally speaking, the system includes two parts. First, multiple vehicles are divided into multiple groups, and the distributed model predictive control approach is proposed to calculate the information framework of each group. The optimization of each group considers both local optimization and the optimization characteristics of neighboring subgroups, which ensures global optimization performance. Second, the three time warning distances are studied based on the basic principles used for highway intelligent space (HIS), and the information framework concept is proposed for the multiple groups of vehicles. A mathematical model is built to avoid chain collisions between vehicles. The results demonstrate that the proposed highway intelligent space method can effectively ensure the driving safety of multiple groups of vehicles under fog, rain, or snow conditions.
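The abstract does not give the actual warning-distance formulas, but a generic three-level warning computation of the kind used in vehicle-following safety models can be sketched as follows. The three-level structure, parameter names, and default values are illustrative assumptions, not the paper's model:

```python
def warning_distances(v, reaction_time=1.5, decel=7.5, comms_delay=0.2):
    """Illustrative three-level warning distances (m) for speed v (m/s).

    The thresholds are hypothetical, not the ones derived in the paper:
    an early alert adds communication delay, a warning adds driver reaction
    time, and the last level is the pure braking distance.
    """
    braking = v * v / (2.0 * decel)                       # stop at constant decel
    d_alert = v * (reaction_time + comms_delay) + braking  # first warning
    d_warn = v * reaction_time + braking                   # second warning
    d_brake = braking                                      # emergency braking
    return d_alert, d_warn, d_brake
```

For example, at 30 m/s the three distances are nested, with the braking distance as the innermost threshold.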

  12. A multi-scale approach to monitor urban carbon-dioxide emissions in the atmosphere over Vancouver, Canada

    Science.gov (United States)

    Christen, A.; Crawford, B.; Ketler, R.; Lee, J. K.; McKendry, I. G.; Nesic, Z.; Caitlin, S.

    2015-12-01

    Measurements of long-lived greenhouse gases in the urban atmosphere are potentially useful to constrain and validate urban emission inventories, or space-borne remote-sensing products. We summarize and compare three different approaches, operating at different scales, that directly or indirectly identify, attribute and quantify emissions (and uptake) of carbon dioxide (CO2) in urban environments. All three approaches are illustrated using in-situ measurements in the atmosphere in and over Vancouver, Canada. Mobile sensing may be a promising way to quantify and map CO2 mixing ratios at fine scales across heterogeneous and complex urban environments. We developed a system for monitoring CO2 mixing ratios at street level using a network of mobile CO2 sensors deployable on vehicles and bikes. A total of 5 prototype sensors were built and simultaneously used in a measurement campaign across a range of urban land use types and densities within a short time frame (3 hours). The dataset is used to aid in fine-scale emission mapping in combination with simultaneous tower-based flux measurements. Overall, calculated CO2 emissions are realistic when compared against a spatially disaggregated emission inventory. The second approach is based on mass flux measurements of CO2 using a tower-based eddy covariance (EC) system. We present a continuous 7-year long dataset of CO2 fluxes measured by EC at the 28 m tall flux tower 'Vancouver-Sunset'. We show how this dataset can be combined with turbulent source area models to quantify and partition different emission processes at the neighborhood scale. The long-term EC measurements are within 10% of a spatially disaggregated emission inventory. Thirdly, at the urban scale, we present a dataset of CO2 mixing ratios measured using a tethered balloon system in the urban boundary layer above Vancouver. Using a simple box model, net city-scale CO2 emissions can be determined from the measured rate of change of CO2 mixing ratios.
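The box-model estimate mentioned in the last sentence can be sketched in a few lines: under the (strong) assumption of a well-mixed boundary layer with no advection or entrainment, the net surface flux is the observed rate of change of the mixing ratio times the air content of the column. The function below is our illustration, not the authors' code:

```python
def box_model_flux(dppm_dt_per_hour, mixing_height_m,
                   pressure_pa=101325.0, temp_k=288.0):
    """Net CO2 surface flux (micromol m^-2 s^-1) for a well-mixed box.

    Simplified sketch: entrainment and advection are ignored, so the
    column-integrated rate of change equals the surface flux.
    """
    R = 8.314  # gas constant, J mol^-1 K^-1
    n_air = pressure_pa / (R * temp_k)            # mol of air per m^3 (ideal gas)
    dchi_dt = dppm_dt_per_hour * 1e-6 / 3600.0    # (mol CO2 / mol air) per second
    return dchi_dt * n_air * mixing_height_m * 1e6  # micromol m^-2 s^-1
```

For a 500 m deep boundary layer accumulating 2 ppm per hour, this gives a city-scale flux on the order of 10 micromol per square meter per second.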

  13. New approximation of a scale space kernel on SE(3) and applications in neuroimaging

    NARCIS (Netherlands)

    Portegies, J.M.; Sanguinetti, G.R.; Meesters, S.P.L.; Duits, R.

    2015-01-01

    We provide a new, analytic kernel for scale space filtering of dMRI data. The kernel is an approximation for the Green's function of a hypo-elliptic diffusion on the 3D rigid body motion group SE(3), for fiber enhancement in dMRI. The enhancements are described by linear scale space PDEs in the

  14. A modular CUDA-based framework for scale-space feature detection in video streams

    International Nuclear Information System (INIS)

    Kinsner, M; Capson, D; Spence, A

    2010-01-01

    Multi-scale image processing techniques enable extraction of features where the size of a feature is either unknown or changing, but the requirement to process image data at multiple scale levels imposes a substantial computational load. This paper describes the architecture and emerging results from the implementation of a GPGPU-accelerated scale-space feature detection framework for video processing. A discrete scale-space representation is generated for image frames within a video stream, and multi-scale feature detection metrics are applied to detect ridges and Gaussian blobs at video frame rates. A modular structure is adopted, in which common feature extraction tasks such as non-maximum suppression and local extrema search may be reused across a variety of feature detectors. Extraction of ridge and blob features is achieved at faster than 15 frames per second on video sequences from a machine vision system, utilizing an NVIDIA GTX 480 graphics card. By design, the framework is easily extended to additional feature classes through the inclusion of feature metrics to be applied to the scale-space representation, and using common post-processing modules to reduce the required CPU workload. The framework is scalable across multiple and more capable GPUs, and enables previously intractable image processing at video frame rates using commodity computational hardware.
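A CPU-side sketch of the framework's core idea — build a discrete scale-space, apply a per-scale feature metric, then reuse a common local-extrema search — might look like this in 1D. This is a simplification of the paper's GPU pipeline; all names and the choice of a scale-normalized second difference as the blob metric are our assumptions:

```python
import math

def gaussian_smooth(sig, sigma):
    """Gaussian smoothing with clamped borders."""
    r = max(1, int(3 * sigma))
    k = [math.exp(-i * i / (2 * sigma * sigma)) for i in range(-r, r + 1)]
    s = sum(k)
    k = [v / s for v in k]
    n = len(sig)
    return [sum(k[j + r] * sig[min(max(i + j, 0), n - 1)] for j in range(-r, r + 1))
            for i in range(n)]

def blob_response(sig, sigma):
    """Scale-normalized Laplacian (second difference) of the smoothed signal."""
    s = gaussian_smooth(sig, sigma)
    return [sigma ** 2 * (s[i - 1] - 2 * s[i] + s[i + 1]) for i in range(1, len(s) - 1)]

def detect_blobs(sig, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Shared local-extrema search over position and scale (the reusable
    non-maximum-suppression step mentioned in the abstract)."""
    stack = [blob_response(sig, s) for s in sigmas]
    blobs = []
    for si in range(1, len(sigmas) - 1):
        for i in range(1, len(stack[si]) - 1):
            v = abs(stack[si][i])
            neigh = [abs(stack[sj][j]) for sj in (si - 1, si, si + 1)
                     for j in (i - 1, i, i + 1) if (sj, j) != (si, i)]
            if v > max(neigh):
                blobs.append((i + 1, sigmas[si]))
    return blobs
```

A Gaussian bump is detected near its center, at the scale closest to its width; other metrics (e.g. ridge responses) would reuse the same extrema search.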

  15. State-space approach for evaluating the soil-plant-atmosphere system

    International Nuclear Information System (INIS)

    Timm, L.C.; Reichardt, K.; Cassaro, F.A.M.; Tominaga, T.T.; Bacchi, O.O.S.; Oliveira, J.C.M.; Dourado-Neto, D.

    2004-01-01

    Using as examples one sugarcane and one forage oat experiment, both carried out in the State of Sao Paulo, Brazil, this chapter presents recent state-space approaches used to evaluate the relation between soil and plant properties. A contrast is made between classical statistics methodologies that do not take into account the sampling position coordinates, and the more recently used methodologies which include the position coordinates and allow a better interpretation of the field-sampled data. Classical concepts are first introduced, followed by spatially referenced methodologies like the autocorrelation function, the cross-correlation function, and the state-space approach. Two variations of the state-space approach are given: one emphasizes the evolution of the state system while the other, based on the Bayesian formulation, emphasizes the evolution of the estimated observations. It is concluded that these state-space analyses using dynamic regression models improve data analyses and are therefore recommended for analyzing time and space data series related to the performance of a given soil-plant-atmosphere system. (author)
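The autocorrelation function mentioned above is straightforward to compute for a spatially ordered series of samples; a minimal sketch (our own, using the standard sample estimator):

```python
def autocorrelation(series, max_lag):
    """Sample autocorrelation r(h) for lags 0..max_lag.

    For a spatial transect, h is the separation in sampling positions;
    r(h) near 1 indicates strong spatial dependence at that lag.
    """
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    return [sum((series[t] - mean) * (series[t + h] - mean)
                for t in range(n - h)) / var
            for h in range(max_lag + 1)]
```

A trending transect shows high lag-1 autocorrelation, which is exactly the spatial structure that classical (position-free) statistics would ignore.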

  16. A rank-based approach for correcting systematic biases in spatial disaggregation of coarse-scale climate simulations

    Science.gov (United States)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2017-07-01

    Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation, at finer spatial scales biases are re-introduced by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias, especially for low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank-space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages especially in the simulation of particularly low and high downscaled precipitation amounts.
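The rank-space idea can be illustrated with a toy sketch: instead of applying a mean multiplicative anomaly, values are matched by rank, so the corrected series reproduces the observed fine-scale distribution exactly while keeping the temporal ordering of the disaggregated series. This is a simplified reading of the method, not the authors' implementation:

```python
def rank_space_correction(fine_obs, coarse_disagg):
    """Match values by rank: the k-th smallest disaggregated value is replaced
    by the k-th smallest observed fine-scale value (a simplified sketch of
    rank-space anomaly modelling)."""
    order = sorted(range(len(coarse_disagg)), key=lambda i: coarse_disagg[i])
    fine_sorted = sorted(fine_obs)
    corrected = [0.0] * len(coarse_disagg)
    for rank, i in enumerate(order):
        corrected[i] = fine_sorted[rank]  # preserve rank order, swap distribution
    return corrected
```

The corrected series has the same values as the observations (so extremes are not damped by averaging) and the same rank order as the disaggregated input.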

  17. From Planetary Boundaries to national fair shares of the global safe operating space - How can the scales be bridged?

    Science.gov (United States)

    Häyhä, Tiina; Cornell, Sarah; Lucas, Paul; van Vuuren, Detlef; Hoff, Holger

    2016-04-01

    The planetary boundaries framework proposes precautionary quantitative global limits to the anthropogenic perturbation of crucial Earth system processes. In this way, it marks out a planetary 'safe operating space' for human activities. However, decisions regarding resource use and emissions are mostly made at much smaller scales, by (sub-)national and regional governments, businesses, and other local actors. To operationalize the planetary boundaries, they need to be translated into and aligned with targets that are relevant at these smaller scales. In this paper, we develop a framework that addresses the three dimensions of bridging across scales: biophysical, socio-economic and ethical, to provide a consistent, universally applicable approach for translating the planetary boundaries into national-level, context-specific and fair shares of the safe operating space. We discuss our findings in the context of previous studies and their implications for future analyses and policymaking. In this way, we help link the planetary boundaries framework to widely-applied operational and policy concepts for more robust strong sustainability decision-making.

  18. Generalized Wigner functions in curved spaces: A new approach

    International Nuclear Information System (INIS)

    Kandrup, H.E.

    1988-01-01

    It is well known that, given a quantum field in Minkowski space, one can define Wigner functions f_W^N(x_1, p_1, ..., x_N, p_N) which (a) are convenient to analyze since, unlike the field itself, they are c-number quantities and (b) can be interpreted in a limited sense as ''quantum distribution functions.'' Recently, Winter and Calzetta, Habib and Hu have shown one way in which these flat-space Wigner functions can be generalized to a curved-space setting, deriving thereby approximate kinetic equations which make sense ''quasilocally'' for ''short-wavelength modes.'' This paper suggests a completely orthogonal approach for defining curved-space Wigner functions which generalizes instead an object such as the Fourier-transformed f_W^1(k, p), which is effectively a two-point function viewed in terms of the ''natural'' creation and annihilation operators a†(p - k/2) and a(p + k/2). The approach suggested here lacks the precise phase-space interpretation implicit in the approach of Winter or Calzetta, Habib, and Hu, but it is useful in that (a) it is geared to handle any ''natural'' mode decomposition, so that (b) it can facilitate exact calculations at least in certain limits, such as for a source-free linear field in a static spacetime.

  19. Efficient Divide-And-Conquer Classification Based on Feature-Space Decomposition

    OpenAIRE

    Guo, Qi; Chen, Bo-Wei; Jiang, Feng; Ji, Xiangyang; Kung, Sun-Yuan

    2015-01-01

    This study presents a divide-and-conquer (DC) approach based on feature-space decomposition for classification. When large-scale datasets are present, typical approaches employ truncated kernel methods on the feature space or DC approaches on the sample space. However, these do not guarantee separability between classes, owing to overfitting. To overcome such problems, this work proposes a novel DC approach on feature spaces consisting of three steps. Firstly, we divide the feature ...

  20. Phase space properties of local observables and structure of scaling limits

    International Nuclear Information System (INIS)

    Buchholz, D.

    1995-05-01

    For any given algebra of local observables in relativistic quantum field theory there exists an associated scaling algebra which permits one to introduce renormalization group transformations and to construct the scaling (short distance) limit of the theory. On the basis of this result it is discussed how the phase space properties of a theory determine the structure of its scaling limit. Bounds on the number of local degrees of freedom appearing in the scaling limit are given which allow one to distinguish between theories with classical and quantum scaling limits. The results can also be used to establish physically significant algebraic properties of the scaling limit theories, such as the split property. (orig.)

  1. Approaches to Outdoor Thermal Comfort Thresholds through Public Space Design: A Review

    Directory of Open Access Journals (Sweden)

    Andre Santos Nouri

    2018-03-01

    Based on the Köppen-Geiger (KG) classification system, this review article examines existing studies and projects that have endeavoured to address local outdoor thermal comfort thresholds through Public Space Design (PSD). The review is divided into two sequential stages, whereby (1) overall existing approaches to pedestrian thermal comfort thresholds are reviewed within both quantitative and qualitative spectrums; and (2) the different techniques and measures are reviewed and framed into four Measure Review Frameworks (MRFs), in which each type of PSD measure is presented alongside its respective local-scale urban specificities/conditions and their resulting thermal attenuation outcomes. The result of this review article is an assessment of how current practices of PSD within three specific subcategories of the KG 'Temperate' group have addressed microclimatic aggravations such as elevated urban temperatures and Urban Heat Island (UHI) effects. Based upon a bottom-up approach, the interdisciplinary practice of PSD is hence approached as a means to address existing and future thermal risk factors within the urban public realm in an era of potential climate change.

  2. Mapping the Hot Spots: A Zoning Approach to Space Analysis and Design

    Science.gov (United States)

    Bunnell, Adam; Carpenter, Russell; Hensley, Emily; Strong, Kelsey; Williams, ReBecca; Winter, Rachel

    2016-01-01

    This article examines a preliminary approach to space design developed and implemented in Eastern Kentucky University's Noel Studio for Academic Creativity. The approach discussed here is entitled "hot spots," which has allowed the research team to observe trends in space usage and composing activities among students. This approach has…

  3. Next Generation Space Interconnect Standard (NGSIS): a modular open standards approach for high performance interconnects for space

    Science.gov (United States)

    Collier, Charles Patrick

    2017-04-01

    The Next Generation Space Interconnect Standard (NGSIS) effort is a Government-Industry collaboration effort to define a set of standards for interconnects between space system components with the goal of cost effectively removing bandwidth as a constraint for future space systems. The NGSIS team has selected the ANSI/VITA 65 OpenVPX™ standard family for the physical baseline. The RapidIO protocol has been selected as the basis for the digital data transport. The NGSIS standards are developed to provide sufficient flexibility to enable users to implement a variety of system configurations, while meeting goals for interoperability and robustness for space. The NGSIS approach and effort represents a radical departure from past approaches to achieve a Modular Open System Architecture (MOSA) for space systems and serves as an exemplar for the civil, commercial, and military Space communities as well as a broader high reliability terrestrial market.

  4. Multiscale Analysis of Time Irreversibility Based on Phase-Space Reconstruction and Horizontal Visibility Graph Approach

    Science.gov (United States)

    Zhang, Yongping; Shang, Pengjian; Xiong, Hui; Xia, Jianan

    Time irreversibility is an important property of nonequilibrium dynamic systems. A visibility graph approach was recently proposed, and this approach is generally effective to measure time irreversibility of time series. However, its result may be unreliable when dealing with high-dimensional systems. In this work, we consider the joint concept of time irreversibility and adopt the phase-space reconstruction technique to improve this visibility graph approach. Compared with the previous approach, the improved approach gives a more accurate estimate for the irreversibility of time series, and is more effective to distinguish irreversible and reversible stochastic processes. We also use this approach to extract the multiscale irreversibility to account for the multiple inherent dynamics of time series. Finally, we apply the approach to detect the multiscale irreversibility of financial time series, and succeed in distinguishing times of financial crisis from plateau periods. In addition, Asian stock indexes are clearly separated from the other indexes at higher time scales. Simulations and real data support the effectiveness of the improved approach when detecting time irreversibility.
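The horizontal visibility criterion underlying the approach is simple to state: two time points "see" each other if every intermediate value lies strictly below both. A direct sketch of graph construction (without the phase-space reconstruction step the paper adds; irreversibility is then typically measured by comparing in- and out-degree statistics on the directed version):

```python
def horizontal_visibility_graph(series):
    """Edges (i, j), i < j, of the horizontal visibility graph: connect two
    points when all intermediate values lie strictly below both endpoints."""
    n = len(series)
    edges = []
    for i in range(n - 1):
        edges.append((i, i + 1))  # consecutive points always see each other
        running_max = series[i + 1]
        for j in range(i + 2, n):
            if running_max < series[i] and running_max < series[j]:
                edges.append((i, j))
            running_max = max(running_max, series[j])  # highest point between i and j+1
    return edges
```

Tracking the running maximum between the endpoints avoids re-scanning the intermediate values for every candidate pair.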

  5. Quantitative assessment of thermodynamic constraints on the solution space of genome-scale metabolic models.

    Science.gov (United States)

    Hamilton, Joshua J; Dwivedi, Vivek; Reed, Jennifer L

    2013-07-16

    Constraint-based methods provide powerful computational techniques to allow understanding and prediction of cellular behavior. These methods rely on physicochemical constraints to eliminate infeasible behaviors from the space of available behaviors. One such constraint is thermodynamic feasibility, the requirement that intracellular flux distributions obey the laws of thermodynamics. The past decade has seen several constraint-based methods that interpret this constraint in different ways, including those that are limited to small networks, rely on predefined reaction directions, and/or neglect the relationship between reaction free energies and metabolite concentrations. In this work, we utilize one such approach, thermodynamics-based metabolic flux analysis (TMFA), to make genome-scale, quantitative predictions about metabolite concentrations and reaction free energies in the absence of prior knowledge of reaction directions, while accounting for uncertainties in thermodynamic estimates. We applied TMFA to a genome-scale network reconstruction of Escherichia coli and examined the effect of thermodynamic constraints on the flux space. We also assessed the predictive performance of TMFA against gene essentiality and quantitative metabolomics data, under both aerobic and anaerobic, and optimal and suboptimal growth conditions. Based on these results, we propose that TMFA is a useful tool for validating phenotypes and generating hypotheses, and that additional types of data and constraints can improve predictions of metabolite concentrations.
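Per reaction, the thermodynamic feasibility constraint reduces to the sign of the transformed reaction free energy, which couples the standard free energy to metabolite concentrations. A minimal sketch of that check (illustrative only; TMFA itself embeds this constraint, with uncertainty bounds, inside a mixed-integer optimization over the whole network):

```python
import math

def reaction_direction(delta_g0_kj, reactant_conc, product_conc, temp_k=298.15):
    """Transformed free energy dG = dG'0 + RT ln(Q), with Q built from
    concentrations (mol/L); dG < 0 means the forward direction is feasible.
    Unit stoichiometry is assumed for simplicity."""
    R = 8.314e-3  # gas constant, kJ mol^-1 K^-1
    q = 1.0
    for c in product_conc:
        q *= c
    for c in reactant_conc:
        q /= c
    dg = delta_g0_kj + R * temp_k * math.log(q)
    return dg, ("forward" if dg < 0 else "reverse")
```

Note how a reaction with a positive standard free energy can still run forward when the product is kept at a low concentration, which is why reaction directions cannot be fixed a priori.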

  6. Scale Space Methods for Analysis of Type 2 Diabetes Patients' Blood Glucose Values

    Directory of Open Access Journals (Sweden)

    Stein Olav Skrøvseth

    2011-01-01

    We describe how scale space methods can be used for quantitative analysis of blood glucose concentrations from type 2 diabetes patients. Blood glucose values were recorded voluntarily by the patients over one full year as part of a self-management process, where the time and frequency of the recordings were decided by the patients. This makes the dataset unique in its extent, though with a large variation in the reliability of the recordings. Scale space and frequency space techniques are suited to reveal important features of unevenly sampled data, and are useful for identifying medically relevant features for use by patients as part of their self-management process, as well as for providing useful information for physicians.
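One reason scale space techniques suit such data is that Gaussian-weighted smoothing needs no regular sampling grid. A minimal Nadaraya-Watson-style sketch (our illustration, not the authors' pipeline; the scale parameter plays the role of the scale-space smoothing level):

```python
import math

def smooth_irregular(times, values, query_times, sigma_hours):
    """Gaussian-weighted average at arbitrary query times.

    Handles unevenly sampled records directly: each observation contributes
    with a weight that decays with its distance (in hours) from the query time.
    """
    out = []
    for tq in query_times:
        w = [math.exp(-((t - tq) ** 2) / (2 * sigma_hours ** 2)) for t in times]
        ws = sum(w)
        out.append(sum(wi * v for wi, v in zip(w, values)) / ws)
    return out
```

Evaluating at a dense set of query times for increasing `sigma_hours` yields a scale-space stack of the glucose record, from which features can be tracked across scales.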

  7. The seesaw space, a vector space to identify and characterize large-scale structures at 1 AU

    Science.gov (United States)

    Lara, A.; Niembro, T.

    2017-12-01

    We introduce the seesaw space, an orthonormal space formed by the local and the global fluctuations of any of the four basic solar wind parameters: velocity, density, magnetic field and temperature, at any heliospheric distance. The fluctuations compare the standard deviation of a three-hour moving average against the running average of the parameter over a month (considered the local fluctuations) and over a year (the global fluctuations). We created this new vector space to identify the arrival of transients at any spacecraft without the need of an observer. We applied our method to the one-minute resolution data of the WIND spacecraft from 1996 to 2016. To study the behavior of the seesaw norms in terms of the solar cycle, we computed annual histograms and fitted piecewise functions formed by two log-normal distributions, and observed that one of the distributions is due to large-scale structures while the other is due to the ambient solar wind. The norm values at which the piecewise functions change vary with the solar cycle. We compared the seesaw norms of each of the basic parameters due to the arrival of coronal mass ejections, co-rotating interaction regions and sector boundaries reported in the literature. High seesaw norms are due to large-scale structures. We found three critical values of the norms that can be used to determine the arrival of coronal mass ejections. We present as well general comparisons of the norms during the two maxima and the minimum solar cycle periods and the differences of the norms due to large-scale structures depending on each period.
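A possible reading of the seesaw construction — short-window variability normalized by monthly and yearly baselines, combined as an orthogonal pair — can be sketched as follows. The exact normalization is our assumption, since the abstract does not give formulas:

```python
import math

def window_mean(xs):
    return sum(xs) / len(xs)

def window_std(xs):
    m = window_mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def seesaw_norm(short_window, month_window, year_window):
    """Hypothetical seesaw norm: short-term variability normalized by the
    monthly (local) and yearly (global) running averages, combined as the
    Euclidean norm of the resulting orthogonal pair."""
    s = window_std(short_window)
    local_f = s / window_mean(month_window)
    global_f = s / window_mean(year_window)
    return math.sqrt(local_f ** 2 + global_f ** 2)
```

A quiet interval gives a norm near zero; a sudden jump in the short window (e.g. a shock arrival) pushes the norm up regardless of the slowly varying baselines.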

  8. Stochastic inflation: Quantum phase-space approach

    International Nuclear Information System (INIS)

    Habib, S.

    1992-01-01

    In this paper a quantum-mechanical phase-space picture is constructed for coarse-grained free quantum fields in an inflationary universe. The appropriate stochastic quantum Liouville equation is derived. Explicit solutions for the phase-space quantum distribution function are found for the cases of power-law and exponential expansions. The expectation values of dynamical variables with respect to these solutions are compared to the corresponding cutoff regularized field-theoretic results (we do not restrict ourselves only to ⟨Φ²⟩). Fair agreement is found provided the coarse-graining scale is kept within certain limits. By focusing on the full phase-space distribution function rather than a reduced distribution it is shown that the thermodynamic interpretation of the stochastic formalism faces several difficulties (e.g., there is no fluctuation-dissipation theorem). The coarse graining does not guarantee an automatic classical limit as quantum correlations turn out to be crucial in order to get results consistent with standard quantum field theory. Therefore, the method does not by itself constitute an explanation of the quantum to classical transition in the early Universe. In particular, we argue that the stochastic equations do not lead to decoherence.

  9. Multiple-scale approach for the expansion scaling of superfluid quantum gases

    International Nuclear Information System (INIS)

    Egusquiza, I. L.; Valle Basagoiti, M. A.; Modugno, M.

    2011-01-01

    We present a general method, based on a multiple-scale approach, for deriving the perturbative solutions of the scaling equations governing the expansion of superfluid ultracold quantum gases released from elongated harmonic traps. We discuss how to treat the secular terms appearing in the usual naive expansion in the trap asymmetry parameter ε and calculate the next-to-leading correction for the asymptotic aspect ratio, with significant improvement over the previous proposals.

  10. Scale Effect of Premixed Methane-Air Combustion in Confined Space Using LES Model

    Directory of Open Access Journals (Sweden)

    Liang Wang

    2015-12-01

    Gas explosion is the most hazardous incident occurring in underground airways. Computational Fluid Dynamics (CFD) techniques are well suited to simulating explosions in confined spaces, specifically when testing large-scale gaseous explosions such as methane explosions in underground mines. The dimensions of a confined space where explosions could occur vary significantly. Thus, the scale effect on explosion parameters is worth investigating. In this paper, the impact of scaling on explosion overpressures is investigated by employing two scaling factors: the Gas-fill Length Scaling Factor (FLSF) and the Hydraulic Diameter Scaling Factor (HDSF). The combinations of eight FLSFs and five HDSFs cover a wide range of space dimensions where flammable gas could accumulate. Experiments were also conducted to evaluate the selected numerical models. The Large Eddy Simulation turbulence model was selected because it shows better accuracy than the widely used Reynolds-averaged models for the scenarios investigated in the experiments. Three major conclusions can be drawn: (1) the overpressure increases with both FLSF and HDSF within the deflagration regime; (2) in an explosion duct with a length-to-diameter ratio greater than 54, detonation is more likely to be triggered for a stoichiometric methane/air mixture; (3) overpressure increases with increasing hydraulic diameter of a geometry within the deflagration regime. A relative error of 7% is found when predicting blast peak overpressure for the base case compared to the experiment; a good agreement for the wave arrival time is also achieved.

  11. General background and approach to multibody dynamics for space applications

    Science.gov (United States)

    Santini, Paolo; Gasbarri, Paolo

    2009-06-01

    Multibody dynamics for space applications is dictated by the space environment: space-varying gravity forces, orbital and attitude perturbations, and control forces if any. Several methods and formulations devoted to the modeling of flexible bodies undergoing large overall motions were developed in recent years. Most of these formulations were aimed at facing one of the main problems concerning the analysis of spacecraft dynamics, namely the reduction of computer simulation time. By virtue of this, the use of symbolic manipulation, recursive formulation and parallel processing algorithms were proposed. All these approaches fall into two categories, the one based on Newton/Euler methods and the one based on Lagrangian methods; both of them have their advantages and disadvantages, although in general Newtonian approaches lead to a better understanding of the physics of problems and in particular of the magnitude of the reactions and of the corresponding structural stresses. Another important issue which must be addressed carefully in multibody space dynamics concerns the correct choice of kinematic variables. In fact, when dealing with a flexible multibody system the resulting equations include two different types of state variables, the ones associated with large (rigid) displacements and the ones associated with elastic deformations. These two sets of variables generally have two different time scales, if we think of the attitude motion of a satellite whose period of oscillation, due to the gravity gradient effects, is of the same order of magnitude as the orbital period, which is much bigger than the one associated with the structural vibration of the satellite itself. Therefore, the numerical integration of the equations of the system represents a challenging problem.
This was the abstract and some of the arguments that Professor Paolo Santini intended to present for the Breakwell Lecture; unfortunately a deadly disease attacked him and shortly took him

  12. The +vbar breakout during approach to Space Station Freedom

    Science.gov (United States)

    Dunham, Scott D.

    1993-01-01

    A set of burn profiles was developed to provide bounding jet firing histories for a +vbar breakout during approaches to Space Station Freedom. The delta-v sequences were designed to place the Orbiter on a safe trajectory under worst case conditions and to try to minimize plume impingement on Space Station Freedom structure.

  13. a Web Service Approach for Linking Sensors and Cellular Spaces

    Science.gov (United States)

    Isikdag, U.

    2013-09-01

    More and more devices are starting to be connected to the Internet. In the future the Internet will not only be a communication medium for people, it will in fact be a communication environment for devices. The connected devices, which are also referred to as Things, will have an ability to interact with other devices over the Internet, i) providing information in interoperable form and ii) consuming/utilizing such information with the help of sensors embedded in them. This overall concept is known as Internet-of-Things (IoT). This requires new approaches to be investigated for system architectures to establish relations between spaces and sensors. The research presented in this paper elaborates on an architecture developed with this aim, i.e. linking spaces and sensors using a RESTful approach. The objective is making spaces aware of (sensor-embedded) devices, and making devices aware of spaces, in a loosely coupled way (i.e. a state/usage/function change in the spaces would not have an effect on sensors; similarly, a location/state/usage/function change in sensors would not have any effect on spaces). The proposed architecture also enables the automatic assignment of sensors to spaces depending on space geometry and sensor location.

  14. Modeling and control of a large nuclear reactor. A three-time-scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Shimjith, S.R. [Indian Institute of Technology Bombay, Mumbai (India); Bhabha Atomic Research Centre, Mumbai (India); Tiwari, A.P. [Bhabha Atomic Research Centre, Mumbai (India); Bandyopadhyay, B. [Indian Institute of Technology Bombay, Mumbai (India). IDP in Systems and Control Engineering

    2013-07-01

    This monograph presents recent research on the modeling and control of a large nuclear reactor via a three-time-scale approach, written by leading experts in the field. Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable for control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form, thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting the multi-time-scale property, with emphasis on three-time-scale systems.

  15. Application of Bayesian approach to estimate average level spacing

    International Nuclear Information System (INIS)

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

    A method to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach is given. Using the information contained in the distributions of both level spacings and neutron widths, levels missing from the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained. Calculations for s-wave resonances have been carried out and compared with other work.
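    As a rough illustration of the missing-level idea (a hedged sketch, not the authors' actual Bayesian formulation): for s-wave resonances the reduced neutron widths follow a Porter-Thomas distribution (chi-squared with one degree of freedom), so small-width resonances fall below the detection threshold, the observed count underestimates the true number of levels, and the naive average spacing is biased high. The detectable fraction can be estimated and used to inflate the level count. The threshold, mean width, and energies below are made up.

```python
import math

def detectable_fraction(threshold, mean_width):
    """P(width > threshold) under Porter-Thomas with the given mean width.

    For x ~ chi-squared(1), P(x > z) = erfc(sqrt(z/2)).
    """
    return math.erfc(math.sqrt(threshold / (2.0 * mean_width)))

def corrected_average_spacing(energies, threshold, mean_width):
    """Average spacing with the observed level count inflated by the
    estimated detectable fraction."""
    n_obs = len(energies)
    f = detectable_fraction(threshold, mean_width)
    n_true = n_obs / f  # estimated true number of levels in the interval
    return (max(energies) - min(energies)) / (n_true - 1)

energies = [10.0, 35.0, 61.0, 88.0, 120.0]  # resonance energies in eV (made up)
print(corrected_average_spacing(energies, threshold=0.0, mean_width=1.0))
# with threshold 0 there is no correction: (120 - 10) / 4 = 27.5 eV
```

    Raising the threshold lowers the detectable fraction, raises the inferred true level count, and therefore shrinks the corrected spacing, which is the direction of the correction described in the abstract.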

  16. Assessing a Top-Down Modeling Approach for Seasonal Scale Snow Sensitivity

    Science.gov (United States)

    Luce, C. H.; Lute, A.

    2017-12-01

    Mechanistic snow models are commonly applied to assess changes to snowpacks in a warming climate. Such assessments involve a number of assumptions about details of weather at daily to sub-seasonal time scales. Models of season-scale behavior can provide contrast for evaluating behavior at time scales more in concordance with climate warming projections. Such top-down models, however, involve a degree of empiricism, with attendant caveats about the potential of a changing climate to affect calibrated relationships. We estimated the sensitivity of snowpacks from 497 Snowpack Telemetry (SNOTEL) stations in the western U.S. based on differences in climate between stations (spatial analog). We examined the sensitivity of April 1 snow water equivalent (SWE) and mean snow residence time (SRT) to variations in Nov-Mar precipitation and average Nov-Mar temperature using multivariate local-fit regressions. We tested the modeling approach using a leave-one-out cross-validation as well as targeted two-fold non-random cross-validations contrasting, for example, warm vs. cold years, dry vs. wet years, and north vs. south stations. Nash-Sutcliffe Efficiency (NSE) values for the validations were strong for April 1 SWE, ranging from 0.71 to 0.90, and still reasonable, but weaker, for SRT, in the range of 0.64 to 0.81. From these ranges, we exclude validations where the training data do not represent the range of target data. A likely reason for differences in validation between the two metrics is that the SWE model reflects the influence of conservation of mass while using temperature as an indicator of the season-scale energy balance; in contrast, SRT depends more strongly on the energy balance aspects of the problem. Model forms with lower numbers of parameters generally validated better than more complex model forms, with the caveat that pseudoreplication could encourage selection of more complex models when validation contrasts were weak. Overall, the split sample validations
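    The Nash-Sutcliffe Efficiency used to score these validations is one minus the ratio of the model error variance to the variance of the observations: NSE = 1 means a perfect fit, NSE = 0 means the model is no better than predicting the observed mean. A minimal sketch with made-up observed and simulated values:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe Efficiency of simulated vs. observed values."""
    mean_obs = sum(observed) / len(observed)
    ss_err = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_err / ss_tot

obs = [10.0, 20.0, 30.0, 40.0]  # illustrative observations
sim = [12.0, 18.0, 33.0, 39.0]  # illustrative model output
print(round(nse(obs, sim), 3))  # 0.964
```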

  17. Solar chimney: A sustainable approach for ventilation and building space conditioning

    Directory of Open Access Journals (Sweden)

    Lal, S.,

    2013-03-01

    Full Text Available The demand for residential and commercial buildings increases with the rapidly growing population. This leads to vertical growth of buildings, which need proper ventilation and day-lighting. Natural air ventilation does not work effectively in conventional structures, so fans and air conditioners are mandatory for proper ventilation and space conditioning. Globally, the building sector is the largest consumer of energy, most of which is consumed in heating, ventilation and space conditioning. This load can be reduced by the application of solar chimneys and integrated approaches in buildings for heating, ventilation and space conditioning, which is a sustainable approach for these applications. The authors review the concept, various methods of evaluation and modeling, the performance of solar chimney variables, applications and integrated approaches.

  18. Scale-dependent Patterns in One-dimensional Fracture Spacing and Aperture Data

    Science.gov (United States)

    Roy, A.; Perfect, E.

    2013-12-01

    One-dimensional scanline data on fracture spacing and size attributes such as aperture or length are mostly considered in separate studies that compute the cumulative frequency of these attributes without regard to their actual spatial sequence. In a previous study, we showed that spacing data can be analyzed using lacunarity to identify whether fractures occur in clusters. However, to determine whether such clusters also contain the largest fractures in terms of a size attribute such as aperture, it is imperative that data about the size attribute be integrated with information about fracture spacing. For example, while some researchers have considered aperture in conjunction with spacing, their analyses were either applicable only to a specific type of data (e.g. multifractal) or failed to characterize the data at different scales. Lacunarity is a technique for analyzing multi-scale non-binary data and is ideally suited for characterizing scanline data with spacing and aperture values. We present a technique that can statistically delineate the relationship between size attributes and spatial clustering. We begin by building a model scanline that has complete partitioning of fractures with small and large apertures between the intercluster regions and clusters. We demonstrate that the ratio of the lacunarity of this model to that of its counterpart for a completely randomized sequence of apertures can be used to determine whether large-aperture fractures preferentially occur next to each other. The technique is then applied to two natural fracture scanline datasets, one with most of the large apertures occurring in fracture clusters, and the other with more randomly spaced fractures, without any specific ordering of aperture values. The lacunarity ratio clearly discriminates between these two datasets and, in the case of the first example, is also able to identify the range of scales over which the widest fractures are clustered. The technique thus developed for
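    The multi-scale statistic underlying this analysis, gliding-box lacunarity, can be sketched for a one-dimensional sequence of aperture values: for a window of size r, sum the values in every length-r window along the scanline; lacunarity is the second moment of those box masses divided by the squared first moment. Clustered sequences give values well above 1, while uniform sequences approach 1. The aperture values below are illustrative, not from the paper's datasets.

```python
def lacunarity(values, r):
    """Gliding-box lacunarity of a 1-D sequence at window size r."""
    masses = [sum(values[i:i + r]) for i in range(len(values) - r + 1)]
    m1 = sum(masses) / len(masses)              # first moment of box mass
    m2 = sum(m * m for m in masses) / len(masses)  # second moment
    return m2 / (m1 * m1)

clustered = [5, 4, 6, 0, 0, 0, 0, 0, 5, 6]  # large apertures side by side
uniform   = [2, 3, 2, 3, 2, 3, 2, 3, 2, 3]  # no spatial ordering
print(lacunarity(clustered, 2), lacunarity(uniform, 2))  # clustered >> 1, uniform = 1.0
```

    The lacunarity ratio described in the abstract would then compare this statistic for the observed aperture sequence against its value for the same apertures in randomized order.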

  19. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    Science.gov (United States)

    Nance, Donald; Liever, Peter; Nielsen, Tanner

    2015-01-01

    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center (MSFC). The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  20. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    Science.gov (United States)

    Nance, Donald K.; Liever, Peter A.

    2015-01-01

    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center (MSFC). The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  1. Tailoring Enterprise Systems Engineering Policy for Project Scale and Complexity

    Science.gov (United States)

    Cox, Renee I.; Thomas, L. Dale

    2014-01-01

    Space systems are characterized by varying degrees of scale and complexity. Accordingly, cost-effective implementation of systems engineering also varies depending on scale and complexity. Recognizing that systems engineering and integration happen everywhere and at all levels of a given system and that the life cycle is an integrated process necessary to mature a design, the National Aeronautics and Space Administration's (NASA's) Marshall Space Flight Center (MSFC) has developed a suite of customized implementation approaches based on project scale and complexity. While it may be argued that a top-level systems engineering process is common to and indeed desirable across an enterprise for all space systems, implementation of that top-level process and the associated products developed as a result differ from system to system. The implementation approaches used for developing a scientific instrument necessarily differ from those used for a space station.

  2. Modelling an industrial anaerobic granular reactor using a multi-scale approach

    DEFF Research Database (Denmark)

    Feldman, Hannah; Flores Alsina, Xavier; Ramin, Pedram

    2017-01-01

    The objective of this paper is to show the results of an industrial project dealing with modelling of anaerobic digesters. A multi-scale mathematical approach is developed to describe reactor hydrodynamics, granule growth/distribution and microbial competition/inhibition for substrate/space within...... the biofilm. The main biochemical and physico-chemical processes in the model are based on the Anaerobic Digestion Model No 1 (ADM1) extended with the fate of phosphorus (P), sulfur (S) and ethanol (Et-OH). Wastewater dynamic conditions are reproduced and data frequency increased using the Benchmark...... simulations show the effects on the overall process performance when operational (pH) and loading (S:COD) conditions are modified. Lastly, the effect of intra-granular precipitation on the overall organic/inorganic distribution is assessed at: 1) different times; and, 2) reactor heights. Finally...

  3. Keeping it real: revisiting a real-space approach to running ensembles of cosmological N-body simulations

    International Nuclear Information System (INIS)

    Orban, Chris

    2013-01-01

    In setting up initial conditions for ensembles of cosmological N-body simulations there are, fundamentally, two choices: either maximizing the correspondence of the initial density field to the assumed Fourier-space clustering or, instead, matching to real-space statistics and allowing the DC mode (i.e. overdensity) to vary from box to box as it would in the real universe. As a stringent test of both approaches, I perform ensembles of simulations using a power law and a ''power law times a bump'' model inspired by baryon acoustic oscillations (BAO), exploiting the self-similarity of these initial conditions to quantify the accuracy of the matter-matter two-point correlation results. The real-space method, which was originally proposed by Pen 1997 [1] and implemented by Sirko 2005 [2], performed well in producing the expected self-similar behavior and corroborated the non-linear evolution of the BAO feature observed in conventional simulations, even in the strongly-clustered regime (σ₈ ≳ 1). In revisiting the real-space method championed by [2], it was also noticed that this earlier study overlooked an important integral constraint correction to the correlation function in results from the conventional approach that can be important in ΛCDM simulations with L_box ∼ 1 h⁻¹ Gpc and on scales r ≳ L_box/10. Rectifying this issue shows that the Fourier-space and real-space methods are about equally accurate and efficient for modeling the evolution and growth of the correlation function, contrary to previous claims. An appendix provides a useful independent-of-epoch analytic formula for estimating the importance of the integral constraint bias on correlation function measurements in ΛCDM simulations.

  4. The Dynameomics Entropy Dictionary: A Large-Scale Assessment of Conformational Entropy across Protein Fold Space.

    Science.gov (United States)

    Towse, Clare-Louise; Akke, Mikael; Daggett, Valerie

    2017-04-27

    Molecular dynamics (MD) simulations contain considerable information with regard to the motions and fluctuations of a protein, the magnitude of which can be used to estimate conformational entropy. Here we survey conformational entropy across protein fold space using the Dynameomics database, which represents the largest existing data set of protein MD simulations for representatives of essentially all known protein folds. We provide an overview of MD-derived entropies accounting for all possible degrees of dihedral freedom on an unprecedented scale. Although different side chains might be expected to impose varying restrictions on the conformational space that the backbone can sample, we found that the backbone entropy and side chain size are not strictly coupled. An outcome of these analyses is the Dynameomics Entropy Dictionary, the contents of which have been compared with entropies derived by other theoretical approaches and experiment. As might be expected, the conformational entropies scale linearly with the number of residues, demonstrating that conformational entropy is an extensive property of proteins. The calculated conformational entropies of folding agree well with previous estimates. Detailed analysis of specific cases identifies deviations in conformational entropy from the average values that highlight how conformational entropy varies with sequence, secondary structure, and tertiary fold. Notably, α-helices have lower entropy on average than do β-sheets, and both are lower than coil regions.
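    As a toy illustration of the kind of per-degree-of-freedom term aggregated in such a survey (a sketch with synthetic angles, not Dynameomics data or the paper's exact estimator): the conformational entropy of a single dihedral angle can be estimated as the binned Shannon entropy of its sampled distribution, S = -R Σ pᵢ ln pᵢ, with R the gas constant. A restricted rotamer samples fewer bins and therefore carries less entropy than a freely rotating one.

```python
import math
import random

R = 8.314  # gas constant, J/(mol*K)

def dihedral_entropy(angles_deg, n_bins=36):
    """Binned Shannon entropy (per mole) of a dihedral angle distribution."""
    counts = [0] * n_bins
    for a in angles_deg:
        counts[int((a % 360.0) / 360.0 * n_bins) % n_bins] += 1
    n = len(angles_deg)
    return -R * sum((c / n) * math.log(c / n) for c in counts if c > 0)

random.seed(0)
narrow = [random.gauss(180.0, 5.0) for _ in range(5000)]   # restricted rotamer
broad = [random.uniform(0.0, 360.0) for _ in range(5000)]  # free rotation
print(dihedral_entropy(narrow) < dihedral_entropy(broad))  # True
```

    Summing such terms over residues gives a quantity that grows linearly with chain length, consistent with the abstract's observation that conformational entropy is an extensive property.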

  5. Statistical inference and visualization in scale-space for spatially dependent images

    KAUST Repository

    Vaughan, Amy; Jun, Mikyoung; Park, Cheolwoo

    2012-01-01

    SiZer (SIgnificant ZERo crossing of the derivatives) is a graphical scale-space visualization tool that allows for statistical inferences. In this paper we develop a spatial SiZer for finding significant features and conducting goodness-of-fit tests

  6. Cosmological special relativity the large scale structure of space, time and velocity

    CERN Document Server

    Carmeli, Moshe

    1997-01-01

    This book deals with special relativity theory and its application to cosmology. It presents Einstein's theory of space and time in detail, and describes the large scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The book will be of interest to cosmologists, astrophysicists, theoretical

  7. Cosmological special relativity the large scale structure of space, time and velocity

    CERN Document Server

    Carmeli, Moshe

    2002-01-01

    This book presents Einstein's theory of space and time in detail, and describes the large-scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The relationship between cosmic velocity, acceleration and distances is given. In the appendices gravitation is added in the form of a cosmological g

  8. A behavioral approach to shared mapping of peripersonal space between oneself and others.

    Science.gov (United States)

    Teramoto, Wataru

    2018-04-03

    Recent physiological studies have shown that some visuotactile brain areas respond to others' peripersonal spaces (PPS) as they would to their own. This study investigates this PPS remapping phenomenon in terms of human behavior. Participants placed their left hands on a tabletop screen where visual stimuli were projected. A vibrotactile stimulator was attached to the tip of their index finger. While a white disk approached or receded from the hand in the participant's near or far space, the participant was instructed to quickly detect a target (vibrotactile stimulation, a change in the moving disk's color, or both). When performing this task alone, the participants exhibited shorter detection times when the disk approached the hand in their near space. In contrast, when performing the task with a partner across the table, the participants exhibited shorter detection times both when the disk approached their own hand in their near space and when it approached the partner's hand in the partner's near space but the participants' far space. This phenomenon was also observed when the body parts from which the visual stimuli approached/receded differed between the participant and partner. These results suggest that humans can share PPS representations and/or body-derived attention/arousal mechanisms with others.

  9. Pattern recognition in probability spaces for visualization and identification of plasma confinement regimes and confinement time scaling

    International Nuclear Information System (INIS)

    Verdoolaege, G; Karagounis, G; Oost, G Van; Tendler, M

    2012-01-01

    Pattern recognition is becoming an increasingly important tool for making inferences from the massive amounts of data produced in fusion experiments. The purpose is to contribute to physics studies and plasma control. In this work, we address the visualization of plasma confinement data, the (real-time) identification of confinement regimes and the establishment of a scaling law for the energy confinement time. We take an intrinsically probabilistic approach, modeling data from the International Global H-mode Confinement Database with Gaussian distributions. We show that pattern recognition operations working in the associated probability space are considerably more powerful than their counterparts in a Euclidean data space. This opens up new possibilities for analyzing confinement data and for fusion data processing in general. We hence advocate the essential role played by measurement uncertainty for data interpretation in fusion experiments. (paper)
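    The core of the "probability space" idea can be sketched as follows: each measurement with uncertainty is treated as a Gaussian distribution, and distances between data points are measured with a geodesic (Fisher-Rao) distance on the manifold of Gaussians rather than the Euclidean distance between point values. The closed form below is the standard univariate result; the example values are illustrative, not from the confinement database.

```python
import math

def rao_distance(mu1, sigma1, mu2, sigma2):
    """Fisher-Rao geodesic distance between two univariate Gaussians."""
    num = (mu1 - mu2) ** 2 + 2.0 * (sigma1 - sigma2) ** 2
    den = (mu1 - mu2) ** 2 + 2.0 * (sigma1 + sigma2) ** 2
    delta = math.sqrt(num / den)
    return 2.0 * math.sqrt(2.0) * math.atanh(delta)

# Two measurements with equal means: their Euclidean distance is 0, but the
# Rao distance still separates them because their uncertainties differ.
print(rao_distance(1.0, 0.5, 1.0, 2.0))  # sqrt(2) * ln(4), about 1.9605
```

    This is how working in the space of distributions can discriminate between data points that a Euclidean analysis of the mean values alone would treat as identical.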

  10. Zebrafish brain mapping--standardized spaces, length scales, and the power of N and n.

    Science.gov (United States)

    Hunter, Paul R; Hendry, Aenea C; Lowe, Andrew S

    2015-06-01

    Mapping anatomical and functional parameters of the zebrafish brain is moving apace. Research communities undertaking such studies are becoming ever larger and more diverse. The unique features, tools, and technologies associated with zebrafish are propelling them as the 21st century model organism for brain mapping. Uniquely positioned as a vertebrate model system, the zebrafish enables imaging of anatomy and function at different length scales from intraneuronal compartments to sparsely distributed whole brain patterns. With a variety of diverse and established statistical modeling and analytic methods available from the wider brain mapping communities, the richness of zebrafish neuroimaging data is being realized. The statistical power of population observations (N) within and across many samples (n) projected onto a standardized space will provide vast databases for data-driven biological approaches. This article reviews key brain mapping initiatives at different levels of scale that highlight the potential of zebrafish brain mapping. By way of introduction to the next wave of brain mappers, an accessible introduction to the key concepts and caveats associated with neuroimaging are outlined and discussed. © 2014 Wiley Periodicals, Inc.

  11. Coherent Structures and Spectral Energy Transfer in Turbulent Plasma: A Space-Filter Approach

    Science.gov (United States)

    Camporeale, E.; Sorriso-Valvo, L.; Califano, F.; Retinò, A.

    2018-03-01

    Plasma turbulence at scales of the order of the ion inertial length is mediated by several mechanisms, including linear wave damping, magnetic reconnection, the formation and dissipation of thin current sheets, and stochastic heating. It is now understood that the presence of localized coherent structures enhances the dissipation channels and the kinetic features of the plasma. However, no formal way of quantifying the relationship between scale-to-scale energy transfer and the presence of spatial structures has been presented so far. In this Letter we quantify such a relationship by analyzing the results of a two-dimensional high-resolution Hall magnetohydrodynamic simulation. In particular, we employ the technique of space filtering to derive a spectral energy flux term which defines, in any point of the computational domain, the signed flux of spectral energy across a given wave number. The characterization of coherent structures is performed by means of a traditional two-dimensional wavelet transformation. By studying the correlation between the spectral energy flux and the wavelet amplitude, we demonstrate the strong relationship between scale-to-scale transfer and coherent structures. Furthermore, by conditioning one quantity with respect to the other, we are able for the first time to quantify the inhomogeneity of the turbulence cascade induced by topological structures in the magnetic field. Taking into account the low space-filling factor of coherent structures (i.e., they cover a small portion of space), it emerges that 80% of the spectral energy transfer (both in the direct and inverse cascade directions) is localized in about 50% of space, and 50% of the energy transfer is localized in only 25% of space.
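    A minimal one-dimensional illustration of the space-filter technique (not the paper's Hall-MHD setup): low-pass filter a field u with a box filter, form the subfilter stress τ = bar(u·u) − bar(u)·bar(u), and obtain a pointwise, signed flux of energy across the filter scale, Π(x) = −τ(x) d(bar u)/dx, as in filtered Burgers/Navier-Stokes energy budgets. The filter width, grid, and field below are arbitrary choices for the sketch.

```python
import math

def box_filter(f, w):
    """Periodic running mean over 2*w + 1 grid points."""
    n = len(f)
    return [sum(f[(i + k) % n] for k in range(-w, w + 1)) / (2 * w + 1)
            for i in range(n)]

def scale_to_scale_flux(u, w, dx):
    """Signed pointwise energy flux across the filter scale, -tau * d(ubar)/dx."""
    ub = box_filter(u, w)
    uub = box_filter([v * v for v in u], w)
    tau = [a - b * b for a, b in zip(uub, ub)]       # subfilter stress
    n = len(u)
    dudx = [(ub[(i + 1) % n] - ub[(i - 1) % n]) / (2 * dx)  # central difference
            for i in range(n)]
    return [-t * g for t, g in zip(tau, dudx)]

n, dx = 64, 2 * math.pi / 64
u = [math.sin(i * dx) + 0.3 * math.sin(5 * i * dx) for i in range(n)]
flux = scale_to_scale_flux(u, w=2, dx=dx)
print(sum(flux) / n)  # mean scale-to-scale energy transfer over the domain
```

    Because Π(x) is defined at every point, one can correlate it with a structure indicator (the wavelet amplitude in the paper) and condition one quantity on the other, which is exactly the analysis the abstract describes.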

  12. EFT of large scale structures in redshift space

    Science.gov (United States)

    Lewandowski, Matthew; Senatore, Leonardo; Prada, Francisco; Zhao, Cheng; Chuang, Chia-Hsun

    2018-03-01

    We further develop the description of redshift-space distortions within the effective field theory of large scale structures. First, we generalize the counterterms to include the effect of baryonic physics and primordial non-Gaussianity. Second, we evaluate the IR resummation of the dark matter power spectrum in redshift space. This requires us to identify a controlled approximation that makes the numerical evaluation straightforward and efficient. Third, we compare the predictions of the theory at one loop with the power spectrum from numerical simulations up to ℓ=6. We find that the IR resummation allows us to correctly reproduce the baryon acoustic oscillation peak. The k reach—or, equivalently, the precision for a given k—depends on additional counterterms that need to be matched to simulations. Since the nonlinear scale for the velocity is expected to be longer than the one for the overdensity, we consider a minimal and a nonminimal set of counterterms. The quality of our numerical data makes it hard to firmly establish the performance of the theory at high wave numbers. Within this limitation, we find that the theory at redshift z=0.56 and up to ℓ=2 matches the data at the percent level approximately up to k ∼ 0.13 h Mpc⁻¹ or k ∼ 0.18 h Mpc⁻¹, depending on the number of counterterms used, with a potentially large improvement over former analytical techniques.

  13. Combining different types of scale space interest points using canonical sets

    NARCIS (Netherlands)

    Kanters, F.M.W.; Denton, T.; Shokoufandeh, A.; Florack, L.M.J.; Haar Romenij, ter B.M.; Sgallari, F.; Murli, A.; Paragios, N.

    2007-01-01

    Scale space interest points capture important photometric and deep structure information of an image. The information content of such points can be made explicit using image reconstruction. In this paper we will consider the problem of combining multiple types of interest points used for image

  14. Unified Approach to Modeling and Simulation of Space Communication Networks and Systems

    Science.gov (United States)

    Barritt, Brian; Bhasin, Kul; Eddy, Wesley; Matthews, Seth

    2010-01-01

    Network simulator software tools are often used to model the behaviors and interactions of applications, protocols, packets, and data links in terrestrial communication networks. Other software tools that model the physics, orbital dynamics, and RF characteristics of space systems have matured to allow for rapid, detailed analysis of space communication links. However, the absence of a unified toolset that integrates the two modeling approaches has encumbered the systems engineers tasked with the design, architecture, and analysis of complex space communication networks and systems. This paper presents the unified approach and describes the motivation, challenges, and our solution - the customization of the network simulator to integrate with astronautical analysis software tools for high-fidelity end-to-end simulation. Keywords: space; communication; systems; networking; simulation; modeling; QualNet; STK; integration; space networks.

  15. Critical spaces for quasilinear parabolic evolution equations and applications

    Science.gov (United States)

    Prüss, Jan; Simonett, Gieri; Wilke, Mathias

    2018-02-01

    We present a comprehensive theory of critical spaces for the broad class of quasilinear parabolic evolution equations. The approach is based on maximal Lp-regularity in time-weighted function spaces. It is shown that our notion of critical spaces coincides with the concept of scaling invariant spaces in case that the underlying partial differential equation enjoys a scaling invariance. Applications to the vorticity equations for the Navier-Stokes problem, convection-diffusion equations, the Nernst-Planck-Poisson equations in electro-chemistry, chemotaxis equations, the MHD equations, and some other well-known parabolic equations are given.

  16. Linking Time and Space Scales in Distributed Hydrological Modelling - a case study for the VIC model

    Science.gov (United States)

    Melsen, Lieke; Teuling, Adriaan; Torfs, Paul; Zappa, Massimiliano; Mizukami, Naoki; Clark, Martyn; Uijlenhoet, Remko

    2015-04-01

    One of the famous paradoxes of the Greek philosopher Zeno of Elea (~450 BC) is the one with the arrow: If one shoots an arrow, and cuts its motion into such small time steps that at every step the arrow is standing still, the arrow is motionless, because a concatenation of non-moving parts does not create motion. Nowadays, this reasoning can be refuted easily, because we know that motion is a change in space over time, which thus by definition depends on both time and space. If one disregards time by cutting it into infinitely small steps, motion is also excluded. This example shows that time and space are linked and therefore hard to evaluate separately. As hydrologists we want to understand and predict the motion of water, which means we have to look both in space and in time. In hydrological models we can account for space by using spatially explicit models. With increasing computational power and increased data availability from e.g. satellites, it has become easier to apply models at a higher spatial resolution. Increasing the resolution of hydrological models is also labelled as one of the 'Grand Challenges' in hydrology by Wood et al. (2011) and Bierkens et al. (2014), who call for global modelling at hyperresolution (~1 km and smaller). A literature survey of 242 peer-reviewed articles in which the Variable Infiltration Capacity (VIC) model was used showed that the spatial resolution at which the model is applied has become finer over the past 17 years: from 0.5-2 degrees when the model was just developed, to 1/8 and even 1/32 degree nowadays. On the other hand, the literature survey showed that the time step at which the model is calibrated and/or validated has remained the same over the last 17 years: mainly daily or monthly. Klemeš (1983) stresses the fact that space and time scales are connected, and therefore downscaling the spatial scale would also imply downscaling of the temporal scale. Is it worth the effort of downscaling your model from 1 degree to 1

  17. An Implementation and Parallelization of the Scale Space Meshing Algorithm

    Directory of Open Access Journals (Sweden)

    Julie Digne

    2015-11-01

    Full Text Available Creating an interpolating mesh from an unorganized set of oriented points is a difficult problem which is often overlooked. Most methods focus indeed on building a watertight smoothed mesh by defining some function whose zero level set is the surface of the object. However in some cases it is crucial to build a mesh that interpolates the points and does not fill the acquisition holes: either because the data are sparse and trying to fill the holes would create spurious artifacts, or because the goal is to explore visually the data exactly as they were acquired without any smoothing process. In this paper we detail a parallel implementation of the Scale-Space Meshing algorithm, which builds on the scale-space framework for reconstructing a high precision mesh from an input oriented point set. This algorithm first smoothes the point set, producing a singularity-free shape. It then uses a standard mesh reconstruction technique, the Ball Pivoting Algorithm, to build a mesh from the smoothed point set. The final step consists in back-projecting the mesh built on the smoothed positions onto the original point set. The result of this process is an interpolating, hole-preserving surface mesh reconstruction.
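    The smooth-then-back-project pipeline can be sketched in miniature. This is only a skeleton under strong simplifications: the smoothing step is reduced to one iteration of neighborhood averaging (the real algorithm iterates projections onto local regression planes), the Ball Pivoting meshing step is omitted, and back-projection is done by snapping each vertex to the nearest original sample.

```python
def smooth_once(points, radius):
    """Move each point to the centroid of its neighbors within `radius`."""
    out = []
    r2 = radius * radius
    for p in points:
        nbrs = [q for q in points
                if sum((a - b) ** 2 for a, b in zip(p, q)) <= r2]
        out.append(tuple(sum(c) / len(nbrs) for c in zip(*nbrs)))
    return out

def back_project(vertices, originals):
    """Snap each mesh vertex back to the closest original sample."""
    def nearest(v):
        return min(originals,
                   key=lambda q: sum((a - b) ** 2 for a, b in zip(v, q)))
    return [nearest(v) for v in vertices]

pts = [(0.0, 0.0), (1.0, 0.1), (2.0, -0.1), (3.0, 0.0)]  # noisy samples (made up)
smoothed = smooth_once(pts, radius=1.5)   # singularity-reducing smoothing step
restored = back_project(smoothed, pts)    # interpolation property recovered
print(restored)
```

    The key property the back-projection step preserves is that every output position is an original sample, which is what makes the final mesh interpolating and hole-preserving.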

  18. A new approach to the analysis of the phase space of f(R)-gravity

    Energy Technology Data Exchange (ETDEWEB)

    Carloni, S., E-mail: sante.carloni@tecnico.ulisboa.pt [Centro Multidisciplinar de Astrofisica—CENTRA, Instituto Superior Tecnico – IST, Universidade de Lisboa – UL, Avenida Rovisco Pais 1, 1049-001 (Portugal)

    2015-09-01

    We propose a new dynamical system formalism for the analysis of f(R) cosmologies. The new approach eliminates the need for cumbersome inversions to close the dynamical system and allows the analysis of the phase space of f(R)-gravity models which cannot be investigated using the standard technique. Differently from previously proposed similar techniques, the new method is constructed in such a way as to associate scale factors to the fixed points, which contain four integration constants (i.e. solutions of fourth-order differential equations). In this way new light is shed on the physical meaning of the fixed points. We apply this technique to some f(R) Lagrangians relevant for inflationary and dark energy models.

  19. Lateral skull base approaches in the management of benign parapharyngeal space tumors.

    Science.gov (United States)

    Prasad, Sampath Chandra; Piccirillo, Enrico; Chovanec, Martin; La Melia, Claudio; De Donato, Giuseppe; Sanna, Mario

    2015-06-01

    To evaluate the role of lateral skull base approaches in the management of benign parapharyngeal space tumors and to propose an algorithm for their surgical approach. Retrospective study of patients with benign parapharyngeal space tumors. The clinical features, radiology, preoperative management of skull base neurovasculature, surgical approaches and overall results were recorded. 46 patients presented with 48 tumors, of which 12 were prestyloid and 36 poststyloid. 19 (39.6%) tumors were paragangliomas, 15 (31.25%) were schwannomas and 11 (23%) were pleomorphic adenomas. Preoperative embolization was performed in 19 patients, stenting of the internal carotid artery in 4 and permanent balloon occlusion in 2. 19 tumors were approached by the transcervical approach, 13 by the transcervical-transparotid, 5 by the transcervical-transmastoid, and 6, 1 and 2 tumors by the infratemporal fossa approach types A, B and D, respectively. Total radical tumor removal was achieved in 46 (96%) of the cases. Lateral skull base approaches have an advantage over other approaches in the management of benign tumors of the parapharyngeal space because they provide excellent exposure with less morbidity. The use of the microscope combined with bipolar cautery reduces morbidity. Stenting of the internal carotid artery offers a chance of complete tumor removal with arterial preservation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  20. Approach to transaction management for Space Station Freedom

    Science.gov (United States)

    Easton, C. R.; Cressy, Phil; Ohnesorge, T. E.; Hector, Garland

    1989-01-01

    An approach to managing the operations of the Space Station Freedom based on their external effects is described. It is assumed that there is a conflict-free schedule that, if followed, will allow only appropriate operations to occur. The problem is then reduced to that of ensuring that the operations initiated are within the limits allowed by the schedule, or that the external effects of such operations are within those allowed by the schedule. The main features of the currently adopted transaction management approach are discussed.

  1. Space Weather in the Machine Learning Era: A Multidisciplinary Approach

    Science.gov (United States)

    Camporeale, E.; Wing, S.; Johnson, J.; Jackman, C. M.; McGranaghan, R.

    2018-01-01

    The workshop entitled Space Weather: A Multidisciplinary Approach took place at the Lorentz Center, University of Leiden, Netherlands, on 25-29 September 2017. The aim of this workshop was to bring together members of the Space Weather, Mathematics, Statistics, and Computer Science communities to address the use of advanced techniques such as Machine Learning, Information Theory, and Deep Learning, to better understand the Sun-Earth system and to improve space weather forecasting. Although individual efforts have been made toward this goal, the community consensus is that establishing interdisciplinary collaborations is the most promising strategy for fully utilizing the potential of these advanced techniques in solving Space Weather-related problems.

  2. OBJECT-ORIENTED CHANGE DETECTION BASED ON MULTI-SCALE APPROACH

    Directory of Open Access Journals (Sweden)

    Y. Jia

    2016-06-01

    Full Text Available The change detection of remote sensing images means analysing the change information quantitatively and recognizing the change types of the surface coverage data in different time phases. With the advent of high resolution remote sensing imagery, object-oriented change detection methods have emerged. In this paper, we investigate a multi-scale approach for high resolution images, which includes multi-scale segmentation, multi-scale feature selection and multi-scale classification. Experimental results show that this method has a clear advantage over the traditional single-scale method for high resolution remote sensing image change detection.
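    As a toy illustration of the multi-scale idea (not the authors' actual segmentation/classification chain), the sketch below flags a pixel as changed only when the difference between two acquisition dates persists at every scale, which suppresses isolated single-pixel noise. The images, threshold and scales are invented for the example.

```python
def block_mean(img, r, c, s):
    # Mean over an s x s block with top-left corner (r, c).
    return sum(img[r + i][c + j] for i in range(s) for j in range(s)) / (s * s)

def detect_changes(img_a, img_b, scales=(1, 2), threshold=0.4):
    # Keep a pixel only if the change signal survives at every scale.
    n = len(img_a)
    changed = set()
    for r in range(n):
        for c in range(n):
            if all(
                abs(block_mean(img_a, min(r, n - s), min(c, n - s), s)
                    - block_mean(img_b, min(r, n - s), min(c, n - s), s)) > threshold
                for s in scales
            ):
                changed.add((r, c))
    return changed

before = [[0] * 6 for _ in range(6)]
after = [row[:] for row in before]
for r, c in [(2, 2), (2, 3), (3, 2), (3, 3)]:
    after[r][c] = 2          # a real 2 x 2 change patch
after[0][0] = 1              # single-pixel noise, rejected at the coarser scale

changes = detect_changes(before, after)
```

    The coarse scale averages the lone noisy pixel away while the genuine patch survives at both scales, which is the intuition behind combining segmentation, features and classification across scales.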

  3. An introduction to data reduction: space-group determination, scaling and intensity statistics.

    Science.gov (United States)

    Evans, Philip R

    2011-04-01

    This paper presents an overview of how to run the CCP4 programs for data reduction (SCALA, POINTLESS and CTRUNCATE) through the CCP4 graphical interface ccp4i and points out some issues that need to be considered, together with a few examples. It covers determination of the point-group symmetry of the diffraction data (the Laue group), which is required for the subsequent scaling step, examination of systematic absences, which in many cases will allow inference of the space group, putting multiple data sets on a common indexing system when there are alternatives, the scaling step itself, which produces a large set of data-quality indicators, estimation of |F| from intensity and finally examination of intensity statistics to detect crystal pathologies such as twinning. An appendix outlines the scoring schemes used by the program POINTLESS to assign probabilities to possible Laue and space groups.
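    The twinning test mentioned at the end rests on Wilson statistics: for untwinned acentric data the intensities are exponentially distributed, so the second-moment ratio ⟨I²⟩/⟨I⟩² is 2, while perfect twinning pushes it down toward 1.5. A minimal simulation of the untwinned case (synthetic intensities, not real diffraction data, and not the scoring scheme POINTLESS itself uses):

```python
import random

random.seed(7)

# Untwinned acentric intensities follow an exponential (Wilson) distribution.
intensities = [random.expovariate(1.0) for _ in range(200_000)]

mean_i = sum(intensities) / len(intensities)
second_moment = sum(i * i for i in intensities) / len(intensities)

# ~2 for untwinned data; a value near 1.5 would hint at a perfect twin.
ratio = second_moment / mean_i**2
```
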

  4. A Proposal for the Common Safety Approach of Space Programs

    Science.gov (United States)

    Grimard, Max

    2002-01-01

    For all applications, business and systems related to Space programs, Quality is mandatory and is a key factor for the technical as well as the economical performances. Up to now the differences of applications (launchers, manned space-flight, sciences, telecommunications, Earth observation, planetary exploration, etc.) and the difference of technical culture and background of the leading countries (USA, Russia, Europe) have generally led to different approaches in terms of standards and processes for Quality. At a time where international cooperation is quite usual for the institutional programs and globalization is the key word for the commercial business, it is considered of prime importance to aim at common standards and approaches for Quality in Space Programs. For that reason, the International Academy of Astronautics has set up a Study Group whose mandate is to "Make recommendations to improve the Quality, Reliability, Efficiency, and Safety of space programmes, taking into account the overall environment in which they operate: economical constraints, harsh environments, space weather, long life, no maintenance, autonomy, international co-operation, norms and standards, certification." The paper will introduce the activities of this Study Group, describing a first list of topics which should be addressed. Through this paper it is expected to open the discussion to update/enlarge this list of topics and to call for contributors to this Study Group.

  5. Robust mode space approach for atomistic modeling of realistically large nanowire transistors

    Science.gov (United States)

    Huang, Jun Z.; Ilatikhameneh, Hesameddin; Povolotskyi, Michael; Klimeck, Gerhard

    2018-01-01

    Nanoelectronic transistors have reached 3D length scales in which the number of atoms is countable. Truly atomistic device representations are needed to capture the essential functionalities of the devices. Atomistic quantum transport simulations of realistically extended devices are, however, computationally very demanding. The widely used mode space (MS) approach can significantly reduce the numerical cost, but a good MS basis is usually very hard to obtain for atomistic full-band models. In this work, a robust and parallel algorithm is developed to optimize the MS basis for atomistic nanowires. This enables engineering-level, reliable tight binding non-equilibrium Green's function simulation of nanowire metal-oxide-semiconductor field-effect transistor (MOSFET) with a realistic cross section of 10 nm × 10 nm using a small computer cluster. This approach is applied to compare the performance of InGaAs and Si nanowire n-type MOSFETs (nMOSFETs) with various channel lengths and cross sections. Simulation results with full-band accuracy indicate that InGaAs nanowire nMOSFETs have no drive current advantage over their Si counterparts for cross sections up to about 10 nm × 10 nm.
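    The idea behind the mode space reduction can be illustrated on a toy one-dimensional tight-binding chain, a far simpler stand-in for the atomistic full-band Hamiltonian of the paper: a small set of unperturbed modes forms the basis, and the full Hamiltonian is projected into that subspace. The chain size, mode count and potential are invented for the sketch.

```python
import numpy as np

n, n_modes = 40, 8

# Full "real-space" Hamiltonian: nearest-neighbor chain plus a weak potential.
h0 = -np.eye(n, k=1) - np.eye(n, k=-1)
h = h0 + np.diag(0.05 * np.sin(np.arange(n)))

# Mode-space basis: the lowest eigenvectors of the unperturbed chain.
_, vecs = np.linalg.eigh(h0)
basis = vecs[:, :n_modes]

# Project into the reduced space: an 8x8 problem instead of 40x40.
h_ms = basis.T @ h @ basis

e_full = np.linalg.eigvalsh(h)[0]
e_ms = np.linalg.eigvalsh(h_ms)[0]   # Rayleigh-Ritz: never below e_full
```

    The reduced problem reproduces the low-energy spectrum at a fraction of the cost; the hard part, which the paper addresses, is constructing a basis that stays accurate for realistic atomistic full-band models.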

  6. Gravitation and Special Relativity from Compton Wave Interactions at the Planck Scale: An Algorithmic Approach

    Science.gov (United States)

    Blackwell, William C., Jr.

    2004-01-01

    In this paper space is modeled as a lattice of Compton wave oscillators (CWOs) of near-Planck size. It is shown that gravitation and special relativity emerge from the interaction between particles' Compton waves. To develop this CWO model an algorithmic approach was taken, incorporating simple rules of interaction at the Planck scale developed using well known physical laws. This technique naturally leads to Newton's law of gravitation and a new form of doubly special relativity. The model is in apparent agreement with the holographic principle, and it predicts a cutoff energy for ultrahigh-energy cosmic rays that is consistent with observational data.

  7. An Effective Approach Control Scheme for the Tethered Space Robot System

    Directory of Open Access Journals (Sweden)

    Zhongjie Meng

    2014-09-01

    Full Text Available The tethered space robot system (TSR, which is composed of a platform, a gripper and a space tether, has great potential in future space missions. Given the relative motion among the platform, tether, gripper and the target, an integrated approach model is derived. Then, a novel coordinated approach control scheme is presented, in which the tether tension, thrusters and the reaction wheel are all utilized. It contains the open-loop trajectory optimization, the feedback trajectory control and attitude control. The numerical simulation results show that the rendezvous between TSR and the target can be realized by the proposed coordinated control scheme, and the propellant consumption is efficiently reduced. Moreover, the control scheme performs well in the presence of the initial state's perturbations, actuator characteristics and sensor errors.
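    As a much-simplified illustration of the feedback trajectory control stage, the sketch below tracks a target along one axis with double-integrator dynamics and a PD law standing in for the coordinated tension/thruster controller of the paper; all gains and time steps are invented.

```python
def simulate_approach(x0=10.0, target=0.0, kp=4.0, kd=4.0, dt=0.01, steps=1500):
    # Gripper approach along one axis: x'' = u, with a PD feedback law.
    x, v = x0, 0.0
    for _ in range(steps):
        u = -kp * (x - target) - kd * v  # commanded acceleration
        v += u * dt                      # semi-implicit Euler integration
        x += v * dt
    return x, v

final_x, final_v = simulate_approach()
# The gripper settles at the target with negligible residual velocity.
```

    With these gains the closed loop is critically damped, so the rendezvous completes without overshoot, a property the real coordinated controller must also provide while additionally managing tether tension and attitude.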

  8. Finite frequency shear wave splitting tomography: a model space search approach

    Science.gov (United States)

    Mondal, P.; Long, M. D.

    2017-12-01

    Observations of seismic anisotropy provide key constraints on past and present mantle deformation. A common method for constraining upper mantle anisotropy is to measure shear wave splitting parameters (delay time and fast direction). However, the interpretation is not straightforward, because splitting measurements represent an integration of structure along the ray path. A tomographic approach that allows for localization of anisotropy is desirable; however, tomographic inversion for anisotropic structure is a daunting task, since 21 parameters are needed to describe general anisotropy. Such a large parameter space does not allow a straightforward application of tomographic inversion. Building on previous work on finite frequency shear wave splitting tomography, this study aims to develop a framework for SKS splitting tomography with a new parameterization of anisotropy and a model space search approach. We reparameterize the full elastic tensor, reducing the number of parameters to three (a measure of strength based on symmetry considerations for olivine, plus the dip and azimuth of the fast symmetry axis). We compute Born-approximation finite frequency sensitivity kernels relating model perturbations to splitting intensity observations. The strong dependence of the sensitivity kernels on the starting anisotropic model, and thus the strong non-linearity of the inverse problem, makes a linearized inversion infeasible. Therefore, we implement a Markov Chain Monte Carlo technique in the inversion procedure. We have performed tests with synthetic data sets to evaluate computational costs and infer the resolving power of our algorithm for synthetic models with multiple anisotropic layers. Our technique can resolve anisotropic parameters on length scales of ~50 km for realistic station and event configurations for dense broadband experiments. We are proceeding towards applications to real data sets, with an initial focus on the High Lava Plains of Oregon.
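    A minimal Metropolis sampler, the simplest member of the MCMC family invoked above, can be shown on a toy one-parameter "model space" with a standard-normal posterior; the real inversion targets the three anisotropy parameters per cell and a splitting-intensity misfit, both replaced here by invented stand-ins.

```python
import math
import random

random.seed(1)

def log_posterior(m):
    # Toy stand-in for the data misfit: a standard normal in one parameter.
    return -0.5 * m * m

def metropolis(n_samples=60_000, step=1.0):
    samples, m, lp = [], 0.0, log_posterior(0.0)
    for _ in range(n_samples):
        m_new = m + random.uniform(-step, step)       # random-walk proposal
        lp_new = log_posterior(m_new)
        if lp_new - lp > math.log(random.random()):   # Metropolis accept/reject
            m, lp = m_new, lp_new
        samples.append(m)
    return samples

samples = metropolis()
mean = sum(samples) / len(samples)
```

    The chain's histogram converges to the posterior without ever linearizing the forward problem, which is why the approach survives the strong model-dependence of the sensitivity kernels.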

  9. Kolmogorov Space in Time Series Data

    OpenAIRE

    Kanjamapornkul, K.; Pinčák, R.

    2016-01-01

    We provide the proof that the space of time series data is a Kolmogorov space with $T_{0}$-separation axiom using the loop space of time series data. In our approach we define a cyclic coordinate of the intrinsic time scale of time series data after empirical mode decomposition. A spinor field of time series data comes from the rotation of data around the price and time axis by defining a new extra dimension to time series data. We show that there exist eight hidden dimensions in Kolmogorov space for ...

  10. Patterns of disturbance at multiple scales in real and simulated landscapes

    Science.gov (United States)

    Giovanni Zurlini; Kurt H. Riitters; Nicola Zaccarelli; Irene Petrosoillo

    2007-01-01

    We describe a framework to characterize and interpret the spatial patterns of disturbances at multiple scales in socio-ecological systems. Domains of scale are defined in pattern metric space and mapped in geographic space, which can help to understand how anthropogenic disturbances might impact biodiversity through habitat modification. The approach identifies typical...

  11. Tracking and visualization of space-time activities for a micro-scale flu transmission study.

    Science.gov (United States)

    Qi, Feng; Du, Fei

    2013-02-07

    Infectious diseases pose increasing threats to public health with increasing population density and more and more sophisticated social networks. While efforts continue in studying the large scale dissemination of contagious diseases, individual-based activity and behaviour studies benefit not only disease transmission modelling but also control, containment, and prevention decision making at the local scale. The potential for using tracking technologies to capture detailed space-time trajectories and model individual behaviour is increasing rapidly, as technological advances enable the manufacture of small, lightweight, highly sensitive, and affordable receivers and the routine use of location-aware devices has become widespread (e.g., smart cellular phones). The use of low-cost tracking devices in medical research has also been proved effective by more and more studies. This study describes the use of tracking devices to collect data of space-time trajectories and the spatiotemporal processing of such data to facilitate micro-scale flu transmission study. We also report preliminary findings on activity patterns related to chances of influenza infection in a pilot study. Specifically, this study employed A-GPS tracking devices to collect data on a university campus. Spatiotemporal processing was conducted for data cleaning and segmentation. Processed data was validated with traditional activity diaries. The A-GPS data set was then used for visual explorations including density surface visualization and connection analysis to examine space-time activity patterns in relation to chances of influenza infection. When compared to diary data, the segmented tracking data proved to be an effective alternative and showed greater accuracies in time as well as the details of routes taken by participants. A comparison of space-time activity patterns between participants who caught seasonal influenza and those who did not revealed interesting patterns. This study

  12. Toward a global space exploration program: A stepping stone approach

    Science.gov (United States)

    Ehrenfreund, Pascale; McKay, Chris; Rummel, John D.; Foing, Bernard H.; Neal, Clive R.; Masson-Zwaan, Tanja; Ansdell, Megan; Peter, Nicolas; Zarnecki, John; Mackwell, Steve; Perino, Maria Antonietta; Billings, Linda; Mankins, John; Race, Margaret

    2012-01-01

    In response to the growing importance of space exploration in future planning, the Committee on Space Research (COSPAR) Panel on Exploration (PEX) was chartered to provide independent scientific advice to support the development of exploration programs and to safeguard the potential scientific assets of solar system objects. In this report, PEX elaborates a stepwise approach to achieve a new level of space cooperation that can help develop world-wide capabilities in space science and exploration and support a transition that will lead to a global space exploration program. The proposed stepping stones are intended to transcend cross-cultural barriers, leading to the development of technical interfaces and shared legal frameworks and fostering coordination and cooperation on a broad front. Input for this report was drawn from expertise provided by COSPAR Associates within the international community and via the contacts they maintain in various scientific entities. The report provides a summary and synthesis of science roadmaps and recommendations for planetary exploration produced by many national and international working groups, aiming to encourage and exploit synergies among similar programs. While science and technology represent the core and, often, the drivers for space exploration, several other disciplines and their stakeholders (Earth science, space law, and others) should be more robustly interlinked and involved than they have been to date. The report argues that a shared vision is crucial to this linkage, and to providing a direction that enables new countries and stakeholders to join and engage in the overall space exploration effort. Building a basic space technology capacity within a wider range of countries, ensuring new actors in space act responsibly, and increasing public awareness and engagement are concrete steps that can provide a broader interest in space exploration, worldwide, and build a solid basis for program sustainability. 
By engaging

  13. Distribution function approach to redshift space distortions. Part V: perturbation theory applied to dark matter halos

    Energy Technology Data Exchange (ETDEWEB)

    Vlah, Zvonimir; Seljak, Uroš [Institute for Theoretical Physics, University of Zürich, Zürich (Switzerland); Okumura, Teppei [Institute for the Early Universe, Ewha Womans University, Seoul, S. Korea (Korea, Republic of); Desjacques, Vincent, E-mail: zvlah@physik.uzh.ch, E-mail: seljak@physik.uzh.ch, E-mail: teppei@ewha.ac.kr, E-mail: Vincent.Desjacques@unige.ch [Département de Physique Théorique and Center for Astroparticle Physics (CAP) Université de Genéve, Genéve (Switzerland)

    2013-10-01

    Numerical simulations show that redshift space distortions (RSD) introduce strong scale dependence in the power spectra of halos, with ten percent deviations relative to linear theory predictions even on relatively large scales (k < 0.1h/Mpc) and even in the absence of satellites (which induce Fingers-of-God, FoG, effects). If unmodeled, these effects prevent one from extracting cosmological information from RSD surveys. In this paper we use Eulerian perturbation theory (PT) and an Eulerian halo biasing model and apply them to the distribution function approach to RSD, in which RSD is decomposed into several correlators of density-weighted velocity moments. We model each of these correlators using PT and compare the results to simulations over a wide range of halo masses and redshifts. We find that with an introduction of a physically motivated halo biasing, and using dark matter power spectra from simulations, we can reproduce the simulation results at a percent level on scales up to k ∼ 0.15h/Mpc at z = 0, without the need to have free FoG parameters in the model.
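    At lowest order the velocity-moment decomposition reduces to the linear Kaiser formula, P_s(k, μ) = (b + fμ²)² P(k). The sketch below numerically integrates over μ and recovers the standard monopole and quadrupole prefactors; the bias b and growth rate f values are arbitrary.

```python
def kaiser_multipole(ell, b, f, n=100_000):
    # Multipole of (b + f*mu^2)^2 via the midpoint rule on mu in [0, 1];
    # only ell = 0 (monopole) and ell = 2 (quadrupole) are handled here.
    legendre = (lambda mu: 1.0) if ell == 0 else (lambda mu: 0.5 * (3 * mu * mu - 1))
    total = 0.0
    for i in range(n):
        mu = (i + 0.5) / n
        total += (b + f * mu * mu) ** 2 * legendre(mu)
    return (2 * ell + 1) * total / n

b, f = 2.0, 0.5
p0 = kaiser_multipole(0, b, f)  # analytic: b^2 + (2/3) b f + f^2 / 5
p2 = kaiser_multipole(2, b, f)  # analytic: (4/3) b f + (4/7) f^2
```

    The paper's point is precisely that this linear result is inadequate at the ten percent level for halos on the scales quoted, which motivates the higher velocity moments of the distribution function approach.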

  14. A Declarative Design Approach to Modeling Traditional and Non-Traditional Space Systems

    Science.gov (United States)

    Hoag, Lucy M.

    The space system design process is known to be laborious, complex, and computationally demanding. It is highly multi-disciplinary, involving several interdependent subsystems that must be both highly optimized and reliable due to the high cost of launch. Satellites must also be capable of operating in harsh and unpredictable environments, so integrating high-fidelity analysis is important. To address each of these concerns, a holistic design approach is necessary. However, while the sophistication of space systems has evolved significantly in the last 60 years, improvements in the design process have been comparatively stagnant. Space systems continue to be designed using a procedural, subsystem-by-subsystem approach. This method is inadequate since it generally requires extensive iteration and limited or heuristic-based search, which can be slow, labor-intensive, and inaccurate. The use of a declarative design approach can potentially address these inadequacies. In the declarative programming style, the focus of a problem is placed on what the objective is, and not necessarily how it should be achieved. In the context of design, this entails knowledge expressed as a declaration of statements that are true about the desired artifact instead of explicit instructions on how to implement it. A well-known technique is through constraint-based reasoning, where a design problem is represented as a network of rules and constraints that are reasoned across by a solver to dynamically discover the optimal candidate(s). This enables implicit instantiation of the tradespace and allows for automatic generation of all feasible design candidates. As such, this approach also appears to be well-suited to modeling adaptable space systems, which generally have large tradespaces and possess configurations that are not well-known a priori. This research applied a declarative design approach to holistic satellite design and to tradespace exploration for adaptable space systems. The
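    The constraint-based flavor of declarative design described above can be miniaturized: declare subsystem options and constraints, then let a "solver" (here brute-force enumeration over a tiny hypothetical tradespace) discover every feasible candidate and the optimum. Every part name and number is invented for the illustration.

```python
from itertools import product

# Declared options: (name, mass_kg, power_W). Power is supply for the solar
# array and battery, and demand for the payload.
solar = [("small_array", 10, 200), ("large_array", 25, 600)]
battery = [("li_ion", 8, 150), ("ni_h2", 15, 250)]
payload = [("camera", 12, 180), ("radar", 30, 500)]

def feasible(design):
    arr, bat, pay = design
    mass = arr[1] + bat[1] + pay[1]
    return mass <= 60 and arr[2] + bat[2] >= pay[2]  # mass and power budgets

# Instantiate the tradespace implicitly declared by options + constraints.
candidates = [d for d in product(solar, battery, payload) if feasible(d)]
lightest = min(candidates, key=lambda d: sum(part[1] for part in d))
```

    Nothing in the declaration says how to search; the statements about the artifact (budgets, compatibilities) are separated from the enumeration, which is the essence of the declarative style the dissertation applies at full engineering fidelity.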

  15. On the space dimensionality based on metrics

    International Nuclear Information System (INIS)

    Gorelik, G.E.

    1978-01-01

    A new approach to space-time dimensionality is suggested which makes it possible to account for a dimensionality that changes with the scale of the phenomenon. An attempt is made to give a definition of dimensionality equivalent to the conventional definition for Euclidean spaces and manifolds. The conventional definition of manifold dimensionality relies on the possibility of a homeomorphic mapping of Euclidean space onto some neighborhood of each point of the manifold.

  16. An integrated mission approach to the space exploration initiative will ensure success

    Science.gov (United States)

    Coomes, Edmund P.; Dagle, Jefferey E.; Bamberger, Judith A.; Noffsinger, Kent E.

    1991-01-01

    The direction of the American space program, as defined by President Bush and the National Commission on Space, is to expand human presence into the solar system. Landing an American on Mars by the 50th anniversary of the Apollo 11 lunar landing is the goal. This challenge has produced a level of excitement among young Americans not seen for nearly three decades. The exploration and settlement of the space frontier will occupy the creative thoughts and energies of generations of Americans well into the next century. The return of Americans to the moon and beyond must be viewed as a national effort with strong public support if it is to become a reality. Key to making this an actuality is the mission approach selected. Developing a permanent presence in space requires a continual stepping outward from Earth in a logical progressive manner. If we seriously plan to go and to stay, then not only must we plan what we are to do and how we are to do it, we must address the logistic support infrastructure that will allow us to stay there once we arrive. A fully integrated approach to mission planning is needed if the Space Exploration Initiative (SEI) is to be successful. Only in this way can a permanent human presence in space be sustained. An integrated infrastructure approach would reduce the number of new systems and technologies requiring development. The resultant horizontal commonality of systems and hardware would reduce the direct economic impact of SEI while an early return on investment through technology spin-offs would be an economic benefit by greatly enhancing our international technical competitiveness. If the exploration, development, and colonization of space is to be affordable and acceptable, careful consideration must be given to such things as "return on investment" and "commercial product potential" of the technologies developed. 
This integrated approach will win the Congressional support needed to secure the financial backing necessary to assure

  17. Receptivity to Kinetic Fluctuations: A Multiple Scales Approach

    Science.gov (United States)

    Edwards, Luke; Tumin, Anatoli

    2017-11-01

    The receptivity of high-speed compressible boundary layers to kinetic fluctuations (KF) is considered within the framework of fluctuating hydrodynamics. The formulation is based on the idea that KF-induced dissipative fluxes may lead to the generation of unstable modes in the boundary layer. Fedorov and Tumin solved the receptivity problem using an asymptotic matching approach which utilized a resonant inner solution in the vicinity of the generation point of the second Mack mode. Here we take a slightly more general approach based on a multiple scales WKB ansatz which requires fewer assumptions about the behavior of the stability spectrum. The approach is modeled after the one taken by Luchini to study low speed incompressible boundary layers over a swept wing. The new framework is used to study examples of high-enthalpy, flat plate boundary layers whose spectra exhibit nuanced behavior near the generation point, such as first mode instabilities and near-neutral evolution over moderate length scales. The configurations considered exhibit supersonic unstable second Mack modes despite the temperature ratio Tw /Te > 1 , contrary to prior expectations. Supported by AFOSR and ONR.

  18. A Belief-Space Approach to Integrated Intelligence - Research Area 10.3: Intelligent Networks

    Science.gov (United States)

    2017-12-05

    Performing organization: Massachusetts Institute of Technology (MIT). Title: A Belief-Space Approach to Integrated Intelligence - Research Area 10.3: Intelligent Networks. During the reporting period, students presented progress and received feedback from the research group, and wrote papers on their research that were submitted to leading conferences.

  19. Determining the scale in lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Bornyakov, V.G. [Institute for High Energy Physics, Protvino (Russian Federation); Institute of Theoretical and Experimental Physics, Moscow (Russian Federation); Far Eastern Federal Univ., Vladivostok (Russian Federation). School of Biomedicine; Horsley, R. [Edinburgh Univ. (United Kingdom). School of Physics and Astronomy; Hudspith, R. [York Univ., Toronto, ON (Canada). Dept. of Physics and Astronomy; and others

    2015-12-15

    We discuss scale setting in the context of 2+1 dynamical fermion simulations where we approach the physical point in the quark mass plane keeping the average quark mass constant. We have simulations at four beta values, and after determining the paths and lattice spacings, we give an estimate of the phenomenological values of various Wilson flow scales.
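    A common Wilson-flow prescription sets the scale by solving t² ⟨E(t)⟩ = 0.3 for the flow time t₀. The bisection sketch below uses an invented smooth curve in place of simulated ⟨E(t)⟩ data, so the numbers are purely illustrative.

```python
def t2_E(t):
    # Monotone toy model of t^2 <E(t)>; real values come from the gradient flow.
    return 0.9 * t / (1.0 + t)

def find_t0(target=0.3, lo=0.0, hi=10.0, tol=1e-10):
    # Bisection: t2_E is increasing, and the root is bracketed by [lo, hi].
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if t2_E(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t0 = find_t0()  # the toy model solves analytically to t0 = 0.5
```

    On the lattice, t₀/a² determined this way at each beta, combined with the physical value of t₀, yields the lattice spacing a, which is the scale-setting step the abstract refers to.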

  20. Urban Multisensory Laboratory, AN Approach to Model Urban Space Human Perception

    Science.gov (United States)

    González, T.; Sol, D.; Saenz, J.; Clavijo, D.; García, H.

    2017-09-01

    An urban sensory lab (USL, or LUS by its Spanish acronym) is a new and avant-garde approach to studying and analyzing a city. The construction of this approach allows the development of new methodologies to identify the emotional response of public space users. The laboratory combines qualitative analysis proposed by urbanists with quantitative measures managed by data analysis applications. USL is a new approach that goes beyond the borders of urban knowledge. The design thinking strategy allows us to implement methods to understand the results provided by our technique. In this first approach, the interpretation is made by hand. However, our goal is to combine design thinking and machine learning in order to analyze the qualitative and quantitative data automatically. The results are currently being used by students of the Urbanism and Architecture courses to gain a better understanding of public spaces in Puebla, Mexico and their interaction with people.

  1. Quantum scaling in many-body systems an approach to quantum phase transitions

    CERN Document Server

    Continentino, Mucio

    2017-01-01

    Quantum phase transitions are strongly relevant in a number of fields, ranging from condensed matter to cold atom physics and quantum field theory. This book, now in its second edition, approaches the problem of quantum phase transitions from a new and unifying perspective. Topics addressed include the concepts of scale and time invariance and their significance for quantum criticality, as well as brand new chapters on superfluid and superconductor quantum critical points, and quantum first order transitions. The renormalisation group in real and momentum space is also established as the proper language to describe the behaviour of systems close to a quantum phase transition. These phenomena introduce a number of theoretical challenges which are of major importance for driving new experiments. Being strongly motivated and oriented towards understanding experimental results, this is an excellent text for graduates, as well as theorists, experimentalists and those with an interest in quantum criticality.

  2. Space Culture: Innovative Cultural Approaches To Public Engagement With Astronomy, Space Science And Astronautics

    Science.gov (United States)

    Malina, Roger F.

    2012-01-01

    In recent years a number of cultural organizations have established ongoing programs of public engagement with astronomy, space science and astronautics. Many involve elements of citizen science initiatives, artists’ residencies in scientific laboratories and agencies, art and science festivals, and social network projects as well as more traditional exhibition venues. Recognizing these programs several agencies and organizations have established mechanisms for facilitating public engagement with astronomy and space science through cultural activities. The International Astronautics Federation has established a Technical Activities Committee for the Cultural Utilization of Space. Over the past year the NSF and NEA have organized disciplinary workshops to develop recommendations relating to art-science interaction and community building efforts. Rationales for encouraging public engagement via cultural projects range from theory of creativity, innovation and invention to cultural appropriation in the context of `socially robust science’ as advocated by Helga Nowotny of the European Research Council. Public engagement with science, as opposed to science education and outreach initiatives, requires different approaches. Just as organizations have employed education professionals to lead education activities, so they must employ cultural professionals if they wish to develop public engagement projects via arts and culture. One outcome of the NSF and NEA workshops has been development of a rationale for converting STEM to STEAM by including the arts in STEM methodologies, particularly for K-12 where students can access science via arts and cultural contexts. Often these require new kinds of informal education approaches that exploit locative media, gaming platforms, artists projects and citizen science. Incorporating astronomy and space science content in art and cultural projects requires new skills in `cultural translation’ and `trans-mediation’ and new kinds

  3. ANALYSIS OF RADAR AND OPTICAL SPACE BORNE DATA FOR LARGE SCALE TOPOGRAPHICAL MAPPING

    Directory of Open Access Journals (Sweden)

    W. Tampubolon

    2015-03-01

    Full Text Available Normally, in order to provide high resolution 3 Dimensional (3D) geospatial data, large scale topographical mapping needs input from conventional airborne campaigns, which in Indonesia are bureaucratically complicated, especially during legal administration procedures, i.e. security clearance from the military/defense ministry. This often causes additional time delays, besides technical constraints such as weather and limited aircraft availability for airborne campaigns. Of course geospatial data quality is an important issue for many applications. The increasing demand for geospatial data nowadays consequently requires high resolution datasets as well as a sufficient level of accuracy. Therefore an integration of different technologies is required in many cases to gain the expected result, especially in the context of disaster preparedness and emergency response. Another important issue in this context is the fast delivery of relevant data, which is expressed by the term “Rapid Mapping”. In this paper we present first results of an on-going research effort to integrate different data sources such as space borne radar and optical platforms. Initially, the orthorectification of Very High Resolution Satellite (VHRS) imagery, i.e. SPOT-6, has been done as a continuous process to the DEM generation using TerraSAR-X/TanDEM-X data. The role of Ground Control Points (GCPs) from GNSS surveys is mandatory in order to fulfil geometrical accuracy requirements. In addition, this research aims at providing a suitable processing algorithm for space borne data for large scale topographical mapping, as described in section 3.2. Recently, radar space borne data has been used for medium scale topographical mapping, e.g. for the 1:50.000 map scale in Indonesian territories. The goal of this on-going research is to increase the accuracy of remote sensing data by different activities, e.g. the integration of different data sources (optical and radar) or the usage of the GCPs in both, the optical and the

  4. Global forward-predicting dynamic routing for traffic concurrency space stereo multi-layer scale-free network

    International Nuclear Information System (INIS)

    Xie Wei-Hao; Zhou Bin; Liu En-Xiao; Lu Wei-Dang; Zhou Ting

    2015-01-01

    Many real communication networks, such as oceanic monitoring networks and land environment observation networks, can be described as space stereo multi-layer structures, and the traffic in these networks is concurrent. It is necessary to understand how traffic dynamics depend on these real communication networks and to find an effective routing strategy that fits the circumstance of traffic concurrency and enhances network performance. In this light, we propose a traffic model for space stereo multi-layer complex networks and introduce two kinds of global forward-predicting dynamic routing strategies, the global forward-predicting hybrid minimum queue (HMQ) routing strategy and the global forward-predicting hybrid minimum degree and queue (HMDQ) routing strategy, for traffic concurrency space stereo multi-layer scale-free networks. By applying a forward-predicting strategy, the proposed routing strategies achieve better performance in traffic concurrency space stereo multi-layer scale-free networks. Compared with the efficient routing strategy and the global dynamic routing strategy, the HMDQ and HMQ routing strategies can optimize the traffic distribution, effectively reduce the number of congested packets and reach a much higher network capacity. (paper)
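
    The paper defines HMQ and HMDQ precisely; as a hedged sketch of the general idea only, a queue-aware next-hop rule can trade remaining hop count against neighbor queue length. The toy graph, the `queues` table and the weight `beta` below are illustrative assumptions, not the authors' definitions:

    ```python
    from collections import deque

    def hops_to(graph, dest):
        # BFS hop counts from every node to the destination.
        dist = {dest: 0}
        q = deque([dest])
        while q:
            u = q.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    def next_hop(graph, queues, current, dest, beta=0.5):
        # Queue-aware routing: penalize each neighbor by its current queue
        # length in addition to its remaining hop count to the destination.
        dist = hops_to(graph, dest)
        return min(graph[current],
                   key=lambda v: dist.get(v, float("inf")) + beta * queues[v])

    # Toy 4-node network: node 'b' is congested, so traffic from 'a' to 'd'
    # should be steered through 'c' even though both paths are 2 hops long.
    graph = {'a': ['b', 'c'], 'b': ['a', 'd'], 'c': ['a', 'd'], 'd': ['b', 'c']}
    queues = {'a': 0, 'b': 10, 'c': 0, 'd': 0}
    hop = next_hop(graph, queues, 'a', 'd')
    ```

    The same skeleton extends to an HMDQ-like rule by adding a degree term to the key function.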

  5. Space-time uncertainty and approaches to D-brane field theory

    International Nuclear Information System (INIS)

    Yoneya, Tamiaki

    2008-01-01

    In connection with the space-time uncertainty principle which gives a simple qualitative characterization of non-local or non-commutative nature of short-distance space-time structure in string theory, the author's recent approaches toward field theories for D-branes are briefly outlined, putting emphasis on some key ideas lying in the background. The final section of the present report is devoted partially to a tribute to Yukawa on the occasion of the centennial of his birth. (author)

  6. A scale space approach for unsupervised feature selection in mass spectra classification for ovarian cancer detection.

    Science.gov (United States)

    Ceccarelli, Michele; d'Acierno, Antonio; Facchiano, Angelo

    2009-10-15

    Mass spectrometry spectra, widely used in proteomics studies as a screening tool for protein profiling and for detecting discriminatory signals, are high dimensional data. A large number of local maxima (a.k.a. peaks) have to be analyzed as part of computational pipelines aimed at the realization of efficient predictive and screening protocols. With data of this dimensionality and sample size, the risk of over-fitting and selection bias is pervasive. Therefore the development of bioinformatics methods based on unsupervised feature extraction can lead to general tools applicable to several fields of predictive proteomics. We propose a method for feature selection and extraction grounded in the theory of multi-scale spaces for high resolution spectra derived from the analysis of serum. We then use support vector machines for classification. In particular, we use a database containing 216 sample spectra divided into 115 cancer and 91 control samples. The overall accuracy averaged over a large cross-validation study is 98.18%. The area under the ROC curve of the best selected model is 0.9962. We improve on previously known results for this problem on the same data, with the advantage that the proposed method has an unsupervised feature selection phase. All the developed code, as MATLAB scripts, can be downloaded from http://medeaserver.isa.cnr.it/dacierno/spectracode.htm.
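
    The authors' pipeline is released as MATLAB scripts; as an illustrative sketch of the underlying scale-space idea only, peaks that persist across a family of Gaussian smoothings can be kept as candidate features. The scale schedule `sigmas` and the drift tolerance are assumptions for illustration, not the paper's parameters:

    ```python
    import numpy as np

    def gaussian_smooth(signal, sigma):
        # Convolve with a truncated, normalized Gaussian kernel.
        radius = int(4 * sigma) + 1
        x = np.arange(-radius, radius + 1)
        kernel = np.exp(-x**2 / (2.0 * sigma**2))
        kernel /= kernel.sum()
        return np.convolve(signal, kernel, mode="same")

    def persistent_peaks(spectrum, sigmas=(1, 2, 4, 8), drift=2):
        # Keep only local maxima that survive at every smoothing scale,
        # allowing the peak position to move by ~drift*sigma between scales.
        survivors = None
        for s in sigmas:
            sm = gaussian_smooth(spectrum, s)
            peaks = {i for i in range(1, len(sm) - 1) if sm[i-1] < sm[i] > sm[i+1]}
            if survivors is None:
                survivors = peaks
            else:
                survivors = {p for p in survivors
                             if any(abs(p - q) <= drift * s for q in peaks)}
        return sorted(survivors)

    # Synthetic two-peak "spectrum": the persistent peaks sit near bins 100 and 300.
    m = np.arange(400)
    spec = np.exp(-(m - 100)**2 / 50.0) + 0.8 * np.exp(-(m - 300)**2 / 50.0)
    found = persistent_peaks(spec)
    ```

    The surviving peak positions would then feed a downstream classifier such as an SVM.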

  7. An integrated mission approach to the space exploration initiative will ensure success

    International Nuclear Information System (INIS)

    Coomes, E.P.; Dagle, J.E.; Bamberger, J.A.; Noffsinger, K.E.

    1991-01-01

    The direction of the American space program, as defined by President Bush and the National Commission on Space, is to expand human presence into the solar system. Landing an American on Mars by the 50th anniversary of the Apollo 11 lunar landing is the goal. This challenge has produced a level of excitement among young Americans not seen for nearly three decades. The exploration and settlement of the space frontier will occupy the creative thoughts and energies of generations of Americans well into the next century. The return of Americans to the moon and beyond must be viewed as a national effort with strong public support if it is to become a reality. Key to making this an actuality is the mission approach selected. Developing a permanent presence in space requires a continual stepping outward from Earth in a logical, progressive manner. If we seriously plan to go and to stay, then not only must we plan what we are to do and how we are to do it, we must address the logistic support infrastructure that will allow us to stay there once we arrive. A fully integrated approach to mission planning is needed if the Space Exploration Initiative (SEI) is to be successful. Only in this way can a permanent human presence in space be sustained. An integrated infrastructure approach would reduce the number of new systems and technologies requiring development. The resultant horizontal commonality of systems and hardware would reduce the direct economic impact of SEI, while an early return on investment through technology spin-offs would be an economic benefit by greatly enhancing our international technical competitiveness. If the exploration, development, and colonization of space is to be affordable and acceptable, careful consideration must be given to such things as "return on investment" and "commercial product potential" of the technologies developed.

  8. Approach to an Affordable and Sustainable Space Transportation System

    Science.gov (United States)

    McCleskey, Carey M.; Rhodes, R. E.; Robinson, J. W.; Henderson, E. M.

    2012-01-01

    This paper describes an approach and a general procedure for creating space transportation architectural concepts that are at once affordable and sustainable. Previous papers by the authors and other members of the Space Propulsion Synergy Team (SPST) focused on a functional system breakdown structure for an architecture and on the definition of high-payoff design techniques with a technology integration strategy. This paper follows up by using a structured process that derives architectural solutions focused on achieving life cycle affordability and sustainability. Further, the paper includes an example concept that integrates key design techniques discussed in previous papers.

  9. State space approach to mixed boundary value problems.

    Science.gov (United States)

    Chen, C. F.; Chen, M. M.

    1973-01-01

    A state-space procedure for the formulation and solution of mixed boundary value problems is established. This procedure is a natural extension of the method used in initial value problems; however, certain special theorems and rules must be developed. The scope of the applications of the approach includes beam, arch, and axisymmetric shell problems in structural analysis, boundary layer problems in fluid mechanics, and eigenvalue problems for deformable bodies. Many classical methods in these fields developed by Holzer, Prohl, Myklestad, Thomson, Love-Meissner, and others can be either simplified or unified in the new light shed by the state-variable approach. A beam problem is included as an illustration.
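
    As an illustrative sketch of the state-space idea for a mixed boundary value problem (not the authors' formulation), a simply supported Euler-Bernoulli beam EI w'''' = q can be written as a first-order system, integrated as if it were an initial value problem, and the unknown initial slopes recovered by superposition. The values of EI, L and q are illustrative:

    ```python
    import numpy as np

    EI, L, q = 1.0, 1.0, 1.0  # illustrative values

    def f(x, y):
        # State y = [w, w', w'', w''']; Euler-Bernoulli beam EI * w'''' = q.
        return np.array([y[1], y[2], y[3], q / EI])

    def integrate(y0, x_end, n=200):
        # Plain fourth-order Runge-Kutta march from x = 0 to x_end.
        h, x, y = x_end / n, 0.0, np.array(y0, dtype=float)
        for _ in range(n):
            k1 = f(x, y)
            k2 = f(x + h/2, y + h/2 * k1)
            k3 = f(x + h/2, y + h/2 * k2)
            k4 = f(x + h, y + h * k3)
            y = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
            x += h
        return y

    # Simply supported ends: w(0) = w''(0) = 0 are known; w'(0) = a and
    # w'''(0) = b are unknown. The system is affine in y0, so superpose.
    yp = integrate([0, 0, 0, 0], L)           # particular response
    y1 = integrate([0, 1, 0, 0], L) - yp      # effect of a unit a
    y2 = integrate([0, 0, 0, 1], L) - yp      # effect of a unit b
    # Enforce the far-end conditions w(L) = w''(L) = 0.
    M = np.array([[y1[0], y2[0]], [y1[2], y2[2]]])
    a, b = np.linalg.solve(M, -np.array([yp[0], yp[2]]))
    w_mid = integrate([0, a, 0, b], L / 2)[0]  # midspan deflection
    # Classical result for comparison: w_mid = 5*q*L**4 / (384*EI)
    ```

    The superposition step works because the governing ODE is linear; nonlinear problems would need shooting or iteration instead.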

  10. Approach to transaction management for Space Station Freedom

    Science.gov (United States)

    Easton, C. R.; Cressy, Phil; Ohnesorge, T. E.; Hector, Garland

    1990-01-01

    The Space Station Freedom Manned Base (SSFMB) will support the operation of the many payloads that may be located within the pressurized modules or on external attachment points. The transaction management (TM) approach presented provides a set of overlapping features that will assure the effective and safe operation of the SSFMB and provide a schedule that makes potentially hazardous operations safe, allocates resources within the capability of the resource providers, and maintains an environment conducive to the operations planned. This approach provides for targets of opportunity and schedule adjustments that give the operators the flexibility to conduct a vast majority of their operations with no conscious involvement with the TM function.

  11. Multi-scale approach to the mechanical behavior of SiC/SiC composites: the concept of mini-composite

    International Nuclear Information System (INIS)

    Lamon, J.

    2007-01-01

    Full text of publication follows: The concept of composite materials is very powerful, since one can tailor the properties with respect to end use applications, through a sound combination of constituents, including fibre, matrix and inter-phases. Ceramic matrix composites (CMCs) are at the forefront of advanced materials technology because of their light weight, high strength and toughness, high temperature capabilities and graceful failure under loading. This key behaviour is achieved by proper design of the fiber/matrix interface which helps in arresting and deflecting the cracks formed in the brittle matrix under load and preventing the early failure of the fiber arrangement. Ceramic matrix composites are considered as enabling technology for advanced aero-propulsion, space power, aerospace vehicles, space structures, ground transportation, as well as nuclear and chemical industries. During the last 30 years, tremendous progress has been made in the development of CMCs. Much research work has been conducted by LCTS on those SiC/SiC composites made via Chemical Vapor Infiltration. A multi-scale approach to mechanical behaviour has been developed. This multi-scale approach is aimed at relating the mechanical behaviour at macroscopic scale to constituent properties. It involves experiments and modelling. It allows chemical effects to be introduced in the models of mechanical behaviour. The present paper discusses the main features of the mechanical behaviour of textile SiC/SiC composites. These features are related to composite microstructure, properties of constituents (fibers, matrix and interphase) and fiber arrangement. Relationships between properties at different scales are established. Then the mini-composite concept is addressed. This concept is very powerful for composite design and investigation. Mini-composites consist of unidirectional composites reinforced by multi-filament tows. Mini-composites represent the mesoscale of textile composites. In

  12. Comparing Laser Interferometry and Atom Interferometry Approaches to Space-Based Gravitational-Wave Measurement

    Science.gov (United States)

    Baker, John; Thorpe, Ira

    2012-01-01

    Thoroughly studied classic space-based gravitational-wave mission concepts such as the Laser Interferometer Space Antenna (LISA) are based on laser-interferometry techniques. Ongoing developments in atom-interferometry techniques have spurred recently proposed alternative mission concepts. These different approaches can be understood on a common footing. We present a comparative analysis of how each type of instrument responds to some of the noise sources that may limit gravitational-wave mission concepts. Sensitivity to laser frequency instability is essentially the same for either approach. Spacecraft acceleration reference stability sensitivities are different, allowing smaller spacecraft separations in the atom interferometry approach, but acceleration noise requirements are nonetheless similar. Each approach has distinct additional measurement noise issues.

  13. Large Scale System Safety Integration for Human Rated Space Vehicles

    Science.gov (United States)

    Massie, Michael J.

    2005-12-01

    Since the 1960s man has searched for ways to establish a human presence in space. Unfortunately, the development and operation of human spaceflight vehicles carry significant safety risks that are not always well understood. As a result, the countries with human space programs have felt the pain of loss of lives in the attempt to develop human space travel systems. Integrated System Safety is a process developed through years of experience (since before Apollo and Soyuz) as a way to assess the risks involved in space travel and prevent such losses. The intent of Integrated System Safety is to look at an entire program and put together all the pieces in such a way that the risks can be identified, understood and dispositioned by program management. This process has many inherent challenges, and they need to be explored, understood and addressed. In order to prepare a truly integrated analysis, safety professionals must gain a level of technical understanding of all of the project's pieces and how they interact. Next, they must find a way to present the analysis so the customer can understand the risks and make decisions about managing them. However, every organization in a large-scale project can have different ideas about what is or is not a hazard, what is or is not an appropriate hazard control, and what is or is not adequate hazard control verification. NASA provides some direction on these topics, but interpretations of those instructions can vary widely. Even more challenging is the fact that every individual/organization involved in a project has a different level of risk tolerance. When the discrete hazard controls of the contracts and agreements cannot be met, additional risk must be accepted. However, when one has left the arena of compliance with the known rules, there can no longer be specific ground rules on which to base a decision as to what is acceptable and what is not. The integrator must find common ground between all parties to achieve

  14. Biased Tracers in Redshift Space in the EFT of Large-Scale Structure

    Energy Technology Data Exchange (ETDEWEB)

    Perko, Ashley [Stanford U., Phys. Dept.; Senatore, Leonardo [KIPAC, Menlo Park; Jennings, Elise [Chicago U., KICP; Wechsler, Risa H. [Stanford U., Phys. Dept.

    2016-10-28

    The Effective Field Theory of Large-Scale Structure (EFTofLSS) provides a novel formalism that is able to accurately predict the clustering of large-scale structure (LSS) in the mildly non-linear regime. Here we provide the first computation of the power spectrum of biased tracers in redshift space at one loop order, and we make the associated code publicly available. We compare the multipoles $\ell=0,2$ of the redshift-space halo power spectrum, together with the real-space matter and halo power spectra, with data from numerical simulations at $z=0.67$. For the samples we compare to, which have a number density of $\bar n=3.8 \cdot 10^{-2}\,(h\,{\rm Mpc}^{-1})^3$ and $\bar n=3.9 \cdot 10^{-4}\,(h\,{\rm Mpc}^{-1})^3$, we find that the calculation at one-loop order matches numerical measurements to within a few percent up to $k\simeq 0.43\ h\,{\rm Mpc}^{-1}$, a significant improvement with respect to former techniques. By performing the so-called IR-resummation, we find that the Baryon Acoustic Oscillation peak is accurately reproduced. Based on the results presented here, long-wavelength statistics that are routinely observed in LSS surveys can finally be computed in the EFTofLSS. This formalism thus is ready to start being compared directly to observational data.

  15. Stage I surface crack formation in thermal fatigue: A predictive multi-scale approach

    International Nuclear Information System (INIS)

    Osterstock, S.; Robertson, C.; Sauzay, M.; Aubin, V.; Degallaix, S.

    2010-01-01

    A multi-scale numerical model is developed to predict the formation of stage I cracks under thermal fatigue loading conditions. The proposed approach comprises two distinct calculation steps. Firstly, the number of cycles to micro-crack initiation is determined in individual grains. The adopted initiation model depends on local stress-strain conditions, relative to sub-grain plasticity, grain orientation and grain deformation incompatibilities. Secondly, the formation of surface cracks 2-4 grains long (stage I) is predicted by accounting for micro-crack coalescence in 3 dimensions. The method described in this paper is applied to a 500-grain aggregate loaded in representative thermal fatigue conditions. Preliminary results provide quantitative insight regarding the position, density, spacing and orientation of stage I surface cracks and the subsequent formation of crack networks. The proposed method is fully deterministic, provided all grain crystallographic orientations and micro-crack linking thresholds are specified. (authors)

  16. Millimeterwave Space Power Grid architecture development 2012

    Science.gov (United States)

    Komerath, Narayanan; Dessanti, Brendan; Shah, Shaan

    This is an update of the Space Power Grid architecture for space-based solar power, with an improved design of the collector/converter link, the primary heater and the radiator of the active thermal control system. The Space Power Grid offers an evolutionary approach towards terawatt-level space-based solar power. The use of millimeter wave frequencies (around 220 GHz) and low-mid Earth orbits shrinks the size of the space and ground infrastructure to manageable levels. In prior work we showed that using Brayton cycle conversion of solar power allows large economies of scale compared to the linear mass-power relationship of photovoltaic conversion. With high-temperature materials permitting 3600 K temperature in the primary heater, over 80 percent cycle efficiency was shown with a closed helium cycle for the 1 GW converter satellite which formed the core element of the architecture. Work done since the last IEEE conference has shown that the use of waveguides incorporated into lighter-than-air antenna platforms can overcome the difficulties in transmitting millimeter wave power through the moist, dense lower atmosphere. A graphene-based radiator design conservatively meets the mass budget for the waste heat rejection system needed for the compressor inlet temperature. Placing the ultralight Mirasol collectors in lower orbits overcomes the solar beam spot size problem of high-orbit collection. The architecture begins by establishing a power exchange with terrestrial renewable energy plants, creating an early revenue generation approach with low investment. The approach allows for technology development and demonstration of high power millimeter wave technology. A multinational experiment using the International Space Station and another power exchange satellite is proposed to gather required data and experience, thus reducing the technical and policy risks. The full-scale architecture deploys pairs of Mirasol sunlight collectors and Girasol 1 GW converter satellites t

  17. Data fusion of multi-scale representations for structural damage detection

    Science.gov (United States)

    Guo, Tian; Xu, Zili

    2018-01-01

    Despite extensive research into structural health monitoring (SHM) in the past decades, there are few methods that can detect multiple instances of slight damage in noisy environments. Here, we introduce a new hybrid method that utilizes multi-scale space theory and a data fusion approach for multiple damage detection in beams and plates. A cascade filtering approach provides a multi-scale space for noisy mode shapes and filters out the fluctuations caused by measurement noise. In multi-scale space, a series of amplification and data fusion algorithms are utilized to search for the damage features across all possible scales. We verify the effectiveness of the method by numerical simulation using damaged beams and plates with various types of boundary conditions. Monte Carlo simulations are conducted to illustrate the effectiveness and noise immunity of the proposed method. The applicability is further validated via laboratory case studies focusing on different damage scenarios. Both results demonstrate that the proposed method has a superior noise tolerance, as well as damage sensitivity, without knowledge of material properties or boundary conditions.

  18. An optimal beam alignment method for large-scale distributed space surveillance radar system

    Science.gov (United States)

    Huang, Jian; Wang, Dongya; Xia, Shuangzhi

    2018-06-01

    Large-scale distributed space surveillance radar is very important ground-based equipment for maintaining a complete catalogue of Low Earth Orbit (LEO) space debris. However, because the sites of the distributed radar system are separated by thousands of kilometers, optimally aligning the Transmitting/Receiving (T/R) beams across a large volume of space using narrow beams poses a special and considerable technical challenge in the space surveillance area. Based on the common coordinate transformation model and the radar beam space model, we present a two-dimensional projection algorithm for the T/R beams using the direction angles, which can visually describe and assess the beam alignment performance. Subsequently, the optimal mathematical models for the orientation angle of the antenna array, the site location and the T/R beam coverage are constructed, and the beam alignment parameters are precisely solved. Finally, we conducted optimal beam alignment experiments based on the site parameters of the Air Force Space Surveillance System (AFSSS). The simulation results demonstrate the correctness and effectiveness of our novel method, which can significantly support the construction of LEO space debris surveillance equipment.

  19. The Multi-Scale Model Approach to Thermohydrology at Yucca Mountain

    International Nuclear Information System (INIS)

    Glascoe, L; Buscheck, T A; Gansemer, J; Sun, Y

    2002-01-01

    The Multi-Scale Thermo-Hydrologic (MSTH) process model is a modeling abstraction of the thermal hydrology (TH) of the potential Yucca Mountain repository at multiple spatial scales. The MSTH model as described herein was used for the Supplemental Science and Performance Analyses (BSC, 2001) and is documented in detail in CRWMS M and O (2000) and Glascoe et al. (2002). The model has been validated against a nested grid model in Buscheck et al. (In Review). The MSTH approach is necessary for modeling thermal hydrology at Yucca Mountain for two reasons: (1) varying levels of detail are necessary at different spatial scales to capture important TH processes, and (2) a fully-coupled TH model of the repository which includes the necessary spatial detail is computationally prohibitive. The MSTH model consists of six "submodels" which are combined in a manner that reduces the complexity of modeling where appropriate. The coupling of these models allows for appropriate consideration of mountain-scale thermal hydrology along with the thermal hydrology of drift-scale discrete waste packages of varying heat load. Two stages are involved in the MSTH approach: first, the execution of submodels, and second, the assembly of submodels using the Multi-scale Thermohydrology Abstraction Code (MSTHAC). MSTHAC assembles the submodels in a five-step process culminating in the TH model output of discrete waste packages, including a mountain-scale influence.

  20. Pre-Big Bang, space-time structure, asymptotic Universe. Spinorial space-time and a new approach to Friedmann-like equations

    Science.gov (United States)

    Gonzalez-Mestres, Luis

    2014-04-01

    Planck and other recent data in Cosmology and Particle Physics can open the way to controversial analyses concerning the early Universe and its possible ultimate origin. Alternatives to standard cosmology include pre-Big Bang approaches, new space-time geometries and new ultimate constituents of matter. Basic issues related to a possible new cosmology along these lines clearly deserve further exploration. The Planck collaboration reports an age of the Universe t close to 13.8 Gyr and a present ratio H between relative speeds and distances at cosmic scale of around 67.3 km/s/Mpc. The product of these two measured quantities is then slightly below 1 (about 0.95), while it can be exactly 1 in the absence of matter and cosmological constant in patterns based on the spinorial space-time we have considered in previous papers. In this description of space-time, which we first suggested in 1996-97, the cosmic time t is given by the modulus of a SU(2) spinor, and the Lundmark-Lemaître-Hubble (LLH) expansion law turns out to be of purely geometric origin, previous to any introduction of standard matter and relativity. Such a fundamental geometry, inspired by the role of half-integer spin in Particle Physics, may reflect an equilibrium between the dynamics of the ultimate constituents of matter and the deep structure of space and time. Taking into account the observed cosmic acceleration, the present situation suggests that the value of 1 can be a natural asymptotic limit for the product H t in the long-term evolution of our Universe, up to possible small corrections. In the presence of a spinorial space-time geometry, no ad hoc combination of dark matter and dark energy would in any case be needed to get an acceptable value of H and an evolution of the Universe compatible with observation. The use of a spinorial space-time naturally leads to unconventional properties for the space curvature term in Friedmann-like equations. It therefore suggests a major modification of the standard
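
    The quoted product is easy to verify with standard unit conversions (the constants below are textbook conversion factors, not values taken from the paper): for H = 67.3 km/s/Mpc and t = 13.8 Gyr, H t indeed comes out just below 1.

    ```python
    # Back-of-envelope check of the quoted dimensionless product H*t.
    KM_PER_MPC = 3.0857e19   # kilometres in one megaparsec
    SEC_PER_GYR = 3.156e16   # seconds in one gigayear

    H = 67.3 / KM_PER_MPC    # Hubble constant, converted to 1/s
    t = 13.8 * SEC_PER_GYR   # age of the Universe, in seconds

    product = H * t          # the abstract quotes about 0.95
    ```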

  1. Climatic and physiographic controls on catchment-scale nitrate loss at different spatial scales: insights from a top-down model development approach

    Science.gov (United States)

    Shafii, Mahyar; Basu, Nandita; Schiff, Sherry; Van Cappellen, Philippe

    2017-04-01

    Dramatic increase in nitrogen circulating in the biosphere due to anthropogenic activities has resulted in impairment of water quality in groundwater and surface water causing eutrophication in coastal regions. Understanding the fate and transport of nitrogen from landscape to coastal areas requires exploring the drivers of nitrogen processes in both time and space, as well as the identification of appropriate flow pathways. Conceptual models can be used as diagnostic tools to provide insights into such controls. However, diagnostic evaluation of coupled hydrological-biogeochemical models is challenging. This research proposes a top-down methodology utilizing hydrochemical signatures to develop conceptual models for simulating the integrated streamflow and nitrate responses while taking into account dominant controls on nitrate variability (e.g., climate, soil water content, etc.). Our main objective is to seek appropriate model complexity that sufficiently reproduces multiple hydrological and nitrate signatures. Having developed a suitable conceptual model for a given watershed, we employ it in sensitivity studies to demonstrate the dominant process controls that contribute to the nitrate response at scales of interest. We apply the proposed approach to nitrate simulation in a range of small to large sub-watersheds in the Grand River Watershed (GRW) located in Ontario. Such multi-basin modeling experiment will enable us to address process scaling and investigate the consequences of lumping processes in terms of models' predictive capability. The proposed methodology can be applied to the development of large-scale models that can help decision-making associated with nutrients management at regional scale.

  2. A state space approach for the eigenvalue problem of marine risers

    KAUST Repository

    Alfosail, Feras

    2017-10-05

    A numerical state-space approach is proposed to examine the natural frequencies and critical buckling limits of marine risers. A large axial tension in the riser model causes numerical limitations. These limitations are overcome by using the modified Gram–Schmidt orthonormalization process as an intermediate step during the numerical integration process with the fourth-order Runge–Kutta scheme. The results are validated against those obtained with other numerical methods, such as the finite-element, Galerkin, and power-series methods, and are found to be in good agreement. The state-space approach is shown to be computationally more efficient than the other methods. Also, we investigate the effect of a high applied tension, a high apparent weight, and higher-order modes on the accuracy of the numerical scheme. We demonstrate that, by applying the orthonormalization process, the stability and convergence of the approach are significantly improved.
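
    The abstract's key numerical device, re-orthonormalizing the basis solutions mid-integration, can be sketched generically. The 2x2 toy matrix and the re-orthonormalization interval below are illustrative assumptions, not the riser model: they only reproduce the situation where one rapidly growing solution would otherwise swamp the basis.

    ```python
    import numpy as np

    def modified_gram_schmidt(V):
        # Orthonormalize the columns of V, projecting each new column
        # against the already-orthonormalized ones (the modified variant).
        V = V.astype(float).copy()
        for j in range(V.shape[1]):
            for i in range(j):
                V[:, j] -= (V[:, i] @ V[:, j]) * V[:, i]
            V[:, j] /= np.linalg.norm(V[:, j])
        return V

    # Toy linear system y' = A y with widely separated growth rates, the kind
    # of stiffness a large axial tension creates in the riser equations.
    A = np.array([[0.0, 1.0],
                  [100.0, 0.0]])

    def rk4_step(Y, h):
        # One fourth-order Runge-Kutta step applied to all basis columns.
        k1 = A @ Y
        k2 = A @ (Y + h/2 * k1)
        k3 = A @ (Y + h/2 * k2)
        k4 = A @ (Y + h * k3)
        return Y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

    Y = np.eye(2)  # two independent basis solutions, marched together
    for step in range(400):
        Y = rk4_step(Y, 0.01)
        if step % 20 == 19:
            Y = modified_gram_schmidt(Y)  # keep the basis numerically independent
    ```

    Without the periodic orthonormalization the two columns become numerically parallel; with it they remain an orthonormal basis throughout the march.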

  3. A Novel Spatial-Temporal Voronoi Diagram-Based Heuristic Approach for Large-Scale Vehicle Routing Optimization with Time Constraints

    Directory of Open Access Journals (Sweden)

    Wei Tu

    2015-10-01

    Full Text Available Vehicle routing optimization (VRO) designs the best routes to reduce travel cost, energy consumption, and carbon emissions. Due to non-deterministic polynomial-time hard (NP-hard) complexity, many VROs involved in real-world applications require too much computing effort. Shortening computing time for VRO is a great challenge for state-of-the-art spatial optimization algorithms. From a spatial-temporal perspective, this paper presents a spatial-temporal Voronoi diagram-based heuristic approach for large-scale vehicle routing problems with time windows (VRPTW). Considering time constraints, a spatial-temporal Voronoi distance is derived from the spatial-temporal Voronoi diagram to find near neighbors in the space-time searching context. A Voronoi distance decay strategy that integrates a time warp operation is proposed to accelerate local search procedures. A spatial-temporal feature-guided search is developed to improve unpromising micro route structures. Experiments on VRPTW benchmarks and real-world instances are conducted to verify performance. The results demonstrate that the proposed approach is competitive with state-of-the-art heuristics and achieves high-quality solutions for large-scale instances of VRPTW in a short time. This novel approach will contribute to the spatial decision support community by providing an effective vehicle routing optimization method for large transportation applications in both the public and private sectors.
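
    The exact spatial-temporal Voronoi distance is defined in the paper; as a hedged sketch of the idea only, a time-window "warp" penalty can be added to the Euclidean distance so that a spatially close but temporally incompatible customer stops looking like a near neighbor. The field layout `(x, y, ready, due)`, the unit `speed` and the weight `alpha` are illustrative assumptions:

    ```python
    import math

    def st_distance(a, b, speed=1.0, alpha=1.0):
        # a, b: customers as (x, y, ready_time, due_time) tuples (assumed layout).
        d = math.hypot(b[0] - a[0], b[1] - a[1])
        arrival = a[2] + d / speed    # depart a at its ready time (simplification)
        if arrival < b[2]:
            warp = b[2] - arrival     # too early: wait until b's window opens
        elif arrival > b[3]:
            warp = arrival - b[3]     # too late: time-window violation
        else:
            warp = 0.0
        return d + alpha * warp

    depot = (0.0, 0.0, 0.0, 100.0)
    near_but_late = (3.0, 4.0, 60.0, 70.0)   # spatially close, temporally far
    far_but_open = (6.0, 8.0, 0.0, 100.0)    # twice the distance, window open
    ```

    Under this measure `far_but_open` is the nearer neighbor of the depot, which is the behavior a space-time Voronoi partition is meant to capture.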

  4. SparseLeap: Efficient Empty Space Skipping for Large-Scale Volume Rendering

    KAUST Repository

    Hadwiger, Markus; Al-Awami, Ali K.; Beyer, Johanna; Agus, Marco; Pfister, Hanspeter

    2017-01-01

    Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activates a different set of objects, which makes it infeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.
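
    The per-pixel ray segment list can be illustrated with a small CPU-side sketch (the names are hypothetical; in the actual method the first stage is GPU rasterization of bounding boxes): overlapping [t_enter, t_exit) intervals of non-empty boxes along one ray are merged into disjoint segments, and the ray caster then samples only inside those segments, leaping over the gaps without any hierarchy traversal.

```python
def build_ray_segments(boxes):
    """Merge the [t_enter, t_exit) intervals of non-empty bounding boxes
    hit by one ray into a sorted list of disjoint segments (the
    per-pixel ray segment list)."""
    segs = []
    for lo, hi in sorted(boxes):
        if segs and lo <= segs[-1][1]:
            segs[-1][1] = max(segs[-1][1], hi)  # extend current segment
        else:
            segs.append([lo, hi])
    return segs

def march(segs, step):
    """Sample positions only inside non-empty segments; the gaps between
    segments are leapt over with no per-sample emptiness test."""
    samples = []
    for lo, hi in segs:
        t = lo
        while t < hi:
            samples.append(round(t, 9))
            t += step
    return samples
```

The expensive part (deciding where the empty space is) happens once per frame when the segments are built, not once per sample.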

  5. SparseLeap: Efficient Empty Space Skipping for Large-Scale Volume Rendering

    KAUST Repository

    Hadwiger, Markus

    2017-08-28

    Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activates a different set of objects, which makes it infeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.

  6. European Space Science Scales New Heights

    Science.gov (United States)

    1995-06-01

    about two years' budget and medium-size projects accounting for one year's budget. It is on the basis of the Horizon 2000 programme that Europe has: launched the Giotto probe, which successfully encountered Comets Halley (1986) and Grigg-Skjellerup (1992); developed the Hipparcos satellite, whose catalogue of 120 000 stars will be published in late 1996; built the Ulysses probe, which has been exploring the third dimension of the solar system since 1992; and contributed at a rate of 20% to the Hubble Space Telescope programme. It is thanks to Horizon 2000 that Europe is now preparing to launch ISO, SOHO and Cluster. It is on the basis of the same long-term plan that Europe will build: Huygens, the probe to be launched in 1997, in co-operation with the United States, to explore Titan, Saturn's organic-rich moon; XMM, the X-ray telescope scheduled for a launch in 1999; Integral, the gamma-ray observatory due to be launched in 2001 in co-operation with Russia; Rosetta, the probe which is to land on Comet Wirtanen in 2012; and FIRST, the submillimetre telescope planned to be in orbit in 2006. After a long and fruitful apprenticeship, European space science therefore now looks set to come into its own. It currently ranks an honourable second in the world and regularly leads the way in certain specific areas of exploration. Thus Europe is now at the forefront of cometary exploration, fundamental astronomy or "astrometry", solar physics and the physics of interplanetary plasma. So it should also be able to take the lead in infrared astronomy, high-energy astronomy and planetary exploration while continuing to conduct cometary studies with Rosetta. One remarkable fact is that the approach and success of Horizon 2000 have attracted unanimous praise both in and beyond Europe. The programme is being supported by virtually all Europe's scientists. It is drawing on and inspiring increasing numbers of scientists, including many of the younger generation.
Its content and management have

  7. Requirements and approach for a space tourism launch system

    Science.gov (United States)

    Penn, Jay P.; Lindley, Charles A.

    2003-01-01

    Market surveys suggest that a viable space tourism industry will require flight rates about two orders of magnitude higher than those required for conventional spacelift. Although enabling round-trip cost goals for a viable space tourism business are about $240/pound ($529/kg), or $72,000 per passenger round-trip, goals should be about $50/pound ($110/kg), or approximately $15,000 for a typical passenger and baggage. The lower price will probably open space tourism to the general population. Vehicle reliabilities must approach those of commercial aircraft as closely as possible. This paper addresses the development of spaceplanes optimized for the ultra-high flight rate and high reliability demands of the space tourism mission. It addresses the fundamental operability, reliability, and cost drivers needed to satisfy this mission need. Figures of merit similar to those used to evaluate the economic viability of conventional commercial aircraft are developed, including items such as payload/vehicle dry weight, turnaround time, propellant cost per passenger, and insurance and depreciation costs, which show that infrastructure can be developed for a viable space tourism industry. A reference spaceplane design optimized for space tourism is described. Subsystem allocations for reliability, operability, and costs are made, and a route to developing such a capability is discussed. The vehicle's ability to satisfy the traditional spacelift market is also shown.

  8. Polygonal approximation and scale-space analysis of closed digital curves

    CERN Document Server

    Ray, Kumar S

    2013-01-01

    This book covers the most important topics in the area of pattern recognition, object recognition, computer vision, robot vision, medical computing, computational geometry, and bioinformatics systems. Students and researchers will find a comprehensive treatment of polygonal approximation and its real-life applications. The book not only explains the theoretical aspects but also presents applications with detailed design parameters. The systematic development of the concept of polygonal approximation of digital curves and its scale-space analysis is useful and attractive to scholars in many fields.

  9. Scaling laws for trace impurity confinement: a variational approach

    International Nuclear Information System (INIS)

    Thyagaraja, A.; Haas, F.A.

    1990-01-01

    A variational approach is outlined for the deduction of impurity confinement scaling laws. Given the forms of the diffusive and convective components to the impurity particle flux, we present a variational principle for the impurity confinement time in terms of the diffusion time scale and the convection parameter, which is a non-dimensional measure of the size of the convective flux relative to the diffusive flux. These results are very general and apply irrespective of whether the transport fluxes are of theoretical or empirical origin. The impurity confinement time scales exponentially with the convection parameter in cases of practical interest. (orig.)

  10. Scale symmetry and virial theorem

    International Nuclear Information System (INIS)

    Westenholz, C. von

    1978-01-01

    Scale symmetry (or dilatation invariance) is discussed via Noether's theorem, expressed through a symmetry group action on phase space endowed with a symplectic structure. The conventional conceptual approach, which expresses invariance of some Hamiltonian under scale transformations, is re-expressed in an alternate form via infinitesimal automorphisms of the given symplectic structure. That is, the vector field representing scale transformations leaves the symplectic structure invariant. In this model, the conserved quantity, or constant of motion, related to scale symmetry is the virial. It is shown that the conventional virial theorem can be derived within this framework.

  11. A phase space approach to wave propagation with dispersion.

    Science.gov (United States)

    Ben-Benjamin, Jonathan S; Cohen, Leon; Loughlin, Patrick J

    2015-08-01

    A phase space approximation method for linear dispersive wave propagation with arbitrary initial conditions is developed. The results expand on a previous approximation in terms of the Wigner distribution of a single mode. In contrast to this previously considered single-mode case, the approximation presented here is for the full wave and is obtained by a different approach. This solution requires one to obtain (i) the initial modal functions from the given initial wave, and (ii) the initial cross-Wigner distribution between different modal functions. The full wave is the sum of modal functions. The approximation is obtained for general linear wave equations by transforming the equations to phase space, and then solving in the new domain. It is shown that each modal function of the wave satisfies a Schrödinger-type equation where the equivalent "Hamiltonian" operator is the dispersion relation corresponding to the mode and where the wavenumber is replaced by the wavenumber operator. Application to the beam equation is considered to illustrate the approach.
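
    The modal evolution law described above can be written compactly. The notation below is ours, a hedged paraphrase of the abstract, with Ω_n denoting the dispersion relation of mode n:

```latex
% Each modal function u_n evolves under a Schrödinger-type equation whose
% "Hamiltonian" is the mode's dispersion relation evaluated at the
% wavenumber operator; the full wave is the sum of the modes.
i\,\frac{\partial u_n}{\partial t}(x,t) = \Omega_n(\mathcal{K})\,u_n(x,t),
\qquad
\mathcal{K} = -i\,\frac{\partial}{\partial x},
\qquad
u(x,t) = \sum_n u_n(x,t).
```

For the non-dispersive special case Ω_n(k) = c k this reduces to a one-way advection equation, which is a quick sanity check on the operator replacement.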

  12. Facing the scaling problem: A multi-methodical approach to simulate soil erosion at hillslope and catchment scale

    Science.gov (United States)

    Schmengler, A. C.; Vlek, P. L. G.

    2012-04-01

    Modelling soil erosion requires a holistic understanding of the sediment dynamics in a complex environment. As most erosion models are scale-dependent and their parameterization is spatially limited, their application often requires special care, particularly in data-scarce environments. This study presents a hierarchical approach to overcome the limitations of a single model by using various quantitative methods and soil erosion models to cope with the issues of scale. At hillslope scale, the physically based Water Erosion Prediction Project (WEPP) model is used to simulate soil loss and deposition processes. Model simulations of soil loss vary between 5 and 50 t ha⁻¹ yr⁻¹ depending on the spatial location on the hillslope, and show only limited correspondence with the results of the ¹³⁷Cs technique. These differences in absolute soil loss values could be due either to internal shortcomings of each approach or to external scale-related uncertainties. Pedo-geomorphological soil investigations along a catena confirm that estimations by the ¹³⁷Cs technique are more appropriate in reflecting both the spatial extent and magnitude of soil erosion at hillslope scale. In order to account for sediment dynamics at a larger scale, the spatially distributed WaTEM/SEDEM model is used to simulate soil erosion at catchment scale and to predict sediment delivery rates into a small water reservoir. Predicted sediment yield rates are compared with results gained from a bathymetric survey and sediment core analysis. Results show that specific sediment rates of 0.6 t ha⁻¹ yr⁻¹ from the model are in close agreement with observed sediment yield calculated from stratigraphical changes and downcore variations in ¹³⁷Cs concentrations. Sediment erosion rates averaged over the entire catchment of 1 to 2 t ha⁻¹ yr⁻¹ are significantly lower than results obtained at hillslope scale, confirming an inverse correlation between the magnitude of erosion rates and the spatial scale of the model. The

  13. A phase-space approach to atmospheric dynamics based on observational data. Theory and applications

    International Nuclear Information System (INIS)

    Wang Risheng.

    1994-01-01

    This thesis is an attempt to develop systematically a phase-space approach to atmospheric dynamics, based on the theoretical achievements and application experience in nonlinear time-series analysis. In particular, it is concerned with the derivation of quantities describing the geometrical structure of the observed dynamics in phase space (dimension estimation) and with the examination of observed atmospheric fluctuations in the light of phase-space representation. The thesis is therefore composed of three major parts, i.e. a general survey of the theory of statistical approaches to dynamical systems, the methodology designed for the present study, and specific applications with respect to dimension estimation and to a phase-space analysis of the tropical stratospheric quasi-biennial oscillation. (orig./KW)

  14. A simple coordinate space approach to three-body problems ...

    Indian Academy of Sciences (India)

    We show how to treat the dynamics of an asymmetric three-body system consisting of one heavy and two identical light particles in a simple coordinate space variational approach. The method is constructive and gives an efficient way of resolving a three-body system to an effective two-body system. It is illustrated by ...

  15. A Learning State-Space Model for Image Retrieval

    Directory of Open Access Journals (Sweden)

    Lee Greg C

    2007-01-01

    Full Text Available This paper proposes an approach based on a state-space model for learning the user concepts in image retrieval. We first design a scheme of region-based image representation based on concept units, which are integrated with different types of feature spaces and with different region scales of image segmentation. The design of the concept units aims at describing similar characteristics at a certain perspective among relevant images. We present the details of our proposed approach based on a state-space model for interactive image retrieval, including likelihood and transition models, and we also describe some experiments that show the efficacy of our proposed model. This work demonstrates the feasibility of using a state-space model to estimate the user intuition in image retrieval.

  16. Fishing for space: fine-scale multi-sector maritime activities influence fisher location choice.

    Directory of Open Access Journals (Sweden)

    Alex N Tidd

    Full Text Available The European Union and other states are moving towards Ecosystem Based Fisheries Management to balance food production and security with wider ecosystem concerns. Fishing is only one of several sectors operating within the ocean environment, competing for renewable and non-renewable resources that overlap in a limited space. Other sectors include marine mining, energy generation, recreation, transport and conservation. Trade-offs of these competing sectors are already part of the process but attempts to detail how the seas are being utilised have been primarily based on compilations of data on human activity at large spatial scales. Advances including satellite and shipping automatic tracking enable investigation of factors influencing fishers' choice of fishing grounds at spatial scales relevant to decision-making, including the presence or avoidance of activities by other sectors. We analyse the determinants of English and Welsh scallop-dredging fleet behaviour, including competing sectors, operating in the eastern English Channel. Results indicate aggregate mining activity, maritime traffic, increased fishing costs, and the English inshore 6 and French 12 nautical mile limits negatively impact fishers' likelihood of fishing in otherwise suitable areas. Past success, net-benefits and fishing within the 12 NM predispose fishers to use areas. Systematic conservation planning has yet to be widely applied in marine systems, and the dynamics of spatial overlap of fishing with other activities have not been studied at scales relevant to fisher decision-making. This study demonstrates fisher decision-making is indeed affected by the real-time presence of other sectors in an area, and therefore trade-offs which need to be accounted for in marine planning. As marine resource extraction demands intensify, governments will need to take a more proactive approach to resolving these trade-offs, and studies such as this will be required as the evidential

  17. Fishing for space: fine-scale multi-sector maritime activities influence fisher location choice.

    Science.gov (United States)

    Tidd, Alex N; Vermard, Youen; Marchal, Paul; Pinnegar, John; Blanchard, Julia L; Milner-Gulland, E J

    2015-01-01

    The European Union and other states are moving towards Ecosystem Based Fisheries Management to balance food production and security with wider ecosystem concerns. Fishing is only one of several sectors operating within the ocean environment, competing for renewable and non-renewable resources that overlap in a limited space. Other sectors include marine mining, energy generation, recreation, transport and conservation. Trade-offs of these competing sectors are already part of the process but attempts to detail how the seas are being utilised have been primarily based on compilations of data on human activity at large spatial scales. Advances including satellite and shipping automatic tracking enable investigation of factors influencing fishers' choice of fishing grounds at spatial scales relevant to decision-making, including the presence or avoidance of activities by other sectors. We analyse the determinants of English and Welsh scallop-dredging fleet behaviour, including competing sectors, operating in the eastern English Channel. Results indicate aggregate mining activity, maritime traffic, increased fishing costs, and the English inshore 6 and French 12 nautical mile limits negatively impact fishers' likelihood of fishing in otherwise suitable areas. Past success, net-benefits and fishing within the 12 NM predispose fishers to use areas. Systematic conservation planning has yet to be widely applied in marine systems, and the dynamics of spatial overlap of fishing with other activities have not been studied at scales relevant to fisher decision-making. This study demonstrates fisher decision-making is indeed affected by the real-time presence of other sectors in an area, and therefore trade-offs which need to be accounted for in marine planning. 
As marine resource extraction demands intensify, governments will need to take a more proactive approach to resolving these trade-offs, and studies such as this will be required as the evidential foundation for future

  18. A space and time scale-dependent nonlinear geostatistical approach for downscaling daily precipitation and temperature

    KAUST Repository

    Jha, Sanjeev Kumar

    2015-07-21

    A geostatistical framework is proposed to downscale daily precipitation and temperature. The methodology is based on multiple-point geostatistics (MPS), where a multivariate training image is used to represent the spatial relationship between daily precipitation and daily temperature over several years. Here, the training image consists of daily rainfall and temperature outputs from the Weather Research and Forecasting (WRF) model at 50 km and 10 km resolution for a twenty-year period ranging from 1985 to 2004. The data are used to predict downscaled climate variables for the year 2005. The result, for each downscaled pixel, is a daily time series of precipitation and temperature that is spatially dependent. Comparison of predicted precipitation and temperature against a reference dataset indicates that both the seasonal average climate response and the temporal variability are well reproduced. The explicit inclusion of time dependence is explored by considering the climate properties of the previous day as an additional variable. Comparison of simulations with and without inclusion of time dependence shows that the temporal dependence only slightly improves the daily prediction, because the temporal variability is already well represented in the conditioning data. Overall, the study shows that the multiple-point geostatistics approach is an efficient tool for statistical downscaling to obtain local-scale estimates of precipitation and temperature from General Circulation Models.

  19. The use of an active learning approach in a SCALE-UP learning space improves academic performance in undergraduate General Biology.

    Science.gov (United States)

    Hacisalihoglu, Gokhan; Stephens, Desmond; Johnson, Lewis; Edington, Maurice

    2018-01-01

    Active learning is a pedagogical approach that involves students engaging in collaborative learning, which enables them to take more responsibility for their learning and improve their critical thinking skills. While prior research examined student performance at majority universities, this study focuses specifically on Historically Black Colleges and Universities (HBCUs) for the first time. Here we present work on active learning interventions at Florida A&M University, where we measured the effect of active learning strategies coupled with a SCALE-UP (Student Centered Active Learning Environment with Upside-down Pedagogies) learning environment on student success in General Biology. In biology sections where active learning techniques were employed, students watched online videos and completed specific activities before class covering information previously presented in a traditional lecture format. In-class activities were then carefully planned to reinforce critical concepts and enhance critical thinking skills through active learning techniques such as the one-minute paper, think-pair-share, and the utilization of clickers. Students in the active learning and control groups covered the same topics, took the same summative examinations and completed identical homework sets. In addition, the same instructor taught all of the sections included in this study. Testing demonstrated that these interventions increased learning gains by as much as 16%, and students reported an increase in their positive perceptions of active learning and biology. Overall, our results suggest that active learning approaches coupled with the SCALE-UP environment may provide an added opportunity for student success when compared with the standard modes of instruction in General Biology.

  20. Coarse-Grain Bandwidth Estimation Techniques for Large-Scale Space Network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, Esther

    2013-01-01

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.
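
    As an illustration of what such top-down sizing can look like, the toy formula below requires the link to drain each store-and-forward buffer within its latency budget and applies a contention margin. This is an assumption-laden sketch, not the actual SCaN sizing model; the flow tuples and margin factor are invented for illustration.

```python
def size_wan_bandwidth(flows, margin=2.0):
    """Coarse-grain link sizing.  Each flow is (data_volume_bits,
    max_latency_s): the link must be able to drain that buffered volume
    within its latency budget.  The margin covers contention and
    protocol overhead.  Returns the required bandwidth in bits/s."""
    required = sum(bits / latency for bits, latency in flows)
    return required * margin
```

A real analysis would add contact windows and traffic time profiles per architecture option, but the drain-rate-plus-margin structure is the usual starting point.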

  1. A micromechanical approach of suffusion based on a length scale analysis of the grain detachment and grain transport processes.

    Science.gov (United States)

    Wautier, Antoine; Bonelli, Stéphane; Nicot, François

    2017-06-01

    Suffusion is the selective erosion of the finest particles of a soil subjected to an internal flow. Among the four types of internal erosion and piping identified today, suffusion is the least understood. Indeed, there is a lack of micromechanical approaches for identifying the critical microstructural parameters responsible for this process. Based on a discrete-element modeling of non-cohesive granular assemblies, specific micromechanical tools are developed in a unified framework to account for the first two steps of suffusion, namely the grain detachment and the grain transport processes. Thanks to the use of an enhanced force-chain definition and autocorrelation functions, the typical length scales associated with grain detachment are characterized. From the definition of transport paths based on a graph description of the pore space, the typical length scales associated with grain transport are recovered. For a uniform grain size distribution, a separation of scales between these two processes exists for the finest particles of a soil.
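
    The "transport paths based on a graph description of the pore space" can be illustrated with a minimal sketch (the names and geometry are invented for illustration, not taken from the paper): pores are graph nodes, throats are edges labelled with their constriction width, and a detached fine particle is transportable if some path to an exit pore uses only throats wider than the particle.

```python
from collections import deque

def can_transport(pores, throats, start, exits, particle_d):
    """BFS over a pore-network graph: a detached particle of diameter
    particle_d can be transported out if some path from its pore to an
    exit uses only throats (pore-to-pore constrictions) wider than it."""
    adj = {p: [] for p in pores}
    for (a, b), width in throats.items():
        if width > particle_d:   # keep only throats the particle fits through
            adj[a].append(b)
            adj[b].append(a)
    seen, queue = {start}, deque([start])
    while queue:
        p = queue.popleft()
        if p in exits:
            return True
        for q in adj[p]:
            if q not in seen:
                seen.add(q)
                queue.append(q)
    return False
```

Sweeping `particle_d` over the grain size distribution gives the largest transportable fine fraction, one way to express the transport length scale the abstract refers to.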

  2. Structured ecosystem-scale approach to marine water quality management

    CSIR Research Space (South Africa)

    Taljaard, Susan

    2006-10-01

    Full Text Available and implement environmental management programmes. A structured ecosystem-scale approach for the design and implementation of marine water quality management programmes developed by the CSIR (South Africa) in response to recent advances in policies...

  3. Implementation of an Open-Scenario, Long-Term Space Debris Simulation Approach

    Science.gov (United States)

    Nelson, Bron; Yang Yang, Fan; Carlino, Roberto; Dono Perez, Andres; Faber, Nicolas; Henze, Chris; Karacalioglu, Arif Goktug; O'Toole, Conor; Swenson, Jason; Stupl, Jan

    2015-01-01

    This paper provides a status update on the implementation of a flexible, long-term space debris simulation approach. The motivation is to build a tool that can assess the long-term impact of various options for debris-remediation, including the LightForce space debris collision avoidance concept that diverts objects using photon pressure [9]. State-of-the-art simulation approaches that assess the long-term development of the debris environment use either completely statistical approaches, or they rely on large time steps on the order of several days if they simulate the positions of single objects over time. They cannot be easily adapted to investigate the impact of specific collision avoidance schemes or de-orbit schemes, because the efficiency of a collision avoidance maneuver can depend on various input parameters, including ground station positions and orbital and physical parameters of the objects involved in close encounters (conjunctions). Furthermore, maneuvers take place on timescales much smaller than days. For example, LightForce only changes the orbit of a certain object (aiming to reduce the probability of collision), but it does not remove entire objects or groups of objects. In the same sense, it is also not straightforward to compare specific de-orbit methods in regard to potential collision risks during a de-orbit maneuver. To gain flexibility in assessing interactions with objects, we implement a simulation that includes every tracked space object in Low Earth Orbit (LEO) and propagates all objects with high precision and variable time-steps as small as one second. It allows the assessment of the (potential) impact of physical or orbital changes to any object. The final goal is to employ a Monte Carlo approach to assess the debris evolution during the simulation time-frame of 100 years and to compare a baseline scenario to debris remediation scenarios or other scenarios of interest. To populate the initial simulation, we use the entire space

  4. Redshift space correlations and scale-dependent stochastic biasing of density peaks

    Science.gov (United States)

    Desjacques, Vincent; Sheth, Ravi K.

    2010-01-01

    We calculate the redshift space correlation function and the power spectrum of density peaks of a Gaussian random field. Our derivation, which is valid on linear scales k ≲ 0.1 h Mpc⁻¹, is based on the peak biasing relation given by Desjacques [Phys. Rev. D 78, 103503 (2008)]. In linear theory, the redshift space power spectrum is P^s_pk(k, μ) = exp(−f² σ_vel² k² μ²) [b_pk(k) + b_vel(k) f μ²]² P_δ(k), where μ is the cosine of the angle with respect to the line of sight, σ_vel is the one-dimensional velocity dispersion, f is the growth rate, and b_pk(k) and b_vel(k) are k-dependent linear spatial and velocity bias factors. For peaks, the value of σ_vel depends upon the functional form of b_vel. When the k dependence is absent from the square brackets and b_vel is set to unity, the resulting expression is assumed to describe models where the bias is linear and deterministic, but the velocities are unbiased. The peak model is remarkable because it has unbiased velocities in this same sense (peak motions are driven by dark matter flows) but, in order to achieve this, b_vel must be k dependent. We speculate that this is true in general: k dependence of the spatial bias will lead to k dependence of b_vel even if the biased tracers flow with the dark matter. Because of the k dependence of the linear bias parameters, standard manipulations applied to the peak model will lead to k-dependent estimates of the growth factor that could erroneously be interpreted as a signature of modified dark energy or gravity. We use the Fisher formalism to show that the constraint on the growth rate f is degraded by a factor of 2 if one allows for a k-dependent velocity bias of the peak type. Our analysis also demonstrates that the Gaussian smoothing term is part and parcel of linear theory. We discuss a simple estimate of nonlinear evolution and illustrate the effect of the peak bias on the redshift space multipoles. For k ≲ 0.1 h Mpc⁻¹, the peak bias is deterministic but k
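
    The linear-theory expression can be evaluated directly. The sketch below is a plain transcription of the quoted formula; the toy k-dependent bias callables stand in for the actual peak bias functions, which the paper derives from the peak model.

```python
import numpy as np

def peak_power_rsd(k, mu, f, sigma_vel, b_pk, b_vel, P_delta):
    """Linear redshift-space power spectrum of peaks,
        P^s(k, mu) = exp(-f^2 sigma_vel^2 k^2 mu^2)
                     * (b_pk(k) + b_vel(k) * f * mu**2)**2 * P_delta(k),
    where mu is the cosine of the angle to the line of sight and
    b_pk, b_vel are k-dependent callables (the peak-model feature)."""
    damping = np.exp(-(f * sigma_vel * k * mu) ** 2)
    return damping * (b_pk(k) + b_vel(k) * f * mu ** 2) ** 2 * P_delta(k)
```

Setting b_vel ≡ 1 and a k-independent b_pk recovers the standard Kaiser form with Gaussian damping, which makes the peak model's extra k dependence easy to isolate numerically.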

  5. Multi-Scale Initial Conditions For Cosmological Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Hahn, Oliver; /KIPAC, Menlo Park; Abel, Tom; /KIPAC, Menlo Park /ZAH, Heidelberg /HITS, Heidelberg

    2011-11-04

    We discuss a new algorithm to generate multi-scale initial conditions with multiple levels of refinement for cosmological 'zoom-in' simulations. The method uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel, together with an adaptive multi-grid Poisson solver, to generate displacements and velocities following first-order (1LPT) or second-order (2LPT) Lagrangian perturbation theory. The new algorithm achieves rms relative errors of the order of 10⁻⁴ for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier-space-induced interference ringing. An optional hybrid multi-grid and Fast Fourier Transform (FFT) based scheme is introduced which has identical Fourier-space behaviour as traditional approaches. Using a suite of re-simulations of a galaxy cluster halo, our real-space-based approach is found to reproduce correlation functions, density profiles, key halo properties and subhalo abundances with per cent level accuracy. Finally, we generalize our approach for two-component baryon and dark-matter simulations and demonstrate that the power spectrum evolution is in excellent agreement with linear perturbation theory. For initial baryon density fields, it is suggested to use the local Lagrangian approximation in order to generate a density field for mesh-based codes that is consistent with the Lagrangian perturbation theory instead of the current practice of using the Eulerian linearly scaled densities.
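
    The core idea, convolving white noise with a transfer-function kernel, is easy to show in one dimension. The snippet is a deliberately simplified, single-level analogue of the algorithm (no adaptive refinement, multi-grid solver, or Lagrangian perturbation step), with names chosen for illustration.

```python
import numpy as np

def gaussian_field_1d(n, transfer, seed=0):
    """Draw unit-variance white noise and convolve it (in Fourier space)
    with a transfer-function kernel, imprinting the target power
    spectrum P(k) = |transfer(k)|^2 on the field."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(n)
    k = np.fft.fftfreq(n) * 2 * np.pi
    field_k = np.fft.fft(noise) * transfer(np.abs(k))
    return np.fft.ifft(field_k).real
```

The multi-scale version of this convolution works in real space on nested grids, which is what localizes the errors at the coarse-fine boundaries instead of spreading them as Fourier ringing.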

  6. A state space approach for the eigenvalue problem of marine risers

    KAUST Repository

    Alfosail, Feras; Nayfeh, Ali H.; Younis, Mohammad I.

    2017-01-01

    A numerical state-space approach is proposed to examine the natural frequencies and critical buckling limits of marine risers. A large axial tension in the riser model causes numerical limitations. These limitations are overcome by using

  7. New Li-Yau-Hamilton Inequalities for the Ricci Flow via the Space-Time Approach

    OpenAIRE

    Chow, Bennett; Knopf, Dan

    2002-01-01

    We generalize Hamilton's matrix Li-Yau-type Harnack estimate for the Ricci flow by considering the space of all LYH (Li-Yau-Hamilton) quadratics that arise as curvature tensors of space-time connections satisfying the Ricci flow with respect to the natural space-time degenerate metric. As a special case, we employ scaling arguments to derive a linear-type matrix LYH estimate. The new LYH quadratics obtained in this way are associated to the system of the Ricci flow coupled to a 1-form and a 2...

  8. Space, Scale and Languages: Identity Construction of Cross-Boundary Students in a Multilingual University in Hong Kong

    Science.gov (United States)

    Gu, Mingyue Michelle; Tong, Ho Kin

    2012-01-01

    Drawing on the notions of scale and space, this paper investigates identity construction among a group of mainland Chinese cross-boundary students by analysing their language choices and linguistic practices in a multilingual university in Hong Kong. The research illustrates how movement across spaces by these students produces varying index…

  9. Comparison of two Minkowski-space approaches to heavy quarkonia

    Energy Technology Data Exchange (ETDEWEB)

    Leitao, Sofia; Biernat, Elmar P. [Universidade de Lisboa, CFTP, Instituto Superior Tecnico, Lisbon (Portugal); Li, Yang [Iowa State University, Department of Physics and Astronomy, Ames, IA (United States); College of William and Mary, Department of Physics, Williamsburg, VA (United States); Maris, Pieter; Vary, James P. [Iowa State University, Department of Physics and Astronomy, Ames, IA (United States); Pena, M.T. [Universidade de Lisboa, CFTP, Instituto Superior Tecnico, Lisbon (Portugal); Universidade de Lisboa, Departamento de Fisica, Instituto Superior Tecnico, Lisbon (Portugal); Stadler, Alfred [Universidade de Lisboa, CFTP, Instituto Superior Tecnico, Lisbon (Portugal); Universidade de Evora, Departamento de Fisica, Evora (Portugal)

    2017-10-15

    In this work we compare mass spectra and decay constants obtained from two recent, independent, and fully relativistic approaches to the quarkonium bound-state problem: the Basis Light-Front Quantization approach, where light-front wave functions are naturally formulated; and, the Covariant Spectator Theory (CST), based on a reorganization of the Bethe-Salpeter equation. Even though conceptually different, both solutions are obtained in Minkowski space. Comparisons of decay constants for more than ten states of charmonium and bottomonium show favorable agreement between the two approaches as well as with experiment where available. We also apply the Brodsky-Huang-Lepage prescription to convert the CST amplitudes into functions of light-front variables. This provides an ideal opportunity to investigate the similarities and differences at the level of the wave functions. Several qualitative features are observed in remarkable agreement between the two approaches even for the rarely addressed excited states. Leading-twist distribution amplitudes as well as parton distribution functions of heavy quarkonia are also analyzed. (orig.)

  10. Implementing CDIO Approach in preparing engineers for Space Industry

    Directory of Open Access Journals (Sweden)

    Daneykin Yury

    2017-01-01

    The necessity to train highly qualified specialists has led to the development of a trajectory for training specialists for the space industry. Several steps have been undertaken to reach this purpose. First, the University founded the Space Instrument Design Center, which promotes a wide range of initiatives in educating specialists, retraining specialists, carrying out research and collaborating with profiled enterprises. The University also introduced an Elite Engineering Education system to attract talented specialists and help them follow individual trajectories to become unique specialists. The paper discusses the targets that must be achieved to train such specialists. Moreover, the paper presents the compliance of these attempts with the CDIO Approach, which is widely used in leading universities to improve engineering programs.

  11. Automated Reconstruction of Building LoDs from Airborne LiDAR Point Clouds Using an Improved Morphological Scale Space

    Directory of Open Access Journals (Sweden)

    Bisheng Yang

    2016-12-01

    Reconstructing building models at different levels of detail (LoDs) from airborne laser scanning point clouds is urgently needed for wide application, as it can balance the user's requirements against economic costs. Previous methods reconstruct building LoDs from the finest 3D building models rather than from point clouds, resulting in heavy costs and inflexible adaptivity. The scale space is a sound theory for multi-scale representation of an object from a coarser level to a finer level. Therefore, this paper proposes a novel method to reconstruct buildings at different LoDs from airborne Light Detection and Ranging (LiDAR) point clouds based on an improved morphological scale space. The proposed method first extracts building candidate regions following the separation of ground and non-ground points. For each building candidate region, the proposed method generates a scale space by iteratively applying the improved morphological reconstruction with increasing scale, and constructs the corresponding topological relationship graphs (TRGs) across scales. Secondly, the proposed method robustly extracts building points by using features based on the TRG. Finally, the proposed method reconstructs each building at different LoDs according to the TRG. The experiments demonstrate that the proposed method robustly extracts buildings with details (e.g., door eaves and roof furniture) and illustrate good performance in distinguishing buildings from vegetation or other objects, while automatically reconstructing building LoDs from the finest building points.

  12. Transport regimes spanning magnetization-coupling phase space

    Science.gov (United States)

    Baalrud, Scott D.; Daligault, Jérôme

    2017-10-01

    The manner in which transport properties vary over the entire parameter-space of coupling and magnetization strength is explored. Four regimes are identified based on the relative size of the gyroradius compared to other fundamental length scales: the collision mean free path, Debye length, distance of closest approach, and interparticle spacing. Molecular dynamics simulations of self-diffusion and temperature anisotropy relaxation spanning the parameter space are found to agree well with the predicted boundaries. Comparison with existing theories reveals regimes where they succeed, where they fail, and where no theory has yet been developed.
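    The four comparison lengths named above follow from standard textbook formulas; a minimal calculator (SI units, illustrative electron-plasma parameters) is:

```python
import math

# Back-of-the-envelope calculator for the length scales compared against the
# gyroradius in the abstract.  Standard SI formulas; parameters illustrative.

E = 1.602176634e-19        # elementary charge [C]
EPS0 = 8.8541878128e-12    # vacuum permittivity [F/m]
KB = 1.380649e-23          # Boltzmann constant [J/K]
ME = 9.1093837015e-31      # electron mass [kg]

def length_scales(n, T, B, m=ME, q=E):
    """Return (Debye length, gyroradius, distance of closest approach,
    interparticle spacing) in metres for density n [m^-3], temperature T [K],
    magnetic field B [T]."""
    debye = math.sqrt(EPS0 * KB * T / (n * q * q))
    v_th = math.sqrt(KB * T / m)
    gyroradius = m * v_th / (q * B)
    closest = q * q / (4.0 * math.pi * EPS0 * KB * T)
    spacing = (3.0 / (4.0 * math.pi * n)) ** (1.0 / 3.0)
    return debye, gyroradius, closest, spacing

# One example point in the coupling-magnetization parameter space.
scales = length_scales(n=1e20, T=1e5, B=1.0)
```

    Comparing the gyroradius against each of these lengths as n, T and B vary is what delineates the four regimes discussed in the abstract.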

  13. Large-Scale Demonstration of Liquid Hydrogen Storage with Zero Boiloff for In-Space Applications

    Science.gov (United States)

    Hastings, L. J.; Bryant, C. B.; Flachbart, R. H.; Holt, K. A.; Johnson, E.; Hedayat, A.; Hipp, B.; Plachta, D. W.

    2010-01-01

    Cryocooler and passive insulation technology advances have substantially improved prospects for zero-boiloff cryogenic storage. Therefore, a cooperative effort by NASA's Ames Research Center, Glenn Research Center, and Marshall Space Flight Center (MSFC) was implemented to develop zero-boiloff concepts for in-space cryogenic storage. Described herein is one program element: a large-scale, zero-boiloff demonstration using the MSFC multipurpose hydrogen test bed (MHTB). A commercial cryocooler was interfaced with an existing MHTB spray bar mixer and insulation system in a manner that enabled a balance between incoming and extracted thermal energy.

  14. A Scale-up Approach for Film Coating Process Based on Surface Roughness as the Critical Quality Attribute.

    Science.gov (United States)

    Yoshino, Hiroyuki; Hara, Yuko; Dohi, Masafumi; Yamashita, Kazunari; Hakomori, Tadashi; Kimura, Shin-Ichiro; Iwao, Yasunori; Itai, Shigeru

    2018-04-01

    Scale-up approaches for the film coating process have been established for each type of film coating equipment from thermodynamic and mechanical analyses over several decades. The objective of the present study was to establish a versatile scale-up approach for the film coating process, applicable to commercial production, that is based on a critical quality attribute (CQA) using the Quality by Design (QbD) approach and is independent of the equipment used. Experiments on a pilot scale using the Design of Experiments (DoE) approach were performed to find a suitable CQA from among surface roughness, contact angle, color difference, and coating film properties measured by terahertz spectroscopy. Surface roughness was determined to be a suitable CQA from a quantitative appearance evaluation. When surface roughness was fixed as the CQA, the water content of the film-coated tablets was determined to be the critical material attribute (CMA), a parameter that does not depend on scale or equipment. Finally, to verify the scale-up approach determined from the pilot scale, experiments on a commercial scale were performed. The good correlation between the surface roughness (CQA) and the water content (CMA) identified at the pilot scale was also retained at the commercial scale, indicating that our proposed method should be useful as a scale-up approach for the film coating process.

  15. Simulation of the space debris environment in LEO using a simplified approach

    Science.gov (United States)

    Kebschull, Christopher; Scheidemann, Philipp; Hesselbach, Sebastian; Radtke, Jonas; Braun, Vitali; Krag, H.; Stoll, Enrico

    2017-01-01

    Several numerical approaches exist to simulate the evolution of the space debris environment. These simulations usually rely on the propagation of a large population of objects in order to determine the collision probability for each object. Explosion and collision events are triggered randomly using a Monte-Carlo (MC) approach. So in many different scenarios different objects are fragmented and contribute to a different version of the space debris environment. The results of the single Monte-Carlo runs therefore represent the whole spectrum of possible evolutions of the space debris environment. For the comparison of different scenarios, in general the average of all MC runs together with its standard deviation is used. This method is computationally very expensive due to the propagation of thousands of objects over long timeframes and the application of the MC method. At the Institute of Space Systems (IRAS) a model capable of describing the evolution of the space debris environment has been developed and implemented. The model is based on source and sink mechanisms, where yearly launches as well as collisions and explosions are considered as sources. The natural decay and post mission disposal measures are the only sink mechanisms. This method reduces the computational costs tremendously. In order to achieve this benefit a few simplifications have been applied. The approach of the model partitions the Low Earth Orbit (LEO) region into altitude shells. Only two kinds of objects are considered, intact bodies and fragments, which are also divided into diameter bins. As an extension to a previously presented model the eccentricity has additionally been taken into account with 67 eccentricity bins. While a set of differential equations has been implemented in a generic manner, the Euler method was chosen to integrate the equations for a given time span. For this paper parameters have been derived so that the model is able to reflect the results of the numerical MC
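    The source/sink idea can be sketched for a single altitude shell with a constant launch source and a decay sink, integrated with the Euler method as in the paper; the rates below are illustrative and not taken from the IRAS model.

```python
# Single-shell toy version of the source/sink debris model described above:
# dN/dt = launches - decay_rate * N, integrated with the forward Euler method.
# All rates are illustrative placeholders.

def evolve_debris(n0, launches_per_year, decay_rate, years, dt=0.1):
    """Forward-Euler integration of the single-shell balance equation."""
    n = n0
    for _ in range(int(years / dt)):
        n += dt * (launches_per_year - decay_rate * n)
    return n

# The population relaxes toward the equilibrium launches / decay_rate.
final = evolve_debris(n0=1000.0, launches_per_year=80.0, decay_rate=0.02,
                      years=500.0)
```

    The full model partitions LEO into many such shells with diameter and eccentricity bins and couples them through collision and explosion source terms, but the integration scheme is the same.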

  16. Analytical Tools for Space Suit Design

    Science.gov (United States)

    Aitchison, Lindsay

    2011-01-01

    As indicated by the implementation of multiple small project teams within the agency, NASA is adopting a lean approach to hardware development that emphasizes quick product realization and rapid response to shifting program and agency goals. Over the past two decades, space suit design has been evolutionary in approach, with emphasis on building prototypes and then testing with the largest practical range of subjects possible. The results of these efforts show continuous improvement but make scaled design and performance predictions almost impossible with limited budgets and little time. Thus, in an effort to start changing the way NASA approaches space suit design and analysis, the Advanced Space Suit group has initiated the development of an integrated design and analysis tool. It is a multi-year (if not decadal) development effort that, when fully implemented, is envisioned to generate analysis of any given space suit architecture or, conversely, predictions of ideal space suit architectures given specific mission parameters. The master tool will exchange information to and from a set of five sub-tool groups in order to generate the desired output. The basic functions of each sub-tool group, the initial relationships between the sub-tools, and a comparison to state-of-the-art software and tools are discussed.

  17. A structured ecosystem-scale approach to marine water quality ...

    African Journals Online (AJOL)

    These, in turn, created the need for holistic and integrated frameworks within which to design and implement environmental management programmes. A structured ecosystem-scale approach for the design and implementation of marine water quality management programmes developed by the CSIR (South Africa) in ...

  18. Gravity on a little warped space

    International Nuclear Information System (INIS)

    George, Damien P.; McDonald, Kristian L.

    2011-01-01

    We investigate the consistent inclusion of 4D Einstein gravity on a truncated slice of AdS_5 whose bulk-gravity and UV scales are much less than the 4D Planck scale, M_Pl. Such 'Little Warped Spaces' have found phenomenological utility and can be motivated by string realizations of the Randall-Sundrum framework. Using the interval approach to brane-world gravity, we show that the inclusion of a large UV-localized Einstein-Hilbert term allows one to consistently incorporate 4D Einstein gravity into the low-energy theory. We detail the spectrum of Kaluza-Klein metric fluctuations and, in particular, examine the coupling of the little radion to matter. Furthermore, we show that Goldberger-Wise stabilization can be successfully implemented on such spaces. Our results demonstrate that realistic low-energy effective theories can be constructed on these spaces, and have relevance for existing models in the literature.

  19. A new approach to designing reduced scale thermal-hydraulic experiments

    International Nuclear Information System (INIS)

    Lapa, Celso M.F.; Sampaio, Paulo A.B. de; Pereira, Claudio M.N.A.

    2004-01-01

    Reduced scale experiments are often employed in engineering because they are much cheaper than real scale testing. Unfortunately, though, it is difficult to design a thermal-hydraulic circuit or equipment in reduced scale capable of reproducing, both accurately and simultaneously, all the physical phenomena that occur at real scale and operating conditions. This paper presents a methodology for designing thermal-hydraulic experiments in reduced scale based on setting up a constrained optimization problem that is solved using genetic algorithms (GAs). In order to demonstrate the application of the proposed methodology, we performed some investigations in the design of a heater aimed at simulating the transport of heat and momentum in the core of a pressurized water reactor (PWR) at 100% of nominal power and under non-accident operating conditions. The results obtained show that the proposed methodology is a promising approach for designing reduced scale experiments.

  20. The anomalous scaling exponents of turbulence in general dimension from random geometry

    Energy Technology Data Exchange (ETDEWEB)

    Eling, Christopher [Rudolf Peierls Centre for Theoretical Physics, University of Oxford, 1 Keble Road, Oxford OX1 3NP (United Kingdom); Oz, Yaron [Raymond and Beverly Sackler School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978 (Israel)

    2015-09-22

    We propose an analytical formula for the anomalous scaling exponents of inertial range structure functions in incompressible fluid turbulence. The formula is a Knizhnik-Polyakov-Zamolodchikov (KPZ)-type relation and is valid in any number of space dimensions. It incorporates intermittency in a novel way by dressing the Kolmogorov linear scaling via a coupling to a lognormal random geometry. The formula has one real parameter γ that depends on the number of space dimensions. The scaling exponents satisfy the convexity inequality, and the supersonic bound constraint. They agree with the experimental and numerical data in two and three space dimensions, and with numerical data in four space dimensions. Intermittency increases with γ, and in the infinite γ limit the scaling exponents approach the value one, as in Burgers turbulence. At large n the nth order exponent scales as √n. We discuss the relation between fluid flows and black hole geometry that inspired our proposal.
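    To illustrate how a quadratic (KPZ-type) dressing of the Kolmogorov scaling produces anomalous, √n-growing exponents, the sketch below solves the schematic relation n/3 = ξ_n + (γ²/2) ξ_n(ξ_n - 1), chosen so that the exact result ξ_3 = 1 is preserved. This stand-in form is an assumption made for illustration only, not the authors' exact formula.

```python
import math

# Schematic KPZ-type dressing of the Kolmogorov exponents n/3 (illustrative,
# not the formula of the abstract): solve the quadratic
#   n/3 = xi + (g/2) * xi * (xi - 1),   g = gamma^2,
# for the positive root xi_n.

def xi(n, gamma2=1.0):
    """Positive root of (g/2) xi^2 + (1 - g/2) xi - n/3 = 0."""
    g = gamma2
    b = 1.0 - g / 2.0
    return (-b + math.sqrt(b * b + 2.0 * g * n / 3.0)) / g

exponents = [xi(n) for n in range(1, 9)]
```

    The root preserves ξ_3 = 1 for any γ, lies below the linear K41 value n/3 for n > 3 (intermittency), and grows like √(2n/3γ²) at large n, reproducing the √n behaviour noted in the abstract.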

  1. Direct convertor based upon space charge effects

    International Nuclear Information System (INIS)

    Gitomer, S.J.

    1977-01-01

    A device capable of directly converting the kinetic energy of charged particles into electrical energy is considered. The device differs from earlier ones (such as Post's periodic focus electrostatic direct convertor) in that it makes use of the space charge repulsion in a high-density charged particle beam. The beam is directed into a monotonic decelerating electrostatic field of a several-stage planar-finned structure. The collector fins coincide with vacuum equipotential surfaces. Space charge blowup of the beam directs particles onto the various collector fins. The energy efficiency of a 4-stage device has been determined using a numerical simulation approach. We find that efficiencies approaching 75 percent are possible. An approximate scaling law is derived for the space-charge-based direct convertor and a comparison is made to the periodic focus direct convertor. We find the space-charge-based direct convertor to be superior in a number of ways.

  2. Scaling strength distributions in quasi-brittle materials from micro- to macro-scales: A computational approach to modeling Nature-inspired structural ceramics

    International Nuclear Information System (INIS)

    Genet, Martin; Couegnat, Guillaume; Tomsia, Antoni P.; Ritchie, Robert O.

    2014-01-01

    This paper presents an approach to predict the strength distribution of quasi-brittle materials across multiple length-scales, with emphasis on Nature-inspired ceramic structures. It permits the computation of the failure probability of any structure under any mechanical load, solely based on considerations of the microstructure and its failure properties by naturally incorporating the statistical and size-dependent aspects of failure. We overcome the intrinsic limitations of single periodic unit-based approaches by computing the successive failures of the material components and associated stress redistributions on arbitrary numbers of periodic units. For large size samples, the microscopic cells are replaced by a homogenized continuum with equivalent stochastic and damaged constitutive behavior. After establishing the predictive capabilities of the method, and illustrating its potential relevance to several engineering problems, we employ it in the study of the shape and scaling of strength distributions across differing length-scales for a particular quasi-brittle system. We find that the strength distributions display a Weibull form for samples of size approaching the periodic unit; however, these distributions become closer to normal with further increase in sample size before finally reverting to a Weibull form for macroscopic sized samples. In terms of scaling, we find that the weakest link scaling applies only to microscopic, and not macroscopic scale, samples. These findings are discussed in relation to failure patterns computed at different size-scales. (authors)
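    The weakest-link scaling discussed above can be checked with a small Monte Carlo: the minimum of N independent Weibull(m, s0) strengths is again Weibull distributed with scale s0·N^(-1/m), so the mean strength of a chain of N elements drops as N^(-1/m). The parameters below are illustrative.

```python
import math
import random

# Monte Carlo check of weakest-link scaling for Weibull element strengths.
# A chain of N elements fails at its weakest element; the minimum of N
# Weibull(m, s0) variables is Weibull(m, s0 * N**(-1/m)).  Parameters are
# illustrative.

def weibull_sample(rng, m, s0):
    """Inverse-CDF draw from a Weibull distribution (shape m, scale s0)."""
    return s0 * (-math.log(1.0 - rng.random())) ** (1.0 / m)

def mean_chain_strength(n_elements, m=5.0, s0=1.0, trials=2000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += min(weibull_sample(rng, m, s0) for _ in range(n_elements))
    return total / trials

# Weakest-link prediction: mean strength scales as N**(-1/m).
ratio = mean_chain_strength(160) / mean_chain_strength(10)   # ~ 16**(-1/5)
```

    This is the microscopic-sample regime of the paper; the interesting result above is precisely that this simple scaling fails for intermediate, macroscopic sample sizes.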

  3. A scale-entropy diffusion equation to describe the multi-scale features of turbulent flames near a wall

    Science.gov (United States)

    Queiros-Conde, D.; Foucher, F.; Mounaïm-Rousselle, C.; Kassem, H.; Feidt, M.

    2008-12-01

    Multi-scale features of turbulent flames near a wall display two kinds of scale-dependent fractal features. In scale-space, a unique fractal dimension cannot be defined, and the fractal dimension of the front is scale-dependent. Moreover, when the front approaches the wall, this dependency changes: the fractal dimension also depends on the wall-distance. Our aim here is to propose a general geometrical framework that provides the possibility to integrate these two cases, in order to describe the multi-scale structure of turbulent flames interacting with a wall. Based on the scale-entropy quantity, which is simply linked to the roughness of the front, we thus introduce a general scale-entropy diffusion equation. We define the notion of “scale-evolutivity”, which characterises the deviation of a multi-scale system from pure fractal behaviour. The specific case of a constant “scale-evolutivity” over the scale-range is studied. In this case, called “parabolic scaling”, the fractal dimension is a linear function of the logarithm of scale. The case of a constant scale-evolutivity in the wall-distance space implies that the fractal dimension depends linearly on the logarithm of the wall-distance. We then verify experimentally that parabolic scaling represents a good approximation of the real multi-scale features of turbulent flames near a wall.
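    The "parabolic scaling" case can be made concrete with a toy box-counting model in which the fractal dimension varies linearly with the logarithm of scale, so the log-log box-count curve is a parabola and successive local slopes change by a constant increment (constant scale-evolutivity). The constants below are illustrative.

```python
import math

# Toy "parabolic scaling": D(l) = D0 + BETA * ln(l0/l) gives a box count
# whose log is quadratic in log scale, so local slopes grow by a constant
# increment across octaves.  D0, BETA, L0 are illustrative.

D0, BETA, L0 = 2.0, 0.1, 1.0

def log_box_count(l):
    u = math.log(L0 / l)          # logarithm of the scale ratio
    return (D0 + BETA * u) * u    # log N(l) = D(l) * u

scales = [L0 * 2.0 ** (-i) for i in range(6)]
slopes = [
    (log_box_count(scales[i + 1]) - log_box_count(scales[i])) / math.log(2.0)
    for i in range(5)
]
# Constant slope increment across octaves: 2 * BETA * ln 2 per octave.
increments = [slopes[i + 1] - slopes[i] for i in range(4)]
```

    A pure fractal corresponds to BETA = 0, for which all the increments vanish and the slope is the constant dimension D0.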

  4. Modelling an industrial anaerobic granular reactor using a multi-scale approach.

    Science.gov (United States)

    Feldman, H; Flores-Alsina, X; Ramin, P; Kjellberg, K; Jeppsson, U; Batstone, D J; Gernaey, K V

    2017-12-01

    The objective of this paper is to show the results of an industrial project dealing with modelling of anaerobic digesters. A multi-scale mathematical approach is developed to describe reactor hydrodynamics, granule growth/distribution and microbial competition/inhibition for substrate/space within the biofilm. The main biochemical and physico-chemical processes in the model are based on the Anaerobic Digestion Model No 1 (ADM1), extended with the fate of phosphorus (P), sulfur (S) and ethanol (Et-OH). Wastewater dynamic conditions are reproduced and data frequency increased using the Benchmark Simulation Model No 2 (BSM2) influent generator. All models are tested using two plant data sets corresponding to different operational periods (#D1, #D2). Simulation results reveal that the proposed approach can satisfactorily describe the transformation of organics, nutrients and minerals, the production of methane, carbon dioxide and sulfide and the potential formation of precipitates within the bulk (average deviation between computer simulations and measurements for both #D1 and #D2 is around 10%). Model predictions suggest a stratified structure within the granule which is the result of: 1) applied loading rates, 2) mass transfer limitations and 3) specific (bacterial) affinity for substrate. Hence, inerts (X_I) and methanogens (X_ac) are situated in the inner zone, and this fraction lowers as the radius increases, favouring the presence of acidogens (X_su, X_aa, X_fa) and acetogens (X_c4, X_pro). Additional simulations show the effects on the overall process performance when operational (pH) and loading (S:COD) conditions are modified. Lastly, the effect of intra-granular precipitation on the overall organic/inorganic distribution is assessed at 1) different times and 2) reactor heights. Finally, the possibilities and opportunities offered by the proposed approach for conducting engineering optimization projects are discussed.

  5. Space Elevators Preliminary Architectural View

    Science.gov (United States)

    Pullum, L.; Swan, P. A.

    Space Systems Architecture has been expanded into a process by the US Department of Defense for their large-scale systems-of-systems development programs. This paper uses the steps in that process to establish a framework for developing Space Elevator systems and provides a methodology for managing complexity. This new approach to developing a family of systems is based upon three architectural views: Operational View (OV), Systems View (SV), and Technical Standards View (TV). The top-level view of the process establishes the stages for the development of the first Space Elevator and is called Architectural View - 1, Overview and Summary. This paper will show the guidelines and steps of the process while focusing upon components of the Space Elevator Preliminary Architecture View. This Preliminary Architecture View is presented as a draft starting point for the Space Elevator Project.

  6. Co-Cure-Ply Resins for High Performance, Large-Scale Structures

    Data.gov (United States)

    National Aeronautics and Space Administration — Large-scale composite structures are commonly joined by secondary bonding of molded-and-cured thermoset components. This approach may result in unpredictable joint...

  7. A Mellin space approach to the conformal bootstrap

    Energy Technology Data Exchange (ETDEWEB)

    Gopakumar, Rajesh [International Centre for Theoretical Sciences (ICTS-TIFR),Survey No. 151, Shivakote, Hesaraghatta Hobli, Bangalore North 560 089 (India); Kaviraj, Apratim [Centre for High Energy Physics, Indian Institute of Science,C.V. Raman Avenue, Bangalore 560012 (India); Sen, Kallol [Centre for High Energy Physics, Indian Institute of Science,C.V. Raman Avenue, Bangalore 560012 (India); Kavli Institute for the Physics and Mathematics of the Universe (WPI),The University of Tokyo Institutes for Advanced Study, Kashiwa, Chiba 277-8583 (Japan); Sinha, Aninda [Centre for High Energy Physics, Indian Institute of Science,C.V. Raman Avenue, Bangalore 560012 (India)

    2017-05-05

    We describe in more detail our approach to the conformal bootstrap which uses the Mellin representation of CFT_d four point functions and expands them in terms of crossing symmetric combinations of AdS_{d+1} Witten exchange functions. We consider arbitrary external scalar operators and set up the conditions for consistency with the operator product expansion. Namely, we demand cancellation of spurious powers (of the cross ratios, in position space) which translate into spurious poles in Mellin space. We discuss two contexts in which we can immediately apply this method by imposing the simplest set of constraint equations. The first is the epsilon expansion. We mostly focus on the Wilson-Fisher fixed point as studied in an epsilon expansion about d=4. We reproduce Feynman diagram results for operator dimensions to O(ε^3) rather straightforwardly. This approach also yields new analytic predictions for OPE coefficients to the same order which fit nicely with recent numerical estimates for the Ising model (at ε=1). We will also mention some leading order results for scalar theories near three and six dimensions. The second context is a large spin expansion, in any dimension, where we are able to reproduce and go a bit beyond some of the results recently obtained using the (double) light cone expansion. We also have a preliminary discussion about numerical implementation of the above bootstrap scheme in the absence of a small parameter.
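    For orientation, the standard O(ε²) Feynman-diagram results for the N = 1 Wilson-Fisher fixed point that such bootstrap calculations reproduce can be evaluated numerically at ε = 1. The truncated series below are the textbook expressions, not outputs of the Mellin-space method.

```python
# Standard O(eps^2) epsilon-expansion results for the N = 1 (Ising)
# Wilson-Fisher fixed point in d = 4 - eps; setting eps = 1 gives rough
# estimates for the 3D Ising operator dimensions.

def delta_phi(eps):
    """Dimension of phi: 1 - eps/2 + eps^2/108 + O(eps^3)."""
    return 1.0 - eps / 2.0 + eps ** 2 / 108.0

def delta_phi2(eps):
    """Dimension of phi^2: 2 - 2*eps/3 + (171/1458)*eps^2 + O(eps^3)."""
    return 2.0 - 2.0 * eps / 3.0 + (171.0 / 1458.0) * eps ** 2

estimates = (delta_phi(1.0), delta_phi2(1.0))
```

    At ε = 1 these truncations give roughly 0.509 and 1.451, already close to the precise 3D Ising values of about 0.518 and 1.413, which is why the O(ε³) refinements mentioned in the abstract are of interest.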

  8. Conceptual design of jewellery: a space-based aesthetics approach

    Directory of Open Access Journals (Sweden)

    Tzintzi Vaia

    2017-01-01

    Conceptual design is a field that offers various aesthetic approaches to the generation of nature-based product design concepts. Essentially, Conceptual Product Design (CPD) uses similarities based on geometrical forms and functionalities. Furthermore, the CAD-based freehand sketch is a primary conceptual tool in the early stages of the design process. The proposed Conceptual Product Design concept deals with jewellery inspired by space. Specifically, a number of galaxy features, such as galaxy shapes, wormholes and graphical representations of planetary magnetic fields, are used as inspiration. Such space-based design ideas at a conceptual level can lead to further opportunities for research and economic success for the jewellery industry. A number of illustrative case studies are presented, from which new opportunities for economic success can be derived.

  9. Approaching control for tethered space robot based on disturbance observer using super twisting law

    Science.gov (United States)

    Hu, Yongxin; Huang, Panfeng; Meng, Zhongjie; Wang, Dongke; Lu, Yingbo

    2018-05-01

    Approaching control is a key mission for the tethered space robot (TSR) in performing the task of removing space debris. However, uncertainties of the TSR, such as changes in model parameters, have an important effect on the approaching mission. Considering the space tether and the attitude of the gripper, the dynamic model of the TSR is derived using the Lagrange method. A disturbance observer is then designed to estimate the uncertainty based on the super-twisting (STW) control method. Using the disturbance observer, a controller is designed, and its performance is compared with that of a dynamic inverse controller; the proposed controller performs better. Numerical simulation validates the feasibility of the proposed controller for position and attitude tracking of the TSR.
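    A super-twisting disturbance observer of the kind described can be sketched for a first-order plant x' = u + d(t). The gains and the constant disturbance below are illustrative choices, not the paper's tuned values.

```python
import math

# Sketch of a super-twisting (STW) disturbance observer for x' = u + d(t).
# x_hat tracks the state; the integral term z converges to the unknown
# disturbance d in finite time.  Gains k1, k2 and d are illustrative.

def estimate_disturbance(d=0.7, k1=3.0, k2=2.0, dt=1e-4, t_end=10.0):
    x, x_hat, z, u = 0.0, 0.3, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = x - x_hat
        s = math.copysign(1.0, e) if e != 0.0 else 0.0
        x_hat += dt * (u + k1 * math.sqrt(abs(e)) * s + z)
        z += dt * k2 * s              # integral action accumulates the estimate
        x += dt * (u + d)             # true plant driven by the unknown d
    return z                          # disturbance estimate

d_hat = estimate_disturbance()
```

    Once the estimation error reaches the sliding surface, z tracks d up to a small chatter set by the discretization step, which is the property the paper exploits to compensate model uncertainty in the controller.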

  10. Collaborative Approaches in Developing Environmental and Safety Management Systems for Commercial Space Transportation

    Science.gov (United States)

    Zee, Stacey; Murray, D.

    2009-01-01

    The Federal Aviation Administration (FAA), Office of Commercial Space Transportation (AST) licenses and permits U.S. commercial space launch and reentry activities, and licenses the operation of non-federal launch and reentry sites. AST's mission is to ensure the protection of the public, property, and the national security and foreign policy interests of the United States during commercial space transportation activities and to encourage, facilitate, and promote U.S. commercial space transportation. AST faces the unique challenge of ensuring the protection of public health and safety while facilitating and promoting U.S. commercial space transportation. AST has developed an Environmental Management System (EMS) and a Safety Management System (SMS) to help meet its mission. Although the EMS and SMS were developed independently, the systems share similar elements. Both systems follow a Plan-Do-Check-Act model in identifying potential environmental aspects or public safety hazards, assessing significance in terms of severity and likelihood of occurrence, developing approaches to reduce risk, and verifying that the risk is reduced. This paper will describe the similarities between AST's EMS and SMS elements and how AST is building a collaborative approach in environmental and safety management to reduce impacts to the environment and risks to the public.

  11. Hybrid x-space: a new approach for MPI reconstruction.

    Science.gov (United States)

    Tateo, A; Iurino, A; Settanni, G; Andrisani, A; Stifanelli, P F; Larizza, P; Mazzia, F; Mininni, R M; Tangaro, S; Bellotti, R

    2016-06-07

    Magnetic particle imaging (MPI) is a new medical imaging technique capable of recovering the distribution of superparamagnetic particles from their measured induced signals. In the literature there are two main MPI reconstruction techniques: measurement-based (MB) and x-space (XS). The MB method is expensive because it requires a long calibration procedure as well as a reconstruction phase that can be numerically costly. On the other hand, the XS method is simpler than MB, but exact knowledge of the field-free-point (FFP) motion is essential for its implementation. Our simulation work focuses on the implementation of a new approach for MPI reconstruction, called hybrid x-space (HXS), which combines the previous methods. Specifically, our approach is based on XS reconstruction, which requires the FFP position and velocity at each time instant. The difference with respect to the original XS formulation is how the FFP velocity is computed: we estimate it from the experimental measurements of the calibration scans typical of the MB approach. Moreover, a compressive sensing technique is applied in order to reduce the calibration time by using fewer sampling positions. Simulations highlight that the HXS and XS methods give similar results. Furthermore, an appropriate use of compressive sensing is crucial for obtaining a good balance between time reduction and reconstructed image quality. Our proposal is suitable for open-geometry configurations of human-size devices, where incidental factors could make the currents, the fields and the FFP trajectory irregular.
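
    The distinguishing step of the HXS approach is how the FFP velocity is obtained from sampled positions. As a minimal sketch (not the authors' implementation; the trajectory, sampling rate and names below are illustrative), the velocity can be estimated from discretely sampled FFP positions by central finite differences:

```python
import numpy as np

def ffp_velocity(positions, dt):
    """Estimate field-free-point (FFP) velocity from sampled positions
    by central finite differences (one-sided at the endpoints)."""
    return np.gradient(positions, dt, axis=0)

# Illustrative Lissajous-like FFP trajectory sampled at dt = 1 ms
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
pos = np.stack([np.sin(2 * np.pi * 5 * t),
                np.cos(2 * np.pi * 3 * t)], axis=1)

vel = ffp_velocity(pos, dt)
# The x-component should approximate 10*pi*cos(10*pi*t)
expected = 10 * np.pi * np.cos(2 * np.pi * 5 * t)
print(np.max(np.abs(vel[1:-1, 0] - expected[1:-1])))  # small interior error
```

    In an actual reconstruction the signal samples would then be velocity-normalized and gridded onto these estimated FFP positions.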

  12. Scientific, statistical, practical, and regulatory considerations in design space development.

    Science.gov (United States)

    Debevec, Veronika; Srčič, Stanko; Horvat, Matej

    2018-03-01

    The quality by design (QbD) paradigm guides the pharmaceutical industry towards improved understanding of products and processes, and at the same time facilitates a high degree of manufacturing and regulatory flexibility through the establishment of the design space. This review article presents scientific, statistical, practical, and regulatory considerations in design space development. All key development milestones, starting with planning, selection of factors, experimental execution, data analysis, model development and assessment, verification, and validation, and ending with design space submission, are presented and discussed. The focus is especially on frequently ignored topics, such as management of factors and critical quality attributes (CQAs) that will not be included in the experimental design, evaluation of the risk of failure at design space edges, or the scale-up modeling strategy. Moreover, development of a design space that is independent of manufacturing scale is proposed as the preferred approach.

  13. Approaching a universal scaling relationship between fracture stiffness and fluid flow

    Science.gov (United States)

    Pyrak-Nolte, Laura J.; Nolte, David D.

    2016-02-01

    A goal of subsurface geophysical monitoring is the detection and characterization of fracture alterations that affect the hydraulic integrity of a site. Achievement of this goal requires a link between the mechanical and hydraulic properties of a fracture. Here we present a scaling relationship between fluid flow and fracture-specific stiffness that approaches universality. Fracture-specific stiffness is a mechanical property dependent on fracture geometry that can be monitored remotely using seismic techniques. A Monte Carlo numerical approach demonstrates that a scaling relationship exists between flow and stiffness for fractures with strongly correlated aperture distributions, and continues to hold for fractures deformed by applied stress and by chemical erosion as well. This new scaling relationship provides a foundation for simulating changes in fracture behaviour as a function of stress or depth in the Earth and will aid risk assessment of the hydraulic integrity of subsurface sites.

  14. Scale-by-scale contributions to Lagrangian particle acceleration

    Science.gov (United States)

    Lalescu, Cristian C.; Wilczek, Michael

    2017-11-01

    Fluctuations on a wide range of scales in both space and time are characteristic of turbulence. Lagrangian particles, advected by the flow, probe these fluctuations along their trajectories. In an effort to isolate the influence of the different scales on Lagrangian statistics, we employ direct numerical simulations (DNS) combined with a filtering approach. Specifically, we study the acceleration statistics of tracers advected in filtered fields to characterize the smallest temporal scales of the flow. Emphasis is put on the acceleration variance as a function of filter scale, along with the scaling properties of the relevant terms of the Navier-Stokes equations. We furthermore discuss scaling ranges for higher-order moments of the tracer acceleration, as well as the influence of the choice of filter on the results. Starting from the Lagrangian tracer acceleration as the short time limit of the Lagrangian velocity increment, we also quantify the influence of filtering on Lagrangian intermittency. Our work complements existing experimental results on intermittency and accelerations of finite-sized, neutrally-buoyant particles: for the passive tracers used in our DNS, feedback effects are neglected such that the spatial averaging effect is cleanly isolated.

  15. NASTRAN analysis of the 1/8-scale space shuttle dynamic model

    Science.gov (United States)

    Bernstein, M.; Mason, P. W.; Zalesak, J.; Gregory, D. J.; Levy, A.

    1973-01-01

    The space shuttle configuration has more complex structural dynamic characteristics than previous launch vehicles, primarily because of the high modal density at low frequencies and the high degree of coupling between the lateral and longitudinal motions. An accurate analytical representation of these characteristics is a primary means for treating structural dynamics problems during the design phase of the shuttle program. The 1/8-scale model program was developed to explore the adequacy of available analytical modeling technology and to provide the means for investigating problems which are more readily treated experimentally. The basic objectives of the 1/8-scale model program are: (1) to provide early verification of analytical modeling procedures on a shuttle-like structure, (2) to demonstrate important vehicle dynamic characteristics of a typical shuttle design, (3) to disclose any previously unanticipated structural dynamic characteristics, and (4) to provide for development and demonstration of cost-effective prototype testing procedures.

  16. Dynamic simulation of a pilot scale vacuum gas oil hydrocracking unit by the space-time CE/SE method

    Energy Technology Data Exchange (ETDEWEB)

    Sadighi, S.; Ahmad, A. [Institute of Hydrogen Economy, Universiti Teknologi Malaysia, Johor Bahru (Malaysia); Shirvani, M. [Faculty of Chemical Engineering, University of Science and Technology, Tehran (Iran, Islamic Republic of)

    2012-05-15

    This work introduces a modified space-time conservation element/solution element (CE/SE) method for the simulation of the dynamic behavior of a pilot-scale hydrocracking reactor. With this approach, a four-lump dynamic model including vacuum gas oil (VGO), middle distillate, naphtha and gas is solved. The proposed method is capable of handling the stiffness of the partial differential equations resulting from the hydrocracking reactions. For comparison, the model is also solved by the finite difference method (FDM), and the results from both approaches are compared. Initially, the absolute average deviation of the cold dynamic simulation using the CE/SE approach is 8.98 %, which is better than that obtained using the FDM. The stability analysis then proves that, for achieving an appropriate response from the dynamic model, the Courant number, which is a function of the time step size, mesh size and volume flow rate through the catalytic bed, should be less than 1. Finally, it is found that, following a careful selection of these parameters, the CE/SE solutions to the hydrocracking model can produce higher accuracy than the FDM results. (Copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
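
    The stability criterion quoted above can be checked directly: for an explicit space-time scheme such as CE/SE, the Courant number Co = u·Δt/Δx must stay below 1. A small sketch with illustrative values (not the pilot-plant parameters from the paper):

```python
def courant_number(velocity, dt, dx):
    """CFL (Courant) number for a 1-D advection-dominated bed:
    Co = u * dt / dx. Explicit space-time schemes such as CE/SE
    typically require Co < 1 for stability."""
    return velocity * dt / dx

# Illustrative values (not from the paper): superficial velocity through
# the catalytic bed, time step size and axial mesh size
u = 0.02   # m/s
dx = 0.01  # m
dt = 0.4   # s

co = courant_number(u, dt, dx)
print(co)           # 0.8
assert co < 1.0     # a stable choice of dt for this mesh and flow rate
```

    Halving the mesh size at fixed Δt would double Co to 1.6 and violate the criterion, which is why the three parameters must be selected together.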

  17. Research on Optimal Observation Scale for Damaged Buildings after Earthquake Based on Optimal Feature Space

    Science.gov (United States)

    Chen, J.; Chen, W.; Dou, A.; Li, W.; Sun, Y.

    2018-04-01

    A new information extraction method for damaged buildings, rooted in an optimal feature space, is put forward on the basis of the traditional object-oriented method. In this new method, the ESP (estimate of scale parameter) tool is used to optimize the segmentation of the image. Then the distance matrix and minimum separation distance of all kinds of surface features are calculated through sample selection to find the optimal feature space, which is finally applied to extract damaged buildings from post-earthquake imagery. The overall extraction accuracy reaches 83.1 %, with a kappa coefficient of 0.813. Compared with the traditional object-oriented method, the new method greatly improves extraction accuracy and efficiency, and shows good potential for wider application in damaged-building information extraction. In addition, the new method can be applied to post-earthquake images of damaged buildings at different resolutions in order to seek the optimal observation scale through accuracy evaluation. The results suggest that the optimal observation scale for damaged buildings is between 1 m and 1.2 m, which provides a reference for future information extraction of damaged buildings.

  18. The Faster, Better, Cheaper Approach to Space Missions: An Engineering Management Assessment

    Science.gov (United States)

    Hamaker, Joe

    2000-01-01

    This paper describes, in viewgraph form, the faster, better, cheaper approach to space missions. The topics include: 1) What drives "Faster, Better, Cheaper"? 2) Why Space Programs are Costly; 3) Background; 4) Aerospace Project Management (Old Culture); 5) Aerospace Project Management (New Culture); 6) Scope of Analysis Limited to Engineering Management Culture; 7) Qualitative Analysis; 8) Some Basic Principles of the New Culture; 9) Cause and Effect; 10) "New Ways of Doing Business" Survey Results; 11) Quantitative Analysis; 12) Recent Space System Cost Trends; 13) Spacecraft Dry Weight Trend; 14) Complexity Factor Trends; 15) Cost Normalization; 16) Cost Normalization Algorithm; 17) Unnormalized Cost vs. Normalized Cost; and 18) Concluding Observations.

  19. Wavelet Space-Scale-Decomposition Analysis of QSO's Ly$\alpha$ Absorption Lines: Spectrum of Density Perturbations

    OpenAIRE

    Pando, Jesus; Fang, Li-Zhi

    1995-01-01

    A method for measuring the spectrum of a density field by a discrete wavelet space-scale decomposition (SSD) has been studied. We show how the power spectrum can effectively be described by the father function coefficients (FFC) of the wavelet SSD. We demonstrate that the features of the spectrum, such as the magnitude, the index of a power law, and the typical scales, can be determined with high precision by the FFC reconstructed spectrum. This method does not require the mean density, which...
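
    To illustrate the idea of reading band power off wavelet coefficients, here is a minimal numpy-only sketch using the orthonormal Haar basis. Note the simplifications: the paper works with the father-function coefficients of a discrete wavelet SSD, whereas this sketch uses the Haar basis and a plain per-scale variance of the detail coefficients as a crude band-power estimate.

```python
import numpy as np

def haar_band_power(x):
    """Orthonormal Haar decomposition of a 1-D field; returns the
    variance of the detail coefficients at each scale level as a
    crude band-power estimate (finest scale first)."""
    x = np.asarray(x, dtype=float)
    powers = []
    while x.size > 1:
        approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # father-function side
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # mother-function side
        powers.append(detail.var())
        x = approx                                     # recurse on coarse part
    return powers

rng = np.random.default_rng(0)
white = rng.standard_normal(2 ** 14)
p = haar_band_power(white)
# For white noise the per-coefficient detail variance is flat across scales
print([round(v, 2) for v in p[:5]])
```

    A power-law density field would instead show the detail variance rising or falling systematically with scale, which is how a spectral index can be read off such a decomposition.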

  20. Real-space local polynomial basis for solid-state electronic-structure calculations: A finite-element approach

    International Nuclear Information System (INIS)

    Pask, J.E.; Klein, B.M.; Fong, C.Y.; Sterne, P.A.

    1999-01-01

    We present an approach to solid-state electronic-structure calculations based on the finite-element method. In this method, the basis functions are strictly local, piecewise polynomials. Because the basis is composed of polynomials, the method is completely general and its convergence can be controlled systematically. Because the basis functions are strictly local in real space, the method allows for variable resolution in real space; produces sparse, structured matrices, enabling the effective use of iterative solution methods; and is well suited to parallel implementation. The method thus combines the significant advantages of both real-space-grid and basis-oriented approaches and so promises to be particularly well suited for large, accurate ab initio calculations. We develop the theory of our approach in detail, discuss advantages and disadvantages, and report initial results, including electronic band structures and details of the convergence of the method. copyright 1999 The American Physical Society
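
    The flavour of the finite-element approach can be shown in one dimension: piecewise-linear "hat" basis functions on a uniform mesh produce sparse (tridiagonal) stiffness and mass matrices, and the generalized eigenproblem Kc = EMc converges systematically to the exact spectrum under mesh refinement. This is a textbook particle-in-a-box sketch, not the authors' three-dimensional solid-state code:

```python
import numpy as np
from scipy.linalg import eigh

def fem_box_eigenvalues(n_elems, length=1.0, n_eigs=3):
    """Lowest eigenvalues of -u'' = E u on (0, L) with u(0) = u(L) = 0,
    using piecewise-linear finite elements on a uniform mesh."""
    h = length / n_elems
    n = n_elems - 1                      # interior nodes only
    # Tridiagonal stiffness and (consistent) mass matrices for hat functions
    K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h
    M = (np.diag(4.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) * h / 6.0
    # Generalized symmetric-definite eigenproblem K c = E M c
    vals = eigh(K, M, eigvals_only=True)
    return vals[:n_eigs]

exact = [(k * np.pi) ** 2 for k in (1, 2, 3)]   # pi^2, 4 pi^2, 9 pi^2
approx = fem_box_eigenvalues(200)
print(approx, exact)  # agreement improves systematically with refinement
```

    The strictly local support of the hat functions is what makes K and M sparse, which is the property the abstract highlights for iterative solvers and parallel implementation.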

  1. Heat and mass transfer intensification and shape optimization a multi-scale approach

    CERN Document Server

    2013-01-01

    Is heat and mass transfer intensification a new paradigm of process engineering, or is it just a common and old idea, renamed and given the current taste? Where might intensification occur? How can intensification be achieved? How does the shape optimization of thermal and fluidic devices lead to intensified heat and mass transfer? To answer these questions, Heat & Mass Transfer Intensification and Shape Optimization: A Multi-scale Approach clarifies the definition of intensification by highlighting the potential role of multi-scale structures, the specific interfacial area, the distribution of the driving force, the modes of energy supply and the temporal aspects of processes.   A reflection on the methods of process intensification or heat and mass transfer enhancement in multi-scale structures is provided, including porous media, heat exchangers, fluid distributors, mixers and reactors. A multi-scale approach to achieve intensification and shape optimization is developed and clearly expla...

  2. Shell model in large spaces and statistical spectroscopy

    International Nuclear Information System (INIS)

    Kota, V.K.B.

    1996-01-01

    For many nuclear structure problems of current interest it is essential to deal with the shell model in large spaces. Three different approaches are now in use, two of which are: (i) the conventional shell model diagonalization approach, taking into account new advances in computer technology; (ii) the shell model Monte Carlo method. A brief overview of these two methods is given. Large-space shell model studies raise fundamental questions regarding the information content of the shell model spectrum of complex nuclei. This led to the third approach: statistical spectroscopy methods. The principles of statistical spectroscopy have their basis in nuclear quantum chaos; they are described in some detail and substantiated by large-scale shell model calculations. (author)

  3. Space and place concepts analysis based on semiology approach in residential architecture

    Directory of Open Access Journals (Sweden)

    Mojtaba Parsaee

    2015-12-01

    Space and place are among the fundamental concepts in architecture, widely discussed for their complexity and importance. This research introduces an approach to better cognition of architectural concepts based on the theory and method of semiology in linguistics. Hence, the research first investigates the concepts of space and place and explains their characteristics in architecture. Then, it reviews semiology theory and explores its concepts and ideas. After obtaining the principles and method of semiology, these are redefined in an architectural system based on an adaptive method. Finally, the research offers a conceptual model, called the semiology approach, which treats the architectural system as a system of signs. The approach can be used to decode the content of meanings and forms and to analyse the mechanism of architecture in order to obtain its meanings and concepts. On this basis, the residential architecture of the traditional city of Bushehr, Iran, was analysed as a case study and its concepts were extracted. The results of this research demonstrate the effectiveness of this approach in structure detection and identification of an architectural system. Besides, this approach can be used in processes of sustainable development and can also be a basis for deconstruction of architectural texts. The research methods of this study are qualitative, based on comparative and descriptive analyses.

  4. Scale-model Experiment of Magnetoplasma Sail for Future Deep Space Missions

    International Nuclear Information System (INIS)

    Funaki, Ikkoh; Yamakawa, Hiroshi; Ueno, Kazuma; Kimura, Toshiyuki; Ayabe, Tomohiro; Horisawa, Hideyuki

    2008-01-01

    When a magnetic sail (MagSail) spacecraft is operated in space, the supersonic solar wind plasma flow is blocked by an artificially produced magnetic cavity, accelerating the spacecraft in the direction away from the Sun. To evaluate the momentum transfer process from the solar wind to the coil onboard the MagSail spacecraft, we arranged a laboratory experiment of a MagSail. Based on scaling considerations, a solenoidal coil was immersed in the plasma flow from a magnetoplasmadynamic arcjet in a quasi-steady mode of about 1 ms duration. In this setup, it is confirmed that a magnetic cavity, similar to that of the geomagnetic field, was formed around the coil to produce thrust in the ion-Larmor-scale interaction. Also, the controllability of the magnetic cavity size by a plasma jet from inside the coil of the MagSail is demonstrated, although the thrust characteristics of the MagSail with a plasma jet, the so-called plasma sail, are to be clarified in our next step

  5. Gowdy phenomenology in scale-invariant variables

    International Nuclear Information System (INIS)

    Andersson, Lars; Elst, Henk van; Uggla, Claes

    2004-01-01

    The dynamics of Gowdy vacuum spacetimes is considered in terms of Hubble-normalized scale-invariant variables, using the timelike area temporal gauge. The resulting state space formulation provides for a simple mechanism for the formation of 'false' and 'true spikes' in the approach to the singularity, and a geometrical formulation for the local attractor

  6. Contaminant ingress into multizone buildings: An analytical state-space approach

    KAUST Repository

    Parker, Simon

    2013-08-13

    The ingress of exterior contaminants into buildings is often assessed by treating the building interior as a single well-mixed space. Multizone modelling provides an alternative way of representing buildings that can estimate concentration time series in different internal locations. A state-space approach is adopted to represent the concentration dynamics within multizone buildings. Analysis based on this approach is used to demonstrate that the exposure in every interior location is limited to the exterior exposure in the absence of removal mechanisms. Estimates are also developed for the short term maximum concentration and exposure in a multizone building in response to a step-change in concentration. These have considerable potential for practical use. The analytical development is demonstrated using a simple two-zone building with an inner zone and a range of existing multizone models of residential buildings. Quantitative measures are provided of the standard deviation of concentration and exposure within a range of residential multizone buildings. Ratios of the maximum short term concentrations and exposures to single zone building estimates are also provided for the same buildings. © 2013 Tsinghua University Press and Springer-Verlag Berlin Heidelberg.
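
    A minimal sketch of the kind of model described, assuming illustrative air-exchange rates and equal zone volumes (none of these numbers come from the paper): a two-zone building with an inner zone, written as ẋ = Ax + Bu and stepped exactly with a zero-order-hold discretisation. The response to a step change in exterior concentration stays below the exterior level, consistent with the exposure bound discussed in the abstract:

```python
import numpy as np
from scipy.linalg import expm

# Two-zone building (illustrative rates, per hour): zone 1 exchanges air
# with the exterior; the inner zone 2 exchanges air only with zone 1.
a, b = 2.0, 0.5
A = np.array([[-(a + b), b],
              [b, -b]])
B = np.array([a, 0.0])

dt = 0.01                            # hours
Ad = expm(A * dt)                    # exact step for piecewise-constant input
Bd = np.linalg.solve(A, (Ad - np.eye(2)) @ B)

c_ext = 1.0                          # step change in exterior concentration
x = np.zeros(2)
traj = [x.copy()]
for _ in range(int(8 / dt)):         # 8 hours of simulated time
    x = Ad @ x + Bd * c_ext
    traj.append(x.copy())
traj = np.asarray(traj)

# Interior concentrations approach the exterior level from below, and the
# inner zone lags the outer zone.
print(traj[-1])
```

    Adding a removal mechanism (filtration or deposition) would make A "more stable" and drive the steady-state interior concentrations strictly below the exterior value.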

  7. Multiscale registration of medical images based on edge preserving scale space with application in image-guided radiation therapy

    Science.gov (United States)

    Li, Dengwang; Li, Hongsheng; Wan, Honglin; Chen, Jinhu; Gong, Guanzhong; Wang, Hongjun; Wang, Liming; Yin, Yong

    2012-08-01

    Mutual information (MI) is a well-accepted similarity measure for image registration in medical systems. However, MI-based registration faces the challenges of high computational complexity and a high likelihood of being trapped into local optima due to an absence of spatial information. In order to solve these problems, multi-scale frameworks can be used to accelerate registration and improve robustness. Traditional Gaussian pyramid representation is one such technique but it suffers from contour diffusion at coarse levels which may lead to unsatisfactory registration results. In this work, a new multi-scale registration framework called edge preserving multiscale registration (EPMR) was proposed based upon an edge preserving total variation L1 norm (TV-L1) scale space representation. TV-L1 scale space is constructed by selecting edges and contours of images according to their size rather than the intensity values of the image features. This ensures more meaningful spatial information with an EPMR framework for MI-based registration. Furthermore, we design an optimal estimation of the TV-L1 parameter in the EPMR framework by training and minimizing the transformation offset between the registered pairs for automated registration in medical systems. We validated our EPMR method on both simulated mono- and multi-modal medical datasets with ground truth and clinical studies from a combined positron emission tomography/computed tomography (PET/CT) scanner. We compared our registration framework with other traditional registration approaches. Our experimental results demonstrated that our method outperformed other methods in terms of the accuracy and robustness for medical images. EPMR can always achieve a small offset value, which is closer to the ground truth both for mono-modality and multi-modality, and the speed can be increased 5-8% for mono-modality and 10-14% for multi-modality registration under the same condition. 
Furthermore, clinical application by adaptive
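
    The mutual-information similarity measure at the core of this registration framework can be estimated from a joint intensity histogram. A minimal numpy sketch (the bin count and test images are illustrative; this is the generic MI estimator, not the EPMR pipeline):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two equally sized images, estimated
    from their joint intensity histogram:
    MI = sum p(a,b) * log( p(a,b) / (p(a) p(b)) )."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pab = joint / joint.sum()                 # joint probability
    pa = pab.sum(axis=1, keepdims=True)       # marginal of image A
    pb = pab.sum(axis=0, keepdims=True)       # marginal of image B
    mask = pab > 0                            # avoid log(0) terms
    return float(np.sum(pab[mask] * np.log(pab[mask] / (pa @ pb)[mask])))

rng = np.random.default_rng(1)
img = rng.random((64, 64))
noise = rng.random((64, 64))
# An image is maximally informative about itself; unrelated noise much less so
assert mutual_information(img, img) > mutual_information(img, noise)
```

    A registration loop maximizes this quantity over transformation parameters; the multi-scale frameworks discussed above exist precisely because this objective is expensive and has many local optima at full resolution.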

  8. Multiscale registration of medical images based on edge preserving scale space with application in image-guided radiation therapy

    International Nuclear Information System (INIS)

    Li Dengwang; Wan Honglin; Li Hongsheng; Chen Jinhu; Gong Guanzhong; Yin Yong; Wang Hongjun; Wang Liming

    2012-01-01

    Mutual information (MI) is a well-accepted similarity measure for image registration in medical systems. However, MI-based registration faces the challenges of high computational complexity and a high likelihood of being trapped into local optima due to an absence of spatial information. In order to solve these problems, multi-scale frameworks can be used to accelerate registration and improve robustness. Traditional Gaussian pyramid representation is one such technique but it suffers from contour diffusion at coarse levels which may lead to unsatisfactory registration results. In this work, a new multi-scale registration framework called edge preserving multiscale registration (EPMR) was proposed based upon an edge preserving total variation L1 norm (TV-L1) scale space representation. TV-L1 scale space is constructed by selecting edges and contours of images according to their size rather than the intensity values of the image features. This ensures more meaningful spatial information with an EPMR framework for MI-based registration. Furthermore, we design an optimal estimation of the TV-L1 parameter in the EPMR framework by training and minimizing the transformation offset between the registered pairs for automated registration in medical systems. We validated our EPMR method on both simulated mono- and multi-modal medical datasets with ground truth and clinical studies from a combined positron emission tomography/computed tomography (PET/CT) scanner. We compared our registration framework with other traditional registration approaches. Our experimental results demonstrated that our method outperformed other methods in terms of the accuracy and robustness for medical images. EPMR can always achieve a small offset value, which is closer to the ground truth both for mono-modality and multi-modality, and the speed can be increased 5–8% for mono-modality and 10–14% for multi-modality registration under the same condition. Furthermore, clinical application by

  9. On Yang's Noncommutative Space Time Algebra, Holography, Area Quantization and C-space Relativity

    CERN Document Server

    Castro, C

    2004-01-01

    An isomorphism between Yang's noncommutative space-time algebra (involving two length scales) and the holographic-area-coordinates algebra of C-spaces (Clifford spaces) is constructed via an AdS_5 space-time, which is instrumental in explaining the origins of an extra (infrared) scale R in conjunction with the (ultraviolet) Planck scale lambda characteristic of C-spaces. Yang's space-time algebra allowed Tanaka to explain the origins behind the discrete nature of the spectrum for the spatial coordinates and spatial momenta, which yields a minimum length scale lambda (ultraviolet cutoff) and a minimum momentum p = (\hbar / R) (maximal length R, infrared cutoff). The double-scaling limit of Yang's algebra, lambda -> 0 and R -> infinity, in conjunction with the large-n limit, leads naturally to the area quantization condition lambda R = L^2 = n lambda^2 (in Planck area units), given in terms of the discrete angular-momentum eigenvalues n. The generalized Weyl-Heisenberg algebra in C-spaces is ...

  10. Infinite-mode squeezed coherent states and non-equilibrium statistical mechanics (phase-space-picture approach)

    International Nuclear Information System (INIS)

    Yeh, L.

    1992-01-01

    The phase-space-picture approach to quantum non-equilibrium statistical mechanics via the characteristic function of infinite-mode squeezed coherent states is introduced. We use quantum Brownian motion as an example to show how this approach provides an interesting geometrical interpretation of quantum non-equilibrium phenomena

  11. State space model extraction of thermohydraulic systems – Part I: A linear graph approach

    International Nuclear Information System (INIS)

    Uren, K.R.; Schoor, G. van

    2013-01-01

    Thermohydraulic simulation codes are increasingly making use of graphical design interfaces. The user can quickly and easily design a thermohydraulic system by placing symbols on the screen resembling system components. These components can then be connected to form a system representation. Such system models may then be used to obtain detailed simulations of the physical system. Usually these simulation models are too complex and not ideal for control system design. Therefore, a need exists for automated techniques to extract lumped-parameter models useful for control system design. The goal of this first paper, in a two-part series, is to propose a method that utilises a graphical representation of a thermohydraulic system, and a lumped-parameter modelling approach, to extract state space models. In this methodology each physical domain of the thermohydraulic system is represented by a linear graph. These linear graphs capture the interaction between all components within and across energy domains – hydraulic, thermal and mechanical. They are analysed using a graph-theoretic approach to derive reduced-order state space models. These models capture the dominant dynamics of the thermohydraulic system and are ideal for control system design purposes. The proposed state space model extraction method is demonstrated by considering a U-tube system. A non-linear state space model is extracted representing both the hydraulic and thermal domain dynamics of the system. The simulated state space model is compared with a Flownex ® model of the U-tube. Flownex ® is a validated systems thermal-fluid simulation software package. - Highlights: • A state space model extraction methodology based on graph-theoretic concepts. • An energy-based approach to consider multi-domain systems in a common framework. • Allows extraction of transparent (white-box) state space models automatically. • Reduced-order models containing only independent state

  12. Overview of Small and Large-Scale Space Solar Power Concepts

    Science.gov (United States)

    Potter, Seth; Henley, Mark; Howell, Joe; Carrington, Connie; Fikes, John

    2006-01-01

    poles to search for water ice and other frozen volatiles. Near such craters are mountain peaks and highlands that are in near permanent sunlight. Power can be beamed from a collector on a sunlit mountain or crater rim to a rover inside a crater. Near-term applications of space solar power technology can therefore pave the way toward large-scale commercial power from space.

  13. A Systems Approach to Developing an Affordable Space Ground Transportation Architecture using a Commonality Approach

    Science.gov (United States)

    Garcia, Jerry L.; McCleskey, Carey M.; Bollo, Timothy R.; Rhodes, Russel E.; Robinson, John W.

    2012-01-01

    This paper presents a structured approach for achieving a compatible Ground System (GS) and Flight System (FS) architecture that is affordable, productive and sustainable. This paper is an extension of the paper titled "Approach to an Affordable and Productive Space Transportation System" by McCleskey et al. This paper integrates systems engineering concepts and operationally efficient propulsion system concepts into a structured framework for achieving GS and FS compatibility in the mid-term and long-term time frames. It also presents a functional and quantitative relationship for assessing system compatibility called the Architecture Complexity Index (ACI). This paper: (1) focuses on systems engineering fundamentals as it applies to improving GS and FS compatibility; (2) establishes mid-term and long-term spaceport goals; (3) presents an overview of transitioning a spaceport to an airport model; (4) establishes a framework for defining a ground system architecture; (5) presents the ACI concept; (6) demonstrates the approach by presenting a comparison of different GS architectures; and (7) presents a discussion on the benefits of using this approach with a focus on commonality.

  14. Coupled radiative gasdynamic interaction and non-equilibrium dissociation for large-scale returned space vehicles

    International Nuclear Information System (INIS)

    Surzhikov, S.

    2012-01-01

    Graphical abstract: It has been shown that different coupled vibrational dissociation models, applied to coupled radiative gasdynamic problems for large space vehicles, exert a noticeable effect on the radiative heating of the vehicle surface at orbital entry at high altitudes (h ⩾ 70 km). This influence decreases with decreasing space vehicle size. The figure shows translational (solid lines) and vibrational (dashed lines) temperatures in the shock layer with (circle markers) and without (triangle markers) radiative-gasdynamic interaction for one trajectory point of an entering space vehicle. Highlights: ► Nonequilibrium dissociation processes exert an effect on the radiative heating of space vehicles (SV). ► The radiative gasdynamic interaction enhances this influence. ► This influence increases with increasing SV size. - Abstract: The radiative aerothermodynamics of large-scale space vehicles is considered for Earth orbital entry at zero angle of attack. A brief description of the radiative gasdynamic model of physically and chemically nonequilibrium, viscous, heat-conductive and radiative gas of complex chemical composition is presented. Radiation gasdynamic (RadGD) interaction in the high-temperature shock layer is studied by means of numerical experiment. It is shown that radiation-gasdynamic coupling for orbital space vehicles of large size is important for the high-altitude part of the entry trajectory. It is demonstrated that the use of different models of coupled vibrational dissociation (CVD) in conditions of RadGD interaction gives rise to temperature variation in the shock layer and, as a result, leads to significant variation of the radiative heating of the space vehicle.

  15. Time-dependent approach to collisional ionization using exterior complex scaling

    International Nuclear Information System (INIS)

    McCurdy, C. William; Horner, Daniel A.; Rescigno, Thomas N.

    2002-01-01

    We present a time-dependent formulation of the exterior complex scaling method that has previously been used to treat electron-impact ionization of the hydrogen atom accurately at low energies. The time-dependent approach solves a driven Schrödinger equation, and scales more favorably with the number of electrons than the original formulation. The method is demonstrated in calculations for breakup processes in two dimensions (2D) and three dimensions for systems involving short-range potentials, and in 2D for electron-impact ionization in the Temkin-Poet model of electron-hydrogen atom collisions.
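The absorbing effect of exterior complex scaling can be illustrated with a minimal 1D sketch, with grid, rotation angle, and wave packet all invented for illustration (none taken from the record). Beyond a radius R0 the coordinate contour is rotated, x → R0 + (x − R0)e^{iθ}, which turns outgoing waves into exponentially decaying ones, so a Crank-Nicolson propagation needs no separate absorbing boundary; the full driven-equation machinery of the paper is not reproduced here.

```python
import numpy as np

# Illustrative parameters (not from the paper): grid step, ECS radius, angle.
h, R0, theta = 0.2, 20.0, 0.4
x = np.arange(-20.0, 40.0 + h / 2, h)
N = x.size
# Complex step size: the contour is rotated by e^{i theta} for x > R0.
dz = np.where(x > R0, h * np.exp(1j * theta), h)

# Kinetic energy -1/2 d^2/dx^2 on the piecewise-complex contour (3-point stencil).
H = np.zeros((N, N), dtype=complex)
for j in range(1, N - 1):
    hm, hp = dz[j - 1], dz[j]
    H[j, j - 1] = -1.0 / (hm * (hm + hp))
    H[j, j + 1] = -1.0 / (hp * (hm + hp))
    H[j, j] = 1.0 / (hm * hp)

# Free Gaussian wave packet moving toward the rotated region (momentum k = 2).
psi = np.exp(-x**2 / 8.0 + 2.0j * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * h)
inner = x < R0
norm0 = np.sum(np.abs(psi[inner]) ** 2) * h

# Crank-Nicolson propagation; the propagator is built once since H is static.
dt, steps = 0.05, 400
A = np.eye(N) + 0.5j * dt * H
B = np.eye(N) - 0.5j * dt * H
P = np.linalg.solve(A, B)          # P = A^{-1} B
for _ in range(steps):
    psi = P @ psi

# Probability remaining in the unscaled region: the packet has been absorbed.
norm_inner = np.sum(np.abs(psi[inner]) ** 2) * h
```

After propagating well past the transit time, almost all of the packet has been swallowed by the rotated contour instead of reflecting off the grid edge.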

  16. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.1)

    Energy Technology Data Exchange (ETDEWEB)

    Hukerikar, Saurabh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Engelmann, Christian [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-12-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Therefore, the resilience challenge for extreme-scale HPC systems requires the management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. Also, due to practical limits on power consumption in HPC systems, future systems are likely to embrace innovative architectures, increasing the levels of hardware and software complexity. As a result, techniques that seek to improve resilience must navigate the complex trade-off space between resilience and its overheads in power consumption and performance. While the HPC community has developed various resilience solutions, both application-level techniques and system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems, considering impact scope, handling coverage, and performance and power efficiency across the system stack. Additionally, few of the current approaches are portable to the newer architectures and software environments that will be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience using the concept of resilience-based design patterns. A design pattern is a general, repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and the solutions used to deal with faults, errors, and failures in HPC systems. Each established solution is described in the form of a pattern that …
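As a deliberately tiny illustration of one pattern family commonly catalogued in such efforts, the sketch below implements checkpoint-and-rollback recovery in Python. The fault injection, checkpoint interval, and "work" are invented for illustration and are not taken from the ORNL pattern catalog.

```python
import random

def run_with_checkpoints(n_steps, fault_prob, rng, interval=10):
    """Checkpoint/rollback pattern: periodically save state; on a detected
    fault, restore the last checkpoint and recompute from there."""
    ckpt_step, ckpt_state = 0, 0          # last consistent checkpoint
    step, state = 0, 0
    recoveries = 0
    while step < n_steps:
        if rng.random() < fault_prob:     # injected transient fault
            step, state = ckpt_step, ckpt_state
            recoveries += 1
            continue
        state += step                     # one unit of "work"
        step += 1
        if step % interval == 0:          # containment boundary: checkpoint
            ckpt_step, ckpt_state = step, state
    return state, recoveries

state, recoveries = run_with_checkpoints(100, 0.05, random.Random(1))
```

Despite the injected faults, the final state equals the fault-free result, which is exactly the correctness guarantee a rollback-recovery pattern is meant to provide.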

  17. Astronaut Ross Approaches Assembly Concept for Construction of Erectable Space Structure (ACCESS)

    Science.gov (United States)

    1999-01-01

    The crew assigned to the STS-61B mission included Bryan D. O'Conner, pilot; Brewster H. Shaw, commander; Charles D. Walker, payload specialist; mission specialists Jerry L. Ross, Mary L. Cleave, and Sherwood C. Spring; and Rodolpho Neri Vela, payload specialist. Launched aboard the Space Shuttle Atlantis November 28, 1985 at 7:29:00 pm (EST), the STS-61B mission's primary payload included three communications satellites: MORELOS-B (Mexico); AUSSAT-2 (Australia); and SATCOM KU-2 (RCA Americom). Two experiments were conducted to test assembling erectable structures in space: EASE (Experimental Assembly of Structures in Extravehicular Activity), and ACCESS (Assembly Concept for Construction of Erectable Space Structure). In a joint venture between NASA/Langley Research Center in Hampton, Virginia, and the Marshall Space Flight Center (MSFC), EASE and ACCESS were developed and demonstrated at MSFC's Neutral Buoyancy Simulator (NBS). In this STS-61B onboard photo, astronaut Ross, perched on the Manipulator Foot Restraint (MFR) approaches the erected ACCESS. The primary objective of these experiments was to test the structural assembly concepts for suitability as the framework for larger space structures and to identify ways to improve the productivity of space construction.

  18. A potential theory approach to an algorithm of conceptual space partitioning

    Directory of Open Access Journals (Sweden)

    Roman Urban

    2017-12-01

    Full Text Available This paper proposes a new classification algorithm for the partitioning of a conceptual space. All the algorithms used until now have mostly been based on the theory of Voronoi diagrams. Here, an approach based on potential theory is proposed, with the criteria for measuring similarities between objects in the conceptual space based on the Newtonian potential function. The notion of a fuzzy prototype, which generalizes the previous definition of a prototype, is introduced. Furthermore, the necessary conditions that a natural concept must meet are discussed. Instead of convexity, as proposed by Gärdenfors, the notion of geodesically convex sets is used: if a concept corresponds to a set which is geodesically convex, it is a natural concept. This definition applies, for example, if the conceptual space is a Euclidean space. As a by-product of the construction of the algorithm, an extension of the conceptual space to d-dimensional Riemannian manifolds is obtained.
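A minimal sketch of the idea, with invented prototypes and a plain 1/r kernel standing in for the Newtonian potential, shows how a potential-based rule can disagree with the classical nearest-prototype (Voronoi) rule when one concept's fuzzy prototype carries more mass:

```python
import math

def potential(x, prototype):
    """Newtonian-type potential exerted on x by a fuzzy prototype,
    represented here as a set of weighted points (an illustrative stand-in)."""
    return sum(m / (math.dist(x, p) + 1e-9) for p, m in prototype)

def classify_potential(x, concepts):
    """Assign x to the concept whose prototype exerts the largest potential."""
    return max(concepts, key=lambda c: potential(x, concepts[c]))

def classify_voronoi(x, concepts):
    """Classical rule: the nearest single prototype point wins."""
    return min(concepts,
               key=lambda c: min(math.dist(x, p) for p, _ in concepts[c]))

concepts = {
    "A": [((0.0, 0.0), 1.0)],
    "B": [((4.0, 0.0), 1.0), ((4.5, 0.5), 1.0), ((4.5, -0.5), 1.0)],
}
x = (1.9, 0.0)  # nearer to A's point, but B's prototype carries more mass
```

The Voronoi rule assigns x to "A" (its point is closest), while the combined pull of B's three weighted points wins under the potential rule.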

  19. Multi-scale method for the resolution of the neutronic kinetics equations

    International Nuclear Information System (INIS)

    Chauvet, St.

    2008-10-01

    In this PhD thesis, in order to improve the time/precision ratio of numerical simulation calculations, we investigate multi-scale techniques for solving the reactor kinetics equations. We focus on the mixed dual diffusion approximation and the quasi-static methods. We introduce a space dependency for the amplitude function, which depends only on the time variable in the standard quasi-static context. With this new factorization, we develop two mixed dual problems which can be solved with CEA's solver MINOS. An algorithm is implemented that solves these problems defined on different scales (for time and space). We name this approach the Local Quasi-Static method. We present this new multi-scale approach and its implementation. The details of the amplitude and shape treatments are discussed and justified. Results and performances, compared to MINOS, are studied; they illustrate the improvement in the time/precision ratio for kinetics calculations. Furthermore, we open some new possibilities to parallelize computations with MINOS. For the future, we also suggest improvements based on adaptive scales. (author)
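The two-time-scale factorization behind quasi-static methods can be sketched on a toy one-group slab reactor (invented parameters; this is not MINOS or the thesis's mixed dual formulation). The flux is factorized as φ(x,t) = A(t)·s(x,t); the amplitude A is advanced on a fine time scale from a reactivity-like quotient, while the shape s is updated only on a coarse scale, and the result is compared with an exact reference:

```python
import numpy as np

# Toy 1-group slab reactor (invented parameters): d(phi)/dt = M phi,
# with M = v (D d2/dx2 + (nu*Sigma_f - Sigma_a)) and Dirichlet boundaries.
N, D, c, v = 20, 1.0, 10.0, 1.0
hg = 1.0 / (N + 1)
L = (np.diag(np.full(N - 1, 1.0), -1) - 2.0 * np.eye(N)
     + np.diag(np.full(N - 1, 1.0), 1)) / hg**2
M = v * (D * L + c * np.eye(N))
xg = np.linspace(hg, 1.0 - hg, N)
phi0 = np.sin(np.pi * xg) + 0.3 * np.sin(2 * np.pi * xg)

# Reference: exact evolution via eigendecomposition (M is symmetric).
T = 1.0
w_, V = np.linalg.eigh(M)
phi_ref = V @ (np.exp(w_ * T) * (V.T @ phi0))

# Quasi-static factorization phi = A(t) * s(x,t): amplitude on a fine time
# scale, shape on a coarse one (the two-scale idea of the thesis).
weight = np.ones(N)
A_amp = weight @ phi0
s = phi0 / A_amp
dT, n_coarse, n_fine = 0.1, 10, 100
dt = dT / n_fine
for _ in range(n_coarse):
    rho = (weight @ (M @ s)) / (weight @ s)        # reactivity-like quotient
    for _ in range(n_fine):                        # fine-scale amplitude steps
        A_amp *= np.exp(rho * dt)                  # (rho constant here: static M)
    s_new = np.linalg.solve(np.eye(N) - dT * M, s) # coarse implicit shape update
    s = s_new * (weight @ s) / (weight @ s_new)    # keep the shape normalized

phi_qs = A_amp * s
rel_err = np.linalg.norm(phi_qs - phi_ref) / np.linalg.norm(phi_ref)
```

With a static operator the amplitude equation is trivial, but the sketch shows the division of labor: cheap fine-scale amplitude steps and infrequent expensive shape solves, reproducing the full solution closely.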

  20. Practice-oriented optical thin film growth simulation via multiple scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Turowski, Marcus, E-mail: m.turowski@lzh.de [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); Jupé, Marco [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); QUEST: Centre of Quantum Engineering and Space-Time Research, Leibniz Universität Hannover (Germany); Melzig, Thomas [Fraunhofer Institute for Surface Engineering and Thin Films IST, Bienroder Weg 54e, Braunschweig 30108 (Germany); Moskovkin, Pavel [Research Centre for Physics of Matter and Radiation (PMR-LARN), University of Namur (FUNDP), 61 rue de Bruxelles, Namur 5000 (Belgium); Daniel, Alain [Centre for Research in Metallurgy, CRM, 21 Avenue du bois Saint Jean, Liège 4000 (Belgium); Pflug, Andreas [Fraunhofer Institute for Surface Engineering and Thin Films IST, Bienroder Weg 54e, Braunschweig 30108 (Germany); Lucas, Stéphane [Research Centre for Physics of Matter and Radiation (PMR-LARN), University of Namur (FUNDP), 61 rue de Bruxelles, Namur 5000 (Belgium); Ristau, Detlev [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); QUEST: Centre of Quantum Engineering and Space-Time Research, Leibniz Universität Hannover (Germany)

    2015-10-01

    Simulation of the coating process is a very promising approach for the understanding of thin film formation. Nevertheless, this complex matter cannot be covered by a single simulation technique. To consider all mechanisms and processes influencing the optical properties of the growing thin films, various common theoretical methods have been combined to a multi-scale model approach. The simulation techniques have been selected in order to describe all processes in the coating chamber, especially the various mechanisms of thin film growth, and to enable the analysis of the resulting structural as well as optical and electronic layer properties. All methods are merged with adapted communication interfaces to achieve optimum compatibility of the different approaches and to generate physically meaningful results. The present contribution offers an approach for the full simulation of an Ion Beam Sputtering (IBS) coating process combining direct simulation Monte Carlo, classical molecular dynamics, kinetic Monte Carlo, and density functional theory. The simulation is performed exemplary for an existing IBS-coating plant to achieve a validation of the developed multi-scale approach. Finally, the modeled results are compared to experimental data. - Highlights: • A model approach for simulating an Ion Beam Sputtering (IBS) process is presented. • In order to combine the different techniques, optimized interfaces are developed. • The transport of atomic species in the coating chamber is calculated. • We modeled structural and optical film properties based on simulated IBS parameter. • The modeled and the experimental refractive index data fit very well.
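The full DSMC/molecular-dynamics/kMC/DFT chain is obviously beyond a snippet, but the kinetic-Monte-Carlo layer of such a model can be caricatured with a classic 1D toy: random deposition where each arriving atom relaxes to the lowest neighboring column, a crude stand-in for surface-diffusion moves. Everything here (lattice size, flux, relaxation rule) is invented for illustration:

```python
import random

def grow(n_sites, n_atoms, rng, relax=True):
    """Toy 1D solid-on-solid growth: each atom lands on a random column and,
    if relax=True, settles into the lowest of {left, landing, right} columns
    (a crude stand-in for surface diffusion in a kMC growth model)."""
    h = [0] * n_sites
    for _ in range(n_atoms):
        i = rng.randrange(n_sites)
        if relax:
            candidates = [(i - 1) % n_sites, i, (i + 1) % n_sites]
            # Prefer the lowest column; on ties, stay at the landing site.
            i = min(candidates, key=lambda j: (h[j], j != i))
        h[i] += 1
    return h

def roughness(h):
    """RMS width of the growing interface."""
    mean = sum(h) / len(h)
    return (sum((v - mean) ** 2 for v in h) / len(h)) ** 0.5

h_relaxed = grow(100, 10_000, random.Random(42), relax=True)
h_random = grow(100, 10_000, random.Random(42), relax=False)
```

The relaxation rule drastically smooths the film compared with pure random deposition at the same coverage, the kind of structural effect the multi-scale model resolves and then feeds into the optical-property calculation.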

  1. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2014-03-04

    Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in the 1990s a system with a few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to the optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems. The idea is to reduce the communication cost by introducing hierarchy, and hence more parallelism, into the communication scheme. We apply this approach to SUMMA, the state-of-the-art parallel algorithm for matrix-matrix multiplication, and demonstrate both theoretically and experimentally that the modified Hierarchical SUMMA significantly improves the communication cost and the overall performance on large-scale platforms.
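The gain from hierarchy can be illustrated with a toy communication-cost model (a linear, non-pipelined broadcast with per-message cost α + βm; the model and constants are illustrative, not the paper's analysis). Broadcasting to q − 1 peers sequentially costs (q − 1)(α + βm), while a two-level scheme with G groups pays (G − 1) messages among group leaders plus (q/G − 1) inside each group:

```python
def flat_cost(q, alpha, beta, m):
    """Linear broadcast among q nodes: q - 1 sequential messages of size m."""
    return (q - 1) * (alpha + beta * m)

def two_level_cost(q, G, alpha, beta, m):
    """Broadcast to G group leaders, then within each group of size q/G."""
    return (G - 1 + q // G - 1) * (alpha + beta * m)

def best_group_size(q, alpha, beta, m):
    """Pick the group count G (a divisor of q) minimizing the two-level cost."""
    divisors = [G for G in range(1, q + 1) if q % G == 0]
    return min(divisors, key=lambda G: two_level_cost(q, G, alpha, beta, m))

# Illustrative numbers: 1024 nodes, latency 10 us, bandwidth term beta*m.
q, alpha, beta, m = 1024, 1e-5, 1e-9, 10_000
G_opt = best_group_size(q, alpha, beta, m)
```

In this model the optimum sits at G = √q (here 32), cutting the sequential message count from q − 1 = 1023 to 2(√q − 1) = 62, which is the flavor of the reduction the hierarchical reorganization aims for.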

  2. Application of a computationally efficient geostatistical approach to characterizing variably spaced water-table data

    International Nuclear Information System (INIS)

    Quinn, J.J.

    1996-01-01

    Geostatistical analysis of hydraulic head data is useful in producing unbiased contour plots of head estimates and relative errors. However, at most sites being characterized, monitoring wells are generally present at different densities, with clusters of wells in some areas and few wells elsewhere. The problem that arises when kriging data at different densities is in achieving adequate resolution of the grid while maintaining computational efficiency and working within software limitations. For the site considered, 113 data points were available over a 14-mi² study area, including 57 monitoring wells within an area of concern of 1.5 mi². Variogram analyses of the data indicate a linear model with a negligible nugget effect. The geostatistical package used in the study allows a maximum grid of 100 by 100 cells. Two-dimensional kriging was performed for the entire study area with a 500-ft grid spacing, while the smaller zone was modeled separately with a 100-ft spacing. In this manner, grid cells for the dense area and the sparse area remained small relative to the well separation distances, and the maximum dimensions of the program were not exceeded. The spatial head results for the detailed zone were then nested into the regional output by use of a graphical, object-oriented database that performed the contouring of the geostatistical output. This study benefitted from the two-scale approach and from very fine geostatistical grid spacings relative to typical data separation distances. The combining of the sparse, regional results with those from the finer-resolution area of concern yielded contours that honored the actual data at every measurement location. The method applied in this study can also be used to generate reproducible, unbiased representations of other types of spatial data.
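The nesting idea can be sketched with a compact ordinary-kriging routine using a linear variogram with no nugget (matching the record's variogram model) on synthetic wells; the coordinates, heads, and grid spacings are invented, and only the coarse-grid/fine-grid nesting is illustrated:

```python
import numpy as np

def ordinary_krige(xy, z, x0, slope=1.0):
    """Ordinary kriging estimate at x0 with a linear variogram gamma(h) = slope*h."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = slope * d
    K[n, :n] = K[:n, n] = 1.0          # unbiasedness constraint (weights sum to 1)
    rhs = np.append(slope * np.linalg.norm(xy - x0, axis=1), 1.0)
    lam = np.linalg.solve(K, rhs)[:n]
    return lam @ z

rng = np.random.default_rng(0)
wells = rng.uniform(0.0, 10.0, size=(12, 2))      # sparse regional wells
wells[:6] = rng.uniform(4.0, 5.5, size=(6, 2))    # dense cluster ("area of concern")
heads = 100.0 - 0.5 * wells[:, 0] + 0.2 * wells[:, 1]

# Coarse grid over the full region, fine grid nested over the dense cluster:
coarse = [ordinary_krige(wells, heads, np.array([cx, cy]))
          for cx in np.arange(0.0, 10.1, 2.5) for cy in np.arange(0.0, 10.1, 2.5)]
fine = [ordinary_krige(wells, heads, np.array([cx, cy]))
        for cx in np.arange(4.0, 5.6, 0.25) for cy in np.arange(4.0, 5.6, 0.25)]
est_at_well = ordinary_krige(wells, heads, wells[0])
```

With no nugget, kriging is an exact interpolator, so the nested contours honor the data at every well location, the property the study emphasizes.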

  3. Biocultural approaches to well-being and sustainability indicators across scales

    Science.gov (United States)

    Eleanor J. Sterling; Christopher Filardi; Anne Toomey; Amanda Sigouin; Erin Betley; Nadav Gazit; Jennifer Newell; Simon Albert; Diana Alvira; Nadia Bergamini; Mary Blair; David Boseto; Kate Burrows; Nora Bynum; Sophie Caillon; Jennifer E. Caselle; Joachim Claudet; Georgina Cullman; Rachel Dacks; Pablo B. Eyzaguirre; Steven Gray; James Herrera; Peter Kenilorea; Kealohanuiopuna Kinney; Natalie Kurashima; Suzanne Macey; Cynthia Malone; Senoveva Mauli; Joe McCarter; Heather McMillen; Pua’ala Pascua; Patrick Pikacha; Ana L. Porzecanski; Pascale de Robert; Matthieu Salpeteur; Myknee Sirikolo; Mark H. Stege; Kristina Stege; Tamara Ticktin; Ron Vave; Alaka Wali; Paige West; Kawika B. Winter; Stacy D. Jupiter

    2017-01-01

    Monitoring and evaluation are central to ensuring that innovative, multi-scale, and interdisciplinary approaches to sustainability are effective. The development of relevant indicators for local sustainable management outcomes, and the ability to link these to broader national and international policy targets, are key challenges for resource managers, policymakers, and...

  4. Parameter retrieval of chiral metamaterials based on the state-space approach.

    Science.gov (United States)

    Zarifi, Davoud; Soleimani, Mohammad; Abdolali, Ali

    2013-08-01

    This paper introduces an approach for the electromagnetic characterization of homogeneous chiral layers. The proposed method is based on the state-space approach and on properties of a 4×4 state transition matrix. First, the forward-problem analysis through the state-space method is reviewed, and properties of the state transition matrix of a chiral layer are presented and proved as two theorems. The formulation of the proposed electromagnetic characterization method is then presented. In this method, scattering data for a linearly polarized plane wave incident normally on a homogeneous chiral slab are combined with properties of the state transition matrix, providing a powerful characterization method. The main difference with respect to other well-established retrieval procedures based on the scattering parameters lies in the direct computation of the transfer matrix of the slab, as opposed to the conventional calculation of the propagation constant and impedance of the modes supported by the medium. The proposed approach avoids the nonlinearity of the problem but requires enough equations to fulfill the task, which are obtained by considering properties of the state transition matrix. To demonstrate the applicability and validity of the method, the constitutive parameters of two well-known dispersive chiral metamaterial structures at microwave frequencies are retrieved. The results show that the proposed method is robust and reliable.
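The paper's two theorems are not reproduced here, but the generic state-space machinery can be sketched: for a transverse state vector obeying dψ/dz = i k₀ A ψ with a constant 4×4 coefficient matrix, the state transition matrix is M(d) = exp(i k₀ d A), and it composes multiplicatively with det M(d) = exp(i k₀ d tr A), hence det M = 1 for traceless A. The matrix A below is a random traceless placeholder (not a real chiral constitutive matrix) and k₀ is in normalized units:

```python
import numpy as np

def transition_matrix(A, k0, d):
    """State transition matrix M(d) = exp(i k0 d A), via eigendecomposition."""
    w, V = np.linalg.eig(A)
    return (V * np.exp(1j * k0 * d * w)) @ np.linalg.inv(V)

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
A -= np.eye(4) * np.trace(A) / 4.0   # make A traceless, so det M(d) = 1
k0 = 1.0                             # normalized wavenumber (illustrative)

M1 = transition_matrix(A, k0, 0.3)   # slab of thickness 0.3
M2 = transition_matrix(A, k0, 0.5)   # slab of thickness 0.5
M12 = transition_matrix(A, k0, 0.8)  # the combined slab
```

The composition property M(d₁ + d₂) = M(d₂) M(d₁) and the unit determinant are exactly the kinds of structural facts a transfer-matrix retrieval can exploit to sidestep the nonlinear mode-based formulation.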

  5. Extreme robustness of scaling in sample space reducing processes explains Zipf’s law in diffusion on directed networks

    International Nuclear Information System (INIS)

    Corominas-Murtra, Bernat; Hanel, Rudolf; Thurner, Stefan

    2016-01-01

    It has been shown recently that a specific class of path-dependent stochastic processes, which reduce their sample space as they unfold, lead to exact scaling laws in frequency and rank distributions. Such sample space reducing processes (SSRPs) offer an alternative new mechanism for understanding the emergence of scaling in countless processes. The corresponding power law exponents were shown to be related to noise levels in the process. Here we show that the emergence of scaling is not limited to the simplest SSRPs, but holds for a huge domain of stochastic processes that are characterised by non-uniform prior distributions. We demonstrate mathematically that in the absence of noise the scaling exponents converge to −1 (Zipf's law) for almost all prior distributions. As a consequence it becomes possible to fully understand targeted diffusion on weighted directed networks and its associated scaling laws in node visit distributions. The presence of cycles can be properly interpreted as playing the same role as noise in SSRPs and, accordingly, determines the scaling exponents. The result that Zipf's law emerges as a generic feature of diffusion on networks, regardless of their details, and that the exponent of visiting times is related to the number of cycles in a network could be relevant for a series of applications in traffic, transport and supply-chain management. (paper)
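The simplest noise-free SSRP is easy to simulate (the uniform-jump toy below is illustrative, with invented sizes): a walker starts at state N and repeatedly jumps to a uniformly chosen lower state until it reaches state 1. A classical property of this process is that a state i < N is visited with probability exactly 1/i, so visit frequencies follow Zipf's law with exponent −1:

```python
import random

def ssrp_visit_counts(n_states, n_runs, rng):
    """Noise-free sample space reducing process: from state i,
    jump uniformly to one of the states 1 .. i-1."""
    visits = [0] * (n_states + 1)
    for _ in range(n_runs):
        i = n_states
        while i > 1:
            i = rng.randint(1, i - 1)   # the sample space shrinks every step
            visits[i] += 1
    return visits

rng = random.Random(7)
n_runs = 20_000
visits = ssrp_visit_counts(100, n_runs, rng)
freq = [visits[i] / n_runs for i in range(1, 11)]   # empirical visit frequency
```

With 20,000 runs the empirical frequencies sit close to 1/i, i.e. the rank distribution is a power law with exponent −1, exactly the Zipf behaviour the paper generalizes.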

  6. Symbols, spaces and materiality: a transmission-based approach to Aegean Bronze Age ritual.

    OpenAIRE

    Briault, C.

    2005-01-01

    This thesis explores the transmission of ritual practices in the second millennium BC Aegean. In contrast to previous approaches, which often overlook gaps in the diachronic record, emphasising continuity in cult practice over very long timescales, it is argued here that through charting the spatial and temporal distributions of three broad material types (cult symbols, spaces and objects), it is possible to document the spread of cult practice over time and space, and, crucially, to monitor ...

  7. A multi-scale relevance vector regression approach for daily urban water demand forecasting

    Science.gov (United States)

    Bai, Yun; Wang, Pu; Li, Chuan; Xie, Jingjing; Wang, Yin

    2014-09-01

    Water is one of the most important resources for economic and social developments. Daily water demand forecasting is an effective measure for scheduling urban water facilities. This work proposes a multi-scale relevance vector regression (MSRVR) approach to forecast daily urban water demand. The approach uses the stationary wavelet transform to decompose historical time series of daily water supplies into different scales. At each scale, the wavelet coefficients are used to train a machine-learning model using the relevance vector regression (RVR) method. The estimated coefficients of the RVR outputs for all of the scales are employed to reconstruct the forecasting result through the inverse wavelet transform. To better facilitate the MSRVR forecasting, the chaos features of the daily water supply series are analyzed to determine the input variables of the RVR model. In addition, an adaptive chaos particle swarm optimization algorithm is used to find the optimal combination of the RVR model parameters. The MSRVR approach is evaluated using real data collected from two waterworks and is compared with recently reported methods. The results show that the proposed MSRVR method can forecast daily urban water demand much more precisely in terms of the normalized root-mean-square error, correlation coefficient, and mean absolute percentage error criteria.
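The decompose-forecast-recombine pipeline can be sketched in a heavily simplified form: a one-level Haar stationary wavelet split (perfect reconstruction x[t] = a[t] + d[t]) and a ridge regression on lagged values standing in for the relevance vector regression; the demand series is synthetic, and the chaos-based input selection and particle-swarm tuning of the paper are omitted:

```python
import numpy as np

def haar_swt(x):
    """One-level stationary Haar split: x[t] = a[t] + d[t] (perfect reconstruction)."""
    return 0.5 * (x[1:] + x[:-1]), 0.5 * (x[1:] - x[:-1])

def lagged(series, p):
    """Design matrix of p lagged values and the next-step targets."""
    X = np.column_stack([series[k:len(series) - p + k] for k in range(p)])
    return X, series[p:]

def ridge_predict_next(series, p=8, lam=1e-6):
    """Fit a ridge model on lags (a stand-in for RVR) and predict the
    value that follows the end of the series."""
    X, y = lagged(series, p)
    w = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
    return w @ series[-p:]

t = np.arange(400.0)
demand = 0.01 * t + np.sin(2 * np.pi * t / 7.0)   # trend + weekly cycle (synthetic)

a, d = haar_swt(demand[:-1])                      # decompose the history
forecast = ridge_predict_next(a) + ridge_predict_next(d)   # recombine per scale
```

Each scale is forecast separately and the component forecasts are summed, mirroring the paper's per-scale modelling followed by inverse-transform reconstruction.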

  8. A statistical approach for water movement in the unsaturated zone

    International Nuclear Information System (INIS)

    Tielin Zang.

    1991-01-01

    This thesis presents a statistical approach for estimating and analyzing the downward transport pattern and distribution of soil water by means of pattern analysis of space-time correlation structures. This approach, called the Space-Time Correlation Field, is based mainly on the analysis of correlation functions simultaneously in the space and time domains. The overall purpose of this work is to derive an alternative statistical procedure for soil moisture analysis that does not require detailed information on hydraulic parameters, and to visualize the dynamics of soil water variability in the space and time domains. A numerical model using the method of characteristics is employed to provide hypothetical time series for use in the statistical method, which, after verification and calibration, is applied to field-measured time series. The results of the application show that the space-time correlation fields reveal the effects of soil layers with different hydraulic properties and the boundaries between them. It is concluded that the approach is particularly advantageous for visualizing time- and space-dependent properties simultaneously. It can be used to investigate the hydrological response of soil water dynamics in different dimensions (space and time) and at different scales, and to identify the dominant component in unsaturated flow systems. It is possible to estimate the pattern and downward propagation rate of moisture movement in the soil profile. Small-scale soil heterogeneities can be identified by the correlation field. Since the correlation field technique gives a statistical measure of the dependent property that varies within the space-time field, it is possible to interpolate the fields to points where observations are not available, estimating spatial or temporal averages from discrete observations. (au)
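The core construction, correlation as a function of depth separation and time lag, can be sketched on invented data: moisture at each depth is a delayed, noisy copy of the surface signal, and the lag that maximizes the cross-correlation at each depth recovers the downward propagation rate (3 time steps per layer in this toy):

```python
import numpy as np

rng = np.random.default_rng(3)
n_t, delay_per_layer = 400, 3            # invented: 3 time steps per depth layer
t = np.arange(n_t + 30)
surface = np.sin(2 * np.pi * t / 20.0)   # synthetic wetting signal at the surface

# Moisture at depth z: the surface signal delayed by 3*z steps, plus noise.
depths = range(5)
moisture = np.array(
    [surface[30 - delay_per_layer * z: 30 - delay_per_layer * z + n_t]
     + 0.05 * rng.standard_normal(n_t) for z in depths])

def corr_at_lag(x, y, lag):
    """Correlation between x(t) and y(t + lag)."""
    return np.corrcoef(x[:len(x) - lag], y[lag:])[0, 1]

# Space-time correlation field: rows = depth separation, columns = time lag.
field = np.array([[corr_at_lag(moisture[0], moisture[z], lag)
                   for lag in range(15)] for z in depths])
best_lag = [int(np.argmax(field[z])) for z in depths]
```

The ridge of the correlation field moves to larger lags with depth, and its slope (here 3 steps per layer) is precisely the propagation-rate estimate the thesis extracts from such fields.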

  9. A Dynamical System Approach Explaining the Process of Development by Introducing Different Time-scales.

    Science.gov (United States)

    Hashemi Kamangar, Somayeh Sadat; Moradimanesh, Zahra; Mokhtari, Setareh; Bakouie, Fatemeh

    2018-06-11

    A developmental process can be described as changes through time within a complex dynamic system. The self-organized changes and emergent behaviour during development can be described and modeled as a dynamical system. We propose a dynamical system approach to answer the main question in human cognitive development, i.e. whether the changes during development happen continuously or in discontinuous stages. Within this approach there is a concept, the size of time-scales, which can be used to address the aforementioned question. We introduce a framework, based on the concept of time-scale, in which "fast" and "slow" are defined by the size of the time-scales. According to our suggested model, the overall pattern of development can be seen as one continuous function, with different time-scales in different time intervals.
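The continuity-versus-stages flavour of the argument can be illustrated with a classic fast-slow toy system (not the authors' model): a fast bistable variable driven by a slowly drifting one. The dynamics are perfectly continuous, yet the trajectory shows an abrupt, stage-like transition when the slow variable passes a fold point:

```python
# Fast-slow toy system (illustrative, not from the paper):
#   eps * dx/dt = x - x**3 + y   (fast, bistable variable)
#         dy/dt = 1              (slow drift)
eps, dt = 0.05, 0.001
x, y = -1.3, -1.0
steps = int(2.0 / dt)
trajectory = []
for _ in range(steps):
    x += dt * (x - x**3 + y) / eps   # fast relaxation toward a stable branch
    y += dt * 1.0                    # slow parameter drift
    trajectory.append(x)

# Early phase: x sits quietly on the lower branch; later it "jumps" stages.
early = trajectory[: steps // 4]
```

The system spends long stretches tracking one quasi-steady branch (a "stage") separated by a rapid jump, even though the governing equations contain nothing discontinuous; only the separation of time-scales produces the stage-like pattern.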

  10. Feasibility analysis of large length-scale thermocapillary flow experiment for the International Space Station

    Science.gov (United States)

    Alberts, Samantha J.

    The investigation of microgravity fluid dynamics emerged out of necessity with the advent of space exploration. In particular, capillary research took a leap forward in the 1960s with regard to liquid settling and interfacial dynamics. Due to inherent temperature variations in large spacecraft liquid systems, such as fuel tanks, forces develop on gas-liquid interfaces that induce thermocapillary flows. To date, thermocapillary flows have been studied in small, idealized research geometries, usually under terrestrial conditions. The 1 to 3 m lengths in current and future large tanks and hardware are designed based on hardware rather than research, which leaves spaceflight systems designers without the technological tools to effectively create safe and efficient designs. This thesis focused on the design and feasibility of a large length-scale thermocapillary flow experiment, which utilizes temperature variations to drive a flow. The design of a helical channel geometry ranging from 1 to 2.5 m in length permits a large length-scale thermocapillary flow experiment to fit in a seemingly small International Space Station (ISS) facility such as the Fluids Integrated Rack (FIR). An initial investigation determined that the proposed experiment produces measurable data while adhering to the FIR facility limitations. The computational portion of this thesis focused on the investigation of functional geometries of fuel tanks and depots using Surface Evolver. This work outlines the design of a large length-scale thermocapillary flow experiment for the ISS FIR. The results improve the understanding of thermocapillary flows and thus the technological tools for predicting heat and mass transfer in large length-scale thermocapillary flows. Without the tools to understand the thermocapillary flows in these systems, engineers are forced to design larger, heavier vehicles to assure safety and mission success.

  11. Semiclassical moment of inertia shell-structure within the phase-space approach

    International Nuclear Information System (INIS)

    Gorpinchenko, D V; Magner, A G; Bartel, J; Blocki, J P

    2015-01-01

    The moment of inertia for nuclear collective rotations is derived within a semiclassical approach based on the cranking model and the Strutinsky shell-correction method by using the non-perturbative periodic-orbit theory in the phase-space variables. This moment of inertia for adiabatic (statistical-equilibrium) rotations can be approximated by the generalized rigid-body moment of inertia accounting for the shell corrections of the particle density. A semiclassical phase-space trace formula allows us to express the shell components of the moment of inertia quite accurately in terms of the free-energy shell corrections for integrable and partially chaotic Fermi systems, which is in good agreement with the corresponding quantum calculations. (paper)

  12. A novel approach to the automatic control of scale model airplanes

    OpenAIRE

    Hua , Minh-Duc; Pucci , Daniele; Hamel , Tarek; Morin , Pascal; Samson , Claude

    2014-01-01

    This paper explores a new approach to the control of scale model airplanes, as an extension of previous studies addressing the case of vehicles presenting a symmetry of revolution about the thrust axis. The approach is intrinsically nonlinear and, with respect to other contributions on aircraft nonlinear control, no small-attack-angle assumption is made, in order to enlarge the controller's operating domain. Simulation results conducted on a simplified, but not overly ...

  13. Accessibility of green space in urban areas: an examination of various approaches to measure it

    OpenAIRE

    Zhang, Xin

    2007-01-01

    In the present research, we attempt to improve the methods used for measuring the accessibility of green spaces by combining two components of accessibility: distance, and demand relative to supply. Three modified approaches (the Joseph and Bantock gravity model measure, the two-step floating catchment area measure, and a measure based on kernel densities) are applied for measuring accessibility to green spaces. We select parks and public open spaces (metropolitan open land) of south London as a cas...
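Of the three measures, the two-step floating catchment area (2SFCA) method is the most mechanical to state, and a toy version (invented parks, populations, and catchment radius) makes the two steps explicit: first each park's supply-to-demand ratio within its catchment, then each home's sum of reachable ratios:

```python
import math

def two_step_fca(parks, homes, d0):
    """Two-step floating catchment area (2SFCA) accessibility scores.
    parks: list of ((x, y), supply)      e.g. green-space area
    homes: list of ((x, y), population)
    d0:    catchment radius
    """
    # Step 1: supply-to-demand ratio for each park within its catchment.
    ratios = []
    for p_xy, supply in parks:
        demand = sum(pop for h_xy, pop in homes if math.dist(p_xy, h_xy) <= d0)
        ratios.append(supply / demand if demand else 0.0)
    # Step 2: each home sums the ratios of the parks it can reach.
    return [sum(r for (p_xy, _), r in zip(parks, ratios)
                if math.dist(p_xy, h_xy) <= d0)
            for h_xy, _ in homes]

parks = [((0.0, 0.0), 10.0)]     # one park with 10 units of supply (invented)
homes = [((1.0, 0.0), 100), ((0.0, 1.0), 100), ((9.0, 9.0), 50)]
scores = two_step_fca(parks, homes, d0=2.0)
```

The two nearby homes share the park's supply (10 / 200 = 0.05 each), while the remote home scores zero, capturing both distance and demand relative to supply in one number.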

  14. Space nuclear reactor system diagnosis: Knowledge-based approach

    International Nuclear Information System (INIS)

    Ting, Y.T.D.

    1990-01-01

    SP-100 space nuclear reactor system development is a joint effort by the Department of Energy, the Department of Defense, and the National Aeronautics and Space Administration. The system is designed to operate in isolation for many years, possibly with little or no remote maintenance. This dissertation proposes a knowledge-based diagnostic system which, in principle, can diagnose the faults that either cause reactor shutdown or lead to other serious problems. This framework can in general be applied to the fully specified system once detailed design information becomes available. The set of faults considered herein is identified based on heuristic knowledge about the system operation. A suitable approach to diagnostic problem solving is proposed after investigating the most prevalent methodologies in Artificial Intelligence as well as the causal analysis of the system. Deep causal knowledge modeling based on digraph, fault-tree, or logic flowgraph methodology would require a knowledge representation able to handle time-dependent system behavior. A qualitative temporal knowledge modeling methodology, using rules with specified time delays among the process variables, is proposed and used to develop the sufficient diagnostic rule set. The rule set has been modified by using a time-zone approach to obtain a robust system design. The sufficient rule set is transformed into a sufficient and necessary one by searching the whole knowledge base. Qualitative data analysis is proposed for analyzing the measured data in a real-time situation. An expert system shell, Intelligence Compiler, is used to develop the prototype system. Frames are used for the process variables. Forward-chaining rules are used in monitoring, and backward-chaining rules are used in diagnosis.
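The flavour of rules with time delays and of forward versus backward chaining can be sketched in a few lines; the variable names, delays, and events below are invented toys, not SP-100 design data:

```python
# Each rule: (cause, effect, min_delay, max_delay) -- the "time zone" within
# which the effect is expected to follow the cause (invented toy values).
RULES = [
    ("pump_flow_low", "coolant_temp_high", 2, 6),
    ("coolant_temp_high", "reactor_power_drop", 1, 4),
]

def forward_chain(events, rules):
    """Forward chaining (monitoring): link each observed cause to observed
    effects that fall inside the rule's time window."""
    links = []
    for cause, effect, dmin, dmax in rules:
        for t in events.get(cause, []):
            for u in events.get(effect, []):
                if t + dmin <= u <= t + dmax:
                    links.append((cause, t, effect, u))
    return links

def backward_chain(symptom, time, events, rules):
    """Backward chaining (diagnosis): candidate causes of an observed symptom."""
    return [cause for cause, effect, dmin, dmax in rules
            if effect == symptom
            for t in events.get(cause, [])
            if time - dmax <= t <= time - dmin]

events = {"pump_flow_low": [10], "coolant_temp_high": [13],
          "reactor_power_drop": [15]}
links = forward_chain(events, RULES)
causes = backward_chain("reactor_power_drop", 15, events, RULES)
```

Monitoring runs the rules forward as data arrive, while diagnosis reasons backward from a symptom through the same time-zoned rules, mirroring the dissertation's split between forward-chaining monitoring and backward-chaining diagnosis.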

  15. El Naschie's ε (∞) space-time and scale relativity theory in the topological dimension D = 4

    International Nuclear Information System (INIS)

    Agop, M.; Murgulet, C.

    2007-01-01

    In the topological dimension D = 4 of the scale relativity theory, the self-structuring of a coherent quantum fluid implies the Golden mean renormalization group. Then, the transfinite set of El Naschie's ε (∞) space-time becomes the background of a new physics (the transfinite physics)

  16. The algebraic approach to space-time geometry

    International Nuclear Information System (INIS)

    Heller, M.; Multarzynski, P.; Sasin, W.

    1989-01-01

    A differential manifold can be defined in terms of the smooth real functions carried by it. By rejecting the postulate, in such a definition, demanding local diffeomorphism of the manifold to Euclidean space, one obtains the so-called differential space concept. Every subset of R^n turns out to be a differential space. Extensive parts of differential geometry on differential spaces, developed by Sikorski, are reviewed and adapted to relativistic purposes. Differential space is proposed as a new model of space-time. The Lorentz structure and Einstein's field equations on differential spaces are discussed. 20 refs. (author)

  17. Innovative Approaches to Space-Based Manufacturing and Rapid Prototyping of Composite Materials

    Science.gov (United States)

    Hill, Charles S.

    2012-01-01

    The ability to deploy large habitable structures and to construct and service exploration vehicles in low Earth orbit will be an enabling capability for continued human exploration of the solar system. It is evident that advanced manufacturing methods to fabricate replacement parts, and to re-utilize launch vehicle structural mass by converting it to different uses, will be necessary to minimize costs and allow flexibility to remote crews engaged in space travel. Recent conceptual developments, and the combination of inter-related approaches to low-cost manufacturing of composite materials and structures, are described in context, leading to the possibility of on-orbit and space-based manufacturing.

  18. Modeling Impact-induced Failure of Polysilicon MEMS: A Multi-scale Approach.

    Science.gov (United States)

    Mariani, Stefano; Ghisi, Aldo; Corigliano, Alberto; Zerbini, Sarah

    2009-01-01

    Failure of packaged polysilicon micro-electro-mechanical systems (MEMS) subjected to impacts involves phenomena occurring at several length-scales. In this paper we present a multi-scale finite element approach to properly allow for: (i) the propagation of stress waves inside the package; (ii) the dynamics of the whole MEMS; (iii) the spreading of micro-cracking in the failing part(s) of the sensor. Through Monte Carlo simulations, some effects of polysilicon micro-structure on the failure mode are elucidated.

  19. Review of NASA approach to space radiation risk assessments for Mars exploration.

    Science.gov (United States)

    Cucinotta, Francis A

    2015-02-01

    Long duration space missions present unique radiation protection challenges due to the complexity of the space radiation environment, which includes high charge and energy particles and other highly ionizing radiation such as neutrons. Based on a recommendation by the National Council on Radiation Protection and Measurements, a 3% lifetime risk of exposure-induced death for cancer has been used as a basis for risk limitation by the National Aeronautics and Space Administration (NASA) for low-Earth orbit missions. NASA has developed a risk-based approach to radiation exposure limits that accounts for individual factors (age, gender, and smoking history) and assesses the uncertainties in risk estimates. New radiation quality factors with associated probability distribution functions to represent the quality factor's uncertainty have been developed based on track structure models and recent radiobiology data for high charge and energy particles. The current radiation dose limits are reviewed for spaceflight and the various qualitative and quantitative uncertainties that impact the risk of exposure-induced death estimates using the NASA Space Cancer Risk (NSCR) model. NSCR estimates of the number of "safe days" in deep space to be within exposure limits and risk estimates for a Mars exploration mission are described.

  20. On the necessary conditions of the regular minimum of the scale factor of the co-moving space

    International Nuclear Information System (INIS)

    Agakov, V.G.

    1980-01-01

    Within the framework of a homogeneous cosmological model, the behaviour of a comoving volume element filled with a barotropic medium free of energy fluxes is studied. The necessary conditions under which a regular finite minimum of the scale factor of the comoving space may occur are presented. It is found that, for values of the cosmological constant Λ <= 0, such a minimum requires the presence of two of the three anisotropy factors, one of which must be the anisotropy of the space deformation. For Λ > 0 the regular minimum is also possible when all three anisotropy factors are equal to zero. However, if none of the anisotropy factors F_i, A_ik is equal to zero, the presence of the space deformation anisotropy is necessary for a finite regular minimum to appear

  1. Space-Hotel Early Bird - An Educational and Public Outreach Approach

    Science.gov (United States)

    Amekrane, R.; Holze, C.

    2002-01-01

    education and public outreach can be combined and how cooperation among an association, industry and academia can work successfully. Representatives of the DGLR and academia developed a method to spread space-related knowledge in a short time to a motivated working group. The project was a great success in involving other disciplines in space-related topics through interdisciplinary work, and in terms of public and educational outreach. With more than 2.3 million contacts, the DGLR e.V. promoted space and the vision of living (in) space to the public. The paper mainly describes the approach and the experience gained in organization, lectures, financing and outreach efforts, with respect to similar future international outreach activities planned for the 54th International Astronautical Congress in Bremen/Germany. www.spacehotel.org

  2. The method of rigged spaces in singular perturbation theory of self-adjoint operators

    CERN Document Server

    Koshmanenko, Volodymyr; Koshmanenko, Nataliia

    2016-01-01

    This monograph presents the newly developed method of rigged Hilbert spaces as a modern approach in singular perturbation theory. A key notion of this approach is the Lax-Berezansky triple of Hilbert spaces embedded one into another, which specifies the well-known Gelfand topological triple. All kinds of singular interactions described by potentials supported on small sets (like the Dirac δ-potentials, fractals, singular measures, high degree super-singular expressions) admit a rigorous treatment only in terms of the equipped spaces and their scales. The main idea of the method is to use singular perturbations to change inner products in the starting rigged space, and the construction of the perturbed operator by the Berezansky canonical isomorphism (which connects the positive and negative spaces from a new rigged triplet). The approach combines three powerful tools of functional analysis based on the Birman-Krein-Vishik theory of self-adjoint extensions of symmetric operators, the theory of singular quadra...

  3. Tuneable resolution as a systems biology approach for multi-scale, multi-compartment computational models.

    Science.gov (United States)

    Kirschner, Denise E; Hunt, C Anthony; Marino, Simeone; Fallahi-Sichani, Mohammad; Linderman, Jennifer J

    2014-01-01

    The use of multi-scale mathematical and computational models to study complex biological processes is becoming increasingly productive. Multi-scale models span a range of spatial and/or temporal scales and can encompass multi-compartment (e.g., multi-organ) models. Modeling advances are enabling virtual experiments to explore and answer questions that are problematic to address in the wet-lab. Wet-lab experimental technologies now allow scientists to observe, measure, record, and analyze experiments focusing on different system aspects at a variety of biological scales. We need the technical ability to mirror that same flexibility in virtual experiments using multi-scale models. Here we present a new approach, tuneable resolution, which can begin providing that flexibility. Tuneable resolution involves fine- or coarse-graining existing multi-scale models at the user's discretion, allowing adjustment of the level of resolution specific to a question, an experiment, or a scale of interest. Tuneable resolution expands options for revising and validating mechanistic multi-scale models, can extend the longevity of multi-scale models, and may increase computational efficiency. The tuneable resolution approach can be applied to many model types, including differential equation, agent-based, and hybrid models. We demonstrate our tuneable resolution ideas with examples relevant to infectious disease modeling, illustrating key principles at work. © 2014 The Authors. WIREs Systems Biology and Medicine published by Wiley Periodicals, Inc.
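
    The fine- versus coarse-graining idea behind tuneable resolution can be illustrated with a deliberately tiny toy, not taken from the paper: the same growth process simulated at fine resolution (two subpopulations with their own rates) and at coarse resolution (one lumped variable with an effective rate). All rates and sizes below are invented.

```python
# Toy illustration of "tuneable resolution": fine-grained vs coarse-grained
# simulation of the same growth process (explicit Euler steps).

def fine(n1, n2, r1, r2, dt, steps):
    # fine-grained: each subpopulation tracked explicitly
    for _ in range(steps):
        n1 += r1 * n1 * dt
        n2 += r2 * n2 * dt
    return n1 + n2

def coarse(n, r_eff, dt, steps):
    # coarse-grained: one state variable with a population-weighted rate
    for _ in range(steps):
        n += r_eff * n * dt
    return n

n1, n2, r1, r2 = 50.0, 50.0, 0.10, 0.20
r_eff = (n1 * r1 + n2 * r2) / (n1 + n2)   # effective rate at t = 0
total_fine = fine(n1, n2, r1, r2, dt=0.01, steps=100)
total_coarse = coarse(n1 + n2, r_eff, dt=0.01, steps=100)
# Over a short horizon the coarse model tracks the fine one closely; over
# longer horizons the faster subpopulation dominates and the lumped rate
# would need re-tuning -- the appropriate resolution depends on the question.
```

    The design point is exactly the one the abstract makes: the user chooses the resolution to match the question asked, trading mechanistic detail for computational efficiency.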

  4. Dodecahedral space topology as an explanation for weak wide-angle temperature correlations in the cosmic microwave background.

    Science.gov (United States)

    Luminet, Jean-Pierre; Weeks, Jeffrey R; Riazuelo, Alain; Lehoucq, Roland; Uzan, Jean-Philippe

    2003-10-09

    The current 'standard model' of cosmology posits an infinite flat universe forever expanding under the pressure of dark energy. First-year data from the Wilkinson Microwave Anisotropy Probe (WMAP) confirm this model to spectacular precision on all but the largest scales. Temperature correlations across the microwave sky match expectations on angular scales narrower than 60 degrees but, contrary to predictions, vanish on scales wider than 60 degrees. Several explanations have been proposed. One natural approach questions the underlying geometry of space--namely, its curvature and topology. In an infinite flat space, waves from the Big Bang would fill the universe on all length scales. The observed lack of temperature correlations on scales beyond 60 degrees means that the broadest waves are missing, perhaps because space itself is not big enough to support them. Here we present a simple geometrical model of a finite space--the Poincaré dodecahedral space--which accounts for WMAP's observations with no fine-tuning required. The predicted density is Omega(0) approximately 1.013 > 1, and the model also predicts temperature correlations in matching circles on the sky.

  5. Memory matters: influence from a cognitive map on animal space use.

    Science.gov (United States)

    Gautestad, Arild O

    2011-10-21

    A vertebrate individual's cognitive map provides a capacity for site fidelity and long-distance returns to favorable patches. Fractal-geometrical analysis of individual space use, based on a collection of telemetry fixes, makes it possible to verify the influence of a cognitive map on the spatial scatter of habitat use, and also to determine to what extent space use has been of a scale-specific versus a scale-free kind. This approach rests on a statistical-mechanical level of system abstraction, where micro-scale details of behavioral interactions are coarse-grained to macro-scale observables like the fractal dimension of space use. In this manner, the magnitude of the fractal dimension becomes a proxy variable for distinguishing between main classes of habitat exploration and site fidelity, like memory-less (Markovian) Brownian motion and Lévy walk, and memory-enhanced space use like Multi-scaled Random Walk (MRW). In this paper previous analyses are extended by exploring MRW simulations under three scenarios: (1) central place foraging, (2) behavioral adaptation to resource depletion (avoidance of the latest visited locations) and (3) transition from MRW towards Lévy walk by narrowing memory capacity to a trailing time window. A generalized statistical-mechanical theory with the power to model cognitive map influence on individual space use will be important for statistical analyses of animal habitat preferences and the mechanics behind site fidelity and home ranges. Copyright © 2011 Elsevier Ltd. All rights reserved.
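
    The use of the fractal dimension as a macro-scale observable can be illustrated with a minimal box-counting estimator. This is a generic sketch, not the paper's estimator, and the point sets are synthetic stand-ins for telemetry fixes.

```python
# Box-counting estimate of the fractal dimension D of a 2-D point set:
# count occupied grid cells N(eps) at several cell sizes eps, then fit
# log N(eps) against log(1/eps) by least squares.
import math, random

def box_count_dimension(points, sizes):
    xs, ys = [], []
    for eps in sizes:
        occupied = {(int(x / eps), int(y / eps)) for x, y in points}
        xs.append(math.log(1.0 / eps))
        ys.append(math.log(len(occupied)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))   # slope = estimated D

random.seed(1)
plane = [(random.random(), random.random()) for _ in range(20000)]  # space-filling
line = [(t / 20000.0, t / 20000.0) for t in range(20000)]           # 1-D path
sizes = [0.2, 0.1, 0.05, 0.025]
print(round(box_count_dimension(plane, sizes), 2))  # close to 2
print(round(box_count_dimension(line, sizes), 2))   # close to 1
```

    In the paper's framework, the magnitude of D estimated from the scatter of fixes serves as the proxy for distinguishing between movement classes such as Brownian motion, Lévy walk and MRW.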

  6. Development of Indigenous Basic Interest Scales: Re-Structuring the Icelandic Interest Space

    Science.gov (United States)

    Einarsdottir, Sif; Eyjolfsdottir, Katrin Osk; Rounds, James

    2013-01-01

    The present investigation used an emic approach to develop a set of Icelandic indigenous basic interest scales. An indigenous item pool that is representative of the Icelandic labor market was administered to three samples (N = 1043, 1368, and 2218) of upper secondary and higher education students in two studies. A series of item level cluster and…

  7. Resilience Design Patterns: A Structured Approach to Resilience at Extreme Scale

    International Nuclear Information System (INIS)

    Engelmann, Christian; Hukerikar, Saurabh

    2017-01-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space remains fragmented. There are no formal methods and metrics to integrate the various HPC resilience techniques into composite solutions, nor are there methods to holistically evaluate the adequacy and efficacy of such solutions in terms of their protection coverage and their performance and power efficiency characteristics. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this paper, we develop a structured approach to the design, evaluation and optimization of HPC resilience using the concept of design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the problems caused by various types of faults, errors and failures in HPC systems and the techniques used to deal with these events. Each well-known solution that addresses a specific HPC resilience challenge is described in the form of a pattern. We develop a complete catalog of such resilience design patterns, which may be used by system architects, system software and tools developers, application programmers, as well as users and operators as essential building blocks when designing and deploying resilience solutions. We also develop a design framework that enhances a designer's understanding of the opportunities for integrating multiple patterns across layers of the system stack and of the important constraints during implementation of the individual patterns. It is also useful for defining mechanisms and interfaces to coordinate flexible fault management across
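
    As a hedged illustration of what one entry in such a catalog describes, here is one classic resilience pattern, checkpoint/restart, in toy form. This is a simplified stand-in; the paper's catalog covers many more patterns and does not prescribe this code.

```python
# Toy checkpoint/restart: persist progress after each work item so that a
# restarted run resumes from the last checkpoint instead of from scratch.
import copy

def run(data, store, fail_at=None):
    """Process `data`, checkpointing into `store`; optionally inject a fault."""
    state = copy.deepcopy(store.get("ckpt", {"index": 0, "total": 0}))
    while state["index"] < len(data):
        if fail_at is not None and state["index"] == fail_at:
            raise RuntimeError("injected fault")     # simulated node crash
        state["total"] += data[state["index"]]
        state["index"] += 1
        store["ckpt"] = copy.deepcopy(state)         # checkpoint each step
    return state["total"]

data = list(range(10))
store = {}
try:
    run(data, store, fail_at=7)    # crash part-way through the run
except RuntimeError:
    pass
result = run(data, store)          # restart: resumes at index 7, not 0
print(result)  # -> 45
```

    A real HPC implementation would checkpoint to stable storage at a tuned interval rather than every step; the trade-off between checkpoint overhead and recovery cost is exactly the kind of design constraint the pattern framework is meant to expose.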

  8. Verification of Space Weather Forecasts using Terrestrial Weather Approaches

    Science.gov (United States)

    Henley, E.; Murray, S.; Pope, E.; Stephenson, D.; Sharpe, M.; Bingham, S.; Jackson, D.

    2015-12-01

    The Met Office Space Weather Operations Centre (MOSWOC) provides a range of 24/7 operational space weather forecasts, alerts, and warnings, which provide valuable information on space weather that can degrade electricity grids, radio communications, and satellite electronics. Forecasts issued include arrival times of coronal mass ejections (CMEs), and probabilistic forecasts for flares, geomagnetic storm indices, and energetic particle fluxes and fluences. These forecasts are produced twice daily using a combination of output from models such as Enlil, near-real-time observations, and forecaster experience. Verification of forecasts is crucial for users, researchers, and forecasters to understand the strengths and limitations of forecasts, and to assess forecaster added value. To this end, the Met Office (in collaboration with Exeter University) has been adapting verification techniques from terrestrial weather, and has been working closely with the International Space Environment Service (ISES) to standardise verification procedures. We will present the results of part of this work, analysing forecast and observed CME arrival times and assessing skill using 2x2 contingency tables. These MOSWOC forecasts can be objectively compared to those produced by the NASA Community Coordinated Modelling Center - a useful benchmark. This approach cannot be taken for the other forecasts, as they are probabilistic and categorical (e.g., geomagnetic storm forecasts give probabilities of exceeding levels from minor to extreme). We will present appropriate verification techniques being developed to address these forecasts, such as the rank probability skill score, and comparisons of forecasts against climatology and persistence benchmarks. As part of this, we will outline the use of discrete-time Markov chains to assess and improve the performance of our geomagnetic storm forecasts. We will also discuss work to adapt a terrestrial verification visualisation system to space weather, to help
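
    A 2x2 contingency-table assessment of the kind described for CME arrival forecasts can be sketched as follows. The counts are invented, and the metrics (probability of detection, false alarm ratio, Heidke skill score) are standard verification measures from terrestrial weather rather than MOSWOC's exact choices.

```python
# 2x2 contingency-table verification for yes/no event forecasts
# (e.g. "CME arrives within the forecast window").

def scores(hits, misses, false_alarms, correct_negatives):
    n = hits + misses + false_alarms + correct_negatives
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    # Heidke skill score: accuracy relative to the random-chance expectation
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_negatives + misses)
                * (correct_negatives + false_alarms)) / n
    hss = (hits + correct_negatives - expected) / (n - expected)
    return pod, far, hss

pod, far, hss = scores(hits=20, misses=5, false_alarms=10, correct_negatives=65)
print(round(pod, 2), round(far, 2), round(hss, 3))  # -> 0.8 0.33 0.625
```

    Probabilistic forecasts need different tools, as the abstract notes: scores such as the (ranked) probability skill score compare the forecast probability distribution against outcomes, with climatology or persistence as the reference.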

  9. Micro scale spatial relationships in urban studies : The relationship between private and public space and its impact on street life

    NARCIS (Netherlands)

    Van Nes, A.; Lopez, M.J.J.

    2007-01-01

    Research on urban environment by means of space syntax theory and methods tends to focus on macro scale spatial conditions. However, micro scale conditions should not be neglected. In research on street life and dispersal of crime in urban areas, it became inevitable to pay attention to the

  10. Two-scale approach to oscillatory singularly perturbed transport equations

    CERN Document Server

    Frénod, Emmanuel

    2017-01-01

    This book presents the classical results of the two-scale convergence theory and explains – using several figures – why it works. It then shows how to use this theory to homogenize ordinary differential equations with oscillating coefficients as well as oscillatory singularly perturbed ordinary differential equations. In addition, it explores the homogenization of hyperbolic partial differential equations with oscillating coefficients and linear oscillatory singularly perturbed hyperbolic partial differential equations. Further, it introduces readers to the two-scale numerical methods that can be built from the previous approaches to solve oscillatory singularly perturbed transport equations (ODE and hyperbolic PDE) and demonstrates how they can be used efficiently. This book appeals to master's and PhD students interested in homogenization and numerics, as well as to the ITER community.

  11. Measures for minimizing radiation hazardous to the environment in the advent of large-scale space commercialization

    International Nuclear Information System (INIS)

    Murthy, S.N.

    1990-01-01

    The nature of hazardous effects of radio-frequency (RF), light, infrared, and nuclear radiation on humans and other biological species in the advent of large-scale space commercialization is considered. Attention is focused on RF/microwave radiation from earth antennas and domestic picture phone communication links, exposure to microwave radiation from space solar-power satellites, and the continuous transmission of information from spacecraft, as well as laser radiation from space. Measures for preventing and/or reducing these effects are suggested, including the use of interlocks for cutting off radiation toward the ground, off-pointing microwave energy beams in cases of attitude failure, limiting the satellite off-axis gain data-rate product, the use of reflective materials on buildings and in personnel clothing to protect from space-borne lasers, and underwater colonies in cases of high-power lasers. For nuclear-power satellites, deposition at stable points in the solar system is proposed. 12 refs

  12. A semiclassical approach to many-body interference in Fock-space

    Energy Technology Data Exchange (ETDEWEB)

    Engl, Thomas

    2015-11-01

    Many-body systems are drawing ever more attention from physicists. Such an increase of interest often comes along with the development of new theoretical methods. In this thesis, a non-perturbative semiclassical approach is developed, which allows one to analytically study many-body interference effects in both bosonic and fermionic Fock space and is expected to be applicable to many research areas in physics, ranging from Quantum Optics and Ultracold Atoms to Solid State Theory and maybe even High Energy Physics. After the derivation of the semiclassical approximation, which is valid in the limit of a large total number of particles, first applications manifesting the presence of many-body interference effects are shown. Some of them are confirmed numerically, thus verifying the semiclassical predictions. Among these results are coherent back-/forward-scattering in bosonic and fermionic Fock space, as well as a many-body spin echo, to name only the two most important ones.

  13. Mesoscale to Synoptic Scale Cloud Variability

    Science.gov (United States)

    Rossow, William B.

    1998-01-01

    The atmospheric circulation and its interaction with the oceanic circulation involve non-linear and non-local exchanges of energy and water over a very large range of space and time scales. These exchanges are revealed, in part, by the related variations of clouds, which occur on a similar range of scales as the atmospheric motions that produce them. Collection of comprehensive measurements of the properties of the atmosphere, clouds and surface allows for diagnosis of some of these exchanges. The use of a multi-satellite-network approach by the International Satellite Cloud Climatology Project (ISCCP) comes closest to providing complete coverage of the relevant range of space and time scales over which the clouds, atmosphere and ocean vary. A nearly 15-yr dataset is now available that covers the range from 3 hr and 30 km to decade and planetary scales. This paper considers three topics: (1) cloud variations at the smallest scales and how they may influence radiation-cloud interactions; (2) cloud variations at "moderate" scales and how they may cause natural climate variability; and (3) cloud variations at the largest scales and how they affect the climate. The emphasis in this discussion is on the more mature subject of cloud-radiation interactions. There is now a need to begin similar detailed diagnostic studies of water exchange processes.

  14. A multi-scale approach of fluvial biogeomorphic dynamics using photogrammetry.

    Science.gov (United States)

    Hortobágyi, Borbála; Corenblit, Dov; Vautier, Franck; Steiger, Johannes; Roussel, Erwan; Burkart, Andreas; Peiry, Jean-Luc

    2017-11-01

    Over the last twenty years, significant technical advances have turned photogrammetry into a relevant tool for the integrated analysis of biogeomorphic cross-scale interactions within vegetated fluvial corridors, which will largely contribute to the development and improvement of self-sustainable river restoration efforts. Here, we propose a cost-effective, easily reproducible approach based on stereophotogrammetry and the Structure from Motion (SfM) technique to study feedbacks between fluvial geomorphology and riparian vegetation at different nested spatiotemporal scales. We combined different photogrammetric methods and thus were able to investigate biogeomorphic feedbacks at all three spatial scales (i.e., corridor, alluvial bar and micro-site) and at three different temporal scales, i.e., present, recent past and long-term evolution, on a diversified riparian landscape mosaic. We evaluate the performance and the limits of photogrammetric methods by targeting a set of fundamental parameters necessary to study biogeomorphic feedbacks at each of the three nested spatial scales and, when possible, propose appropriate solutions. The RMSE varies between 0.01 and 2 m depending on spatial scale and photogrammetric method. Despite some remaining difficulties in properly applying these methods with current technologies under all circumstances in fluvial biogeomorphic studies, e.g. the detection of vegetation density or landform topography under a dense vegetation canopy, we suggest that photogrammetry is a promising instrument for the quantification of biogeomorphic feedbacks at nested spatial scales within river systems and for developing appropriate river management tools and strategies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Edge preserving smoothing and segmentation of 4-D images via transversely isotropic scale-space processing and fingerprint analysis

    International Nuclear Information System (INIS)

    Reutter, Bryan W.; Algazi, V. Ralph; Gullberg, Grant T; Huesman, Ronald H.

    2004-01-01

    Enhancements are described for an approach that unifies edge preserving smoothing with segmentation of time sequences of volumetric images, based on differential edge detection at multiple spatial and temporal scales. Potential applications of these 4-D methods include segmentation of respiratory gated positron emission tomography (PET) transmission images to improve accuracy of attenuation correction for imaging heart and lung lesions, and segmentation of dynamic cardiac single photon emission computed tomography (SPECT) images to facilitate unbiased estimation of time-activity curves and kinetic parameters for left ventricular volumes of interest. Improved segmentation of lung surfaces in simulated respiratory gated cardiac PET transmission images is achieved with a 4-D edge detection operator composed of edge preserving 1-D operators applied in various spatial and temporal directions. Smoothing along the axis of a 1-D operator is driven by structure separation seen in the scale-space fingerprint, rather than by image contrast. Spurious noise structures are reduced with use of small-scale isotropic smoothing in directions transverse to the 1-D operator axis. Analytic expressions are obtained for directional derivatives of the smoothed, edge preserved image, and the expressions are used to compose a 4-D operator that detects edges as zero-crossings in the second derivative in the direction of the image intensity gradient. Additional improvement in segmentation is anticipated with use of multiscale transversely isotropic smoothing and a novel interpolation method that improves the behavior of the directional derivatives. The interpolation method is demonstrated on a simulated 1-D edge and incorporation of the method into the 4-D algorithm is described

  16. WORKSHOP: Inner space - outer space

    International Nuclear Information System (INIS)

    Anon.

    1984-01-01

    During the first week of May, the Fermilab theoretical astrophysics group hosted an international conference on science at the interface of particle physics and cosmology/astrophysics. The conference (Inner Space-Outer Space) was attended by a very diverse group of more than 200 physical scientists, including astronomers, astrophysicists, cosmologists, low-temperature physicists, and elementary particle theorists and experimentalists. The common interest which brought this diverse group together is the connection between physics on the smallest scale probed by man - the realm of elementary particle physics - and physics on the largest scale imaginable (the entire Universe) - the realm of cosmology

  18. Distribution function approach to redshift space distortions. Part II: N-body simulations

    International Nuclear Information System (INIS)

    Okumura, Teppei; Seljak, Uroš; McDonald, Patrick; Desjacques, Vincent

    2012-01-01

    Measurement of redshift-space distortions (RSD) offers an attractive method to directly probe the cosmic growth history of density perturbations. A distribution function approach, in which RSD can be written as a sum over density-weighted velocity moment correlators, has recently been developed. In this paper we use results of N-body simulations to investigate the individual contributions and convergence of this expansion for dark matter. If the series is expanded as a function of powers of μ, the cosine of the angle between the Fourier mode and the line of sight, then there are a finite number of terms contributing at each order. We present these terms and investigate their contribution to the total as a function of wavevector k. For the μ² term the correlation between density and momentum dominates on large scales. Higher order corrections, which act as a Finger-of-God (FoG) term, contribute 1% at k ∼ 0.015 h Mpc⁻¹ and 10% at k ∼ 0.05 h Mpc⁻¹ at z = 0, while for k > 0.15 h Mpc⁻¹ they dominate and make the total negative. These higher order terms are dominated by density-energy density correlations, which contribute negatively to the power, while the contribution from the vorticity part of the momentum density auto-correlation adds to the total power but is an order of magnitude lower. For the μ⁴ term the dominant term on large scales is the scalar part of the momentum density auto-correlation, while higher order terms dominate for k > 0.15 h Mpc⁻¹. For μ⁶ and μ⁸ we find very little power on large scales, shooting up by 2–3 orders of magnitude towards smaller scales. We also compare the expansion to the full 2-d P_ss(k,μ), as well as to the monopole, quadrupole, and hexadecapole integrals of P_ss(k,μ). For these statistics an infinite number of terms contribute, and we find that the expansion achieves percent-level accuracy at 6th order for sufficiently small kμ, but breaks down on smaller scales because the series is no longer perturbative. We explore resummation of the terms into FoG
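
    For orientation, the lowest-order structure of such a μ expansion is the familiar linear-theory (Kaiser) limit, with δ the density and θ the velocity-divergence field; this is a standard schematic, while the paper's full expansion contains many more density-weighted velocity-moment correlators:

```latex
P_{ss}(k,\mu) \;\simeq\; P_{\delta\delta}(k) \;+\; 2\mu^{2}\,P_{\delta\theta}(k) \;+\; \mu^{4}\,P_{\theta\theta}(k)
```

    On large scales the μ² density-momentum and μ⁴ momentum-momentum contributions discussed in the abstract reduce to the second and third terms of this expression.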

  19. Effective modelling of percolation at the landscape scale using data-based approaches

    Science.gov (United States)

    Selle, Benny; Lischeid, Gunnar; Huwe, Bernd

    2008-06-01

    Process-based models have been extensively applied to assess the impact of landuse change on water quantity and quality at landscape scales. However, the routine application of those models suffers from large computational efforts, lack of transparency and the requirement of many input parameters. Data-based models such as Feed-Forward Multilayer Perceptrons (MLP) and Classification and Regression Trees (CART) may be used as effective models, i.e. simple approximations of complex process-based models. These data-based approaches can subsequently be applied for scenario analysis and as a transparent management tool, provided climatic boundary conditions and the basic model assumptions of the process-based models do not change dramatically. In this study, we apply MLP, CART and Multiple Linear Regression (LR) to model the spatially distributed and spatially aggregated percolation in soils using weather, groundwater and soil data. The percolation data are obtained via numerical experiments with Hydrus1D. Thus, the complex process-based model is approximated using simpler data-based approaches. The MLP model explains most of the percolation variance in time and space without using any soil information. This reflects the effective dimensionality of the process-based model and suggests that percolation in the study area may be modelled much more simply than with Hydrus1D. The CART model shows that soil properties play a negligible role for percolation under wet climatic conditions. However, they become more important if the conditions turn drier. The LR method does not yield satisfactory predictions for the spatially distributed percolation; however, the spatially aggregated percolation is well approximated. This may indicate that the soils behave more simply (i.e. more linearly) when percolation dynamics are upscaled.
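
    The "effective model" idea can be sketched in miniature: generate input/output pairs from a stand-in nonlinear process model, then fit a simple data-based approximation to them. The toy process model and its coefficients below are invented for illustration; they are not Hydrus1D, and the LR fit stands in for the MLP/CART/LR comparison.

```python
# Fit a multiple linear regression (normal equations, pure stdlib) as an
# effective model of a toy thresholded "percolation" process.
import random

def process_model(rain, et):
    # toy percolation: thresholded linear response (made-up coefficients)
    return max(0.0, 0.8 * rain - 0.5 * et)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_lr(X, y):
    """Least-squares fit with intercept via the normal equations."""
    Xb = [[1.0] + row for row in X]
    p = len(Xb[0])
    XtX = [[sum(r[i] * r[j] for r in Xb) for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * yi for r, yi in zip(Xb, y)) for i in range(p)]
    return solve(XtX, Xty)

random.seed(0)
X = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(200)]
y = [process_model(r, e) for r, e in X]
b0, b_rain, b_et = fit_lr(X, y)

preds = [b0 + b_rain * r + b_et * e for r, e in X]
ybar = sum(y) / len(y)
r2 = 1 - (sum((a - q) ** 2 for a, q in zip(y, preds))
          / sum((a - ybar) ** 2 for a in y))
# The linear effective model recovers the signs of both drivers and most of
# the variance, despite the threshold nonlinearity in the process model.
```

    The residual variance left by the linear fit is exactly the kind of signal that motivates the nonlinear learners (MLP, CART) in the study.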

  20. Zeta-function regularization approach to finite temperature effects in Kaluza-Klein space-times

    International Nuclear Information System (INIS)

    Bytsenko, A.A.; Vanzo, L.; Zerbini, S.

    1992-01-01

    In the framework of the heat-kernel approach to zeta-function regularization, in this paper the one-loop effective potential at finite temperature for scalar and spinor fields on Kaluza-Klein space-times of the form M^p x M_c^n, where M^p is p-dimensional Minkowski space-time, is evaluated. In particular, when the compact manifold is M_c^n = H^n/Γ, the Selberg trace formula associated with a discrete torsion-free group Γ of the n-dimensional Lobachevsky space H^n is used. An explicit representation for the thermodynamic potential valid for arbitrary temperature is found. As a result a complete high-temperature expansion is presented and the roles of zero modes and topological contributions are discussed
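
    The heat-kernel construction referred to here is standard textbook material and can be summarized schematically (these are the generic relations, not the paper's specific expressions; L denotes the relevant Laplace-type operator and μ a renormalization scale):

```latex
\zeta_L(s) \;=\; \frac{1}{\Gamma(s)} \int_0^{\infty} \mathrm{d}t \; t^{s-1}\, \operatorname{Tr} e^{-tL},
\qquad
\Gamma^{(1)} \;=\; \tfrac{1}{2}\ln\det\!\left(L/\mu^{2}\right)
             \;=\; -\tfrac{1}{2}\left[\zeta_L'(0) + \zeta_L(0)\ln\mu^{2}\right]
```

    The analytic continuation of ζ_L(s) to s = 0 is what renders the formally divergent determinant finite; on H^n/Γ that continuation is controlled by the Selberg trace formula.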

  1. Some applications of nanometer scale structures for current and future X-ray space research

    DEFF Research Database (Denmark)

    Christensen, Finn Erland; Abdali, S; Frederiksen, P K

    1994-01-01

    Nanometer scale structures such as multilayers, gratings and natural crystals are playing an increasing role in spectroscopic applications for X-ray astrophysics. A few examples are briefly described as an introduction to current and planned applications pursued at the Danish Space Research Institute in collaboration with the FOM Institute for Plasma Physics, Nieuwegein, the Max-Planck-Institut für Extraterrestrische Physik, Aussenstelle Berlin, the Space Research Institute, Russian Academy of Sciences, the Smithsonian Astrophysical Observatory, Ovonics Synthetic Materials Company and Lawrence Livermore National Laboratory. These examples include: 1. the application of multilayered Si crystals for simultaneous spectroscopy in two energy bands, one centred around the S K-emission near 2.45 keV and the other below the C K absorption edge at 0.284 keV; 2. the use of in-depth graded period multilayer …

  2. Long-Time Behavior and Critical Limit of Subcritical SQG Equations in Scale-Invariant Sobolev Spaces

    Science.gov (United States)

    Coti Zelati, Michele

    2018-02-01

    We consider the subcritical SQG equation in its natural scale-invariant Sobolev space and prove the existence of a global attractor of optimal regularity. The proof is based on a new energy estimate in Sobolev spaces to bootstrap the regularity to the optimal level, derived by means of nonlinear lower bounds on the fractional Laplacian. This estimate appears to be new in the literature and allows a sharp use of the subcritical nature of the L^∞ bounds for this problem. As a by-product, we obtain attractors for weak solutions as well. Moreover, we study the critical limit of the attractors and prove their stability and upper semicontinuity with respect to the strength of the diffusion.
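
On a periodic domain the fractional Laplacian behind the subcritical diffusion term is simply the Fourier multiplier |k|^(2s). A minimal numerical sketch (grid size and fractional order chosen arbitrarily), checked against the exact action on a single mode:

```python
import numpy as np

# The fractional Laplacian (-Δ)^s acts spectrally as |k|^{2s}.
# For u = sin(3x) on [0, 2π), (-Δ)^s u = 3^{2s} sin(3x) exactly.
N, s = 256, 0.4                      # resolution and order (illustrative)
x = 2 * np.pi * np.arange(N) / N
u = np.sin(3 * x)
k = np.fft.fftfreq(N, d=1.0 / N)     # integer wavenumbers
frac_lap = np.fft.ifft(np.abs(k) ** (2 * s) * np.fft.fft(u)).real
err = np.max(np.abs(frac_lap - 3 ** (2 * s) * np.sin(3 * x)))
print(f"max error vs exact: {err:.2e}")
```

The spectral evaluation is exact for a single Fourier mode up to round-off, which makes it a convenient unit test for fractional-dissipation codes.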

  3. Resolving kinematic redundancy with constraints using the FSP (Full Space Parameterization) approach

    International Nuclear Information System (INIS)

    Pin, F.G.; Tulloch, F.A.

    1996-01-01

    A solution method is presented for the motion planning and control of kinematically redundant serial-link manipulators in the presence of motion constraints such as joint limits or obstacles. Given a trajectory for the end-effector, the approach utilizes the recently proposed Full Space Parameterization (FSP) method to generate a parameterized expression for the entire space of solutions of the unconstrained system. At each time step, a constrained optimization technique is then used to analytically find the specific joint motion solution that satisfies the desired task objective and all the constraints active during the time step. The method is applicable to systems operating in a priori known environments or in unknown environments with sensor-based obstacle detection. The derivation of the analytical solution is first presented for a general type of kinematic constraint and is then applied to the problem of motion planning for redundant manipulators with joint limits and obstacle avoidance. Sample results using planar and 3-D manipulators with various degrees of redundancy are presented to illustrate the efficiency and wide applicability of constrained motion planning using the FSP approach
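
FSP builds a parameterized expression for the entire solution space of the unconstrained redundant system; the same space can be written compactly with the standard pseudoinverse-plus-null-space parameterization. A hedged sketch (the Jacobian below is an arbitrary numerical example, not a real manipulator, and this is not the FSP formulation itself):

```python
import numpy as np

# Redundancy resolution sketch: every joint velocity achieving the task
# velocity xdot has the form qdot = J^+ xdot + (I - J^+ J) z, with z free.
# Constraints (joint limits, obstacles) are then handled by choosing z.
rng = np.random.default_rng(1)
J = rng.normal(size=(2, 4))          # 2-DOF task, 4 joints -> redundant
xdot = np.array([0.1, -0.2])         # desired end-effector velocity
Jp = np.linalg.pinv(J)
N_proj = np.eye(4) - Jp @ J          # projector onto the Jacobian null space
z = rng.normal(size=4)               # any preference vector (illustrative)
qdot = Jp @ xdot + N_proj @ z
print(np.allclose(J @ qdot, xdot))   # task satisfied regardless of z
```

Since J(I - J⁺J) = 0, the null-space term never disturbs the end-effector trajectory, which is exactly the freedom a constrained optimizer exploits at each time step.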

  4. A hybrid approach to estimating national scale spatiotemporal variability of PM2.5 in the contiguous United States.

    Science.gov (United States)

    Beckerman, Bernardo S; Jerrett, Michael; Serre, Marc; Martin, Randall V; Lee, Seung-Jae; van Donkelaar, Aaron; Ross, Zev; Su, Jason; Burnett, Richard T

    2013-07-02

    Airborne fine particulate matter exhibits spatiotemporal variability at multiple scales, which presents challenges to estimating exposures for health effects assessment. Here we created a model to predict ambient particulate matter less than 2.5 μm in aerodynamic diameter (PM2.5) across the contiguous United States to be applied to health effects modeling. We developed a hybrid approach combining a land use regression model (LUR) selected with a machine learning method, and Bayesian Maximum Entropy (BME) interpolation of the LUR space-time residuals. The PM2.5 data set included 104,172 monthly observations at 1464 monitoring locations with approximately 10% of locations reserved for cross-validation. LUR models were based on remote sensing estimates of PM2.5, land use and traffic indicators. Normalized cross-validated R² values for LUR were 0.63 and 0.11 with and without remote sensing, respectively, suggesting remote sensing is a strong predictor of ground-level concentrations. In the models including the BME interpolation of the residuals, cross-validated R² values were 0.79 for both configurations; the model without remotely sensed data described more fine-scale variation than the model including remote sensing. Our results suggest that our modeling framework can predict ground-level concentrations of PM2.5 at multiple scales over the contiguous U.S.
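
The two-stage structure of the hybrid can be sketched in miniature: a regression on covariates, then spatial interpolation of its residuals (inverse-distance weighting stands in for BME here; all data are synthetic and sites sit on a 1-D transect for simplicity):

```python
import numpy as np

# Stage 1: "LUR" regression on a land-use covariate.
# Stage 2: interpolate the spatially structured residual field.
x = np.linspace(0, 10, 40)                 # monitor coordinates (arbitrary)
landuse = np.tile([0.0, 1.0], 20)          # binary land-use covariate
pm25 = 8 + 3 * landuse + 2 * np.sin(x)     # truth: covariate + spatial field

A = np.column_stack([np.ones_like(x), landuse])
coef, *_ = np.linalg.lstsq(A, pm25, rcond=None)
resid = pm25 - A @ coef                    # what the regression misses

x0, lu0 = 5.05, 1.0                        # unmonitored prediction site
lur_pred = coef[0] + coef[1] * lu0
w = 1.0 / np.abs(x - x0) ** 2              # inverse-distance weights
hybrid_pred = lur_pred + np.sum(w * resid) / np.sum(w)

truth = 8 + 3 * lu0 + 2 * np.sin(x0)
print(abs(lur_pred - truth), abs(hybrid_pred - truth))
```

The residual interpolation recovers the spatial signal the covariates cannot explain, which is why the hybrid cross-validates better than the LUR alone.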

  5. MUSIC: MUlti-Scale Initial Conditions

    Science.gov (United States)

    Hahn, Oliver; Abel, Tom

    2013-11-01

    MUSIC generates multi-scale initial conditions with multiple levels of refinement for cosmological ‘zoom-in’ simulations. The code uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). MUSIC achieves rms relative errors of the order of 10^-4 for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier-space-induced interference ringing.
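
The core operation, convolving Gaussian white noise with a transfer-function kernel, can be sketched in one dimension (the transfer function below is an arbitrary stand-in, not a cosmological one):

```python
import numpy as np

# Imprint a target spectrum on white noise by multiplying its Fourier
# transform by a transfer function T(k), then transforming back.
rng = np.random.default_rng(2)
N = 1024
noise = rng.normal(size=N)                 # white noise: flat spectrum
k = np.fft.fftfreq(N) * N
T = 1.0 / (1.0 + (k / 32.0) ** 2)          # assumed transfer function
delta = np.fft.ifft(T * np.fft.fft(noise)).real

# The filtered field keeps large-scale (low-k) power and suppresses
# small scales, as the chosen T(k) prescribes.
spec = np.abs(np.fft.fft(delta)) ** 2
low = spec[np.abs(k) < 16].mean()
high = spec[np.abs(k) > 256].mean()
print(low > 10 * high)
```

MUSIC performs this convolution adaptively in real space on nested refinement grids, which is what keeps the errors localized at coarse-fine boundaries.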

  6. An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling

    Directory of Open Access Journals (Sweden)

    Theodore W. Manikas

    2011-02-01

    Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. We accomplish the incorporation of disaster tolerance within a system by simulating various threats to the system operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to the large computational complexity requirements. To address this limitation, an axiomatic approach that decomposes a large-scale system into smaller subsystems is developed that allows the subsystems to be independently modeled. This approach is implemented using a data communications network system example. The results indicate that the decomposition approach produces simulation responses that are similar to the full system approach, but with greatly reduced simulation time.

  7. Giant monopole transition densities within the local scale ATDHF approach

    International Nuclear Information System (INIS)

    Dimitrova, S.S.; Petkov, I.Zh.; Stoitsov, M.V.

    1986-01-01

    Transition densities for the 12C, 16O, 28Si, 32S, 40Ca, 48Ca, 56Ni, 90Zr and 208Pb even-even nuclei corresponding to nuclear giant monopole resonances are obtained within a local-scale adiabatic time-dependent Hartree-Fock approach in terms of effective Skyrme-type forces SkM and S3. The approach, the particular form and all necessary coefficients of these transition densities are reported. They are of a simple analytical form and may be used directly, for example, in analyses of inelastic particle scattering on nuclei by the distorted-wave method, in this way allowing a test of the theoretical interpretation of giant monopole resonances.

  8. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    Science.gov (United States)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the effect of various image selection approaches, such as systematic or random selection, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and
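
The central images-versus-points trade-off can be reproduced with a small Monte Carlo sketch (the cover distributions and sample sizes are illustrative assumptions, not the study's simulated biotas):

```python
import random

# When between-image variability dominates, spending a fixed scoring
# budget on more images beats more points per image.
random.seed(3)

def estimate_cover(n_images, n_points):
    # Each image has its own true cover (between-image variance);
    # points within an image are Bernoulli draws (within-image noise).
    covers = [random.random() for _ in range(n_images)]
    hits = [sum(random.random() < p for _ in range(n_points)) / n_points
            for p in covers]
    return sum(hits) / n_images

def std_err(n_images, n_points, reps=2000):
    est = [estimate_cover(n_images, n_points) for _ in range(reps)]
    m = sum(est) / reps
    return (sum((e - m) ** 2 for e in est) / reps) ** 0.5

se_many_images = std_err(n_images=20, n_points=5)    # 100 points total
se_many_points = std_err(n_images=5, n_points=20)    # 100 points total
print(se_many_images < se_many_points)
```

With equal total effort, the many-images design has markedly lower standard error, matching the paper's recommendation for non-rare groups.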

  9. Magnetospheric Multiscale (MMS) Observation of Plasma Velocity-Space Cascade Processes

    Science.gov (United States)

    Parashar, T. N.; Servidio, S.; Matthaeus, W. H.; Chasapis, A.; Perrone, D.; Valentini, F.; Veltri, P.; Gershman, D. J.; Schwartz, S. J.; Giles, B. L.; Fuselier, S. A.; Phan, T.; Burch, J.

    2017-12-01

    Plasma turbulence is investigated using high-resolution ion velocity distributions, measured by the Magnetospheric Multiscale Mission (MMS) in the Earth's magnetosheath. The particle distribution manifests large fluctuations, suggesting a cascade-like process in velocity space, invoked by theorists for many years. This complex velocity space structure is investigated using a three-dimensional Hermite transform that reveals a power law distribution of moments. A Kolmogorov approach leads directly to a range of predictions for this phase-space cascade. The scaling theory is in agreement with observations, suggesting a new path for the study of plasma turbulence in weakly collisional space and astrophysical plasmas.
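
The Hermite transform at the heart of the analysis can be sketched in one velocity dimension: expand a perturbed Maxwellian in probabilists' Hermite polynomials and recover the coefficients by Gauss-Hermite quadrature (the perturbation amplitude is an arbitrary choice):

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# f(v) = w(v) * (1 + eps * He2(v)), with w the unit Gaussian weight.
# The Hermite spectrum should read c0 = 1, c2 = eps, odd moments = 0.
eps = 0.05                              # He2 perturbation amplitude (assumed)
v, wts = He.hermegauss(64)              # nodes/weights for ∫ g(v) e^{-v²/2} dv

def coeff(m):
    # c_m = (1/m!) ∫ f(v) He_m(v) dv, using the orthogonality relation
    # ∫ He_m He_n e^{-v²/2} dv = sqrt(2π) m! δ_mn
    g = (1 + eps * He.hermeval(v, [0, 0, 1])) * He.hermeval(v, [0] * m + [1])
    return np.sum(wts * g) / (np.sqrt(2 * np.pi) * math.factorial(m))

print([coeff(m) for m in range(4)])
```

In the MMS analysis the same projection is done in three velocity dimensions, and the fall-off of |c_m|² with m plays the role of a velocity-space power spectrum.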

  10. Space Colonization Using Space-Elevators from Phobos

    Science.gov (United States)

    Weinstein, Leonard M.

    2003-01-01

    A novel approach is examined for creating an industrial civilization beyond Earth. The approach would take advantage of the unique configuration of Mars and its moon Phobos to make a transportation system capable of raising mass from the surface of Mars to space at a low cost. Mars would be used as the primary location for support personnel and infrastructure. Phobos would be used as a source of raw materials for space-based activity, and as an anchor for tethered carbon-nanotube-based space-elevators. One space-elevator would terminate at the upper edge of Mars' atmosphere. Small craft would be launched from Mars' surface to rendezvous with the moving elevator tip and their payloads detached and raised with solar powered loop elevators to Phobos. Another space-elevator would be extended outward from Phobos to launch craft toward the Earth/Moon system or the asteroid belt. The outward tip would also be used to catch arriving craft. This approach would allow Mars to be colonized, and allow transportation of people and supplies from Mars to support the space industry. In addition, large quantities of material obtained from Phobos could be used to construct space habitats and also supply propellant and material for space industry in the Earth/Moon system as well as around Mars.

  11. Noninvariance of Space and Time Scale Ranges under a Lorentz Transformation and the Implications for the Numerical Study of Relativistic Systems

    International Nuclear Information System (INIS)

    Vay, J.-L.

    2007-01-01

    We present an analysis which shows that the ranges of space and time scales spanned by a system are not invariant under the Lorentz transformation. This implies the existence of a frame of reference which minimizes an aggregate measure of the range of space and time scales. Such a frame is derived for example cases: free electron laser, laser-plasma accelerator, and particle beam interacting with electron clouds. Implications for experimental, theoretical and numerical studies are discussed. The most immediate relevance is the reduction by orders of magnitude in computer simulation run times for such systems
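
The frame dependence of the scale range can be illustrated with a toy scan (the two chosen scales and their values are assumptions for illustration, not the paper's worked cases):

```python
import numpy as np

# For a laser-plasma-like system with lab-frame plasma length L and laser
# wavelength lam, a boost transforms them to L/gamma and lam*gamma*(1+beta).
# The ratio of largest to smallest scale - the range a simulation must
# resolve - is minimized in an intermediate frame, not the lab frame.
L, lam = 1e-2, 1e-6                    # metres (assumed values)
gammas = np.linspace(1, 200, 2000)
betas = np.sqrt(1 - 1 / gammas ** 2)
big = np.maximum(L / gammas, lam * gammas * (1 + betas))
small = np.minimum(L / gammas, lam * gammas * (1 + betas))
rng_scales = big / small
g_opt = gammas[np.argmin(rng_scales)]
print(g_opt, rng_scales.min(), rng_scales[0])
```

The lab-frame range of four orders of magnitude collapses to order unity near the optimal boost, which is the origin of the advertised reduction in simulation run times.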

  12. Using Citizen Science Observations to Model Species Distributions Over Space, Through Time, and Across Scales

    Science.gov (United States)

    Kelling, S.

    2017-12-01

    The goal of Biodiversity research is to identify, explain, and predict why a species' distribution and abundance vary through time, space, and with features of the environment. Measuring these patterns and predicting their responses to change are not exercises in curiosity. Today, they are essential tasks for understanding the profound effects that humans have on earth's natural systems, and for developing science-based environmental policies. To gain insight about species' distribution patterns requires studying natural systems at appropriate scales, yet studies of ecological processes continue to be compromised by inadequate attention to scale issues. How spatial and temporal patterns in nature change with scale often reflects fundamental laws of physics, chemistry, or biology, and we can identify such basic, governing laws only by comparing patterns over a wide range of scales. This presentation will provide several examples that integrate bird observations made by volunteers, with NASA Earth Imagery using Big Data analysis techniques to analyze the temporal patterns of bird occurrence across scales—from hemisphere-wide views of bird distributions to the impact of powerful city lights on bird migration.

  13. Understanding space weather with new physical, mathematical and philosophical approaches

    Science.gov (United States)

    Mateev, Lachezar; Velinov, Peter; Tassev, Yordan

    2016-07-01

    The actual problems of solar-terrestrial physics, in particular of space weather, are related to the prediction of the space environment state and are solved by means of different analyses and models. The development of these investigations can also be considered from another side. This is the philosophical and mathematical approach towards this physical reality. What does it constitute? We have a set of physical processes which occur in the Sun and interplanetary space. All these processes interact with each other and simultaneously participate in the general process which forms the space weather. Let us now consider Leibniz's monads (G.W. von Leibniz, 1714, Monadologie, Wien; Id., 1710, Théodicée, Amsterdam) and use some of their properties. There are in total 90 theses for monads in Leibniz's work (1714), e.g. "(1) The Monad, of which we shall here speak, is nothing but a simple substance, which enters into compounds. By 'simple' is meant 'without parts'. (Theod. 10.); … (56) Now this connexion or adaptation of all created things to each and of each to all, means that each simple substance has relations which express all the others, and, consequently, that it is a perpetual living mirror of the universe. (Theod. 130, 360.); (59) … this universal harmony, according to which every substance exactly expresses all others through the relations it has with them. (63) … every Monad is, in its own way, a mirror of the universe, and the universe is ruled according to a perfect order. (Theod. 403.)", etc. Let us introduce into the properties of monads, instead of the word "monad", the word "process". We obtain the following statement: Each process reflects all other processes and all other processes reflect this process. This analogy is not formal at all; it reflects accurately the relation between the physical processes and their unity. The category monad, which in Leibniz's Monadology generally reflects the philosophical sense, is fully identical with the

  14. Cell culture experiments planned for the space bioreactor

    Science.gov (United States)

    Morrison, Dennis R.; Cross, John H.

    1987-01-01

    Culturing of cells in a pilot-scale bioreactor remains to be done in microgravity. An approach is presented based on several studies of cell culture systems. Previous and current cell culture research in microgravity which is specifically directed towards development of a space bioprocess is described. Cell culture experiments planned for a microgravity sciences mission are described in abstract form.

  15. A perturbative approach to the redshift space correlation function: beyond the Standard Model

    Science.gov (United States)

    Bose, Benjamin; Koyama, Kazuya

    2017-08-01

    We extend our previous redshift space power spectrum code to the redshift space correlation function. Here we focus on the Gaussian Streaming Model (GSM). Again, the code accommodates a wide range of modified gravity and dark energy models. For the non-linear real space correlation function used in the GSM we use the Fourier transform of the RegPT 1-loop matter power spectrum. We compare predictions of the GSM for a Vainshtein-screened and a Chameleon-screened model as well as GR. These predictions are compared to the Fourier transform of the Taruya, Nishimichi and Saito (TNS) redshift space power spectrum model, which is fit to N-body data. We find very good agreement between the Fourier transform of the TNS model and the GSM predictions, with <= 6% deviations in the first two correlation function multipoles for all models for redshift space separations 50 Mpc/h <= s <= 180 Mpc/h. Excellent agreement is found in the differences between the modified gravity and GR multipole predictions for both approaches to the redshift space correlation function, highlighting their matched ability in picking up deviations from GR. We elucidate the timeliness of such non-standard templates at the dawn of stage-IV surveys and discuss necessary preparations and extensions needed for upcoming high quality data.
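
The Fourier transform from power spectrum to correlation function reduces, for the monopole, to xi(r) = (1/2π²) ∫ k² P(k) j0(kr) dk. A sketch with a toy Gaussian spectrum (not RegPT or TNS) for which the integral is analytic, so the quadrature can be checked:

```python
import numpy as np

# Monopole transform xi(r) = (1/2π²) ∫ k² P(k) j0(kr) dk for P(k) = e^{-k²},
# whose exact answer is sqrt(π)/(8π²) * exp(-r²/4).
k = np.linspace(0, 12, 48001)
P = np.exp(-k ** 2)                        # toy spectrum (assumed)
r = 2.0
j0 = np.sinc(k * r / np.pi)                # spherical Bessel j0(x) = sin(x)/x
integrand = k ** 2 * P * j0
# trapezoidal rule, written out explicitly
xi = np.sum((integrand[1:] + integrand[:-1]) * np.diff(k)) / 2 / (2 * np.pi ** 2)
xi_exact = np.sqrt(np.pi) / (8 * np.pi ** 2) * np.exp(-r ** 2 / 4)
print(abs(xi / xi_exact - 1))
```

In practice codes use fast Hankel (FFTLog-type) transforms rather than direct quadrature, but the identity being computed is the same.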

  16. Path integral approach for superintegrable potentials on spaces of non-constant curvature. Pt. 2. Darboux spaces D{sub III} and D{sub IV}

    Energy Technology Data Exchange (ETDEWEB)

    Grosche, C. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Pogosyan, G.S. [Joint Inst. of Nuclear Research, Moscow (Russian Federation). Bogoliubov Lab. of Theoretical Physics]|[Guadalajara Univ., Jalisco (Mexico). Dept. de Matematicas CUCEI; Sissakian, A.N. [Joint Inst. of Nuclear Research, Moscow (Russian Federation). Bogoliubov Lab. of Theoretical Physics

    2006-08-15

    This is the second paper on the path integral approach of superintegrable systems on Darboux spaces, spaces of non-constant curvature. We analyze in the spaces D{sub III} and D{sub IV} five respectively four superintegrable potentials, which were first given by Kalnins et al. We are able to evaluate the path integral in most of the separating coordinate systems, leading to expressions for the Green functions, the discrete and continuous wave-functions, and the discrete energy-spectra. In some cases, however, the discrete spectrum cannot be stated explicitly, because it is determined by a higher order polynomial equation. We show that also the free motion in Darboux space of type III can contain bound states, provided the boundary conditions are appropriate. We state the energy spectrum and the wave-functions, respectively. (orig.)

  17. Technique for forcing high Reynolds number isotropic turbulence in physical space

    Science.gov (United States)

    Palmore, John A.; Desjardins, Olivier

    2018-03-01

    Many common engineering problems involve the study of turbulence interacting with other physical processes. For many such processes, solutions are expressed most naturally in physical space, which motivates physical-space simulation methods. For simulating isotropic turbulence in physical space, linear forcing is a commonly used strategy because it produces realistic turbulence in an easy-to-implement formulation. However, the method resolves a smaller range of scales on the same mesh than spectral forcing. We propose an alternative approach for turbulence forcing in physical space that uses the low-pass filtered velocity field as the basis of the forcing term. This method is shown to double the range of scales captured by linear forcing while maintaining the flexibility and low computational cost of the original method. This translates to a 60% increase of the Taylor microscale Reynolds number on the same mesh. An extension is made to scalar mixing wherein a scalar field is forced to have an arbitrarily chosen, constant variance. Filtered linear forcing of the scalar field allows for control over the length scale of scalar injection, which could be important when simulating scalar mixing.
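
The filtered-forcing idea can be sketched in a dimensionally reduced setting: build the low-pass-filtered field spectrally and use A·u_filt as the forcing term. No nonlinear or viscous terms are included here, so the unfiltered small scales simply persist (all parameters are illustrative):

```python
import numpy as np

# Toy 1-D filtered linear forcing: energy is injected only at scales
# retained by the low-pass filter; high wavenumbers are left untouched.
rng = np.random.default_rng(4)
N, kc, A, dt = 256, 8, 0.5, 0.01        # cutoff and amplitude (assumed)
u_hat = np.fft.fft(rng.normal(size=N))  # spectral velocity field
k = np.abs(np.fft.fftfreq(N) * N)
low = k <= kc                           # scales kept by the filter

u0_low = np.abs(u_hat[low]).mean()
u0_high = np.abs(u_hat[~low]).copy()
for _ in range(100):                    # du/dt = A * filter(u), forward Euler
    u_hat = u_hat + dt * A * np.where(low, u_hat, 0)

growth = np.abs(u_hat[low]).mean() / u0_low
print(growth)                           # (1 + A*dt)^100 ≈ 1.647
```

Restricting the injection to k <= kc is what frees the rest of the resolved spectrum for the cascade, the mechanism behind the reported doubling of the captured scale range.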

  18. Mentoring SFRM: A New Approach to International Space Station Flight Control Training

    Science.gov (United States)

    Huning, Therese; Barshi, Immanuel; Schmidt, Lacey

    2009-01-01

    The Mission Operations Directorate (MOD) of the Johnson Space Center is responsible for providing continuous operations support for the International Space Station (ISS). Operations support requires flight controllers who are skilled in team performance as well as the technical operations of the ISS. Space Flight Resource Management (SFRM), a NASA-adapted variant of Crew Resource Management (CRM), is the competency model used in the MOD. ISS flight controller certification has evolved to include a balanced focus on development of SFRM and technical expertise. The latest challenge the MOD faces is how to certify an ISS flight controller (Operator) to a basic level of effectiveness in 1 year. SFRM training uses a two-pronged approach to expediting operator certification: 1) embed SFRM skills training into all Operator technical training and 2) use senior flight controllers as mentors. This paper focuses on how the MOD uses senior flight controllers as mentors to train SFRM skills.

  19. On velocity-space sensitivity of fast-ion D-alpha spectroscopy

    DEFF Research Database (Denmark)

    Salewski, Mirko; Geiger, B.; Moseev, Dmitry

    2014-01-01

    The velocity-space observation regions and sensitivities in fast-ion Dα (FIDA) spectroscopy measurements are often described by so-called weight functions. Here we derive expressions for FIDA weight functions accounting for the Doppler shift, Stark splitting, and the charge-exchange reaction … and electron transition probabilities. Our approach yields an efficient way to calculate correctly scaled FIDA weight functions and implies simple analytic expressions for their boundaries that separate the triangular observable regions in (v‖, v⊥)-space from the unobservable regions. These boundaries …

  20. Responses of Cloud Type Distributions to the Large-Scale Dynamical Circulation: Water Budget-Related Dynamical Phase Space and Dynamical Regimes

    Science.gov (United States)

    Wong, Sun; Del Genio, Anthony; Wang, Tao; Kahn, Brian; Fetzer, Eric J.; L'Ecuyer, Tristan S.

    2015-01-01

    Goals: Water budget-related dynamical phase space; Connect large-scale dynamical conditions to atmospheric water budget (including precipitation); Connect atmospheric water budget to cloud type distributions.

  1. Real Space Approach to CMB deboosting

    CERN Document Server

    Yoho, Amanda; Starkman, Glenn D.; Pereira, Thiago S.

    2013-01-01

    The effect of our Galaxy's motion through the Cosmic Microwave Background rest frame, which aberrates and Doppler shifts incoming photons measured by current CMB experiments, has been shown to produce mode-mixing in the multipole-space temperature coefficients. However, multipole-space determinations are subject to many difficulties, and a real-space analysis can provide a straightforward alternative. In this work we describe a numerical method for removing Lorentz-boost effects from real-space temperature maps. We show that deboosting a map so that one can accurately extract the temperature power spectrum requires calculating the boost kernel at a finer pixelization than one might naively expect. In idealized cases that allow for easy comparison to analytic results, we have confirmed that there is indeed mode mixing among the spherical harmonic coefficients of the temperature. We find that using a boost kernel calculated at Nside=8192 leads to a 1% bias in the binned boosted power spectrum at l~2000, while ...
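
The boost acts on photon arrival directions through relativistic aberration, and deboosting is the inverse map with the opposite velocity. A sketch on the polar angle alone (the full real-space operation also Doppler-weights the temperature, omitted here):

```python
import numpy as np

# Aberration: cosθ -> (cosθ + β)/(1 + β cosθ). Deboosting applies the
# inverse map with -β; the round trip must recover the input directions.
beta = 1.23e-3                          # ~ observer speed relative to the CMB
mu = np.linspace(-1, 1, 1001)           # cosθ sampled over the sky
mu_boost = (mu + beta) / (1 + beta * mu)
mu_back = (mu_boost - beta) / (1 - beta * mu_boost)
print(np.max(np.abs(mu_back - mu)), np.max(np.abs(mu_boost - mu)))
```

The maximum angular displacement is of order β (largest at the boost equator), which is why accurate deboosting demands a pixelization much finer than the displacement itself.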

  2. Role of jet spacing and strut geometry on the formation of large scale structures and mixing characteristics

    Science.gov (United States)

    Soni, Rahul Kumar; De, Ashoke

    2018-05-01

    The present study primarily focuses on the effect of the jet spacing and strut geometry on the evolution and structure of the large-scale vortices which play a key role in mixing characteristics in turbulent supersonic flows. Numerically simulated results corresponding to varying parameters such as strut geometry and jet spacing (Xn = nDj such that n = 2, 3, and 5) for a square jet of height Dj = 0.6 mm are presented in the current study, while the work also investigates the presence of the local quasi-two-dimensionality for the X2(2Dj) jet spacing; however, the same is not true for higher jet spacing. Further, the tapered strut (TS) section is modified into the straight strut (SS) for investigation, where the remarkable difference in flow physics is unfolded between the two configurations for similar jet spacing (X2: 2Dj). The instantaneous density and vorticity contours reveal the structures of varying scales undergoing different evolution for the different configurations. The effect of local spanwise rollers is clearly manifested in the mixing efficiency and the jet spreading rate. The SS configuration exhibits excellent near field mixing behavior amongst all the arrangements. However, in the case of TS cases, only the X2(2Dj) configuration performs better due to the presence of local spanwise rollers. The qualitative and quantitative analysis reveals that near-field mixing is strongly affected by the two-dimensional rollers, while the early onset of the wake mode is another crucial parameter to have improved mixing. Modal decomposition performed for the SS arrangement sheds light onto the spatial and temporal coherence of the structures, where the most dominant structures are found to be the von Kármán street vortices in the wake region.
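
The modal decomposition used to assess spatial and temporal coherence can be sketched as POD via an SVD of a snapshot matrix; a travelling wave is a convenient test because it has exactly two energetic modes:

```python
import numpy as np

# POD sketch: u(x,t) = sin(x - t) = sin(x)cos(t) - cos(x)sin(t) is rank 2,
# so the first two singular modes carry essentially all the energy.
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
t = np.linspace(0, 10, 80)
snapshots = np.sin(x[:, None] - t[None, :])    # columns are time snapshots
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
print(energy[:3])
```

In a turbulent jet the spectrum decays more slowly, and the leading modes identify coherent structures such as the von Kármán street vortices mentioned above.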

  3. A Programmatic and Engineering Approach to the Development of a Nuclear Thermal Rocket for Space Exploration

    Science.gov (United States)

    Bordelon, Wayne J., Jr.; Ballard, Rick O.; Gerrish, Harold P., Jr.

    2006-01-01

    With the announcement of the Vision for Space Exploration on January 14, 2004, there has been a renewed interest in nuclear thermal propulsion. Nuclear thermal propulsion is a leading candidate for in-space propulsion for human Mars missions; however, the cost to develop a nuclear thermal rocket engine system is uncertain. Key to determining the engine development cost will be the engine requirements, the technology used in the development and the development approach. The engine requirements and technology selection have not been defined and are awaiting definition of the Mars architecture and vehicle definitions. The paper discusses an engine development approach in light of top-level strategic questions and considerations for nuclear thermal propulsion and provides a suggested approach based on work conducted at the NASA Marshall Space Flight Center to support planning and requirements for the Prometheus Power and Propulsion Office. This work is intended to help support the development of a comprehensive strategy for nuclear thermal propulsion, to help reduce the uncertainty in the development cost estimate, and to help assess the potential value of and need for nuclear thermal propulsion for a human Mars mission.

  4. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.0)

    Energy Technology Data Exchange (ETDEWEB)

    Hukerikar, Saurabh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Engelmann, Christian [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-10-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest that very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Practical limits on power consumption in HPC systems will require future systems to embrace innovative architectures, increasing the levels of hardware and software complexities. The resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. These techniques must seek to improve resilience at reasonable overheads to power consumption and performance. While the HPC community has developed various solutions, application-level as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance & power eciency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software ecosystems, which are expected to be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience based on the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. The catalog of resilience design patterns provides designers with reusable design elements. We define a design framework that enhances our understanding of the important

  5. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid; Quintin, Jean-Noë l; Lastovetsky, Alexey

    2014-01-01

    -scale parallelism in mind. Indeed, while in 1990s a system with few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel

  6. Learning in Earth and Space Science: A Review of Conceptual Change Instructional Approaches

    Science.gov (United States)

    Mills, Reece; Tomas, Louisa; Lewthwaite, Brian

    2016-01-01

    In response to calls for research into effective instruction in the Earth and space sciences, and to identify directions for future research, this systematic review of the literature explores research into instructional approaches designed to facilitate conceptual change. In total, 52 studies were identified and analyzed. Analysis focused on the…

  7. Planning additional drilling campaign using two-space genetic algorithm: A game theoretical approach

    Science.gov (United States)

    Kumral, Mustafa; Ozer, Umit

    2013-03-01

    Grade and tonnage are the most important technical uncertainties in mining ventures because of the use of estimations/simulations, which are mostly generated from drill data. Open pit mines are planned and designed on the basis of the blocks representing the entire orebody. Each block has a different estimation/simulation variance, reflecting uncertainty to some extent. The estimation/simulation realizations are submitted to the mine production scheduling process. However, the use of a block model with varying estimation/simulation variances will lead to serious risk in the scheduling. In the medium of multiple simulations, the dispersion variances of blocks can be thought to capture technical uncertainties. However, the dispersion variance cannot handle the uncertainty associated with varying estimation/simulation variances of blocks. This paper proposes an approach that generates the configuration of the best additional drilling campaign so as to produce more homogeneous estimation/simulation variances of blocks. In other words, the objective is to find the best drilling configuration in such a way as to minimize grade uncertainty under a budget constraint. The uncertainty measure of the optimization process in this paper is the interpolation variance, which considers data locations and grades. The problem is expressed as a minmax problem, which focuses on finding the best worst-case performance, i.e., minimizing the interpolation variance of the block generating the maximum interpolation variance. Since the optimization model requires computing the interpolation variances of the blocks being simulated/estimated in each iteration, the problem cannot be solved by standard optimization tools. This motivates the use of a two-space genetic algorithm (GA) approach to solve the problem. The technique has two spaces: feasible drill hole configurations, with minimization of interpolation variance, and drill hole simulations, with maximization of interpolation variance. The two spaces interact to find a minmax solution

  8. Combining Statistical Methodologies in Water Quality Monitoring in a Hydrological Basin - Space and Time Approaches

    OpenAIRE

    Costa, Marco; A. Manuela Gonçalves

    2012-01-01

    This work discusses some statistical approaches that combine multivariate statistical techniques and time series analysis in order to describe and model spatial patterns and temporal evolution, by observing hydrological series of water quality variables recorded in time and space. These approaches are illustrated with a data set collected in the River Ave hydrological basin, located in the Northwest region of Portugal.

  9. Comparison of distal soft-tissue procedures combined with a distal chevron osteotomy for moderate to severe hallux valgus: first web-space versus transarticular approach.

    Science.gov (United States)

    Park, Yu-Bok; Lee, Keun-Bae; Kim, Sung-Kyu; Seon, Jong-Keun; Lee, Jun-Young

    2013-11-06

    There are two surgical approaches for distal soft-tissue procedures for the correction of hallux valgus: the dorsal first web-space approach and the medial transarticular approach. The purpose of this study was to compare the outcomes achieved after use of either of these approaches combined with a distal chevron osteotomy in patients with moderate to severe hallux valgus. One hundred and twenty-two female patients (122 feet) who underwent a distal chevron osteotomy as part of a distal soft-tissue procedure for the treatment of symptomatic unilateral moderate to severe hallux valgus constituted the study cohort. The 122 feet were randomly divided into two groups: a dorsal first web-space approach (group D; sixty feet) and a medial transarticular approach (group M; sixty-two feet). The clinical and radiographic results of the two groups were compared at a mean follow-up time of thirty-eight months. The American Orthopaedic Foot & Ankle Society (AOFAS) hallux metatarsophalangeal-interphalangeal scale scores improved from a mean and standard deviation of 55.5 ± 12.8 points preoperatively to 93.5 ± 6.3 points at the final follow-up in group D, and from 54.9 ± 12.6 points preoperatively to 93.6 ± 6.2 points at the final follow-up in group M. The mean hallux valgus angle in groups D and M was reduced from 32.2° ± 6.3° and 33.1° ± 8.4° preoperatively to 10.5° ± 5.5° and 9.9° ± 5.5°, respectively, at the time of final follow-up. The mean first intermetatarsal angle in groups D and M was reduced from 15.0° ± 2.8° and 15.3° ± 2.7° preoperatively to 6.5° ± 2.2° and 6.3° ± 2.4°, respectively, at the final follow-up. The clinical and radiographic outcomes were not significantly different between the two groups. The final clinical and radiographic outcomes of the two approaches for distal soft-tissue procedures were comparable and equally successful. Accordingly, the results of this study suggest that the medial transarticular

  10. Real-space grids and the Octopus code as tools for the development of new simulation approaches for electronic systems

    Science.gov (United States)

    Andrade, Xavier; Strubbe, David; De Giovannini, Umberto; Larsen, Ask Hjorth; Oliveira, Micael J. T.; Alberdi-Rodriguez, Joseba; Varas, Alejandro; Theophilou, Iris; Helbig, Nicole; Verstraete, Matthieu J.; Stella, Lorenzo; Nogueira, Fernando; Aspuru-Guzik, Alán; Castro, Alberto; Marques, Miguel A. L.; Rubio, Angel

    Real-space grids are a powerful alternative for the simulation of electronic systems. One of the main advantages of the approach is the flexibility and simplicity of working directly in real space where the different fields are discretized on a grid, combined with competitive numerical performance and great potential for parallelization. These properties constitute a great advantage at the time of implementing and testing new physical models. Based on our experience with the Octopus code, in this article we discuss how the real-space approach has allowed for the recent development of new ideas for the simulation of electronic systems. Among these applications are approaches to calculate response properties, modeling of photoemission, optimal control of quantum systems, simulation of plasmonic systems, and the exact solution of the Schrödinger equation for low-dimensionality systems.

  11. Limitations Of The Current State Space Modelling Approach In Multistage Machining Processes Due To Operation Variations

    Science.gov (United States)

    Abellán-Nebot, J. V.; Liu, J.; Romero, F.

    2009-11-01

    The State Space modelling approach has been recently proposed as an engineering-driven technique for part quality prediction in Multistage Machining Processes (MMP). Current State Space models incorporate fixture and datum variations in the multi-stage variation propagation, without explicitly considering common operation variations such as machine-tool thermal distortions, cutting-tool wear, cutting-tool deflections, etc. This paper shows the limitations of the current State Space model through an experimental case study where the effect of the spindle thermal expansion, cutting-tool flank wear and locator errors are introduced. The paper also discusses the extension of the current State Space model to include operation variations and its potential benefits.

  12. Scale relativity: from quantum mechanics to chaotic dynamics.

    Science.gov (United States)

    Nottale, L.

    Scale relativity is a new approach to the problem of the origin of fundamental scales and of scaling laws in physics, which consists in generalizing Einstein's principle of relativity to the case of scale transformations of resolutions. We recall here how it leads to the concept of fractal space-time and to the introduction of a new complex time derivative operator which allows one to recover the Schrödinger equation, and then to generalize it. In high energy quantum physics, it leads to the introduction of a Lorentzian renormalization group, in which the Planck length is reinterpreted as a lowest, unpassable scale, invariant under dilatations. These methods are successively applied to two problems: in quantum mechanics, that of the mass spectrum of elementary particles; in chaotic dynamics, that of the distribution of planets in the Solar System.

  13. Activity markers and household space in Swahili urban contexts: An integrated geoarchaeological approach

    DEFF Research Database (Denmark)

    Wynne-Jones, Stephanie; Sulas, Federica

    , this paper draws from recent work at a Swahili urban site to illustrate the potential and challenges of an integrated geoarchaeological approach to the study of household space. The site of Songo Mnara (14th–16thc. AD) thrived as a Swahili stonetown off the coast of Tanzania. Here, our work has concentrated...

  14. Phase space approach to quantum dynamics

    International Nuclear Information System (INIS)

    Leboeuf, P.

    1991-03-01

    The Schroedinger equation for the time propagation of states of a quantised two-dimensional spherical phase space is replaced by the dynamics of a system of N particles lying in phase space. This is done through factorization formulae of analytic function theory arising in coherent-state representation, the 'particles' being the zeros of the quantum state. For linear Hamiltonians, like a spin in a uniform magnetic field, the motion of the particles is classical. However, non-linear terms induce interactions between the particles. Their time propagation is studied and it is shown that, contrary to integrable systems, for chaotic maps they tend to fill, as their classical counterpart, the whole phase space. (author) 13 refs., 3 figs

  15. An innovative approach to space education

    Science.gov (United States)

    Marton, Christine; Berinstain, Alain B.; Criswick, John

    1994-01-01

    At present, Canada does not have enough scientists to be competitive in the global economy, which is rapidly changing from a reliance on natural resources and industry to information and technology. Space is the final frontier and it is a multidisciplinary endeavor. It requires a knowledge of science and math, as well as non-science areas such as architecture and law. Thus, it can attract a large number of students with a diverse range of interests and career goals. An overview is presented of the space education program designed by Canadian Alumni of the International Space University (CAISU) to encourage students to pursue studies and careers in science and technology and to improve science literacy in Canada.

  16. A Confirmatory Factor Analysis on the Attitude Scale of Constructivist Approach for Science Teachers

    Directory of Open Access Journals (Sweden)

    E. Evrekli

    2010-11-01

    Full Text Available Underlining the importance of teachers for the constructivist approach, the present study attempts to develop an “Attitude Scale of Constructivist Approach for Science Teachers (ASCAST)”. The pre-applications of the scale were administered to a total of 210 science teachers; however, the data obtained from 5 teachers were excluded from the analysis. As a result of the analysis of the data obtained from the pre-applications, it was found that the scale could have a single-factor structure, which was tested using confirmatory factor analysis. As a result of the initial confirmatory factor analysis, the values of fit were examined and found to be low. Subsequently, by examining the modification indices, error covariance was added between items 23 and 24 and the model was tested once again. The added error covariance led to a significant improvement in the model, producing values of fit within acceptable limit values. Thus, it was concluded that the scale could be employed with a single factor. The explained variance for the scale developed with a single-factor structure was calculated to be 50.43% and its reliability was found to be .93. The results obtained suggest that the scale possesses reliable and valid characteristics and could be used in further studies.

  17. A perturbative approach to the redshift space correlation function: beyond the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Bose, Benjamin; Koyama, Kazuya, E-mail: benjamin.bose@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk [Institute of Cosmology and Gravitation, University of Portsmouth, Burnaby Road, Portsmouth, Hampshire, PO1 3FX (United Kingdom)

    2017-08-01

    We extend our previous redshift space power spectrum code to the redshift space correlation function. Here we focus on the Gaussian Streaming Model (GSM). Again, the code accommodates a wide range of modified gravity and dark energy models. For the non-linear real space correlation function used in the GSM we use the Fourier transform of the RegPT 1-loop matter power spectrum. We compare predictions of the GSM for a Vainshtein screened and Chameleon screened model as well as GR. These predictions are compared to the Fourier transform of the Taruya, Nishimichi and Saito (TNS) redshift space power spectrum model which is fit to N-body data. We find very good agreement between the Fourier transform of the TNS model and the GSM predictions, with ≤ 6% deviations in the first two correlation function multipoles for all models for redshift space separations in 50 Mpc/h ≤ s ≤ 180 Mpc/h. Excellent agreement is found in the differences between the modified gravity and GR multipole predictions for both approaches to the redshift space correlation function, highlighting their matched ability in picking up deviations from GR. We elucidate the timeliness of such non-standard templates at the dawn of stage-IV surveys and discuss necessary preparations and extensions needed for upcoming high quality data.

  18. A NEW FRAMEWORK FOR OBJECT-BASED IMAGE ANALYSIS BASED ON SEGMENTATION SCALE SPACE AND RANDOM FOREST CLASSIFIER

    Directory of Open Access Journals (Sweden)

    A. Hadavand

    2015-12-01

    Full Text Available In this paper a new object-based framework is developed to automate scale selection in image segmentation. The quality of image objects has an important impact on further analyses. Due to the strong dependency of segmentation results on the scale parameter, choosing the best value of this parameter for each class becomes a main challenge in object-based image analysis. We propose a new framework which employs a pixel-based land cover map to estimate the initial scale dedicated to each class. These scales are used to build a segmentation scale space (SSS), a hierarchy of image objects. Optimization of the SSS with respect to NDVI and DSM values in each super-object is used to get the best scale in local regions of the image scene. The optimized SSS segmentations are finally classified to produce the final land cover map. A very high resolution aerial image and a digital surface model provided by the ISPRS 2D semantic labelling dataset are used in our experiments. The result of our proposed method is comparable to that of the ESP tool, a well-known method for estimating the segmentation scale, and marginally improved the overall accuracy of classification from 79% to 80%.

  19. Biological challenges of true space settlement

    Science.gov (United States)

    Mankins, John C.; Mankins, Willa M.; Walter, Helen

    2018-05-01

    "Space Settlements" - i.e., permanent human communities beyond Earth's biosphere - have been discussed within the space advocacy community since the 1970s. Now, with the end of the International Space Station (ISS) program fast approaching (planned for 2024-2025) and the advent of low cost Earth-to-orbit (ETO) transportation in the near future, the concept is coming once more into mainstream. Considerable attention has been focused on various issues associated with the engineering and human health considerations of space settlement such as artificial gravity and radiation shielding. However, relatively little attention has been given to the biological implications of a self-sufficient space settlement. Three fundamental questions are explored in this paper: (1) what are the biological "foundations" of truly self-sufficient space settlements in the foreseeable future, (2) what is the minimum scale for such self-sustaining human settlements, and (3) what are the integrated biologically-driven system requirements for such settlements? The paper examines briefly the implications of the answers to these questions in relevant potential settings (including free space, the Moon and Mars). Finally, this paper suggests relevant directions for future research and development in order for such space settlements to become viable in the future.

  20. Scale-invariant gravity: geometrodynamics

    International Nuclear Information System (INIS)

    Anderson, Edward; Barbour, Julian; Foster, Brendan; Murchadha, Niall O

    2003-01-01

    We present a scale-invariant theory, conformal gravity, which closely resembles the geometrodynamical formulation of general relativity (GR). While previous attempts to create scale-invariant theories of gravity have been based on Weyl's idea of a compensating field, our direct approach dispenses with this and is built by extension of the method of best matching w.r.t. scaling developed in the parallel particle dynamics paper by one of the authors. In spatially compact GR, there is an infinity of degrees of freedom that describe the shape of 3-space which interact with a single volume degree of freedom. In conformal gravity, the shape degrees of freedom remain, but the volume is no longer a dynamical variable. Further theories and formulations related to GR and conformal gravity are presented. Conformal gravity is successfully coupled to scalars and the gauge fields of nature. It should describe the solar system observations as well as GR does, but its cosmology and quantization will be completely different

  1. Influence of Extrinsic Information Scaling Coefficient on Double-Iterative Decoding Algorithm for Space-Time Turbo Codes with Large Number of Antennas

    Directory of Open Access Journals (Sweden)

    TRIFINA, L.

    2011-02-01

    Full Text Available This paper analyzes the extrinsic information scaling coefficient influence on double-iterative decoding algorithm for space-time turbo codes with large number of antennas. The max-log-APP algorithm is used, scaling both the extrinsic information in the turbo decoder and the one used at the input of the interference-canceling block. Scaling coefficients of 0.7 or 0.75 lead to a 0.5 dB coding gain compared to the no-scaling case, for one or more iterations to cancel the spatial interferences.

  2. Extension of Space Food Shelf Life Through Hurdle Approach

    Science.gov (United States)

    Cooper, M. R.; Sirmons, T. A.; Froio-Blumsack, D.; Mohr, L.; Young, M.; Douglas, G. L.

    2018-01-01

    The processed and prepackaged space food system is the main source of crew nutrition, and hence central to astronaut health and performance. Unfortunately, space food quality and nutrition degrade to unacceptable levels in two to three years with current food stabilization technologies. Future exploration missions will require a food system that remains safe, acceptable and nutritious through five years of storage within vehicle resource constraints. The potential of stabilization technologies (alternative storage temperatures, processing, formulation, ingredient source, packaging, and preparation procedures), when combined in a hurdle approach, to mitigate quality and nutritional degradation is being assessed. Sixteen representative foods from the International Space Station food system were chosen for production and analysis and will be evaluated initially and at one, three, and five years, with potential for analysis at seven years if necessary. Analysis includes changes in color, texture, nutrition, sensory quality, and rehydration ratio when applicable. The food samples will be stored at -20 C, 4 C, and 21 C. Select food samples will also be evaluated at -80 C to determine the impacts of ultra-cold storage after one and five years. Packaging film barrier properties and mechanical integrity will be assessed before and after processing and storage. At the study conclusion, if the tested hurdles are adequate, formulation, processing, and storage combinations will be uniquely identified for processed food matrices to achieve a five-year shelf life. This study will provide one of the most comprehensive investigations of long duration food stability ever completed, and the achievement of extended food system stability will have profound impacts on health and performance for spaceflight crews and for relief efforts and military applications on Earth.

  3. Space commerce in a global economy - Comparison of international approaches to commercial space

    Science.gov (United States)

    Stone, Barbara A.; Kleber, Peter

    1992-01-01

    A historical perspective, current status, and comparison of national government/commercial space industry relationships in the United States and Europe are presented. It is noted that space technology has been developed and used primarily to meet the needs of civil and military government initiatives. Two future trends of space technology development include new space enterprises, and the national drive to achieve a more competitive global economic position.

  4. AI Techniques for Space: The APSI Approach

    Science.gov (United States)

    Steel, R.; Niézette, M.; Cesta, A.; Verfaille, G.; Lavagna, M.; Donati, A.

    2009-05-01

    This paper will outline the framework and tools developed under the Advanced Planning and Scheduling Initiative (APSI) study performed by VEGA for the European Space Agency in collaboration with three academic institutions, ISTC-CNR, ONERA, and Politecnico di Milano. We will start by illustrating the background history to APSI and why it was needed, giving a brief summary of all the partners within the project and the roles they played within it. We will then take a closer look at what APSI actually consists of, showing the techniques that were used and detailing the framework that was developed within the scope of the project. We will follow this with an elaboration on the three demonstration test scenarios that have been developed as part of the project, illustrating the re-use and synergies between the three cases along the way. We will finally conclude with a summary of some pros and cons of the approach devised during the project and outline future directions to be further investigated and expanded on within the context of the work performed within the project.

  5. An Innovative Approach to Balancing Chemical-Reaction Equations: A Simplified Matrix-Inversion Technique for Determining The Matrix Null Space

    OpenAIRE

    Thorne, Lawrence R.

    2011-01-01

    I propose a novel approach to balancing equations that is applicable to all chemical-reaction equations; it is readily accessible to students via scientific calculators and basic computer spreadsheets that have a matrix-inversion application. The new approach utilizes the familiar matrix-inversion operation in an unfamiliar and innovative way; its purpose is not to identify undetermined coefficients as usual, but, instead, to compute a matrix null space (or matrix kernel). The null space then...

  6. Subjective evaluation with FAA criteria: A multidimensional scaling approach. [ground track control management

    Science.gov (United States)

    Kreifeldt, J. G.; Parkin, L.; Wempe, T. E.; Huff, E. F.

    1975-01-01

    Perceived orderliness in the ground tracks of five A/C during their simulated flights was studied. Dynamically developing ground tracks for five A/C from 21 separate runs were reproduced from computer storage and displayed on CRTs to professional pilots and controllers for their evaluations and preferences under several criteria. The ground tracks were developed in 20 seconds, as opposed to the 5 minutes of simulated flight, using speedup techniques for display. Metric and nonmetric multidimensional scaling techniques are being used to analyze the subjective responses in an effort to: (1) determine the meaningfulness of basing decisions on such complex subjective criteria; (2) compare pilot/controller perceptual spaces; (3) determine the dimensionality of the subjects' perceptual spaces; and thereby (4) determine objective measures suitable for comparing alternative traffic management simulations.
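As a rough illustration of the multidimensional scaling step described above, classical (Torgerson) metric MDS can be computed directly with NumPy. The dissimilarity matrix below is made up for illustration; it stands in for the pilots' and controllers' pairwise judgments, which are not reproduced in the abstract.

```python
import numpy as np

# Hypothetical pairwise dissimilarities among 6 ground-track displays
# (symmetric, zero diagonal) -- illustrative stand-ins for judges' ratings.
rng = np.random.default_rng(0)
d = rng.random((6, 6))
diss = (d + d.T) / 2.0
np.fill_diagonal(diss, 0.0)

# Classical metric MDS: double-center the squared dissimilarities and
# embed the items using the top eigenvectors of the resulting Gram matrix.
n = diss.shape[0]
J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
B = -0.5 * J @ (diss ** 2) @ J                # double-centered Gram matrix
w, v = np.linalg.eigh(B)                       # eigenvalues in ascending order
top = np.argsort(w)[::-1][:2]                  # two largest eigenvalues
coords = v[:, top] * np.sqrt(np.maximum(w[top], 0.0))
print(coords.shape)                            # each row: one display's position
```

Nonmetric MDS, also mentioned in the abstract, instead iteratively preserves only the rank order of the dissimilarities; the classical double-centering form is shown here for brevity.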

  7. Fast Laplace solver approach to pore-scale permeability

    Science.gov (United States)

    Arns, C. H.; Adler, P. M.

    2018-02-01

    We introduce a powerful and easily implemented method to calculate the permeability of porous media at the pore scale, using an approximation based on the Poiseuille equation to compute permeability to fluid flow with a Laplace solver. The method consists of calculating the Euclidean distance map of the fluid phase to assign local conductivities and lends itself naturally to the treatment of multiscale problems. We compare with analytical solutions as well as experimental measurements and lattice Boltzmann calculations of permeability for Fontainebleau sandstone. The solver is significantly more stable than the lattice Boltzmann approach, uses less memory, and is significantly faster. Permeabilities are in excellent agreement over a wide range of porosities.
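A minimal 2D sketch of the distance-map idea follows. This is not the authors' implementation: the grid sizes, the squared-distance conductivity law, and the Jacobi relaxation are illustrative assumptions, chosen only to show how a distance transform plus a Laplace solve yields a relative permeability measure.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def permeability_index(pore, n_iter=500):
    """Relative permeability index of a 2D binary pore image (True = fluid)
    for flow in the x-direction: local conductivity is taken proportional to
    the squared distance to the solid (a Poiseuille-like law), and a
    variable-coefficient Laplace equation is relaxed for the pressure."""
    g = distance_transform_edt(pore) ** 2              # local conductivity map
    ny, nx = pore.shape
    p = np.tile(np.linspace(1.0, 0.0, nx), (ny, 1))    # linear initial guess
    gp = np.pad(g, 1)                                  # zero conductivity outside
    # Face conductances as arithmetic means with each neighbour.
    cN = 0.5 * (gp[1:-1, 1:-1] + gp[:-2, 1:-1])
    cS = 0.5 * (gp[1:-1, 1:-1] + gp[2:, 1:-1])
    cW = 0.5 * (gp[1:-1, 1:-1] + gp[1:-1, :-2])
    cE = 0.5 * (gp[1:-1, 1:-1] + gp[1:-1, 2:])
    den = cN + cS + cW + cE
    for _ in range(n_iter):                            # Jacobi sweeps
        pp = np.pad(p, 1, mode="edge")
        num = (cN * pp[:-2, 1:-1] + cS * pp[2:, 1:-1] +
               cW * pp[1:-1, :-2] + cE * pp[1:-1, 2:])
        p = np.where(den > 0, num / np.maximum(den, 1e-30), p)
        p[:, 0], p[:, -1] = 1.0, 0.0                   # fixed inlet/outlet pressure
    # Total flux through the face between the first two columns.
    flux = 0.5 * (g[:, 0] + g[:, 1]) * (p[:, 0] - p[:, 1])
    return float(flux.sum())

# A wider straight channel should conduct markedly more than a narrow one.
wide = np.zeros((16, 32), dtype=bool); wide[4:12, :] = True
narrow = np.zeros((16, 32), dtype=bool); narrow[6:10, :] = True
k_wide, k_narrow = permeability_index(wide), permeability_index(narrow)
```

Because conductivity grows with the squared wall distance, the flux sums approximate the cubic dependence of channel conductance on width that the Poiseuille equation predicts.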

  8. A Science Cloud: OneSpaceNet

    Science.gov (United States)

    Morikawa, Y.; Murata, K. T.; Watari, S.; Kato, H.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Shimojo, S.

    2010-12-01

    Main methodologies of Solar-Terrestrial Physics (STP) so far are theoretical, experimental and observational, and computer simulation approaches. Recently, "informatics" is expected to become a new (fourth) approach to STP studies. Informatics is a methodology to analyze large-scale data (observation data and computer simulation data) to obtain new findings using a variety of data processing techniques. At NICT (National Institute of Information and Communications Technology, Japan) we are now developing a new research environment named "OneSpaceNet". The OneSpaceNet is a cloud-computing environment specialized for science work, which connects many researchers with a high-speed network (JGN: Japan Gigabit Network). The JGN is a wide-area backbone network operated by NICT; it provides a 10G network and many access points (APs) over Japan. The OneSpaceNet also provides rich computer resources for research studies, such as supercomputers, large-scale data storage, licensed applications, visualization devices (like a tiled display wall: TDW), databases/DBMS, cluster computers (4-8 nodes) for data processing, and communication devices. What is remarkable about using the science cloud is that a user simply prepares a terminal (low-cost PC). Once the PC is connected to JGN2plus, the user can make full use of the rich resources of the science cloud. Using communication devices, such as a video-conference system, streaming and reflector servers, and media players, the users on the OneSpaceNet can communicate as if they belonged to the same (one) laboratory: they are members of a virtual laboratory. The specification of the computer resources on the OneSpaceNet is as follows: the size of the data storage we have developed so far is almost 1 PB. The number of data files managed on the cloud storage is growing and is now more than 40,000,000.
What is notable is that the disks forming the large-scale storage are distributed to 5 data centers over Japan (but the storage

  9. A review of analogue modelling of geodynamic processes: Approaches, scaling, materials and quantification, with an application to subduction experiments

    Science.gov (United States)

    Schellart, Wouter P.; Strak, Vincent

    2016-10-01

    We present a review of the analogue modelling method, which has been used for 200 years, and continues to be used, to investigate geological phenomena and geodynamic processes. We particularly focus on the following four components: (1) the different fundamental modelling approaches that exist in analogue modelling; (2) the scaling theory and scaling of topography; (3) the different materials and rheologies that are used to simulate the complex behaviour of rocks; and (4) a range of recording techniques that are used for qualitative and quantitative analyses and interpretations of analogue models. Furthermore, we apply these four components to laboratory-based subduction models and describe some of the issues at hand with modelling such systems. Over the last 200 years, a wide variety of analogue materials have been used with different rheologies, including viscous materials (e.g. syrups, silicones, water), brittle materials (e.g. granular materials such as sand, microspheres and sugar), plastic materials (e.g. plasticine), visco-plastic materials (e.g. paraffin, waxes, petrolatum) and visco-elasto-plastic materials (e.g. hydrocarbon compounds and gelatins). These materials have been used in many different set-ups to study processes from the microscale, such as porphyroclast rotation, to the mantle scale, such as subduction and mantle convection. Despite the wide variety of modelling materials and great diversity in model set-ups and processes investigated, all laboratory experiments can be classified into one of three different categories based on three fundamental modelling approaches that have been used in analogue modelling: (1) The external approach, (2) the combined (external + internal) approach, and (3) the internal approach. 
In the external approach and combined approach, energy is added to the experimental system through the external application of a velocity, temperature gradient or a material influx (or a combination thereof), and so the system is open

  10. Environmental Remediation Full-Scale Implementation: Back to Simple Microbial Massive Culture Approaches

    Directory of Open Access Journals (Sweden)

    Agung Syakti

    2010-10-01

    Full Text Available Bioaugmentation and biostimulation approaches for the bioremediation of contaminated soil were investigated and implemented at field scale. We combined these approaches by massively culturing petrophilic indigenous microorganisms from chronically contaminated soil enriched with mixed manure. Through these methods, bioremediation performance revealed promising results in removing petroleum hydrocarbons, compared with approaches using metabolic by-products such as biosurfactants, specific enzymes and other extra-cellular products, which are considered a difficult task and increase costs.

  11. Large-scaled biomonitoring of trace-element air pollution: goals and approaches

    International Nuclear Information System (INIS)

    Wolterbeek, H.T.

    2000-01-01

    Biomonitoring is often used in multi-parameter approaches, especially in larger-scale surveys. The information obtained may consist of thousands of data points, which can be processed in a variety of mathematical routines to permit a condensed and strongly-smoothed presentation of results and conclusions. Although reports on larger-scale biomonitoring surveys are 'easy-to-read' and often include far-reaching interpretations, it is not possible to obtain an insight into the real meaningfulness or quality of the survey performed. In any set-up, the aims of the survey should be put forward as clearly as possible. Is the survey to provide information on atmospheric element levels, or on total, wet and dry deposition? What should be the time or geographical scale and resolution of the survey? Which elements should be determined? Is the survey to give information on emission or immission characteristics? Answers to all these questions are of paramount importance, not only regarding the choice of the biomonitoring species or the necessary handling/analysis techniques, but also with respect to planning and personnel, and, not to forget, the expected/available means of data interpretation. In considering a survey set-up, rough survey dimensions may follow directly from the goals; in practice, however, they will be governed by other aspects such as available personnel, handling means/capacity, costs, etc. In what sense and to what extent these factors may cause the survey to drift away from the pre-set goals should receive ample attention: in extreme cases the survey should not be carried out. Bearing in mind the above considerations, the present paper focuses on the goals, quality and approaches of larger-scale biomonitoring surveys of trace element air pollution. The discussion comprises practical problems, options, decisions, analytical means, quality measures, and eventual survey results. (author)

  12. Optimization of the graph model of the water conduit network, based on the approach of search space reducing

    Science.gov (United States)

    Korovin, Iakov S.; Tkachenko, Maxim G.

    2018-03-01

    In this paper we present a heuristic approach that improves the efficiency of methods used to create an efficient architecture of water distribution networks. The essence of the approach is reducing the search space by limiting the range of available pipe diameters that can be used for each edge of the network graph. To perform the reduction, two opposite boundary scenarios for the distribution of flows are analysed, after which the resulting range is further narrowed by applying a flow rate limitation to each edge of the network. The first boundary scenario provides the most uniform distribution of flow in the network; the opposite scenario creates the network with the highest possible flow level. The parameters of both distributions are calculated by optimizing systems of quadratic functions in a confined space, which can be performed effectively at small time cost. This approach was used to modify a genetic algorithm (GA). The proposed GA provides a variable number of variants of each gene, according to the number of diameters in the list, taking flow restrictions into account. The proposed approach was applied to a well-known test network, the Hanoi water distribution network [1], and the results were compared with a classical GA with an unlimited search space. On the test data, the proposed approach significantly reduced the search space and provided faster and clearer convergence than the classical version of the GA.
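
    The per-edge diameter narrowing can be illustrated with a small sketch. The velocity-based bounds, the pipe catalogue, and the limits below are illustrative assumptions, not the paper's actual quadratic-optimization procedure:

```python
import math

# Assumed pipe catalogue (diameters in metres) shared by every edge.
DIAMETERS = [0.3, 0.4, 0.5, 0.6, 0.8, 1.0]

def admissible_diameters(q_low, q_high, v_min=0.2, v_max=3.0):
    """Keep only catalogue diameters whose flow velocity v = 4Q/(pi*d^2)
    stays within [v_min, v_max] m/s across both boundary flow scenarios:
    q_low (m^3/s) from the most uniform distribution, q_high from the
    maximum-flow one."""
    d_min = math.sqrt(4.0 * q_high / (math.pi * v_max))  # smaller pipes run too fast
    d_max = math.sqrt(4.0 * q_low / (math.pi * v_min))   # larger pipes run too slow
    return [d for d in DIAMETERS if d_min <= d <= d_max]
```

    With the full catalogue each gene has six variants; after reduction an edge carrying between 0.05 and 0.3 m³/s keeps only two, shrinking the GA's search space multiplicatively across the edges of the graph.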

  13. Space-to-Space Power Beaming Enabling High Performance Rapid Geocentric Orbit Transfer

    Science.gov (United States)

    Dankanich, John W.; Vassallo, Corinne; Tadge, Megan

    2015-01-01

    The use of electric propulsion is more prevalent than ever, with industry pursuing all-electric orbit transfers. Electric propulsion provides high mass utilization through efficient propellant use. However, transfer times become detrimental as the delta-V transitions from near-impulsive to low-thrust. Increasing power, and therefore thrust, has diminishing returns because the increasing mass of the power system limits the potential acceleration of the spacecraft. By using space-to-space power beaming, the power system can be decoupled from the spacecraft, allowing significantly higher spacecraft alpha (specific power, W/kg) and therefore significantly higher accelerations while maintaining high performance. This project assesses the efficacy of space-to-space power beaming to enable rapid orbit transfer while maintaining high mass utilization. Concept assessment requires integrated techniques for low-thrust orbit-transfer steering laws, efficient large-scale rectenna systems, and satellite constellation configuration optimization. This project includes the development of an integrated tool implementing IPOPT, Q-Law, and power-beaming models. The results highlight the viability of the concept, its limits and paths to infusion, and a comparison to state-of-the-art capabilities. The results indicate that power beaming may be the only approach for achieving the desired transit times with high specific impulse.
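
    The mass-decoupling argument can be made concrete with the standard electric-propulsion relations. The efficiency and specific-power numbers below are illustrative assumptions, not values from the study:

```python
G0 = 9.80665  # standard gravity, m/s^2

def thrust_n(power_w, isp_s, efficiency=0.6):
    """Thrust of an electric thruster from jet power: T = 2*eta*P / (Isp*g0)."""
    return 2.0 * efficiency * power_w / (isp_s * G0)

def accel(power_w, isp_s, bus_mass_kg, alpha_w_per_kg):
    """Acceleration when the power system (mass = P/alpha) rides along."""
    mass = bus_mass_kg + power_w / alpha_w_per_kg
    return thrust_n(power_w, isp_s) / mass

# Same 50 kW thruster at Isp = 2000 s; onboard solar arrays (~80 W/kg assumed)
# versus a beamed-power rectenna (~2000 W/kg assumed):
a_onboard = accel(50e3, 2000, bus_mass_kg=500, alpha_w_per_kg=80)
a_beamed = accel(50e3, 2000, bus_mass_kg=500, alpha_w_per_kg=2000)
```

    In this toy case decoupling the power source roughly doubles the acceleration, and the advantage grows with power level, which is the core of the rapid-transfer argument.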

  14. A study of space shuttle energy management, approach and landing analysis

    Science.gov (United States)

    Morth, R.

    1973-01-01

    The steering system of the space shuttle vehicle is presented for the several hundred miles of flight preceding landing. The guidance scheme is characterized by a spiral turn to dissipate excess potential energy (altitude) prior to a standard straight-in final approach. In addition, the system features pilot-oriented control, drag brakes, phugoid damping, and a navigational capacity founded upon an inertial measurement unit and an on-board computer. Analytic formulas are used to calculate, represent, and ensure the workability of the system's specifications.

  15. Religion and Communication Spaces. A Semio-pragmatic Approach

    Directory of Open Access Journals (Sweden)

    Roger Odin

    2015-11-01

    Full Text Available Following the reflection initiated in his book The Spaces of Communication, Roger Odin suggests a new distinction between physical communication spaces and mental communication spaces (spaces that we have inside us. The suggestion is exemplified by three film analyses dedicated to the relationships between religion and communication.

  16. Modeling Fluid’s Dynamics with Master Equations in Ultrametric Spaces Representing the Treelike Structure of Capillary Networks

    Directory of Open Access Journals (Sweden)

    Andrei Khrennikov

    2016-07-01

    Full Text Available We present a new conceptual approach for modeling fluid flows in random porous media, based on explicit exploration of the treelike geometry of complex capillary networks. Such patterns can be represented mathematically as ultrametric spaces, and the dynamics of fluids by ultrametric diffusion. The images of p-adic fields, extracted from real multiscale rock samples and from some reference images, are depicted. In this model the porous background is treated as the environment contributing to the coefficients of the evolutionary equations. For the simplest trees, these equations are essentially less complicated than those with fractional differential operators, which are commonly applied in geological studies looking for fractional analogs of conventional Euclidean space but with anomalous scaling and diffusion properties. It is possible to solve the former equation analytically and, in particular, to find stationary solutions. The main aim of this paper is to attract the attention of researchers working on modeling of geological processes to the novel ultrametric approach and to show some examples from petroleum reservoir static and dynamic characterization, able to integrate the p-adic approach with multifractals, thermodynamics and scaling. We also present a non-mathematician-friendly review of trees, ultrametric spaces and pseudo-differential operators on such spaces.
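
    A minimal sketch of ultrametric diffusion on the leaves of a binary tree, where transition rates decay with the hierarchical (lowest-common-ancestor) distance. The tree depth and rate parameter q are illustrative assumptions, not fitted to any rock sample:

```python
import numpy as np

def ultrametric_distance(i, j):
    """Levels up to the lowest common ancestor of leaves i and j."""
    d = 0
    while i != j:
        i, j, d = i // 2, j // 2, d + 1
    return d

def master_operator(depth, q=0.4):
    """Rate matrix W of the master equation dp/dt = W p on 2**depth leaves,
    with off-diagonal rate w(i, j) = q**d(i, j) for ultrametric distance d."""
    n = 2 ** depth
    W = np.array([[q ** ultrametric_distance(i, j) if i != j else 0.0
                   for j in range(n)] for i in range(n)])
    np.fill_diagonal(W, -W.sum(axis=1))  # conserve total probability
    return W

# Relax an initial point mass toward the uniform stationary state,
# using the spectral decomposition of the symmetric W for exp(W t):
W = master_operator(depth=3)
p0 = np.zeros(8); p0[0] = 1.0
vals, vecs = np.linalg.eigh(W)
p_late = vecs @ (np.exp(vals * 200.0) * (vecs.T @ p0))
```

    The hierarchy of rates q, q², q³ between ever-larger subtrees is what produces the anomalous, non-Euclidean relaxation the abstract contrasts with fractional-derivative models.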

  17. Approaches to large scale unsaturated flow in heterogeneous, stratified, and fractured geologic media

    International Nuclear Information System (INIS)

    Ababou, R.

    1991-08-01

    This report develops a broad review and assessment of quantitative modeling approaches and data requirements for large-scale subsurface flow in a radioactive waste geologic repository. The data review includes discussions of controlled field experiments, existing contamination sites, and site-specific hydrogeologic conditions at Yucca Mountain. Local-scale constitutive models for the unsaturated hydrodynamic properties of geologic media are analyzed, with particular emphasis on the effect of structural characteristics of the medium. The report further reviews and analyzes large-scale hydrogeologic spatial variability from aquifer data, unsaturated soil data, and fracture network data gathered from the literature. Finally, various modeling strategies for large-scale flow simulations are assessed, including direct high-resolution simulation and coarse-scale simulation based on auxiliary hydrodynamic models such as the single equivalent continuum and the dual-porosity continuum. The roles of anisotropy, fracturing, and broad-band spatial variability are emphasized. 252 refs

  18. The Space-Time Conservative Schemes for Large-Scale, Time-Accurate Flow Simulations with Tetrahedral Meshes

    Science.gov (United States)

    Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung

    2016-01-01

    Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured or unstructured hexahedral meshes using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems, including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the CESE numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.

  19. A Reparametrization Approach for Dynamic Space-Time Models

    OpenAIRE

    Lee, Hyeyoung; Ghosh, Sujit K.

    2008-01-01

    Researchers in diverse areas such as environmental and health sciences are increasingly working with data collected across space and time. The space-time processes that are generally used in practice are often complicated in the sense that the auto-dependence structure across space and time is non-trivial, often non-separable and non-stationary in space and time. Moreover, the dimension of such data sets across both space and time can be very large leading to computational difficulties due to...

  20. An Open and Holistic Approach for Geo and Space Sciences

    Science.gov (United States)

    Ritschel, Bernd; Seelus, Christoph; Neher, Günther; Toshihiko, Iyemori; Yatagai, Akiyo; Koyama, Yukinobu; Murayama, Yasuhiro; King, Todd; Hughes, Steve; Fung, Shing; Galkin, Ivan; Hapgood, Mike; Belehaki, Anna

    2016-04-01

    Geo and space sciences have thus far been very successful, even though an open, cross-domain and holistic approach often did not play an essential role. But this situation is changing rapidly. The research focus is shifting to more complex, non-linear and multi-domain phenomena, such as climate change or the space environment. This kind of phenomenon can only be understood step by step using the holistic idea. So, what is necessary for a successful cross-domain and holistic approach in geo and space sciences? Research and science in general are becoming more and more dependent on a rich fundus of multi-domain data sources, related context information and the use of highly advanced technologies in data processing. Buzzword phrases such as Big Data and Deep Learning reflect this development. Big Data also addresses the real exponential growth of data and information produced by measurements or simulations. Deep Learning technology may help to detect new patterns and relationships in data describing highly sophisticated natural phenomena. Furthermore, we should not forget that science and the humanities are only two sides of the same coin in the continuing human process of knowledge discovery. The concept of Open Data, or in particular the open access to scientific data, addresses the free and open availability of (at least publicly funded and generated) data. The open availability of data covers the free use, reuse and redistribution of data, which was established with the formation of World Data Centers already more than 50 years ago. So, we should not forget that the foundation for open data is the sustainable management of data, a responsibility reaching from the individual scientist up to the big science institutions and organizations. Other challenges are discovering and collecting the appropriate data, and preferably all of them or at least the majority of the right data.
Therefore a network of individual or even better institutional catalog-based and at least

  1. Scaling up biomass gasifier use: an application-specific approach

    International Nuclear Information System (INIS)

    Ghosh, Debyani; Sagar, Ambuj D.; Kishore, V.V.N.

    2006-01-01

    Biomass energy accounts for about 11% of the global primary energy supply, and it is estimated that about 2 billion people worldwide depend on biomass for their energy needs. Yet most biomass use is primitive and inefficient, primarily in developing countries, leading to a host of adverse implications for human health, the environment, workplace conditions, and social well-being. Therefore, the utilization of biomass in a clean and efficient manner to deliver modern energy services to the world's poor remains an imperative for the development community. One possible approach is the use of biomass gasifiers. Although significant efforts have been directed towards developing and deploying biomass gasifiers in many countries, scaling up their dissemination remains an elusive goal. Based on an examination of biomass gasifier development, demonstration, and deployment efforts in India, a country with more than two decades of experience in biomass gasifier development and dissemination, this article identifies a number of barriers that have hindered widespread deployment of biomass gasifier-based energy systems. It also suggests a possible approach for moving forward, which involves a focus on specific application areas that satisfy a set of criteria critical to the deployment of biomass gasifiers, and then tailoring the scaling-up strategy to the characteristics of the user groups for that application. Our technical, financial, economic and institutional analysis suggests that an initial focus on four categories of applications (small and medium enterprises, the informal sector, biomass-processing industries, and some rural areas) may be particularly feasible and fruitful.

  2. Robust control of uncertain dynamic systems a linear state space approach

    CERN Document Server

    Yedavalli, Rama K

    2014-01-01

    This textbook aims to provide a clear understanding of the various tools of analysis and design for robust stability and performance of uncertain dynamic systems. In model-based control design and analysis, mathematical models can never completely represent the “real world” system that is being modeled, and thus it is imperative to incorporate and accommodate a level of uncertainty into the models. This book directly addresses these issues from a deterministic uncertainty viewpoint and focuses on the interval parameter characterization of uncertain systems. Various tools of analysis and design are presented in a consolidated manner. This volume fills a current gap in published works by explicitly addressing the subject of control of dynamic systems from a linear state space framework, namely using a time-domain, matrix-theory based approach. This book also: Presents and formulates the robustness problem in a linear state space model framework Illustrates various systems level methodologies with examples and...

  3. Spontaneous symmetry breaking in curved space-time

    International Nuclear Information System (INIS)

    Toms, D.J.

    1982-01-01

    An approach is discussed for dealing with some of the complications which arise when studying spontaneous symmetry breaking beyond the tree-graph level in situations where the effective potential may not be used. These situations include quantum field theory on general curved backgrounds or in flat space-times with non-trivial topologies. Examples discussed are a twisted scalar field in S^1 x R^3 and instabilities in an expanding universe. From these it is seen that the topology and curvature of a space-time may affect the stability of the vacuum state. There can be critical length scales or times beyond which symmetries may be broken or restored in certain cases. These features are not present in Minkowski space-time and so would not show up in the usual types of early universe calculations. (U.K.)

  4. Neural network based multiscale image restoration approach

    Science.gov (United States)

    de Castro, Ana Paula A.; da Silva, José D. S.

    2007-02-01

    This paper describes a neural network based multiscale image restoration approach. Multilayer perceptrons are trained with artificial images of degraded gray level circles, in an attempt to make the neural network learn the inherent spatial relations of the degraded pixels. The present approach simulates the degradation by a low-pass Gaussian filter blurring operation and the addition of noise to the pixels at pre-established rates. The training process considers the degraded image as input and the non-degraded image as output for the supervised learning process. The neural network thus performs an inverse operation by recovering a quasi non-degraded image in the least-squares sense. The main difference of the approach from existing ones relies on the fact that the spatial relations are taken from different scales, thus providing relational spatial data to the neural network. The approach is an attempt to come up with a simple method that leads to an optimum solution to the problem. The multiscale operation is simulated by considering different window sizes around a pixel. In the generalization phase the neural network is exposed to indoor, outdoor, and satellite degraded images following the same steps used for the artificial circle images.
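
    The multi-window input described above can be sketched as follows; the window sizes and the reflect padding are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def multiscale_features(image, row, col, window_sizes=(3, 5, 7)):
    """Concatenate the pixel's neighbourhoods at several window sizes into a
    single feature vector, so the network sees spatial relations across scales."""
    pad = max(window_sizes) // 2
    padded = np.pad(image, pad, mode="reflect")  # handle border pixels
    r, c = row + pad, col + pad                  # position in the padded image
    feats = []
    for w in window_sizes:
        h = w // 2
        feats.append(padded[r - h:r + h + 1, c - h:c + h + 1].ravel())
    return np.concatenate(feats)
```

    With these sizes each degraded pixel yields 3·3 + 5·5 + 7·7 = 83 inputs for the perceptron, whose target is the corresponding non-degraded pixel value.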

  5. Self-affine scaling from non-integer phase-space partition in π+p and K+p collisions at 250 GeV/c

    CERN Document Server

    Agababian, N M

    1998-01-01

    A factorial-moment analysis with real (integer and non-integer) phase-space partition is applied to π+p and K+p collisions at 250 GeV/c. Clear evidence is shown for self-affine rather than self-similar power-law scaling in multiparticle production. The three-dimensional self-affine second-order scaling exponent is determined to be 0.061±0.010.
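
    A minimal one-dimensional sketch of the second-order factorial moment underlying such an analysis; the synthetic events are an illustrative assumption, and the self-affine case of the paper would shrink each phase-space direction by a different partition factor:

```python
import numpy as np

def f2(events, M):
    """Normalized second-order factorial moment for M equal bins on [0, 1):
    F2(M) = M * <sum_m n_m (n_m - 1)> / <N>**2, averaged over events."""
    num = nbar = 0.0
    for x in events:
        n, _ = np.histogram(x, bins=M, range=(0.0, 1.0))
        num += np.sum(n * (n - 1))
        nbar += len(x)
    num /= len(events)
    nbar /= len(events)
    return M * num / nbar ** 2

# Uncorrelated (flat) events give F2 close to 1; intermittent dynamics would
# instead show a power-law rise F2(M) ~ M**phi_2 as the bins shrink.
rng = np.random.default_rng(0)
events = [rng.random(100) for _ in range(400)]
```

    The scaling exponent quoted in the abstract is the slope of ln F2 against the logarithm of the partition number.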

  6. A new approach for modeling and analysis of molten salt reactors using SCALE

    Energy Technology Data Exchange (ETDEWEB)

    Powers, J. J.; Harrison, T. J.; Gehin, J. C. [Oak Ridge National Laboratory, PO Box 2008, Oak Ridge, TN 37831-6172 (United States)

    2013-07-01

    The Office of Fuel Cycle Technologies (FCT) of the DOE Office of Nuclear Energy is performing an evaluation and screening of potential fuel cycle options to provide information that can support future research and development decisions based on the more promising fuel cycle options. [1] A comprehensive set of fuel cycle options are put into evaluation groups based on physics and fuel cycle characteristics. Representative options for each group are then evaluated to provide the quantitative information needed to support the valuation of criteria and metrics used for the study. Included in this set of representative options are Molten Salt Reactors (MSRs), the analysis of which requires several capabilities that are not adequately supported by the current version of SCALE or other neutronics depletion software packages (e.g., continuous online feed and removal of materials). A new analysis approach was developed for MSR analysis using SCALE by taking user-specified MSR parameters and performing a series of SCALE/TRITON calculations to determine the resulting equilibrium operating conditions. This paper provides a detailed description of the new analysis approach, including the modeling equations and radiation transport models used. Results for an MSR fuel cycle option of interest are also provided to demonstrate the application to a relevant problem. The current implementation is through a utility code that uses the two-dimensional (2D) TRITON depletion sequence in SCALE 6.1 but could be readily adapted to three-dimensional (3D) TRITON depletion sequences or other versions of SCALE. (authors)

  7. A new approach for modeling and analysis of molten salt reactors using SCALE

    International Nuclear Information System (INIS)

    Powers, J. J.; Harrison, T. J.; Gehin, J. C.

    2013-01-01

    The Office of Fuel Cycle Technologies (FCT) of the DOE Office of Nuclear Energy is performing an evaluation and screening of potential fuel cycle options to provide information that can support future research and development decisions based on the more promising fuel cycle options. [1] A comprehensive set of fuel cycle options are put into evaluation groups based on physics and fuel cycle characteristics. Representative options for each group are then evaluated to provide the quantitative information needed to support the valuation of criteria and metrics used for the study. Included in this set of representative options are Molten Salt Reactors (MSRs), the analysis of which requires several capabilities that are not adequately supported by the current version of SCALE or other neutronics depletion software packages (e.g., continuous online feed and removal of materials). A new analysis approach was developed for MSR analysis using SCALE by taking user-specified MSR parameters and performing a series of SCALE/TRITON calculations to determine the resulting equilibrium operating conditions. This paper provides a detailed description of the new analysis approach, including the modeling equations and radiation transport models used. Results for an MSR fuel cycle option of interest are also provided to demonstrate the application to a relevant problem. The current implementation is through a utility code that uses the two-dimensional (2D) TRITON depletion sequence in SCALE 6.1 but could be readily adapted to three-dimensional (3D) TRITON depletion sequences or other versions of SCALE. (authors)

  8. The balance space approach to multicriteria decision making—involving the decision maker

    OpenAIRE

    Ehrgott, M.

    2002-01-01

    The balance space approach (introduced by Galperin in 1990) provides a new view on multicriteria optimization. Looking at deviations from global optimality of the different objectives, balance points and balance numbers are defined when either different or equal deviations for each objective are allowed. Apportioned balance numbers allow the specification of proportions among the deviations. Through this concept the decision maker can be involved in the decision process. In this paper we prov...

  9. Data adaptive control parameter estimation for scaling laws

    Energy Technology Data Exchange (ETDEWEB)

    Dinklage, Andreas [Max-Planck-Institut fuer Plasmaphysik, Teilinstitut Greifswald, Wendelsteinstrasse 1, D-17491 Greifswald (Germany); Dose, Volker [Max-Planck- Institut fuer Plasmaphysik, Boltzmannstrasse 2, D-85748 Garching (Germany)

    2007-07-01

    Bayesian experimental design quantifies the utility of data, expressed by the information gain. Data-adaptive exploration determines the expected utility of a single new measurement using existing data and a data-descriptive model. In other words, the method can be used for experimental planning. As an example of a multivariate linear case, we apply this method to constructing scaling laws for fusion devices. In detail, the scaling of the stellarator W7-AS is examined for a subset of ι = 1/3 data. The impact of the existing data on the scaling exponents is presented. Furthermore, regions of high utility in control parameter space are identified which improve the accuracy of the scaling law. This approach is not restricted to the presented example, but can also be extended to non-linear models.
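
    Such scaling laws are power laws in the control parameters, y = c · Π_i x_i^{a_i}, which become linear after taking logarithms. A minimal least-squares sketch with synthetic data (the paper's Bayesian utility computation is not reproduced here):

```python
import numpy as np

def fit_scaling_law(X, y):
    """Fit y = c * prod_i x_i**a_i by linear least squares in log space.
    X: (n_samples, n_params) positive control parameters; y: positive responses.
    Returns the prefactor c and the exponent vector a."""
    A = np.column_stack([np.ones(len(y)), np.log(X)])
    coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
    return np.exp(coef[0]), coef[1:]

# Recover known exponents from synthetic data: y = 2 * x1**0.5 * x2**-1
rng = np.random.default_rng(1)
X = rng.uniform(0.5, 5.0, size=(200, 2))
y = 2.0 * X[:, 0] ** 0.5 * X[:, 1] ** -1.0
c, a = fit_scaling_law(X, y)
```

    The data-adaptive step of the paper then asks where in control parameter space a new measurement would most reduce the uncertainty of these exponents.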

  10. Acoustic mode coupling induced by shallow water nonlinear internal waves: sensitivity to environmental conditions and space-time scales of internal waves.

    Science.gov (United States)

    Colosi, John A

    2008-09-01

    While many results have been intuited from numerical simulation studies, the precise connections between shallow-water acoustic variability and the space-time scales of nonlinear internal waves (NLIWs), as well as the background environmental conditions, have not been clearly established analytically. Two-dimensional coupled-mode propagation through NLIWs is examined using a perturbation series solution in which each order n is associated with nth-order multiple scattering. Importantly, the perturbation solution gives resonance conditions that pick out specific NLIW scales that cause coupling, and seabed attenuation is demonstrated to broaden these resonances, fundamentally changing the coupling behavior at low frequency. Sound-speed inhomogeneities caused by internal solitary waves (ISWs) are primarily considered, and the dependence of mode coupling on ISW amplitude, range width, depth structure, location relative to the source, and packet characteristics is delineated as a function of acoustic frequency. In addition, it is seen that significant energy transfer to modes with initially low or zero energy involves at least a second-order scattering process. Under moderate scattering conditions, comparisons of first-order, single-scattering theoretical predictions with direct numerical simulation demonstrate the accuracy of the approach for acoustic frequencies up to 400 Hz and for single as well as multiple ISW wave packets.

  11. Topology-based description of the NCA cathode configurational space and an approach of its effective reduction

    Directory of Open Access Journals (Sweden)

    Zolotarev Pavel

    2018-01-01

    Full Text Available Modification of existing solid electrolyte and cathode materials is a topic of interest for theoreticians and experimentalists. In particular, it requires elucidation of the influence of dopants on the characteristics of the studied materials. Because of the high complexity of the configurational space of doped/deintercalated systems, application of computer modeling approaches is hindered, despite significant advances in computational facilities in recent decades. In this study, we propose a scheme which allows reducing the set of structures of a modeled configurational space for subsequent study by means of time-consuming quantum chemistry methods. Application of the proposed approach is exemplified through the study of the configurational space of the commercial LiNi0.8Co0.15Al0.05O2 (NCA) cathode material approximant.

  12. Topology-based description of the NCA cathode configurational space and an approach of its effective reduction

    Science.gov (United States)

    Zolotarev, Pavel; Eremin, Roman

    2018-04-01

    Modification of existing solid electrolyte and cathode materials is a topic of interest for theoreticians and experimentalists. In particular, it requires elucidation of the influence of dopants on the characteristics of the studied materials. Because of the high complexity of the configurational space of doped/deintercalated systems, application of computer modeling approaches is hindered, despite significant advances in computational facilities in recent decades. In this study, we propose a scheme which allows reducing the set of structures of a modeled configurational space for subsequent study by means of time-consuming quantum chemistry methods. Application of the proposed approach is exemplified through the study of the configurational space of the commercial LiNi0.8Co0.15Al0.05O2 (NCA) cathode material approximant.

  13. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    Science.gov (United States)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for Los Alamos National Laboratory. How this was achieved will be described, and examples of state-of-the-art calculations in space science, in particular the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies, where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory-bandwidth limited with a low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered carry over to other multi-core and accelerated (e.g., via GPU) platforms, and we modified VPIC with flexibility in mind. These will be summarized and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.

  14. EXPERIMENTAL STUDIES ON DIFFICULTY OF EVACUATION FROM UNDERGROUND SPACES UNDER INUNDATED SITUATIONS USING REAL SCALE MODELS

    Science.gov (United States)

    Baba, Yasuyuki; Ishigaki, Taisuke; Toda, Keiichi; Nakagawa, Hajime

    Many urbanized cities in Japan are located in alluvial plains, and the vulnerability of urbanized areas to flood disaster is highlighted by flood attacks due to heavy rainfall or typhoons. Underground spaces located in urbanized areas are flood-prone, and the intrusion of flood water into underground space inflicts severe damage on urban functions and infrastructure. In a similar way, low-lying areas like "bowl-shaped" depressions and underpasses under highway and railroad bridges are also prone to floods. The underpasses are common sites of accidents involving submerged vehicles, and severe damage, including human casualties, occasionally occurs under flooding conditions. To reduce the damage due to inundation in underground space, needless to say, early evacuation is one of the most important countermeasures. This paper shows some experimental results of evacuation tests from underground spaces under inundated situations. The difficulty of evacuation from underground space has been investigated using real-scale models (door, staircase and vehicle), and the limit for safe evacuation is discussed. From the results, it is found that a water depth of 0.3 - 0.4 m would be a critical situation for evacuation from underground space through staircases and doors, and that a depth of 0.7 - 0.8 m on the ground would also be a critical situation for safe evacuation through the doors of a vehicle. These criteria may vary according to different inundation situations, and they are also influenced by individual variation such as differences in physical strength. This means that the criteria require a cautious stance in use, although they provide an index of the limit for safe evacuation from underground space.

  15. A feasible approach to implement a commercial scale CANDU fuel manufacturing plant in Egypt

    International Nuclear Information System (INIS)

    El-Shehawy, I.; El-Sharaky, M.; Yasso, K.; Selim, I.; Graham, N.; Newington, D.

    1995-01-01

    Many planning scenarios have been examined to assess and evaluate the economic estimates for implementing a commercial-scale CANDU fuel manufacturing plant in Egypt. The cost estimates indicated a strong influence of the annual capital costs on the total fuel manufacturing cost; this is particularly evident in a small initial plant where the proposed design output is only sufficient to supply reload fuel for a single CANDU-6 reactor. A modular approach is investigated as a possible way to reduce the capital costs for a small initial fuel plant. In this approach the plant would perform fuel assembly operations only, and the remainder of the plant would be constructed and equipped in stages, when high production volumes can justify the capital expenses. Such an approach seems economically feasible for implementing a small-scale CANDU fuel manufacturing plant in developing countries such as Egypt, and further improvement could be achieved over the years of operation. (author)

  16. A simplified, data-constrained approach to estimate the permafrost carbon-climate feedback: The PCN Incubation-Panarctic Thermal (PInc-PanTher) Scaling Approach

    Science.gov (United States)

    Koven, C. D.; Schuur, E.; Schaedel, C.; Bohn, T. J.; Burke, E.; Chen, G.; Chen, X.; Ciais, P.; Grosse, G.; Harden, J. W.; Hayes, D. J.; Hugelius, G.; Jafarov, E. E.; Krinner, G.; Kuhry, P.; Lawrence, D. M.; MacDougall, A.; Marchenko, S. S.; McGuire, A. D.; Natali, S.; Nicolsky, D.; Olefeldt, D.; Peng, S.; Romanovsky, V. E.; Schaefer, K. M.; Strauss, J.; Treat, C. C.; Turetsky, M. R.

    2015-12-01

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation-Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a 3-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100.
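    The decomposition trajectories described above can be illustrated as a sum of first-order pools. Below is a minimal Python sketch of a 3-pool model with a Q10 temperature response; the pool fractions, rates and Q10 value are hypothetical placeholders for illustration, not the fitted PInc-PanTher parameters.

```python
import numpy as np

def carbon_remaining(t_days, pools, base_rates, q10, temp_c, ref_temp_c=5.0):
    """Fraction of thawed C remaining under a 3-pool first-order decay model.

    pools      : fractional sizes of the fast/slow/passive pools (sums to 1)
    base_rates : decay constants (1/day) at the reference temperature
    q10        : multiplicative rate increase per 10 degC of warming
    """
    rates = np.asarray(base_rates) * q10 ** ((temp_c - ref_temp_c) / 10.0)
    return sum(f * np.exp(-k * t_days) for f, k in zip(pools, rates))

# Hypothetical parameters, for illustration only
pools = [0.02, 0.18, 0.80]          # fast, slow, passive fractions
base_rates = [0.05, 1e-3, 1e-5]     # 1/day at 5 degC
frac_left = carbon_remaining(365.0, pools, base_rates, q10=2.5, temp_c=10.0)
print(f"C remaining after 1 year at 10 degC: {frac_left:.3f}")
```

Holding litterfall inputs fixed, as the paper does, the committed C loss is then one minus this remaining fraction, integrated over the thawed stock.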

  17. Linking biogeomorphic feedbacks from ecosystem engineer to landscape scale: a panarchy approach

    Science.gov (United States)

    Eichel, Jana

    2017-04-01

    Scale is a fundamental concept in both ecology and geomorphology. Therefore, scale-based approaches are a valuable tool to bridge the disciplines and improve the understanding of feedbacks between geomorphic processes, landforms, material and organisms and ecological processes in biogeomorphology. Yet, linkages between biogeomorphic feedbacks on different scales, e.g. between ecosystem engineering and landscape scale patterns and dynamics, are not well understood. A panarchy approach sensu Holling et al. (2002) can help to close this research gap and explain how structure and function are created in biogeomorphic ecosystems. Based on results from previous biogeomorphic research in Turtmann glacier foreland (Switzerland; Eichel, 2017; Eichel et al. 2013, 2016), a panarchy concept is presented for lateral moraine slope biogeomorphic ecosystems. It depicts biogeomorphic feedbacks on different spatiotemporal scales as a set of nested adaptive cycles and links them by 'remember' and 'revolt' connections. On a small scale (cm² - m²; seconds to years), the life cycle of the ecosystem engineer Dryas octopetala L. is considered as an adaptive cycle. Biogeomorphic succession within patches created by geomorphic processes represents an intermediate scale adaptive cycle (m² - ha, years to decades), while geomorphic and ecologic pattern development at a landscape scale (ha - km², decades to centuries) can be illustrated by an adaptive cycle of 'biogeomorphic patch dynamics' (Eichel, 2017). In the panarchy, revolt connections link the smaller scale adaptive cycles to larger scale cycles: on lateral moraine slopes, the development of ecosystem engineer biomass and cover controls the engineering threshold of the biogeomorphic feedback window (Eichel et al., 2016) and therefore the onset of the biogeomorphic phase during biogeomorphic succession. In this phase, engineer patches and biogeomorphic structures can be created in the patch mosaic of the landscape. Remember connections

  18. System resiliency quantification using non-state-space and state-space analytic models

    International Nuclear Information System (INIS)

    Ghosh, Rahul; Kim, DongSeong; Trivedi, Kishor S.

    2013-01-01

    Resiliency is becoming an important service attribute for large scale distributed systems and networks. Key problems in resiliency quantification are the lack of consensus on the definition of resiliency and the lack of a systematic approach to quantifying it. In general, resiliency is defined as the ability of a system/person/organization to recover from, defy or resist any shock, insult, or disturbance [1]. Many researchers interpret resiliency as a synonym for fault-tolerance and reliability/availability. However, the effect of failure/repair on systems is already covered by reliability/availability measures, and the effect on individual jobs is well covered under the umbrella of performability [2] and task completion time analysis [3]. We use Laprie [4] and Simoncini's [5] definition, in which resiliency is the persistence of service delivery that can justifiably be trusted when facing changes. The changes we refer to here are beyond the envelope of system configurations already considered during system design, that is, beyond fault tolerance. In this paper, we outline a general approach for system resiliency quantification. Using examples of non-state-space and state-space stochastic models, we analytically-numerically quantify the resiliency of system performance, reliability, availability and performability measures with respect to structural and parametric changes.
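    As a minimal illustration of the state-space side of such an analysis, the sketch below computes the steady-state availability of a two-state repairable component from the generator matrix of its continuous-time Markov chain; the failure and repair rates are illustrative, not taken from the paper.

```python
import numpy as np

# Two-state repairable component: UP --lam--> DOWN --mu--> UP
lam, mu = 1 / 1000.0, 1 / 8.0   # failure and repair rates per hour (illustrative)

# Generator matrix of the CTMC; each row sums to zero
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])

# Steady-state probabilities solve pi Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

closed_form = mu / (lam + mu)   # textbook availability of a 2-state system
print(f"steady-state availability: {pi[0]:.6f} (closed form {closed_form:.6f})")
```

A parametric "change" in the resiliency sense would perturb lam or mu and track the transient of pi(t) back to steady state; the same linear-algebra machinery extends to larger state spaces.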

  19. Commercial microwave space power

    International Nuclear Information System (INIS)

    Siambis, J.; Gregorwich, W.; Walmsley, S.; Shockey, K.; Chang, K.

    1991-01-01

    This paper reports on central commercial space power, in which power is generated via large scale solar arrays and distributed to satellites by docking, tethering or beamed power (such as microwave or laser beams); this concept is being investigated as a potentially advantageous alternative to present day technology, in which each satellite carries its own power generating capability. The cost, size and weight of electrical power service, together with overall mission requirements and flexibility, are the principal selection criteria, with the case of standard solar array panels based on the satellite as the reference point. This paper presents and investigates a current technology design point for beamed microwave commercial space power. The design point requires that 25 kW be delivered to the user load with 30% overall system efficiency. The key elements of the design point are: an efficient rectenna at the user end; a high gain, narrow beam width, efficient antenna at the central space power station end; and a reliable and efficient CW microwave tube. Design trades to optimize the proposed near term design point and to explore characteristics of future systems were performed. Future developments for making the beamed microwave space power approach more competitive against docking and tethering are discussed
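    The stated design point already fixes the required generation capacity; a back-of-the-envelope check (illustrative arithmetic only, not the paper's analysis):

```python
# 25 kW delivered at 30% overall system efficiency implies the DC power the
# central station must generate before transmission and conversion losses.
P_load = 25e3        # W, power required at the user load (from the abstract)
eta_system = 0.30    # overall system efficiency (from the abstract)

P_dc_in = P_load / eta_system
print(f"DC input power at the central station: {P_dc_in / 1e3:.1f} kW")  # 83.3 kW
```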

  20. Using the landform tool to calculate landforms for hydrogeomorphic wetland classification at a country-wide scale

    CSIR Research Space (South Africa)

    Van Deventer, Heidi

    2014-11-01

    Full Text Available Hydrogeomorphic approaches to wetland classification use landform classes to distinguish wetland functionality at a regional scale. Space-borne radar technology enabled faster regional surveying of surface elevations to digital elevation models...

  1. Researching on Hawking Effect in a Kerr Space Time via Open Quantum System Approach

    International Nuclear Information System (INIS)

    Liu, Wen-Biao; Liu, Xian-Ming

    2014-01-01

    It has been proposed that Hawking radiation from a Schwarzschild or a de Sitter spacetime can be understood as the manifestation of thermalization phenomena in the framework of an open quantum system. Through examining the time evolution of a detector interacting with vacuum massless scalar fields, it is found that the detector would spontaneously excite with a probability the same as that of thermal radiation at the Hawking temperature. Following these proposals, the Hawking effect in a Kerr space time is investigated in the framework of open quantum systems. It is shown that the Hawking effect of the Kerr space time can also be understood as the manifestation of thermalization phenomena via the open quantum system approach. Furthermore, it is found that near-horizon local conformal symmetry plays the key role in the quantum effect of the Kerr space time.

  2. "Non-cold" dark matter at small scales: a general approach

    Science.gov (United States)

    Murgia, R.; Merle, A.; Viel, M.; Totzauer, M.; Schneider, A.

    2017-11-01

    Structure formation at small cosmological scales provides an important frontier for dark matter (DM) research. Scenarios with small DM particle masses, large momenta or hidden interactions tend to suppress the gravitational clustering at small scales. The details of this suppression depend on the DM particle nature, allowing for a direct link between DM models and astrophysical observations. However, most of the astrophysical constraints obtained so far refer to a very specific shape of the power suppression, corresponding to thermal warm dark matter (WDM), i.e., candidates with a Fermi-Dirac or Bose-Einstein momentum distribution. In this work we introduce a new analytical fitting formula for the power spectrum, which is simple yet flexible enough to reproduce the clustering signal of large classes of non-thermal DM models, which are not at all adequately described by the oversimplified notion of WDM. We show that the formula is able to fully cover the parameter space of sterile neutrinos (whether resonantly produced or from particle decay), mixed cold and warm models, fuzzy dark matter, as well as other models suggested by the effective theory of structure formation (ETHOS). Based on this fitting formula, we perform a large suite of N-body simulations and we extract important nonlinear statistics, such as the matter power spectrum and the halo mass function. Finally, we present first preliminary astrophysical constraints, based on linear theory, from both the number of Milky Way satellites and the Lyman-α forest. This paper is a first step towards a general and comprehensive modeling of small-scale departures from the standard cold DM model.
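    A fitting formula of this kind is commonly written as T(k) = [1 + (αk)^β]^γ, where T²(k) is the ratio of the non-cold to the cold DM linear power spectrum. The sketch below evaluates such a formula and locates the half-mode scale; the parameter values (α, β, γ) are hypothetical, chosen only to show the shape of the suppression.

```python
import numpy as np

def transfer(k, alpha, beta, gamma):
    """Fitting formula T(k) = [1 + (alpha*k)^beta]^gamma for the suppression
    of the linear power spectrum: P_nCDM(k) = T(k)^2 * P_CDM(k)."""
    return (1.0 + (alpha * k) ** beta) ** gamma

k = np.logspace(-1, 2, 200)            # wavenumbers in h/Mpc
# Illustrative parameters; a negative gamma suppresses small scales (large k)
T = transfer(k, alpha=0.05, beta=2.5, gamma=-1.0)

# Half-mode scale, where T(k)^2 = 1/2: a common point of comparison between models
k_half = k[np.argmin(np.abs(T**2 - 0.5))]
print(f"T(k)^2 drops to 1/2 near k = {k_half:.1f} h/Mpc")
```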

  3. A New Approach to Space Situational Awareness using Small Ground-Based Telescopes

    Energy Technology Data Exchange (ETDEWEB)

    Anheier, Norman C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Chen, Cliff S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-12-01

    This report discusses a new SSA approach evaluated by Pacific Northwest National Laboratory (PNNL) that may lead to highly scalable, small telescope observing stations designed to help manage the growing space surveillance burden. Using the methods and observing tools described in this report, the team was able to acquire and track very faint satellites (near Pluto’s apparent brightness). Photometric data was collected and used to correlate object orbital position as a function of atomic clock-derived time. Object apparent brightness was estimated by image analysis and nearby star calibration. The measurement performance was only limited by weather conditions, object brightness, and the sky glow at the observation site. In the future, these new SSA technologies and techniques may be utilized to protect satellite assets, detect and monitor orbiting debris fields, and support Outer Space Treaty monitoring and transparency.

  4. A Practical Computational Method for the Anisotropic Redshift-Space 3-Point Correlation Function

    Science.gov (United States)

    Slepian, Zachary; Eisenstein, Daniel J.

    2018-04-01

    We present an algorithm enabling computation of the anisotropic redshift-space galaxy 3-point correlation function (3PCF) scaling as N², with N the number of galaxies. Our previous work showed how to compute the isotropic 3PCF with this scaling by expanding the radially-binned density field around each galaxy in the survey into spherical harmonics and combining these coefficients to form multipole moments. The N² scaling occurred because this approach never explicitly required the relative angle between a galaxy pair about the primary galaxy. Here we generalize this work, demonstrating that in the presence of azimuthally-symmetric anisotropy produced by redshift-space distortions (RSD) the 3PCF can be described by two triangle side lengths, two independent total angular momenta, and a spin. This basis for the anisotropic 3PCF allows its computation with negligible additional work over the isotropic 3PCF. We also present the covariance matrix of the anisotropic 3PCF measured in this basis. Our algorithm tracks the full 5-D redshift-space 3PCF, uses an accurate line of sight to each triplet, is exact in angle, and easily handles edge correction. It will enable use of the anisotropic large-scale 3PCF as a probe of RSD in current and upcoming large-scale redshift surveys.
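    The core of the N² trick (spherical-harmonic coefficients of the radially binned density around each primary, combined into multipole moments without ever forming pair angles) can be sketched for a toy catalogue as follows; the random points, bin counts and lmax are purely illustrative, and this is not the authors' production code.

```python
import numpy as np
from scipy.special import sph_harm

rng = np.random.default_rng(0)
# Toy catalogue: neighbours of a single "primary" galaxy, in spherical coordinates
n, lmax, nbins = 500, 2, 4
r = rng.uniform(0.1, 1.0, n)
theta = np.arccos(rng.uniform(-1, 1, n))   # polar angle
phi = rng.uniform(0, 2 * np.pi, n)         # azimuthal angle
rbin = np.minimum((r * nbins).astype(int), nbins - 1)

# a_lm per radial bin: spherical-harmonic coefficients of the binned density
a = np.zeros((lmax + 1, 2 * lmax + 1, nbins), dtype=complex)
for l in range(lmax + 1):
    for m in range(-l, l + 1):
        ylm = sph_harm(m, l, phi, theta)   # scipy order: (m, l, azimuth, polar)
        for b in range(nbins):
            a[l, m, b] = np.sum(np.conj(ylm)[rbin == b])

# Multipole moments about this primary, with no explicit pair angles:
# zeta_l(b1, b2) ~ (4*pi / (2l+1)) * sum_m a_lm(b1) * conj(a_lm(b2))
zeta = np.array([
    (4 * np.pi / (2 * l + 1)) * np.einsum('mb,mc->bc', a[l], np.conj(a[l]))
    for l in range(lmax + 1)
]).real
print(zeta.shape)  # (lmax + 1, nbins, nbins)
```

Summing these per-primary moments over all primaries gives the estimator; the cost per primary is linear in its neighbour count, hence the overall N² scaling.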

  5. Bantam: A Systematic Approach to Reusable Launch Vehicle Technology Development

    Science.gov (United States)

    Griner, Carolyn; Lyles, Garry

    1999-01-01

    The Bantam technology project is focused on providing a low cost launch capability for very small (100 kilogram) NASA and University science payloads. The cost goal has been set at one million dollars per launch. The Bantam project, however, represents much more than a small payload launch capability. Bantam represents a unique, systematic approach to reusable launch vehicle technology development. This technology maturation approach will enable future highly reusable launch concepts in any payload class. These launch vehicle concepts of the future could deliver payloads for hundreds of dollars per pound, enabling dramatic growth in civil and commercial space enterprise. The National Aeronautics and Space Administration (NASA) has demonstrated a better, faster, and cheaper approach to science discovery in recent years. This approach is exemplified by the successful Mars Exploration Program led by the Jet Propulsion Laboratory (JPL) for the NASA Space Science Enterprise. The Bantam project represents an approach to space transportation technology maturation that is very similar to the Mars Exploration Program. The NASA Advanced Space Transportation Program (ASTP) and Future X Pathfinder Program will combine to systematically mature reusable space transportation technology from low technology readiness to system level flight demonstration. New reusable space transportation capability will be demonstrated at a small (Bantam) scale approximately every two years. Each flight demonstration will build on the knowledge derived from the previous flight tests. The Bantam scale flight demonstrations will begin with the flights of the X-34. The X-34 will demonstrate reusable launch vehicle technologies including: flight regimes up to Mach 8 and 250,000 feet, autonomous flight operations, all weather operations, twenty-five flights in one year with a surge capability of two flights in less than twenty-four hours and safe abort. The Bantam project will build on this initial

  6. A GOCE-only global gravity field model by the space-wise approach

    DEFF Research Database (Denmark)

    Migliaccio, Frederica; Reguzzoni, Mirko; Gatti, Andrea

    2011-01-01

    The global gravity field model computed by the spacewise approach is one of three official solutions delivered by ESA from the analysis of the GOCE data. The model consists of a set of spherical harmonic coefficients and the corresponding error covariance matrix. The main idea behind this approach...... the orbit to reduce the noise variance and correlation before gridding the data. In the first release of the space-wise approach, based on a period of about two months, some prior information coming from existing gravity field models entered into the solution especially at low degrees and low orders...... degrees; the second is an internally computed GOCE-only prior model to be used in place of the official quick-look model, thus removing the dependency on EIGEN5C especially in the polar gaps. Once the procedure to obtain a GOCE-only solution has been outlined, a new global gravity field model has been...

  7. Urban green spaces assessment approach to health, safety and environment

    Directory of Open Access Journals (Sweden)

    B. Akbari Neisiani

    2016-04-01

    Full Text Available The city is alive with dynamic systems, and parks and urban green spaces have high strategic importance, helping to improve living conditions. Urban parks serve as visual landscapes with many benefits, such as reducing stress, reducing air pollution and producing oxygen, creating opportunities for people to participate in physical activities, providing an optimal environment for children, and decreasing noise pollution. The importance of parks is such that they are discussed as an indicator of urban development. The design and maintenance of urban green spaces therefore requires an integrated management system based on international standards of health, safety and the environment. In this study, Nezami Ganjavi Park (District 6 of Tehran) was analyzed with an integrated management systems approach. In order to identify the status of the park with respect to the requirements of the management system, based on previous studies and Tehran Municipality's considerations, a checklist was prepared and completed through a park survey and interviews with green space experts. The results showed that the utility of health indicators was 92.33% (the highest), while environmental and safety indicators were 72% and 84%, respectively. According to the SWOT analysis of Nezami Ganjavi Park, strengths include fire extinguishers, a first aid box and annual testing of drinking water; an important weakness is the use of unseparated trash bins. As an opportunity, the park offers attractive facilities for children and parents to spend free time. Finally, the most important threat is park facilities that are unsuitable for the disabled.

  8. A multifractal approach to space-filling recovery for PET quantification

    Energy Technology Data Exchange (ETDEWEB)

    Willaime, Julien M. Y., E-mail: julien.willaime@siemens.com; Aboagye, Eric O. [Comprehensive Cancer Imaging Centre, Imperial College London, Hammersmith Hospital, London W12 0NN (United Kingdom); Tsoumpas, Charalampos [Division of Medical Physics, University of Leeds, LS2 9JT (United Kingdom); Turkheimer, Federico E. [Department of Neuroimaging, Institute of Psychiatry, King’s College London, London SE5 8AF (United Kingdom)

    2014-11-01

    Purpose: A new image-based methodology is developed for estimating the apparent space-filling properties of an object of interest in PET imaging without need for a robust segmentation step and used to recover accurate estimates of total lesion activity (TLA). Methods: A multifractal approach and the fractal dimension are proposed to recover the apparent space-filling index of a lesion (tumor volume, TV) embedded in nonzero background. A practical implementation is proposed, and the index is subsequently used with mean standardized uptake value (SUV {sub mean}) to correct TLA estimates obtained from approximate lesion contours. The methodology is illustrated on fractal and synthetic objects contaminated by partial volume effects (PVEs), validated on realistic {sup 18}F-fluorodeoxyglucose PET simulations and tested for its robustness using a clinical {sup 18}F-fluorothymidine PET test–retest dataset. Results: TLA estimates were stable for a range of resolutions typical in PET oncology (4–6 mm). By contrast, the space-filling index and intensity estimates were resolution dependent. TLA was generally recovered within 15% of ground truth on postfiltered PET images affected by PVEs. Volumes were recovered within 15% variability in the repeatability study. Results indicated that TLA is a more robust index than other traditional metrics such as SUV {sub mean} or TV measurements across imaging protocols. Conclusions: The fractal procedure reported here is proposed as a simple and effective computational alternative to existing methodologies which require the incorporation of image preprocessing steps (i.e., partial volume correction and automatic segmentation) prior to quantification.
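    The space-filling index used here is a fractal dimension; a standard way to estimate one from a binary image is box counting. The sketch below is a minimal 2-D illustration of that idea, not the paper's multifractal implementation.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary 2-D mask:
    count occupied boxes N(s) at several box sizes s, then fit
    log N(s) = -D * log s + c."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        # trim so the grid divides evenly, then pool s x s blocks
        trimmed = mask[:h - h % s, :w - w % s]
        blocks = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity check: a filled square is a genuinely 2-D set, so D should be 2
filled = np.ones((128, 128), dtype=bool)
print(f"box-counting dimension of a filled square: {box_counting_dimension(filled):.2f}")  # 2.00
```

For a PET lesion the mask would come from an approximate contour, and the recovered index then corrects the total-activity estimate as described in the abstract.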

  9. Prediction and verification of centrifugal dewatering of P. pastoris fermentation cultures using an ultra scale-down approach.

    Science.gov (United States)

    Lopes, A G; Keshavarz-Moore, E

    2012-08-01

    Recent years have seen a dramatic rise in fermentation broth cell densities and a shift to extracellular product expression in microbial cells. As a result, dewatering characteristics during cell separation is of importance, as any liquor trapped in the sediment results in loss of product, and thus a decrease in product recovery. In this study, an ultra scale-down (USD) approach was developed to enable the rapid assessment of dewatering performance of pilot-scale centrifuges with intermittent solids discharge. The results were then verified at scale for two types of pilot-scale centrifuges: a tubular bowl equipment and a disk-stack centrifuge. Initial experiments showed that employing a laboratory-scale centrifugal mimic based on using a comparable feed concentration to that of the pilot-scale centrifuge, does not successfully predict the dewatering performance at scale (P-value centrifuge. Initial experiments used Baker's yeast feed suspensions followed by fresh Pichia pastoris fermentation cultures. This work presents a simple and novel USD approach to predict dewatering levels in two types of pilot-scale centrifuges using small quantities of feedstock (centrifuge needs to be operated, reducing the need for repeated pilot-scale runs during early stages of process development. Copyright © 2012 Wiley Periodicals, Inc.

  10. Uniform color space analysis of LACIE image products

    Science.gov (United States)

    Nalepka, R. F. (Principal Investigator); Balon, R. J.; Cicone, R. C.

    1979-01-01

    The author has identified the following significant results. Analysis and comparison of image products generated by different algorithms show that the scaling and biasing of data channels for control of PFC primaries lead to loss of information (in a probability-of-misclassification sense) by two major processes. In order of importance they are: neglecting the input of one channel of data in any one image, and failing to provide sufficient color resolution of the data. The scaling and biasing approach tends to distort distance relationships in data space and provides less than desirable resolution when the data variation is typical of a developed, nonhazy agricultural scene.

  11. Space-time trajectories of wind power generation: Parameterized precision matrices under a Gaussian copula approach

    DEFF Research Database (Denmark)

    Tastu, Julija; Pinson, Pierre; Madsen, Henrik

    2015-01-01

    Emphasis is placed on generating space-time trajectories of wind power generation, consisting of paths sampled from high-dimensional joint predictive densities, describing wind power generation at a number of contiguous locations and successive lead times. A modelling approach taking advantage...... -correlations. Estimation is performed in a maximum likelihood framework. Based on a test case application in Denmark, with spatial dependencies over 15 areas and temporal ones for 43 hourly lead times (hence, for a dimension of n = 645), it is shown that accounting for space-time effects is crucial for generating skilful......
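    A Gaussian copula construction of such trajectories can be sketched as: draw latent Gaussian samples with a parameterized space-time covariance (the inverse of a precision matrix), map them to uniforms, and push them through the marginal predictive distributions. The sketch below uses a toy separable covariance and Beta marginals as stand-ins for the paper's fitted precision matrix and wind power marginals.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy setting: 3 sites x 5 lead times = 15-dimensional joint density
n_sites, n_lead = 3, 5
dim = n_sites * n_lead

# Separable exponential-decay correlations in space and time (illustrative
# stand-in for the paper's parameterized precision matrix)
s = np.arange(n_sites); t = np.arange(n_lead)
C_space = np.exp(-np.abs(s[:, None] - s[None, :]) / 2.0)
C_time = np.exp(-np.abs(t[:, None] - t[None, :]) / 4.0)
C = np.kron(C_space, C_time)

# Gaussian copula: latent z ~ N(0, C) -> uniforms -> marginal predictive CDFs
# (Beta marginals stand in for per-site, per-lead-time wind power densities)
z = rng.multivariate_normal(np.zeros(dim), C, size=100)
u = stats.norm.cdf(z)
trajectories = stats.beta.ppf(u, a=2.0, b=5.0).reshape(100, n_sites, n_lead)
print(trajectories.shape)  # (100, 3, 5)
```

Each of the 100 samples is one space-time trajectory whose marginals match the predictive densities while the copula carries the cross-site and cross-lead-time dependence.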

  12. Mentoring SFRM: A New Approach to International Space Station Flight Controller Training

    Science.gov (United States)

    Huning, Therese; Barshi, Immanuel; Schmidt, Lacey

    2008-01-01

    The Mission Operations Directorate (MOD) of the Johnson Space Center is responsible for providing continuous operations support for the International Space Station (ISS). Operations support requires flight controllers who are skilled in team performance as well as the technical operations of the ISS. Space Flight Resource Management (SFRM), a NASA adapted variant of Crew Resource Management (CRM), is the competency model used in the MOD. ISS flight controller certification has evolved to include a balanced focus on development of SFRM and technical expertise. The latest challenge the MOD faces is how to certify an ISS flight controller (operator) to a basic level of effectiveness in 1 year. SFRM training uses a two-pronged approach to expediting operator certification: 1) imbed SFRM skills training into all operator technical training and 2) use senior flight controllers as mentors. This paper focuses on how the MOD uses senior flight controllers as mentors to train SFRM skills. Methods: A mentor works with an operator throughout the training flow. Inserted into the training flow are guided-discussion sessions and on-the-job observation opportunities focusing on specific SFRM skills, including: situational leadership, conflict management, stress management, cross-cultural awareness, self care and team care while on-console, communication, workload management, and situation awareness. The mentor and operator discuss the science and art behind the skills, cultural effects on skills applications, recognition of good and bad skills applications, recognition of how skills application changes subtly in different situations, and individual goals and techniques for improving skills. Discussion: This mentoring program provides an additional means of transferring SFRM knowledge compared to traditional CRM training programs. Our future endeavors in training SFRM skills (as well as those of other organizations) may benefit from adding team performance skills mentoring. This paper

  13. Phase-space densities and effects of resonance decays in a hydrodynamic approach to heavy ion collisions

    International Nuclear Information System (INIS)

    Akkelin, S.V.; Sinyukov, Yu.M.

    2004-01-01

    A method allowing analysis of the overpopulation of phase space in heavy ion collisions in a model-independent way is proposed within the hydrodynamic approach. It makes it possible to extract a chemical potential of thermal pions at freeze-out, irrespective of the form of the freeze-out (isothermal) hypersurface in Minkowski space and the transverse flows on it. The contributions of resonance (with masses up to 2 GeV) decays to spectra, interferometry volumes, and phase-space densities are calculated and discussed in detail. The estimates of average phase-space densities and chemical potentials of thermal pions are obtained for SPS and RHIC energies. They demonstrate that multibosonic phenomena at those energies might be considered as a correction factor rather than as a significant physical effect. The analysis of the evolution of the pion average phase-space density in chemically frozen hadron systems shows that it is almost constant or slightly increases with time, while the particle density and phase-space density at each space point decrease rapidly during the system's expansion. We found that, unlike the particle density, the average phase-space density has no direct link to the freeze-out criterion and final thermodynamic parameters, being connected rather to the initial phase-space density of hadronic matter formed in relativistic nucleus-nucleus collisions.

  14. Approaching the new reality. [changes in NASA space programs due to US economy

    Science.gov (United States)

    Diaz, Al V.

    1993-01-01

    The focus on more frequent access to space through smaller, less costly missions, and on NASA's role as a source of technological advance within the U.S. economy is discussed. The Pluto fast flyby mission is examined as an illustration of this approach. Testbeds are to be developed to survive individual programs, becoming permanent facilities, to allow for technological upgrades on an ongoing basis.

  15. Scale-Dependence of Processes Structuring Dung Beetle Metacommunities Using Functional Diversity and Community Deconstruction Approaches

    Science.gov (United States)

    da Silva, Pedro Giovâni; Hernández, Malva Isabel Medina

    2015-01-01

    Community structure is driven by mechanisms linked to environmental, spatial and temporal processes, which have been successfully addressed using metacommunity framework. The relative importance of processes shaping community structure can be identified using several different approaches. Two approaches that are increasingly being used are functional diversity and community deconstruction. Functional diversity is measured using various indices that incorporate distinct community attributes. Community deconstruction is a way to disentangle species responses to ecological processes by grouping species with similar traits. We used these two approaches to determine whether they are improvements over traditional measures (e.g., species composition, abundance, biomass) for identification of the main processes driving dung beetle (Scarabaeinae) community structure in a fragmented mainland-island landscape in southern Brazilian Atlantic Forest. We sampled five sites in each of four large forest areas, two on the mainland and two on the island. Sampling was performed in 2012 and 2013. We collected abundance and biomass data from 100 sampling points distributed over 20 sampling sites. We studied environmental, spatial and temporal effects on dung beetle community across three spatial scales, i.e., between sites, between areas and mainland-island. The γ-diversity based on species abundance was mainly attributed to β-diversity as a consequence of the increase in mean α- and β-diversity between areas. Variation partitioning on abundance, biomass and functional diversity showed scale-dependence of processes structuring dung beetle metacommunities. We identified two major groups of responses among 17 functional groups. In general, environmental filters were important at both local and regional scales. Spatial factors were important at the intermediate scale. Our study supports the notion of scale-dependence of environmental, spatial and temporal processes in the distribution

  16. Trajectory approach to dissipative quantum phase space dynamics: Application to barrier scattering

    International Nuclear Information System (INIS)

    Hughes, Keith H.; Wyatt, Robert E.

    2004-01-01

    The Caldeira-Leggett master equation, expressed in Lindblad form, has been used in the numerical study of the effect of a thermal environment on the dynamics of the scattering of a wave packet from a repulsive Eckart barrier. The dynamics are studied in terms of phase space trajectories associated with the distribution function, W(q,p,t). The equations of motion for the trajectories include quantum terms that introduce nonlocality into the motion, which imply that an ensemble of correlated trajectories needs to be propagated. However, use of the derivative propagation method (DPM) allows each trajectory to be propagated individually. This is achieved by deriving equations of motion for the partial derivatives of W(q,p,t) that appear in the master equation. The effects of dissipation on the trajectories are studied and results are shown for the transmission probability. On short time scales, decoherence is demonstrated by a swelling of trajectories into momentum space. For a nondissipative system, a comparison is made of the DPM with the 'exact' transmission probability calculated from a fixed grid calculation

  17. Disordering scaling and generalized nearest-neighbor approach in the thermodynamics of Lennard-Jones systems

    International Nuclear Information System (INIS)

    Vorob'ev, V.S.

    2003-01-01

    We suggest a concept of multiple disordering scaling of the crystalline state. Such a scaling procedure applied to a crystal leads to the liquid and (in the low-density limit) gas states. This approach provides an explanation for the high value of the configurational (common) entropy of liquefied noble gases, which can be deduced from experimental data. We use the generalized nearest-neighbor approach to calculate the free energy and pressure of Lennard-Jones systems after performing this scaling procedure. These thermodynamic functions depend only on one parameter characterizing the disordering. Condensed states of the system (liquid and solid) correspond to small values of this parameter. When this parameter tends to unity, we get an asymptotically exact equation of state for a gas involving the second virial coefficient. A reasonable choice of values for the disordering parameter (ranging between zero and unity) allows us to find the lines of coexistence between different phase states in Lennard-Jones systems, which are in good agreement with the available experimental data
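The low-density limit mentioned above, where the equation of state reduces to the second-virial form, can be checked numerically. The sketch below works in reduced Lennard-Jones units (sigma = epsilon = k_B = 1, an assumption for illustration) and evaluates the reduced second virial coefficient B2*(T*) = -2π ∫ (e^(-u(r)/T*) - 1) r² dr, splitting the quadrature at r = 1 to help the integrator near the repulsive core:

```python
import numpy as np
from scipy.integrate import quad

def b2_reduced(T):
    """Reduced second virial coefficient B2* for the Lennard-Jones
    potential, in units where sigma = epsilon = k_B = 1."""
    def integrand(r):
        u = 4.0 * (r**-12 - r**-6)            # LJ pair potential u(r)/epsilon
        return (np.exp(-u / T) - 1.0) * r**2  # Mayer f-function times r^2
    # Split the integral at r = 1 (the core) for robust quadrature;
    # the integrand is negligible beyond r ~ 50.
    core, _ = quad(integrand, 1e-12, 1.0)
    tail, _ = quad(integrand, 1.0, 50.0)
    return -2.0 * np.pi * (core + tail)
```

As expected, B2* is negative at low temperature (net attraction) and changes sign near the Boyle temperature T* ≈ 3.42.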

  18. Path integral approach for quantum motion on spaces of non-constant curvature according to Koenigs - Three dimensions

    International Nuclear Information System (INIS)

    Grosche, C.

    2007-08-01

    In this contribution a path integral approach for the quantum motion on three-dimensional spaces according to Koenigs, for short "Koenigs spaces", is discussed. Their construction is simple: one takes a Hamiltonian from three-dimensional flat space and divides it by a three-dimensional superintegrable potential. Such superintegrable potentials are the isotropic singular oscillator, the Holt potential, the Coulomb potential, or two centrifugal potentials, respectively. In all cases a non-trivial space of non-constant curvature is generated. In order to obtain a proper quantum theory, a curvature term has to be incorporated into the quantum Hamiltonian. For possible bound-state solutions we find equations up to twelfth order in the energy E. (orig.)

  19. Decadal opportunities for space architects

    Science.gov (United States)

    Sherwood, Brent

    2012-12-01

    A significant challenge for the new field of space architecture is the dearth of project opportunities. Yet every year more young professionals express interest to enter the field. This paper derives projections that bound the number, type, and range of global development opportunities that may be reasonably expected over the next few decades for human space flight (HSF) systems so those interested in the field can benchmark their goals. Four categories of HSF activity are described: human Exploration of solar system bodies; human Servicing of space-based assets; large-scale development of space Resources; and Breakout of self-sustaining human societies into the solar system. A progressive sequence of capabilities for each category starts with its earliest feasible missions and leads toward its full expression. The four sequences are compared in scale, distance from Earth, and readiness. Scenarios hybridize the most synergistic features from the four sequences for comparison to status quo, government-funded HSF program plans. Finally qualitative, decadal, order-of-magnitude estimates are derived for system development needs, and hence opportunities for space architects. Government investment towards human planetary exploration is the weakest generator of space architecture work. Conversely, the strongest generator is a combination of three market drivers: (1) commercial passenger travel in low Earth orbit; (2) in parallel, government extension of HSF capability to GEO; both followed by (3) scale-up demonstration of end-to-end solar power satellites in GEO. The rich end of this scale affords space architecture opportunities which are more diverse, complex, large-scale, and sociologically challenging than traditional exploration vehicle cabins and habitats.

  20. Space Station Freedom - Configuration management approach to supporting concurrent engineering and total quality management. [for NASA Space Station Freedom Program

    Science.gov (United States)

    Gavert, Raymond B.

    1990-01-01

    Some experiences of NASA configuration management in providing concurrent engineering support to the Space Station Freedom program for the achievement of life cycle benefits and total quality are discussed. Three change decision experiences involving tracing requirements and automated information systems of the electrical power system are described. The potential benefits of concurrent engineering and total quality management include improved operational effectiveness, reduced logistics and support requirements, prevention of schedule slippages, and life cycle cost savings. It is shown how configuration management can influence the benefits attained through disciplined approaches and innovations that compel consideration of all the technical elements of engineering and quality factors that apply to the program development, transition to operations and in operations. Configuration management experiences involving the Space Station program's tiered management structure, the work package contractors, international partners, and the participating NASA centers are discussed.

  1. A primer on Hilbert space theory linear spaces, topological spaces, metric spaces, normed spaces, and topological groups

    CERN Document Server

    Alabiso, Carlo

    2015-01-01

    This book is an introduction to the theory of Hilbert space, a fundamental tool for non-relativistic quantum mechanics. Linear, topological, metric, and normed spaces are all addressed in detail, in a rigorous but reader-friendly fashion. The rationale for an introduction to the theory of Hilbert space, rather than a detailed study of Hilbert space theory itself, resides in the very high mathematical difficulty of even the simplest physical case. Within an ordinary graduate course in physics there is insufficient time to cover the theory of Hilbert spaces and operators, as well as distribution theory, with sufficient mathematical rigor. Compromises must be found between full rigor and practical use of the instruments. The book is based on the author's lessons on functional analysis for graduate students in physics. It will equip the reader to approach Hilbert space and, subsequently, rigged Hilbert space, with a more practical attitude. With respect to the original lectures, the mathematical flavor in all sub...

  2. A probabilistic approach to quantifying spatial patterns of flow regimes and network-scale connectivity

    Science.gov (United States)

    Garbin, Silvia; Alessi Celegon, Elisa; Fanton, Pietro; Botter, Gianluca

    2017-04-01

    The temporal variability of river flow regime is a key feature structuring and controlling fluvial ecological communities and ecosystem processes. In particular, streamflow variability induced by climate/landscape heterogeneities or other anthropogenic factors significantly affects the connectivity between streams with notable implication for river fragmentation. Hydrologic connectivity is a fundamental property that guarantees species persistence and ecosystem integrity in riverine systems. In riverine landscapes, most ecological transitions are flow-dependent and the structure of flow regimes may affect ecological functions of endemic biota (i.e., fish spawning or grazing of invertebrate species). Therefore, minimum flow thresholds must be guaranteed to support specific ecosystem services, like fish migration, aquatic biodiversity and habitat suitability. In this contribution, we present a probabilistic approach aiming at a spatially-explicit, quantitative assessment of hydrologic connectivity at the network-scale as derived from river flow variability. Dynamics of daily streamflows are estimated based on catchment-scale climatic and morphological features, integrating a stochastic, physically based approach that accounts for the stochasticity of rainfall with a water balance model and a geomorphic recession flow model. The non-exceedance probability of ecologically meaningful flow thresholds is used to evaluate the fragmentation of individual stream reaches, and the ensuing network-scale connectivity metrics. A multi-dimensional Poisson Process for the stochastic generation of rainfall is used to evaluate the impact of climate signature on reach-scale and catchment-scale connectivity. The analysis shows that streamflow patterns and network-scale connectivity are influenced by the topology of the river network and the spatial variability of climatic properties (rainfall, evapotranspiration). The framework offers a robust basis for the prediction of the impact of
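As a minimal illustration of the non-exceedance idea, the sketch below assumes daily streamflows follow a gamma distribution parameterized by a mean flow and a coefficient of variation (a common choice in stochastic streamflow modelling; the paper's exact parameterization is not reproduced here). A reach is flagged as fragmented when an ecological flow threshold is violated more often than a tolerated fraction of time:

```python
from scipy.stats import gamma

def nonexceedance_prob(q_crit, mean_q, cv):
    """P(Q < q_crit) for gamma-distributed daily streamflow Q with the
    given mean and coefficient of variation (illustrative assumption)."""
    shape = 1.0 / cv**2       # gamma shape from CV
    scale = mean_q * cv**2    # gamma scale so that mean = shape * scale
    return gamma.cdf(q_crit, a=shape, scale=scale)

def reach_fragmented(q_crit, mean_q, cv, p_max=0.1):
    """Flag a reach as fragmented if the flow threshold q_crit is
    violated more often than a tolerated fraction of time p_max."""
    return nonexceedance_prob(q_crit, mean_q, cv) > p_max
```

Network-scale connectivity metrics would then aggregate these reach-level probabilities along the river topology.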

  3. A multiple-time-scale approach to the control of ITBs on JET

    International Nuclear Information System (INIS)

    Laborde, L.; Mazon, D.; Moreau, D.; Moreau, D.; Ariola, M.; Cordoliani, V.; Tala, T.

    2005-01-01

    The simultaneous real-time control of the current and temperature gradient profiles could lead to the steady-state sustainment of an internal transport barrier (ITB) and thus to a stationary optimized plasma regime. Recent experiments in JET have demonstrated significant progress in achieving such control: different current and temperature gradient target profiles have been reached and sustained for several seconds using a controller based on a static linear model. It is worth noting that the inverse safety factor profile evolves on a slow time scale (the resistive time) while the normalized electron temperature gradient reacts on a faster one (the confinement time). Moreover, these experiments have shown that the controller was sensitive to rapid plasma events, such as transient ITBs during the safety factor profile evolution, or MHD instabilities which modify the pressure profiles on the confinement time scale. In order to take into account the different dynamics of the controlled profiles and to react better to rapid plasma events, the control technique is being improved by using a multiple-time-scale approximation. The paper describes the theoretical analysis and closed-loop simulations using a control algorithm based on a two-time-scale state-space model. These closed-loop simulations, which use the full dynamic but linear model employed for the controller design to simulate the plasma response, have demonstrated that the new controller allows the normalized electron temperature gradient target profile to be reached faster than with the controller used in previous experiments. (A.C.)
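The slow/fast separation that a two-time-scale controller exploits can be illustrated with a toy linear model. In the sketch below, q stands in for the slowly evolving (resistive-time) profile quantity and g for the fast (confinement-time) gradient that tracks it; all parameter values are illustrative and not taken from JET:

```python
def simulate(t_end, dt=1e-3, tau_slow=10.0, tau_fast=0.1,
             u1=1.0, u2=0.0, k=1.0):
    """Forward-Euler simulation of a toy two-time-scale system:
    q evolves on the slow (resistive-like) time scale, g on the fast
    (confinement-like) one and is driven by q. Illustrative only."""
    q, g = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        q += dt * (u1 - q) / tau_slow           # slow state
        g += dt * (k * q + u2 - g) / tau_fast   # fast state tracks k*q + u2
    return q, g
```

After a few fast time constants, g has already collapsed onto its quasi-steady value k*q + u2 while q is still far from equilibrium, which is the structure a singular-perturbation (two-time-scale) controller design exploits.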

  4. A multiple-time-scale approach to the control of ITBs on JET

    Energy Technology Data Exchange (ETDEWEB)

    Laborde, L.; Mazon, D.; Moreau, D. [EURATOM-CEA Association (DSM-DRFC), CEA Cadarache, 13 - Saint Paul lez Durance (France); Moreau, D. [Culham Science Centre, EFDA-JET, Abingdon, OX (United Kingdom); Ariola, M. [EURATOM/ENEA/CREATE Association, Univ. Napoli Federico II, Napoli (Italy); Cordoliani, V. [Ecole Polytechnique, 91 - Palaiseau (France); Tala, T. [EURATOM-Tekes Association, VTT Processes (Finland)

    2005-07-01

    The simultaneous real-time control of the current and temperature gradient profiles could lead to the steady-state sustainment of an internal transport barrier (ITB) and thus to a stationary optimized plasma regime. Recent experiments in JET have demonstrated significant progress in achieving such control: different current and temperature gradient target profiles have been reached and sustained for several seconds using a controller based on a static linear model. It is worth noting that the inverse safety factor profile evolves on a slow time scale (the resistive time) while the normalized electron temperature gradient reacts on a faster one (the confinement time). Moreover, these experiments have shown that the controller was sensitive to rapid plasma events, such as transient ITBs during the safety factor profile evolution, or MHD instabilities which modify the pressure profiles on the confinement time scale. In order to take into account the different dynamics of the controlled profiles and to react better to rapid plasma events, the control technique is being improved by using a multiple-time-scale approximation. The paper describes the theoretical analysis and closed-loop simulations using a control algorithm based on a two-time-scale state-space model. These closed-loop simulations, which use the full dynamic but linear model employed for the controller design to simulate the plasma response, have demonstrated that the new controller allows the normalized electron temperature gradient target profile to be reached faster than with the controller used in previous experiments. (A.C.)

  5. Approaches in the determination of plant nutrient uptake and distribution in space flight conditions

    Science.gov (United States)

    Heyenga, A. G.; Forsman, A.; Stodieck, L. S.; Hoehn, A.; Kliss, M.

    2000-01-01

    The effective growth and development of vascular plants rely on the adequate availability of water and nutrients. Inefficiency in the initial absorption, transport, or distribution of these elements impinges on plant structure and metabolic integrity. The potential effect of space flight and microgravity conditions on the efficiency of these processes is unclear. Limitations in the available quantity of space-grown plant material and in the sensitivity of routine analytical techniques have made an evaluation of these processes impractical. However, the recent introduction of new plant cultivation methodologies supporting the application of radionuclide elements and subsequent autoradiography techniques provides a highly sensitive investigative approach amenable to space flight studies. Experiments involving the use of gel-based 'nutrient packs' and the radionuclides calcium-45 and iron-59 were conducted on Shuttle mission STS-94. Uptake rates of the radionuclides in ground and flight plant material appeared comparable.

  6. Space base laser torque applied on LEO satellites of various geometries at satellite’s closest approach

    Directory of Open Access Journals (Sweden)

    N.S. Khalifa

    2013-12-01

    Full Text Available In light of the use of laser power in space applications, the motivation of this paper is to use a space-based solar-pumped laser to produce a torque on LEO satellites of various shapes. It is assumed that a space station fires a laser beam toward the satellite, so that beam spreading due to diffraction is the dominant effect on laser beam propagation. The laser torque is calculated at the point of closest approach between the space station and several sun-synchronous low Earth orbit cubesats. The numerical application shows that space-based laser torque makes a significant contribution for LEO cubesats. It has a maximum value on the order of 10⁻⁸ N·m, comparable with the torque due to the residual magnetic moment, and a minimum value on the order of 10⁻¹¹ N·m, comparable with the aerodynamic and gravity gradient torques. Consequently, space-based laser torque can be used as part of an active attitude control system.

  7. Hybrid Enhanced Epidermal SpaceSuit Design Approaches

    Science.gov (United States)

    Jessup, Joseph M.

    A space suit that does not rely on gas pressurization is a multi-faceted problem that requires major stability controls to be incorporated during design and construction. The Hybrid Enhanced Epidermal (HEE) space suit concept integrates evolved human anthropomorphic and physiological adaptations into its functionality, using commercially available bio-medical technologies to address the shortcomings of conventional gas pressure suits and the impracticalities of mechanical counter-pressure (MCP) suits. The prototype HEE space suit explored integumentary homeostasis, thermal control and mobility using advanced bio-medical materials technology and construction concepts. The goal was a space suit that functions as an enhanced, multi-functional bio-mimic of the human epidermal layer, working in attunement with the wearer rather than as a separate system. In addressing human physiological requirements for the design and construction of the HEE suit, testing regimes were devised and integrated into the prototype, which was then subjected to a series of detailed tests using both anatomical reproduction methods and human subjects.

  8. A perturbative approach to the redshift space power spectrum: beyond the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Bose, Benjamin; Koyama, Kazuya, E-mail: benjamin.bose@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk [Institute of Cosmology and Gravitation, University of Portsmouth, Burnaby Road, Portsmouth, Hampshire, PO1 3FX (United Kingdom)

    2016-08-01

    We develop a code to produce the power spectrum in redshift space based on standard perturbation theory (SPT) at 1-loop order. The code can be applied to a wide range of modified gravity and dark energy models using a recently proposed numerical method by A. Taruya to find the SPT kernels. This includes Horndeski's theory with a general potential, which accommodates both chameleon and Vainshtein screening mechanisms and provides a non-linear extension of the effective theory of dark energy up to third order. We focus on a recent non-linear model of the redshift space power spectrum which has been shown to model the anisotropy very well at scales relevant for the SPT framework, as well as to capture non-linear effects typical of modified gravity theories. We provide consistency checks of the code against established results and elucidate its application in the light of upcoming high-precision RSD data.

  9. [Optimize dropping process of Ginkgo biloba dropping pills by using design space approach].

    Science.gov (United States)

    Shen, Ji-Chen; Wang, Qing-Qing; Chen, An; Pan, Fang-Lai; Gong, Xing-Chu; Qu, Hai-Bin

    2017-07-01

    In this paper, a design space approach was applied to optimize the dropping process of Ginkgo biloba dropping pills. Firstly, potential critical process parameters and potential process critical quality attributes were determined through literature research and pre-experiments. Secondly, experiments were carried out according to Box-Behnken design. Then the critical process parameters and critical quality attributes were determined based on the experimental results. Thirdly, second-order polynomial models were used to describe the quantitative relationships between critical process parameters and critical quality attributes. Finally, a probability-based design space was calculated and verified. The verification results showed that efficient production of Ginkgo biloba dropping pills can be guaranteed by operating within the design space parameters. The recommended operation ranges for the critical dropping process parameters of Ginkgo biloba dropping pills were as follows: dropping distance of 5.5-6.7 cm, and dropping speed of 59-60 drops per minute, providing a reference for industrial production of Ginkgo biloba dropping pills. Copyright© by the Chinese Pharmaceutical Association.
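The second-order polynomial models relating critical process parameters to critical quality attributes can be fitted by ordinary least squares. A minimal sketch for two factors (the analogues here would be dropping distance and dropping speed; the data below are synthetic, not from the paper):

```python
import numpy as np

def quad_design(x1, x2):
    """Design matrix for a full second-order model in two factors:
    columns [1, x1, x2, x1^2, x2^2, x1*x2]."""
    return np.column_stack([np.ones_like(x1), x1, x2,
                            x1**2, x2**2, x1 * x2])

def fit_quadratic(x1, x2, y):
    """Least-squares fit of
    y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    beta, *_ = np.linalg.lstsq(quad_design(x1, x2), y, rcond=None)
    return beta
```

A probability-based design space would then be obtained by propagating the uncertainty of the fitted coefficients and mapping where the predicted quality attributes meet their specifications with high probability.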

  10. Scaling Big Data Cleansing

    KAUST Repository

    Khayyat, Zuhair

    2017-07-31

    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to big data scaling. This presents a serious impediment since identifying and repairing dirty data often involves processing huge input datasets, handling sophisticated error discovery approaches and managing huge arbitrary errors. With large datasets, error detection becomes overly expensive and complicated especially when considering user-defined functions. Furthermore, a distinctive algorithm is desired to optimize inequality joins in sophisticated error discovery rather than naïvely parallelizing them. Also, when repairing large errors, their skewed distribution may obstruct effective error repairs. In this dissertation, I present solutions to overcome the above three problems in scaling data cleansing. First, I present BigDansing as a general system to tackle efficiency, scalability, and ease-of-use issues in data cleansing for Big Data. It automatically parallelizes the user's code on top of general-purpose distributed platforms. Its programming interface allows users to express data quality rules independently from the requirements of parallel and distributed environments. Without sacrificing their quality, BigDansing also enables parallel execution of serial repair algorithms by exploiting the graph representation of discovered errors. The experimental results show that BigDansing outperforms existing baselines up to more than two orders of magnitude. Although BigDansing scales cleansing jobs, it still lacks the ability to handle sophisticated error discovery requiring inequality joins. Therefore, I developed IEJoin as an algorithm for fast inequality joins. It is based on sorted arrays and space efficient bit-arrays to reduce the problem's search space. By comparing IEJoin against well-known optimizations, I show that it is more scalable, and several orders of magnitude faster. BigDansing depends on vertex-centric graph systems, i.e., Pregel
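The sort-based idea behind fast inequality joins can be sketched in a heavily simplified form. The toy version below handles a single predicate (emit all pairs with r < s) via one sort plus binary search; the real IEJoin handles pairs of inequality predicates using permutation arrays and space-efficient bit arrays, which this sketch does not attempt:

```python
import bisect

def ineq_join(r_vals, s_vals):
    """All pairs (r, s) with r < s, via one sort plus binary search.
    A single-predicate simplification of sort-based inequality joins."""
    s_sorted = sorted(s_vals)
    pairs = []
    for r in r_vals:
        # bisect_right finds the first s strictly greater than r,
        # so everything from index i onward qualifies.
        i = bisect.bisect_right(s_sorted, r)
        pairs.extend((r, s) for s in s_sorted[i:])
    return pairs
```

Compared with the naive nested loop, each probe costs O(log n) plus output size instead of a full scan of the second input.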

  11. A watershed-scale goals approach to assessing and funding wastewater infrastructure.

    Science.gov (United States)

    Rahm, Brian G; Vedachalam, Sridhar; Shen, Jerry; Woodbury, Peter B; Riha, Susan J

    2013-11-15

    Capital needs during the next twenty years for public wastewater treatment, piping, combined sewer overflow correction, and storm-water management are estimated to be approximately $300 billion for the USA. Financing these needs is a significant challenge, as Federal funding for the Clean Water Act has been reduced by 70% during the last twenty years. There is an urgent need for new approaches to assist states and other decision makers to prioritize wastewater maintenance and improvements. We present a methodology for performing an integrated quantitative watershed-scale goals assessment for sustaining wastewater infrastructure. We applied this methodology to ten watersheds of the Hudson-Mohawk basin in New York State, USA that together are home to more than 2.7 million people, cover 3.5 million hectares, and contain more than 36,000 km of streams. We assembled data on 183 POTWs treating approximately 1.5 million m(3) of wastewater per day. For each watershed, we analyzed eight metrics: Growth Capacity, Capacity Density, Soil Suitability, Violations, Tributary Length Impacted, Tributary Capital Cost, Volume Capital Cost, and Population Capital Cost. These metrics were integrated into three goals for watershed-scale management: Tributary Protection, Urban Development, and Urban-Rural Integration. Our results demonstrate that the methodology can be implemented using widely available data, although some verification of data is required. Furthermore, we demonstrate substantial differences in character, need, and the appropriateness of different management strategies among the ten watersheds. These results suggest that it is feasible to perform watershed-scale goals assessment to augment existing approaches to wastewater infrastructure analysis and planning. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. A convex optimization approach for solving large scale linear systems

    Directory of Open Access Journals (Sweden)

    Debora Cores

    2017-01-01

    Full Text Available The well-known Conjugate Gradient (CG) method minimizes a strictly convex quadratic function to solve large-scale linear systems of equations when the coefficient matrix is symmetric and positive definite. In this work we present and analyze a non-quadratic convex function for solving any large-scale linear system of equations, regardless of the characteristics of the coefficient matrix. To find the global minimizers of this new convex function, any low-cost iterative optimization technique could be applied. In particular, we propose the low-cost, globally convergent Spectral Projected Gradient (SPG) method, which allows us to extend this optimization approach to consistent square and rectangular linear systems, as well as linear feasibility problems, with and without convex constraints and with and without preconditioning strategies. Our numerical results indicate that the new scheme outperforms state-of-the-art iterative techniques for solving linear systems when the symmetric part of the coefficient matrix is indefinite, and also for solving linear feasibility problems.
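The paper's specific convex function is not reproduced here, but the general recipe, minimizing a smooth non-quadratic convex merit of the residual, can be sketched with a pseudo-Huber-style merit and plain fixed-step gradient descent standing in for the SPG method (both the merit function and the step rule are illustrative assumptions):

```python
import numpy as np

def solve_via_convex_merit(A, b, step=0.015, iters=5000):
    """Solve A x = b by minimizing the non-quadratic convex merit
    f(x) = sum_i (sqrt(1 + r_i^2) - 1), with r = A x - b, using
    fixed-step gradient descent (a simplified stand-in for SPG)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        r = A @ x - b
        # Gradient of the merit: A^T (r / sqrt(1 + r^2)).
        grad = A.T @ (r / np.sqrt(1.0 + r**2))
        x -= step * grad
    return x
```

Note the merit does not require A to be symmetric or positive definite, which is the point of moving beyond the CG quadratic; the fixed step here must simply be small relative to the largest singular value of A squared.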

  13. Hybrid approaches to nanometer-scale patterning: Exploiting tailored intermolecular interactions

    International Nuclear Information System (INIS)

    Mullen, Thomas J.; Srinivasan, Charan; Shuster, Mitchell J.; Horn, Mark W.; Andrews, Anne M.; Weiss, Paul S.

    2008-01-01

    In this perspective, we explore hybrid approaches to nanometer-scale patterning, where the precision of molecular self-assembly is combined with the sophistication and fidelity of lithography. Two areas - improving existing lithographic techniques through self-assembly and fabricating chemically patterned surfaces - will be discussed in terms of their advantages, limitations, applications, and future outlook. The creation of such chemical patterns enables new capabilities, including the assembly of biospecific surfaces to be recognized by, and to capture analytes from, complex mixtures. Finally, we speculate on the potential impact and upcoming challenges of these hybrid strategies.

  14. An abstract approach to music.

    Energy Technology Data Exchange (ETDEWEB)

    Kaper, H. G.; Tipei, S.

    1999-04-19

    In this article we have outlined a formal framework for an abstract approach to music and music composition. The model is formulated in terms of objects that have attributes, obey relationships, and are subject to certain well-defined operations. The motivation for this approach uses traditional terms and concepts of music theory, but the approach itself is formal and uses the language of mathematics. The universal object is an audio wave; partials, sounds, and compositions are special objects, which are placed in a hierarchical order based on time scales. The objects have both static and dynamic attributes. When we realize a composition, we assign values to each of its attributes: a (scalar) value to a static attribute, an envelope and a size to a dynamic attribute. A composition is then a trajectory in the space of aural events, and the complex audio wave is its formal representation. Sounds are fibers in the space of aural events, from which the composer weaves the trajectory of a composition. Each sound object in turn is made up of partials, which are the elementary building blocks of any music composition. The partials evolve on the fastest time scale in the hierarchy of partials, sounds, and compositions. The ideas outlined in this article are being implemented in a digital instrument for additive sound synthesis and in software for music composition. A demonstration of some preliminary results has been submitted by the authors for presentation at the conference.
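The hierarchy described above (partials inside sounds inside a composition, each with static and dynamic attributes, realized as an audio wave) might be sketched as follows; the class and method names are illustrative, not the authors' implementation:

```python
import numpy as np

class Partial:
    """Elementary building block: a sine at a fixed frequency (static
    attribute) shaped by an amplitude envelope (dynamic attribute)."""
    def __init__(self, freq, envelope=lambda t: np.ones_like(t)):
        self.freq, self.envelope = freq, envelope
    def render(self, t):
        return self.envelope(t) * np.sin(2 * np.pi * self.freq * t)

class Sound:
    """A sound as a bundle of partials (a 'fiber' in aural-event space)."""
    def __init__(self, partials):
        self.partials = partials
    def render(self, t):
        return sum(p.render(t) for p in self.partials)

class Composition:
    """A trajectory woven from sounds placed in time; its formal
    representation is the summed audio wave."""
    def __init__(self, events):
        self.events = events  # list of (start_time, Sound)
    def render(self, duration, sr=8000):
        t = np.arange(int(duration * sr)) / sr
        wave = np.zeros_like(t)
        for start, sound in self.events:
            mask = t >= start
            wave[mask] += sound.render(t[mask] - start)
        return wave
```

Realizing a composition then amounts to assigning values to each attribute (a scalar for a static attribute, an envelope for a dynamic one) and rendering the resulting wave.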

  15. Biodiversity conservation in Swedish forests: ways forward for a 30-year-old multi-scaled approach.

    Science.gov (United States)

    Gustafsson, Lena; Perhans, Karin

    2010-12-01

    A multi-scaled model for biodiversity conservation in forests was introduced in Sweden 30 years ago, which makes it a pioneer example of an integrated ecosystem approach. Trees are set aside for biodiversity purposes at multiple scale levels varying from individual trees to areas of thousands of hectares, with landowner responsibility at the lowest level and with increasing state involvement at higher levels. Ecological theory supports the multi-scaled approach, and retention efforts at every harvest occasion stimulate landowners' interest in conservation. We argue that the model has large advantages but that in a future with intensified forestry and global warming, development based on more progressive thinking is necessary to maintain and increase biodiversity. Suggestions for the future include joint planning for several forest owners, consideration of cost-effectiveness, accepting opportunistic work models, adjusting retention levels to stand and landscape composition, introduction of temporary reserves, creation of "receiver habitats" for species escaping climate change, and protection of young forests.

  16. Urban Space as the Commons - New Modes for Urban Space Management

    Science.gov (United States)

    Ondrejicka, Vladimir; Finka, Maros; Husar, Milan; Jamecny, Lubomir

    2017-12-01

    Rapidly growing urban populations, the globalization of social-ecological systems, the fuzzification of spatial structures, and the diversity of actors in spatial development, together with their power and their interest in using resources including space, put particular pressure on high-density urban areas. Spatial development is connected with a high concentration of economic activities and population in urban systems. In many cases, very rapid processes of urbanization and suburbanization approach natural spatial/territorial limits, such as the carrying capacity of land, transport and infrastructural systems, the absorption capacities of recipients, and others [1]. A growing shortage of space and problems with its accessibility (physical, functional, etc.) lead to growing tension and conflicts among the actors/users of urban spaces and represent the initial phase of space-deprivation processes. There is a parallel with the “tragedy of the commons” as defined by Hardin [2] and reinterpreted by many other academics and researchers. Urban space can clearly be interpreted as a commons, or common good, for its community of users and relevant actors, so innovative governance modes addressing the “tragedy of the commons” represent a possible approach to a new concept of urban public space management. This paper presents a possible new approach to the management of urban spaces reflecting the current challenges in spatial development, based on the theory of the commons and innovative governance modes. The new approach is built on innovations in institutional regimes, in decision-making algorithms, and in the economic expression and interpretation of the quality of space. The theory of the commons underlying this approach has been broadly proved in practice, and Elinor Ostrom, the author of this theory [3-5], was awarded the Nobel Prize in 2009.

  17. Simple scaling for faster tracking simulation in accelerator multiparticle dynamics

    International Nuclear Information System (INIS)

    MacLachlan, J.A.

    2001-01-01

    Macroparticle tracking is a direct and attractive approach to following the evolution of a phase space distribution. When the particles interact through short range wake fields or when inter-particle force is included, calculations of this kind require a large number of macroparticles. It is possible to reduce both the number of macroparticles required and the number of tracking steps per unit simulated time by employing a simple scaling which can be inferred directly from the single-particle equations of motion. In many cases of practical importance the speed of calculation improves with the fourth power of the scaling constant. Scaling has been implemented in an existing longitudinal tracking code; early experience supports the concept and promises major time savings. Limitations on the scaling are discussed

  18. Building HIA approaches into strategies for green space use: an example from Plymouth's (UK) Stepping Stones to Nature project.

    Science.gov (United States)

    Richardson, J; Goss, Z; Pratt, A; Sharman, J; Tighe, M

    2013-12-01

    The health and well-being benefits of access to green space are well documented. Research suggests positive findings regardless of social group, however barriers exist that limit access to green space, including proximity, geography and differing social conditions. Current public health policy aims to broaden the range of environmental public health interventions through effective partnership working, providing opportunities to work across agencies to promote the use of green space. Health Impact Assessment (HIA) is a combination of methods and procedures to assess the potential health and well-being impacts of policies, developments and projects. It provides a means by which negative impacts can be mitigated and positive impacts can be enhanced, and has potential application for assessing green space use. This paper describes the application of a HIA approach to a multi-agency project (Stepping Stones to Nature--SS2N) in the UK designed to improve local green spaces and facilitate green space use in areas classified as having high levels of deprivation. The findings suggest that the SS2N project had the potential to provide significant positive benefits in the areas of physical activity, mental and social well-being. Specific findings for one locality identified a range of actions that could be taken to enhance benefits, and mitigate negative factors such as anti-social behaviour. The HIA approach proved to be a valuable process through which impacts of a community development/public health project could be enhanced and negative impacts prevented at an early stage; it illustrates how a HIA approach could enhance multi-agency working to promote health and well-being in communities.

  19. Wilson flow and scale setting from lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Bornyakov, V.G. [Institute for High Energy Physics, Protvino (Russian Federation); Institute of Theoretical and Experimental Physics, Moscow (Russian Federation); Far Eastern Federal Univ., Vladivostok (Russian Federation). School of Biomedicine; Horsley, R. [Edinburgh Univ. (United Kingdom). School of Physics and Astronomy; Hudspith, R. [York Univ., Toronto, ON (Canada). Dept. of Mathematics and Statistics; Collaboration: QCDSF-UKQCD Collaboration; and others

    2015-08-15

    We give a determination of the phenomenological value of the Wilson (or gradient) flow scales t_0 and w_0 for 2+1 flavours of dynamical quarks. The simulations are performed keeping the average quark mass constant, which allows the approach to the physical point to be made in a controlled manner. O(a)-improved clover fermions are used, and together with four lattice spacings this allows the continuum extrapolation to be taken.

  20. A Multi-scale, Multi-disciplinary Approach for Assessing the Technological, Economic, and Environmental Performance of Bio-based Chemicals

    DEFF Research Database (Denmark)

    Herrgard, Markus; Sukumara, Sumesh; Campodonico Alt, Miguel Angel

    2015-01-01

    , the Multi-scale framework for Sustainable Industrial Chemicals (MuSIC) was introduced to address this issue by integrating modelling approaches at different scales ranging from cellular to ecological scales. This framework can be further extended by incorporating modelling of the petrochemical value chain...... towards a sustainable chemical industry....

  1. Multi-scale approach for predicting fish species distributions across coral reef seascapes.

    Directory of Open Access Journals (Sweden)

    Simon J Pittman

    Full Text Available Two of the major limitations to effective management of coral reef ecosystems are a lack of information on the spatial distribution of marine species and a paucity of data on the interacting environmental variables that drive distributional patterns. Advances in marine remote sensing, together with the novel integration of landscape ecology and advanced niche modelling techniques, provide an unprecedented opportunity to reliably model and map marine species distributions across many kilometres of coral reef ecosystems. We developed a multi-scale approach using three-dimensional seafloor morphology and across-shelf location to predict spatial distributions for five common Caribbean fish species. Seascape topography was quantified from high resolution bathymetry at five spatial scales (5-300 m radii) surrounding fish survey sites. Model performance and map accuracy were assessed for two high-performing machine-learning algorithms: Boosted Regression Trees (BRT) and Maximum Entropy Species Distribution Modelling (MaxEnt). The three most important predictors were geographical location across the shelf, followed by a measure of topographic complexity. Predictor contribution differed among species, yet rarely changed across spatial scales. BRT provided 'outstanding' model predictions (AUC > 0.9) for three of five fish species. MaxEnt provided 'outstanding' model predictions for two of five species, with the remaining three models considered 'excellent' (AUC = 0.8-0.9). In contrast, MaxEnt spatial predictions were markedly more accurate (92% map accuracy) than BRT (68% map accuracy). We demonstrate that reliable spatial predictions for a range of key fish species can be achieved by modelling the interaction between the geographical location across the shelf and the topographic heterogeneity of seafloor structure. This multi-scale, analytic approach is an important new cost-effective tool to accurately delineate essential fish habitat and support
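
    The AUC bands cited above ('outstanding' above 0.9, 'excellent' 0.8-0.9) refer to the area under the ROC curve, which can be computed directly from its rank (Mann-Whitney) formulation; this toy sketch is illustrative and is not the authors' code:

```python
def auc(scores_present, scores_absent):
    """Probability that a random presence site outranks a random absence
    site: the Mann-Whitney formulation of the area under the ROC curve."""
    wins = 0.0
    for p in scores_present:
        for a in scores_absent:
            if p > a:
                wins += 1.0
            elif p == a:
                wins += 0.5  # ties count half
    return wins / (len(scores_present) * len(scores_absent))

# Hypothetical model scores at surveyed presence vs. absence sites.
present = [0.9, 0.8, 0.75, 0.6]
absent = [0.5, 0.4, 0.3, 0.65]
print(auc(present, absent))  # → 0.9375, 'outstanding' by the convention above
```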

  2. The Universal Patient Centredness Questionnaire: scaling approaches to reduce positive skew

    Directory of Open Access Journals (Sweden)

    Bjertnaes O

    2016-11-01

    Full Text Available Oyvind Bjertnaes, Hilde Hestad Iversen, Andrew M Garratt Unit for Patient-Reported Quality, Norwegian Institute of Public Health, Oslo, Norway Purpose: Surveys of patients’ experiences typically show results that are indicative of positive experiences. Unbalanced response scales have reduced positive skew for responses to items within the Universal Patient Centeredness Questionnaire (UPC-Q). The objective of this study was to compare the unbalanced response scale with another unbalanced approach to scaling to assess whether the positive skew might be further reduced. Patients and methods: The UPC-Q was included in a patient experience survey conducted at the ward level at six hospitals in Norway in 2015. The postal survey included two reminders to nonrespondents. For patients in the first month of inclusion, UPC-Q items had standard scaling: poor, fairly good, good, very good, and excellent. For patients in the second month, the scaling was more positive: poor, good, very good, exceptionally good, and excellent. The effect of scaling on UPC-Q scores was tested with independent samples t-tests and multilevel linear regression analysis, the latter controlling for the hierarchical structure of data and known predictors of patient-reported experiences. Results: The response rate was 54.6% (n=4,970). Significantly lower scores were found for all items of the more positively worded scale: the UPC-Q total score difference was 7.9 (P<0.001), on a scale from 0 to 100 where 100 is the best possible score. Differences between the four items of the UPC-Q ranged from 7.1 (P<0.001) to 10.4 (P<0.001). Multivariate multilevel regression analysis confirmed the difference between the response groups after controlling for other background variables; the UPC-Q total score difference estimate was 8.3 (P<0.001). Conclusion: The more positively worded scaling significantly lowered the mean scores, potentially increasing the sensitivity of the UPC-Q to identify differences over
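
    The group comparison reported above rests on independent-samples t-tests. A minimal sketch of the Welch form of the statistic, with made-up 0-100 scores standing in for the study's data:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples (no equal-variance
    assumption): difference of means over its standard error."""
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    var_a = sum((x - mean_a) ** 2 for x in a) / (len(a) - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (len(b) - 1)
    se = math.sqrt(var_a / len(a) + var_b / len(b))
    return (mean_a - mean_b) / se

# Hypothetical UPC-Q total scores under the two response scalings.
standard = [82, 75, 90, 68, 85, 79]
more_positive = [70, 66, 81, 60, 74, 73]
t = welch_t(standard, more_positive)
print(round(t, 2))  # positive t: the standard scaling yields higher scores
```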

  3. Integrated simulation of continuous-scale and discrete-scale radiative transfer in metal foams

    Science.gov (United States)

    Xia, Xin-Lin; Li, Yang; Sun, Chuang; Ai, Qing; Tan, He-Ping

    2018-06-01

    A novel integrated simulation of radiative transfer in metal foams is presented. It integrates the continuous-scale simulation with the direct discrete-scale simulation in a single computational domain. It relies on the coupling of the real discrete-scale foam geometry with the equivalent continuous-scale medium through a specially defined scale-coupled zone. This zone holds continuous but nonhomogeneous volumetric radiative properties. The scale-coupled approach is compared to the traditional continuous-scale approach using volumetric radiative properties in the equivalent participating medium and to the direct discrete-scale approach employing the real 3D foam geometry obtained by computed tomography. All the analyses are based on geometrical optics. The Monte Carlo ray-tracing procedure is used for computations of the absorbed radiative fluxes and the apparent radiative behaviors of metal foams. The results obtained by the three approaches are in good agreement. The scale-coupled approach is fully validated in calculating the apparent radiative behaviors of metal foams composed of very absorbing to very reflective struts, and of very rough to very smooth struts. This new approach leads to a reduction in computational time by approximately one order of magnitude compared to the direct discrete-scale approach. Meanwhile, it can offer information on local geometry-dependent features and, at the same time, the equivalent features in an integrated simulation. This new approach promises to combine the advantages of the continuous-scale approach (rapid calculations) and the direct discrete-scale approach (accurate prediction of local radiative quantities).
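
    As a hedged illustration of the Monte Carlo ray-tracing idea (reduced here to a homogeneous, purely absorbing slab, far simpler than the foam geometries in the study), transmittance can be estimated by sampling free paths against the Beer-Lambert law:

```python
import math
import random

def mc_transmittance(kappa, thickness, n_rays=100_000, seed=1):
    """Fraction of rays crossing a purely absorbing slab: each ray's free
    path is sampled as -ln(U)/kappa and compared with the slab thickness."""
    rng = random.Random(seed)
    passed = sum(
        1 for _ in range(n_rays)
        if -math.log(rng.random()) / kappa > thickness
    )
    return passed / n_rays

kappa, depth = 2.0, 0.5            # absorption coefficient [1/m], slab depth [m]
estimate = mc_transmittance(kappa, depth)
exact = math.exp(-kappa * depth)   # analytic Beer-Lambert transmittance
print(estimate, exact)             # the two agree to Monte Carlo noise
```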

  4. Passive Plasma Contact Mechanisms for Small-Scale Spacecraft

    Science.gov (United States)

    McTernan, Jesse K.

    Small-scale spacecraft represent a paradigm shift in how entities such as academia, industry, engineering firms, and the scientific community operate in space. However, although the paradigm shift produces unique opportunities to build satellites in unique ways for novel missions, there are also significant challenges that must be addressed. This research addresses two of the challenges associated with small-scale spacecraft: 1) the miniaturization of spacecraft and associated instrumentation and 2) the need to transport charge across the spacecraft-environment boundary. As spacecraft decrease in size, constraints on the size, weight, and power of on-board instrumentation increase--potentially limiting the instrument's functionality or ability to integrate with the spacecraft. These constraints drive research into mechanisms or techniques that use little or no power and efficiently utilize existing resources. One limited resource on small-scale spacecraft is outer surface area, which is often covered with solar panels to meet tight power budgets. This same surface area could also be needed for passive neutralization of spacecraft charging. This research explores the use of a transparent, conductive layer on the solar cell coverglass that is electrically connected to spacecraft ground potential. This dual-purpose material facilitates the use of outer surfaces for both energy harvesting of solar photons and passive ion collection. Mission capabilities such as in-situ plasma measurements that were previously infeasible on small-scale platforms become feasible with the use of indium tin oxide-coated solar panel coverglass. We developed test facilities that simulate the space environment in low Earth orbit to test the dual-purpose material and the various applications of this approach. Particularly, this research is in support of two upcoming missions: OSIRIS-3U, by Penn State's Student Space Programs Lab, and MiTEE, by the University of Michigan.
The purpose of

  5. A multi-scale metrics approach to forest fragmentation for Strategic Environmental Impact Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Eunyoung, E-mail: eykim@kei.re.kr [Korea Environment Institute, 215 Jinheungno, Eunpyeong-gu, Seoul 122-706 (Korea, Republic of); Song, Wonkyong, E-mail: wksong79@gmail.com [Suwon Research Institute, 145 Gwanggyo-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do 443-270 (Korea, Republic of); Lee, Dongkun, E-mail: dklee7@snu.ac.kr [Department of Landscape Architecture and Rural System Engineering, Seoul National University, 599 Gwanakro, Gwanak-gu, Seoul 151-921 (Korea, Republic of); Research Institute for Agriculture and Life Sciences, Seoul National University, Seoul 151-921 (Korea, Republic of)

    2013-09-15

    Forests are becoming severely fragmented as a result of land development. South Korea has responded to changing community concerns about environmental issues, and the nation has developed and is extending a broad range of tools for use in environmental management. Although legally mandated environmental compliance requirements in South Korea have been implemented to predict and evaluate the impacts of land-development projects, these legal instruments are often insufficient to assess the subsequent impact of development on the surrounding forests. It is especially difficult to examine impacts on multiple (e.g., regional and local) scales in detail. Forest configuration and size, including forest fragmentation by land development, are considered on a regional scale, whereas forest structure and composition, including biodiversity, are considered on a local scale in the Environmental Impact Assessment process. Recently, the government amended the Environmental Impact Assessment Act, including the SEA, EIA, and small-scale EIA, to require an integrated approach. Therefore, the purpose of this study was to establish an impact assessment system that minimizes the impacts of land development using an approach that is integrated across multiple scales. This study focused on forest fragmentation due to residential development and road construction sites in selected Congestion Restraint Zones (CRZs) in the Greater Seoul Area of South Korea. Based on a review of multiple-scale impacts, this paper integrates models that assess the impacts of land development on forest ecosystems. The applicability of the integrated model for assessing impacts on forest ecosystems through the SEIA process is considered. On a regional scale, it is possible to evaluate the location and size of a land-development project by considering aspects of forest fragmentation, such as the stability of the forest structure and the degree of fragmentation. On a local scale, land-development projects should
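
    Regional-scale fragmentation of the kind discussed above is often summarized with patch-based metrics. A hedged sketch (not the study's model) counting forest patches as 4-connected components on a binary land-cover grid:

```python
def count_patches(grid):
    """Number of 4-connected forest patches (cells == 1) via flood fill."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    patches = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                patches += 1
                stack = [(r, c)]
                while stack:
                    i, j = stack.pop()
                    if 0 <= i < rows and 0 <= j < cols \
                            and grid[i][j] == 1 and not seen[i][j]:
                        seen[i][j] = True
                        stack.extend([(i + 1, j), (i - 1, j),
                                      (i, j + 1), (i, j - 1)])
    return patches

# A new road (row of zeros) splits one forest block into two patches.
before = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
after = [[1, 1, 1], [0, 0, 0], [1, 1, 1]]
print(count_patches(before), count_patches(after))  # → 1 2
```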

  6. LIDAR-based urban metabolism approach to neighbourhood scale energy and carbon emissions modelling

    Energy Technology Data Exchange (ETDEWEB)

    Christen, A. [British Columbia Univ., Vancouver, BC (Canada). Dept. of Geography; Coops, N. [British Columbia Univ., Vancouver, BC (Canada). Dept. of Forest Sciences; Canada Research Chairs, Ottawa, ON (Canada); Kellet, R. [British Columbia Univ., Vancouver, BC (Canada). School of Architecture and Landscape Architecture

    2010-07-01

    A remote sensing technology was used to model neighbourhood scale energy and carbon emissions in a case study set in Vancouver, British Columbia (BC). The study was used to compile and aggregate atmospheric carbon flux, urban form, and energy and emissions data in a replicable neighbourhood-scale approach. The study illustrated methods of integrating diverse emission and uptake processes on a range of scales and resolutions, and benchmarked comparisons of modelled estimates with measured energy consumption data obtained over a 2-year period from a research tower located in the study area. The study evaluated carbon imports, carbon exports and sequestration, and relevant emissions processes. Fossil fuel emissions produced in the neighbourhood were also estimated. The study demonstrated that remote sensing technologies such as LIDAR and multispectral satellite imagery can be an effective means of generating and extracting urban form and land cover data at fine scales. Data from the study were used to develop several emissions reduction and energy conservation scenarios. 6 refs.

  7. Various approaches to the modelling of large scale 3-dimensional circulation in the Ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Shaji, C.; Bahulayan, N.; Rao, A.D.; Dube, S.K.

    In this paper, the three different approaches to the modelling of large scale 3-dimensional flow in the ocean such as the diagnostic, semi-diagnostic (adaptation) and the prognostic are discussed in detail. Three-dimensional solutions are obtained...

  8. Technology Development and Demonstration Concepts for the Space Elevator

    Science.gov (United States)

    Smitherman, David V., Jr.

    2004-01-01

    During the 1990s several discoveries and advances in the development of carbon nanotube (CNT) materials indicated that material strengths many times greater than those of common high-strength composite materials might be possible. Progress in the development of this material led to renewed interest in the space elevator concept for construction of a tether structure from the surface of the Earth through a geostationary orbit (GEO), thus creating a new approach to Earth-to-orbit transportation infrastructures. To investigate this possibility the author, in 1999, managed for NASA a space elevator workshop at the Marshall Space Flight Center to explore the potential feasibility of space elevators in the 21st century, and to identify the critical technologies and demonstration missions needed to make development of space elevators feasible. Since that time, a NASA Institute for Advanced Concepts (NIAC) funded study of the space elevator proposed a concept for a simpler first space elevator system using more near-term technologies. This paper will review some of the latest ideas for space elevator development, the critical technologies required, and some of the ideas proposed for demonstrating the feasibility of full-scale development of an Earth-to-GEO space elevator. Critical technologies include CNT composite materials, wireless power transmission, orbital object avoidance, and large-scale tether deployment and control systems. Numerous paths for technology demonstrations have been proposed utilizing ground experiments, air structures, LEO missions, the space shuttle, the International Space Station, GEO demonstration missions, demonstrations at the lunar L1 or L2 points, and other locations. In conclusion, this paper finds that the most critical technologies for an Earth-to-GEO space elevator include CNT composite materials development and object avoidance technologies, and that lack of successful development of these technologies need not preclude continued development of

  9. Multi-scale model of the ionosphere from the combination of modern space-geodetic satellite techniques - project status and first results

    Science.gov (United States)

    Schmidt, M.; Hugentobler, U.; Jakowski, N.; Dettmering, D.; Liang, W.; Limberger, M.; Wilken, V.; Gerzen, T.; Hoque, M.; Berdermann, J.

    2012-04-01

    Near real-time high resolution and high precision ionosphere models are needed for a large number of applications, e.g. in navigation, positioning, telecommunications or astronautics. Today these ionosphere models are mostly empirical, i.e., based purely on mathematical approaches. In the DFG project 'Multi-scale model of the ionosphere from the combination of modern space-geodetic satellite techniques (MuSIK)' the complex phenomena within the ionosphere are described vertically by combining the Chapman electron density profile with a plasmasphere layer. In order to consider the horizontal and temporal behaviour, the fundamental target parameters of this physics-motivated approach are modelled by series expansions in terms of tensor products of localizing B-spline functions depending on longitude, latitude and time. For testing the procedure the model will be applied to an appropriate region in South America, which covers relevant ionospheric processes and phenomena such as the Equatorial Anomaly. The project connects the expertise of the three project partners, namely Deutsches Geodätisches Forschungsinstitut (DGFI) Munich, the Institute of Astronomical and Physical Geodesy (IAPG) of the Technical University Munich (TUM) and the German Aerospace Center (DLR), Neustrelitz. In this presentation we focus on the current status of the project. In the first year of the project we studied the behaviour of the ionosphere in the test region, we set up appropriate test periods covering high and low solar activity as well as winter and summer, and we started the data collection, analysis, pre-processing and archiving. We partly developed the mathematical-physical modelling approach and performed first computations based on simulated input data. Here we present information on the data coverage for the area and the time periods of our investigations, and we outline challenges of the multi-dimensional mathematical-physical modelling approach. We show first results and discuss problems

  10. A new approach to reduce uncertainties in space radiation cancer risk predictions.

    Directory of Open Access Journals (Sweden)

    Francis A Cucinotta

    Full Text Available The prediction of space radiation induced cancer risk carries large uncertainties, with two of the largest being radiation quality and dose-rate effects. In risk models the ratio of the quality factor (QF) to the dose and dose-rate reduction effectiveness factor (DDREF) parameter is used to scale organ doses for cosmic ray protons and high charge and energy (HZE) particles to a hazard rate for γ-rays derived from human epidemiology data. In previous work, particle track structure concepts were used to formulate a space radiation QF function that is dependent on particle charge number Z and kinetic energy per atomic mass unit, E. QF uncertainties were represented by subjective probability distribution functions (PDF) for the three QF parameters that described its maximum value and shape parameters for the Z and E dependences. Here I report on an analysis of a maximum QF parameter and its uncertainty using mouse tumor induction data. Because experimental data for risks at low doses of γ-rays are highly uncertain, which impacts estimates of maximum values of relative biological effectiveness (RBEmax), I developed an alternate QF model, denoted QFγAcute, where QFs are defined relative to higher acute γ-ray doses (0.5 to 3 Gy). The alternate model reduces the dependence of risk projections on the DDREF; however, a DDREF is still needed for risk estimates for high-energy protons and other primary or secondary sparsely ionizing space radiation components. Risk projections (upper confidence levels (CL)) for space missions show a reduction of about 40% (CL∼50%) using the QFγAcute model compared to the QFs based on RBEmax, and about 25% (CL∼35%) compared to previous estimates. In addition, I discuss how a possible qualitative difference leading to increased tumor lethality for HZE particles compared to low LET radiation and background tumors remains a large uncertainty in risk estimates.
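
    The core scaling described above maps a particle organ dose to a γ-ray-equivalent dose via the ratio QF/DDREF. A schematic illustration; the function name and all numeric values are mine, not the paper's model:

```python
def gamma_equivalent_dose(organ_dose_gy, qf, ddref):
    """Scale a heavy-ion organ dose to the gamma-ray dose whose
    epidemiology-derived hazard rate is applied in the risk model:
    D_eq = D * QF / DDREF."""
    return organ_dose_gy * qf / ddref

# Hypothetical numbers: 0.1 Gy of HZE dose, QF = 20, DDREF = 2.
print(gamma_equivalent_dose(0.1, 20.0, 2.0))  # → 1.0
```

Lowering the effective QF (as the QFγAcute model does) or raising the DDREF both shrink this equivalent dose, which is why the abstract's risk projections drop.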

  11. Concept of Draft International Standard for a Unified Approach to Space Program Quality Assurance

    Science.gov (United States)

    Stryzhak, Y.; Vasilina, V.; Kurbatov, V.

    2002-01-01

    In the absence of a unified approach to guaranteeing space project and product quality, implementation of many international space programs has become a challenge. Globalization of the aerospace industry, and the participation of various international ventures with diverse quality assurance requirements in large international space programs, calls for the urgent generation of unified international standards in this field. To ensure successful fulfillment of space missions, aerospace companies should design and produce reliable and safe products with properties complying with or exceeding the User's (or Customer's) requirements. The quality of products designed or produced by subcontractors (or other suppliers) should also be in compliance with the main user's (customer's) requirements. Implementation of this involved set of unified requirements will be made possible by creating and approving a system (series) of international standards under the generic title Space Product Quality Assurance, based on a system consensus principle. 
Conceptual features of the baseline standard in this system (series) should comprise: - Procedures for ISO 9000, CEN and ECSS requirements adaptation and introduction into space product creation, design, manufacture, testing and operation; - Procedures for quality assurance at initial (design) phases of space programs, with a decision on the end product made based on the principle of independence; - Procedures to arrange incoming inspection of products delivered by subcontractors (including testing, audit of supplier's procedures, review of supplier's documentation), and space product certification; - Procedures to identify materials and primary products applied; - Procedures for quality system audit at the component part, primary product and materials supplier facilities; - Unified procedures to form a list of basic performances to be under configuration management; - Unified procedures to form a list of critical space product components, and unified

  12. Allocating city space to multiple transportation modes: A new modeling approach consistent with the physics of transport

    OpenAIRE

    Gonzales, Eric J.; Geroliminis, Nikolas; Cassidy, Michael J.; Daganzo, Carlos F.

    2008-01-01

    A macroscopic modeling approach is proposed for allocating a city’s road space among competing transport modes. In this approach, a city or neighborhood street network is viewed as a reservoir with aggregated traffic. Taking the number of vehicles (accumulation) in a reservoir as input, we show how one can reliably predict system performance in terms of person and vehicle hours spent in the system and person and vehicle kilometers traveled. The approach is used here to unveil two important ...
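
    The reservoir view above can be sketched with an assumed network-level production function (a macroscopic fundamental diagram); the parabolic shape and all numbers below are illustrative, not from the paper:

```python
def production(n, n_jam=10_000, g_max=2_000.0):
    """Trip production [veh-km/h] at accumulation n: a parabolic MFD that
    peaks at half the jam accumulation and vanishes at 0 and n_jam."""
    x = n / n_jam
    return max(0.0, 4.0 * g_max * x * (1.0 - x))

def performance(n, hours=1.0):
    """Aggregate outputs over a period: (vehicle-hours, vehicle-km)."""
    return n * hours, production(n) * hours

vh, vkm = performance(n=2_500)  # 2,500 vehicles circulating for one hour
print(vh, vkm)  # → 2500.0 1500.0
```

Allocating road space among modes shifts each mode's effective production curve, which is the lever the macroscopic approach evaluates.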

  13. Unraveling The Connectome: Visualizing and Abstracting Large-Scale Connectomics Data

    KAUST Repository

    Al-Awami, Ali K.

    2017-04-30

    volumes interactively. We focus on two aspects: (1) Segmented objects are often toggled on and off by an interactive query, which makes it unfeasible to pre-compute a well-adapted space subdivision. (2) To scale to large data, culling and empty-space skipping must scale with the output size instead of the input volume. Our approach combines the advantages of object- and image-order stages of the empty-space skipping process.

  14. Scalable space-time adaptive simulation tools for computational electrocardiology

    OpenAIRE

    Krause, Dorian; Krause, Rolf

    2013-01-01

    This work is concerned with the development of computational tools for the solution of reaction-diffusion equations from the field of computational electrocardiology. We designed lightweight spatially and space-time adaptive schemes for large-scale parallel simulations. We propose two different adaptive schemes based on locally structured meshes, managed either via a conforming coarse tessellation or a forest of shallow trees. A crucial ingredient of our approach is a non-conforming morta...

  15. Space Station overall management approach for operations

    Science.gov (United States)

    Paules, G.

    1986-01-01

    An Operations Management Concept developed by NASA for its Space Station Program is discussed. The operational goals, themes, and design principles established during program development are summarized. The major operations functions are described, including: space systems operations, user support operations, prelaunch/postlanding operations, logistics support operations, market research, and cost/financial management. Strategic, tactical, and execution levels of operational decision-making are defined.

  16. A Psychosocial Approach to Understanding Underground Spaces

    Directory of Open Access Journals (Sweden)

    Eun H. Lee

    2017-03-01

    Full Text Available With a growing need for usable land in urban areas, subterranean development has been gaining attention. While construction of large underground complexes is not a new concept, our understanding of the various socio-cultural aspects of staying underground is still at an early stage. With the projected emergence of underground built environments, future populations may spend much more of their working, transit, and recreational time in underground spaces. Therefore, it is essential to understand the challenges and advantages of such environments in order to improve the future welfare of users of underground spaces. The current paper discusses various psycho-social aspects of underground spaces, the impact they can have on the culture shared among the occupants, and possible solutions to overcome some of these challenges.

  17. COGNITIVE APPROACH TO THE STEREOTYPICAL PLACEMENT OF WOMEN IN VISUAL ADVERTISING SPACE

    Directory of Open Access Journals (Sweden)

    Simona Amankevičiūtė

    2013-10-01

    Full Text Available This article conceptualizes the image of women in the sexist advertisements of the 1950s and 60s and in current advertising discourse by combining the research traditions of both cognitive linguistics and semiotic image analysis. The aim of the research is to try to evaluate how canonical positionings of women in the hyperreality of advertisements may slip into everyday discourse (stereotype space and to present an interpretation of the creators’ visual lexicon. It is presumed that the traditional (formed by feminist linguists approach to sexist advertising as an expression of an androcentric worldview in culture may be considered too subjectively critical. This study complements an interpretation of women’s social roles in advertising with cognitive linguistic insights on the subject’s (woman’s visualisation and positioning in ad space. The article briefly overviews the feminist approach to women’s place in public discourse, and discusses the relevance of Goffman’s Gender Studies to an investigation of women’s images in advertising. The scholar’s contribution to adapting cognitive frame theory for an investigation of visuals in advertising is also discussed. The analysed ads were divided into three groups by Goffman’s classification, according to the concrete visuals used to represent women’s bodies or parts thereof: dismemberment, commodification, and subordination ritual. The classified stereotypical images of women’s bodies are discussed as visual metonymy, visual metaphor, and image schemas.

  18. Space and energy. [space systems for energy generation, distribution and control

    Science.gov (United States)

    Bekey, I.

    1976-01-01

    Potential contributions of space to energy-related activities are discussed. Advanced concepts presented include worldwide energy distribution to substation-sized users using low-altitude space reflectors; powering large numbers of large aircraft worldwide using laser beams reflected from space mirror complexes; providing night illumination via sunlight-reflecting space mirrors; fine-scale power programming and monitoring in transmission networks by monitoring millions of network points from space; prevention of undetected hijacking of nuclear reactor fuels by space tracking of signals from tagging transmitters on all such materials; and disposal of nuclear power plant radioactive wastes in space.

  19. Elements of a pragmatic approach for dealing with bias and uncertainty in experiments through predictions: experiment design and data conditioning; "real space" model validation and conditioning; hierarchical modeling and extrapolative prediction.

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Vicente Jose

    2011-11-01

    This report explores some important considerations in devising a practical and consistent framework and methodology for utilizing experiments and experimental data to support modeling and prediction. A pragmatic and versatile 'Real Space' approach is outlined for confronting experimental and modeling bias and uncertainty to mitigate risk in modeling and prediction. The elements of experiment design and data analysis, data conditioning, model conditioning, model validation, hierarchical modeling, and extrapolative prediction under uncertainty are examined. An appreciation can be gained for the constraints and difficulties at play in devising a viable end-to-end methodology. Rationale is given for the various choices underlying the Real Space end-to-end approach. The approach adopts and refines some elements and constructs from the literature and adds pivotal new elements and constructs. Crucially, the approach reflects a pragmatism and versatility derived from working many industrial-scale problems involving complex physics and constitutive models, steady-state and time-varying nonlinear behavior and boundary conditions, and various types of uncertainty in experiments and models. The framework benefits from a broad exposure to integrated experimental and modeling activities in the areas of heat transfer, solid and structural mechanics, irradiated electronics, and combustion in fluids and solids.

  20. New directions for space solar power

    Science.gov (United States)

    Mankins, John C.

    2009-07-01

    Several of the central issues associated with the eventual realization of the vision of solar power from space for terrestrial markets revolve around the expected costs associated with the assembly, inspection, maintenance and repair of future solar power satellite (SPS) stations. In past studies (for example, NASA's "Fresh Look Study", c. 1995-1997) efforts were made to reduce both the scale and mass of large, systems-level interfaces (e.g., the power management and distribution (PMAD) system) and on-orbit fixed infrastructures through the use of modular systems strategies. These efforts have had mixed success (as reflected in the projected on-orbit mass of various systems concepts). However, the author remains convinced of the importance of modular strategies for exceptionally large space systems in eventually realizing the vision of power from space. This paper will introduce some of the key issues associated with cost-competitive space solar power in terrestrial markets. It will examine some of the relevant SPS concepts and will assess the 'pros and cons' of each in terms of space assembly, maintenance and servicing (SAMS) requirements. The paper discusses at a high level some relevant concepts and technologies that may play a role in the eventual, successful resolution of these challenges. The paper concludes with an example of the kind of novel architectural approach for space solar power that is needed.

  1. An Architecture, System Engineering, and Acquisition Approach for Space System Software Resiliency

    Science.gov (United States)

    Phillips, Dewanne Marie

    Software intensive space systems can harbor defects and vulnerabilities that may enable external adversaries or malicious insiders to disrupt or disable system functions, risking mission compromise or loss. Mitigating this risk demands a sustained focus on the security and resiliency of the system architecture, including software, hardware, and other components. Robust software engineering practices contribute to the foundation of a resilient system so that the system "can take a hit to a critical component and recover in a known, bounded, and generally acceptable period of time". Software resiliency must be a priority and addressed early in life cycle development to contribute to a secure and dependable space system. Those who develop, implement, and operate software intensive space systems must determine the factors and systems engineering practices to address when investing in software resiliency. This dissertation offers methodical approaches for improving space system resiliency through software architecture design, systems engineering, and increased software security, thereby reducing the risk of latent software defects and vulnerabilities. By paying greater attention to the early life cycle phases of development, we can alter the engineering process to help detect, eliminate, and avoid vulnerabilities before space systems are delivered. To achieve this objective, this dissertation will identify knowledge, techniques, and tools that engineers and managers can utilize to help them recognize how vulnerabilities are produced and discovered so that they can learn to circumvent them in future efforts. We conducted a systematic review of existing architectural practices, standards, security and coding practices, various threats, defects, and vulnerabilities that impact space systems from hundreds of relevant publications and interviews with subject matter experts. We expanded on the system-level body of knowledge for resiliency and identified a new software

  2. The micro-environmental impact of volatile organic compound emissions from large-scale assemblies of people in a confined space

    Energy Technology Data Exchange (ETDEWEB)

    Dutta, Tanushree [Department of Civil & Environmental Engineering, Hanyang University, 222 Wangsimni-Ro, Seoul 04763 (Korea, Republic of); Kim, Ki-Hyun, E-mail: kkim61@hanyang.ac.kr [Department of Civil & Environmental Engineering, Hanyang University, 222 Wangsimni-Ro, Seoul 04763 (Korea, Republic of); Uchimiya, Minori [USDA-ARS Southern Regional Research Center, 1100 Robert E. Lee Boulevard, New Orleans, LA 70124 (United States); Kumar, Pawan [Department of Chemical Engineering, Indian Institute of Technology, Hauz Khas, New Delhi 11016 (India); Das, Subhasish; Bhattacharya, Satya Sundar [Soil & Agro-Bioengineering Lab, Department of Environmental Science, Tezpur University, Napaam 784028 (India); Szulejko, Jan [Department of Civil & Environmental Engineering, Hanyang University, 222 Wangsimni-Ro, Seoul 04763 (Korea, Republic of)

    2016-11-15

    Large-scale assemblies of people in a confined space can exert significant impacts on the local air chemistry due to human emissions of volatile organics. Air-quality variations at such small scales can be studied by quantifying fingerprint volatile organic compounds (VOCs) such as acetone, toluene, and isoprene produced during concerts, movie screenings, and sporting events (such as the Olympics and the World Cup). This review summarizes the extent of VOC accumulation resulting from a large population in a confined area or in a small open area during sporting and other recreational activities. Apart from VOCs emitted directly from human bodies (e.g., perspiration and exhaled breath), those released indirectly from other related sources (e.g., smoking, waste disposal, discharge of food waste, and use of personal-care products) are also discussed. Although direct and indirect emissions of VOCs from humans may constitute <1% of the global atmospheric VOC budget, unique spatiotemporal variations in VOC species within a confined space can have unforeseen impacts on the local atmosphere, leading to acute human exposure to harmful pollutants.

  3. The micro-environmental impact of volatile organic compound emissions from large-scale assemblies of people in a confined space

    International Nuclear Information System (INIS)

    Dutta, Tanushree; Kim, Ki-Hyun; Uchimiya, Minori; Kumar, Pawan; Das, Subhasish; Bhattacharya, Satya Sundar; Szulejko, Jan

    2016-01-01

    Large-scale assemblies of people in a confined space can exert significant impacts on the local air chemistry due to human emissions of volatile organics. Air-quality variations at such small scales can be studied by quantifying fingerprint volatile organic compounds (VOCs) such as acetone, toluene, and isoprene produced during concerts, movie screenings, and sporting events (such as the Olympics and the World Cup). This review summarizes the extent of VOC accumulation resulting from a large population in a confined area or in a small open area during sporting and other recreational activities. Apart from VOCs emitted directly from human bodies (e.g., perspiration and exhaled breath), those released indirectly from other related sources (e.g., smoking, waste disposal, discharge of food waste, and use of personal-care products) are also discussed. Although direct and indirect emissions of VOCs from humans may constitute <1% of the global atmospheric VOC budget, unique spatiotemporal variations in VOC species within a confined space can have unforeseen impacts on the local atmosphere, leading to acute human exposure to harmful pollutants.

  4. Building a quality culture in the Office of Space Flight: Approach, lessons learned and implications for the future

    Science.gov (United States)

    Roberts, C. Shannon

    1992-01-01

    The purpose of this paper is to describe the approach and lessons learned by the Office of Space Flight (OSF), National Aeronautics and Space Administration (NASA), in its introduction of quality. In particular, the experience of OSF Headquarters is discussed as an example of an organization within NASA that is considering both the business and human elements of the change and the opportunities the quality focus presents to improve continuously. It is hoped that the insights shared will be of use to those embarking upon similar cultural changes. The paper is presented in the following parts: the leadership challenge; background; context of the approach to quality; initial steps; current initiatives; lessons learned; and implications for the future.

  5. A large-scale multi-objective flights conflict avoidance approach supporting 4D trajectory operation

    OpenAIRE

    Guan, Xiangmin; Zhang, Xuejun; Lv, Renli; Chen, Jun; Weiszer, Michal

    2017-01-01

    Recently, the long-term conflict avoidance approaches based on large-scale flights scheduling have attracted much attention due to their ability to provide solutions from a global point of view. However, the current approaches which focus only on a single objective with the aim of minimizing the total delay and the number of conflicts, cannot provide the controllers with variety of optional solutions, representing different trade-offs. Furthermore, the flight track error is often overlooked i...

  6. Microtomography and pore-scale modeling of two-phase Fluid Distribution

    Energy Technology Data Exchange (ETDEWEB)

    Silin, D.; Tomutsa, L.; Benson, S.; Patzek, T.

    2010-10-19

    Synchrotron-based X-ray microtomography (micro CT) at the Advanced Light Source (ALS) beamline 8.3.2 at Lawrence Berkeley National Laboratory produces three-dimensional micron-scale-resolution digital images of the pore space of reservoir rock along with the spatial distribution of the fluids. Pore-scale visualization of carbon dioxide flooding experiments performed at reservoir pressure demonstrates that the injected gas fills some pores and pore clusters, and entirely bypasses others. Using 3D digital images of the pore space as input data, the method of maximal inscribed spheres (MIS) predicts two-phase fluid distribution in capillary equilibrium. Verification against the tomography images shows good agreement between the computed fluid distribution in the pores and the experimental data. The model-predicted capillary pressure curves and tomography-based porosimetry distributions compared favorably with the mercury injection data. Thus, micro CT in combination with MIS-based modeling is a viable approach to studying the pore-scale mechanisms of CO2 injection into an aquifer, as well as more general multi-phase flows.
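    The core of the MIS idea can be sketched in two dimensions with a Euclidean distance transform: the radius of the largest sphere (disk in 2D) inscribed in the pore space and centered at a given pore voxel is simply that voxel's distance to the nearest solid voxel. The snippet below is a minimal illustration on a synthetic binary image, not the authors' code; in the full method the non-wetting phase is additionally restricted to connected pore clusters reachable from the inlet.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Synthetic 2D "pore space": 1 = pore, 0 = solid grain.
img = np.ones((64, 64), dtype=np.uint8)
img[20:44, 20:44] = 0  # one square grain

# Radius of the largest sphere (disk in 2D) inscribed in the pore space and
# centered at each pore pixel = distance to the nearest solid pixel.
radii = distance_transform_edt(img)

# At capillary entry radius r_c, the non-wetting phase can (to first order)
# occupy pore pixels whose inscribed-sphere radius is at least r_c.
r_c = 5.0
nonwetting = radii >= r_c
saturation = nonwetting.sum() / img.sum()  # non-wetting phase saturation
```

    Sweeping `r_c` from large to small traces out a drainage capillary pressure curve, since capillary pressure is inversely proportional to the entry radius.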

  7. Fractal electrodynamics via non-integer dimensional space approach

    Science.gov (United States)

    Tarasov, Vasily E.

    2015-09-01

    Using the recently suggested vector calculus for non-integer dimensional space, we consider electrodynamics problems in the isotropic case. This calculus allows us to describe fractal media in the framework of continuum models with non-integer dimensional space. We consider electric and magnetic fields of fractal media with charges and currents in the framework of continuum models with non-integer dimensional spaces. Applications of the fractal Gauss's law, the fractal Ampere's circuital law, the fractal Poisson equation for the electric potential, and the equation for a fractal stream of charges are suggested. Lorentz invariance and the speed of light in fractal electrodynamics are discussed. An expression for the effective refractive index of non-integer dimensional space is suggested.
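    For orientation, the scaling implied by a Gauss law in a space of non-integer dimension D can be illustrated for a point charge. This is a textbook-style sketch consistent with the dimensional analysis, not the paper's exact equations:

```latex
% Flux of E through a (D-1)-sphere of radius r equals the enclosed charge:
\oint_{S_{D-1}(r)} \mathbf{E} \cdot d\mathbf{A} = \frac{Q}{\varepsilon_0}
\quad\Longrightarrow\quad
E(r) = \frac{Q}{\varepsilon_0\,\sigma(D)\, r^{D-1}},
\qquad
\sigma(D) = \frac{2\pi^{D/2}}{\Gamma(D/2)}
```

    Here σ(D) generalizes the surface-area factor of the unit sphere; setting D = 3 gives σ(3) = 4π and recovers the usual Coulomb 1/r² field.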

  8. Scaling up: Assessing social impacts at the macro-scale

    International Nuclear Information System (INIS)

    Schirmer, Jacki

    2011-01-01

    Social impacts occur at various scales, from the micro-scale of the individual to the macro-scale of the community. Identifying the macro-scale social changes that result from an impacting event is a common goal of social impact assessment (SIA), but it is challenging because multiple factors simultaneously influence social trends at any given time, and there are usually only a small number of cases available for examination. While some methods have been proposed for establishing the contribution of an impacting event to macro-scale social change, they remain relatively untested. This paper critically reviews methods recommended for assessing macro-scale social impacts, and proposes and demonstrates a new approach. The 'scaling up' method involves developing a chain of logic linking change at the individual/site scale to the community scale. It enables a more problematised assessment of the likely contribution of an impacting event to macro-scale social change than previous approaches. The use of this approach in a recent study of change in dairy farming in south-east Australia is described.

  9. Observing the Global Water Cycle from Space

    Science.gov (United States)

    Hildebrand, P. H.

    2004-01-01

    This paper presents an approach to measuring all major components of the water cycle from space. Key elements of the global water cycle are discussed in terms of the storage of water in the ocean, air, cloud and precipitation, in soil, ground water, snow and ice, and in lakes and rivers, and in terms of the global fluxes of water between these reservoirs. Approaches to measuring or otherwise evaluating the global water cycle are presented, and the limitations on known accuracy for many components of the water cycle are discussed, as are the characteristic spatial and temporal scales of the different water cycle components. Using these observational requirements for a global water cycle observing system, an approach to measuring the global water cycle from space is developed. The capabilities of various active and passive microwave instruments are discussed, as is the potential of supporting measurements from other sources. Examples of space observational systems, including TRMM/GPM precipitation measurement, cloud radars, soil moisture, sea surface salinity, temperature and humidity profiling, other measurement approaches and assimilation of the microwave and other data into interpretative computer models are discussed to develop the observational possibilities. The selection of orbits is then addressed, since orbit selection and antenna size/beamwidth considerations determine the sampling characteristics for satellite measurement systems. These considerations dictate a particular set of measurement possibilities, which are then matched to the observational sampling requirements based on the science. The results define a network of satellite instrumentation systems, many in low Earth orbit, a few in geostationary orbit, and all tied together through a sampling network that feeds the observations into a data-assimilative computer model.

  10. A Web Based Approach to Integrate Space Culture and Education

    Science.gov (United States)

    Gerla, F.

    2002-01-01

    , who can use it to prepare their lessons, retrieve information, and organize the didactic material to support their lessons. We consider it important to use a user-centered "psychology" based on UM: we have to know the needs and expectations of the students. Our intent is to use usability tests not just to prove the site's effectiveness and clarity, but also to investigate the aesthetic preferences of children and young people. Physics, mathematics, and chemistry are just some of the difficult learning fields connected with space technologies. Space culture is a potentially never-ending field, and our aim will be to lead students by the hand through this universe of knowledge. This paper will present MARS activities in the framework of the above methodologies, aimed at implementing a web-based approach to integrating space culture and education. The activities are already in progress, and some results will be presented in the final paper.

  11. The ASLOTS concept: An interactive, adaptive decision support concept for Final Approach Spacing of Aircraft (FASA). FAA-NASA Joint University Program

    Science.gov (United States)

    Simpson, Robert W.

    1993-01-01

    This presentation outlines a concept for an adaptive, interactive decision support system to assist controllers at a busy airport in achieving efficient use of multiple runways. The concept is being implemented as a computer code called FASA (Final Approach Spacing for Aircraft), and will be tested and demonstrated in ATCSIM, a high fidelity simulation of terminal area airspace and airport surface operations. Objectives are: (1) to provide automated cues to assist controllers in the sequencing and spacing of landing and takeoff aircraft; (2) to provide the controller with a limited ability to modify the sequence and spacings between aircraft, and to insert takeoffs and missed approach aircraft in the landing flows; (3) to increase spacing accuracy using more complex and precise separation criteria while reducing controller workload; and (4) to achieve higher operational takeoff and landing rates on multiple runways in poor visibility.
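    The spacing computation behind such sequencing cues can be sketched as a single forward pass over the landing sequence: each aircraft is scheduled no earlier than its own ETA, and no closer behind its leader than a wake-class separation time. All values below are illustrative placeholders, not actual separation standards or FASA's algorithm.

```python
# Wake-separation matrix (seconds): (leader class, follower class) -> spacing.
# Values are illustrative placeholders, not actual separation standards.
sep = {("H", "H"): 96, ("H", "S"): 157, ("S", "H"): 60, ("S", "S"): 82}

# (identifier, wake class, earliest landing time / ETA in seconds)
arrivals = [("AC1", "H", 0), ("AC2", "S", 30), ("AC3", "S", 90), ("AC4", "H", 100)]

times = {}
prev_class, prev_time = None, None
for ident, wclass, eta in arrivals:
    t = eta
    if prev_class is not None:
        # No earlier than the required spacing behind the leader.
        t = max(t, prev_time + sep[(prev_class, wclass)])
    times[ident] = t
    prev_class, prev_time = wclass, t
```

    A decision-support tool like FASA would then let the controller reorder this sequence or insert departures, re-running the same forward pass to show the resulting landing times.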

  12. A multiscale analytical approach for bone remodeling simulations : linking scales from collagen to trabeculae

    NARCIS (Netherlands)

    Colloca, M.; Blanchard, R.; Hellmich, C.; Ito, K.; Rietbergen, van B.

    2014-01-01

    Bone is a dynamic and hierarchical porous material whose spatial and temporal mechanical properties can vary considerably due to differences in its microstructure and due to remodeling. Hence, a multiscale analytical approach, which combines bone structural information at multiple scales to the

  13. Parallel Auxiliary Space AMG Solver for $H(div)$ Problems

    Energy Technology Data Exchange (ETDEWEB)

    Kolev, Tzanio V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Vassilevski, Panayot S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2012-12-18

    We present a family of scalable preconditioners for matrices arising in the discretization of $H(div)$ problems using the lowest order Raviart-Thomas finite elements. Our approach belongs to the class of "auxiliary space"-based methods and requires only the finite element stiffness matrix plus some minimal additional discretization information about the topology and orientation of mesh entities. Also, we provide a detailed algebraic description of the theory, parallel implementation, and different variants of this parallel auxiliary space divergence solver (ADS) and discuss its relations to the Hiptmair-Xu (HX) auxiliary space decomposition of $H(div)$ [SIAM J. Numer. Anal., 45 (2007), pp. 2483-2509] and to the auxiliary space Maxwell solver AMS [J. Comput. Math., 27 (2009), pp. 604-623]. Finally, an extensive set of numerical experiments demonstrates the robustness and scalability of our implementation on large-scale $H(div)$ problems with large jumps in the material coefficients.
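    The general pattern here, a Krylov iteration wrapped around a cheap preconditioner, can be illustrated with SciPy's conjugate gradient and a simple Jacobi (diagonal) preconditioner on a 1D Poisson matrix standing in for a discretized operator. This is only a generic sketch of preconditioned solving, not the ADS algorithm itself.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# 1D Poisson stiffness matrix as a stand-in for a discretized operator.
n = 200
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi (diagonal) preconditioner: the crudest "auxiliary" operator.
d = A.diagonal()
M = LinearOperator((n, n), matvec=lambda r: r / d)

x, info = cg(A, b, M=M, maxiter=5000)  # info == 0 on convergence
```

    Auxiliary-space methods such as ADS replace the diagonal `M` with solves on auxiliary (nodal) spaces connected to the original space by transfer operators, which is what makes the iteration counts essentially mesh-independent.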

  14. Scale and legacy controls on catchment nutrient export regimes

    Science.gov (United States)

    Howden, N. J. K.; Burt, T.; Worrall, F.

    2017-12-01

    Nutrient dynamics in river catchments are complex: water and chemical fluxes are highly variable in low-order streams, but this variability declines as fluxes move through higher-order reaches. This poses a major challenge for process understanding as much effort is focussed on long-term monitoring of the main river channel (a high-order reach), and therefore the data available to support process understanding are predominantly derived from sites where much of the transient response of nutrient export is masked by the effect of averaging over both space and time. This may be further exacerbated at all scales by the accumulation of legacy nutrient sources in soils, aquifers and pore waters, where historical activities have led to nutrient accumulation where the catchment system is transport limited. Therefore it is of particular interest to investigate how the variability of nutrient export changes both with catchment scale (from low to high-order catchment streams) and with the presence of legacy sources, such that the context of infrequent monitoring on high-order streams can be better understood. This is not only a question of characterising nutrient export regimes per se, but also developing a more thorough understanding of how the concepts of scale and legacy may modify the statistical characteristics of observed responses across scales in both space and time. In this paper, we use synthetic data series and develop a model approach to consider how space and timescales combine with impacts of legacy sources to influence observed variability in catchment export. We find that: increasing space and timescales tend to reduce the observed variance in nutrient exports, due to an increase in travel times and greater mixing, and therefore averaging, of sources; increasing the influence of legacy sources inflates the variance, with the level of inflation dictated by the residence time of the respective sources.
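    The two effects described, variance damping by mixing across subcatchments and variance inflation by slowly released legacy stores, can be reproduced with a toy synthetic series (all parameters arbitrary, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_t = 50, 2000  # subcatchments, time steps

# Independent low-order subcatchment export signals (arbitrary units).
low_order = rng.normal(loc=10.0, scale=3.0, size=(n_sub, n_t))

# High-order reach: mixing (averaging) across subcatchments damps variance.
high_order = low_order.mean(axis=0)

# Legacy store: a slow, persistent release superimposed on the mixed signal
# inflates the observed variance again.
legacy = 0.002 * np.arange(n_t)
with_legacy = high_order + legacy

print(low_order[0].var(), high_order.var(), with_legacy.var())
```

    With independent subcatchments the mixed variance falls roughly as 1/n, while even a weak persistent legacy trend can dominate the variance of the high-order record, mimicking the inflation the authors attribute to long-residence-time sources.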

  15. Things That Squeak and Make You Feel Bad: Building Scalable User Experience Programs for Space Assessment

    Directory of Open Access Journals (Sweden)

    Rebecca Kuglitsch

    2018-04-01

    This article suggests a process for creating a user experience (UX) space assessment program that requires limited resources and minimal prior UX experience. By beginning with small-scale methods, like comment boxes and easel prompts, librarians can overturn false assumptions about user behaviors, ground deeper investigations such as focus groups, and generate momentum. At the same time, these methods should feed into larger efforts to build trust and interest with peers and administration, laying the groundwork for more in-depth space UX assessment and more significant changes. The process and approach we suggest can be scaled for use in both large and small library systems. Developing a user experience space assessment program can seem overwhelming, especially without a dedicated user experience librarian or department, but it does not have to be. In this piece, we explore how to scale and sequence small UX projects, communicate UX practices and results to stakeholders, and build support in order to develop an intentional but still manageable space assessment program. Our approach takes advantage of our institutional context (a large academic library system with several branch locations), allowing us to pilot projects at different scales. We were able to coordinate across a complex multi-site system, as well as in branch libraries with a staffing model analogous to libraries at smaller institutions. This gives us confidence that our methods can be applied at libraries of different sizes. As subject librarians who served as co-coordinators of a UX team on a voluntary basis, we also confronted the question of how we could attend to user needs while staying on top of our regular workload. Haphazard experimentation is unsatisfying and wasteful, particularly when there is limited time, so we sought to develop a process we could implement that applied approachable, purposeful UX space assessments while building trust and buy-in with colleagues.

  16. The applicability of space imagery to the small-scale topographic mapping of developing countries: A case study — the Sudan

    Science.gov (United States)

    Petrie, G.; El Niweiri, A. E. H.

    After reviewing the current status of topographic mapping in Sudan, the paper considers the possible applications of space imagery to the topographic mapping of the country at 1 : 100,000 scale. A comprehensive series of tests of the geometric accuracy and interpretability of six types of space imagery taken by the Landsat MSS, RBV and TM sensors, the MOMS scanner, the ESA Metric Camera and NASA's Large Format Camera have been conducted over a test area established in the Red Sea Hills area of Sudan, supplemented by further interpretation tests carried out over the area of Khartoum and the Gezira. The results of these tests are given together with those from comparative tests carried out with other images acquired by the same sensors over test areas in developed countries (UK and USA). Further collateral information on topographic mapping at 1 : 100,000 scale from SPOT imagery has been provided by the Ordnance Survey based on its tests and experience in North Yemen. The paper concludes with an analysis of the possibilities of mapping the main (non-equatorial) area of Sudan at 1 : 100,000 scale based on the results of the extensive series of tests reported in the paper and elsewhere. Consideration is also given to the infrastructure required to support such a programme.

  17. Improved regional-scale Brazilian cropping systems' mapping based on a semi-automatic object-based clustering approach

    Science.gov (United States)

    Bellón, Beatriz; Bégué, Agnès; Lo Seen, Danny; Lebourgeois, Valentine; Evangelista, Balbino Antônio; Simões, Margareth; Demonte Ferraz, Rodrigo Peçanha

    2018-06-01

    Cropping systems' maps at fine scale over large areas provide key information for further agricultural production and environmental impact assessments, and thus represent a valuable tool for effective land-use planning. There is, therefore, a growing interest in mapping cropping systems in an operational manner over large areas, and remote sensing approaches based on vegetation index time series analysis have proven to be an efficient tool. However, supervised pixel-based approaches are commonly adopted, requiring resource-consuming field campaigns to gather training data. In this paper, we present a new object-based unsupervised classification approach tested on an annual MODIS 16-day composite Normalized Difference Vegetation Index time series and a Landsat 8 mosaic of the State of Tocantins, Brazil, for the 2014-2015 growing season. Two variants of the approach are compared: a hyperclustering approach, and a landscape-clustering approach involving a prior stratification of the study area into landscape units on which the clustering is then performed. The main cropping systems of Tocantins, characterized by the crop types and cropping patterns, were efficiently mapped with the landscape-clustering approach. Results show that stratification prior to clustering significantly improves the classification accuracies for underrepresented and sparsely distributed cropping systems. This study illustrates the potential of unsupervised classification for large-area cropping systems' mapping and contributes to the development of generic tools for supporting large-scale agricultural monitoring across regions.
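    The unsupervised clustering step can be sketched with synthetic object-level NDVI profiles: single- and double-cropping objects produce one- and two-peak seasonal curves, which k-means separates without any training data. Everything below (profiles, noise level, class count) is illustrative, not the paper's data or pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
n_dates = 23  # roughly one season of MODIS 16-day composites
t = np.linspace(0.0, 1.0, n_dates)

# Synthetic seasonal NDVI profiles: one peak (single crop) vs. two peaks
# (double cropping), plus per-object noise.
single = 0.2 + 0.6 * np.exp(-((t - 0.5) / 0.15) ** 2)
double = 0.2 + 0.5 * (np.exp(-((t - 0.3) / 0.08) ** 2)
                      + np.exp(-((t - 0.7) / 0.08) ** 2))
profiles = np.vstack([
    single + rng.normal(scale=0.03, size=(150, n_dates)),
    double + rng.normal(scale=0.03, size=(150, n_dates)),
])

# Unsupervised clustering of object-level temporal profiles.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
```

    In the paper's landscape-clustering variant, this clustering is run separately within each landscape stratum, which is what improves accuracy for sparsely distributed cropping systems.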

  18. Swamp Works: A New Approach to Develop Space Mining and Resource Extraction Technologies at the National Aeronautics Space Administration (NASA) Kennedy Space Center (KSC)

    Science.gov (United States)

    Mueller, R. P.; Sibille, L.; Leucht, K.; Smith, J. D.; Townsend, I. I.; Nick, A. J.; Schuler, J. M.

    2015-01-01

    environment and methodology, with associated laboratories, that uses lean development methods and creativity-enhancing processes to invent and develop new solutions for space exploration. This paper will discuss the Swamp Works approach to developing space mining and resource extraction systems and the vision of space development it serves. The ultimate goal of the Swamp Works is to expand human civilization into the solar system via the use of local resources. By mining and using local resources in situ, it is conceivable that one day the logistics supply train from Earth can be eliminated, enabling a space-based community to become independent of Earth.

  19. Scaling Consumers' Purchase Involvement: A New Approach

    Directory of Open Access Journals (Sweden)

    Jörg Kraigher-Krainer

    2012-06-01

    A two-dimensional scale, called the ECID Scale, is presented in this paper. The scale is based on a comprehensive model and captures the two antecedent factors of purchase-related involvement, namely whether motivation is intrinsic or extrinsic and whether risk is perceived as low or high. The procedure of scale development and item selection is described. The scale turns out to perform well in terms of validity, reliability, and objectivity despite the use of a small set of items – four each – allowing for simultaneous measurement of up to ten purchases per respondent. The procedure for administering the scale is described so that it can now easily be applied by both scholars and practitioners. Finally, managerial implications of data received from its application, which provide insights into possible strategic marketing conclusions, are discussed.

  20. Space station electrical power distribution analysis using a load flow approach

    Science.gov (United States)

    Emanuel, Ervin M.

    1987-01-01

    The space station's electrical power system will evolve and grow in a manner similar to that of present terrestrial electrical power utilities. The initial baseline reference configuration will contain more than 50 nodes or busses, inverters, transformers, overcurrent protection devices, distribution lines, solar arrays, and/or solar dynamic power generating sources. The system is designed to manage and distribute 75 kW of power, single phase or three phase, at 20 kHz, and to grow to a level of 300 kW steady state, and must be capable of operating at a peak of 450 kW for 5 to 10 min. In order to plan far into the future and keep pace with load growth, a load flow power system analysis approach must be developed and utilized. This method is a well-known energy assessment and management tool that is widely used throughout the electrical power utility industry. The results of a comprehensive evaluation and assessment of an Electrical Distribution System Analysis Program (EDSA) are discussed. Its potential use as an analysis and design tool for the 20 kHz space station electrical power system is addressed.
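    A load flow study answers the question: given the injections at each bus, what power flows on each line? The simplest member of this family is the DC load flow approximation, sketched below on a hypothetical 4-bus network (not the station's actual topology, ratings, or the EDSA tool):

```python
import numpy as np

# Hypothetical 4-bus network: (from, to, reactance in per-unit); bus 0 is the
# slack bus, which absorbs any injection mismatch.
branches = [(0, 1, 0.1), (1, 2, 0.2), (0, 3, 0.1), (3, 2, 0.2)]
P = np.array([0.0, -0.4, -0.6, 0.2])  # net injections (gen - load), p.u.

n = 4
B = np.zeros((n, n))
for i, j, x in branches:
    b = 1.0 / x
    B[i, i] += b
    B[j, j] += b
    B[i, j] -= b
    B[j, i] -= b

# DC load flow: solve the susceptance system with the slack angle fixed at 0.
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])

# Line flows follow from angle differences.
flows = [(i, j, (theta[i] - theta[j]) / x) for i, j, x in branches]
```

    A full AC load flow replaces this linear solve with an iterative (e.g. Newton-Raphson) solution of the nonlinear power balance equations, which is the kind of analysis an EDSA-style package automates.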

  1. Prospective and participatory integrated assessment of agricultural systems from farm to regional scales: Comparison of three modeling approaches.

    Science.gov (United States)

    Delmotte, Sylvestre; Lopez-Ridaura, Santiago; Barbier, Jean-Marc; Wery, Jacques

    2013-11-15

    Evaluating the impacts of the development of alternative agricultural systems, such as organic or low-input cropping systems, in the context of an agricultural region requires the use of specific tools and methodologies. They should allow a prospective (using scenarios), multi-scale (taking into account the field, farm and regional level), integrated (notably multicriteria) and participatory assessment, abbreviated PIAAS (for Participatory Integrated Assessment of Agricultural System). In this paper, we compare the possible contribution to PIAAS of three modeling approaches i.e. Bio-Economic Modeling (BEM), Agent-Based Modeling (ABM) and statistical Land-Use/Land Cover Change (LUCC) models. After a presentation of each approach, we analyze their advantages and drawbacks, and identify their possible complementarities for PIAAS. Statistical LUCC modeling is a suitable approach for multi-scale analysis of past changes and can be used to start discussion about the futures with stakeholders. BEM and ABM approaches have complementary features for scenarios assessment at different scales. While ABM has been widely used for participatory assessment, BEM has been rarely used satisfactorily in a participatory manner. On the basis of these results, we propose to combine these three approaches in a framework targeted to PIAAS. Copyright © 2013 Elsevier Ltd. All rights reserved.
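    The bio-economic modeling (BEM) ingredient is typically a mathematical program: maximize farm gross margin over activity areas subject to resource constraints. A deliberately tiny, hypothetical example with SciPy (all crops and coefficients invented for illustration):

```python
from scipy.optimize import linprog

# Activities: hectares of rice, wheat, organic vegetables (all hypothetical).
margin = [1200.0, 600.0, 1500.0]   # gross margin per ha
water = [9000.0, 2000.0, 4000.0]   # irrigation water per ha, m^3

# Maximize margin . x  <=>  minimize -margin . x
res = linprog(
    c=[-m for m in margin],
    A_ub=[[1.0, 1.0, 1.0], water],  # total land, total water
    b_ub=[100.0, 300000.0],         # 100 ha farm, 300,000 m^3 water quota
    bounds=[(0, None)] * 3,
)
areas = res.x  # optimal cropping plan (ha per activity)
```

    Scenario assessment then amounts to re-solving under modified constraints or prices (e.g. a tighter water quota, or an organic price premium) and comparing the resulting plans, which is how BEM supports the prospective side of PIAAS.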

  2. EEG source space analysis of the supervised factor analytic approach for the classification of multi-directional arm movement

    Science.gov (United States)

    Shenoy Handiru, Vikram; Vinod, A. P.; Guan, Cuntai

    2017-08-01

    Objective. In electroencephalography (EEG)-based brain-computer interface (BCI) systems for motor control tasks the conventional practice is to decode motor intentions by using scalp EEG. However, scalp EEG only reveals certain limited information about the complex tasks of movement with a higher degree of freedom. Therefore, our objective is to investigate the effectiveness of source-space EEG in extracting relevant features that discriminate arm movement in multiple directions. Approach. We have proposed a novel feature extraction algorithm based on supervised factor analysis that models the data from source-space EEG. To this end, we computed the features from the source dipoles confined to Brodmann areas of interest (BA4a, BA4p and BA6). Further, we embedded class-wise labels of multi-direction (multi-class) source-space EEG to an unsupervised factor analysis to make it into a supervised learning method. Main Results. Our approach provided an average decoding accuracy of 71% for the classification of hand movement in four orthogonal directions, that is significantly higher (>10%) than the classification accuracy obtained using state-of-the-art spatial pattern features in sensor space. Also, the group analysis on the spectral characteristics of source-space EEG indicates that the slow cortical potentials from a set of cortical source dipoles reveal discriminative information regarding the movement parameter, direction. Significance. This study presents evidence that low-frequency components in the source space play an important role in movement kinematics, and thus it may lead to new strategies for BCI-based neurorehabilitation.
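    A simplified stand-in for this pipeline, using ordinary unsupervised factor analysis followed by a supervised classifier rather than the authors' label-embedded factor model, can be put together from scikit-learn on synthetic four-class "trial feature" data:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
n_trials, n_features = 200, 30  # e.g. features pooled over source dipoles
y = rng.integers(0, 4, size=n_trials)  # four movement directions

# Synthetic trials: class-dependent mean patterns plus noise.
class_means = rng.normal(scale=2.0, size=(4, n_features))
X = class_means[y] + rng.normal(size=(n_trials, n_features))

# Latent factors extracted first, direction decoded from the factor scores.
pipe = make_pipeline(FactorAnalysis(n_components=5, random_state=0),
                     LinearDiscriminantAnalysis())
acc = cross_val_score(pipe, X, y, cv=5).mean()
```

    The paper's contribution is to make the factor extraction itself supervised by embedding the class labels in the factor model, rather than bolting a classifier onto unsupervised factors as done here.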

  3. Statistical inference and visualization in scale-space for spatially dependent images

    KAUST Repository

    Vaughan, Amy

    2012-03-01

    SiZer (SIgnificant ZERo crossing of the derivatives) is a graphical scale-space visualization tool that allows for statistical inferences. In this paper we develop a spatial SiZer for finding significant features and conducting goodness-of-fit tests for spatially dependent images. The spatial SiZer utilizes a family of kernel estimates of the image and provides not only exploratory data analysis but also statistical inference with spatial correlation taken into account. It is also capable of comparing the observed image with a specific null model being tested by adjusting the statistical inference using an assumed covariance structure. Pixel locations having statistically significant differences between the image and a given null model are highlighted by arrows. The spatial SiZer is compared with the existing independent SiZer via the analysis of simulated data with and without signal on both planar and spherical domains. We apply the spatial SiZer method to the decadal temperature change over some regions of the Earth. © 2011 The Korean Statistical Society.
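    The core SiZer idea, smoothing at a family of bandwidths and flagging locations where the estimated derivative is significantly nonzero, can be sketched as follows. This is a minimal 1-D illustration assuming i.i.d. noise with known variance; the spatial SiZer described above additionally accounts for spatial correlation, which this sketch omits.

```python
import numpy as np

def sizer_map(x, y, bandwidths, sigma2, z=1.96):
    """Minimal 1-D SiZer-style map: at each bandwidth h, fit a local
    linear regression with Gaussian weights and classify the slope as
    significantly increasing (+1), decreasing (-1) or indeterminate (0).
    Assumes i.i.d. noise with known variance sigma2 (no spatial
    correlation, unlike the spatial SiZer of the paper)."""
    maps = np.zeros((len(bandwidths), len(x)), dtype=int)
    for i, h in enumerate(bandwidths):
        for j, x0 in enumerate(x):
            xc = x - x0
            w = np.exp(-0.5 * (xc / h) ** 2)
            Sw, Sx, Sxx = w.sum(), (w * xc).sum(), (w * xc**2).sum()
            denom = Sw * Sxx - Sx**2
            c = w * (Sw * xc - Sx) / denom        # slope = c @ y (WLS formula)
            slope = c @ y
            se = np.sqrt(sigma2 * (c**2).sum())   # std. error under iid noise
            if abs(slope) > z * se:
                maps[i, j] = 1 if slope > 0 else -1
    return maps

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, x.size)
m = sizer_map(x, y, bandwidths=[0.05, 0.1, 0.2], sigma2=0.01)
```

    Each row of `m` is one scale; stacking rows gives the familiar SiZer map in which features persisting across scales are judged real.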

  4. Short-term wind speed prediction using an unscented Kalman filter based state-space support vector regression approach

    International Nuclear Information System (INIS)

    Chen, Kuilin; Yu, Jie

    2014-01-01

    Highlights: • A novel hybrid modeling method is proposed for short-term wind speed forecasting. • A support vector regression model is constructed to formulate a nonlinear state-space framework. • An unscented Kalman filter is adopted to recursively update states under random uncertainty. • The new SVR–UKF approach is compared to several conventional methods for short-term wind speed prediction. • The proposed method demonstrates higher prediction accuracy and reliability. - Abstract: Accurate wind speed forecasting is becoming increasingly important to improve and optimize renewable wind power generation. In particular, reliable short-term wind speed prediction can enable model predictive control of wind turbines and real-time optimization of wind farm operation. However, this task remains challenging due to the strong stochastic nature and dynamic uncertainty of wind speed. In this study, an unscented Kalman filter (UKF) is integrated with a support vector regression (SVR) based state-space model in order to precisely update the short-term estimation of the wind speed sequence. In the proposed SVR–UKF approach, support vector regression is first employed to formulate a nonlinear state-space model, and then the unscented Kalman filter is adopted to perform dynamic state estimation recursively on wind sequences with stochastic uncertainty. The novel SVR–UKF method is compared with artificial neural networks (ANNs), SVR, autoregressive (AR) and autoregressive integrated with Kalman filter (AR-Kalman) approaches for predicting short-term wind speed sequences collected from three sites in Massachusetts, USA. The forecasting results indicate that the proposed method has much better performance in both one-step-ahead and multi-step-ahead wind speed predictions than the other approaches across all the locations.
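    The recursive update step can be illustrated with a minimal scalar unscented Kalman filter. The transition function `f` below is a placeholder for the SVR-learned state-space model of the abstract (here a hypothetical identity dynamic); the sigma-point construction and weights follow the standard scalar UKF.

```python
import numpy as np

def ukf_step(mu, P, y, f, h, Q, R, kappa=1.0):
    """One predict/update cycle of a scalar unscented Kalman filter.
    f: state transition (SVR-learned in the paper; placeholder here),
    h: measurement function, Q/R: process and measurement noise variances."""
    W = np.array([kappa / (1 + kappa), 0.5 / (1 + kappa), 0.5 / (1 + kappa)])
    # predict: propagate sigma points through f
    s = np.sqrt((1 + kappa) * P)
    X = np.array([mu, mu + s, mu - s])
    Xf = f(X)
    mu_p = W @ Xf
    P_p = W @ (Xf - mu_p) ** 2 + Q
    # update: propagate fresh sigma points through h
    s = np.sqrt((1 + kappa) * P_p)
    Xp = np.array([mu_p, mu_p + s, mu_p - s])
    Z = h(Xp)
    z_hat = W @ Z
    S = W @ (Z - z_hat) ** 2 + R          # innovation variance
    C = W @ ((Xp - mu_p) * (Z - z_hat))   # state-measurement covariance
    K = C / S                             # Kalman gain
    return mu_p + K * (y - z_hat), P_p - K * S * K

# Track a constant true wind speed of 1.0 with identity dynamics
mu, P = 0.0, 1.0
for y in [1.0, 1.0, 1.0]:
    mu, P = ukf_step(mu, P, y, f=lambda x: x, h=lambda x: x, Q=0.01, R=0.1)
```

    After a few measurements the state estimate converges toward the observed value while the posterior variance shrinks; in the SVR–UKF, `f` would instead be the nonlinear map learned from historical wind data.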

  5. Colin Rowe: Space as well-composed illusion

    Directory of Open Access Journals (Sweden)

    Christoph Schnoor

    2011-12-01

    Full Text Available Architectural historian Colin Rowe, although well known for his intriguing analytical writings on modern architecture, rarely examined architectural space as a scholarly subject matter. Historians examining Rowe’s writings rarely refer to the issue of space, either. Anthony Vidler, Werner Oechslin, Alexander Caragonne and others have examined Rowe’s investigations into urban space, his analyses of formal principles in architecture, or his critical stance towards the myths of modernism, but have not singled out architectural space as subject matter. Nevertheless, this paper argues that Rowe is indeed one of the few post-war historians writing in the English language to have conveyed analyses of architectural space, particularly in the volume The Mathematics of the Ideal Villa (1976). The paper examines how Rowe understood architectural space as relevant only when not seen as ‘pure’ but ‘contaminated’ with ambiguity and active character: notions of flatness versus depth and horizontal versus vertical, as well as the overlapping of conflicting scales or whole structural or spatial systems, are central to Rowe’s reading of architectural space, which is also always infused with an idea of movement. Further, the paper traces influences on Rowe’s approach beyond the obvious influence of Rudolf Wittkower to Heinrich Wölfflin’s style and method, partially conveyed through the translation of Sigfried Giedion’s writings.

  6. The third spatial dimension risk approach for individual risk and group risk in multiple use of space

    International Nuclear Information System (INIS)

    Suddle, Shahid; Ale, Ben

    2005-01-01

    Buildings above roads and railways are examples of multiple use of space. Safety is one of the critical issues for such projects. Risk analyses can be undertaken to investigate which safety measures are required to realise these projects. The results of these analyses can also be compared with risk acceptance criteria, where applicable. In The Netherlands, there are explicit criteria for the acceptability of individual risk and societal risk. Traditionally, calculations of individual risk result in contours of equal risk on a map and are thus considered in two-dimensional space only. However, when different functions are layered, the third spatial dimension, height, becomes an important parameter. The various activities and structures above and below each other impose mutual risks. There are no explicit norms or policies on how to deal with the individual or group risk approach in the third dimension. This paper proposes an approach for these problems and gives some examples. Finally, the third dimension risk approach is applied in a case study of Bos en Lommer, Amsterdam.

  7. Multiple Model-Based Synchronization Approaches for Time Delayed Slaving Data in a Space Launch Vehicle Tracking System

    Directory of Open Access Journals (Sweden)

    Haryong Song

    2016-01-01

    Full Text Available Due to the inherent characteristics of the flight mission of a space launch vehicle (SLV), which is required to fly over very large distances and have very high fault tolerance, SLV tracking systems (TSs) generally comprise multiple heterogeneous sensors such as radars, GPS, INS, and electro-optical targeting systems installed over widespread areas. To track an SLV without interruption and to hand over the measurement coverage between TSs properly, the mission control system (MCS) transfers slaving data to each TS through mission networks. When serious network delays occur, however, the slaving data from the MCS can lead to the failure of the TS. To address this problem, in this paper we propose multiple model-based synchronization (MMS) approaches, which take advantage of the multiple motion models of an SLV. Cubic spline extrapolation, prediction through an α-β-γ filter, and a single-model Kalman filter are presented as benchmark approaches. We demonstrate the synchronization accuracy and effectiveness of the proposed MMS approaches using Monte Carlo simulation with the nominal trajectory data of Korea Space Launch Vehicle-I.

  8. Space plasma observations - observations of solar-terrestrial environment. Space Weather Forecast

    International Nuclear Information System (INIS)

    Sagawa, Eiichi; Akioka, Maki

    1996-01-01

    The space environment becomes more important than ever before because of the expansion in the utilization of near-earth space and the increase in the vulnerability of large scale systems on the ground such as electrical power grids. The concept of the Space Weather Forecast program emerged from the accumulation of understanding on basic physical processes and from our activities as one of the regional warning centers of the international network of space environment services. (author)

  9. Space plasma physics stationary processes

    CERN Document Server

    Hasegawa, Akira

    1989-01-01

    During the 30 years of space exploration, important discoveries in the near-earth environment such as the Van Allen belts, the plasmapause, the magnetotail and the bow shock, to name a few, have been made. Coupling between the solar wind and the magnetosphere and energy transfer processes between them are being identified. Space physics is clearly approaching a new era, where the emphasis is being shifted from discoveries to understanding. One way of identifying the new direction may be found in the recent contribution of atmospheric science and oceanography to the development of fluid dynamics. Hydrodynamics is a branch of classical physics in which important discoveries have been made in the era of Rayleigh, Taylor, Kelvin and Helmholtz. However, recent progress in global measurements using man-made satellites and in large scale computer simulations carried out by scientists in the fields of atmospheric science and oceanography have created new activities in hydrodynamics and produced important new discover...

  10. A short essay on quantum black holes and underlying noncommutative quantized space-time

    International Nuclear Information System (INIS)

    Tanaka, Sho

    2017-01-01

    We emphasize the importance of noncommutative geometry, or Lorentz-covariant quantized space-time, towards the ultimate theory of quantum gravity and Planck-scale physics. We focus our attention on the statistical and substantial understanding of the Bekenstein–Hawking area-entropy law of black holes in terms of the kinematical holographic relation (KHR). The KHR manifestly holds in Yang’s quantized space-time as the result of the kinematical reduction of spatial degrees of freedom caused by its own nature of noncommutative geometry, and plays an important role in our approach without any recourse to the familiar hypothesis, the so-called holographic principle. In the present paper, we find a unified form of the KHR applicable to the whole region ranging from macroscopic to microscopic scales in spatial dimension d = 3. We notice a possibility of a nontrivial modification of the area-entropy law of black holes, which becomes most remarkable in extremely microscopic systems close to the Planck scale. (paper)

  11. Biointerface dynamics--Multi scale modeling considerations.

    Science.gov (United States)

    Pajic-Lijakovic, Ivana; Levic, Steva; Nedovic, Viktor; Bugarski, Branko

    2015-08-01

    The irreversible nature of matrix structural changes around immobilized cell aggregates caused by cell expansion is considered within Ca-alginate microbeads. It is related to various effects: (1) cell-bulk surface effects (cell-polymer mechanical interactions) and cell surface-polymer surface effects (cell-polymer electrostatic interactions) at the bio-interface, (2) polymer-bulk volume effects (polymer-polymer mechanical and electrostatic interactions) within the perturbed boundary layers around the cell aggregates, (3) cumulative surface and volume effects within parts of the microbead, and (4) macroscopic effects within the microbead as a whole, based on multi-scale modeling approaches. All modeling levels are discussed at two time scales, i.e. a long time scale (cell growth time) and a short time scale (cell rearrangement time). Matrix structural changes result in the generation of resistance stress, which has a feedback impact on: (1) single and collective cell migrations, (2) cell deformation and orientation, (3) the decrease of cell-to-cell separation distances, and (4) cell growth. Herein, an attempt is made to discuss and connect the various multi-scale modeling approaches, on a range of time and space scales, that have been proposed in the literature, in order to shed further light on this complex cause-consequence phenomenon which induces the anomalous nature of energy dissipation during the structural changes of cell aggregates and matrix, quantified by the damping coefficients (the orders of the fractional derivatives). Deeper insight into the partial disintegration of the matrix within the boundary layers is useful for understanding and minimizing the generation of polymer matrix resistance stress within the interface and, on that basis, optimizing cell growth. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Synopsis Session III and IV 'Water and ion mobility, up-scaling and implementation in model approaches'

    International Nuclear Information System (INIS)

    2013-01-01

    The contributions of Session III 'Water and ion mobility' and Session IV 'Up-scaling and implementation in model approaches' were merged for the proceedings volume. The range of scales we are interested in starts at the molecular scale (1-3 Angstrom) and the crystal scale (3 Angstrom-2 nm), continues over the particle scale with 2-200 nm dimension, to the particle/macro-aggregate scale with 0.2-1500 μm. Methods available to study pore structure and connectivity at the particle scale, which determine water mobility, are N2 adsorption and Hg intrusion under dry conditions, whereas in the hydrated state methods like X-ray tomography and X-ray and neutron scattering are available. Going down in size, molecular modeling, X-ray and neutron diffraction modeling and water adsorption gravimetry are, inter alia, available. There are resolution limits to the methods presented in Session II (e.g. BIB-SEM) on pore characterization: the clay matrix can only be characterized under limited clay induration, and pore throats are at the limit of resolution. These pore throats, however, are very important for the macroscopic phenomena observed. One methodological approach to bridge the gap between the molecular/crystal scale and the particle/macro-aggregate scale (FIB-SEM) is to use complementary techniques such as cryo-NMR, N2 and water ad-/desorption and TEM.

  13. Effective use of integrated hydrological models in basin-scale water resources management: surrogate modeling approaches

    Science.gov (United States)

    Zheng, Y.; Wu, B.; Wu, X.

    2015-12-01

    Integrated hydrological models (IHMs) consider surface water and subsurface water as a unified system, and have been widely adopted in basin-scale water resources studies. However, due to IHMs' mathematical complexity and high computational cost, it is difficult to implement them in an iterative model evaluation process (e.g., Monte Carlo simulation, simulation-optimization analysis, etc.), which diminishes their applicability for supporting decision-making in real-world situations. Our studies investigated how to effectively use complex IHMs to address real-world water issues via surrogate modeling. Three surrogate modeling approaches were considered, including 1) DYCORS (DYnamic COordinate search using Response Surface models), a well-established response surface-based optimization algorithm; 2) SOIM (Surrogate-based Optimization for Integrated surface water-groundwater Modeling), a response surface-based optimization algorithm that we developed specifically for IHMs; and 3) the Probabilistic Collocation Method (PCM), a stochastic response surface approach. Our investigation was based on a modeling case study in the Heihe River Basin (HRB), China's second largest endorheic river basin. The GSFLOW (Coupled Ground-Water and Surface-Water Flow Model) model was employed. Two decision problems were discussed: one is to optimize, both in time and in space, the conjunctive use of surface water and groundwater for agricultural irrigation in the middle HRB region; the other is to cost-effectively collect hydrological data based on a data-worth evaluation. Overall, our study results highlight the value of incorporating an IHM in making decisions on water resources management and hydrological data collection. An IHM like GSFLOW can provide great flexibility in formulating proper objective functions and constraints for various optimization problems. On the other hand, it has been demonstrated that surrogate modeling approaches can pave the path for such incorporation in real

  14. A real-space renormalization approach to the Kubo-Greenwood formula in mirror Fibonacci systems

    International Nuclear Information System (INIS)

    Sanchez, Vicenta; Wang Chumin

    2006-01-01

    An exact real-space renormalization method is developed to address the electronic transport in mirror Fibonacci chains at a macroscopic scale by means of the Kubo-Greenwood formula. The results show that the mirror symmetry induces a large number of transparent states in the dc conductivity spectra, contrary to the simple Fibonacci case. A length scaling analysis over ten orders of magnitude reveals the existence of critically localized states, and their ac conduction spectra show a highly oscillating behaviour. For multidimensional quasiperiodic systems, a novel renormalization plus convolution method is proposed. This combined renormalization + convolution method has shown extremely high computational efficiency, being able to calculate the electrical conductance of a three-dimensional non-crystalline solid with 10^30 atoms. Finally, the dc and ac conductances of mirror Fibonacci nanowires are also investigated, where a quantized dc-conductance variation with the Fermi energy is found, as observed in gold nanowires.

  15. Large Scale Processes and Extreme Floods in Brazil

    Science.gov (United States)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

    Persistent large-scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional, stationarity-based approach to flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large-scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies, and the role of large-scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large-scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space obtained by machine learning techniques, particularly supervised kernel principal component analysis. In this reduced-dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convection activity. We investigate for individual sites the exceedance probability with which large-scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs large-scale).

  16. Mapping spaces and automorphism groups of toric noncommutative spaces

    Science.gov (United States)

    Barnes, Gwendolyn E.; Schenkel, Alexander; Szabo, Richard J.

    2017-09-01

    We develop a sheaf theory approach to toric noncommutative geometry which allows us to formalize the concept of mapping spaces between two toric noncommutative spaces. As an application, we study the `internalized' automorphism group of a toric noncommutative space and show that its Lie algebra has an elementary description in terms of braided derivations.

  17. Streamflow disaggregation: a nonlinear deterministic approach

    Directory of Open Access Journals (Sweden)

    B. Sivakumar

    2004-01-01

    Full Text Available This study introduces a nonlinear deterministic approach for streamflow disaggregation. According to this approach, the streamflow transformation process from one scale to another is treated as a nonlinear deterministic process, rather than a stochastic process as generally assumed. The approach follows two important steps: (1) reconstruction of the scalar (streamflow) series in a multi-dimensional phase space for representing the transformation dynamics; and (2) use of a local approximation (nearest neighbor) method for disaggregation. The approach is employed for streamflow disaggregation in the Mississippi River basin, USA. Data of successively doubled resolutions between daily and 16 days (i.e. daily, 2-day, 4-day, 8-day, and 16-day) are studied, and disaggregations are attempted only between successive resolutions (i.e. 2-day to daily, 4-day to 2-day, 8-day to 4-day, and 16-day to 8-day). Comparisons between the disaggregated values and the actual values reveal excellent agreement for all the cases studied, indicating the suitability of the approach for streamflow disaggregation. A further insight into the results reveals that the best results are, in general, achieved for low embedding dimensions (2 or 3) and a small number of neighbors (less than 50), suggesting the possible presence of nonlinear determinism in the underlying transformation process. A decrease in accuracy with increasing disaggregation scale is also observed, a possible implication of the existence of a scaling regime in streamflow.
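    The two steps, phase-space reconstruction and local (nearest-neighbor) approximation, can be sketched for the simplest case of one disaggregation step (2-day to daily, in the paper's terms). The single-neighbor rule and the synthetic series below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def embed(series, m):
    """Delay-embed a series into m-dimensional phase-space vectors."""
    return np.array([series[i:i + m] for i in range(len(series) - m + 1)])

def disaggregate_last(coarse, fine, m=3):
    """Disaggregate the last coarse value into two sub-scale values using
    the single nearest neighbour in the reconstructed phase space (a
    one-neighbour sketch of the local-approximation step). `fine` holds
    the two sub-scale values observed for each historical coarse value."""
    E = embed(coarse, m)
    target, history = E[-1], E[:-1]
    k = np.argmin(np.linalg.norm(history - target, axis=1))
    donor = k + m - 1                       # coarse index the neighbour ends at
    ratio = fine[donor] / coarse[donor]     # observed sub-scale split (sums to 1)
    return coarse[-1] * ratio               # preserves the coarse total

# Synthetic positive "streamflow": two fine values per coarse step
rng = np.random.default_rng(1)
f = np.abs(np.sin(np.arange(80) * 0.3)) + 0.1 + rng.normal(0.0, 0.01, 80)
fine = f.reshape(-1, 2)
coarse = fine.sum(axis=1)
parts = disaggregate_last(coarse, fine, m=3)
```

    Because the neighbour's split ratios sum to one, the disaggregated values add back exactly to the coarse total, which is the mass-conservation property any disaggregation scheme must satisfy.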

  18. Modelling airborne gravity data by means of adapted Space-Wise approach

    Science.gov (United States)

    Sampietro, Daniele; Capponi, Martina; Hamdi Mansi, Ahmed; Gatti, Andrea

    2017-04-01

    Regional gravity field modelling by means of the remove-restore procedure is nowadays widely applied to predict grids of gravity anomalies (Bouguer, free-air, isostatic, etc.) in gravimetric geoid determination as well as in exploration geophysics. For this last application, due to the required accuracy and resolution, airborne gravity observations are generally adopted. However, due to the relatively high acquisition velocity, presence of atmospheric turbulence, aircraft vibration, instrumental drift, etc., airborne data are contaminated by a very high observation error. For this reason, a proper procedure to filter the raw observations in both the low and high frequencies should be applied to recover valuable information. In this work, a procedure to predict a grid, or a set of filtered along-track gravity anomalies, by merging a GGM and an airborne dataset is presented. The proposed algorithm, like the Space-Wise approach developed by Politecnico di Milano in the framework of GOCE data analysis, is based on a combination of an along-track Wiener filter and a Least Squares Collocation adjustment, and properly considers the different altitudes of the gravity observations. Among the main differences with respect to the satellite application of the Space-Wise approach is the fact that, while in processing GOCE data the stochastic characteristics of the observation error can be considered a priori well known, in airborne gravimetry, due to the complex environment in which the observations are acquired, these characteristics are unknown and should be retrieved from the dataset itself. Some innovative theoretical aspects, focusing in particular on the covariance modelling, are presented too. In the end, the goodness of the procedure is evaluated by means of a test on real data, recovering the gravitational signal with a predicted accuracy of about 0.25 mGal.
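    The along-track filtering step can be illustrated with a frequency-domain Wiener filter, whose per-frequency gain is S/(S+N) for assumed signal and noise power spectra. The spectra below are hypothetical stand-ins (in the approach described above they must be retrieved from the airborne dataset itself), and the collocation adjustment is omitted.

```python
import numpy as np

def wiener_filter_track(obs, signal_psd, noise_psd):
    """Filter one along-track profile in the frequency domain.
    Gain H = S / (S + N) attenuates bands dominated by noise."""
    F = np.fft.rfft(obs)
    H = signal_psd / (signal_psd + noise_psd)
    return np.fft.irfft(F * H, n=len(obs))

rng = np.random.default_rng(0)
n = 512
t = np.arange(n)
true = np.sin(2 * np.pi * 4 * t / n)        # smooth gravity-like signal
obs = true + rng.normal(0.0, 0.5, n)        # noisy airborne observations

signal_psd = np.full(n // 2 + 1, 1e-3)
signal_psd[:10] = 10.0                      # assumed low-frequency signal power
noise_psd = np.ones(n // 2 + 1)             # assumed white observation noise

filtered = wiener_filter_track(obs, signal_psd, noise_psd)
rmse_raw = np.sqrt(np.mean((obs - true) ** 2))
rmse_filt = np.sqrt(np.mean((filtered - true) ** 2))
```

    With the assumed spectra the filter passes the low-frequency band carrying the signal and suppresses the broadband noise, so the filtered profile is substantially closer to the true signal than the raw observations.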

  19. Quadratic inner element subgrid scale discretisation of the Boltzmann transport equation

    International Nuclear Information System (INIS)

    Baker, C.M.J.; Buchan, A.G.; Pain, C.C.; Tollit, B.; Eaton, M.D.; Warner, P.

    2012-01-01

    This paper explores the application of the inner element subgrid scale method to the Boltzmann transport equation using quadratic basis functions. Previously, only linear basis functions for both the coarse scale and the fine scale were considered. This paper, therefore, analyses the advantages of using different coarse and subgrid basis functions for increasing the accuracy of the subgrid scale method. The transport of neutral particle radiation may be described by the Boltzmann transport equation (BTE) which, due to its 7-dimensional phase space, is computationally expensive to resolve. Multi-scale methods offer an approach to efficiently resolve the spatial dimensions of the BTE by separating the solution into its coarse and fine scales and formulating a solution whereby only the computationally efficient coarse scales need to be solved. In previous work an inner element subgrid scale method was developed that applied a linear continuous and discontinuous finite element method to represent the solution’s coarse and fine scale components. This approach was shown to generate efficient and stable solutions, and so this article continues its development by formulating higher order quadratic finite element expansions over the continuous and discontinuous scales. Here it is shown that a solution’s convergence can be improved significantly using higher order basis functions. Furthermore, by using linear finite elements to represent coarse scales in combination with quadratic fine scales, convergence can also be improved with only a modest increase in computational expense.

  20. A study of safeguards approach for the area of plutonium evaporator in a large scale reprocessing plant

    International Nuclear Information System (INIS)

    Sakai, Hirotada; Ikawa, Koji

    1994-01-01

    A preliminary study on a safeguards approach for the chemical processing area in a large scale reprocessing plant has been carried out. In this approach, plutonium inventory at the plutonium evaporator will not be taken, but containment and surveillance (C/S) measures will be applied to ensure the integrity of an area specifically defined to include the plutonium evaporator. The plutonium evaporator area consists of the evaporator itself and two accounting points, i.e., one before the plutonium evaporator and the other after the plutonium evaporator. For newly defined accounting points, two alternative measurement methods, i.e., accounting vessels with high accuracy and flow meters, were examined. Conditions to provide the integrity of the plutonium evaporator area were also examined as well as other technical aspects associated with this approach. The results showed that an appropriate combination of NRTA and C/S measures would be essential to realize a cost effective safeguards approach to be applied for a large scale reprocessing plant. (author)

  1. A new approach to motion control of torque-constrained manipulators by using time-scaling of reference trajectories

    Energy Technology Data Exchange (ETDEWEB)

    Moreno-Valenzuela, Javier; Orozco-Manriquez, Ernesto [Digital del IPN, CITEDI-IPN, Tijuana, (Mexico)

    2009-12-15

    We introduce a control scheme based on using a trajectory tracking controller and an algorithm for on-line time scaling of the reference trajectories. The reference trajectories are time-scaled according to the measured tracking errors and the detected torque/acceleration saturation. Experiments are presented to illustrate the advantages of the proposed approach

  2. A new approach to motion control of torque-constrained manipulators by using time-scaling of reference trajectories

    International Nuclear Information System (INIS)

    Moreno-Valenzuela, Javier; Orozco-Manriquez, Ernesto

    2009-01-01

    We introduce a control scheme based on using a trajectory tracking controller and an algorithm for on-line time scaling of the reference trajectories. The reference trajectories are time-scaled according to the measured tracking errors and the detected torque/acceleration saturation. Experiments are presented to illustrate the advantages of the proposed approach
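    A minimal version of the error- and saturation-driven slowdown could look like the following. The specific rule and constants are illustrative assumptions, not the authors' published control law.

```python
def scaled_time_step(dt_nominal, tracking_error, torque, tau_max,
                     err_tol=0.05, alpha=2.0):
    """Advance the reference-trajectory clock by a scaled step:
    slow down when the commanded torque saturates or the tracking
    error exceeds its tolerance (illustrative rule only)."""
    slowdown = 1.0
    if abs(torque) >= tau_max:                # torque saturation detected
        slowdown *= 0.5
    if abs(tracking_error) > err_tol:         # error-driven scaling
        slowdown /= 1.0 + alpha * (abs(tracking_error) - err_tol)
    return dt_nominal * slowdown
```

    With this rule the reference clock runs at nominal speed when tracking is accurate and the actuators are unsaturated, and slows smoothly as either condition degrades, which is the qualitative behaviour the abstract describes.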

  3. Space Acquisitions: Challenges Facing DOD as it Changes Approaches to Space Acquisitions

    Science.gov (United States)

    2016-03-09

    alternatives to support decisions about the future of space programs, there are gaps in cost and other data needed to weigh the pros and cons of changes to space systems. Second, most changes...

  4. Polymer density functional theory approach based on scaling second-order direct correlation function.

    Science.gov (United States)

    Zhou, Shiqi

    2006-06-01

    A second-order direct correlation function (DCF) obtained by solving the polymer-RISM integral equation is scaled up or down by an equation of state for the bulk polymer; the resultant scaled second-order DCF is in better agreement with corresponding simulation results than the unscaled second-order DCF. When the scaled second-order DCF is imported into a recently proposed LTDFA-based polymer DFT approach, an originally associated adjustable but mathematically meaningless parameter becomes mathematically meaningful, i.e., its numerical value now lies between 0 and 1. When the adjustable-parameter-free version of the LTDFA is used instead of the LTDFA, i.e., the adjustable parameter is fixed at 0.5, the resultant parameter-free version of the scaling LTDFA-based polymer DFT is also in good agreement with the corresponding simulation data for density profiles. The parameter-free version of the scaling LTDFA-based polymer DFT is employed to investigate the density profiles of a freely jointed tangent hard sphere chain near a variable-sized central hard sphere; again the predictions accurately reproduce the simulation results. The importance of the present adjustable-parameter-free version lies in its combination with a recently proposed universal theoretical way; in the resultant formalism, the contact theorem is still met by the adjustable parameter associated with the theoretical way.

  5. Theory of function spaces

    CERN Document Server

    Triebel, Hans

    1983-01-01

    The book deals with the two scales Bsp,q and Fsp,q of spaces of distributions, where -∞ < s < ∞ and 0 < p, q ≤ ∞. These scales include many classical and modern spaces, such as Hölder spaces, Zygmund classes, Sobolev spaces, Besov spaces, Bessel-potential spaces, Hardy spaces and spaces of BMO-type. It is the main aim of this book to give a unified treatment of the corresponding spaces on the Euclidean n-space Rn in the framework of Fourier analysis, which is based on the technique of maximal functions, Fourier multipliers and interpolation assertions. These topics are treated in Chapter 2, which is the heart

  6. BRST quantization of Yang-Mills theory: A purely Hamiltonian approach on Fock space

    Science.gov (United States)

    Öttinger, Hans Christian

    2018-04-01

    We develop the basic ideas and equations for the BRST quantization of Yang-Mills theories in an explicit Hamiltonian approach, without any reference to the Lagrangian approach at any stage of the development. We present a new representation of ghost fields that combines desirable self-adjointness properties with canonical anticommutation relations for ghost creation and annihilation operators, thus enabling us to characterize the physical states on a well-defined Fock space. The Hamiltonian is constructed by piecing together simple BRST invariant operators to obtain a minimal invariant extension of the free theory. It is verified that the evolution equations implied by the resulting minimal Hamiltonian provide a quantum version of the classical Yang-Mills equations. The modifications and requirements for the inclusion of matter are discussed in detail.

  7. A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.

    Science.gov (United States)

    Budinich, Marko; Bourdon, Jérémie; Larhlimi, Abdelhalim; Eveillard, Damien

    2017-01-01

    Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at community level, were emphasized. We expect this approach to be used for integrating genomic information in microbial ecosystems. Following models will provide insights about behaviors (including diversity) that take place at the ecosystem scale.
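
The constraint-based backbone of MO-FBA is an ordinary flux balance analysis linear program. A minimal single-objective sketch (using SciPy, with a hypothetical three-reaction toy network, not the paper's hot spring mat ecosystem) illustrates the steady-state constraint S·v = 0 and flux bounds:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: v1 = uptake of A (capped at 10), v2 = A -> B, v3 = B -> biomass.
# Rows of S are metabolite balances; steady state requires S @ v = 0.
S = np.array([[1, -1,  0],   # metabolite A: produced by v1, consumed by v2
              [0,  1, -1]])  # metabolite B: produced by v2, consumed by v3
c = [0, 0, -1]               # linprog minimizes, so negate the biomass flux
bounds = [(0, 10), (0, None), (0, None)]

res = linprog(c, A_eq=S, b_eq=[0, 0], bounds=bounds)
# The uptake cap propagates through the chain, so optimal biomass flux is 10.
```

MO-FBA generalizes this single LP by optimizing several such objectives at once (e.g. one growth rate per community member) and exploring the resulting trade-off front instead of a single optimum.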

  8. Low rank approximation methods for MR fingerprinting with large scale dictionaries.

    Science.gov (United States)

    Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra

    2018-04-01

    This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000-fold for the MRF fast imaging with steady-state precession sequence and more than 15-fold for the MRF balanced steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory-efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
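
The compression step the abstract describes, a randomized singular value decomposition of a large dictionary, can be sketched generically; a toy exponential-decay dictionary stands in here for a real MRF dictionary, and all sizes and names are illustrative:

```python
import numpy as np

def randomized_svd(D, rank, n_oversample=10, seed=0):
    """Low-rank approximation of a large matrix D via random projection.

    Generic randomized-SVD sketch: project onto a small random subspace,
    orthonormalize, then take an exact SVD of the much smaller matrix.
    The paper's MRF-specific polynomial dictionary fitting is not shown.
    """
    rng = np.random.default_rng(seed)
    m, n = D.shape
    Omega = rng.standard_normal((n, rank + n_oversample))
    Q, _ = np.linalg.qr(D @ Omega)          # orthonormal basis for range(D)
    B = Q.T @ D                             # small (rank+k) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank]

# Toy "dictionary": 2000 time points x 500 smoothly parameterized signals
t = np.linspace(0, 1, 2000)[:, None]
D = np.hstack([np.exp(-t / tau) for tau in np.linspace(0.05, 0.5, 500)])

U, s, Vt = randomized_svd(D, rank=20)
D_approx = U @ np.diag(s) @ Vt
rel_err = np.linalg.norm(D - D_approx) / np.linalg.norm(D)
```

Because the random projection never materializes the full SVD of D, only a tall-skinny and a short-wide factor are kept in memory, which is where the reported savings come from.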

  9. Space Station Freedom - Approaching the critical design phase

    Science.gov (United States)

    Kohrs, Richard H.; Huckins, Earle, III

    1992-01-01

    The status and future developments of Space Station Freedom are discussed. To date, detailed design drawings are being produced to manufacture SSF hardware. A critical design review (CDR) for the man-tended capability configuration is planned for 1993 under the SSF program. The main objective of the CDR is to enable the program to make a full commitment to proceed with manufacturing parts and assemblies. NASA recently signed a contract with the Russian space company NPO Energia to evaluate potential applications of various Russian space hardware for ongoing NASA programs.

  10. Modified parity space averaging approaches for online cross-calibration of redundant sensors in nuclear reactors

    Directory of Open Access Journals (Sweden)

    Moath Kassim

    2018-05-01

    Full Text Available To maintain the safety and reliability of reactors, redundant sensors are usually used to measure critical variables and to estimate their averaged time dependency. Unhealthy sensors can badly skew the estimate of the process variable. Since online condition monitoring was introduced, the online cross-calibration method has been widely used to detect anomalies in sensor readings within the redundant group. Cross-calibration has four main averaging techniques: simple averaging, band averaging, weighted averaging, and parity space averaging (PSA). PSA weighs redundant signals based on their error bounds and their band consistency. Using the consistency weighting factor (C), PSA assigns more weight to consistent signals that share bands, in proportion to how many bands they share, and gives inconsistent signals very low weight. In this article, three approaches for improving the PSA technique are introduced: the first adds another consistency factor, called trend consistency (TC), to account for preserving any characteristic edge that reflects the behavior of the equipment or component measured by the process parameter; the second replaces the error-bound/accuracy-based weighting factor (Wa) with a weighting factor based on the Euclidean distance (Wd); and the third applies Wd, TC, and C all together. Cold neutron source data sets from four redundant hydrogen pressure transmitters of a research reactor were used for validation and verification. Results showed that the second and third modified approaches lead to a reasonable improvement of the PSA technique. All approaches implemented in this study share the capability to (1) identify and isolate a drifted sensor that should undergo calibration, (2) identify faulty sensors due to long, continuous ranges of missing data, and (3) identify healthy sensors.
Keywords: Nuclear Reactors
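
The Euclidean-distance weighting idea (Wd) can be illustrated with a deliberately simplified sketch. This is not the article's full PSA formulation: the weighting function below is a hypothetical stand-in that merely shows how an outlying redundant sensor gets down-weighted relative to a plain average.

```python
import numpy as np

def distance_weighted_average(readings):
    """Consistency-weighted average of redundant sensor readings.

    Each sensor's weight falls off with its total Euclidean (here 1-D,
    absolute) distance to the other sensors, so a drifted transmitter
    contributes less than it would to a simple average.
    """
    readings = np.asarray(readings, dtype=float)
    # Total pairwise distance of each sensor to all others
    dist = np.abs(readings[:, None] - readings[None, :]).sum(axis=1)
    weights = 1.0 / (dist + 1e-12)      # closer to the group -> larger weight
    weights /= weights.sum()
    return float(np.dot(weights, readings))

# Four redundant pressure transmitters, one drifting high
readings = [10.0, 10.1, 9.9, 14.0]
est = distance_weighted_average(readings)   # pulled toward the consistent trio
```

A simple average of these readings is 11.0, visibly biased by the drifted sensor; the distance-weighted estimate stays much closer to the consistent group around 10.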

  11. Computer-aided detection of lung nodules via 3D fast radial transform, scale space representation, and Zernike MIP classification.

    Science.gov (United States)

    Riccardi, Alessandro; Petkov, Todor Sergueev; Ferri, Gianluca; Masotti, Matteo; Campanini, Renato

    2011-04-01

    The authors presented a novel system for automated nodule detection in lung CT exams. The approach is based on (1) a lung tissue segmentation preprocessing step, composed of histogram thresholding, seeded region growing, and mathematical morphology; (2) a filtering step, whose aim is the preliminary detection of candidate nodules (via 3D fast radial filtering) and estimation of their geometrical features (via scale space analysis); and (3) a false positive reduction (FPR) step, comprising a heuristic FPR, which applies thresholds based on geometrical features, and a supervised FPR, which is based on support vector machines classification, which in turn, is enhanced by a feature extraction algorithm based on maximum intensity projection processing and Zernike moments. The system was validated on 154 chest axial CT exams provided by the lung image database consortium public database. The authors obtained correct detection of 71% of nodules marked by all radiologists, with a false positive rate of 6.5 false positives per patient (FP/patient). A higher specificity of 2.5 FP/patient was reached with a sensitivity of 60%. An independent test on the ANODE09 competition database obtained an overall score of 0.310. The system shows a novel approach to the problem of lung nodule detection in CT scans: It relies on filtering techniques, image transforms, and descriptors rather than region growing and nodule segmentation, and the results are comparable to those of other recent systems in literature and show little dependency on the different types of nodules, which is a good sign of robustness.

  12. Ethical approach to digital skills. Sense and use in virtual educational spaces

    Directory of Open Access Journals (Sweden)

    Juan GARCÍA-GUTIÉRREZ

    2013-12-01

    Full Text Available In the context of technology and cyberspace, should we do everything we are able to do? The answer commonly given to this question is not ethical but political: safety. Safety and security are overshadowing the ethical question about the meaning of technology. Cyberspace imposes a "new logic" and new forms of "ownership". When it comes to children and the Internet, we do not always adopt a logic of accountability toward cyberspace, even though the Internet is a space that is not only technical but also ethical. We talk about a safe Internet, a healthy Internet, an Internet fit for children... why not also talk about an ethical Internet? In this work we approach digital competences as those skills that help us position and orient ourselves in cyberspace, something that is not possible without ethical competences as well. In this article we therefore try to build and propose a model for analyzing virtual learning spaces (and cyberspace in general) based on the categories of "use" and "sense" as different levels of appropriation that indicate the types of competences needed to access cyberspace.

  13. Optimization Approach for Multi-scale Segmentation of Remotely Sensed Imagery under k-means Clustering Guidance

    Directory of Open Access Journals (Sweden)

    WANG Huixian

    2015-05-01

    Full Text Available In order to adapt segmentation to land covers at different scales, an optimized approach for multi-scale segmentation under the guidance of k-means clustering is proposed. First, small-scale segmentation and k-means clustering are applied to the original images; the k-means clustering result then guides the object-merging procedure, in which the Otsu threshold method automatically selects the impact factor of the k-means guidance; finally, segmentation results applicable to objects of different scales are obtained. Taking the FNEA method as an example, segmentation experiments were carried out on a simulated image and a real remote sensing image from the GeoEye-1 satellite; qualitative and quantitative evaluation demonstrates that the proposed method obtains high-quality segmentation results.

  14. Length scale for configurational entropy in microemulsions

    NARCIS (Netherlands)

    Reiss, H.; Kegel, W.K.; Groenewold, J.

    1996-01-01

    In this paper we study the length scale that must be used in evaluating the mixing entropy in a microemulsion. The central idea involves the choice of a length scale in configuration space that is consistent with the physical definition of entropy in phase space. We show that this scale may be

  15. College students with Internet addiction decrease fewer Behavior Inhibition Scale and Behavior Approach Scale when getting online.

    Science.gov (United States)

    Ko, Chih-Hung; Wang, Peng-Wei; Liu, Tai-Ling; Yen, Cheng-Fang; Chen, Cheng-Sheng; Yen, Ju-Yu

    2015-09-01

    The aim of the study was to compare reinforcement sensitivity between online and offline interaction. The effects of gender, Internet addiction, depression, and online gaming on the difference in reinforcement sensitivity between online and offline settings were also evaluated. The subjects were 2,258 college students (1,066 men and 1,192 women). They completed the Behavior Inhibition Scale and Behavior Approach Scale (BIS/BAS) according to their experience online and offline. Internet addiction, depression, and Internet activity type were evaluated simultaneously. The results showed that reinforcement sensitivity was lower when interacting online than when interacting offline. College students with Internet addiction showed a smaller decrease in BIS and BAS scores after getting online than did others. Higher reward and aversion sensitivity were associated with the risk of Internet addiction. Fun seeking online might contribute to the maintenance of Internet addiction. This suggests that reinforcement sensitivity changes after getting online and contributes to the risk and maintenance of Internet addiction. © 2014 Wiley Publishing Asia Pty Ltd.

  16. A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.

    Science.gov (United States)

    Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang

    2016-04-01

    Graph-based ranking has been extensively studied and frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from the raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all information to improve the ranking performance becomes a new challenging problem. Previous methods only utilize part of such information and attempt to rank graph nodes according to link-based methods, of which the ranking performances are severely affected by several well-known issues, e.g., over-fitting or high computational complexity, especially when the scale of graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit rich heterogeneous information of the graph to improve the ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach to parameterize the derived information within a unified semi-supervised learning framework (SSLF-GR), then simultaneously optimize the parameters and the ranking scores of graph nodes. Experiments on the real-world large-scale graphs demonstrate that our method significantly outperforms the algorithms that consider such graph information only partially.
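
For reference, the link-based baseline that the proposed SSP approach extends is classical PageRank, computed by power iteration. This sketch shows only that baseline; the semi-supervised parameterization of heterogeneous node/edge features (SSLF-GR) is not reproduced:

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10, max_iter=200):
    """Classical link-based PageRank by power iteration."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Row-stochastic transition matrix; dangling nodes jump uniformly
    P = np.where(out_deg[:, None] > 0,
                 adj / np.maximum(out_deg, 1)[:, None],
                 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1 - damping) / n + damping * (P.T @ r)
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r_new

# Tiny 4-node web graph: node 0 is linked to by all other nodes
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
scores = pagerank(A)   # node 0 ends up with the largest score
```

The scale problem the paper targets arises because each iteration touches every edge; its contribution is learning how to weight those edges from heterogeneous information rather than treating them uniformly as here.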

  17. Novel Approaches to Cellular Transplantation from the US Space Program

    Science.gov (United States)

    Pellis, Neal R.; Homick, Jerry L. (Technical Monitor)

    1999-01-01

    Research in the treatment of type I diabetes is entering a new era that takes advantage of our knowledge in an ever increasing variety of scientific disciplines. Some contributions may originate from very diverse sources, one of which is the Space Program at the National Aeronautics and Space Administration (NASA). The Space Program contributes to diabetes-related research in several treatment modalities. As an ongoing effort in the medical monitoring of personnel involved in space exploration activities, NASA and the extramural scientific community investigate strategies for noninvasive estimation of blood glucose levels. Part of the effort in the space protein crystal growth program is high-resolution structural analysis of insulin as a means to better understand its interaction with its receptor and with host immune components, and as a basis for rational design of a "better" insulin molecule. The Space Program is also developing laser technology for potential early cataract detection as well as noninvasive analyses for addressing preclinical diabetic retinopathy. Finally, NASA developed an exciting cell culture system that affords some unique advantages in the propagation and maintenance of mammalian cells in vitro. The cell culture system was originally designed to maintain cell suspensions with a minimum of hydrodynamic and mechanical shear while awaiting launch into microgravity. Currently the commercially available NASA bioreactor (Synthecon, Inc., Houston, TX) is used as a research tool in basic and applied cell biology. In recent years there has been continued strong interest in cellular transplantation as a treatment for type I diabetes. The advantages are the potential for successful long-term amelioration and a minimal risk of morbidity in the event of rejection of the transplanted cells. The pathway to successful application of this strategy is accompanied by several substantial hurdles: (1) isolation and propagation of a suitable uniform donor cell population; (2) management of

  18. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
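
The weighted binary matrix sampling step can be sketched as follows. A toy objective stands in for the actual cross-validated calibration error, and all names and parameters are illustrative, not taken from the VISSA code:

```python
import numpy as np

def wbms(weights, n_submodels, rng):
    """Draw a binary inclusion matrix: variable j enters each sub-model
    with probability weights[j], so well-performing variables appear in
    more sub-models (the WBMS idea)."""
    return (rng.random((n_submodels, len(weights))) < weights).astype(int)

def update_weights(B, errors, top_frac=0.1):
    """New weight of each variable = its frequency among the best sub-models,
    which shrinks the variable space toward the informative subset."""
    n_best = max(1, int(len(errors) * top_frac))
    best = np.argsort(errors)[:n_best]
    return B[best].mean(axis=0)

rng = np.random.default_rng(0)
n_var, informative = 10, {0, 1}
weights = np.full(n_var, 0.5)          # start: every variable is 50/50
for _ in range(20):
    B = wbms(weights, n_submodels=500, rng=rng)
    # Toy "RMSECV": heavy penalty for missing an informative variable,
    # small penalty per selected variable (favors parsimony)
    errors = np.array([2.0 * len(informative - set(np.flatnonzero(row)))
                       + 0.1 * row.sum() for row in B])
    weights = update_weights(B, errors)
```

After a few iterations the weights of the two informative variables saturate near 1 while the noise variables decay toward 0, mirroring the two rules highlighted in the abstract: the variable space shrinks at every step, and each new space outperforms the previous one.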

  19. Characterizing the performance of ecosystem models across time scales: A spectral analysis of the North American Carbon Program site-level synthesis

    Science.gov (United States)

    Michael C. Dietze; Rodrigo Vargas; Andrew D. Richardson; Paul C. Stoy; Alan G. Barr; Ryan S. Anderson; M. Altaf Arain; Ian T. Baker; T. Andrew Black; Jing M. Chen; Philippe Ciais; Lawrence B. Flanagan; Christopher M. Gough; Robert F. Grant; David Hollinger; R. Cesar Izaurralde; Christopher J. Kucharik; Peter Lafleur; Shugang Liu; Erandathie Lokupitiya; Yiqi Luo; J. William Munger; Changhui Peng; Benjamin Poulter; David T. Price; Daniel M. Ricciuto; William J. Riley; Alok Kumar Sahoo; Kevin Schaefer; Andrew E. Suyker; Hanqin Tian; Christina Tonitto; Hans Verbeeck; Shashi B. Verma; Weifeng Wang; Ensheng Weng

    2011-01-01

    Ecosystem models are important tools for diagnosing the carbon cycle and projecting its behavior across space and time. Despite the fact that ecosystems respond to drivers at multiple time scales, most assessments of model performance do not discriminate different time scales. Spectral methods, such as wavelet analyses, present an alternative approach that enables the...

  20. Flight Test Approach to Adaptive Control Research

    Science.gov (United States)

    Pavlock, Kate Maureen; Less, James L.; Larson, David Nils

    2011-01-01

    The National Aeronautics and Space Administration s Dryden Flight Research Center completed flight testing of adaptive controls research on a full-scale F-18 testbed. The validation of adaptive controls has the potential to enhance safety in the presence of adverse conditions such as structural damage or control surface failures. This paper describes the research interface architecture, risk mitigations, flight test approach and lessons learned of adaptive controls research.

  1. Space-time reference with an optical link

    International Nuclear Information System (INIS)

    Berceau, P; Hollberg, L; Taylor, M; Kahn, J

    2016-01-01

    We describe a concept for realizing a high performance space-time reference using a stable atomic clock in a precisely defined orbit and synchronizing the orbiting clock to high-accuracy atomic clocks on the ground. The synchronization would be accomplished using a two-way lasercom link between ground and space. The basic approach is to take advantage of the highest-performance cold-atom atomic clocks at national standards laboratories on the ground and to transfer that performance to an orbiting clock that has good stability and that serves as a ‘frequency-flywheel’ over time-scales of a few hours. The two-way lasercom link would also provide precise range information and thus precise orbit determination. With a well-defined orbit and a synchronized clock, the satellite could serve as a high-accuracy space-time reference, providing precise time worldwide, a valuable reference frame for geodesy, and independent high-accuracy measurements of GNSS clocks. Under reasonable assumptions, a practical system would be able to deliver picosecond timing worldwide and millimeter orbit determination, and could serve as an enabling subsystem for other proposed space-gravity missions, which are briefly reviewed. (paper)
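
The two-way synchronization principle is simple to state: with four timestamps from a ground-to-space-to-ground exchange over a symmetric path, the clock offset and the one-way delay separate cleanly. A minimal, idealized sketch (it ignores the relativistic and path-asymmetry corrections a real system must handle; the simulated numbers are illustrative):

```python
def two_way_offset(t1, t2, t3, t4):
    """Two-way time transfer over a symmetric link.

    Ground transmits at t1 (ground clock), space receives at t2 (space
    clock), space replies at t3 (space clock), ground receives at t4
    (ground clock). For equal up/down path delays:
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # space clock minus ground clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # one-way propagation time
    return offset, delay

# Simulated exchange: true offset 5 us, true one-way delay 1.3 ms
true_offset, true_delay = 5e-6, 1.3e-3
t1 = 0.0
t2 = t1 + true_delay + true_offset    # reception stamped by the offset space clock
t3 = t2 + 1e-3                        # onboard turnaround
t4 = (t3 - true_offset) + true_delay  # ground reception, back on ground time

offset, delay = two_way_offset(t1, t2, t3, t4)   # recovers both quantities
```

The same four timestamps thus yield both synchronization (the offset) and ranging (the delay times the speed of light), which is why the lasercom link can serve double duty for timing and orbit determination.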

  2. A new scaling approach for the mesoscale simulation of magnetic domain structures using Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Radhakrishnan, B., E-mail: radhakrishnb@ornl.gov; Eisenbach, M.; Burress, T.A.

    2017-06-15

    Highlights: • Developed new scaling technique for dipole–dipole interaction energy. • Developed new scaling technique for exchange interaction energy. • Used scaling laws to extend atomistic simulations to the micrometer length scale. • Demonstrated the transition from mono-domain to vortex magnetic structure. • Simulated domain wall width and transition length scale agree with experiments. Abstract: A new scaling approach has been proposed for the spin-exchange and the dipole–dipole interaction energy as a function of system size. The computed scaling laws are used in atomistic Monte Carlo simulations of magnetic moment evolution to predict the transition from a single domain to a vortex structure as the system size increases. The width of a 180° domain wall extracted from the simulated structures is in close agreement with experimentally measured values for an Fe–Si alloy. The transition size from a single domain to a vortex structure is also in close agreement with theoretically predicted and experimentally measured values for Fe.

  3. Exploring G Protein-Coupled Receptors (GPCRs) Ligand Space via Cheminformatics Approaches: Impact on Rational Drug Design

    Science.gov (United States)

    Basith, Shaherin; Cui, Minghua; Macalino, Stephani J. Y.; Park, Jongmi; Clavio, Nina A. B.; Kang, Soosung; Choi, Sun

    2018-01-01

    The primary goal of rational drug discovery is the identification of selective ligands which act on single or multiple drug targets to achieve the desired clinical outcome through the exploration of total chemical space. To identify such desired compounds, computational approaches are necessary in predicting their drug-like properties. G Protein-Coupled Receptors (GPCRs) represent one of the largest and most important integral membrane protein families. These receptors serve as increasingly attractive drug targets due to their relevance in the treatment of various diseases, such as inflammatory disorders, metabolic imbalances, cardiac disorders, cancer, monogenic disorders, etc. In the last decade, multitudes of three-dimensional (3D) structures were solved for diverse GPCRs, thus referring to this period as the “golden age for GPCR structural biology.” Moreover, accumulation of data about the chemical properties of GPCR ligands has garnered much interest toward the exploration of GPCR chemical space. Due to the steady increase in the structural, ligand, and functional data of GPCRs, several cheminformatics approaches have been implemented in its drug discovery pipeline. In this review, we mainly focus on the cheminformatics-based paradigms in GPCR drug discovery. We provide a comprehensive view on the ligand- and structure-based cheminformatics approaches which are best illustrated via GPCR case studies. Furthermore, an appropriate combination of ligand-based knowledge with structure-based ones, i.e., integrated approach, which is emerging as a promising strategy for cheminformatics-based GPCR drug design is also discussed. PMID:29593527

  4. The systems approach for applying artificial intelligence to space station automation (Invited Paper)

    Science.gov (United States)

    Grose, Vernon L.

    1985-12-01

    The progress of technology is marked by fragmentation -- dividing research and development into ever narrower fields of specialization. Ultimately, specialists know everything about nothing. And hope for integrating those slender slivers of specialty into a whole fades. Without an integrated, all-encompassing perspective, technology becomes applied in a lopsided and often inefficient manner. A decisionary model, developed and applied for NASA's Chief Engineer toward establishment of commercial space operations, can be adapted to the identification, evaluation, and selection of optimum application of artificial intelligence for space station automation -- restoring wholeness to a situation that is otherwise chaotic due to increasing subdivision of effort. Issues such as functional assignments for space station task, domain, and symptom modules can be resolved in a manner understood by all parties rather than just the person with assigned responsibility -- and ranked by overall significance to mission accomplishment. Ranking is based on the three basic parameters of cost, performance, and schedule. This approach has successfully integrated many diverse specialties in situations like worldwide terrorism control, coal mining safety, medical malpractice risk, grain elevator explosion prevention, offshore drilling hazards, and criminal justice resource allocation -- all of which would have otherwise been subject to "squeaky wheel" emphasis and support of decision-makers.

  5. An Efficient and Versatile Means for Assembling and Manufacturing Systems in Space

    Science.gov (United States)

    Dorsey, John T.; Doggett, William R.; Hafley, Robert A.; Komendera, Erik; Correll, Nikolaus; King, Bruce

    2012-01-01

    Within NASA Space Science, Exploration and the Office of Chief Technologist, there are Grand Challenges and advanced future exploration, science and commercial mission applications that could benefit significantly from large-span and large-area structural systems. Of particular and persistent interest to the Space Science community is the desire for large (in the 10- 50 meter range for main aperture diameter) space telescopes that would revolutionize space astronomy. Achieving these systems will likely require on-orbit assembly, but previous approaches for assembling large-scale telescope truss structures and systems in space have been perceived as very costly because they require high precision and custom components. These components rely on a large number of mechanical connections and supporting infrastructure that are unique to each application. In this paper, a new assembly paradigm that mitigates these concerns is proposed and described. A new assembly approach, developed to implement the paradigm, is developed incorporating: Intelligent Precision Jigging Robots, Electron-Beam welding, robotic handling/manipulation, operations assembly sequence and path planning, and low precision weldable structural elements. Key advantages of the new assembly paradigm, as well as concept descriptions and ongoing research and technology development efforts for each of the major elements are summarized.

  6. Space technology and robotics in school projects

    Science.gov (United States)

    Villias, Georgios

    2016-04-01

    Space-related educational activities is a very inspiring and attractive way to involve students into science courses, present them the variety of STEM careers that they can follow, while giving them at the same time the opportunity to develop various practical and communication skills necessary for their future professional development. As part of a large scale extracurricular course in Space Science, Space Technology and Robotics that has been introduced in our school, our students, divided in smaller groups of 3-4 students in each, try to understand the challenges that current and future space exploration is facing. Following a mixture of an inquiry-based learning methodology and hands-on practical activities related with constructions and experiments, students get a glimpse of the pre-mentioned fields. Our main goal is to gain practical knowledge and inspiration from the exciting field of Space, to attain an adequate level of team spirit and effective cooperation, while developing technical and research data-mining skills. We use the following two approaches: 1. Constructive (Technical) approach Designing and constructing various customized robotic machines, that will simulate the future space exploration vehicles and satellites needed to study the atmosphere, surface and subsurface of planets, moons or other planetary bodies of our solar system that have shown some promising indications for the existence of life, taking seriously into account their special characteristics and known existing conditions (like Mars, Titan, Europa & Enceladus). The STEM tools we use are the following: - LEGO Mindstorms: to construct rovers for surface exploration. - Hydrobots: an MIT's SeaPerch program for the construction of submarine semi-autonomous robots. - CanSats: Arduino-based microsatellites able to receive, record & transmit data. - Space balloons: appropriate for high altitude atmospheric measurements & photography. 2. Scientific approach Conducting interesting physics

  7. Operator space approach to steering inequality

    International Nuclear Information System (INIS)

    Yin, Zhi; Marciniak, Marcin; Horodecki, Michał

    2015-01-01

    In Junge and Palazuelos (2011 Commun. Math. Phys. 306 695–746) and Junge et al (2010 Commun. Math. Phys. 300 715–39) the operator space theory was applied to study bipartite Bell inequalities. The aim of the paper is to follow this line of research and use the operator space technique to analyze the steering scenario. We obtain a bipartite steering functional with unbounded largest violation of steering inequality, as well as constructing all ingredients explicitly. It turns out that the unbounded largest violation is obtained by a non maximally entangled state. Moreover, we focus on the bipartite dichotomic case where we construct a steering functional with unbounded largest violation of steering inequality. This phenomenon is different to the Bell scenario where only the bounded largest violation can be obtained by any bipartite dichotomic Bell functional. (paper)

  8. Soft Space Planning in Cities unbound

    DEFF Research Database (Denmark)

    Olesen, Kristian

    This paper analyses contemporary experiments in building governance capacity in new soft spaces in Denmark through processes of spatial strategy-making. The paper argues that new soft spaces are emerging in Danish spatial planning, which set out to promote more effective forms of strategic spatial planning, and how their preoccupation with promoting economic development at the expense of wider planning responsibilities supports contemporary neoliberal transformations of strategic spatial planning. The Danish case of soft space planning demonstrates how Danish soft spaces at subnational scales fail to fill in the gaps between formal planning structures and to provide the glue that binds formal scales of planning together, as promised in the soft space literature. This raises a number...

  9. Validation of a plant-wide phosphorus modelling approach with minerals precipitation in a full-scale WWTP

    DEFF Research Database (Denmark)

    Mbamba, Christian Kazadi; Flores Alsina, Xavier; Batstone, Damien John

    2016-01-01

    The focus of modelling in wastewater treatment is shifting from the single-unit to the plant-wide scale. Plant-wide modelling approaches provide opportunities to study the dynamics and interactions of different transformations in water and sludge streams. Towards developing more general and robust... ...approach describing ion speciation and ion pairing with kinetic multiple-minerals precipitation. Model performance is evaluated against data sets from a full-scale wastewater treatment plant, assessing the capability to describe water and sludge lines across the treatment process under steady-state operation... ...plant. Dynamic influent profiles were generated using a calibrated influent generator and were used to study the effect of long-term influent dynamics on plant performance. Model-based analysis shows that minerals precipitation strongly influences composition in the anaerobic digesters, but also impacts...

  10. Third International Scientific and Practical Conference «Space Travel is Approaching Reality» (Successful Event in Difficult Times)

    Directory of Open Access Journals (Sweden)

    Matusevych Tetiana

    2015-02-01

    The article analyzes the presentations of participants of the III International Scientific and Practical Conference «Space Travel – Approaching Reality», held on 6–7 November 2014 in Kharkiv, Ukraine.

  11. Large-scale Labeled Datasets to Fuel Earth Science Deep Learning Applications

    Science.gov (United States)

    Maskey, M.; Ramachandran, R.; Miller, J.

    2017-12-01

    Deep learning has revolutionized computer vision and natural language processing, with various algorithms scaled using high-performance computing. However, generic large-scale labeled datasets such as ImageNet are the fuel that drives the impressive accuracy of deep learning results. Large-scale labeled datasets already exist in domains such as medical science, but creating them in the Earth science domain is a challenge. While there are ways to apply deep learning using limited labeled datasets, there is a need in the Earth sciences to create large-scale labeled datasets for benchmarking and scaling deep learning applications. At the NASA Marshall Space Flight Center, we are using deep learning for a variety of Earth science applications where we have encountered the need for large-scale labeled datasets. We will discuss our approaches for creating such datasets and why these datasets are just as valuable as deep learning algorithms. We will also describe successful usage of these large-scale labeled datasets with our deep-learning-based applications.

  12. Particle-based simulation of charge transport in discrete-charge nano-scale systems: the electrostatic problem

    OpenAIRE

    Berti, Claudio; Gillespie, Dirk; Eisenberg, Robert S; Fiegna, Claudio

    2012-01-01

    The fast and accurate computation of the electric forces that drive the motion of charged particles at the nanometer scale represents a computational challenge. For this kind of system, where the discrete nature of the charges cannot be neglected, boundary element methods (BEM) represent a better approach than finite-difference/finite-element methods. In this article, we compare two different BEM approaches to a canonical electrostatic problem in a three-dimensional space with inhomogeneous...

  13. Space-Based Counterforce in the Second Nuclear Age

    Science.gov (United States)

    2015-04-01

    ...but also open wide the gates of the solar system to large-scale human exploration and development. Instead of offering only a dark age of... ...by the Scaled Composites SpaceShipOne vehicle that won the Ansari X-PRIZE in 2004 or Virgin Galactic's space tourism vehicle SpaceShipTwo. It was...

  14. A Database Approach to Distributed State Space Generation

    NARCIS (Netherlands)

    Blom, Stefan; Lisser, Bert; van de Pol, Jan Cornelis; Weber, M.

    2007-01-01

    We study distributed state space generation on a cluster of workstations. It is explained why state space partitioning by a global hash function is problematic when states contain variables from unbounded domains, such as lists or other recursive datatypes. Our solution is to introduce a database
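
    A minimal sketch of the database idea described above (hypothetical names, not the authors' implementation): unbounded recursive values in a state are interned into a shared value table, so each state becomes a fixed-size tuple of integer indices that hashes and partitions cheaply among workers.

```python
# Sketch: replace unbounded recursive values (lists, trees) in a state vector
# with compact integer indices from a value table, so states become fixed-size
# tuples that hash and partition cheaply. A hypothetical simplification of the
# paper's database approach, not the authors' code.

class ValueTable:
    """Assigns a stable integer index to each distinct (hashable) value."""
    def __init__(self):
        self._index = {}
        self._values = []

    def intern(self, value):
        if value not in self._index:
            self._index[value] = len(self._values)
            self._values.append(value)
        return self._index[value]

    def lookup(self, idx):
        return self._values[idx]

def compress_state(state, table):
    # A state is a tuple whose slots may hold unbounded values (here: tuples).
    return tuple(table.intern(v) for v in state)

def partition(compressed_state, n_workers):
    # Hashing the fixed-size index vector decides which worker owns the state.
    return hash(compressed_state) % n_workers

table = ValueTable()
s1 = compress_state((("a", "b"), ("c",)), table)
s2 = compress_state((("a", "b"), ("c",)), table)
assert s1 == s2  # identical states compress to the same index vector
```
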

  15. Geometric approach to evolution problems in metric spaces

    NARCIS (Netherlands)

    Stojković, Igor

    2011-01-01

    This PhD thesis contains four chapters in which research material is presented. The second chapter presents the extension of the product formulas for semigroups induced by convex functionals from the classical Hilbert space setting to the setting of general CAT(0) spaces. In the third chapter, the

  16. Evaluating public space pedestrian accessibility: a GIS approach

    NARCIS (Netherlands)

    Morar, T.; Bertolini, L.; Radoslav, R.

    2013-01-01

    Public spaces are sources of quality of life in neighborhoods. Seeking to help professionals and municipalities assess how well a public space can be used by the community it serves, this paper presents a GIS-based methodology for evaluating its pedestrian accessibility. The Romanian city of

  17. A Database Approach to Distributed State Space Generation

    NARCIS (Netherlands)

    Blom, Stefan; Lisser, Bert; van de Pol, Jan Cornelis; Weber, M.; Cerna, I.; Haverkort, Boudewijn R.H.M.

    2008-01-01

    We study distributed state space generation on a cluster of workstations. It is explained why state space partitioning by a global hash function is problematic when states contain variables from unbounded domains, such as lists or other recursive datatypes. Our solution is to introduce a database

  18. Space Geodetic Technique Co-location in Space: Simulation Results for the GRASP Mission

    Science.gov (United States)

    Kuzmicz-Cieslak, M.; Pavlis, E. C.

    2011-12-01

    The Global Geodetic Observing System (GGOS) places very stringent requirements on the accuracy and stability of future realizations of the International Terrestrial Reference Frame (ITRF): an origin definition at 1 mm or better at epoch and a temporal stability on the order of 0.1 mm/y, with similar numbers for the scale (0.1 ppb) and orientation components. These goals were derived from the requirements of Earth science problems that are currently the international community's highest priority. None of the geodetic positioning techniques can achieve this goal alone. This is due in part to the non-observability of certain attributes from a single technique. Another limitation is imposed by the extent and uniformity of the tracking network and the schedule of observational availability and number of suitable targets. The final limitation derives from the difficulty of "tying" the reference points of each technique at the same site to an accuracy that will support the GGOS goals. The future GGOS network will address decisively the ground-segment and, to a certain extent, the space-segment requirements. The JPL-proposed multi-technique mission GRASP (Geodetic Reference Antenna in Space) attempts to resolve the accurate tie between techniques by co-locating them in space, onboard a well-designed spacecraft equipped with GNSS receivers, an SLR retroreflector array, a VLBI beacon, and a DORIS system. Using the anticipated system performance for all four techniques at the time the GGOS network is completed (ca. 2020), we generated a number of simulated data sets for the development of a TRF. Our simulation studies examine the degree to which GRASP can improve the inter-technique "tie" issue compared to the classical approach, and the likely modus operandi for such a mission. The success of the examined scenarios is judged by the quality of the origin and scale definition of the resulting TRF.

  19. A new generic approach for estimating the concentrations of down-the-drain chemicals at catchment and national scale

    Energy Technology Data Exchange (ETDEWEB)

    Keller, V.D.J. [Centre for Ecology and Hydrology, Hydrological Risks and Resources, Maclean Building, Crowmarsh Gifford, Wallingford OX10 8BB (United Kingdom)]. E-mail: vke@ceh.ac.uk; Rees, H.G. [Centre for Ecology and Hydrology, Hydrological Risks and Resources, Maclean Building, Crowmarsh Gifford, Wallingford OX10 8BB (United Kingdom); Fox, K.K. [University of Lancaster (United Kingdom); Whelan, M.J. [Unilever Safety and Environmental Assurance Centre, Colworth (United Kingdom)

    2007-07-15

    A new generic approach for estimating chemical concentrations in rivers at catchment and national scales is presented. Domestic chemical loads in waste water are estimated using gridded population data. River flows are estimated by combining predicted runoff with topographically derived flow direction. Regional-scale exposure is characterised by two summary statistics: PEC_works, the average concentration immediately downstream of emission points, and PEC_area, the catchment-average chemical concentration. The method was applied to boron at national (England and Wales) and catchment (Aire-Calder) scales. Predicted concentrations were within 50% of measured mean values in the Aire-Calder catchment and in agreement with results from the GREAT-ER model. The concentration grids generated provide a picture of the spatial distribution of expected chemical concentrations at various scales, and can be used to identify areas of potentially high risk. - A new grid-based approach to predict spatially-referenced freshwater concentrations of domestic chemicals.
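
    The two summary statistics above can be illustrated with a simple dilution calculation (illustrative per-capita load and flows, not the study's calibrated values): the load discharged by a works is population times per-capita emission, and the concentration immediately downstream is load divided by river flow.

```python
# Sketch of the paper's two summary statistics under a simple dilution
# assumption: each works discharges population * per-capita load into the
# river, and concentration = load / flow. Illustrative numbers only, not the
# calibrated values used in the study.

PER_CAPITA_LOAD_G_DAY = 0.5   # hypothetical down-the-drain emission (g/cap/day)

def pec_at_works(population, flow_m3_day):
    """Concentration (mg/L) immediately downstream of one emission point."""
    load_mg_day = population * PER_CAPITA_LOAD_G_DAY * 1000.0
    return load_mg_day / (flow_m3_day * 1000.0)   # m3 -> L

def pec_area(works):
    """Catchment-average concentration over (population, flow) grid cells."""
    pecs = [pec_at_works(p, q) for p, q in works]
    return sum(pecs) / len(pecs)

# Two hypothetical emission points in the same catchment
cells = [(50_000, 200_000.0), (10_000, 500_000.0)]
print(round(pec_at_works(*cells[0]), 3))  # -> 0.125 (mg/L downstream of works 1)
```
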

  20. Innovative Approaches to Development and Ground Testing of Advanced Bimodal Space Power and Propulsion Systems

    International Nuclear Information System (INIS)

    Hill, T.; Noble, C.; Martinell, J.; Borowski, S.

    2000-01-01

    The last major development effort for nuclear power and propulsion systems ended in 1993. Currently, there is no initiative at either the National Aeronautics and Space Administration (NASA) or the U.S. Department of Energy (DOE) that requires the development of new nuclear power and propulsion systems. Studies continue to show nuclear technology as a strong technical candidate to lead the way toward human exploration of adjacent planets or to provide power for deep space missions, particularly a 15,000 lbf bimodal nuclear system with 115 kW power capability. The development of nuclear technology for space applications would require technology development in some areas and a major flight qualification program. The last major ground test facility considered for nuclear propulsion qualification was the U.S. Air Force/DOE Space Nuclear Thermal Propulsion Project. Seven years have passed since that effort, and the question remains the same: how to qualify nuclear power and propulsion systems for future space flight. It can be reasonably assumed that much of the nuclear testing required to qualify a nuclear system for space application will be performed at DOE facilities, as demonstrated by the Nuclear Rocket Engine Reactor Experiment (NERVA) and Space Nuclear Thermal Propulsion (SNTP) programs. The nuclear infrastructure to support testing in this country is aging and shrinking, though facilities still exist to support many of the technology development needs. By renewing efforts, an innovative approach to qualifying these systems through the use of existing facilities, either in the U.S. (DOE's Advanced Test Reactor, High Flux Irradiation Facility and the Contained Test Facility) or overseas, should be possible.

  1. Innovative Approaches to Development and Ground Testing of Advanced Bimodal Space Power and Propulsion Systems

    Energy Technology Data Exchange (ETDEWEB)

    Hill, T.; Noble, C.; Martinell, J. (INEEL); Borowski, S. (NASA Glenn Research Center)

    2000-07-14

    The last major development effort for nuclear power and propulsion systems ended in 1993. Currently, there is no initiative at either the National Aeronautics and Space Administration (NASA) or the U.S. Department of Energy (DOE) that requires the development of new nuclear power and propulsion systems. Studies continue to show nuclear technology as a strong technical candidate to lead the way toward human exploration of adjacent planets or to provide power for deep space missions, particularly a 15,000 lbf bimodal nuclear system with 115 kW power capability. The development of nuclear technology for space applications would require technology development in some areas and a major flight qualification program. The last major ground test facility considered for nuclear propulsion qualification was the U.S. Air Force/DOE Space Nuclear Thermal Propulsion Project. Seven years have passed since that effort, and the question remains the same: how to qualify nuclear power and propulsion systems for future space flight. It can be reasonably assumed that much of the nuclear testing required to qualify a nuclear system for space application will be performed at DOE facilities, as demonstrated by the Nuclear Rocket Engine Reactor Experiment (NERVA) and Space Nuclear Thermal Propulsion (SNTP) programs. The nuclear infrastructure to support testing in this country is aging and shrinking, though facilities still exist to support many of the technology development needs. By renewing efforts, an innovative approach to qualifying these systems through the use of existing facilities, either in the U.S. (DOE's Advanced Test Reactor, High Flux Irradiation Facility and the Contained Test Facility) or overseas, should be possible.

  2. Innovative Approaches to Development and Ground Testing of Advanced Bimodal Space Power and Propulsion Systems

    Energy Technology Data Exchange (ETDEWEB)

    Hill, Thomas Johnathan; Noble, Cheryl Ann; Noble, C.; Martinell, John Stephen; Borowski, S.

    2000-07-01

    The last major development effort for nuclear power and propulsion systems ended in 1993. Currently, there is no initiative at either the National Aeronautics and Space Administration (NASA) or the U.S. Department of Energy (DOE) that requires the development of new nuclear power and propulsion systems. Studies continue to show nuclear technology as a strong technical candidate to lead the way toward human exploration of adjacent planets or to provide power for deep space missions, particularly a 15,000 lbf bimodal nuclear system with 115 kW power capability. The development of nuclear technology for space applications would require technology development in some areas and a major flight qualification program. The last major ground test facility considered for nuclear propulsion qualification was the U.S. Air Force/DOE Space Nuclear Thermal Propulsion Project. Seven years have passed since that effort, and the question remains the same: how to qualify nuclear power and propulsion systems for future space flight. It can be reasonably assumed that much of the nuclear testing required to qualify a nuclear system for space application will be performed at DOE facilities, as demonstrated by the Nuclear Rocket Engine Reactor Experiment (NERVA) and Space Nuclear Thermal Propulsion (SNTP) programs. The nuclear infrastructure to support testing in this country is aging and shrinking, though facilities still exist to support many of the technology development needs. By renewing efforts, an innovative approach to qualifying these systems through the use of existing facilities, either in the U.S. (DOE's Advanced Test Reactor, High Flux Irradiation Facility and the Contained Test Facility) or overseas, should be possible.

  3. Modeling solvation effects in real-space and real-time within density functional approaches

    Energy Technology Data Exchange (ETDEWEB)

    Delgado, Alain [Istituto Nanoscienze - CNR, Centro S3, via Campi 213/A, 41125 Modena (Italy); Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear, Calle 30 # 502, 11300 La Habana (Cuba); Corni, Stefano; Pittalis, Stefano; Rozzi, Carlo Andrea [Istituto Nanoscienze - CNR, Centro S3, via Campi 213/A, 41125 Modena (Italy)

    2015-10-14

    The Polarizable Continuum Model (PCM) can be used in conjunction with Density Functional Theory (DFT) and its time-dependent extension (TDDFT) to simulate the electronic and optical properties of molecules and nanoparticles immersed in a dielectric environment, typically liquid solvents. In this contribution, we develop a methodology to account for solvation effects in real-space (and real-time) (TD)DFT calculations. The boundary element method is used to calculate the solvent reaction potential in terms of the apparent charges that spread over the van der Waals solute surface. In a real-space representation, this potential may exhibit a Coulomb singularity at grid points that are close to the cavity surface. We propose a simple approach to regularize such a singularity by using a set of spherical Gaussian functions to distribute the apparent charges. We have implemented the proposed method in the OCTOPUS code and present results for the solvation free energies and solvatochromic shifts of a representative set of organic molecules in water.

  4. A multi-scale spatial approach to address environmental effects of small hydropower development.

    Science.gov (United States)

    McManamay, Ryan A; Samu, Nicole; Kao, Shih-Chieh; Bevelhimer, Mark S; Hetrick, Shelaine C

    2015-01-01

    Hydropower development continues to grow worldwide in developed and developing countries. While the ecological and physical responses to dam construction have been well documented, translating this information into planning for hydropower development is extremely difficult. Very few studies have conducted environmental assessments to guide site-specific or widespread hydropower development. Herein, we propose a spatial approach for estimating environmental effects of hydropower development at multiple scales, as opposed to individual site-by-site assessments (e.g., environmental impact assessment). Because the complex, process-driven effects of future hydropower development may be uncertain or, at best, limited by available information, we invested considerable effort in describing novel approaches to represent environmental concerns using spatial data and in developing the spatial footprint of hydropower infrastructure. We then use two case studies in the US, one at the scale of the conterminous US and another within two adjoining river basins, to examine how environmental concerns can be identified and related to areas of varying energy capacity. We use combinations of reserve-design planning and multi-metric ranking to visualize tradeoffs among environmental concerns and potential energy capacity. Spatial frameworks, like the one presented, are not meant to replace more in-depth environmental assessments, but to identify information gaps and measure the sustainability of multi-development scenarios so as to inform policy decisions at the basin or national level. Most importantly, the approach should foster discussions among environmental scientists and stakeholders regarding solutions to optimize energy development and environmental sustainability.

  5. Pesticide fate at regional scale: Development of an integrated model approach and application

    Science.gov (United States)

    Herbst, M.; Hardelauf, H.; Harms, R.; Vanderborght, J.; Vereecken, H.

    As a result of agricultural practice many soils and aquifers are contaminated with pesticides. In order to quantify the side-effects of these anthropogenic impacts on groundwater quality at regional scale, a process-based, integrated model approach was developed. The Richards'-equation-based numerical model TRACE calculates the three-dimensional saturated/unsaturated water flow. For the modeling of regional-scale pesticide transport we linked TRACE with the plant module SUCROS and with 3DLEWASTE, a hybrid Lagrangian/Eulerian approach to solve the convection/dispersion equation. We used measurements, standard methods like pedotransfer functions, or parameters from literature to derive the model input for the process model. A first-step application of TRACE/3DLEWASTE to the 20 km² test area 'Zwischenscholle' for the period 1983-1993 reveals the behaviour of the pesticide isoproturon. The selected test area is characterised by intense agricultural use and shallow groundwater, resulting in a high vulnerability of the groundwater to pesticide contamination. The model results stress the importance of the unsaturated zone for the occurrence of pesticides in groundwater. Remarkable isoproturon concentrations in groundwater are predicted for locations with thin, layered and permeable soils. For four selected locations we used measured piezometric heads to validate predicted groundwater levels. In general, the model results are consistent and reasonable. Thus the developed integrated model approach is seen as a promising tool for the quantification of the impact of agricultural practice on groundwater quality.

  6. Human Space Exploration and Human Space Flight: Latency and the Cognitive Scale of the Universe

    Science.gov (United States)

    Lester, Dan; Thronson, Harley

    2011-01-01

    The role of telerobotics in space exploration as placing human cognition on other worlds is limited almost entirely by the speed of light, and the consequent communications latency that results from large distances. This latency is the time delay between the human brain at one end, and the telerobotic effector and sensor at the other end. While telerobotics and virtual presence are technologies that are rapidly becoming more sophisticated, with strong commercial interest on the Earth, this time delay, along with the neurological timescale of a human being, quantitatively defines the cognitive horizon for any locale in space. That is, how distant can an operator be from a robot and not be significantly impacted by latency? We explore that cognitive timescale of the universe, and consider the implications for telerobotics, human space flight, and participation by larger numbers of people in space exploration. We conclude that, with advanced telepresence, sophisticated robots could be operated with high cognition throughout a lunar hemisphere by astronauts within a station at an Earth-Moon L1 or L2 venue. Likewise, complex telerobotic servicing of satellites in geosynchronous orbit can be carried out from suitable terrestrial stations.
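
    The latency argument above rests on simple arithmetic: the round-trip signal time is twice the distance divided by the speed of light. A quick sketch with approximate distances:

```python
# Back-of-the-envelope latency figures behind the abstract's argument:
# round-trip signal time = 2 * distance / c. Distances are approximate.

C_KM_S = 299_792.458  # speed of light (km/s)

def round_trip_s(distance_km):
    return 2.0 * distance_km / C_KM_S

for name, d_km in [
    ("GEO satellite", 35_786),        # geosynchronous altitude
    ("Moon", 384_400),                # mean Earth-Moon distance
    ("Mars (closest)", 54_600_000),   # approximate closest approach
]:
    print(f"{name}: {round_trip_s(d_km):.3f} s round trip")
```

    The Moon's roughly 2.6 s round trip is at the edge of useful direct teleoperation from Earth, which is why the abstract argues for operators at an Earth-Moon L1/L2 station.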

  7. A new time-space accounting scheme to predict stream water residence time and hydrograph source components at the watershed scale

    Science.gov (United States)

    Takahiro Sayama; Jeffrey J. McDonnell

    2009-01-01

    Hydrograph source components and stream water residence time are fundamental behavioral descriptors of watersheds but, as yet, are poorly represented in most rainfall-runoff models. We present a new time-space accounting scheme (T-SAS) to simulate the pre-event and event water fractions, mean residence time, and spatial source of streamflow at the watershed scale. We...

  8. Tools in the orbit space approach to the study of invariant functions: rational parametrization of strata

    International Nuclear Information System (INIS)

    Sartori, G; Valente, G

    2003-01-01

    Functions which are equivariant or invariant under the transformations of a compact linear group G acting in a Euclidean space R^n can profitably be studied as functions defined in the orbit space of the group. The orbit space is the union of a finite set of strata, which are semialgebraic manifolds formed by the G-orbits with the same orbit-type. In this paper, we provide a simple recipe to obtain rational parametrizations of the strata. Our results can be easily exploited, in many physical contexts where the study of equivariant or invariant functions is important, for instance in the determination of patterns of spontaneous symmetry breaking, in the analysis of phase spaces and structural phase transitions (Landau theory), in equivariant bifurcation theory, in crystal field theory and in most areas where use is made of symmetry-adapted functions. A physically significant example of utilization of the recipe is given, related to spontaneous polarization in chiral biaxial liquid crystals, where the advantages with respect to previous heuristic approaches are shown

  9. Tools in the orbit space approach to the study of invariant functions: rational parametrization of strata

    Energy Technology Data Exchange (ETDEWEB)

    Sartori, G; Valente, G [Dipartimento di Fisica, Universita di Padova and INFN, Sezione di Padova, I-35131 Padova (Italy)

    2003-02-21

    Functions which are equivariant or invariant under the transformations of a compact linear group G acting in a Euclidean space R^n can profitably be studied as functions defined in the orbit space of the group. The orbit space is the union of a finite set of strata, which are semialgebraic manifolds formed by the G-orbits with the same orbit-type. In this paper, we provide a simple recipe to obtain rational parametrizations of the strata. Our results can be easily exploited, in many physical contexts where the study of equivariant or invariant functions is important, for instance in the determination of patterns of spontaneous symmetry breaking, in the analysis of phase spaces and structural phase transitions (Landau theory), in equivariant bifurcation theory, in crystal field theory and in most areas where use is made of symmetry-adapted functions. A physically significant example of utilization of the recipe is given, related to spontaneous polarization in chiral biaxial liquid crystals, where the advantages with respect to previous heuristic approaches are shown.

  10. Visual coherence for large-scale line-plot visualizations

    KAUST Repository

    Muigg, Philipp

    2011-06-01

    Displaying a large number of lines within a limited amount of screen space is a task that is common to many different classes of visualization techniques such as time-series visualizations, parallel coordinates, link-node diagrams, and phase-space diagrams. This paper addresses the challenging problems of cluttering and overdraw inherent to such visualizations. We generate a 2x2 tensor field during line rasterization that encodes the distribution of line orientations through each image pixel. Anisotropic diffusion of a noise texture is then used to generate a dense, coherent visualization of line orientation. In order to represent features of different scales, we employ a multi-resolution representation of the tensor field. The resulting technique can easily be applied to a wide variety of line-based visualizations. We demonstrate this for parallel coordinates, a time-series visualization, and a phase-space diagram. Furthermore, we demonstrate how to integrate a focus+context approach by incorporating a second tensor field. Our approach achieves interactive rendering performance for large data sets containing millions of data items, due to its image-based nature and ease of implementation on GPUs. Simulation results from computational fluid dynamics are used to evaluate the performance and usefulness of the proposed method. © 2011 The Author(s).
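
    A minimal sketch of the tensor-accumulation step described above (naive rasterization, hypothetical function names; the anisotropic-diffusion rendering stage is omitted): each line segment adds the outer product of its unit direction to the 2x2 tensor of every pixel it crosses.

```python
import numpy as np

# Sketch of the per-pixel orientation tensor from the abstract: every line
# segment crossing a pixel contributes the outer product of its unit
# direction, d d^T, to that pixel's 2x2 tensor. Naive point sampling stands
# in for proper line rasterization; names are ours, not the authors'.

def accumulate_tensors(segments, width, height):
    T = np.zeros((height, width, 2, 2))
    for (x0, y0), (x1, y1) in segments:
        d = np.array([x1 - x0, y1 - y0], dtype=float)
        n = np.linalg.norm(d)
        if n == 0:
            continue
        d /= n
        # Sample points along the segment and deposit d d^T at each pixel.
        for t in np.linspace(0.0, 1.0, int(n) + 2):
            x = int(round(x0 + t * (x1 - x0)))
            y = int(round(y0 + t * (y1 - y0)))
            if 0 <= x < width and 0 <= y < height:
                T[y, x] += np.outer(d, d)
    return T

# A purely horizontal segment: its pixels get a tensor aligned with the x axis.
T = accumulate_tensors([((1, 2), (6, 2))], 8, 4)
print(T[2, 3])  # nonzero xx component, zero yy and off-diagonal components
```
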

  11. Visual coherence for large-scale line-plot visualizations

    KAUST Repository

    Muigg, Philipp; Hadwiger, Markus; Doleisch, Helmut; Grö ller, Eduard M.

    2011-01-01

    Displaying a large number of lines within a limited amount of screen space is a task that is common to many different classes of visualization techniques such as time-series visualizations, parallel coordinates, link-node diagrams, and phase-space diagrams. This paper addresses the challenging problems of cluttering and overdraw inherent to such visualizations. We generate a 2x2 tensor field during line rasterization that encodes the distribution of line orientations through each image pixel. Anisotropic diffusion of a noise texture is then used to generate a dense, coherent visualization of line orientation. In order to represent features of different scales, we employ a multi-resolution representation of the tensor field. The resulting technique can easily be applied to a wide variety of line-based visualizations. We demonstrate this for parallel coordinates, a time-series visualization, and a phase-space diagram. Furthermore, we demonstrate how to integrate a focus+context approach by incorporating a second tensor field. Our approach achieves interactive rendering performance for large data sets containing millions of data items, due to its image-based nature and ease of implementation on GPUs. Simulation results from computational fluid dynamics are used to evaluate the performance and usefulness of the proposed method. © 2011 The Author(s).

  12. Large Scale Emerging Properties from Non Hamiltonian Complex Systems

    Directory of Open Access Journals (Sweden)

    Marco Bianucci

    2017-06-01

    The concept of "large scale" obviously depends on the phenomenon we are interested in. For example, in the field of the foundations of thermodynamics from microscopic dynamics, the large spatial and time scales are on the order of fractions of millimetres and microseconds, respectively, or less, and are defined relative to the spatial and time scales of the microscopic systems. In large-scale oceanography or global climate dynamics, the scales of interest are on the order of thousands of kilometres in space and many years in time, and are compared to the local and daily/monthly scales of atmosphere and ocean dynamics. In all cases a Zwanzig projection approach is, at least in principle, an effective tool for obtaining a class of universal smooth "large-scale" dynamics for the few degrees of freedom of interest, starting from the complex dynamics of the whole (usually many-degrees-of-freedom) system. The projection approach leads to a very complex calculus with differential operators, which is drastically simplified when the basic dynamics of the system of interest is Hamiltonian, as happens in foundations-of-thermodynamics problems. However, in geophysical fluid dynamics, biology, and most physical problems, the fundamental building-block equations of motion have a non-Hamiltonian structure. Thus, to continue to apply the useful projection approach in these cases as well, we exploit the generalization of the Hamiltonian formalism given by the Lie algebra of dissipative differential operators. In this way, we are able to deal analytically with the series of differential operators stemming from the projection approach applied to these general cases. We then apply this formalism to obtain some relevant results concerning the statistical properties of the El Niño Southern Oscillation (ENSO).

  13. State-space approach to evaluate spatial variability of field measured soil water status along a line transect in a volcanic-vesuvian soil

    Directory of Open Access Journals (Sweden)

    A. Comegna

    2010-12-01

    Unsaturated hydraulic properties and their spatial variability are today analyzed in order to properly use mathematical models developed to simulate water flow and solute movement in field-scale soils. Many studies have shown that observations of soil hydraulic properties should not be considered purely random, given that they possess a structure which may be described by means of stochastic processes. The techniques used for analyzing such a structure have essentially been based either on the theory of regionalized variables or, to a lesser extent, on the analysis of time series. This work applies the time-series approach to a study of the pressure head h and water content θ, which characterize soil water status, in the space-time domain. The data were recorded in the open field during a controlled drainage process, evaporation being prevented, along a 50 m transect in a volcanic Vesuvian soil. The isotropy hypothesis is empirically tested, and the autocorrelation function (ACF) and partial autocorrelation function (PACF) are then used to identify and estimate an ARMA(1,1) statistical model for the analyzed series and an AR(1) model for the extracted signal. Relations with a state-space model are investigated, and a bivariate AR(1) model is fitted. The simultaneous relations between θ and h are considered and estimated. The results are of value for sampling strategies and should encourage wider use of time- and space-series analysis.
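
    The identification step can be sketched on synthetic data (not the field measurements used in the study): for a zero-mean AR(1) process x_t = φ·x_{t-1} + e_t, the theoretical ACF is ρ(k) = φ^k, so the lag-1 autocorrelation estimates φ.

```python
import numpy as np

# Sketch of the ACF-based identification described in the abstract: the lag-1
# autocorrelation of a zero-mean series identifies the AR(1) coefficient,
# since for x_t = phi * x_{t-1} + e_t the ACF is rho(k) = phi**k.
# Synthetic data, not the measured pressure-head or water-content series.

rng = np.random.default_rng(0)

def simulate_ar1(phi, n, sigma=1.0):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

def acf(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

series = simulate_ar1(phi=0.7, n=5000)
phi_hat = acf(series, 1)
print(f"estimated AR(1) coefficient: {phi_hat:.2f}")  # close to the true 0.7
```
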

  14. Towards a More Biologically-meaningful Climate Characterization: Variability in Space and Time at Multiple Scales

    Science.gov (United States)

    Christianson, D. S.; Kaufman, C. G.; Kueppers, L. M.; Harte, J.

    2013-12-01

    fine-spatial scales (sub-meter to 10-meter) shows greater temperature variability with warmer mean temperatures. This is inconsistent with the inherent assumption made in current species distribution models that fine-scale variability is static, implying that current projections of future species ranges may be biased -- the direction and magnitude requiring further study. While we focus our findings on the cross-scaling characteristics of temporal and spatial variability, we also compare the mean-variance relationship between 1) experimental climate manipulations and observed conditions and 2) temporal versus spatial variance, i.e., variability in a time-series at one location vs. variability across a landscape at a single time. The former informs the rich debate concerning the ability to experimentally mimic a warmer future. The latter informs space-for-time study design and analyses, as well as species persistence via a combined spatiotemporal probability of suitable future habitat.
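    The temporal-versus-spatial variance comparison mentioned above can be made concrete: for a space-time temperature array, compare the variance of one site's time series with the variance across sites at a single time. A minimal numpy sketch on synthetic data (the field, sizes, and scales are invented for illustration, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic space-time temperature field: 365 daily values at 50 sites.
# Site means vary across the landscape; each day adds a shared weather
# anomaly plus site-local noise.
site_means = rng.normal(10.0, 2.0, size=50)        # deg C, landscape gradient
daily_anom = rng.normal(0.0, 3.0, size=(365, 1))   # shared weather signal
local_noise = rng.normal(0.0, 1.0, size=(365, 50))
temps = site_means + daily_anom + local_noise      # shape (365, 50)

# Temporal variance: one location through time (a column of the array)
temporal_var = temps[:, 0].var(ddof=1)

# Spatial variance: across the landscape at a single time (a row)
spatial_var = temps[180, :].var(ddof=1)

print(round(temporal_var, 2), round(spatial_var, 2))
```

Here the shared daily anomaly inflates the temporal variance but cancels out of any single-day spatial snapshot, which is one reason space-for-time substitutions need care.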

  15. A Novel Chaotic Particle Swarm Optimization Algorithm for Parking Space Guidance

    Directory of Open Access Journals (Sweden)

    Na Dong

    2016-01-01

    Full Text Available An evolutionary approach to parking space guidance based on a novel Chaotic Particle Swarm Optimization (CPSO) algorithm is proposed. In the newly proposed CPSO algorithm, chaotic dynamics is incorporated into the position-updating rules of Particle Swarm Optimization to improve the diversity of solutions and to avoid becoming trapped in local optima. This novel approach, which combines the strengths of Particle Swarm Optimization and chaotic dynamics, is then applied to the route optimization (RO) problem in parking lots, an important issue in the management systems of large-scale parking lots. It is used to find optimized paths between any source and destination nodes in the route network. Route optimization problems based on real parking lots are introduced for analysis, and the effectiveness and practicability of this novel optimization algorithm for parking space guidance are verified through the application results.
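    A common way to combine chaotic dynamics with PSO is to replace the uniform random coefficients in the velocity update with logistic-map sequences. The sketch below assumes that variant and is illustrative only, not the paper's exact CPSO; for a real parking-lot route network the sphere test function would be replaced by a path-cost objective.

```python
import numpy as np

def cpso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal chaotic-PSO sketch: logistic-map sequences stand in for the
    uniform random draws r1, r2 in the standard velocity update."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    z = rng.uniform(0.1, 0.9, (n_particles, dim))   # chaotic state
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)          # logistic map, chaotic at r = 4
        r1, r2 = z, 4.0 * z * (1.0 - z)  # two correlated chaotic draws
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Demo on the sphere function; the optimum is at the origin
best, val = cpso(lambda p: float(np.sum(p ** 2)), dim=3)
print(val)
```

Swapping the sphere objective for a route-cost function over the parking-lot graph would turn this into the RO setting the abstract describes.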

  16. Space Station fluid management logistics

    Science.gov (United States)

    Dominick, Sam M.

    1990-01-01

    Viewgraphs and discussion on space station fluid management logistics are presented. Topics covered include: fluid management logistics - issues for Space Station Freedom evolution; current fluid logistics approach; evolution of Space Station Freedom fluid resupply; launch vehicle evolution; ELV logistics system approach; logistics carrier configuration; expendable fluid/propellant carrier description; fluid carrier design concept; logistics carrier orbital operations; carrier operations at space station; summary/status of orbital fluid transfer techniques; Soviet progress tanker system; and Soviet propellant resupply system observations.

  17. Non-Abelian gauge field theory in scale relativity

    International Nuclear Information System (INIS)

    Nottale, Laurent; Célérier, Marie-Noëlle; Lehner, Thierry

    2006-01-01

    Gauge field theory is developed in the framework of scale relativity. In this theory, space-time is described as a nondifferentiable continuum, which implies it is fractal, i.e., explicitly dependent on internal scale variables. Owing to the principle of relativity, which has been extended to scales, these scale variables can themselves become functions of the space-time coordinates. Therefore, a coupling is expected between displacements in the fractal space-time and the transformations of these scale variables. In previous works, an Abelian gauge theory (electromagnetism) was derived as a consequence of this coupling for global dilations and/or contractions. Here we consider more general transformations of the scale variables, taking into account separate dilations for each of them, which yield non-Abelian gauge theories. We identify these transformations with the usual gauge transformations. The gauge fields naturally appear as a new geometric contribution to the total variation of the action involving these scale variables, while the gauge charges emerge as the generators of the scale transformation group. A generalized action is identified with the scale-relativistic invariant. The gauge charges are the conserved quantities, conjugate to the scale variables through the action, which find their origin in the symmetries of the "scale-space." We thus recover, in a geometric way, the expression for the covariant derivative of gauge theory. Adding the requirement that under scale transformations the fermion multiplets and the boson fields transform such that the derived Lagrangian remains invariant, we obtain gauge theories as a consequence of scale symmetries issued from a geometric space-time description.

  18. Scaling approach in predicting the seatbelt loading and kinematics of vulnerable occupants: How far can we go?

    Science.gov (United States)

    Nie, Bingbing; Forman, Jason L; Joodaki, Hamed; Wu, Taotao; Kent, Richard W

    2016-09-01

    Occupants with extreme body size and shape, such as small females or the obese, have been reported to sustain a high risk of injury in motor vehicle crashes (MVCs). Dimensional scaling approaches are widely used in injury biomechanics research, based on the assumption of geometrical similarity; however, their scope of application has never been quantified. The objective of this study is to determine the valid range of scaling approaches in predicting the impact response of occupants, with a focus on vulnerable populations. The present analysis was based on a data set of 60 previously reported frontal crash tests in the same sled buck, representing a typical mid-size passenger car. The tests included two categories of human surrogates: 9 postmortem human surrogates (PMHS) of different anthropometries (stature range: 147-189 cm; weight range: 27-151 kg) and 5 anthropomorphic test devices (ATDs). The impact response considered included the restraint loads and the kinematics of multiple body segments. For each category of human surrogate, a mid-size occupant was selected as a baseline and its impact response was scaled to each other subject based on either body mass (body shape) or stature (overall body size). To identify the valid range of the scaling approach, the scaled response was compared to the experimental results using assessment scores for the peak value, peak timing (the time at which the peak value occurred), and the overall curve shape, ranging from 0 (extremely poor) to 1 (perfect match). Scores of 0.7 to 0.8 and 0.8 to 1.0 indicate fair and acceptable prediction, respectively. For both ATDs and PMHS, the scaling factor derived from body mass proved an overall good predictor of the peak timing for the shoulder belt (0.868, 0.829) and the lap belt (0.858, 0.774), and of the peak value of the lap belt force (0.796, 0.869). Scaled kinematics based on body stature provided fair or acceptable prediction of the overall head
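    Mass-based scaling factors in this kind of study typically follow the classical equal-density, equal-modulus assumption: length scales as the cube root of the mass ratio, time like length, and force like length squared. A hedged sketch of applying such factors to a baseline restraint-load history (the half-sine pulse and the 76 kg reference mass are hypothetical, not values from the paper):

```python
import numpy as np

def mass_scale_factors(m_subject, m_ref):
    """Equal-density geometric scaling: length ~ (mass ratio)^(1/3),
    time ~ length, force ~ length^2."""
    lam_L = (m_subject / m_ref) ** (1.0 / 3.0)
    return {"length": lam_L, "time": lam_L, "force": lam_L ** 2}

def scale_force_history(t_ref, F_ref, m_subject, m_ref):
    """Scale a baseline belt-force history to a subject of different mass."""
    lam = mass_scale_factors(m_subject, m_ref)
    return t_ref * lam["time"], F_ref * lam["force"]

# Hypothetical baseline: 76 kg mid-size occupant, half-sine belt force pulse
t = np.linspace(0.0, 0.1, 101)            # s
F = 8000.0 * np.sin(np.pi * t / 0.1)      # N
# Predict the pulse for a lighter (48 kg) subject
t_s, F_s = scale_force_history(t, F, m_subject=48.0, m_ref=76.0)
print(round(F_s.max(), 0), round(t_s.max(), 4))
```

Comparing such a scaled curve against a measured one, via scores on peak value, peak timing, and curve shape, is the kind of assessment the abstract describes.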

  19. Digital Cellular Solid Pressure Vessels: A Novel Approach for Human Habitation in Space

    Science.gov (United States)

    Cellucci, Daniel; Jenett, Benjamin; Cheung, Kenneth C.

    2017-01-01

    It is widely assumed that human exploration beyond Earth's orbit will require vehicles capable of providing long-duration habitats that simulate an Earth-like environment - consistent artificial gravity, a breathable atmosphere, and sufficient living space - while requiring the minimum possible launch mass. This paper examines how the qualities of digital cellular solids - high performance, repairability, reconfigurability, and tunable mechanical response - allow long-duration habitat objectives to be accomplished at a fraction of the mass required by traditional structural technologies. To illustrate the impact digital cellular solids could make as a replacement for conventional habitat subsystems, we compare recently proposed deep space habitat structural systems with a digital cellular solids pressure vessel design consisting of a carbon fiber reinforced polymer (CFRP) digital cellular solid cylindrical framework lined with an ultra-high molecular weight polyethylene (UHMWPE) skin. We use the analytical treatment of a cellular solid with linear specific-modulus scaling to find the minimum-mass pressure vessel for a structure, and find that, for equivalent habitable volume and appropriate safety factors, digital cellular solids provide clear methods for producing structures that are not only repairable and reconfigurable, but also higher performance than their conventionally manufactured counterparts.
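    The minimum-mass argument for a thin-walled cylindrical shell can be sketched from the hoop-stress relation σ = pR/t, which makes shell mass proportional to p·V·ρ/σ regardless of radius. The numbers below (allowable stress, density, geometry, safety factor) are assumed placeholders for illustration, not the paper's design values:

```python
import math

def cylinder_shell_mass(p, R, L, sigma_allow, rho, sf=2.0):
    """Thin-wall estimate: hoop stress sigma = p*R/t sets the wall thickness,
    so shell mass m = rho * 2*pi*R*L*t = 2 * sf * p * V * rho / sigma_allow."""
    t = sf * p * R / sigma_allow          # wall thickness from hoop stress
    return rho * 2.0 * math.pi * R * L * t

# Hypothetical habitat: 1 atm internal pressure, 2 m radius, 8 m length,
# CFRP-like material properties (assumed, not from the paper)
p = 101_325.0          # Pa
R, L = 2.0, 8.0        # m
sigma = 600e6          # Pa, assumed allowable stress
rho = 1600.0           # kg/m^3, assumed density
m = cylinder_shell_mass(p, R, L, sigma, rho)
print(round(m, 1))     # kg
```

Because the mass depends only on p, enclosed volume V, and the material's ρ/σ ratio, improving specific strength (as the cellular-solid framework aims to) lowers the pressure-shell mass for any equivalent habitable volume.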

  20. THE PRINCIPLES AND METHODS OF INFORMATION AND EDUCATIONAL SPACE SEMANTIC STRUCTURING BASED ON ONTOLOGIC APPROACH REALIZATION

    Directory of Open Access Journals (Sweden)

    Yurij F. Telnov

    2014-01-01

    Full Text Available This article reveals the principles of semantic structuring of the information and educational space of knowledge objects and scientific-educational services, using methods of ontological engineering. The novelty of the proposed approach lies in interfacing a content ontology with an ontology of scientific and educational services, which allows effective composition of services and knowledge objects according to models of professional competences and the requirements of trainees. As a result of applying these methods of semantic structuring of the information and educational space, educational institutions can make integrated use of diverse distributed scientific and educational content for carrying out scientific research, methodical development, and training.