WorldWideScience

Sample records for scale space approach

  1. Truncated conformal space approach to scaling Lee-Yang model

    International Nuclear Information System (INIS)

    Yurov, V.P.; Zamolodchikov, Al.B.

    1989-01-01

    A numerical approach to 2D relativistic field theories is suggested. Considering a field theory model as an ultraviolet conformal field theory perturbed by a suitable relevant scalar operator, one studies it in finite volume (on a circle). The perturbed Hamiltonian acts in the conformal field theory space of states and its matrix elements can be extracted from the conformal field theory. Truncation of the space at a reasonable level results in a finite-dimensional problem for numerical analysis. The nonunitary field theory with the ultraviolet region controlled by the minimal conformal theory M(2/5) is studied in detail. 9 refs.; 17 figs
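
A compact numerical illustration of the truncation idea (a hedged sketch, not the authors' implementation; the spectrum and matrix elements below are random stand-ins for the true M(2/5) data):

```python
# Toy truncated-space diagonalization. A real TCSA run would use the scaling
# dimensions and structure constants of the Lee-Yang CFT; here they are
# random stand-ins, so only the mechanics of the method are shown.
import numpy as np

rng = np.random.default_rng(0)
n_states = 200                    # states kept after level truncation
R = 1.0                           # circumference of the circle
lam = 0.1                         # coupling of the relevant perturbation
c = -22.0 / 5.0                   # central charge of the Lee-Yang CFT

dims = np.sort(rng.uniform(0.0, 5.0, n_states))       # stand-in scaling dims
H0 = np.diag(2.0 * np.pi / R * (dims - c / 12.0))     # conformal Hamiltonian

A = rng.normal(size=(n_states, n_states))
V = (A + A.T) / 2.0               # Hermitian stand-in for CFT matrix elements

levels = np.linalg.eigh(H0 + lam * V)[0]              # finite-dim. problem
print("lowest levels in the truncated space:", levels[:5])
```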

  2. An Autonomous Sensor Tasking Approach for Large Scale Space Object Cataloging

    Science.gov (United States)

    Linares, R.; Furfaro, R.

    The field of Space Situational Awareness (SSA) has progressed over the last few decades with new sensors coming online, the development of new approaches for making observations, and new algorithms for processing them. Although there has been success in the development of new approaches, a missing piece is the translation of SSA goals into sensor and resource allocation, otherwise known as the Sensor Management Problem (SMP). This work solves the SMP using an artificial intelligence approach called Deep Reinforcement Learning (DRL). Stable methods for training DRL approaches based on neural networks exist, but most of these approaches are not suitable for high-dimensional systems. The Asynchronous Advantage Actor-Critic (A3C) method is a recently developed and effective approach for high-dimensional systems, and this work leverages these results and applies the approach to decision making in SSA. The decision space for SSA problems can be high dimensional, even for the tasking of a single telescope. Since the number of space objects (SOs) in space is relatively high, each sensor will have a large number of possible actions at a given time. Therefore, efficient DRL approaches are required when solving the SMP for SSA. This work develops an A3C-based method for DRL applied to SSA sensor tasking. One of the key benefits of DRL approaches is the ability to handle high-dimensional data. For example, DRL methods have been applied to image processing for autonomous driving, where a 256x256 RGB image has 196,608 input values (256*256*3) and is therefore very high dimensional, yet deep learning approaches routinely take such images as inputs. Therefore, when applied to the whole catalog, the DRL approach offers the ability to solve this high-dimensional problem. This work has the potential to, for the first time, solve the non-myopic sensor tasking problem for the whole SO catalog (over 22,000 objects), providing a truly revolutionary result.
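
To make the actor-critic mechanics concrete, here is a minimal one-step actor-critic toy for tasking a single telescope. It is a sketch only: the paper uses A3C with deep networks and a physical observation model, whereas this stand-in uses linear policy/value functions and an invented reward.

```python
# Toy one-step actor-critic for telescope tasking (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n_objects, n_features = 50, 8     # candidate space objects, state features
theta = np.zeros((n_features, n_objects))    # actor (policy) weights
w = np.zeros(n_features)                     # critic (value) weights
alpha, gamma = 0.01, 0.99

state = rng.normal(size=n_features)
for step in range(5000):
    logits = state @ theta
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                     # softmax over candidate objects
    action = rng.choice(n_objects, p=probs)
    reward = float(probs[action] < 0.1)      # stand-in: reward rare choices
    next_state = rng.normal(size=n_features)
    td = reward + gamma * (next_state @ w) - state @ w   # one-step advantage
    w += alpha * td * state                  # critic update
    grad_logits = -probs
    grad_logits[action] += 1.0               # d log pi(a|s) / d logits
    theta += alpha * td * np.outer(state, grad_logits)   # actor update
    state = next_state
```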

  3. Inverse scale space decomposition

    DEFF Research Database (Denmark)

    Schmidt, Marie Foged; Benning, Martin; Schönlieb, Carola-Bibiane

    2018-01-01

    We investigate the inverse scale space flow as a method for decomposing data into generalised singular vectors. We show that the inverse scale space flow, based on convex, even, and positively one-homogeneous regularisation functionals, can decompose data represented by the application of a forward operator to a linear combination of generalised singular vectors into its individual singular vectors. We verify that for this decomposition to hold true, two additional conditions on the singular vectors are sufficient: orthogonality in the data space and inclusion of partial sums of the subgradients of the singular vectors in the subdifferential of the regularisation functional at zero. We also address the converse question of when the inverse scale space flow returns a generalised singular vector given that the initial data is arbitrary (and therefore not necessarily in the range...
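
A discrete realisation of the inverse scale space idea for the one-homogeneous functional J(u) = ||u||_1 is the linearized Bregman iteration, sketched below on a toy sparse-recovery problem. This illustrates the flow's behaviour (large components of the solution enter first), not the paper's analysis; all parameters are invented.

```python
# Linearized Bregman iteration: a discrete inverse scale space flow for
# J(u) = ||u||_1 on a toy compressed-sensing problem.
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 60, 200, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
u_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
u_true[support] = 3.0 * rng.normal(size=k)
f = A @ u_true

def shrink(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

v = np.zeros(n)            # dual variable: accumulates back-projected residuals
u = np.zeros(n)
delta, tau = 1.0, 0.1      # step sizes; delta plays the role of scale-space time
for _ in range(5000):
    v += tau * (A.T @ (f - A @ u))   # large singular components enter first
    u = delta * shrink(v, 1.0)

print("relative error:", np.linalg.norm(u - u_true) / np.linalg.norm(u_true))
```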

  4. Structural health monitoring using DOG multi-scale space: an approach for analyzing damage characteristics

    Science.gov (United States)

    Guo, Tian; Xu, Zili

    2018-03-01

    Measurement noise is inevitable in practice; thus, it is difficult to identify defects, cracks or damage in a structure while simultaneously suppressing noise. In this work, a novel method is introduced to detect multiple damage sites in noisy environments. Based on multi-scale space analysis for discrete signals, a method for extracting damage characteristics from the measured displacement mode shape is illustrated. Moreover, the proposed method incorporates a data fusion algorithm to further eliminate interference from measurement noise. The effectiveness of the method is verified numerically and experimentally on different structural types. The results demonstrate two advantages of the proposed method. First, damage features are extracted from the difference of the multi-scale representations, so that the interference of noise amplification is avoided. Second, the data fusion technique provides a global decision, which retains the damage features while maximally eliminating the uncertainty. Monte Carlo simulations validate that the proposed method has a higher accuracy in damage detection.
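
A minimal sketch of the difference-of-Gaussians (DoG) step on a synthetic mode shape. The signal, defect size and noise level are invented, and the paper's data fusion stage is omitted:

```python
# DoG multi-scale sketch: band-pass a measured mode shape at several scales
# and flag the strongest localised response as a candidate damage site.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 500)
mode_shape = np.sin(np.pi * x)                 # first bending mode of a beam
mode_shape[240:249] -= 5e-3 * np.hanning(9)    # small local stiffness loss
noisy = mode_shape + 2e-4 * rng.normal(size=x.size)

for s in (2, 4, 8):
    dog = gaussian_filter1d(noisy, s) - gaussian_filter1d(noisy, 2 * s)
    peak = np.argmax(np.abs(dog))
    print(f"scale {s}: strongest band-pass response at x = {x[peak]:.3f}")
```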

  5. A scale space approach for unsupervised feature selection in mass spectra classification for ovarian cancer detection.

    Science.gov (United States)

    Ceccarelli, Michele; d'Acierno, Antonio; Facchiano, Angelo

    2009-10-15

    Mass spectrometry spectra, widely used in proteomics studies as a screening tool for protein profiling and for detecting discriminatory signals, are high-dimensional data. A large number of local maxima (a.k.a. peaks) have to be analyzed as part of computational pipelines aimed at the realization of efficient predictive and screening protocols. With such high data dimensionality and limited sample sizes, the risk of over-fitting and selection bias is pervasive. Therefore, the development of bioinformatics methods based on unsupervised feature extraction can lead to general tools applicable to several fields of predictive proteomics. We propose a method for feature selection and extraction grounded in the theory of multi-scale spaces for high-resolution spectra derived from analysis of serum, and then use support vector machines for classification. In particular, we use a database containing 216 sample spectra divided into 115 cancer and 91 control samples. The overall accuracy averaged over a large cross-validation study is 98.18%. The area under the ROC curve of the best selected model is 0.9962. We improve on previously reported results for the same data, with the advantage that the proposed method has an unsupervised feature selection phase. All the developed code, as MATLAB scripts, can be downloaded from http://medeaserver.isa.cnr.it/dacierno/spectracode.htm.
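
The pipeline structure (unsupervised multi-scale feature extraction followed by an SVM) can be sketched as below. The spectra are synthetic stand-ins and the feature definition is only a guess at the flavour of the method, not the authors' code, which is available at the URL above:

```python
# Pipeline skeleton: per-scale DoG band-pass energies as unsupervised
# features, then an RBF-kernel SVM with cross-validation.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)

def features(spectrum, scales=(2, 4, 8)):
    """Unsupervised features: energy of the DoG band-pass response per scale."""
    return np.array([np.abs(gaussian_filter1d(spectrum, s)
                            - gaussian_filter1d(spectrum, 2 * s)).sum()
                     for s in scales])

n, length = 216, 1024                          # 216 spectra, as in the study
labels = np.array([1] * 115 + [0] * 91)        # 115 cancer / 91 control
spectra = rng.normal(size=(n, length)) \
          + labels[:, None] * np.sin(np.linspace(0, 40, length))
X = np.vstack([features(s) for s in spectra])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```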

  6. A space and time scale-dependent nonlinear geostatistical approach for downscaling daily precipitation and temperature

    KAUST Repository

    Jha, Sanjeev Kumar

    2015-07-21

    A geostatistical framework is proposed to downscale daily precipitation and temperature. The methodology is based on multiple-point geostatistics (MPS), where a multivariate training image is used to represent the spatial relationship between daily precipitation and daily temperature over several years. Here, the training image consists of daily rainfall and temperature outputs from the Weather Research and Forecasting (WRF) model at 50 km and 10 km resolution for a twenty-year period from 1985 to 2004. The data are used to predict downscaled climate variables for the year 2005. The result, for each downscaled pixel, is a daily time series of precipitation and temperature that is spatially dependent. Comparison of predicted precipitation and temperature against a reference dataset indicates that both the seasonal average climate response and the temporal variability are well reproduced. The explicit inclusion of time dependence is explored by considering the climate properties of the previous day as an additional variable. Comparison of simulations with and without inclusion of time dependence shows that the temporal dependence only slightly improves the daily prediction, because the temporal variability is already well represented in the conditioning data. Overall, the study shows that the multiple-point geostatistics approach is an efficient tool for statistical downscaling to obtain local-scale estimates of precipitation and temperature from General Circulation Models.

  7. SPACE BASED INTERCEPTOR SCALING

    Energy Technology Data Exchange (ETDEWEB)

    G. CANAVAN

    2001-02-01

    Space-Based Interceptors (SBI) have ranges that are adequate to address rogue ICBMs. They are not overly sensitive to 30-60 s delay times. Current technologies would support boost-phase intercept with about 150 interceptors. Higher acceleration and velocity could reduce that number by about a factor of 3 at the cost of heavier and more expensive Kinetic Kill Vehicles (KKVs). 6g SBI would reduce optimal constellation costs by about 35%; 8g SBI would reduce them another 20%. Interceptor ranges fall rapidly with theater missile range. Constellations increase significantly for ranges under 3,000 km, even with advanced interceptor technology. For distributed launches, these estimates recover earlier strategic scalings, which demonstrate the improved absentee ratio for larger or multiple launch areas. Constellations increase with the number of missiles and the number of interceptors launched at each. The economic estimates above suggest that two SBI per missile with a modest midcourse underlay is appropriate. The SBI KKV technology would appear to be common to space- and surface-based boost-phase systems, and could have synergisms with improved midcourse intercept and discrimination systems. While advanced technology could be helpful in reducing costs, particularly for short-range theater missiles, current technology appears adequate for the pressing problems of rogue ICBM, accidental, and unauthorized launches.
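
Reading the two quoted cost reductions as compounding (an interpretation of the abstract's figures, not numbers taken from the report itself):

```python
# Compounding the quoted constellation-cost reductions.
baseline = 1.00
cost_6g = baseline * (1.0 - 0.35)   # "6g SBI would reduce ... costs by about 35%"
cost_8g = cost_6g * (1.0 - 0.20)    # "8g SBI would reduce them another 20%"
print(f"6g: {cost_6g:.2f} of baseline, 8g: {cost_8g:.2f} of baseline")
# -> 0.65 and 0.52: roughly half the baseline constellation cost at 8g.
```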

  8. Generalized probabilistic scale space for image restoration.

    Science.gov (United States)

    Wong, Alexander; Mishra, Akshaya K

    2010-10-01

    A novel generalized sampling-based probabilistic scale space theory is proposed for image restoration. We explore extending the definition of scale space to better account for both noise and observation models, which is important for producing accurately restored images. A new class of scale-space realizations based on sampling and probability theory is introduced to realize this extended definition in the context of image restoration. Experimental results using 2-D images show that generalized sampling-based probabilistic scale-space theory can be used to produce more accurate restored images when compared with state-of-the-art scale-space formulations, particularly under situations characterized by low signal-to-noise ratios and image degradation.

  9. Space Sustainment: A New Approach for America in Space

    Science.gov (United States)

    2014-12-01

    Fragments of an essay from Air & Space Power Journal, November–December 2014 (Schriever Essay Winner, Second Place): "Space Sustainment: A New Approach for America in Space," by Lt. The recoverable text argues for moving the international community toward promoting market incentives in international space law, which would open up the competitive space for new entrants; the remainder of the record consists of endnote fragments (e.g., Gruss, "U.S. Space Assets Face Growing Threat"; McDougall, Heavens and the...).

  10. Simultaneous approximation in scales of Banach spaces

    International Nuclear Information System (INIS)

    Bramble, J.H.; Scott, R.

    1978-01-01

    The problem of verifying optimal approximation simultaneously in different norms in a Banach scale is reduced to verification of optimal approximation in the highest order norm. The basic tool used is the Banach space interpolation method developed by Lions and Peetre. Applications are given to several problems arising in the theory of finite element methods

  11. The use of an active learning approach in a SCALE-UP learning space improves academic performance in undergraduate General Biology.

    Science.gov (United States)

    Hacisalihoglu, Gokhan; Stephens, Desmond; Johnson, Lewis; Edington, Maurice

    2018-01-01

    Active learning is a pedagogical approach that involves students engaging in collaborative learning, which enables them to take more responsibility for their learning and improve their critical thinking skills. While prior research examined student performance at majority universities, this study focuses specifically on Historically Black Colleges and Universities (HBCUs) for the first time. Here we present work on active learning interventions at Florida A&M University, where we measured the impact of active learning strategies coupled with a SCALE-UP (Student Centered Active Learning Environment with Upside-down Pedagogies) learning environment on student success in General Biology. In biology sections where active learning techniques were employed, students watched online videos and completed specific activities before class covering information previously presented in a traditional lecture format. In-class activities were then carefully planned to reinforce critical concepts and enhance critical thinking skills through active learning techniques such as the one-minute paper, think-pair-share, and the utilization of clickers. Students in the active learning and control groups covered the same topics, took the same summative examinations and completed identical homework sets. In addition, the same instructor taught all of the sections included in this study. Testing demonstrated that these interventions increased learning gains by as much as 16%, and students reported an increase in their positive perceptions of active learning and biology. Overall, our results suggest that active learning approaches coupled with the SCALE-UP environment may provide an added opportunity for student success when compared with the standard modes of instruction in General Biology.

  12. European Space Science Scales New Heights

    Science.gov (United States)

    1995-06-01

    about two years' budget and medium-size projects accounting for one year's budget. It is on the basis of the Horizon 2000 programme that Europe has: launched the Giotto probe, which successfully encountered Comets Halley (1986) and Grigg-Skjellerup (1992); developed the Hipparcos satellite, whose catalogue of 120 000 stars will be published in late 1996; built the Ulysses probe, which has been exploring the third dimension of the solar system since 1992; and contributed at a rate of 20% to the Hubble Space Telescope programme. It is thanks to Horizon 2000 that Europe is now preparing to launch ISO, Soho and Cluster. It is on the basis of the same long-term plan that Europe will build: Huygens, the probe to be launched in 1997, in co-operation with the United States, to explore the organic planet Titan; XMM, the X-ray telescope scheduled for a launch in 1999; Integral, the gamma-ray observatory due to be launched in 2001 in co-operation with Russia; Rosetta, the probe which is to land on Comet Wirtanen in 2012; and FIRST, the submillimetre telescope planned to be in orbit in 2006. After a long and fruitful apprenticeship, European space science therefore now looks set to come into its own. It currently ranks an honourable second place in the world and regularly leads the way in certain specific areas of exploration. Thus Europe is now at the forefront of cometary exploration, fundamental astronomy or "astrometry", solar physics and the physics of interplanetary plasma. So it should also be able to take the lead in infrared astronomy, high-energy astronomy and planetary exploration while continuing to conduct cometary studies with Rosetta. One remarkable fact is that the approach and success of Horizon 2000 have attracted unanimous praise both in and beyond Europe. The programme is being supported by virtually all Europe's scientists. It is drawing on and inspiring increasing numbers of scientists, including many of the younger generation. Its content and management have

  13. Physics in space-time with scale-dependent metrics

    Science.gov (United States)

    Balankin, Alexander S.

    2013-10-01

    We construct a three-dimensional space R^3_γ with a scale-dependent metric and the corresponding Minkowski space-time M^4_{γ,β} with scale-dependent fractal (D_H) and spectral (D_S) dimensions. The local derivatives based on scale-dependent metrics are defined and differential vector calculus in R^3_γ is developed. We state that M^4_{γ,β} provides a unified phenomenological framework for the dimensional flow observed in quite different models of quantum gravity. Nevertheless, the main attention is focused on the special case of flat space-time M^4_{1/3,1} with a scale-dependent Cantor-dust-like distribution of admissible states, such that D_H increases from D_H = 2 on scales ℓ ≪ ℓ_0 to D_H = 4 in the infrared limit ℓ ≫ ℓ_0, where ℓ_0 is the characteristic length (e.g. the Planck length, or the characteristic size of multi-fractal features in a heterogeneous medium), whereas D_S ≡ 4 on all scales. Possible applications of the scale-dependent-metric approach to systems of different nature are briefly discussed.

  14. A scale invariant covariance structure on jet space

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2005-01-01

    This paper considers scale invariance of statistical image models. We study statistical scale invariance of the covariance structure of jet space under scale space blurring and derive the necessary structure and conditions of the jet covariance matrix in order for it to be scale invariant. As par...

  15. Constructive approaches to the space NPP designing

    International Nuclear Information System (INIS)

    Eremin, A.G.; Korobkov, L.S.; Matveev, A.V.; Trukhanov, Yu.L.; Pyshko, A.P.

    2000-01-01

    An example of designing a space NPP intended for the power supply of a telecommunication satellite is considered. It is shown that the designing approach based on the introduction of a leading criterion and the division of the design problems into two independent groups (reactor with radiation shield, and equipment module) makes it possible to develop the optimal design of a space NPP [ru]

  16. Quantum universe on extremely small space-time scales

    International Nuclear Information System (INIS)

    Kuzmichev, V.E.; Kuzmichev, V.V.

    2010-01-01

    The semiclassical approach to the quantum geometrodynamical model is used for the description of the properties of the Universe on extremely small space-time scales. Under this approach, the matter in the Universe has two components of a quantum nature which behave as antigravitating fluids. The first component does not vanish in the limit ℏ → 0 and can be associated with dark energy. The second component is described by an extremely rigid equation of state and goes to zero after the transition to large space-time scales. On small space-time scales, this quantum correction turns out to be significant. It determines the geometry of the Universe near the initial cosmological singularity point. This geometry is conformal to a unit four-sphere embedded in a five-dimensional Euclidean flat space. During the subsequent expansion of the Universe, when reaching the post-Planck era, the geometry of the Universe changes into that conformal to a unit four-hyperboloid in a five-dimensional Lorentz-signatured flat space. This agrees with the hypothesis about the possible change of geometry after the origin of the expanding Universe from the region near the initial singularity point. The origin of the Universe can be interpreted as a quantum transition of the system from a region in the phase space forbidden for the classical motion, but where a trajectory in imaginary time exists, into a region where the equations of motion have the solution which describes the evolution of the Universe in real time. Near the boundary between the two regions, from the side of real time, the Universe undergoes an almost exponential expansion which passes smoothly into expansion under the action of radiation dominating over matter, as described by the standard cosmological model.

  17. What is at stake in multi-scale approaches

    International Nuclear Information System (INIS)

    Jamet, Didier

    2008-01-01

    Full text of publication follows: Multi-scale approaches amount to analyzing physical phenomena at small space and time scales in order to model their effects at larger scales. This approach is very general in physics and engineering; one of the best examples of its success is certainly statistical physics, which makes it possible to recover classical thermodynamics and to determine the limits of application of classical thermodynamics. Getting access to small-scale information aims at reducing the models' uncertainty, but it has a cost: fine-scale models may be more complex than larger-scale models and their resolution may require the development of specific and possibly expensive methods, numerical simulation techniques and experiments. For instance, in applications related to nuclear engineering, the application of computational fluid dynamics instead of cruder models is a formidable engineering challenge because it requires resorting to high performance computing. Likewise, in two-phase flow modeling, the techniques of direct numerical simulation, where all the interfaces are tracked individually and where all turbulence scales are captured, are getting mature enough to be considered for averaged modeling purposes. However, resolving small-scale problems is a necessary step in a multi-scale approach, but it is not sufficient. An important modeling challenge is to determine how to treat small-scale data in order to get relevant information for larger-scale models. For some applications, such as single-phase turbulence or transfers in porous media, this up-scaling approach is known and is now used rather routinely. However, in two-phase flow modeling, the up-scaling approach is not as mature and specific issues must be addressed that raise fundamental questions. This will be discussed and illustrated. (author)

  18. Subjective assessment of impairment in scale-space-coded images

    NARCIS (Netherlands)

    Ridder, de H.; Majoor, G.M.M.

    1988-01-01

    Direct category scaling and a scaling procedure in accordance with Functional Measurement Theory (Anderson, 1982) have been used to assess impairment in scale-space-coded images, displayed on a black-and-white TV monitor. The image of a complex scene was passed through a Gaussian filter of limited

  19. Stochastic inflation: Quantum phase-space approach

    International Nuclear Information System (INIS)

    Habib, S.

    1992-01-01

    In this paper a quantum-mechanical phase-space picture is constructed for coarse-grained free quantum fields in an inflationary universe. The appropriate stochastic quantum Liouville equation is derived. Explicit solutions for the phase-space quantum distribution function are found for the cases of power-law and exponential expansions. The expectation values of dynamical variables with respect to these solutions are compared to the corresponding cutoff-regularized field-theoretic results (we do not restrict ourselves only to ⟨Φ²⟩). Fair agreement is found provided the coarse-graining scale is kept within certain limits. By focusing on the full phase-space distribution function rather than a reduced distribution it is shown that the thermodynamic interpretation of the stochastic formalism faces several difficulties (e.g., there is no fluctuation-dissipation theorem). The coarse graining does not guarantee an automatic classical limit as quantum correlations turn out to be crucial in order to get results consistent with standard quantum field theory. Therefore, the method does not by itself constitute an explanation of the quantum to classical transition in the early Universe. In particular, we argue that the stochastic equations do not lead to decoherence

  20. Properties of Brownian Image Models in Scale-Space

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup

    2003-01-01

    In this paper it is argued that the Brownian image model is the least committed, scale invariant, statistical image model which describes the second order statistics of natural images. Various properties of three different types of Gaussian image models (white noise, Brownian and fractional Brownian images) will be discussed in relation to linear scale-space theory, and it will be shown empirically that the second order statistics of natural images mapped into jet space may, within some scale interval, be modeled by the Brownian image model. This is consistent with the 1/f^2 power spectrum law that apparently governs natural images. Furthermore, the distribution of Brownian images mapped into jet space is Gaussian and an analytical expression can be derived for the covariance matrix of Brownian images in jet space. This matrix is also a good approximation of the covariance matrix...
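
The 1/f^2 power-spectrum law suggests a simple way to synthesise a Brownian image: shape white noise in Fourier space so that the amplitude falls as 1/f. A minimal sketch (normalisation conventions vary):

```python
# Synthesising a Brownian image: white noise shaped to a 1/f^2 power spectrum
# (amplitude ~ 1/f), the statistic attributed here to natural images.
import numpy as np

rng = np.random.default_rng(5)
n = 256
fx = np.fft.fftfreq(n)[:, None]
fy = np.fft.fftfreq(n)[None, :]
f = np.sqrt(fx**2 + fy**2)
f[0, 0] = 1.0                          # avoid division by zero at DC

spectrum = np.fft.fft2(rng.normal(size=(n, n))) / f
spectrum[0, 0] = 0.0                   # remove the undefined DC mode
brownian = np.real(np.fft.ifft2(spectrum))
print(brownian.shape, brownian.std())
```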

  1. An alternative to scale-space representation for extracting local features in image recognition

    DEFF Research Database (Denmark)

    Andersen, Hans Jørgen; Nguyen, Phuong Giang

    2012-01-01

    In image recognition, the common approach for extracting local features using a scale-space representation usually has three main steps: first, interest points are extracted at different scales; next, from a patch around each interest point the rotation is calculated with corresponding orientation and compensation; and finally a descriptor is computed for the derived patch (i.e. the feature of the patch). To avoid the memory- and computation-intensive process of constructing the scale-space, we use a method where no scale-space is required. This is done by dividing the given image into a number of triangles with sizes dependent on the content of the image at the location of each triangle. In this paper, we will demonstrate that by rotation of the interest regions at the triangles it is possible in grey scale images to achieve a recognition precision comparable with that of MOPS. The test of the proposed method...

  2. Multi-scale Dynamical Processes in Space and Astrophysical Plasmas

    CERN Document Server

    Vörös, Zoltán; IAFA 2011 - International Astrophysics Forum 2011 : Frontiers in Space Environment Research

    2012-01-01

    Magnetized plasmas in the universe exhibit complex dynamical behavior over a huge range of scales. The fundamental mechanisms of energy transport, redistribution and conversion occur at multiple scales. The driving mechanisms often include energy accumulation, free-energy-excited relaxation processes, dissipation and self-organization. The plasma processes associated with energy conversion, transport and self-organization, such as magnetic reconnection, instabilities, linear and nonlinear waves, wave-particle interactions, dynamo processes, turbulence, heating, diffusion and convection represent fundamental physical effects. They demonstrate similar dynamical behavior in near-Earth space, on the Sun, in the heliosphere and in astrophysical environments. 'Multi-scale Dynamical Processes in Space and Astrophysical Plasmas' presents the proceedings of the International Astrophysics Forum Alpbach 2011. The contributions discuss the latest advances in the exploration of dynamical behavior in space plasmas environm...

  3. Large Scale System Safety Integration for Human Rated Space Vehicles

    Science.gov (United States)

    Massie, Michael J.

    2005-12-01

    Since the 1960s man has searched for ways to establish a human presence in space. Unfortunately, the development and operation of human spaceflight vehicles carry significant safety risks that are not always well understood. As a result, the countries with human space programs have felt the pain of loss of lives in the attempt to develop human space travel systems. Integrated System Safety is a process developed through years of experience (since before Apollo and Soyuz) as a way to assess risks involved in space travel and prevent such losses. The intent of Integrated System Safety is to look at an entire program and put together all the pieces in such a way that the risks can be identified, understood and dispositioned by program management. This process has many inherent challenges, and they need to be explored, understood and addressed. In order to prepare a truly integrated analysis, safety professionals must gain a level of technical understanding of all of the project's pieces and how they interact. Next, they must find a way to present the analysis so the customer can understand the risks and make decisions about managing them. However, every organization in a large-scale project can have different ideas about what is or is not a hazard, what is or is not an appropriate hazard control, and what is or is not adequate hazard control verification. NASA provides some direction on these topics, but interpretations of those instructions can vary widely. Even more challenging is the fact that every individual/organization involved in a project has different levels of risk tolerance. When the discrete hazard controls of the contracts and agreements cannot be met, additional risk must be accepted. However, when one has left the arena of compliance with the known rules, there can no longer be specific ground rules on which to base a decision as to what is acceptable and what is not. The integrator must find common ground between all parties to achieve

  4. Construction of Orthonormal Piecewise Polynomial Scaling and Wavelet Bases on Non-Equally Spaced Knots

    Directory of Open Access Journals (Sweden)

    Jean Pierre Astruc

    2007-01-01

    Full Text Available This paper investigates the mathematical framework of multiresolution analysis based on an irregularly spaced knot sequence. Our presentation is based on the construction of nested nonuniform spline multiresolution spaces. From these spaces, we present the construction of orthonormal scaling and wavelet basis functions on bounded intervals. For any arbitrary degree of the spline function, we provide an explicit generalization allowing the construction of the scaling and wavelet bases on nontraditional sequences. We show that the orthogonal decomposition is implemented using filter banks where the coefficients depend on the location of the knots on the sequence. Examples of orthonormal spline scaling and wavelet bases are provided. This approach can be used to interpolate irregularly sampled signals in an efficient way, by keeping the multiresolution approach.

  5. Multi-Scale Singularity Trees: Soft-Linked Scale-Space Hierarchies

    DEFF Research Database (Denmark)

    Somchaipeng, Kerawit; Sporring, Jon; Kreiborg, Sven

    2005-01-01

    We consider images as manifolds embedded in a hybrid of a high dimensional space of coordinates and features. Using the proposed energy functional and mathematical landmarks, images are partitioned into segments. The nesting of image segments occurring at catastrophe points in the scale-space is ...

  6. Examining Similarity Structure: Multidimensional Scaling and Related Approaches in Neuroimaging

    Directory of Open Access Journals (Sweden)

    Svetlana V. Shinkareva

    2013-01-01

    Full Text Available This paper covers similarity analyses, a subset of multivariate pattern analysis techniques that are based on similarity spaces defined by multivariate patterns. These techniques offer several advantages and complement other methods for brain data analyses, as they allow for comparison of representational structure across individuals, brain regions, and data acquisition methods. Particular attention is paid to multidimensional scaling and related approaches that yield spatial representations or provide methods for characterizing individual differences. We highlight unique contributions of these methods by reviewing recent applications to functional magnetic resonance imaging data and emphasize areas of caution in applying and interpreting similarity analysis methods.
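
A minimal sketch of the core similarity-analysis step described here, using scikit-learn's metric MDS on a precomputed dissimilarity matrix (synthetic activation patterns stand in for fMRI data):

```python
# Embed experimental conditions into 2-D with metric MDS from a precomputed
# dissimilarity matrix (here: correlation distance between patterns).
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(6)
patterns = rng.normal(size=(8, 500))            # 8 conditions x 500 voxels
dissimilarity = 1.0 - np.corrcoef(patterns)     # correlation distance

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
print(coords)                                   # one 2-D point per condition
```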

  7. Statistical distance and the approach to KNO scaling

    International Nuclear Information System (INIS)

    Diosi, L.; Hegyi, S.; Krasznovszky, S.

    1990-05-01

    A new method is proposed for characterizing the approach to KNO scaling. The essence of our method lies in the concept of statistical distance between nearby KNO distributions which reflects their distinguishability in spite of multiplicity fluctuations. It is shown that the geometry induced by the distance function defines a natural metric on the parameter space of a certain family of KNO distributions. Some examples are given in which the energy dependences of distinguishability of neighbouring KNO distributions are compared in nondiffractive hadron-hadron collisions and electron-positron annihilation. (author) 19 refs.; 4 figs
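
One common concrete realisation of such a statistical distance is the Bhattacharyya angle, whose infinitesimal form reproduces the Fisher-information metric on a family of distributions. The sketch below assumes a gamma-type KNO scaling function, a standard parametrisation that the abstract itself does not specify:

```python
# Statistical distance between two nearby KNO scaling functions, realised
# here as the Bhattacharyya angle arccos(overlap of sqrt-densities).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def kno(z, k):
    """Gamma-type KNO scaling function psi(z) with shape parameter k."""
    return k**k * z**(k - 1) * np.exp(-k * z) / gamma(k)

def bhattacharyya_angle(k1, k2):
    overlap, _ = quad(lambda z: np.sqrt(kno(z, k1) * kno(z, k2)), 0.0, np.inf)
    return np.arccos(min(overlap, 1.0))

print(bhattacharyya_angle(10.0, 10.5))   # nearby shapes -> small distance
```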

  8. AI Techniques for Space: The APSI Approach

    Science.gov (United States)

    Steel, R.; Niézette, M.; Cesta, A.; Verfaille, G.; Lavagna, M.; Donati, A.

    2009-05-01

    This paper will outline the framework and tools developed under the Advanced Planning and Scheduling Initiative (APSI) study performed by VEGA for the European Space Agency in collaboration with three academic institutions, ISTC-CNR, ONERA, and Politecnico di Milano. We will start by illustrating the background history of APSI and why it was needed, giving a brief summary of all the partners within the project and the roles they played within it. We will then take a closer look at what APSI actually consists of, showing the techniques that were used and detailing the framework that was developed within the scope of the project. We will follow this with an elaboration on the three demonstration test scenarios that have been developed as part of the project, illustrating the re-use and synergies between the three cases along the way. We will finally conclude with a summary of some pros and cons of the approach devised during the project and outline future directions to be further investigated and expanded on within the context of the work performed within the project.

  9. General background and approach to multibody dynamics for space applications

    Science.gov (United States)

    Santini, Paolo; Gasbarri, Paolo

    2009-06-01

    Multibody dynamics for space applications is dictated by the space environment: space-varying gravity forces, orbital and attitude perturbations, and control forces, if any. Several methods and formulations devoted to the modeling of flexible bodies undergoing large overall motions were developed in recent years. Most of these formulations were aimed at one of the main problems concerning the analysis of spacecraft dynamics, namely the reduction of computer simulation time. By virtue of this, the use of symbolic manipulation, recursive formulation and parallel processing algorithms were proposed. All these approaches fall into two categories, the one based on Newton/Euler methods and the one based on Lagrangian methods; both have their advantages and disadvantages, although in general Newtonian approaches lend themselves to a better understanding of the physics of problems, and in particular of the magnitude of the reactions and of the corresponding structural stresses. Another important issue which must be addressed carefully in multibody space dynamics is the correct choice of kinematic variables. In fact, when dealing with a flexible multibody system the resulting equations include two different types of state variables, the ones associated with large (rigid) displacements and the ones associated with elastic deformations. These two sets of variables generally have two different time scales, if we think of the attitude motion of a satellite whose period of oscillation, due to gravity gradient effects, is of the same order of magnitude as the orbital period, which is much bigger than the one associated with the structural vibration of the satellite itself. Therefore, the numerical integration of the equations of the system represents a challenging problem. This was the abstract and some of the arguments that Professor Paolo Santini intended to present for the Breakwell Lecture; unfortunately a deadly disease attacked him and shortly took him

  10. Parametric Approach in Designing Large-Scale Urban Architectural Objects

    Directory of Open Access Journals (Sweden)

    Arne Riekstiņš

    2011-04-01

    Full Text Available When all the disciplines of various science fields converge and develop, new approaches to contemporary architecture arise. The author looks at approaching digital architecture from a parametric viewpoint, revealing its generative capacity, originating from the fields of the aeronautical, naval, automobile and product-design industries. The author also goes explicitly through his design-cycle workflow for testing the latest methodologies in architectural design. The design process steps involved: extrapolating valuable statistical data about the site into three-dimensional diagrams, defining a certain materiality of what is being produced, ways of presenting structural skin and structure simultaneously, contacting the object with the ground, interior program definition of the building with floors and possible spaces, the logic of fabrication, and CNC milling of the prototype. The tool developed by the author and reviewed in this article features enormous performative capacity and is applicable to various architectural design scales. Article in English

  11. Lagrangian space consistency relation for large scale structure

    International Nuclear Information System (INIS)

    Horn, Bart; Hui, Lam; Xiao, Xiao

    2015-01-01

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space

  12. Real Space Approach to CMB deboosting

    CERN Document Server

    Yoho, Amanda; Starkman, Glenn D.; Pereira, Thiago S.

    2013-01-01

    The effect of our Galaxy's motion through the Cosmic Microwave Background rest frame, which aberrates and Doppler shifts incoming photons measured by current CMB experiments, has been shown to produce mode-mixing in the multipole-space temperature coefficients. However, multipole-space determinations are subject to many difficulties, and a real-space analysis can provide a straightforward alternative. In this work we describe a numerical method for removing Lorentz-boost effects from real-space temperature maps. We show that deboosting a map so that one can accurately extract the temperature power spectrum requires calculating the boost kernel at a finer pixelization than one might naively expect. In idealized cases that allow for easy comparison to analytic results, we have confirmed that there is indeed mode mixing among the spherical harmonic coefficients of the temperature. We find that using a boost kernel calculated at Nside=8192 leads to a 1% bias in the binned boosted power spectrum at l~2000, while ...
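
The physics being removed is standard special relativity. Below is a sketch of the pointwise ingredients of deboosting, aberration of sky directions and the Doppler modulation, for a boost along +z; the map resampling and HEALPix machinery of a production code are omitted:

```python
# Pointwise (de)boosting ingredients for observer velocity beta along +z.
import numpy as np

beta = 1.23e-3                        # solar-system speed w.r.t. the CMB, v/c
gamma_L = 1.0 / np.sqrt(1.0 - beta**2)

def deboost_direction(theta_obs):
    """Inverse aberration: polar angle in the CMB frame given the observed one."""
    mu = np.cos(theta_obs)
    mu_rest = (mu - beta) / (1.0 - beta * mu)
    return np.arccos(mu_rest)

def doppler_factor(theta_obs):
    """T_obs = T_rest / (gamma * (1 - beta * cos(theta_obs)))."""
    return 1.0 / (gamma_L * (1.0 - beta * np.cos(theta_obs)))

theta = np.linspace(0.0, np.pi, 5)
print(deboost_direction(theta) - theta)   # aberration is O(beta), arcminutes
print(doppler_factor(theta))              # dipole-dominated modulation
```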

  13. Space Station overall management approach for operations

    Science.gov (United States)

    Paules, G.

    1986-01-01

    An Operations Management Concept developed by NASA for its Space Station Program is discussed. The operational goals, themes, and design principles established during program development are summarized. The major operations functions are described, including: space systems operations, user support operations, prelaunch/postlanding operations, logistics support operations, market research, and cost/financial management. Strategic, tactical, and execution levels of operational decision-making are defined.

  14. A Psychosocial Approach to Understanding Underground Spaces

    Directory of Open Access Journals (Sweden)

    Eun H. Lee

    2017-03-01

    Full Text Available With a growing need for usable land in urban areas, subterranean development has been gaining attention. While the construction of large underground complexes is not a new concept, our understanding of the various socio-cultural aspects of staying underground is still at an early stage. With the projected emergence of underground built environments, future populations may spend much more of their working, transit, and recreational time in underground spaces. Therefore, it is essential to understand the challenges and advantages of such environments in order to improve the future welfare of users of underground spaces. The current paper discusses various psycho-social aspects of underground spaces, the impact they can have on the culture shared among the occupants, and possible solutions to overcome some of these challenges.

  15. Phase space approach to quantum dynamics

    International Nuclear Information System (INIS)

    Leboeuf, P.

    1991-03-01

    The Schroedinger equation for the time propagation of states of a quantised two-dimensional spherical phase space is replaced by the dynamics of a system of N particles lying in phase space. This is done through factorization formulae of analytic function theory arising in coherent-state representation, the 'particles' being the zeros of the quantum state. For linear Hamiltonians, like a spin in a uniform magnetic field, the motion of the particles is classical. However, non-linear terms induce interactions between the particles. Their time propagation is studied and it is shown that, contrary to integrable systems, for chaotic maps they tend to fill, as their classical counterpart, the whole phase space. (author) 13 refs., 3 figs

  16. An innovative approach to space education

    Science.gov (United States)

    Marton, Christine; Berinstain, Alain B.; Criswick, John

    1994-01-01

    At present, Canada does not have enough scientists to be competitive in the global economy, which is rapidly changing from a reliance on natural resources and industry to information and technology. Space is the final frontier and it is a multidisciplinary endeavor. It requires a knowledge of science and math, as well as non-science areas such as architecture and law. Thus, it can attract a large number of students with a diverse range of interests and career goals. An overview is presented of the space education program designed by Canadian Alumni of the International Space University (CAISU) to encourage students to pursue studies and careers in science and technology and to improve science literacy in Canada.

  18. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    Science.gov (United States)

    Nance, Donald K.; Liever, Peter A.

    2015-01-01

    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center (MSFC). The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  19. Scaling Consumers' Purchase Involvement: A New Approach

    Directory of Open Access Journals (Sweden)

    Jörg Kraigher-Krainer

    2012-06-01

    Full Text Available A two-dimensional scale, called the ECID Scale, is presented in this paper. The scale is based on a comprehensive model and captures the two antecedent factors of purchase-related involvement, namely whether motivation is intrinsic or extrinsic and whether risk is perceived as low or high. The procedure of scale development and item selection is described. The scale turns out to perform well in terms of validity, reliability, and objectivity despite the use of a small set of items – four each – allowing for simultaneous measurements of up to ten purchases per respondent. The procedure of administering the scale is described so that it can now easily be applied by both scholars and practitioners. Finally, managerial implications of data received from its application, which provide insights into possible strategic marketing conclusions, are discussed.

  20. Operator space approach to steering inequality

    International Nuclear Information System (INIS)

    Yin, Zhi; Marciniak, Marcin; Horodecki, Michał

    2015-01-01

    In Junge and Palazuelos (2011 Commun. Math. Phys. 306 695–746) and Junge et al (2010 Commun. Math. Phys. 300 715–39) the operator space theory was applied to study bipartite Bell inequalities. The aim of this paper is to follow this line of research and use operator space techniques to analyze the steering scenario. We obtain a bipartite steering functional with unbounded largest violation of the steering inequality, and construct all ingredients explicitly. It turns out that the unbounded largest violation is obtained by a non-maximally entangled state. Moreover, we focus on the bipartite dichotomic case, where we construct a steering functional with unbounded largest violation of the steering inequality. This phenomenon is different from the Bell scenario, where only a bounded largest violation can be obtained by any bipartite dichotomic Bell functional. (paper)

  1. A vector space approach to geometry

    CERN Document Server

    Hausner, Melvin

    2010-01-01

    The effects of geometry and linear algebra on each other receive close attention in this examination of geometry's correlation with other branches of math and science. In-depth discussions include a review of systematic geometric motivations in vector space theory and matrix theory; the use of the center of mass in geometry, with an introduction to barycentric coordinates; axiomatic development of determinants in a chapter dealing with area and volume; and a careful consideration of the particle problem. 1965 edition.

  2. An Implementation and Parallelization of the Scale Space Meshing Algorithm

    Directory of Open Access Journals (Sweden)

    Julie Digne

    2015-11-01

    Full Text Available Creating an interpolating mesh from an unorganized set of oriented points is a difficult problem which is often overlooked. Most methods focus indeed on building a watertight smoothed mesh by defining some function whose zero level set is the surface of the object. However in some cases it is crucial to build a mesh that interpolates the points and does not fill the acquisition holes: either because the data are sparse and trying to fill the holes would create spurious artifacts, or because the goal is to explore visually the data exactly as they were acquired without any smoothing process. In this paper we detail a parallel implementation of the Scale-Space Meshing algorithm, which builds on the scale-space framework for reconstructing a high precision mesh from an input oriented point set. This algorithm first smoothes the point set, producing a singularity-free shape. It then uses a standard mesh reconstruction technique, the Ball Pivoting Algorithm, to build a mesh from the smoothed point set. The final step consists in back-projecting the mesh built on the smoothed positions onto the original point set. The result of this process is an interpolating, hole-preserving surface mesh reconstruction.
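
A sketch of one iteration of the smoothing step, projecting each point onto the local regression plane of its nearest neighbours (the idea behind the scale-space operator; orientation handling, radius selection and the Ball Pivoting step of the actual implementation are omitted):

```python
# One scale-space smoothing iteration on a point set: project every point
# onto the PCA plane of its k nearest neighbours.
import numpy as np
from scipy.spatial import cKDTree

def scale_space_step(points, k=12):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    out = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        nb = points[nbrs]
        centroid = nb.mean(axis=0)
        # Normal = singular vector of the smallest singular value of the
        # centred neighbourhood (local covariance).
        _, _, vt = np.linalg.svd(nb - centroid, full_matrices=False)
        normal = vt[-1]
        out[i] = points[i] - normal * np.dot(points[i] - centroid, normal)
    return out

rng = np.random.default_rng(7)
pts = rng.normal(size=(1000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)     # points on a sphere
noisy = pts + 0.01 * rng.normal(size=pts.shape)
smoothed = scale_space_step(noisy)
# Normal-direction noise shrinks: radii move back toward 1.
print(np.abs(np.linalg.norm(noisy, axis=1) - 1).mean(),
      np.abs(np.linalg.norm(smoothed, axis=1) - 1).mean())
```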

  3. EFT of large scale structures in redshift space

    Science.gov (United States)

    Lewandowski, Matthew; Senatore, Leonardo; Prada, Francisco; Zhao, Cheng; Chuang, Chia-Hsun

    2018-03-01

    We further develop the description of redshift-space distortions within the effective field theory of large scale structures. First, we generalize the counterterms to include the effect of baryonic physics and primordial non-Gaussianity. Second, we evaluate the IR resummation of the dark matter power spectrum in redshift space. This requires us to identify a controlled approximation that makes the numerical evaluation straightforward and efficient. Third, we compare the predictions of the theory at one loop with the power spectrum from numerical simulations up to ℓ=6. We find that the IR resummation allows us to correctly reproduce the baryon acoustic oscillation peak. The k reach—or, equivalently, the precision for a given k—depends on additional counterterms that need to be matched to simulations. Since the nonlinear scale for the velocity is expected to be longer than the one for the overdensity, we consider a minimal and a nonminimal set of counterterms. The quality of our numerical data makes it hard to firmly establish the performance of the theory at high wave numbers. Within this limitation, we find that the theory at redshift z=0.56 and up to ℓ=2 matches the data at the percent level approximately up to k ~ 0.13 h Mpc^-1 or k ~ 0.18 h Mpc^-1, depending on the number of counterterms used, with a potentially large improvement over former analytical techniques.

  4. Autotracking from space - The TDRSS approach

    Science.gov (United States)

    Spearing, R. E.; Harper, W. R.

    The TDRSS will provide telecommunications support to near-earth orbiting satellites through the 1980s and into the 1990s. The system incorporates two operational satellites at geostationary altitude and a single ground station at White Sands, NM. Of the many tasks facing the engineering team in development of this system, one of the most challenging was K-band autotrack. An approach not previously attempted placed the error detection, processing, and feedback elements for automatic control of the TDR satellite antennas on the ground. This approach offered several advantages to the designers but posed a number of interesting questions during the development program. The autotrack system design and its test program are described with emphasis given to areas of special interest in developing a working K-band service.

  5. Approaching space-time through velocity in doubly special relativity

    International Nuclear Information System (INIS)

    Aloisio, R.; Galante, A.; Grillo, A.F.; Luzio, E.; Mendez, F.

    2004-01-01

    We discuss the definition of velocity as dE/d|p|, where E and p are the energy and momentum of a particle, in doubly special relativity (DSR). If this definition matches dx/dt appropriate for the space-time sector, then space-time can in principle be built consistently with the existence of an invariant length scale. We show that, within different possible velocity definitions, a space-time compatible with momentum-space DSR principles cannot be derived
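
As a worked illustration of the dE/d|p| definition, take a toy massless DSR-type dispersion with invariant length ℓ (chosen here purely for simplicity; the paper does not single out this form):

```latex
E(|p|) = \frac{|p|}{1 + \ell\,|p|}
\qquad\Longrightarrow\qquad
v \equiv \frac{dE}{d|p|} = \frac{1}{\left(1 + \ell\,|p|\right)^{2}} .
```

The velocity is momentum dependent and bounded, so any space-time picture built from x = vt must accommodate an energy-dependent propagation speed, which is exactly the compatibility question the abstract raises.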

  6. Application of Bayesian approach to estimate average level spacing

    International Nuclear Information System (INIS)

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

    A method to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach is given. Using the information contained in the distributions of both level spacings and neutron widths, levels missing from the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained by this method. The calculation for s-wave resonances has been done and a comparison with other work was carried out
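
The core inference can be sketched as follows: a posterior for the average spacing D given observed spacings, with a Wigner-surmise likelihood. The paper's missing-level correction using neutron-width information is not reproduced; this is only the Bayesian skeleton, with invented data:

```python
# Bayesian estimate of the average level spacing D from observed spacings,
# assuming the Wigner surmise p(s|D) = (pi s / 2 D^2) exp(-pi s^2 / 4 D^2).
import numpy as np

rng = np.random.default_rng(9)
D_true = 15.0                                   # eV, say
# Sample Wigner-distributed spacings by inverting the CDF.
s = D_true * np.sqrt(-4.0 / np.pi * np.log(rng.uniform(size=80)))

def log_posterior(D, s):
    """Flat prior on D; Wigner-surmise likelihood."""
    return np.sum(np.log(np.pi * s / (2.0 * D**2)) - np.pi * s**2 / (4.0 * D**2))

grid = np.linspace(5.0, 30.0, 500)
logp = np.array([log_posterior(D, s) for D in grid])
post = np.exp(logp - logp.max())
post /= post.sum()
print("posterior mean D:", (grid * post).sum())   # should be near D_true
```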

  7. The +vbar breakout during approach to Space Station Freedom

    Science.gov (United States)

    Dunham, Scott D.

    1993-01-01

    A set of burn profiles was developed to provide bounding jet firing histories for a +vbar breakout during approaches to Space Station Freedom. The delta-v sequences were designed to place the Orbiter on a safe trajectory under worst case conditions and to try to minimize plume impingement on Space Station Freedom structure.

  8. Approach to developing reliable space reactor power systems

    International Nuclear Information System (INIS)

    Mondt, J.F.; Shinbrot, C.H.

    1991-01-01

    The Space Reactor Power System Project is in the engineering development phase of a three-phase program. During Phase II, the Engineering Development Phase, the SP-100 Project has defined and is pursuing a new approach to developing reliable power systems. The approach to developing such a system during the early technology phase is described in this paper along with some preliminary examples to help explain the approach. Developing reliable components to meet space reactor power system requirements is based on a top down systems approach which includes a point design based on a detailed technical specification of a 100 kW power system

  9. Approaches to radiation guidelines for space travel

    International Nuclear Information System (INIS)

    Fry, R.J.M.

    1984-01-01

    There are obvious risks in space travel that have loomed larger than any risk from radiation. Nevertheless, NASA has maintained a radiation program that has involved maintenance of records of radiation exposure, and planning so that the astronauts' exposures are kept as low as possible, and not just within the current guidelines. These guidelines are being reexamined currently by NCRP Committee 75 because new information is available, for example, risk estimates for radiation-induced cancer and about the effects of HZE particles. Furthermore, no estimates of risk or recommendations were made for women in 1970 and must now be considered. The current career limit is 400 rem. The appropriateness of this limit and its basis are being examined as well as the limits for specific organs. There is now considerably more information about age-dependency for radiation and this will be taken into account. Work has been carried out on the so-called microlesions caused by HZE particles and on the relative carcinogenic effect of heavy ions, including iron. A remaining question is whether the fluence of HZE particles could reach levels of concern in missions under consideration. Finally, it is the intention of the committee to indicate clearly the areas requiring further research. 21 references, 1 figure, 7 tables

  10. Approaches to radiation guidelines for space travel

    International Nuclear Information System (INIS)

    Fry, R.J.M.

    1984-01-01

    There are obvious risks in space travel that have loomed larger than any risk from radiation. Nevertheless, NASA has maintained a radiation program that has involved maintenance of records of radiation exposure, and planning so that the astronauts' exposures are kept as low as possible, and not just within the current guidelines. These guidelines are being reexamined currently by NCRP Committee 75 because new information is available, for example, risk estimates for radiation-induced cancer and about the effects of HZE particles. The current career limit is 400 rem to the blood forming organs. The appropriateness of this limit and its basis are being examined as well as the limits for specific organs. There is now considerably more information about age-dependency for radiation effects and this will be taken into account. In 1973 a committee of the National Research Council made a separate study of HZE particle effects and it was concluded that the attendant risks did not pose a hazard for low inclination near-earth orbit missions. Since that time work has been carried out on the so-called microlesions caused by HZE particles and on the relative carcinogenic effect of heavy ions, including iron. A remaining question is whether the fluence of HZE particles could reach levels of concern in missions under consideration. Finally, it is the intention of the committee to indicate clearly the areas requiring further research. 26 references, 1 figure, 7 tables

  11. Space Weather in the Machine Learning Era: A Multidisciplinary Approach

    Science.gov (United States)

    Camporeale, E.; Wing, S.; Johnson, J.; Jackman, C. M.; McGranaghan, R.

    2018-01-01

    The workshop entitled Space Weather: A Multidisciplinary Approach took place at the Lorentz Center, University of Leiden, Netherlands, on 25-29 September 2017. The aim of this workshop was to bring together members of the Space Weather, Mathematics, Statistics, and Computer Science communities to address the use of advanced techniques such as Machine Learning, Information Theory, and Deep Learning, to better understand the Sun-Earth system and to improve space weather forecasting. Although individual efforts have been made toward this goal, the community consensus is that establishing interdisciplinary collaborations is the most promising strategy for fully utilizing the potential of these advanced techniques in solving Space Weather-related problems.

  12. Toward a global space exploration program: A stepping stone approach

    Science.gov (United States)

    Ehrenfreund, Pascale; McKay, Chris; Rummel, John D.; Foing, Bernard H.; Neal, Clive R.; Masson-Zwaan, Tanja; Ansdell, Megan; Peter, Nicolas; Zarnecki, John; Mackwell, Steve; Perino, Maria Antionetta; Billings, Linda; Mankins, John; Race, Margaret

    2012-01-01

    In response to the growing importance of space exploration in future planning, the Committee on Space Research (COSPAR) Panel on Exploration (PEX) was chartered to provide independent scientific advice to support the development of exploration programs and to safeguard the potential scientific assets of solar system objects. In this report, PEX elaborates a stepwise approach to achieve a new level of space cooperation that can help develop world-wide capabilities in space science and exploration and support a transition that will lead to a global space exploration program. The proposed stepping stones are intended to transcend cross-cultural barriers, leading to the development of technical interfaces and shared legal frameworks and fostering coordination and cooperation on a broad front. Input for this report was drawn from expertise provided by COSPAR Associates within the international community and via the contacts they maintain in various scientific entities. The report provides a summary and synthesis of science roadmaps and recommendations for planetary exploration produced by many national and international working groups, aiming to encourage and exploit synergies among similar programs. While science and technology represent the core and, often, the drivers for space exploration, several other disciplines and their stakeholders (Earth science, space law, and others) should be more robustly interlinked and involved than they have been to date. The report argues that a shared vision is crucial to this linkage, and to providing a direction that enables new countries and stakeholders to join and engage in the overall space exploration effort. Building a basic space technology capacity within a wider range of countries, ensuring new actors in space act responsibly, and increasing public awareness and engagement are concrete steps that can provide a broader interest in space exploration, worldwide, and build a solid basis for program sustainability. By engaging

  13. a Web Service Approach for Linking Sensors and Cellular Spaces

    Science.gov (United States)

    Isikdag, U.

    2013-09-01

    More and more devices are starting to be connected to the Internet. In the future the Internet will not only be a communication medium for people; it will in fact be a communication environment for devices. The connected devices, also referred to as Things, will have the ability to interact with other devices over the Internet, i.e. to (i) provide information in interoperable form and (ii) consume/utilize such information with the help of sensors embedded in them. This overall concept is known as the Internet-of-Things (IoT). It requires new system architectures to be investigated for establishing relations between spaces and sensors. The research presented in this paper elaborates on an architecture developed with this aim, i.e. linking spaces and sensors using a RESTful approach. The objective is making spaces aware of (sensor-embedded) devices, and making devices aware of spaces, in a loosely coupled way (i.e. a state/usage/function change in a space would not have an effect on sensors; similarly, a location/state/usage/function change in a sensor would not have any effect on spaces). The proposed architecture also enables the automatic assignment of sensors to spaces depending on space geometry and sensor location.
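
    The geometry-driven assignment step can be sketched in a few lines; the class names, URI scheme, and bounding-box footprints below are illustrative assumptions, not the paper's API.

    ```python
    # Illustrative sketch: assign sensors to spaces from geometry alone, so
    # spaces and sensors stay loosely coupled (neither stores the other).
    from dataclasses import dataclass

    @dataclass
    class Space:
        uri: str
        bbox: tuple      # (xmin, ymin, xmax, ymax), a simplified footprint

    @dataclass
    class Sensor:
        uri: str
        location: tuple  # (x, y)

    def assign(sensors, spaces):
        """Map each space URI to the URIs of sensors inside its footprint."""
        result = {s.uri: [] for s in spaces}
        for sensor in sensors:
            x, y = sensor.location
            for space in spaces:
                xmin, ymin, xmax, ymax = space.bbox
                if xmin <= x <= xmax and ymin <= y <= ymax:
                    result[space.uri].append(sensor.uri)
        return result

    rooms = [Space("/spaces/room-101", (0, 0, 5, 4))]
    nodes = [Sensor("/sensors/temp-7", (2.5, 1.0))]
    print(assign(nodes, rooms))   # {'/spaces/room-101': ['/sensors/temp-7']}
    ```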

  14. Structured ecosystem-scale approach to marine water quality management

    CSIR Research Space (South Africa)

    Taljaard, Susan

    2006-10-01

    Full Text Available These, in turn, created the need for holistic and integrated frameworks within which to design and implement environmental management programmes. A structured ecosystem-scale approach for the design and implementation of marine water quality management programmes developed by the CSIR (South Africa) in response to recent advances in policies...

  15. Automatic Measurement in Large-Scale Space with the Laser Theodolite and Vision Guiding Technology

    Directory of Open Access Journals (Sweden)

    Bin Wu

    2013-01-01

    Full Text Available The multitheodolite intersection measurement is a traditional approach to coordinate measurement in large-scale space. However, the procedure of manual labeling and aiming results in a low automation level and low measuring efficiency, and the measurement accuracy is easily affected by manual aiming error. Based on traditional theodolite measuring methods, this paper introduces the mechanism of the vision measurement principle and presents a novel automatic measurement method for large-scale space and large workpieces (equipment), combining laser theodolite measuring and vision guiding technologies. The measuring mark is established on the surface of the measured workpiece by the collimating laser, which is coaxial with the sight-axis of the theodolite, so cooperation targets or manual marks are no longer needed. With the theoretical model data and multiresolution visual imaging and tracking technology, the method realizes automatic, quick, and accurate measurement of large workpieces in large-scale space. Meanwhile, the impact of human error is reduced and the measuring efficiency is improved. Therefore, this method has significant ramifications for the measurement of large workpieces, such as measuring the geometric appearance characteristics of ships, large aircraft, and spacecraft, and deformation monitoring of large buildings and dams.
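
    The geometric core of multitheodolite intersection, where each instrument contributes a pointing ray and the coordinate is the least-squares intersection of the rays, can be sketched as follows (a simplified model of the measurement step only; the paper's contribution is the laser marking and vision tracking around it):

    ```python
    # Least-squares intersection of pointing rays from several stations.
    # Assumes the rays are not all parallel, so the normal matrix is invertible.
    import numpy as np

    def closest_point_to_rays(rays):
        """Point minimizing summed squared distance to rays (station, unit dir)."""
        A = np.zeros((3, 3)); b = np.zeros(3)
        for s, d in rays:
            P = np.eye(3) - np.outer(d, d)   # projector normal to the ray
            A += P; b += P @ np.asarray(s, float)
        return np.linalg.solve(A, b)

    # Two stations sighting the same target point:
    target = np.array([5.0, 5.0, 2.0])
    stations = [np.zeros(3), np.array([10.0, 0.0, 0.0])]
    rays = [(s, (target - s) / np.linalg.norm(target - s)) for s in stations]
    print(closest_point_to_rays(rays))   # ~ [5. 5. 2.]
    ```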

  16. Generalized Wigner functions in curved spaces: A new approach

    International Nuclear Information System (INIS)

    Kandrup, H.E.

    1988-01-01

    It is well known that, given a quantum field in Minkowski space, one can define Wigner functions f_W^N(x_1,p_1,...,x_N,p_N) which (a) are convenient to analyze since, unlike the field itself, they are c-number quantities and (b) can be interpreted in a limited sense as "quantum distribution functions." Recently, Winter and Calzetta, Habib and Hu have shown one way in which these flat-space Wigner functions can be generalized to a curved-space setting, deriving thereby approximate kinetic equations which make sense "quasilocally" for "short-wavelength modes." This paper suggests a completely orthogonal approach for defining curved-space Wigner functions which generalizes instead an object such as the Fourier-transformed f_W^1(k,p), which is effectively a two-point function viewed in terms of the "natural" creation and annihilation operators a^†(p-k/2) and a(p+k/2). The approach suggested here lacks the precise phase-space interpretation implicit in the approach of Winter or Calzetta, Habib, and Hu, but it is useful in that (a) it is geared to handle any "natural" mode decomposition, so that (b) it can facilitate exact calculations at least in certain limits, such as for a source-free linear field in a static spacetime.

  17. A simple coordinate space approach to three-body problems ...

    Indian Academy of Sciences (India)

    We show how to treat the dynamics of an asymmetric three-body system consisting of one heavy and two identical light particles in a simple coordinate space variational approach. The method is constructive and gives an efficient way of resolving a three-body system to an effective two-body system. It is illustrated by ...

  18. Bayesian LASSO, scale space and decision making in association genetics.

    Science.gov (United States)

    Pasanen, Leena; Holmström, Lasse; Sillanpää, Mikko J

    2015-01-01

    LASSO is a penalized regression method that facilitates model fitting in situations where there are as many, or even more, explanatory variables than observations, and only a few variables are relevant in explaining the data. We focus on the Bayesian version of LASSO and consider four problems that need special attention: (i) controlling false positives, (ii) multiple comparisons, (iii) collinearity among explanatory variables, and (iv) the choice of the tuning parameter that controls the amount of shrinkage and the sparsity of the estimates. The particular application considered is association genetics, where LASSO regression can be used to find links between chromosome locations and phenotypic traits in a biological organism. However, the proposed techniques are relevant also in other contexts where LASSO is used for variable selection. We separate the true associations from false positives using the posterior distribution of the effects (regression coefficients) provided by Bayesian LASSO. We propose to solve the multiple comparisons problem by using simultaneous inference based on the joint posterior distribution of the effects. Bayesian LASSO also tends to distribute an effect among collinear variables, making detection of an association difficult. We propose to solve this problem by considering not only individual effects but also their functionals (i.e. sums and differences). Finally, whereas in Bayesian LASSO the tuning parameter is often regarded as a random variable, we instead adopt a scale space view and consider a whole range of fixed tuning parameters. The effect estimates and the associated inference are considered for all tuning parameters in the selected range and the results are visualized with color maps that provide useful insights into data and the association problem considered. The methods are illustrated using two sets of artificial data and one real data set, all representing typical settings in association genetics.
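
    The fixed-tuning-parameter scale-space view can be sketched with an ordinary (non-Bayesian) LASSO as a stand-in; the synthetic data, penalty grid, and persistence threshold below are illustrative assumptions.

    ```python
    # Scale-space view of LASSO: fit over a whole range of fixed penalties
    # and inspect how each effect persists across scales.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    X = rng.standard_normal((60, 200))          # more predictors than samples
    beta = np.zeros(200); beta[[10, 50, 120]] = (2.0, -1.5, 1.0)
    y = X @ beta + 0.5 * rng.standard_normal(60)

    alphas = np.logspace(-2, 0, 25)             # the "scale" axis
    paths = np.array([Lasso(alpha=a, max_iter=10000).fit(X, y).coef_
                      for a in alphas])         # (n_alphas, n_predictors)

    # Effects that stay nonzero across a wide band of scales are the
    # candidate associations; `paths` can be rendered as a color map.
    persistent = np.where((np.abs(paths) > 1e-8).mean(axis=0) > 0.6)[0]
    print(persistent)
    ```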

  19. A Proposal for the Common Safety Approach of Space Programs

    Science.gov (United States)

    Grimard, Max

    2002-01-01

    For all applications, business and systems related to space programs, quality is mandatory and is a key factor for technical as well as economic performance. Up to now, the differences in applications (launchers, manned space-flight, sciences, telecommunications, Earth observation, planetary exploration, etc.) and the differences in technical culture and background of the leading countries (USA, Russia, Europe) have generally led to different approaches in terms of standards and processes for quality. At a time when international cooperation is quite usual for institutional programs and globalization is the key word for commercial business, it is considered of prime importance to aim at common standards and approaches for quality in space programs. For that reason, the International Academy of Astronautics has set up a Study Group whose mandate is to "Make recommendations to improve the Quality, Reliability, Efficiency, and Safety of space programmes, taking into account the overall environment in which they operate: economical constraints, harsh environments, space weather, long life, no maintenance, autonomy, international co-operation, norms and standards, certification." The paper introduces the activities of this Study Group and describes a first list of topics which should be addressed. Through this paper it is expected to open the discussion to update and enlarge this list of topics and to call for contributors to this Study Group.

  20. Approach to an Affordable and Sustainable Space Transportation System

    Science.gov (United States)

    McCleskey, Carey M.; Rhodes, R. E.; Robinson, J. W.; Henderson, E. M.

    2012-01-01

    This paper describes an approach and a general procedure for creating space transportation architectural concepts that are at once affordable and sustainable. Previous papers by the authors and other members of the Space Propulsion Synergy Team (SPST) focused on a functional system breakdown structure for an architecture and on the definition of high-payoff design techniques with a technology integration strategy. This paper follows up by using a structured process that derives architectural solutions focused on achieving life cycle affordability and sustainability. Further, the paper includes an example concept that integrates key design techniques discussed in previous papers.

  1. Requirements and approach for a space tourism launch system

    Science.gov (United States)

    Penn, Jay P.; Lindley, Charles A.

    2003-01-01

    Market surveys suggest that a viable space tourism industry will require flight rates about two orders of magnitude higher than those required for conventional spacelift. Although enabling round-trip cost goals for a viable space tourism business are about $240/pound ($529/kg), or $72,000/passenger round-trip, goals should be about $50/pound ($110/kg), or approximately $15,000 for a typical passenger and baggage. The lower price will probably open space tourism to the general population. Vehicle reliabilities must approach those of commercial aircraft as closely as possible. This paper addresses the development of spaceplanes optimized for the ultra-high flight rate and high reliability demands of the space tourism mission. It addresses the fundamental operability, reliability, and cost drivers needed to satisfy this mission need. Figures of merit similar to those used to evaluate the economic viability of conventional commercial aircraft are developed, including items such as payload/vehicle dry weight, turnaround time, propellant cost per passenger, and insurance and depreciation costs, which show that infrastructure can be developed for a viable space tourism industry. A reference spaceplane design optimized for space tourism is described. Subsystem allocations for reliability, operability, and costs are made and a route to developing such a capability is discussed. The vehicle's ability to satisfy the traditional spacelift market is also shown.

  2. Space Power Free-Piston Stirling Engine Scaling Study

    Science.gov (United States)

    Jones, D.

    1989-01-01

    The design feasibility study of a single-cylinder free-piston Stirling engine/linear alternator (FPSE/LA) power module generating 150 kW-electric (kWe) is documented, along with the determination of the module's maximum feasible power level. The power module configuration was specified to be a single-cylinder (single piston, single displacer) FPSE/LA, with tuning capacitors if required. The design requirements were as follows: (1) maximum electrical power output; (2) power module thermal efficiency equal to or greater than 20 percent at a specific mass of 5 to 8 kg/kWe; (3) heater wall temperature/cooler wall temperature = 1050 K/525 K; (4) sodium heat-pipe heat transport system, pumped-loop NaK (sodium-potassium eutectic mixture) rejection system; (5) maximum power module vibration amplitude = 0.0038 cm; and (6) design life = 7 years (60,000 hr). The results show that a single-cylinder FPSE/LA is capable of meeting program goals and has attractive scaling attributes over the power range from 25 to 150 kWe. Scaling beyond the 150 kWe power level, the power module efficiency falls and the power module specific mass reaches 10 kg/kWe at a power output of 500 kWe. A discussion of scaling rules for the engine, alternator, and heat transport systems is presented, along with a detailed description of the conceptual design of a 150 kWe power module that meets the requirements. Included is a discussion of the design of a dynamic balance system. A parametric study of power module performance conducted over the power output range of 25 to 150 kWe for temperature ratios of 1.7, 2.0, 2.5, and 3.0 is presented and discussed. The results show that as the temperature ratio decreases, the efficiency falls and the specific mass increases. At a temperature ratio of 1.7, the 150 kWe power module cannot satisfy both efficiency and specific mass goals. As the power level increases from 25 to 150 kWe at a fixed temperature ratio, power

  3. Approach to transaction management for Space Station Freedom

    Science.gov (United States)

    Easton, C. R.; Cressy, Phil; Ohnesorge, T. E.; Hector, Garland

    1990-01-01

    The Space Station Freedom Manned Base (SSFMB) will support the operation of the many payloads that may be located within the pressurized modules or on external attachment points. The transaction management (TM) approach presented provides a set of overlapping features that will assure the effective and safe operation of the SSFMB and provide a schedule that makes potentially hazardous operations safe, allocates resources within the capability of the resource providers, and maintains an environment conducive to the operations planned. This approach provides for targets of opportunity and schedule adjustments that give the operators the flexibility to conduct a vast majority of their operations with no conscious involvement with the TM function.

  4. State space approach to mixed boundary value problems.

    Science.gov (United States)

    Chen, C. F.; Chen, M. M.

    1973-01-01

    A state-space procedure for the formulation and solution of mixed boundary value problems is established. This procedure is a natural extension of the method used in initial value problems; however, certain special theorems and rules must be developed. The scope of the applications of the approach includes beam, arch, and axisymmetric shell problems in structural analysis, boundary layer problems in fluid mechanics, and eigenvalue problems for deformable bodies. Many classical methods in these fields developed by Holzer, Prohl, Myklestad, Thomson, Love-Meissner, and others can be either simplified or unified under new light shed by the state-variable approach. A beam problem is included as an illustration.
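
    A minimal sketch of the state-space idea for the beam case, in the transfer-matrix spirit of the Holzer/Myklestad methods the abstract cites; the sign conventions and the clamped-free example are assumptions of the sketch, not taken from the paper.

    ```python
    # Propagate the beam state [w, theta, M, V] across a uniform segment and
    # impose boundary conditions at both ends (Euler-Bernoulli, no distributed
    # load; simplified sign conventions).
    import numpy as np

    def field_matrix(L, EI):
        return np.array([
            [1, L, L**2/(2*EI), L**3/(6*EI)],
            [0, 1, L/EI,        L**2/(2*EI)],
            [0, 0, 1,           L],
            [0, 0, 0,           1],
        ])

    # Cantilever, length 2, EI = 1, unit tip load: clamped end has w = 0,
    # theta = 0; the free end must satisfy M_tip = 0, V_tip = P.
    L, EI, P = 2.0, 1.0, 1.0
    T = field_matrix(L, EI)
    A = T[2:4, 2:4]                       # how (M0, V0) map to (M_tip, V_tip)
    M0, V0 = np.linalg.solve(A, [0.0, P])
    w_tip = T[0, 2]*M0 + T[0, 3]*V0
    print(w_tip)   # magnitude P*L^3/(3*EI) = 8/3; sign follows the convention
    ```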

  5. Approach to transaction management for Space Station Freedom

    Science.gov (United States)

    Easton, C. R.; Cressy, Phil; Ohnesorge, T. E.; Hector, Garland

    1989-01-01

    An approach to managing the operations of the Space Station Freedom based on their external effects is described. It is assumed that there is a conflict-free schedule that, if followed, will allow only appropriate operations to occur. The problem is then reduced to that of ensuring that the operations initiated are within the limits allowed by the schedule, or that the external effects of such operations are within those allowed by the schedule. The main features of the currently adopted transaction management approach are discussed.

  6. A real-space stochastic density matrix approach for density functional electronic structure.

    Science.gov (United States)

    Beck, Thomas L

    2015-12-21

    The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful considering a future involving increasingly parallel computing architectures. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral as opposed to the extensive matrix operations in traditional approaches.

  7. Scaling laws for trace impurity confinement: a variational approach

    International Nuclear Information System (INIS)

    Thyagaraja, A.; Haas, F.A.

    1990-01-01

    A variational approach is outlined for the deduction of impurity confinement scaling laws. Given the forms of the diffusive and convective components of the impurity particle flux, we present a variational principle for the impurity confinement time in terms of the diffusion time scale and the convection parameter, which is a non-dimensional measure of the size of the convective flux relative to the diffusive flux. These results are very general and apply irrespective of whether the transport fluxes are of theoretical or empirical origin. The impurity confinement time scales exponentially with the convection parameter in cases of practical interest. (orig.)

  8. New approximation of a scale space kernel on SE(3) and applications in neuroimaging

    NARCIS (Netherlands)

    Portegies, J.M.; Sanguinetti, G.R.; Meesters, S.P.L.; Duits, R.

    2015-01-01

    We provide a new, analytic kernel for scale space filtering of dMRI data. The kernel is an approximation for the Green's function of a hypo-elliptic diffusion on the 3D rigid body motion group SE(3), for fiber enhancement in dMRI. The enhancements are described by linear scale space PDEs in the

  9. A structured ecosystem-scale approach to marine water quality ...

    African Journals Online (AJOL)

    These, in turn, created the need for holistic and integrated frameworks within which to design and implement environmental management programmes. A structured ecosystem-scale approach for the design and implementation of marine water quality management programmes developed by the CSIR (South Africa) in ...

  10. Multiple-scale approach for the expansion scaling of superfluid quantum gases

    International Nuclear Information System (INIS)

    Egusquiza, I. L.; Valle Basagoiti, M. A.; Modugno, M.

    2011-01-01

    We present a general method, based on a multiple-scale approach, for deriving the perturbative solutions of the scaling equations governing the expansion of superfluid ultracold quantum gases released from elongated harmonic traps. We discuss how to treat the secular terms appearing in the usual naive expansion in the trap asymmetry parameter ε and calculate the next-to-leading correction for the asymptotic aspect ratio, with significant improvement over the previous proposals.

  11. The Dynameomics Entropy Dictionary: A Large-Scale Assessment of Conformational Entropy across Protein Fold Space.

    Science.gov (United States)

    Towse, Clare-Louise; Akke, Mikael; Daggett, Valerie

    2017-04-27

    Molecular dynamics (MD) simulations contain considerable information with regard to the motions and fluctuations of a protein, the magnitude of which can be used to estimate conformational entropy. Here we survey conformational entropy across protein fold space using the Dynameomics database, which represents the largest existing data set of protein MD simulations for representatives of essentially all known protein folds. We provide an overview of MD-derived entropies accounting for all possible degrees of dihedral freedom on an unprecedented scale. Although different side chains might be expected to impose varying restrictions on the conformational space that the backbone can sample, we found that the backbone entropy and side chain size are not strictly coupled. An outcome of these analyses is the Dynameomics Entropy Dictionary, the contents of which have been compared with entropies derived by other theoretical approaches and experiment. As might be expected, the conformational entropies scale linearly with the number of residues, demonstrating that conformational entropy is an extensive property of proteins. The calculated conformational entropies of folding agree well with previous estimates. Detailed analysis of specific cases identifies deviations in conformational entropy from the average values that highlight how conformational entropy varies with sequence, secondary structure, and tertiary fold. Notably, α-helices have lower entropy on average than do β-sheets, and both are lower than coil regions.
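
    The per-degree-of-freedom entropy estimate can be sketched directly from the Gibbs formula; this is a simplified estimator for a single dihedral, omitting the bin-width and correlation corrections a production pipeline would need.

    ```python
    # Simplified estimator: conformational entropy of one dihedral angle from
    # MD samples via S = -R * sum_i p_i ln p_i over angular bins.
    import numpy as np

    R = 8.314  # gas constant, J/(mol*K)

    def dihedral_entropy(angles_rad, n_bins=36):
        counts, _ = np.histogram(angles_rad, bins=n_bins, range=(-np.pi, np.pi))
        p = counts / counts.sum()
        p = p[p > 0]                        # 0 * ln 0 -> 0
        return -R * np.sum(p * np.log(p))   # J/(mol*K), per dihedral

    # A rigid dihedral gives ~0; a uniform one gives R*ln(n_bins).
    samples = np.random.default_rng(1).vonmises(0.0, 4.0, size=5000)
    print(dihedral_entropy(samples))
    ```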

  12. Quantitative Assessment of Thermodynamic Constraints on the Solution Space of Genome-Scale Metabolic Models

    Science.gov (United States)

    Hamilton, Joshua J.; Dwivedi, Vivek; Reed, Jennifer L.

    2013-01-01

    Constraint-based methods provide powerful computational techniques to allow understanding and prediction of cellular behavior. These methods rely on physiochemical constraints to eliminate infeasible behaviors from the space of available behaviors. One such constraint is thermodynamic feasibility, the requirement that intracellular flux distributions obey the laws of thermodynamics. The past decade has seen several constraint-based methods that interpret this constraint in different ways, including those that are limited to small networks, rely on predefined reaction directions, and/or neglect the relationship between reaction free energies and metabolite concentrations. In this work, we utilize one such approach, thermodynamics-based metabolic flux analysis (TMFA), to make genome-scale, quantitative predictions about metabolite concentrations and reaction free energies in the absence of prior knowledge of reaction directions, while accounting for uncertainties in thermodynamic estimates. We applied TMFA to a genome-scale network reconstruction of Escherichia coli and examined the effect of thermodynamic constraints on the flux space. We also assessed the predictive performance of TMFA against gene essentiality and quantitative metabolomics data, under both aerobic and anaerobic, and optimal and suboptimal growth conditions. Based on these results, we propose that TMFA is a useful tool for validating phenotypes and generating hypotheses, and that additional types of data and constraints can improve predictions of metabolite concentrations. PMID:23870272
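
    The thermodynamic constraint at the heart of TMFA can be sketched in isolation (a toy feasibility check, not the paper's full mixed-integer formulation; the reaction, concentration bounds, and function names are illustrative):

    ```python
    # A reaction can carry forward flux only if its transformed Gibbs energy
    # can be negative for some metabolite concentrations within bounds:
    # dG = dG'0 + R*T * sum_i s_i * ln(c_i).
    import numpy as np

    R, T = 8.314e-3, 298.15   # kJ/(mol*K), K

    def delta_g_range(dg0, stoich, ln_c_lo, ln_c_hi):
        """Min/max of dG over the box of allowed log-concentrations."""
        s = np.asarray(stoich, dtype=float)
        lo = np.where(s > 0, ln_c_lo, ln_c_hi)   # extremes chosen per sign
        hi = np.where(s > 0, ln_c_hi, ln_c_lo)
        return dg0 + R*T*np.dot(s, lo), dg0 + R*T*np.dot(s, hi)

    # Example: A -> B with dG'0 = +5 kJ/mol; concentrations 1 uM .. 10 mM
    lnlo, lnhi = np.log(1e-6), np.log(1e-2)
    gmin, gmax = delta_g_range(5.0, [-1.0, +1.0],
                               np.array([lnlo, lnlo]), np.array([lnhi, lnhi]))
    print(gmin < 0)   # True: forward flux is thermodynamically allowed
    ```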

  13. A biologically inspired scale-space for illumination invariant feature detection

    International Nuclear Information System (INIS)

    Vonikakis, Vasillios; Chrysostomou, Dimitrios; Kouskouridas, Rigas; Gasteratos, Antonios

    2013-01-01

    This paper presents a new illumination invariant operator, combining the nonlinear characteristics of biological center-surround cells with the classic difference of Gaussians operator. It specifically targets the underexposed image regions, exhibiting increased sensitivity to low contrast, while not affecting performance in the correctly exposed ones. The proposed operator can be used to create a scale-space, which in turn can be a part of a SIFT-based detector module. The main advantage of this illumination invariant scale-space is that, using just one global threshold, keypoints can be detected in both dark and bright image regions. In order to evaluate the degree of illumination invariance that the proposed, as well as other, existing, operators exhibit, a new benchmark dataset is introduced. It features a greater variety of imaging conditions, compared to existing databases, containing real scenes under various degrees and combinations of uniform and non-uniform illumination. Experimental results show that the proposed detector extracts a greater number of features, with a high level of repeatability, compared to other approaches, for both uniform and non-uniform illumination. This, along with its simple implementation, renders the proposed feature detector particularly appropriate for outdoor vision systems, working in environments under uncontrolled illumination conditions. (paper)
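
    The linear core that the proposed operator augments, a difference-of-Gaussians scale space, can be sketched as follows; the parameter values are the usual SIFT-style defaults, assumed here for illustration.

    ```python
    # Difference-of-Gaussians scale space: blur at geometrically spaced scales
    # and take adjacent differences; keypoints are then local extrema of the
    # stack above a threshold (the paper's contribution is a center-surround
    # rescaling that lets one global threshold work in dark and bright regions).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dog_scale_space(image, sigma0=1.6, k=2**0.5, n_levels=5):
        """Stack of DoG responses at geometrically spaced scales."""
        sigmas = [sigma0 * k**i for i in range(n_levels + 1)]
        blurred = [gaussian_filter(image.astype(float), s) for s in sigmas]
        return np.stack([blurred[i + 1] - blurred[i] for i in range(n_levels)])
    ```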

  14. Quantitative assessment of thermodynamic constraints on the solution space of genome-scale metabolic models.

    Science.gov (United States)

    Hamilton, Joshua J; Dwivedi, Vivek; Reed, Jennifer L

    2013-07-16

    Constraint-based methods provide powerful computational techniques to allow understanding and prediction of cellular behavior. These methods rely on physiochemical constraints to eliminate infeasible behaviors from the space of available behaviors. One such constraint is thermodynamic feasibility, the requirement that intracellular flux distributions obey the laws of thermodynamics. The past decade has seen several constraint-based methods that interpret this constraint in different ways, including those that are limited to small networks, rely on predefined reaction directions, and/or neglect the relationship between reaction free energies and metabolite concentrations. In this work, we utilize one such approach, thermodynamics-based metabolic flux analysis (TMFA), to make genome-scale, quantitative predictions about metabolite concentrations and reaction free energies in the absence of prior knowledge of reaction directions, while accounting for uncertainties in thermodynamic estimates. We applied TMFA to a genome-scale network reconstruction of Escherichia coli and examined the effect of thermodynamic constraints on the flux space. We also assessed the predictive performance of TMFA against gene essentiality and quantitative metabolomics data, under both aerobic and anaerobic, and optimal and suboptimal growth conditions. Based on these results, we propose that TMFA is a useful tool for validating phenotypes and generating hypotheses, and that additional types of data and constraints can improve predictions of metabolite concentrations. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  15. Conceptual design of jewellery: a space-based aesthetics approach

    Directory of Open Access Journals (Sweden)

    Tzintzi Vaia

    2017-01-01

    Full Text Available Conceptual design is a field that offers various aesthetic approaches to the generation of nature-based product design concepts. Essentially, Conceptual Product Design (CPD) uses similarities based on geometrical forms and functionalities. Furthermore, the CAD-based freehand sketch is a primary conceptual tool in the early stages of the design process. The proposed Conceptual Product Design concept deals with jewellery inspired by space. Specifically, a number of galaxy features, such as galaxy shapes, wormholes, and graphical representations of planetary magnetic fields, are used as inspirations. These space-based design ideas at a conceptual level can lead to further opportunities for research and economic success of the jewellery industry. A number of illustrative case studies are presented, from which new opportunities for economic success can be derived.

  16. Implementing CDIO Approach in preparing engineers for Space Industry

    Directory of Open Access Journals (Sweden)

    Daneykin Yury

    2017-01-01

    Full Text Available The necessity to train highly qualified specialists has led to the development of a trajectory for training specialists for the space industry. Several steps have been undertaken to reach this goal. First, the University founded the Space Instrument Design Center, which promotes a wide range of initiatives in educating specialists, retraining specialists, carrying out research, and collaborating with profiled enterprises. The University also introduced the Elite Engineering Education system to attract talented students and help them follow individual trajectories toward becoming unique specialists. The paper discusses the targets that must be achieved to train such specialists. Moreover, the paper presents the compliance of these efforts with the CDIO Approach, which is widely used in leading universities to improve engineering programs.

  17. A Principled Approach to the Specification of System Architectures for Space Missions

    Science.gov (United States)

    McKelvin, Mark L. Jr.; Castillo, Robert; Bonanne, Kevin; Bonnici, Michael; Cox, Brian; Gibson, Corrina; Leon, Juan P.; Gomez-Mustafa, Jose; Jimenez, Alejandro; Madni, Azad

    2015-01-01

    Modern space systems are increasing in complexity and scale at an unprecedented pace. Consequently, innovative methods, processes, and tools are needed to cope with the increasing complexity of architecting these systems. A key systems challenge in practice is the ability to scale processes, methods, and tools used to architect complex space systems. Traditionally, the process for specifying space system architectures has largely relied on capturing the system architecture in informal descriptions that are often embedded within loosely coupled design documents and domain expertise. Such informal descriptions often lead to misunderstandings between design teams, ambiguous specifications, difficulty in maintaining consistency as the architecture evolves throughout the system development life cycle, and costly design iterations. Therefore, traditional methods are becoming increasingly inefficient to cope with ever-increasing system complexity. We apply the principles of component-based design and platform-based design to the development of the system architecture for a practical space system to demonstrate the feasibility of our approach using SysML. Our results show that we are able to apply a systematic design method to manage system complexity, thus enabling effective data management, semantic coherence, and traceability across different levels of abstraction in the design chain. Just as important, our approach enables interoperability among heterogeneous tools in a concurrent-engineering, model-based design environment.

  18. Hybrid x-space: a new approach for MPI reconstruction.

    Science.gov (United States)

    Tateo, A; Iurino, A; Settanni, G; Andrisani, A; Stifanelli, P F; Larizza, P; Mazzia, F; Mininni, R M; Tangaro, S; Bellotti, R

    2016-06-07

    Magnetic particle imaging (MPI) is a new medical imaging technique capable of recovering the distribution of superparamagnetic particles from their measured induced signals. In the literature there are two main MPI reconstruction techniques: measurement-based (MB) and x-space (XS). The MB method is expensive because it requires a long calibration procedure as well as a reconstruction phase that can be numerically costly. On the other hand, the XS method is simpler than MB, but exact knowledge of the field free point (FFP) motion is essential for its implementation. Our simulation work focuses on the implementation of a new approach for MPI reconstruction, called hybrid x-space (HXS), which combines the two previous methods. Specifically, our approach is based on XS reconstruction because it requires the knowledge of the FFP position and velocity at each time instant. The difference with respect to the original XS formulation is how the FFP velocity is computed: we estimate it from the experimental measurements of the calibration scans, typical of the MB approach. Moreover, a compressive sensing technique is applied in order to reduce the calibration time, using a smaller number of sampling positions. Simulations highlight that the HXS and XS methods give similar results. Furthermore, an appropriate use of compressive sensing is crucial for obtaining a good balance between time reduction and reconstructed image quality. Our proposal is suitable for open-geometry configurations of human-size devices, where incidental factors could make the currents, the fields and the FFP trajectory irregular.
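
    The x-space core that HXS builds on can be sketched in one dimension; the array names and shapes are illustrative, and the hybrid step would substitute an FFP velocity estimated from calibration scans for the nominal one.

    ```python
    # Toy 1D x-space gridding: divide the induced signal by the instantaneous
    # FFP speed and accumulate it at the FFP position on an image grid.
    import numpy as np

    def xspace_reconstruct(signal, ffp_pos, ffp_vel, grid):
        img = np.zeros_like(grid)
        hits = np.zeros_like(grid)
        comp = signal / np.maximum(np.abs(ffp_vel), 1e-9)  # velocity compensation
        idx = np.clip(np.searchsorted(grid, ffp_pos), 0, len(grid) - 1)
        np.add.at(img, idx, comp)       # accumulate compensated samples
        np.add.at(hits, idx, 1.0)
        return img / np.maximum(hits, 1.0)   # average samples per pixel
    ```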

  19. Space-coiling fractal metamaterial with multi-bandgaps on subwavelength scale

    Science.gov (United States)

    Man, Xianfeng; Liu, Tingting; Xia, Baizhan; Luo, Zhen; Xie, Longxiang; Liu, Jian

    2018-06-01

    Acoustic metamaterials are remarkably different from conventional materials, as they can flexibly manipulate and control the propagation of sound waves. Unlike the locally resonant metamaterials introduced in earlier studies, our design is an ultraslow artificial structure with a sound speed much lower than that in air. In this paper, the space-coiling approach is proposed for achieving an artificial metamaterial for extremely low-frequency airborne sound. In addition, the self-similar fractal technique is utilized for designing space-coiling Mie-resonance-based metamaterials (MRMMs) to obtain a band-dispersive spectrum. The band structures of two-dimensional (2D) acoustic metamaterials with different fractal levels are illustrated using the finite element method. The low-frequency bandgap can easily be formed, and multi-bandgap properties are observed in high-level fractals. Furthermore, the designed MRMMs with higher-order fractal space coiling show good robustness against irregular arrangement. Moreover, the proposed artificial structure was found to modify and control the radiation field arbitrarily. Thus, this work provides useful guidelines for the design of acoustic filtering devices and acoustic wavefront shaping applications on the subwavelength scale.

  20. A brain MRI bias field correction method created in the Gaussian multi-scale space

    Science.gov (United States)

    Chen, Mingsheng; Qin, Mingxin

    2017-07-01

    A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to subsequent image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by convolving the inhomogeneous MR image with a two-dimensional Gaussian function. In the Gaussian multi-scale space, the method retrieves the image details from the difference between the original image and the convolved image. It then obtains an image whose inhomogeneity is eliminated by a weighted sum of the image details in each layer of the space. Next, the bias field-corrected MR image is retrieved after a gamma correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated the superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
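
    A minimal sketch of the described pipeline, assuming equal layer weights and a fixed gamma; the scale and weight choices are illustrative, not the paper's tuned values.

    ```python
    # Gaussian multi-scale bias correction: take per-scale detail images
    # (original minus blurred), recombine with weights, then gamma-correct.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def correct_bias(image, sigmas=(2, 4, 8, 16), weights=None, gamma=0.8):
        img = image.astype(float)
        details = [img - gaussian_filter(img, s) for s in sigmas]
        w = weights if weights is not None else [1.0 / len(sigmas)] * len(sigmas)
        flat = sum(wi * d for wi, d in zip(w, details))  # inhomogeneity removed
        flat -= flat.min()
        flat /= max(flat.max(), 1e-9)                    # normalize to [0, 1]
        return flat ** gamma                             # gamma correction
    ```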

  1. Nuclear disassembly time scales using space time correlations

    Energy Technology Data Exchange (ETDEWEB)

    Durand, D.; Colin, J.; Lecolley, J.F.; Meslin, C.; Aboufirassi, M.; Bougault, R.; Brou, R. [Caen Univ., 14 (France). Lab. de Physique Corpusculaire; Bilwes, B.; Cosmo, F. [Strasbourg-1 Univ., 67 (France); Galin, J. [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France); and others

    1996-09-01

    The lifetime, τ, with respect to multifragmentation of highly excited nuclei is deduced from the analysis of strongly damped Pb+Au collisions at 29 MeV/u. The method is based on the study of space-time correlations induced by 'proximity' effects between fragments emitted by the two primary products of the reaction and gives the time between the re-separation of the two primary products and the subsequent multifragment decay of one partner. (author). 2 refs.

  2. Nuclear disassembly time scales using space time correlations

    International Nuclear Information System (INIS)

    Durand, D.; Colin, J.; Lecolley, J.F.; Meslin, C.; Aboufirassi, M.; Bougault, R.; Brou, R.; Galin, J.; and others.

    1996-01-01

    The lifetime, τ, with respect to multifragmentation of highly excited nuclei is deduced from the analysis of strongly damped Pb+Au collisions at 29 MeV/u. The method is based on the study of space-time correlations induced by 'proximity' effects between fragments emitted by the two primary products of the reaction and gives the time between the re-separation of the two primary products and the subsequent multifragment decay of one partner. (author)

  3. Verification of Space Weather Forecasts using Terrestrial Weather Approaches

    Science.gov (United States)

    Henley, E.; Murray, S.; Pope, E.; Stephenson, D.; Sharpe, M.; Bingham, S.; Jackson, D.

    2015-12-01

    The Met Office Space Weather Operations Centre (MOSWOC) provides a range of 24/7 operational space weather forecasts, alerts, and warnings, which provide valuable information on space weather that can degrade electricity grids, radio communications, and satellite electronics. Forecasts issued include arrival times of coronal mass ejections (CMEs), and probabilistic forecasts for flares, geomagnetic storm indices, and energetic particle fluxes and fluences. These forecasts are produced twice daily using a combination of output from models such as Enlil, near-real-time observations, and forecaster experience. Verification of forecasts is crucial for users, researchers, and forecasters to understand the strengths and limitations of forecasts, and to assess forecaster added value. To this end, the Met Office (in collaboration with Exeter University) has been adapting verification techniques from terrestrial weather, and has been working closely with the International Space Environment Service (ISES) to standardise verification procedures. We will present the results of part of this work, analysing forecast and observed CME arrival times and assessing skill using 2x2 contingency tables. These MOSWOC forecasts can be objectively compared to those produced by the NASA Community Coordinated Modelling Center, a useful benchmark. This approach cannot be taken for the other forecasts, as they are probabilistic and categorical (e.g., geomagnetic storm forecasts give probabilities of exceeding levels from minor to extreme). We will present appropriate verification techniques being developed to address these forecasts, such as the rank probability skill score, and compare forecasts against climatology and persistence benchmarks. As part of this, we will outline the use of discrete-time Markov chains to assess and improve the performance of our geomagnetic storm forecasts. We will also discuss work to adapt a terrestrial verification visualisation system to space weather, to help
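
    The 2x2 contingency-table assessment reduces to a few standard scores; a minimal sketch follows (the score selection is mine, not necessarily the full Met Office set).

    ```python
    # 2x2 contingency-table scores for event forecasts (e.g., CME arrival
    # within a time window): a = hits, b = false alarms, c = misses,
    # d = correct negatives.
    def contingency_scores(a, b, c, d):
        pod = a / (a + c)                     # probability of detection
        far = b / (a + b)                     # false alarm ratio
        hss = 2.0 * (a * d - b * c) / (
            (a + c) * (c + d) + (a + b) * (b + d))   # Heidke skill score
        return {"POD": pod, "FAR": far, "HSS": hss}

    print(contingency_scores(18, 7, 5, 40))
    ```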

  4. Toward multi-scale simulation of reconnection phenomena in space plasma

    Science.gov (United States)

    Den, M.; Horiuchi, R.; Usami, S.; Tanaka, T.; Ogawa, T.; Ohtani, H.

    2013-12-01

    Magnetic reconnection is considered to play an important role in space phenomena such as substorms in the Earth's magnetosphere. It is well known that magnetic reconnection is controlled by a microscopic kinetic mechanism. The frozen-in condition is broken due to particle kinetic effects, and collisionless reconnection is triggered when the current sheet is compressed to as thin as the ion kinetic scales under the influence of an external driving flow. On the other hand, the configuration of the magnetic field leading to the formation of the diffusion region is determined on macroscopic scales, and the topological change after reconnection is also expressed on macroscopic scales. Thus magnetic reconnection is a typical multi-scale phenomenon in which microscopic and macroscopic physics are strongly coupled. Recently, Horiuchi et al. developed an effective resistivity model based on particle-in-cell (PIC) simulation results obtained in the study of collisionless driven reconnection and applied it to a global magnetohydrodynamics (MHD) simulation of substorms in the Earth's magnetosphere. They showed that global three-dimensional MHD simulation reproduces global substorm behavior such as dipolarization and flux rope formation. Usami et al. developed a multi-hierarchy simulation model, in which macroscopic and microscopic physics are solved self-consistently and simultaneously. Based on the domain decomposition method, this model consists of three parts: an MHD algorithm for macroscopic global dynamics, a PIC algorithm for microscopic kinetic physics, and an interface algorithm to interlock the macro and micro hierarchies. They verified the interface algorithm by simulating plasma injection flow. In their latest work, this model was applied to collisionless reconnection in an open system and magnetic reconnection was successfully reproduced. In this paper, we describe our approach to clarifying multi-scale phenomena and report the current status. Our recent study on extending the MHD domain to the global system is presented. We

  5. Urban green spaces assessment approach to health, safety and environment

    Directory of Open Access Journals (Sweden)

    B. Akbari Neisiani

    2016-04-01

    Full Text Available The city is alive with dynamic systems, in which parks and urban green spaces have high strategic importance, helping to improve living conditions. Urban parks serve as visual landscapes with many benefits, such as reducing stress, reducing air pollution and producing oxygen, creating opportunities for people to participate in physical activities, providing an optimal environment for children, and decreasing noise pollution. Parks are so important that they are discussed as an indicator of urban development. Consequently, the design and maintenance of urban green spaces requires an integrated management system based on international standards of health, safety and the environment. In this study, Nezami Ganjavi Park (District 6 of Tehran) has been analyzed with an integrated-management-systems approach. In order to identify the status of the park in terms of the requirements of the management system, a checklist was prepared, based on previous studies and all of Tehran Municipality's considerations, and completed through a park survey and interviews with green space experts. The results showed that the utility of the health indicators was 92.33% (the highest), while the environmental and safety indicators were 72% and 84%, respectively. According to a SWOT analysis of Nezami Ganjavi Park, strengths include fire extinguishers, a first aid box, and annual testing of drinking water; an important weakness is the use of unseparated trash bins. As opportunities, there are some interesting factors for children and parents to spend free time. Finally, the most important threat is park facilities that are unsuitable for the disabled.

  6. A Mellin space approach to the conformal bootstrap

    Energy Technology Data Exchange (ETDEWEB)

    Gopakumar, Rajesh [International Centre for Theoretical Sciences (ICTS-TIFR),Survey No. 151, Shivakote, Hesaraghatta Hobli, Bangalore North 560 089 (India); Kaviraj, Apratim [Centre for High Energy Physics, Indian Institute of Science,C.V. Raman Avenue, Bangalore 560012 (India); Sen, Kallol [Centre for High Energy Physics, Indian Institute of Science,C.V. Raman Avenue, Bangalore 560012 (India); Kavli Institute for the Physics and Mathematics of the Universe (WPI),The University of Tokyo Institutes for Advanced Study, Kashiwa, Chiba 277-8583 (Japan); Sinha, Aninda [Centre for High Energy Physics, Indian Institute of Science,C.V. Raman Avenue, Bangalore 560012 (India)

    2017-05-05

    We describe in more detail our approach to the conformal bootstrap which uses the Mellin representation of CFT_d four point functions and expands them in terms of crossing symmetric combinations of AdS_{d+1} Witten exchange functions. We consider arbitrary external scalar operators and set up the conditions for consistency with the operator product expansion. Namely, we demand cancellation of spurious powers (of the cross ratios, in position space) which translate into spurious poles in Mellin space. We discuss two contexts in which we can immediately apply this method by imposing the simplest set of constraint equations. The first is the epsilon expansion. We mostly focus on the Wilson-Fisher fixed point as studied in an epsilon expansion about d=4. We reproduce Feynman diagram results for operator dimensions to O(ε^3) rather straightforwardly. This approach also yields new analytic predictions for OPE coefficients to the same order which fit nicely with recent numerical estimates for the Ising model (at ε=1). We will also mention some leading order results for scalar theories near three and six dimensions. The second context is a large spin expansion, in any dimension, where we are able to reproduce and go a bit beyond some of the results recently obtained using the (double) light cone expansion. We also have a preliminary discussion about numerical implementation of the above bootstrap scheme in the absence of a small parameter.

  7. A phase space approach to wave propagation with dispersion.

    Science.gov (United States)

    Ben-Benjamin, Jonathan S; Cohen, Leon; Loughlin, Patrick J

    2015-08-01

    A phase space approximation method for linear dispersive wave propagation with arbitrary initial conditions is developed. The results expand on a previous approximation in terms of the Wigner distribution of a single mode. In contrast to this previously considered single-mode case, the approximation presented here is for the full wave and is obtained by a different approach. This solution requires one to obtain (i) the initial modal functions from the given initial wave, and (ii) the initial cross-Wigner distribution between different modal functions. The full wave is the sum of modal functions. The approximation is obtained for general linear wave equations by transforming the equations to phase space, and then solving in the new domain. It is shown that each modal function of the wave satisfies a Schrödinger-type equation where the equivalent "Hamiltonian" operator is the dispersion relation corresponding to the mode and where the wavenumber is replaced by the wavenumber operator. Application to the beam equation is considered to illustrate the approach.
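
    The modal statement can be transcribed schematically as follows (the notation is mine, not the paper's):

    ```latex
    % Each modal function \psi_m satisfies a Schrodinger-type equation whose
    % "Hamiltonian" is the mode's dispersion relation \Omega_m with the
    % wavenumber promoted to an operator; the full wave is the sum of modes.
    \[
      i\,\frac{\partial \psi_m(x,t)}{\partial t}
        = \Omega_m\!\left(-i\,\frac{\partial}{\partial x}\right)\psi_m(x,t),
      \qquad
      u(x,t) = \sum_m \psi_m(x,t).
    \]
    ```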

  8. Comparison of two Minkowski-space approaches to heavy quarkonia

    Energy Technology Data Exchange (ETDEWEB)

    Leitao, Sofia; Biernat, Elmar P. [Universidade de Lisboa, CFTP, Instituto Superior Tecnico, Lisbon (Portugal); Li, Yang [Iowa State University, Department of Physics and Astronomy, Ames, IA (United States); College of William and Mary, Department of Physics, Williamsburg, VA (United States); Maris, Pieter; Vary, James P. [Iowa State University, Department of Physics and Astronomy, Ames, IA (United States); Pena, M.T. [Universidade de Lisboa, CFTP, Instituto Superior Tecnico, Lisbon (Portugal); Universidade de Lisboa, Departamento de Fisica, Instituto Superior Tecnico, Lisbon (Portugal); Stadler, Alfred [Universidade de Lisboa, CFTP, Instituto Superior Tecnico, Lisbon (Portugal); Universidade de Evora, Departamento de Fisica, Evora (Portugal)

    2017-10-15

    In this work we compare mass spectra and decay constants obtained from two recent, independent, and fully relativistic approaches to the quarkonium bound-state problem: the Basis Light-Front Quantization approach, where light-front wave functions are naturally formulated; and, the Covariant Spectator Theory (CST), based on a reorganization of the Bethe-Salpeter equation. Even though conceptually different, both solutions are obtained in Minkowski space. Comparisons of decay constants for more than ten states of charmonium and bottomonium show favorable agreement between the two approaches as well as with experiment where available. We also apply the Brodsky-Huang-Lepage prescription to convert the CST amplitudes into functions of light-front variables. This provides an ideal opportunity to investigate the similarities and differences at the level of the wave functions. Several qualitative features are observed in remarkable agreement between the two approaches even for the rarely addressed excited states. Leading-twist distribution amplitudes as well as parton distribution functions of heavy quarkonia are also analyzed. (orig.)

  9. Phase space properties of local observables and structure of scaling limits

    International Nuclear Information System (INIS)

    Buchholz, D.

    1995-05-01

    For any given algebra of local observables in relativistic quantum field theory there exists an associated scaling algebra which permits one to introduce renormalization group transformations and to construct the scaling (short distance) limit of the theory. On the basis of this result it is discussed how the phase space properties of a theory determine the structure of its scaling limit. Bounds on the number of local degrees of freedom appearing in the scaling limit are given which allow one to distinguish between theories with classical and quantum scaling limits. The results can also be used to establish physically significant algebraic properties of the scaling limit theories, such as the split property. (orig.)

  10. Space Culture: Innovative Cultural Approaches To Public Engagement With Astronomy, Space Science And Astronautics

    Science.gov (United States)

    Malina, Roger F.

    2012-01-01

    In recent years a number of cultural organizations have established ongoing programs of public engagement with astronomy, space science and astronautics. Many involve elements of citizen science initiatives, artists' residencies in scientific laboratories and agencies, art and science festivals, and social network projects, as well as more traditional exhibition venues. Recognizing these programs, several agencies and organizations have established mechanisms for facilitating public engagement with astronomy and space science through cultural activities. The International Astronautics Federation has established a Technical Activities Committee for the Cultural Utilization of Space. Over the past year the NSF and NEA have organized disciplinary workshops to develop recommendations relating to art-science interaction and community building efforts. Rationales for encouraging public engagement via cultural projects range from theories of creativity, innovation and invention to cultural appropriation in the context of 'socially robust science' as advocated by Helga Nowotny of the European Research Council. Public engagement with science, as opposed to science education and outreach initiatives, requires different approaches. Just as organizations have employed education professionals to lead education activities, so they must employ cultural professionals if they wish to develop public engagement projects via arts and culture. One outcome of the NSF and NEA workshops has been the development of a rationale for converting STEM to STEAM by including the arts in STEM methodologies, particularly for K-12, where students can access science via arts and cultural contexts. Often these require new kinds of informal education approaches that exploit locative media, gaming platforms, artists' projects and citizen science. Incorporating astronomy and space science content in art and cultural projects requires new skills in 'cultural translation' and 'trans-mediation' and new kinds

  11. Impact of large-scale tides on cosmological distortions via redshift-space power spectrum

    Science.gov (United States)

    Akitsu, Kazuyuki; Takada, Masahiro

    2018-03-01

    Although large-scale perturbations beyond a finite-volume survey region are not direct observables, they affect measurements of clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, compared with the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies in a way that depends on the alignment between the tide, the wave vector of small-scale modes and the line-of-sight direction. Using the perturbation theory of structure formation, we derive a response function of the redshift-space power spectrum to the large-scale tide. We then investigate the impact of the large-scale tide on the estimation of cosmological distances and the redshift-space distortion parameter via the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than an additional source of statistical error, and show that the degradation in parameter precision is recovered if we can employ the prior on the rms amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained at an accuracy better than the CDM prediction, if the effects up to larger wave numbers in the nonlinear regime can be included.
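
    For readers unfamiliar with the forecasting machinery, a generic Fisher-matrix sketch follows. It is not the paper's derived tide response function; the model derivatives and data covariance below are hypothetical stand-ins for the derivatives of the redshift-space power spectrum with respect to the parameters of interest.

```python
import numpy as np

# Minimal sketch of a Fisher forecast (not the paper's derived response
# function): for a Gaussian likelihood with model m(theta) and fixed data
# covariance C, F_ij = (dm/dtheta_i)^T C^{-1} (dm/dtheta_j), and
# sqrt((F^{-1})_ii) bounds the marginalized 1-sigma error on theta_i.
def fisher_matrix(model_derivs, cov):
    cinv = np.linalg.inv(cov)
    return model_derivs @ cinv @ model_derivs.T

rng = np.random.default_rng(7)
derivs = rng.normal(size=(2, 50))       # d(band power)/d(parameter), hypothetical
F = fisher_matrix(derivs, 0.01 * np.eye(50))
print(np.sqrt(np.diag(np.linalg.inv(F))))  # forecast marginalized errors
```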

  12. Geo-spatial Cognition on Human's Social Activity Space Based on Multi-scale Grids

    Directory of Open Access Journals (Sweden)

    ZHAI Weixin

    2016-12-01

    Full Text Available Widely applied location-aware devices, including mobile phones and GPS receivers, have provided great convenience for collecting large volumes of individuals' geographical information. Research on human social activity space has attracted an increasing number of researchers. In our research, based on location-based Flickr data from 2004 to May 2014 in China, we choose five levels of spatial grids to form a multi-scale framework for investigating the correlation between scale and the geo-spatial cognition of human social activity space. The ht-index, a fractal measure inspired by Alexander, is selected to estimate the maturity of social activity on different scales. The results indicate that the scale characteristics are related to spatial cognition to a certain extent. It is favorable to use the spatial grid as a tool to control scales for geo-spatial cognition of human social activity space.
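
    The ht-index used above is simple to compute. A minimal sketch follows, assuming the usual head/tail-breaks reading of the index; the 40% head-minority threshold is an assumption from that literature, not a detail given in this abstract.

```python
import numpy as np

def ht_index(values, head_frac_limit=0.4):
    """Estimate the ht-index of a 1-D sample: the number of times the mean
    can split the data such that values above the mean (the 'head') remain
    a clear minority. Larger values indicate richer hierarchical scaling."""
    values = np.asarray(values, dtype=float)
    h = 1
    while values.size > 1:
        head = values[values > values.mean()]
        # Stop when the head is empty or no longer a clear minority.
        if head.size == 0 or head.size / values.size >= head_frac_limit:
            break
        h += 1
        values = head
    return h

# Heavy-tailed data (e.g., photo counts per grid cell) yield a larger
# ht-index than uniform data.
rng = np.random.default_rng(0)
print(ht_index(rng.pareto(1.5, 10000)))    # typically >= 5
print(ht_index(rng.uniform(0, 1, 10000)))  # typically 1
```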

  13. Understanding space weather with new physical, mathematical and philosophical approaches

    Science.gov (United States)

    Mateev, Lachezar; Velinov, Peter; Tassev, Yordan

    2016-07-01

    The actual problems of solar-terrestrial physics, in particular of space weather, are related to the prediction of the state of the space environment and are solved by means of different analyses and models. The development of these investigations can also be considered from another side: the philosophical and mathematical approach towards this physical reality. What does it constitute? We have a set of physical processes which occur in the Sun and interplanetary space. All these processes interact with each other and simultaneously participate in the general process which forms the space weather. Let us now consider Leibniz's monads (G.W. von Leibniz, 1714, Monadologie, Wien; Id., 1710, Théodicée, Amsterdam) and use some of their properties. There are 90 theses on monads in Leibniz's work (1714) in total, e.g. "(1) The Monad, of which we shall here speak, is nothing but a simple substance, which enters into compounds. By 'simple' is meant 'without parts'. (Theod. 10.); … (56) Now this connexion or adaptation of all created things to each and of each to all, means that each simple substance has relations which express all the others, and, consequently, that it is a perpetual living mirror of the universe. (Theod. 130, 360.); (59) … this universal harmony, according to which every substance exactly expresses all others through the relations it has with them. (63) … every Monad is, in its own way, a mirror of the universe, and the universe is ruled according to a perfect order. (Theod. 403.)", etc. Let us substitute, in the properties of the monads, the word "process" for the word "monad". We obtain the following statement: each process reflects all other processes, and all other processes reflect this process. This analogy is not formal at all; it reflects accurately the relation between the physical processes and their unity. The category of monad, which in Leibniz's Monadology carries a generally philosophical sense, is fully identical with the

  14. Extension of Space Food Shelf Life Through Hurdle Approach

    Science.gov (United States)

    Cooper, M. R.; Sirmons, T. A.; Froio-Blumsack, D.; Mohr, L.; Young, M.; Douglas, G. L.

    2018-01-01

    The processed and prepackaged space food system is the main source of crew nutrition, and hence central to astronaut health and performance. Unfortunately, space food quality and nutrition degrade to unacceptable levels in two to three years with current food stabilization technologies. Future exploration missions will require a food system that remains safe, acceptable and nutritious through five years of storage within vehicle resource constraints. The potential of stabilization technologies (alternative storage temperatures, processing, formulation, ingredient source, packaging, and preparation procedures), when combined in a hurdle approach, to mitigate quality and nutritional degradation is being assessed. Sixteen representative foods from the International Space Station food system were chosen for production and analysis and will be evaluated initially and at one, three, and five years, with the potential for analysis at seven years if necessary. Analysis includes changes in color, texture, nutrition, sensory quality, and rehydration ratio when applicable. The food samples will be stored at -20 C, 4 C, and 21 C. Select food samples will also be evaluated at -80 C to determine the impacts of ultra-cold storage after one and five years. Packaging film barrier properties and mechanical integrity will be assessed before and after processing and storage. At the study conclusion, if the tested hurdles are adequate, formulation, processing, and storage combinations will be uniquely identified for processed food matrices to achieve a five-year shelf life. This study will provide one of the most comprehensive investigations of long-duration food stability ever completed, and the achievement of extended food system stability will have profound impacts on health and performance for spaceflight crews and for relief efforts and military applications on Earth.

  15. Space nuclear reactor system diagnosis: Knowledge-based approach

    International Nuclear Information System (INIS)

    Ting, Y.T.D.

    1990-01-01

    SP-100 space nuclear reactor system development is a joint effort by the Department of Energy, the Department of Defense and the National Aeronautics and Space Administration. The system is designed to operate in isolation for many years, and is possibly subject to little or no remote maintenance. This dissertation proposes a knowledge-based diagnostic system which, in principle, can diagnose the faults which can either cause reactor shutdown or lead to another serious problem. This framework in general can be applied to the fully specified system if detailed design information becomes available. The set of faults considered herein is identified based on heuristic knowledge about the system operation. A suitable approach to diagnostic problem solving is proposed after investigating the most prevalent methodologies in Artificial Intelligence as well as the causal analysis of the system. Deep causal knowledge modeling based on digraph, fault-tree or logic flowgraph methodology would require a knowledge representation able to handle time-dependent system behavior. A qualitative temporal knowledge modeling methodology, using rules with specified time delays among the process variables, is proposed and used to develop the diagnostic sufficient rule set. The rule set has been modified by using a time-zone approach to achieve a robust system design. The sufficient rule set is transformed into a sufficient and necessary one by searching the whole knowledge base. Qualitative data analysis is proposed for analyzing the measured data in a real-time situation. An expert system shell, Intelligence Compiler, is used to develop the prototype system. Frames are used for the process variables. Forward chaining rules are used in monitoring and backward chaining rules are used in diagnosis.
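
    As a rough illustration of rules "with specified time delays among the process variables", the following sketch (entirely hypothetical; not the SP-100 rule base) forward-chains such temporal rules over observed events and flags expectations that fail to materialize on time.

```python
from dataclasses import dataclass

# A minimal sketch of temporal rules of the form:
# "if the antecedent holds at time t, the consequent should hold by t + delay".
@dataclass
class TemporalRule:
    antecedent: str   # process-variable event, e.g. "pump_flow_low"
    consequent: str   # expected downstream event, e.g. "coolant_temp_high"
    delay: float      # specified time delay in seconds

def forward_chain(events, rules, tolerance=1.0):
    """events: dict event -> time observed. Returns violated expectations,
    i.e. consequents absent or outside the delay window +/- tolerance."""
    findings = []
    for r in rules:
        if r.antecedent in events:
            due = events[r.antecedent] + r.delay
            seen = events.get(r.consequent)
            if seen is None or abs(seen - due) > tolerance:
                findings.append((r.antecedent, r.consequent, due))
    return findings

rules = [TemporalRule("pump_flow_low", "coolant_temp_high", 5.0)]
print(forward_chain({"pump_flow_low": 0.0}, rules))  # consequent never observed
```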

  16. Receptivity to Kinetic Fluctuations: A Multiple Scales Approach

    Science.gov (United States)

    Edwards, Luke; Tumin, Anatoli

    2017-11-01

    The receptivity of high-speed compressible boundary layers to kinetic fluctuations (KF) is considered within the framework of fluctuating hydrodynamics. The formulation is based on the idea that KF-induced dissipative fluxes may lead to the generation of unstable modes in the boundary layer. Fedorov and Tumin solved the receptivity problem using an asymptotic matching approach which utilized a resonant inner solution in the vicinity of the generation point of the second Mack mode. Here we take a slightly more general approach based on a multiple scales WKB ansatz which requires fewer assumptions about the behavior of the stability spectrum. The approach is modeled after the one taken by Luchini to study low speed incompressible boundary layers over a swept wing. The new framework is used to study examples of high-enthalpy, flat plate boundary layers whose spectra exhibit nuanced behavior near the generation point, such as first mode instabilities and near-neutral evolution over moderate length scales. The configurations considered exhibit supersonic unstable second Mack modes despite the temperature ratio T_w/T_e > 1, contrary to prior expectations. Supported by AFOSR and ONR.

  17. Multiscale Analysis of Time Irreversibility Based on Phase-Space Reconstruction and Horizontal Visibility Graph Approach

    Science.gov (United States)

    Zhang, Yongping; Shang, Pengjian; Xiong, Hui; Xia, Jianan

    Time irreversibility is an important property of nonequilibrium dynamic systems. A visibility graph approach was recently proposed, and this approach is generally effective for measuring the time irreversibility of time series. However, its result may be unreliable when dealing with high-dimensional systems. In this work, we consider the joint concept of time irreversibility and adopt the phase-space reconstruction technique to improve this visibility graph approach. Compared with the previous approach, the improved approach gives a more accurate estimate for the irreversibility of time series and is more effective at distinguishing irreversible and reversible stochastic processes. We also use this approach to extract multiscale irreversibility to account for the multiple inherent dynamics of time series. Finally, we apply the approach to detect the multiscale irreversibility of financial time series, and succeed in distinguishing times of financial crisis from plateau periods. In addition, the separation of Asian stock indexes from the other indexes is clearly visible at higher time scales. Simulations and real data support the effectiveness of the improved approach when detecting time irreversibility.
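
    The directed horizontal visibility graph underlying such measurements is straightforward to sketch. The following minimal Python illustration (not the authors' code) builds the graph for a scalar series, quantifies irreversibility as the Kullback-Leibler divergence between out- and in-degree distributions, and folds in phase-space information by one simple, hypothetical choice: reducing a time-delay embedding to a scalar series via the vector norm.

```python
import numpy as np
from collections import Counter

def hvg_degrees(x):
    """Directed horizontal visibility graph: arc i -> j (i < j) exists iff
    x[k] < min(x[i], x[j]) for all intermediate k. Returns degree arrays."""
    n = len(x)
    k_out, k_in = np.zeros(n, int), np.zeros(n, int)
    for i in range(n):
        m = -np.inf                      # running max of intermediate values
        for j in range(i + 1, n):
            if m < x[i] and m < x[j]:    # mutual horizontal visibility
                k_out[i] += 1
                k_in[j] += 1
            m = max(m, x[j])
            if m >= x[i]:                # horizon is blocked from here on
                break
    return k_out, k_in

def irreversibility(x):
    """KL divergence between out- and in-degree distributions; close to 0
    for statistically reversible series, larger for irreversible ones."""
    k_out, k_in = hvg_degrees(x)
    n, p_in = len(x), Counter(k_in)
    return sum((c / n) * np.log((c / n) / (p_in[k] / n))
               for k, c in Counter(k_out).items() if p_in.get(k))

def embed_norm(x, dim=3, tau=1):
    """Time-delay (phase-space) embedding reduced to a scalar series via the
    vector norm, one simple way to expose higher-dimensional structure."""
    n = len(x) - (dim - 1) * tau
    return np.linalg.norm(
        np.column_stack([x[i * tau:i * tau + n] for i in range(dim)]), axis=1)

rng = np.random.default_rng(0)
noise = rng.normal(size=2000)            # reversible white noise
x = np.empty(2000); x[0] = 0.4
for t in range(1999):                    # chaotic logistic map: irreversible
    x[t + 1] = 4 * x[t] * (1 - x[t])
print(irreversibility(embed_norm(noise)), irreversibility(embed_norm(x)))
```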

  18. Human Space Exploration and Human Space Flight: Latency and the Cognitive Scale of the Universe

    Science.gov (United States)

    Lester, Dan; Thronson, Harley

    2011-01-01

    The role of telerobotics in space exploration, as placing human cognition on other worlds, is limited almost entirely by the speed of light and the consequent communications latency that results from large distances. This latency is the time delay between the human brain at one end and the telerobotic effector and sensor at the other end. While telerobotics and virtual presence are technologies that are rapidly becoming more sophisticated, with strong commercial interest on the Earth, this time delay, along with the neurological timescale of a human being, quantitatively defines the cognitive horizon for any locale in space. That is, how distant can an operator be from a robot and not be significantly impacted by latency? We explore that cognitive timescale of the universe, and consider the implications for telerobotics, human space flight, and participation by larger numbers of people in space exploration. We conclude that, with advanced telepresence, sophisticated robots could be operated with high cognition throughout a lunar hemisphere by astronauts within a station at an Earth-Moon L1 or L2 venue. Likewise, complex telerobotic servicing of satellites in geosynchronous orbit can be carried out from suitable terrestrial stations.
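
    The cognitive horizon is fixed by round-trip light time, so the key numbers are easy to reproduce (distances below are approximate, and the Earth-Moon L2 figure is a rough halo-orbit value):

```python
# Round-trip light-time latency for a few operator-robot separations.
C_KM_S = 299_792.458  # speed of light in km/s

def round_trip_latency_s(distance_km):
    return 2 * distance_km / C_KM_S

for name, d_km in [("Low Earth orbit", 400),
                   ("Moon surface from Earth", 384_400),
                   ("Earth-Moon L2 (approx.)", 448_000),
                   ("Mars at closest approach", 54_600_000)]:
    print(f"{name:28s} {round_trip_latency_s(d_km):9.3f} s")
```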

  19. Two-scale approach to oscillatory singularly perturbed transport equations

    CERN Document Server

    Frénod, Emmanuel

    2017-01-01

    This book presents the classical results of the two-scale convergence theory and explains – using several figures – why it works. It then shows how to use this theory to homogenize ordinary differential equations with oscillating coefficients as well as oscillatory singularly perturbed ordinary differential equations. In addition, it explores the homogenization of hyperbolic partial differential equations with oscillating coefficients and linear oscillatory singularly perturbed hyperbolic partial differential equations. Further, it introduces readers to the two-scale numerical methods that can be built from the previous approaches to solve oscillatory singularly perturbed transport equations (ODE and hyperbolic PDE) and demonstrates how they can be used efficiently. This book appeals to master's and PhD students interested in homogenization and numerics, as well as to the ITER community.

  20. Conceptual Design and Demonstration of Space Scale for Measuring Mass in Microgravity Environment

    Directory of Open Access Journals (Sweden)

    Youn-Kyu Kim

    2015-12-01

    Full Text Available In this study, a new idea for developing a space scale for measuring mass in a microgravity environment was proposed, using the inertial force properties of an object to measure its mass. The space scale detected the momentum change of the specimen and reference masses by using a load-cell sensor as the force transducer, based on Newton's laws of motion. In addition, the space scale calculated the specimen mass by comparing the inertial forces of the specimen and reference masses in the same acceleration field. By using this concept, a space scale with a capacity of 3 kg based on the law of momentum conservation was implemented and demonstrated under microgravity conditions onboard the International Space Station (ISS) with an accuracy of ±1 g. Performance analysis of the space scale verified that a compact instrument could be implemented and could measure mass quickly with reasonable accuracy under microgravity conditions.
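
    The underlying comparison principle reduces to Newton's second law: in a common acceleration field the unknown mass follows from the ratio of the measured inertial forces, independent of the field itself. A minimal sketch (the load-cell readings are hypothetical):

```python
# In the same acceleration field a, Newton's second law gives
#   F_specimen = m_specimen * a  and  F_reference = m_reference * a,
# so the unknown mass follows from the force ratio, with a cancelling out.
def specimen_mass(f_specimen, f_reference, m_reference):
    return m_reference * f_specimen / f_reference

# e.g. load-cell peak forces in newtons (hypothetical) and a 1 kg reference
print(specimen_mass(f_specimen=5.88, f_reference=2.45, m_reference=1.0))  # 2.4 kg
```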

  1. Zebrafish brain mapping--standardized spaces, length scales, and the power of N and n.

    Science.gov (United States)

    Hunter, Paul R; Hendry, Aenea C; Lowe, Andrew S

    2015-06-01

    Mapping anatomical and functional parameters of the zebrafish brain is moving apace. Research communities undertaking such studies are becoming ever larger and more diverse. The unique features, tools, and technologies associated with zebrafish are propelling them as the 21st century model organism for brain mapping. Uniquely positioned as a vertebrate model system, the zebrafish enables imaging of anatomy and function at different length scales from intraneuronal compartments to sparsely distributed whole brain patterns. With a variety of diverse and established statistical modeling and analytic methods available from the wider brain mapping communities, the richness of zebrafish neuroimaging data is being realized. The statistical power of population observations (N) within and across many samples (n) projected onto a standardized space will provide vast databases for data-driven biological approaches. This article reviews key brain mapping initiatives at different levels of scale that highlight the potential of zebrafish brain mapping. By way of introduction to the next wave of brain mappers, an accessible introduction to the key concepts and caveats associated with neuroimaging are outlined and discussed. © 2014 Wiley Periodicals, Inc.

  2. Fast Laplace solver approach to pore-scale permeability

    Science.gov (United States)

    Arns, C. H.; Adler, P. M.

    2018-02-01

    We introduce a powerful and easily implemented method to calculate the permeability of porous media at the pore scale, using an approximation based on the Poiseuille equation to calculate permeability to fluid flow with a Laplace solver. The method consists of calculating the Euclidean distance map of the fluid phase to assign local conductivities and lends itself naturally to the treatment of multiscale problems. We compare with analytical solutions as well as experimental measurements and lattice Boltzmann calculations of permeability for Fontainebleau sandstone. The solver is significantly more stable than the lattice Boltzmann approach, uses less memory, and is significantly faster. Permeabilities are in excellent agreement over a wide range of porosities.
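
    A drastically simplified 2-D sketch of the idea (not the authors' implementation) follows: take a binary pore image, assign each pore cell a Poiseuille-like local conductivity proportional to the squared Euclidean distance to the nearest solid, and relax a variable-coefficient Laplace problem for pressure; the inlet flux is then a proxy for permeability. The Jacobi relaxation and the conductivity law g ~ d^2 are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def permeability_proxy(pore, n_iter=5000):
    """pore: 2-D boolean array, True = pore space. Returns an inlet-flux
    proxy for permeability (arbitrary units). Flow is left -> right under a
    unit pressure drop; top/bottom wrap periodically via np.roll."""
    d = distance_transform_edt(pore)            # distance to nearest solid
    g = d ** 2                                  # local conductivity, g ~ d^2
    ny, nx = pore.shape
    p = np.tile(np.linspace(1.0, 0.0, nx), (ny, 1))
    shifts = [(1, 0), (-1, 0), (1, 1), (-1, 1)]
    for _ in range(n_iter):                     # conductivity-weighted Jacobi
        num = sum(np.roll(g * p, s, ax) for s, ax in shifts)
        den = sum(np.roll(g, s, ax) for s, ax in shifts)
        p_new = np.divide(num, den, out=p.copy(), where=den > 0)
        p_new[:, 0], p_new[:, -1] = 1.0, 0.0    # fixed inlet/outlet pressure
        p = np.where(pore, p_new, p)            # solid cells do not conduct
    g_face = 0.5 * (g[:, 0] + g[:, 1])          # crude inlet face conductance
    return float((g_face * (p[:, 0] - p[:, 1])).sum())

pore = np.random.default_rng(1).random((64, 64)) > 0.35  # toy porous medium
print(permeability_proxy(pore))
```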

  3. Religion and Communication Spaces. A Semio-pragmatic Approach

    Directory of Open Access Journals (Sweden)

    Roger Odin

    2015-11-01

    Full Text Available Following the reflection initiated in his book The Spaces of Communication, Roger Odin suggests a new distinction between physical communication spaces and mental communication spaces (spaces that we have inside us). The suggestion is exemplified by three film analyses dedicated to the relationships between religion and communication.

  4. The algebraic approach to space-time geometry

    International Nuclear Information System (INIS)

    Heller, M.; Multarzynski, P.; Sasin, W.

    1989-01-01

    A differential manifold can be defined in terms of smooth real functions carried by it. By rejecting the postulate, in such a definition, demanding the local diffeomorphism of a manifold to the Euclidean space, one obtains the so-called differential space concept. Every subset of R^n turns out to be a differential space. Extensive parts of differential geometry on differential spaces, developed by Sikorski, are reviewed and adapted to relativistic purposes. Differential space as a new model of space-time is proposed. The Lorentz structure and Einstein's field equations on differential spaces are discussed. 20 refs. (author)

  5. Giant monopole transition densities within the local scale ATDHF approach

    International Nuclear Information System (INIS)

    Dimitrova, S.S.; Petkov, I.Zh.; Stoitsov, M.V.

    1986-01-01

    Transition densities for ¹²C, ¹⁶O, ²⁸Si, ³²S, ⁴⁰Ca, ⁴⁸Ca, ⁵⁶Ni, ⁹⁰Zr and ²⁰⁸Pb even-even nuclei corresponding to nuclear giant monopole resonances are obtained within a local-scale adiabatic time-dependent Hartree-Fock approach in terms of effective Skyrme-type forces SkM and S3. The approach, the particular form and all necessary coefficients of these transition densities are reported. They are of a simple analytical form and may be directly used, for example, in analyses of particle inelastic scattering on nuclei by the distorted wave method, in such a way allowing a test of the theoretical interpretation of giant monopole resonances.

  6. Novel Approaches to Cellular Transplantation from the US Space Program

    Science.gov (United States)

    Pellis, Neal R.; Homick, Jerry L. (Technical Monitor)

    1999-01-01

    Research in the treatment of type I diabetes is entering a new era that takes advantage of our knowledge in an ever-increasing variety of scientific disciplines. Some may originate from very diverse sources, one of which is the Space Program at the National Aeronautics and Space Administration (NASA). The Space Program contributes to diabetes-related research in several treatment modalities. As an ongoing effort for medical monitoring of personnel involved in space exploration activities, NASA and the extramural scientific community investigate strategies for noninvasive estimation of blood glucose levels. Part of the effort in the space protein crystal growth program is high-resolution structural analysis of insulin as a means to better understand the interaction with its receptor and with host immune components, and as a basis for rational design of a "better" insulin molecule. The Space Program is also developing laser technology for potential early cataract detection as well as noninvasive analyses for addressing preclinical diabetic retinopathy. Finally, NASA developed an exciting cell culture system that affords some unique advantages in the propagation and maintenance of mammalian cells in vitro. The cell culture system was originally designed to maintain cell suspensions with a minimum of hydrodynamic and mechanical shear while awaiting launch into microgravity. Currently the commercially available NASA bioreactor (Synthecon, Inc., Houston, TX) is used as a research tool in basic and applied cell biology. In recent years there is continued strong interest in cellular transplantation as treatment for type I diabetes. The advantages are the potential for successful long-term amelioration and a minimum risk for morbidity in the event of rejection of the transplanted cells. The pathway to successful application of this strategy is accompanied by several substantial hurdles: (1) isolation and propagation of a suitable uniform donor cell population; (2) management of

  7. A Web Based Approach to Integrate Space Culture and Education

    Science.gov (United States)

    Gerla, F.

    2002-01-01

    , who can use it to prepare their lessons, retrieve information and organize the didactic material in order to support their lessons. We think it important to use a user-centered "psychology" based on UM: we have to know the needs and expectations of the students. Our intent is to use usability tests not just to prove the site's effectiveness and clarity, but also to investigate the aesthetic preferences of children and young people. Physics, mathematics and chemistry are just some of the difficult learning fields connected with space technologies. Space culture is a potentially never-ending field, and our goal will be to lead students by the hand through this universe of knowledge. This paper will present MARS activities in the framework of the above methodologies, aimed at implementing a web-based approach to integrating space culture and education. The activities are already in progress and some results will be presented in the final paper.

  8. An Open and Holistic Approach for Geo and Space Sciences

    Science.gov (United States)

    Ritschel, Bernd; Seelus, Christoph; Neher, Günther; Toshihiko, Iyemori; Yatagai, Akiyo; Koyama, Yukinobu; Murayama, Yasuhiro; King, Todd; Hughes, Steve; Fung, Shing; Galkin, Ivan; Hapgood, Mike; Belehaki, Anna

    2016-04-01

    Geo and space sciences thus far have been very successful, even though an open, cross-domain and holistic approach often did not play an essential role. But this situation is changing rapidly. The research focus is shifting to more complex, non-linear and multi-domain phenomena, such as climate change or the space environment. This kind of phenomenon can only be understood step by step using the holistic idea. So, what is necessary for a successful cross-domain and holistic approach in geo and space sciences? Research and science in general become more and more dependent on a rich fundus of multi-domain data sources, related context information and the use of highly advanced technologies in data processing. Buzzword phrases such as Big Data and Deep Learning reflect this development. Big Data also addresses the real exponential growth of data and information produced by measurements or simulations. Deep Learning technology may help to detect new patterns and relationships in data describing highly sophisticated natural phenomena. And further on, we should not forget that science and the humanities are only two sides of the same coin in the continuing human process of knowledge discovery. The concept of Open Data, or in particular the open access to scientific data, addresses the free and open availability of -at least publicly funded and generated- data. The open availability of data covers the free use, reuse and redistribution of data, which was established with the formation of the World Data Centers already more than 50 years ago. So, we should not forget that the foundation of open data is the responsibility for a sustainable management of data, from the individual scientist up to the big science institutions and organizations. Other challenges are discovering and collecting the appropriate data, and preferably all of them or at least the majority of the right data. Therefore a network of individual or even better institutional catalog-based and at least

  9. Fishing for space: fine-scale multi-sector maritime activities influence fisher location choice.

    Directory of Open Access Journals (Sweden)

    Alex N Tidd

    Full Text Available The European Union and other states are moving towards Ecosystem Based Fisheries Management to balance food production and security with wider ecosystem concerns. Fishing is only one of several sectors operating within the ocean environment, competing for renewable and non-renewable resources that overlap in a limited space. Other sectors include marine mining, energy generation, recreation, transport and conservation. Trade-offs of these competing sectors are already part of the process but attempts to detail how the seas are being utilised have been primarily based on compilations of data on human activity at large spatial scales. Advances including satellite and shipping automatic tracking enable investigation of factors influencing fishers' choice of fishing grounds at spatial scales relevant to decision-making, including the presence or avoidance of activities by other sectors. We analyse the determinants of English and Welsh scallop-dredging fleet behaviour, including competing sectors, operating in the eastern English Channel. Results indicate aggregate mining activity, maritime traffic, increased fishing costs, and the English inshore 6 and French 12 nautical mile limits negatively impact fishers' likelihood of fishing in otherwise suitable areas. Past success, net-benefits and fishing within the 12 NM predispose fishers to use areas. Systematic conservation planning has yet to be widely applied in marine systems, and the dynamics of spatial overlap of fishing with other activities have not been studied at scales relevant to fisher decision-making. This study demonstrates fisher decision-making is indeed affected by the real-time presence of other sectors in an area, and therefore trade-offs which need to be accounted for in marine planning. As marine resource extraction demands intensify, governments will need to take a more proactive approach to resolving these trade-offs, and studies such as this will be required as the evidential

  10. Fishing for space: fine-scale multi-sector maritime activities influence fisher location choice.

    Science.gov (United States)

    Tidd, Alex N; Vermard, Youen; Marchal, Paul; Pinnegar, John; Blanchard, Julia L; Milner-Gulland, E J

    2015-01-01

    The European Union and other states are moving towards Ecosystem Based Fisheries Management to balance food production and security with wider ecosystem concerns. Fishing is only one of several sectors operating within the ocean environment, competing for renewable and non-renewable resources that overlap in a limited space. Other sectors include marine mining, energy generation, recreation, transport and conservation. Trade-offs of these competing sectors are already part of the process but attempts to detail how the seas are being utilised have been primarily based on compilations of data on human activity at large spatial scales. Advances including satellite and shipping automatic tracking enable investigation of factors influencing fishers' choice of fishing grounds at spatial scales relevant to decision-making, including the presence or avoidance of activities by other sectors. We analyse the determinants of English and Welsh scallop-dredging fleet behaviour, including competing sectors, operating in the eastern English Channel. Results indicate aggregate mining activity, maritime traffic, increased fishing costs, and the English inshore 6 and French 12 nautical mile limits negatively impact fishers' likelihood of fishing in otherwise suitable areas. Past success, net-benefits and fishing within the 12 NM predispose fishers to use areas. Systematic conservation planning has yet to be widely applied in marine systems, and the dynamics of spatial overlap of fishing with other activities have not been studied at scales relevant to fisher decision-making. This study demonstrates fisher decision-making is indeed affected by the real-time presence of other sectors in an area, and therefore trade-offs which need to be accounted for in marine planning. As marine resource extraction demands intensify, governments will need to take a more proactive approach to resolving these trade-offs, and studies such as this will be required as the evidential foundation for future

  11. Performance/price estimates for cortex-scale hardware: a design space exploration.

    Science.gov (United States)

    Zaveri, Mazad S; Hammerstrom, Dan

    2011-04-01

    In this paper, we revisit the concept of virtualization. Virtualization is useful for understanding and investigating the performance/price and other trade-offs related to the hardware design space. Moreover, it is perhaps the most important aspect of a hardware design space exploration. Such a design space exploration is a necessary part of the study of hardware architectures for large-scale computational models for intelligent computing, including AI, Bayesian, bio-inspired and neural models. A methodical exploration is needed to identify potentially interesting regions in the design space, and to assess the relative performance/price points of these implementations. As an example, in this paper we investigate the performance/price of (digital and mixed-signal) CMOS and hypothetical CMOL (nanogrid) technology based hardware implementations of human cortex-scale spiking neural systems. Through this analysis, and the resulting performance/price points, we demonstrate, in general, the importance of virtualization, and of doing these kinds of design space explorations. The specific results suggest that hybrid nanotechnology such as CMOL is a promising candidate to implement very large-scale spiking neural systems, providing a more efficient utilization of the density and storage benefits of emerging nano-scale technologies. In general, we believe that the study of such hypothetical designs/architectures will guide the neuromorphic hardware community towards building large-scale systems, and help guide research trends in intelligent computing, and computer engineering. Copyright © 2010 Elsevier Ltd. All rights reserved.

  12. Gravitation and Special Relativity from Compton Wave Interactions at the Planck Scale: An Algorithmic Approach

    Science.gov (United States)

    Blackwell, William C., Jr.

    2004-01-01

    In this paper space is modeled as a lattice of Compton wave oscillators (CWOs) of near-Planck size. It is shown that gravitation and special relativity emerge from the interaction between particles' Compton waves. To develop this CWO model an algorithmic approach was taken, incorporating simple rules of interaction at the Planck scale developed using well-known physical laws. This technique naturally leads to Newton's law of gravitation and a new form of doubly special relativity. The model is in apparent agreement with the holographic principle, and it predicts a cutoff energy for ultrahigh-energy cosmic rays that is consistent with observational data.

  13. Coarse-to-Fine Segmentation with Shape-Tailored Continuum Scale Spaces

    KAUST Repository

    Khan, Naeemullah

    2017-11-09

    We formulate an energy for segmentation that is designed to have preference for segmenting the coarse over fine structure of the image, without smoothing across boundaries of regions. The energy is formulated by integrating a continuum of scales from a scale space computed from the heat equation within regions. We show that the energy can be optimized without computing a continuum of scales, but instead from a single scale. This makes the method computationally efficient in comparison to energies using a discrete set of scales. We apply our method to texture and motion segmentation. Experiments on benchmark datasets show that a continuum of scales leads to better segmentation accuracy over discrete scales and other competing methods.
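
    A minimal sketch of the scale-space ingredient follows: the heat equation at time t is equivalent to Gaussian smoothing with sigma = sqrt(2t), so a discrete sampling of the continuum of scales is a stack of such smoothings. The shape-tailored (region-restricted) smoothing of the paper is replaced here by a global filter for simplicity, so this is illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Discrete sampling of the heat-equation scale space: the solution of
# u_t = Laplacian(u) at time t equals Gaussian smoothing with sigma = sqrt(2t).
# A shape-tailored variant would restrict diffusion to within each region.
def scale_space(image, ts=(1, 2, 4, 8, 16)):
    return np.stack([gaussian_filter(image, np.sqrt(2 * t)) for t in ts])

img = np.random.default_rng(2).random((128, 128))
stack = scale_space(img)
print(stack.shape)  # (5, 128, 128): one smoothed image per scale
```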

  14. Coarse-to-Fine Segmentation with Shape-Tailored Continuum Scale Spaces

    KAUST Repository

    Khan, Naeemullah; Hong, Byung-Woo; Yezzi, Anthony; Sundaramoorthi, Ganesh

    2017-01-01

    We formulate an energy for segmentation that is designed to have preference for segmenting the coarse over fine structure of the image, without smoothing across boundaries of regions. The energy is formulated by integrating a continuum of scales from a scale space computed from the heat equation within regions. We show that the energy can be optimized without computing a continuum of scales, but instead from a single scale. This makes the method computationally efficient in comparison to energies using a discrete set of scales. We apply our method to texture and motion segmentation. Experiments on benchmark datasets show that a continuum of scales leads to better segmentation accuracy over discrete scales and other competing methods.

  15. Next Generation Space Interconnect Standard (NGSIS): a modular open standards approach for high performance interconnects for space

    Science.gov (United States)

    Collier, Charles Patrick

    2017-04-01

    The Next Generation Space Interconnect Standard (NGSIS) effort is a Government-Industry collaboration effort to define a set of standards for interconnects between space system components with the goal of cost effectively removing bandwidth as a constraint for future space systems. The NGSIS team has selected the ANSI/VITA 65 OpenVPX™ standard family for the physical baseline. The RapidIO protocol has been selected as the basis for the digital data transport. The NGSIS standards are developed to provide sufficient flexibility to enable users to implement a variety of system configurations, while meeting goals for interoperability and robustness for space. The NGSIS approach and effort represents a radical departure from past approaches to achieve a Modular Open System Architecture (MOSA) for space systems and serves as an exemplar for the civil, commercial, and military space communities as well as a broader high-reliability terrestrial market.

  16. Analysis of Life Histories: A State Space Approach

    Directory of Open Access Journals (Sweden)

    Rajulton, Fernando

    2001-01-01

    Full Text Available English: The computer package LIFEHIST, written by the author, is meant for analyzing life histories through a state-space approach. The basic ideas on which the various programs have been built are described in this paper in a non-mathematical language. Users can run various programs for multistate analyses based on Markov and semi-Markov frameworks and on the sequences of transitions implied in life histories. The package is under constant revision, and programs for a few specific models the author thinks will be useful for analyzing longitudinal data will be incorporated in the near future. French (translated): The LIFEHIST computer system written by the author is designed to analyze life-course events through an approach that accounts for states over time. The fundamental ideas underlying the various programs of the package are described in non-mathematical language. The LIFEHIST system can be used for Markov and semi-Markov analyses of sequences of life-course events. The package is under constant revision, and programs the author plans to add for use with longitudinal data are described.

  17. Properties of small-scale interfacial turbulence from a novel thermography based approach

    Science.gov (United States)

    Schnieders, Jana; Garbe, Christoph

    2013-04-01

    Oceans cover nearly two thirds of the earth's surface, and exchange processes between the atmosphere and the ocean are of fundamental environmental importance. At the air-sea interface, complex interaction processes take place on a multitude of scales. Turbulence plays a key role in the coupling of momentum, heat and mass transfer [2]. Here we use high-resolution infrared imagery to visualize near-surface aqueous turbulence. Thermographic data is analyzed from a range of laboratory facilities and experimental conditions, with wind speeds ranging from 1 m s⁻¹ to 7 m s⁻¹ and various surface conditions. The surface heat pattern is formed by distinct structures on two scales: small-scale, short-lived structures termed fish scales, and larger-scale cold streaks that are consistent with the footprints of Langmuir circulations. There are two key characteristics of the observed surface heat patterns: (1) the surface heat patterns show characteristic features on these two scales; (2) the structure of these patterns changes with increasing wind stress and surface conditions. We present a new image-processing-based approach to the analysis of the spacing of cold streaks, based on a machine learning approach [4, 1] to classify the thermal footprints of near-surface turbulence. Our random forest classifier is based on classical features in image processing such as gray-value gradients and edge-detecting features. The result is a pixel-wise classification of the surface heat pattern with a subsequent analysis of the streak spacing. This approach has been presented in [3] and can be applied to a wide range of experimental data. In spite of entirely different boundary conditions, the spacing of turbulent cells near the air-water interface seems to match the expected turbulent cell size for flow near a no-slip wall. The analysis of the spacing of cold streaks shows consistent behavior in a range of laboratory facilities when expressed as a function of the water-sided friction velocity, u*. The scales
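
    A minimal sketch of the pixel-wise classification step follows. The feature set is merely representative of "classical features in image processing such as gray-value gradients and edge-detecting features"; the training masks, frame data, and feature choices are hypothetical.

```python
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def pixel_features(frame):
    """Stack per-pixel features of a thermal frame into an (n_pixels, 4) array."""
    gx = ndimage.sobel(frame, axis=1)
    gy = ndimage.sobel(frame, axis=0)
    feats = [frame,
             np.hypot(gx, gy),                          # gradient magnitude
             ndimage.gaussian_laplace(frame, sigma=2),  # blob/edge response
             ndimage.gaussian_filter(frame, sigma=4)]   # local mean temperature
    return np.stack([f.ravel() for f in feats], axis=1)

def train_streak_classifier(frames, label_masks):
    """frames: list of 2-D thermal images; label_masks: boolean arrays,
    True where a cold streak was hand-labeled. Returns a fitted classifier
    whose .predict() gives a pixel-wise streak map for new frames."""
    X = np.vstack([pixel_features(f) for f in frames])
    y = np.concatenate([m.ravel() for m in label_masks])
    return RandomForestClassifier(n_estimators=100).fit(X, y)
```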

  18. A Reparametrization Approach for Dynamic Space-Time Models

    OpenAIRE

    Lee, Hyeyoung; Ghosh, Sujit K.

    2008-01-01

    Researchers in diverse areas such as environmental and health sciences are increasingly working with data collected across space and time. The space-time processes that are generally used in practice are often complicated in the sense that the auto-dependence structure across space and time is non-trivial, often non-separable and non-stationary in space and time. Moreover, the dimension of such data sets across both space and time can be very large leading to computational difficulties due to...

  19. Spatial Polygamy and Contextual Exposures (SPACEs): Promoting Activity Space Approaches in Research on Place and Health

    Science.gov (United States)

    Matthews, Stephen A.; Yang, Tse-Chuan

    2014-01-01

    Exposure science has developed rapidly and there is an increasing call for greater precision in the measurement of individual exposures across space and time. Social science interest in an individual's environmental exposure, broadly conceived, has arguably been quite limited conceptually and methodologically. Indeed, we appear to lag behind our exposure science colleagues in our theories, data, and methods. In this paper we discuss a framework based on the concept of spatial polygamy to demonstrate the need to collect new forms of data on human spatial behavior and contextual exposures across time and space. Adopting new data and methods will be essential if we want to better understand social inequality in terms of exposure to health risks and access to health resources. We discuss the opportunities and challenges, focusing on the potential seemingly offered by human mobility, and specifically the utilization of activity space concepts and data. A goal of the paper is to spatialize social and health science concepts and research practice vis-à-vis the complexity of exposure. The paper concludes with some recommendations for future research focusing on theoretical and conceptual development, promoting research on new types of places and human movement, the dynamic nature of contexts, and on training. "When we elect wittingly or unwittingly, to work within a level … we tend to discern or construct – whichever emphasis you prefer – only those kinds of systems whose elements are confined to that level." (Otis Dudley Duncan, 1961, p. 141). "…despite the new ranges created by improved transportation, local government units have tended to remain medieval in size." (Torsten Hägerstrand, 1970, p. 18). "A detective investigating a crime needs both tools and understanding. If he has no fingerprint powder, he will fail to find fingerprints on most surfaces. If he does not understand where the criminal is likely to have put his fingers, he will not

  20. Scaling up biomass gasifier use: an application-specific approach

    International Nuclear Information System (INIS)

    Ghosh, Debyani; Sagar, Ambuj D.; Kishore, V.V.N.

    2006-01-01

    Biomass energy accounts for about 11% of the global primary energy supply, and it is estimated that about 2 billion people worldwide depend on biomass for their energy needs. Yet, most of the use of biomass is in a primitive and inefficient manner, primarily in developing countries, leading to a host of adverse implications for human health, the environment, workplace conditions, and social well-being. Therefore, the utilization of biomass in a clean and efficient manner to deliver modern energy services to the world's poor remains an imperative for the development community. One possible approach to do this is through the use of biomass gasifiers. Although significant efforts have been directed towards developing and deploying biomass gasifiers in many countries, scaling up their dissemination remains an elusive goal. Based on an examination of biomass gasifier development, demonstration, and deployment efforts in India - a country with more than two decades of experience in biomass gasifier development and dissemination - this article identifies a number of barriers that have hindered widespread deployment of biomass gasifier-based energy systems. It also suggests a possible approach for moving forward, which involves a focus on specific application areas that satisfy a set of criteria that are critical to deployment of biomass gasifiers, and then tailoring the scaling-up strategy to the characteristics of the user groups for that application. Our technical, financial, economic and institutional analysis suggests that an initial focus on four categories of applications - small and medium enterprises, the informal sector, biomass-processing industries, and some rural areas - may be particularly feasible and fruitful.

  1. A convex optimization approach for solving large scale linear systems

    Directory of Open Access Journals (Sweden)

    Debora Cores

    2017-01-01

    Full Text Available The well-known Conjugate Gradient (CG) method minimizes a strictly convex quadratic function for solving large-scale linear systems of equations when the coefficient matrix is symmetric and positive definite. In this work we present and analyze a non-quadratic convex function for solving any large-scale linear system of equations, regardless of the characteristics of the coefficient matrix. For finding the global minimizers of this new convex function, any low-cost iterative optimization technique could be applied. In particular, we propose to use the low-cost, globally convergent Spectral Projected Gradient (SPG) method, which allows us to extend this optimization approach to solving consistent square and rectangular linear systems, as well as linear feasibility problems, with and without convex constraints and with and without preconditioning strategies. Our numerical results indicate that the new scheme outperforms state-of-the-art iterative techniques for solving linear systems when the symmetric part of the coefficient matrix is indefinite, and also for solving linear feasibility problems.
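
    The SPG machinery can be sketched compactly. The paper's specific non-quadratic convex function is not reproduced here; as a stand-in, the sketch minimizes the least-squares residual (whose global minimizers also solve the system) with a Barzilai-Borwein (spectral) step, simple monotone backtracking, and an identity projection. Replace `proj` to add convex constraints.

```python
import numpy as np

def spg_solve(A, b, x0=None, tol=1e-8, max_iter=2000, proj=lambda x: x):
    """Spectral projected gradient sketch minimizing f(x) = 0.5*||Ax - b||^2."""
    f = lambda x: 0.5 * np.dot(A @ x - b, A @ x - b)
    grad = lambda x: A.T @ (A @ x - b)
    x = np.zeros(A.shape[1]) if x0 is None else x0
    g, lam = grad(x), 1.0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = proj(x - lam * g)                 # spectral projected step
        while f(x_new) > f(x) - 1e-4 * np.dot(g, x - x_new):
            lam *= 0.5                            # backtracking (monotone Armijo)
            x_new = proj(x - lam * g)
        s, y = x_new - x, grad(x_new) - g
        lam = max(np.dot(s, s) / max(np.dot(s, y), 1e-16), 1e-10)  # BB step
        x, g = x_new, grad(x_new)
    return x

A = np.random.default_rng(3).random((300, 100))
b = A @ np.ones(100)                              # consistent rectangular system
print(np.linalg.norm(spg_solve(A, b) - np.ones(100)))  # close to 0
```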

  2. Cosmological special relativity the large scale structure of space, time and velocity

    CERN Document Server

    Carmeli, Moshe

    1997-01-01

    This book deals with special relativity theory and its application to cosmology. It presents Einstein's theory of space and time in detail, and describes the large scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The book will be of interest to cosmologists, astrophysicists, theoretical

  3. Cosmological special relativity the large scale structure of space, time and velocity

    CERN Document Server

    Carmeli, Moshe

    2002-01-01

    This book presents Einstein's theory of space and time in detail, and describes the large-scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The relationship between cosmic velocity, acceleration and distances is given. In the appendices gravitation is added in the form of a cosmological g

  4. Field-theoretic approach to gravity in the flat space-time

    Energy Technology Data Exchange (ETDEWEB)

    Cavalleri, G. (Centro Informazioni Studi Esperienze, Milan (Italy); Milan Univ. (Italy), Ist. di Fisica); Spinelli, G. (Istituto di Matematica del Politecnico di Milano, Milano (Italy))

    1980-01-01

    In this paper it is discussed how the field-theoretic approach to gravity, starting from flat space-time, is wider than the Einstein approach. The flat approach is able to predict the structure of the observable space as a consequence of the behaviour of the particle proper masses. The field equations are formally equal to Einstein's equations without the cosmological term.

  5. Mapping the Hot Spots: A Zoning Approach to Space Analysis and Design

    Science.gov (United States)

    Bunnell, Adam; Carpenter, Russell; Hensley, Emily; Strong, Kelsey; Williams, ReBecca; Winter, Rachel

    2016-01-01

    This article examines a preliminary approach to space design developed and implemented in Eastern Kentucky University's Noel Studio for Academic Creativity. The approach discussed here is entitled "hot spots," which has allowed the research team to observe trends in space usage and composing activities among students. This approach has…

  6. Fractional Sobolev’s Spaces on Time Scales via Conformable Fractional Calculus and Their Application to a Fractional Differential Equation on Time Scales

    Directory of Open Access Journals (Sweden)

    Yanning Wang

    2016-01-01

    Full Text Available Using conformable fractional calculus on time scales, we first introduce fractional Sobolev spaces on time scales, characterize them, and define weak conformable fractional derivatives. Second, we prove the equivalence of some norms in the introduced spaces and derive their completeness, reflexivity, uniform convexity, and compactness of some imbeddings, which can be regarded as a novelty item. Then, as an application, we present a recent approach via variational methods and critical point theory to obtain the existence of solutions for a p-Laplacian conformable fractional differential equation boundary value problem on a time scale $\mathbb{T}$:
    $$T_\alpha\big(\left|T_\alpha(u)\right|^{p-2}\,T_\alpha(u)\big)(t) = \nabla F\big(\sigma(t), u(\sigma(t))\big), \quad \Delta\text{-a.e. } t \in [a,b]_{\mathbb{T}}^{\kappa^2},$$
    $$u(a) - u(b) = 0, \qquad T_\alpha(u)(a) - T_\alpha(u)(b) = 0,$$
    where $T_\alpha(u)(t)$ denotes the conformable fractional derivative of $u$ of order $\alpha$ at $t$, $\sigma$ is the forward jump operator, $a, b \in \mathbb{T}$, $0 < \alpha \le 1$, $p > 1$, and $F: [0,T]_{\mathbb{T}} \times \mathbb{R}^N \to \mathbb{R}$. By establishing a proper variational setting, we obtain three existence results. Finally, we present two examples to illustrate the feasibility and effectiveness of the existence results.

  7. Scale Effect of Premixed Methane-Air Combustion in Confined Space Using LES Model

    Directory of Open Access Journals (Sweden)

    Liang Wang

    2015-12-01

    Full Text Available Gas explosion is the most hazardous incident occurring in underground airways. Computational Fluid Dynamics (CFD) techniques are sophisticated in simulating explosions in confined spaces, specifically when testing large-scale gaseous explosions such as methane explosions in underground mines. The dimensions of a confined space where explosions could occur vary significantly; thus, the scale effect on explosion parameters is worth investigating. In this paper, the impact of scaling on explosion overpressures is investigated by employing two scaling factors: the Gas-fill Length Scaling Factor (FLSF) and the Hydraulic Diameter Scaling Factor (HDSF). The combinations of eight FLSFs and five HDSFs cover a wide range of space dimensions where flammable gas could accumulate. Experiments were also conducted to evaluate the selected numerical models. The Large Eddy Simulation turbulence model was selected because it shows accuracy compared to the widely used Reynolds-averaged models for the scenarios investigated in the experiments. Three major conclusions can be drawn: (1) the overpressure increases with both FLSF and HDSF within the deflagration regime; (2) in an explosion duct with a length-to-diameter ratio greater than 54, detonation is more likely to be triggered for a stoichiometric methane/air mixture; (3) overpressure increases with increasing hydraulic diameter of a geometry within the deflagration regime. A relative error of 7% is found when predicting the blast peak overpressure for the base case compared to the experiment; good agreement for the wave arrival time is also achieved.

  8. A modular CUDA-based framework for scale-space feature detection in video streams

    International Nuclear Information System (INIS)

    Kinsner, M; Capson, D; Spence, A

    2010-01-01

    Multi-scale image processing techniques enable extraction of features where the size of a feature is either unknown or changing, but the requirement to process image data at multiple scale levels imposes a substantial computational load. This paper describes the architecture and emerging results from the implementation of a GPGPU-accelerated scale-space feature detection framework for video processing. A discrete scale-space representation is generated for image frames within a video stream, and multi-scale feature detection metrics are applied to detect ridges and Gaussian blobs at video frame rates. A modular structure is adopted, in which common feature extraction tasks such as non-maximum suppression and local extrema search may be reused across a variety of feature detectors. Extraction of ridge and blob features is achieved at faster than 15 frames per second on video sequences from a machine vision system, utilizing an NVIDIA GTX 480 graphics card. By design, the framework is easily extended to additional feature classes through the inclusion of feature metrics to be applied to the scale-space representation, and using common post-processing modules to reduce the required CPU workload. The framework is scalable across multiple and more capable GPUs, and enables previously intractable image processing at video frame rates using commodity computational hardware.
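
    A CPU sketch of the pipeline (NumPy/SciPy rather than the paper's CUDA framework) illustrates the three stages named above: building a discrete scale space, applying a scale-normalized blob metric, and searching for local extrema jointly over space and scale.

```python
import numpy as np
from scipy import ndimage

def detect_blobs(frame, sigmas=(1, 2, 4, 8), thresh=0.05):
    """Scale-normalized Laplacian-of-Gaussian blob detection; the sign is
    flipped so bright blobs give positive responses. Returns (sigma, y, x)."""
    stack = np.stack([-s**2 * ndimage.gaussian_laplace(frame, s)
                      for s in sigmas])
    # non-maximum suppression: keep local maxima in the (scale, y, x) volume
    peaks = (stack == ndimage.maximum_filter(stack, size=3)) & (stack > thresh)
    return [(sigmas[k], y, x) for k, y, x in np.argwhere(peaks)]

frame = np.zeros((64, 64))
frame[28:36, 28:36] = 1.0        # one bright square "blob"
print(detect_blobs(frame))       # strongest responses near sigma ~ 2-4
```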

  9. A risk-based approach to flammable gas detector spacing.

    Science.gov (United States)

    Defriend, Stephen; Dejmek, Mark; Porter, Leisa; Deshotels, Bob; Natvig, Bernt

    2008-11-15

    Flammable gas detectors allow an operating company to address leaks before they become serious, by automatically alarming and by initiating isolation and safe venting. Without effective gas detection, there is very limited defense against a flammable gas leak developing into a fire or explosion that could cause loss of life or escalate to cascading failures of nearby vessels, piping, and equipment. While it is commonly recognized that some gas detectors are needed in a process plant containing flammable gas or volatile liquids, there is usually a question of how many are needed. The areas that need protection can be determined by dispersion modeling from potential leak sites. Within the areas that must be protected, the spacing of detectors (or alternatively, number of detectors) should be based on risk. Detector design can be characterized by spacing criteria, which is convenient for design - or alternatively by number of detectors, which is convenient for cost reporting. The factors that influence the risk are site-specific, including process conditions, chemical composition, number of potential leak sites, piping design standards, arrangement of plant equipment and structures, design of isolation and depressurization systems, and frequency of detector testing. Site-specific factors such as those just mentioned affect the size of flammable gas cloud that must be detected (within a specified probability) by the gas detection system. A probability of detection must be specified that gives a design with a tolerable risk of fires and explosions. To determine the optimum spacing of detectors, it is important to consider the probability that a detector will fail at some time and be inoperative until replaced or repaired. A cost-effective approach is based on the combined risk from a representative selection of leakage scenarios, rather than a worst-case evaluation. This means that probability and severity of leak consequences must be evaluated together. In marine and
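
    The spacing-versus-risk trade-off can be illustrated with a toy Monte Carlo: the probability that a flammable cloud of a given size touches at least one working detector on a square grid, with detector availability below one. All numbers are hypothetical and the geometry is drastically simplified relative to real dispersion modeling.

```python
import numpy as np

def detection_probability(pitch, cloud_radius, availability=0.95, trials=100_000):
    """Leak centers are uniform over one grid cell; detectors sit at the four
    cell corners (valid for cloud_radius < pitch). A detector 'sees' the
    leak if it lies within the cloud radius and happens to be working."""
    rng = np.random.default_rng(4)
    centers = rng.uniform(0, pitch, size=(trials, 2))
    corners = np.array([[0, 0], [0, pitch], [pitch, 0], [pitch, pitch]])
    d = np.linalg.norm(centers[:, None, :] - corners[None, :, :], axis=2)
    in_range = d <= cloud_radius
    works = rng.random(in_range.shape) < availability  # independent failures
    return ((in_range & works).any(axis=1)).mean()

for pitch_m in (4.0, 6.0, 8.0):  # detector pitch in metres, hypothetical
    print(pitch_m, detection_probability(pitch_m, cloud_radius=3.0))
```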

  10. Space Station - An integrated approach to operational logistics support

    Science.gov (United States)

    Hosmer, G. J.

    1986-01-01

    Development of an efficient and cost effective operational logistics system for the Space Station will require logistics planning early in the program's design and development phase. This paper will focus on Integrated Logistics Support (ILS) Program techniques and their application to the Space Station program design, production and deployment phases to assure the development of an effective and cost efficient operational logistics system. The paper will provide the methodology and time-phased programmatic steps required to establish a Space Station ILS Program that will provide an operational logistics system based on planned Space Station program logistics support.

  11. Space-Wise approach for airborne gravity data modelling

    Science.gov (United States)

    Sampietro, D.; Capponi, M.; Mansi, A. H.; Gatti, A.; Marchetti, P.; Sansò, F.

    2017-05-01

    Regional gravity field modelling by means of the remove-compute-restore procedure is nowadays widely applied in different contexts: it is the most used technique for regional gravimetric geoid determination, and it is also used in exploration geophysics to predict grids of gravity anomalies (Bouguer, free-air, isostatic, etc.), which are useful to understand and map geological structures in a specific region. Considering this last application, due to the required accuracy and resolution, airborne gravity observations are usually adopted. However, due to the relatively high acquisition velocity, presence of atmospheric turbulence, aircraft vibration, instrumental drift, etc., airborne data are usually contaminated by a very high observation error. For this reason, a proper procedure to filter the raw observations in both the low and high frequencies should be applied to recover valuable information. In this work, a software package to filter and grid raw airborne observations is presented: the proposed solution consists of a combination of an along-track Wiener filter and a classical Least Squares Collocation technique. Basically, the proposed procedure is an adaptation to airborne gravimetry of the Space-Wise approach, developed by Politecnico di Milano to process data coming from the ESA satellite mission GOCE. Among the main differences with respect to the satellite application of this approach is the fact that, while in processing GOCE data the stochastic characteristics of the observation error can be considered a priori well known, in airborne gravimetry, due to the complex environment in which the observations are acquired, these characteristics are unknown and should be retrieved from the dataset itself. The presented solution is suited to airborne data analysis, allowing gravity observations to be filtered and gridded quickly and easily. Some innovative theoretical aspects, focusing in particular on the theoretical covariance modelling, are presented too.
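
    The along-track filtering step can be sketched as a frequency-domain Wiener filter. The signal and noise power spectral densities below are hypothetical placeholders; in the described procedure they would be estimated from the data themselves.

```python
import numpy as np

def wiener_filter_track(obs, dt, signal_psd, noise_psd):
    """Attenuate each Fourier component of an along-track record by the
    Wiener gain S/(S+N) built from assumed signal/noise PSD functions."""
    freqs = np.fft.rfftfreq(obs.size, dt)
    s, n = signal_psd(freqs), noise_psd(freqs)
    gain = s / (s + n)                 # per-frequency gain between 0 and 1
    return np.fft.irfft(gain * np.fft.rfft(obs), n=obs.size)

# Example: low-frequency "gravity" signal buried in white observation noise
rng = np.random.default_rng(5)
track = np.cumsum(rng.normal(size=4096)) * 0.01
noisy = track + rng.normal(scale=0.5, size=track.size)
smooth = wiener_filter_track(noisy, dt=1.0,
                             signal_psd=lambda f: 1.0 / (1e-4 + f**2),
                             noise_psd=lambda f: np.full_like(f, 0.25))
```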

  12. Space commerce in a global economy - Comparison of international approaches to commercial space

    Science.gov (United States)

    Stone, Barbara A.; Kleber, Peter

    1992-01-01

    A historical perspective, current status, and comparison of national government/commercial space industry relationships in the United States and Europe are presented. It is noted that space technology has been developed and used primarily to meet the needs of civil and military government initiatives. Two future trends of space technology development include new space enterprises, and the national drive to achieve a more competitive global economic position.

  13. The seesaw space, a vector space to identify and characterize large-scale structures at 1 AU

    Science.gov (United States)

    Lara, A.; Niembro, T.

    2017-12-01

    We introduce the seesaw space, an orthonormal space formed by the local and the global fluctuations of any of the four basic solar wind parameters: velocity, density, magnetic field and temperature, at any heliospheric distance. The fluctuations compare the standard deviation over a three-hour moving window against the running average of the parameter over a month (considered the local fluctuations) and over a year (the global fluctuations). We created this new vector space to identify the arrival of transients at any spacecraft without the need of an observer. We applied our method to the one-minute resolution data of the WIND spacecraft from 1996 to 2016. To study the behavior of the seesaw norms in terms of the solar cycle, we computed annual histograms and fitted piecewise functions formed by two log-normal distributions, and observed that one of the distributions is due to large-scale structures while the other is due to the ambient solar wind. The norm values at which the piecewise functions change vary with the solar cycle. We compared the seesaw norms of each of the basic parameters due to the arrival of coronal mass ejections, co-rotating interaction regions and sector boundaries reported in the literature. High seesaw norms are due to large-scale structures. We found three critical values of the norms that can be used to determine the arrival of coronal mass ejections. We also present general comparisons of the norms during the two maxima and the minimum of the solar cycle, and the differences of the norms due to large-scale structures in each period.
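
    A minimal sketch of the seesaw construction for one one-minute-cadence solar wind parameter, following the window choices in the abstract (a 3-hour moving standard deviation compared against one-month and one-year running means); the exact normalisation is an assumption.

```python
import numpy as np
import pandas as pd

def seesaw(series: pd.Series) -> pd.DataFrame:
    """series: one solar wind parameter on a DatetimeIndex (1-min cadence)."""
    sigma_3h = series.rolling("3h").std()
    local = sigma_3h / series.rolling("30D").mean()    # vs. monthly running mean
    glob = sigma_3h / series.rolling("365D").mean()    # vs. yearly running mean
    norm = np.hypot(local, glob)                       # seesaw norm in the 2-D space
    return pd.DataFrame({"local": local, "global": glob, "norm": norm})
```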

  14. A Systems Approach to Developing an Affordable Space Ground Transportation Architecture using a Commonality Approach

    Science.gov (United States)

    Garcia, Jerry L.; McCleskey, Carey M.; Bollo, Timothy R.; Rhodes, Russel E.; Robinson, John W.

    2012-01-01

    This paper presents a structured approach for achieving a compatible Ground System (GS) and Flight System (FS) architecture that is affordable, productive and sustainable. This paper is an extension of the paper titled "Approach to an Affordable and Productive Space Transportation System" by McCleskey et al. This paper integrates systems engineering concepts and operationally efficient propulsion system concepts into a structured framework for achieving GS and FS compatibility in the mid-term and long-term time frames. It also presents a functional and quantitative relationship for assessing system compatibility called the Architecture Complexity Index (ACI). This paper: (1) focuses on systems engineering fundamentals as it applies to improving GS and FS compatibility; (2) establishes mid-term and long-term spaceport goals; (3) presents an overview of transitioning a spaceport to an airport model; (4) establishes a framework for defining a ground system architecture; (5) presents the ACI concept; (6) demonstrates the approach by presenting a comparison of different GS architectures; and (7) presents a discussion on the benefits of using this approach with a focus on commonality.

  15. A Database Approach to Distributed State Space Generation

    NARCIS (Netherlands)

    Blom, Stefan; Lisser, Bert; van de Pol, Jan Cornelis; Weber, M.

    2007-01-01

    We study distributed state space generation on a cluster of workstations. It is explained why state space partitioning by a global hash function is problematic when states contain variables from unbounded domains, such as lists or other recursive datatypes. Our solution is to introduce a database

  16. Geometric approach to evolution problems in metric spaces

    NARCIS (Netherlands)

    Stojković, Igor

    2011-01-01

    This PhD thesis contains four chapters where research material is presented. The second chapter presents the extension of the product formulas for semigroups induced by convex functionals from the classical Hilbert space setting to the setting of general CAT(0) spaces. In the third chapter, the

  17. Evaluating public space pedestrian accessibility: a GIS approach

    NARCIS (Netherlands)

    Morar, T.; Bertolini, L.; Radoslav, R.

    2013-01-01

    Public spaces are sources of quality of life in neighborhoods. Seeking to help professionals and municipalities assess how well a public space can be used by the community it serves, this paper presents a GIS-based methodology for evaluating its pedestrian accessibility. The Romanian city of

  18. A Database Approach to Distributed State Space Generation

    NARCIS (Netherlands)

    Blom, Stefan; Lisser, Bert; van de Pol, Jan Cornelis; Weber, M.; Cerna, I.; Haverkort, Boudewijn R.H.M.

    2008-01-01

    We study distributed state space generation on a cluster of workstations. It is explained why state space partitioning by a global hash function is problematic when states contain variables from unbounded domains, such as lists or other recursive datatypes. Our solution is to introduce a database

  19. Combining different types of scale space interest points using canonical sets

    NARCIS (Netherlands)

    Kanters, F.M.W.; Denton, T.; Shokoufandeh, A.; Florack, L.M.J.; Haar Romenij, ter B.M.; Sgallari, F.; Murli, A.; Paragios, N.

    2007-01-01

    Scale space interest points capture important photometric and deep structure information of an image. The information content of such points can be made explicit using image reconstruction. In this paper we will consider the problem of combining multiple types of interest points used for image

  20. Statistical inference and visualization in scale-space for spatially dependent images

    KAUST Repository

    Vaughan, Amy; Jun, Mikyoung; Park, Cheolwoo

    2012-01-01

    SiZer (SIgnificant ZERo crossing of the derivatives) is a graphical scale-space visualization tool that allows for statistical inferences. In this paper we develop a spatial SiZer for finding significant features and conducting goodness-of-fit tests

  1. Scale Space Methods for Analysis of Type 2 Diabetes Patients' Blood Glucose Values

    Directory of Open Access Journals (Sweden)

    Stein Olav Skrøvseth

    2011-01-01

    Full Text Available We describe how scale space methods can be used for quantitative analysis of blood glucose concentrations from type 2 diabetes patients. Blood glucose values were recorded voluntarily by the patients over one full year as part of a self-management process, where the time and frequency of the recordings are decided by the patients. This makes the dataset unique in its extent, though with a large variation in the reliability of the recordings. Scale space and frequency space techniques are suited to reveal important features of unevenly sampled data; they are useful for identifying medically relevant features that patients can use in their self-management process, and they provide useful information for physicians.
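
    A minimal sketch of one way to build a scale-space for such unevenly sampled records, using Nadaraya-Watson (Gaussian kernel) smoothing so irregular sampling is handled naturally; the scales, evaluation grid and synthetic data are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def scale_space(t_obs, y_obs, t_grid, scales=(0.5, 2.0, 8.0)):
    """Smooth irregular samples (t_obs, y_obs) on t_grid; scales in days."""
    out = {}
    for s in scales:
        w = np.exp(-0.5 * ((t_grid[:, None] - t_obs[None, :]) / s) ** 2)
        out[s] = (w @ y_obs) / w.sum(axis=1)   # kernel-weighted mean per grid point
    return out

# Example: 200 readings scattered over a year, viewed at three scales.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 365, 200))
y = 7 + np.sin(2 * np.pi * t / 90) + rng.normal(0, 0.8, t.size)  # synthetic mmol/L
curves = scale_space(t, y, np.linspace(0, 365, 500))
```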

  2. Bridging the PSI Knowledge Gap: A Multi-Scale Approach

    Energy Technology Data Exchange (ETDEWEB)

    Wirth, Brian D. [Univ. of Tennessee, Knoxville, TN (United States)

    2015-01-08

    Plasma-surface interactions (PSI) pose an immense scientific hurdle in magnetic confinement fusion, and our present understanding of PSI in confinement environments is highly inadequate; indeed, a recent Fusion Energy Sciences Advisory Committee report found that four of the top five fusion knowledge gaps were related to PSI. The time is appropriate to develop a concentrated and synergistic science effort that would expand, exploit and integrate the wealth of laboratory ion-beam and plasma research, as well as exciting new computational tools, towards the goal of bridging the PSI knowledge gap. This effort would broadly advance plasma and material sciences, while providing critical knowledge towards progress in fusion PSI. This project involves the development of a Science Center focused on a new approach to PSI science, one that exploits access to state-of-the-art PSI experiments and modeling as well as confinement devices. The organizing principle is to develop synergistic experimental and modeling tools that treat the truly coupled multi-scale aspect of the PSI issues in confinement devices. This is motivated by the simple observation that while typical lab experiments and models allow independent manipulation of controlling variables, the confinement PSI environment is essentially self-determined, with few outside controls. This means that processes that may be treated independently in laboratory experiments, because they involve vastly different physical and time scales, will affect one another in the confinement environment. Also, lab experiments cannot simultaneously match all exposure conditions found in confinement devices, typically forcing a linear extrapolation of lab results. At the same time, programmatic limitations prevent confinement experiments alone from answering many key PSI questions. The resolution to this problem is to usefully exploit access to PSI science in lab devices, while retooling our thinking from a linear and de

  3. Biodiversity conservation in agriculture requires a multi-scale approach.

    Science.gov (United States)

    Gonthier, David J; Ennis, Katherine K; Farinas, Serge; Hsieh, Hsun-Yi; Iverson, Aaron L; Batáry, Péter; Rudolphi, Jörgen; Tscharntke, Teja; Cardinale, Bradley J; Perfecto, Ivette

    2014-09-22

    Biodiversity loss--one of the most prominent forms of modern environmental change--has been heavily driven by terrestrial habitat loss and, in particular, the spread and intensification of agriculture. Expanding agricultural land-use has led to the search for strong conservation strategies, with some suggesting that biodiversity conservation in agriculture is best maximized by reducing local management intensity, such as fertilizer and pesticide application. Others highlight the importance of landscape-level approaches that incorporate natural or semi-natural areas in landscapes surrounding farms. Here, we show that both of these practices are valuable to the conservation of biodiversity, and that either local or landscape factors can be most crucial to conservation planning depending on which types of organisms one wishes to save. We performed a quantitative review of 266 observations taken from 31 studies that compared the impacts of localized (within farm) management strategies and landscape complexity (around farms) on the richness and abundance of plant, invertebrate and vertebrate species in agro-ecosystems. While both factors significantly impacted species richness, the richness of sessile plants increased with less-intensive local management, but did not significantly respond to landscape complexity. By contrast, the richness of mobile vertebrates increased with landscape complexity, but did not significantly increase with less-intensive local management. Invertebrate richness and abundance responded to both factors. Our analyses point to clear differences in how various groups of organisms respond to differing scales of management, and suggest that preservation of multiple taxonomic groups will require multiple scales of conservation. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  4. Space Station Freedom - Approaching the critical design phase

    Science.gov (United States)

    Kohrs, Richard H.; Huckins, Earle, III

    1992-01-01

    The status and future developments of the Space Station Freedom are discussed. To date detailed design drawings are being produced to manufacture SSF hardware. A critical design review (CDR) for the man-tended capability configuration is planned to be performed in 1993 under the SSF program. The main objective of the CDR is to enable the program to make a full commitment to proceed to manufacture parts and assemblies. NASA recently signed a contract with the Russian space company, NPO Energia, to evaluate potential applications of various Russian space hardware for on-going NASA programs.

  5. Linking Time and Space Scales in Distributed Hydrological Modelling - a case study for the VIC model

    Science.gov (United States)

    Melsen, Lieke; Teuling, Adriaan; Torfs, Paul; Zappa, Massimiliano; Mizukami, Naoki; Clark, Martyn; Uijlenhoet, Remko

    2015-04-01

    One of the famous paradoxes of the Greek philosopher Zeno of Elea (~450 BC) is the one with the arrow: if one shoots an arrow and cuts its motion into such small time steps that at every step the arrow is standing still, the arrow is motionless, because a concatenation of non-moving parts does not create motion. Nowadays, this reasoning can be refuted easily, because we know that motion is a change in space over time, which thus by definition depends on both time and space. If one disregards time by cutting it into infinitely small steps, motion is also excluded. This example shows that time and space are linked and therefore hard to evaluate separately. As hydrologists we want to understand and predict the motion of water, which means we have to look both in space and in time. In hydrological models we can account for space by using spatially explicit models. With increasing computational power and increased data availability from e.g. satellites, it has become easier to apply models at a higher spatial resolution. Increasing the resolution of hydrological models is also labelled as one of the 'Grand Challenges' in hydrology by Wood et al. (2011) and Bierkens et al. (2014), who call for global modelling at hyperresolution (~1 km and smaller). A literature survey of 242 peer-reviewed articles in which the Variable Infiltration Capacity (VIC) model was used showed that the grid size at which the model is applied has decreased over the past 17 years: from 0.5-2 degrees when the model was first developed to 1/8 and even 1/32 degree nowadays. On the other hand, the literature survey showed that the time step at which the model is calibrated and/or validated has remained the same over the last 17 years: mainly daily or monthly. Klemeš (1983) stresses that space and time scales are connected, and that downscaling the spatial scale therefore also implies downscaling the temporal scale. Is it worth the effort of downscaling your model from 1 degree to 1

  6. Fractal electrodynamics via non-integer dimensional space approach

    Science.gov (United States)

    Tarasov, Vasily E.

    2015-09-01

    Using the recently suggested vector calculus for non-integer dimensional space, we consider electrodynamics problems in the isotropic case. This calculus allows us to describe fractal media in the framework of continuum models with non-integer dimensional space. We consider electric and magnetic fields of fractal media with charges and currents in the framework of continuum models with non-integer dimensional spaces. Applications of the fractal Gauss law, the fractal Ampère circuital law, the fractal Poisson equation for the electric potential, and an equation for the fractal stream of charges are suggested. Lorentz invariance and the speed of light in fractal electrodynamics are discussed. An expression for the effective refractive index of non-integer dimensional space is suggested.

  7. The XML approach to implementing space link extension service management

    Science.gov (United States)

    Tai, W.; Welz, G. A.; Theis, G.; Yamada, T.

    2001-01-01

    A feasibility study has been conducted at JPL, ESOC, and ISAS to assess the possible applications of the eXtensible Mark-up Language (XML) capabilities to the implementation of the CCSDS Space Link Extension (SLE) Service Management function.

  8. Pattern recognition in probability spaces for visualization and identification of plasma confinement regimes and confinement time scaling

    International Nuclear Information System (INIS)

    Verdoolaege, G; Karagounis, G; Oost, G Van; Tendler, M

    2012-01-01

    Pattern recognition is becoming an increasingly important tool for making inferences from the massive amounts of data produced in fusion experiments. The purpose is to contribute to physics studies and plasma control. In this work, we address the visualization of plasma confinement data, the (real-time) identification of confinement regimes and the establishment of a scaling law for the energy confinement time. We take an intrinsically probabilistic approach, modeling data from the International Global H-mode Confinement Database with Gaussian distributions. We show that pattern recognition operations working in the associated probability space are considerably more powerful than their counterparts in a Euclidean data space. This opens up new possibilities for analyzing confinement data and for fusion data processing in general. We hence advocate the essential role played by measurement uncertainty for data interpretation in fusion experiments. (paper)
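
    As a simple illustration of replacing Euclidean distances between measurements by distances between their Gaussian models (a stand-in for the geodesic distances used in such work, not the paper's exact metric), the closed-form symmetrised Kullback-Leibler divergence separates points that Euclidean geometry cannot.

```python
import numpy as np

def kl_gauss(mu0, s0, mu1, s1):
    """Closed-form KL divergence KL(N(mu0, s0^2) || N(mu1, s1^2))."""
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

def sym_kl(mu0, s0, mu1, s1):
    """Symmetrised KL: a simple dissimilarity between Gaussian measurements."""
    return kl_gauss(mu0, s0, mu1, s1) + kl_gauss(mu1, s1, mu0, s0)

# Equal means but different error bars: identical in Euclidean data space,
# clearly separated in probability space.
print(sym_kl(1.0, 0.1, 1.0, 0.5))
```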

  9. Exploring Multi-Scale Spatiotemporal Twitter User Mobility Patterns with a Visual-Analytics Approach

    Directory of Open Access Journals (Sweden)

    Junjun Yin

    2016-10-01

    Full Text Available Understanding human mobility patterns is of great importance for urban planning, traffic management, and even marketing campaigns. However, the capability of capturing detailed human movements with fine-grained spatial and temporal granularity is still limited. In this study, we extracted high-resolution mobility data from a collection of over 1.3 billion geo-located Twitter messages. Unlike mobile phone call records, whose access is restricted by concerns over infringement of individual privacy, the dataset is collected from publicly accessible Twitter data streams. In this paper, we employed a visual-analytics approach to studying multi-scale spatiotemporal Twitter user mobility patterns in the contiguous United States during the year 2014. Our approach includes a scalable visual-analytics framework to deliver efficiency and scalability in filtering a large volume of geo-located tweets, modeling and extracting Twitter user movements, generating space-time user trajectories, and summarizing multi-scale spatiotemporal user mobility patterns. We performed a set of statistical analyses to understand Twitter user mobility patterns across multiple spatial scales and temporal ranges. In particular, mobility patterns measured by the displacements and radii of gyration of individuals revealed multi-scale or multi-modal Twitter user mobility. By further studying such patterns in different temporal ranges, we identified both consistency and seasonal fluctuations in the distance-decay effects of the corresponding mobility patterns. Our approach also provides a geo-visualization unit with an interactive 3D virtual-globe web mapping interface for exploratory geo-visual analytics of multi-level spatiotemporal Twitter user movements.
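
    A short sketch of the standard radius-of-gyration statistic referred to above, for one user's trajectory; projected planar coordinates are assumed, so great-circle distances are omitted.

```python
import numpy as np

def radius_of_gyration(points: np.ndarray) -> float:
    """points: (n, 2) array of one user's projected (x, y) positions."""
    center = points.mean(axis=0)                 # the user's centre of mass
    return float(np.sqrt(((points - center) ** 2).sum(axis=1).mean()))
```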

  10. Coupled radiative gasdynamic interaction and non-equilibrium dissociation for large-scale returned space vehicles

    International Nuclear Information System (INIS)

    Surzhikov, S.

    2012-01-01

    Graphical abstract: Different coupled vibrational dissociation models, applied to coupled radiative gasdynamic problems for large space vehicles, exert a noticeable effect on the radiative heating of the vehicle surface during orbital entry at high altitudes (h ⩾ 70 km); this influence decreases with decreasing vehicle size. (The accompanying figure shows translational and vibrational temperatures in the shock layer, with and without radiative-gasdynamic interaction, for one trajectory point of an entering space vehicle.) Highlights: Nonequilibrium dissociation processes affect the radiative heating of space vehicles (SV); the radiation gasdynamic interaction enhances this influence; the influence increases with SV size. Abstract: Radiative aerothermodynamics of large-scale space vehicles is considered for Earth orbital entry at zero angle of attack. A brief description is given of the radiative gasdynamic model of physically and chemically nonequilibrium, viscous, heat-conductive and radiative gas of complex chemical composition. Radiation gasdynamic (RadGD) interaction in the high-temperature shock layer is studied by means of numerical experiment. It is shown that radiation-gasdynamic coupling for orbital space vehicles of large size is important for the high-altitude part of the entry trajectory. It is demonstrated that the use of different models of coupled vibrational dissociation (CVD) in conditions of RadGD interaction gives rise to temperature variation in the shock layer and, as a result, leads to significant variation of the radiative heating of the space vehicle.

  11. An optimal beam alignment method for large-scale distributed space surveillance radar system

    Science.gov (United States)

    Huang, Jian; Wang, Dongya; Xia, Shuangzhi

    2018-06-01

    Large-scale distributed space surveillance radar is very important ground-based equipment for maintaining a complete catalogue of Low Earth Orbit (LEO) space debris. However, because the sites of the distributed radar system are thousands of kilometers apart, optimally aligning the Transmitting/Receiving (T/R) beams across such a large volume using narrow beams poses a special and considerable technical challenge in the space surveillance area. Based on a common coordinate transformation model and the radar beam space model, we present a two-dimensional projection algorithm for the T/R beams using their direction angles, which can visually describe and assess the beam alignment performance. Subsequently, optimal mathematical models for the orientation angle of the antenna array, the site location and the T/R beam coverage are constructed, and the beam alignment parameters are precisely solved. Finally, we conducted optimal beam alignment experiments based on the site parameters of the Air Force Space Surveillance System (AFSSS). The simulation results demonstrate the correctness and effectiveness of our novel method, which can significantly support the construction of LEO space debris surveillance equipment.
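
    A hedged sketch of the geometric core of such beam alignment: direction angles are turned into unit pointing vectors in a local East-North-Up frame so the transmit/receive angular offset can be evaluated. The paper's site-to-site coordinate transformation and optimization models are omitted, and all values are illustrative.

```python
import numpy as np

def beam_vector(az_deg, el_deg):
    """Azimuth/elevation direction angles to a unit vector (East, North, Up)."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    return np.array([np.cos(el) * np.sin(az),    # East
                     np.cos(el) * np.cos(az),    # North
                     np.sin(el)])                # Up

def angular_offset(v1, v2):
    """Angle in degrees between two beam axes."""
    return np.degrees(np.arccos(np.clip(v1 @ v2, -1.0, 1.0)))

# A 1-degree azimuth misalignment at 45-degree elevation:
print(angular_offset(beam_vector(90, 45), beam_vector(91, 45)))  # ~0.707 deg
```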

  12. Hierarchical Stereo Matching in Two-Scale Space for Cyber-Physical System

    Directory of Open Access Journals (Sweden)

    Eunah Choi

    2017-07-01

    Full Text Available Dense disparity map estimation from a high-resolution stereo image is a very difficult problem in terms of both matching accuracy and computational efficiency, because an exhaustive disparity search at full resolution is required. In general, examining more pixels in the stereo view results in more ambiguous correspondences. When a high-resolution image is down-sampled, the high-frequency components of the fine-scaled image are at risk of disappearing in the coarse-resolution image. Furthermore, if erroneous disparity estimates caused by missing high-frequency components are propagated across scale space, ultimately, false disparity estimates are obtained. To solve these problems, we introduce an efficient hierarchical stereo matching method in two-scale space. This method applies disparity estimation to the reduced-resolution image, and the disparity result is then up-sampled to the original resolution. The disparity estimates for the high-frequency (or edge component) regions of the full-resolution image are combined with the up-sampled disparity results. In this study, we extracted the high-frequency areas from the scale-space representation by using difference of Gaussian (DoG) or found edge components using a Canny operator. Then, edge-aware disparity propagation was used to refine the disparity map. The experimental results show that the proposed algorithm outperforms previous methods.
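
    A hedged sketch of the two-scale idea using OpenCV's semi-global matcher: disparity is estimated at half resolution, up-sampled, and overwritten by full-resolution estimates only where a DoG mask flags high-frequency content. The matcher settings and DoG threshold are assumptions, and the paper's edge-aware propagation step is not reproduced.

```python
import cv2
import numpy as np

def two_scale_disparity(left, right, num_disp=128):
    """left, right: rectified 8-bit grayscale stereo pair."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=num_disp,
                                 blockSize=5)
    # Coarse pass at half resolution; disparity values scale with image size.
    coarse = sgbm.compute(cv2.pyrDown(left), cv2.pyrDown(right)).astype(np.float32) / 16.0
    up = 2.0 * cv2.resize(coarse, (left.shape[1], left.shape[0]))
    # DoG mask marks high-frequency regions the coarse pass may have missed.
    g1 = cv2.GaussianBlur(left, (0, 0), 1.0).astype(np.float32)
    g2 = cv2.GaussianBlur(left, (0, 0), 2.0).astype(np.float32)
    mask = np.abs(g1 - g2) > 4.0
    # For brevity the fine pass runs everywhere; restricting it to the
    # masked regions is what makes the hierarchical scheme pay off.
    fine = sgbm.compute(left, right).astype(np.float32) / 16.0
    up[mask] = fine[mask]                 # keep fine estimates only at edges
    return up
```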

  13. Hierarchical Stereo Matching in Two-Scale Space for Cyber-Physical System.

    Science.gov (United States)

    Choi, Eunah; Lee, Sangyoon; Hong, Hyunki

    2017-07-21

    Dense disparity map estimation from a high-resolution stereo image is a very difficult problem in terms of both matching accuracy and computational efficiency, because an exhaustive disparity search at full resolution is required. In general, examining more pixels in the stereo view results in more ambiguous correspondences. When a high-resolution image is down-sampled, the high-frequency components of the fine-scaled image are at risk of disappearing in the coarse-resolution image. Furthermore, if erroneous disparity estimates caused by missing high-frequency components are propagated across scale space, ultimately, false disparity estimates are obtained. To solve these problems, we introduce an efficient hierarchical stereo matching method in two-scale space. This method applies disparity estimation to the reduced-resolution image, and the disparity result is then up-sampled to the original resolution. The disparity estimation values of the high-frequency (or edge component) regions of the full-resolution image are combined with the up-sampled disparity results. In this study, we extracted the high-frequency areas from the scale-space representation by using difference of Gaussian (DoG) or found edge components, using a Canny operator. Then, edge-aware disparity propagation was used to refine the disparity map. The experimental results show that the proposed algorithm outperforms previous methods.

  14. A new approach to the analysis of the phase space of f(R)-gravity

    Energy Technology Data Exchange (ETDEWEB)

    Carloni, S., E-mail: sante.carloni@tecnico.ulisboa.pt [Centro Multidisciplinar de Astrofisica—CENTRA, Instituto Superior Tecnico – IST, Universidade de Lisboa – UL, Avenida Rovisco Pais 1, 1049-001 (Portugal)

    2015-09-01

    We propose a new dynamical system formalism for the analysis of f(R) cosmologies. The new approach eliminates the need for cumbersome inversions to close the dynamical system and allows the analysis of the phase space of f(R)-gravity models which cannot be investigated using the standard technique. Differently from previously proposed similar techniques, the new method is constructed in such a way as to associate with the fixed points scale factors that contain four integration constants (i.e. solutions of fourth-order differential equations). In this way new light is shed on the physical meaning of the fixed points. We apply this technique to some f(R) Lagrangians relevant for inflationary and dark energy models.

  15. Distribution function approach to redshift space distortions. Part V: perturbation theory applied to dark matter halos

    Energy Technology Data Exchange (ETDEWEB)

    Vlah, Zvonimir; Seljak, Uroš [Institute for Theoretical Physics, University of Zürich, Zürich (Switzerland); Okumura, Teppei [Institute for the Early Universe, Ewha Womans University, Seoul, S. Korea (Korea, Republic of); Desjacques, Vincent, E-mail: zvlah@physik.uzh.ch, E-mail: seljak@physik.uzh.ch, E-mail: teppei@ewha.ac.kr, E-mail: Vincent.Desjacques@unige.ch [Département de Physique Théorique and Center for Astroparticle Physics (CAP) Université de Genéve, Genéve (Switzerland)

    2013-10-01

    Numerical simulations show that redshift space distortions (RSD) introduce strong scale dependence in the power spectra of halos, with ten percent deviations relative to linear theory predictions even on relatively large scales (k < 0.1h/Mpc) and even in the absence of satellites (which induce Fingers-of-God, FoG, effects). If unmodeled, these effects prevent one from extracting cosmological information from RSD surveys. In this paper we use Eulerian perturbation theory (PT) and an Eulerian halo biasing model and apply them to the distribution function approach to RSD, in which RSD is decomposed into several correlators of density weighted velocity moments. We model each of these correlators using PT and compare the results to simulations over a wide range of halo masses and redshifts. We find that with the introduction of physically motivated halo biasing, and using dark matter power spectra from simulations, we can reproduce the simulation results at a percent level on scales up to k ∼ 0.15h/Mpc at z = 0, without the need to have free FoG parameters in the model.

  16. Large-Scale Demonstration of Liquid Hydrogen Storage with Zero Boiloff for In-Space Applications

    Science.gov (United States)

    Hastings, L. J.; Bryant, C. B.; Flachbart, R. H.; Holt, K. A.; Johnson, E.; Hedayat, A.; Hipp, B.; Plachta, D. W.

    2010-01-01

    Cryocooler and passive insulation technology advances have substantially improved prospects for zero-boiloff cryogenic storage. Therefore, a cooperative effort by NASA's Ames Research Center, Glenn Research Center, and Marshall Space Flight Center (MSFC) was implemented to develop zero-boiloff concepts for in-space cryogenic storage. Described herein is one program element: a large-scale zero-boiloff demonstration using the MSFC multipurpose hydrogen test bed (MHTB). A commercial cryocooler was interfaced with an existing MHTB spray-bar mixer and insulation system in a manner that enabled a balance between incoming and extracted thermal energy.

  17. The canonical Lagrangian approach to three-space general relativity

    Science.gov (United States)

    Shyam, Vasudev; Venkatesh, Madhavan

    2013-07-01

    We study the action for the three-space formalism of general relativity, better known as the Barbour-Foster-Ó Murchadha action, which is a square-root Baierlein-Sharp-Wheeler action. In particular, we explore the (pre)symplectic structure by pulling it back via a Legendre map to the tangent bundle of the configuration space of this action. With it we attain the canonical Lagrangian vector field which generates the gauge transformations (3-diffeomorphisms) and the true physical evolution of the system. This vector field encapsulates all the dynamics of the system. We also discuss briefly the observables and perennials for this theory. We then present a symplectic reduction of the constrained phase space.

  18. Lie-Hamilton systems on curved spaces: a geometrical approach

    Science.gov (United States)

    Herranz, Francisco J.; de Lucas, Javier; Tobolski, Mariusz

    2017-12-01

    A Lie-Hamilton system is a nonautonomous system of first-order ordinary differential equations describing the integral curves of a t-dependent vector field taking values in a finite-dimensional Lie algebra, a Vessiot-Guldberg Lie algebra, of Hamiltonian vector fields relative to a Poisson structure. Its general solution can be written as an autonomous function, the superposition rule, of a generic finite family of particular solutions and a set of constants. We pioneer the study of Lie-Hamilton systems on Riemannian spaces (sphere, Euclidean and hyperbolic plane), pseudo-Riemannian spaces (anti-de Sitter, de Sitter, and Minkowski spacetimes) as well as on semi-Riemannian spaces (Newtonian spacetimes). Their corresponding constants of motion and superposition rules are obtained explicitly in a geometric way. This work extends the (graded) contraction of Lie algebras to a contraction procedure for Lie algebras of vector fields, Hamiltonian functions, and related symplectic structures, invariants, and superposition rules.

  19. From Planetary Boundaries to national fair shares of the global safe operating space - How can the scales be bridged?

    Science.gov (United States)

    Häyhä, Tiina; Cornell, Sarah; Lucas, Paul; van Vuuren, Detlef; Hoff, Holger

    2016-04-01

    The planetary boundaries framework proposes precautionary quantitative global limits to the anthropogenic perturbation of crucial Earth system processes. In this way, it marks out a planetary 'safe operating space' for human activities. However, decisions regarding resource use and emissions are mostly made at much smaller scales, mostly by (sub-)national and regional governments, businesses, and other local actors. To operationalize the planetary boundaries, they need to be translated into, and aligned with, targets that are relevant at these smaller scales. In this paper, we develop a framework that addresses the three dimensions of bridging across scales: biophysical, socio-economic and ethical, to provide a consistent, universally applicable approach for translating the planetary boundaries into context-specific and fair national shares of the safe operating space. We discuss our findings in the context of previous studies and their implications for future analyses and policymaking. In this way, we help link the planetary boundaries framework to widely applied operational and policy concepts for more robust strong-sustainability decision-making.

  20. A multilevel control approach for a modular structured space platform

    Science.gov (United States)

    Chichester, F. D.; Borelli, M. T.

    1981-01-01

    A three-axis mathematical representation of a modular assembled space platform consisting of interconnected discrete masses, including a deployable truss module, was derived for digital computer simulation. The platform attitude control system was developed to provide multilevel control utilizing the Gauss-Seidel second-level formulation along with an extended form of linear quadratic regulator techniques. The objectives of the multilevel control are to decouple the space platform's spatial axes and to accommodate modification of the platform's configuration for each of the decoupled axes.
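
    A minimal sketch of the linear-quadratic-regulator step underlying such a design, for one decoupled axis modelled (as an assumption) as a double integrator; the Gauss-Seidel second-level coordination is not shown.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# One decoupled platform axis, state x = [attitude, rate] (assumed model).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])       # penalise attitude error most
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)   # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)        # optimal state-feedback gain, u = -K x
print(K)
```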

  1. Noise pollution mapping approach and accuracy on landscape scales.

    Science.gov (United States)

    Iglesias Merchan, Carlos; Diaz-Balteiro, Luis

    2013-04-01

    Noise mapping allows the characterization of environmental variables, such as noise pollution or soundscape, depending on the task. Strategic noise mapping (as per Directive 2002/49/EC, 2002) is a tool intended for the assessment of noise pollution at the European level every five years. These maps are based on common methods and procedures intended for human exposure assessment in the European Union that could also be adapted for assessing environmental noise pollution in natural parks. However, given the size of such areas, there could be an alternative approach to soundscape characterization rather than using human noise exposure procedures. It is possible to optimize the size of the mapping grid used for such work by taking into account the attributes of the area to be studied and the desired outcome. This would then optimize the mapping time and the cost. This type of optimization is important in noise assessment as well as in the study of other environmental variables. This study compares 15 models using different grid sizes to assess the accuracy of road traffic noise mapping at a landscape scale, with respect to noise and landscape indicators. In a study area located in the Manzanares High River Basin Regional Park in Spain, different accuracy levels (Kappa index values from 0.725 to 0.987) were obtained depending on the terrain and noise source properties. The time taken for the calculations and the noise mapping accuracy results reveal the potential for setting the map resolution in line with decision-makers' criteria and budget considerations. Copyright © 2013 Elsevier B.V. All rights reserved.
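
    An illustrative computation of the kappa agreement index used above to score a coarser map against a reference, with both maps discretised into noise-level classes; the data here are synthetic stand-ins, not the study's maps.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Synthetic class-coded noise maps: a reference and a version with ~5%
# of cells reassigned, mimicking disagreement from a coarser model grid.
rng = np.random.default_rng(0)
reference = rng.integers(0, 6, size=(100, 100))        # six noise classes
degraded = reference.copy()
changed = rng.random(reference.shape) < 0.05
degraded[changed] = rng.integers(0, 6, size=changed.sum())

print(cohen_kappa_score(reference.ravel(), degraded.ravel()))
```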

  2. The group approach to AdS space propagators

    International Nuclear Information System (INIS)

    Leonhardt, Thorsten; Manvelyan, Ruben; Ruehl, Werner

    2003-01-01

    We show that AdS two-point functions can be obtained by connecting two points in the interior of AdS space with one point on its boundary by a dual pair of Dobrev's boundary-to-bulk intertwiners and integrating over the boundary point

  3. Hybrid Enhanced Epidermal SpaceSuit Design Approaches

    Science.gov (United States)

    Jessup, Joseph M.

    A space suit that does not rely on gas pressurization poses a multi-faceted design problem that requires major stability controls to be incorporated during design and construction. The Hybrid Epidermal Enhancement (HEE) space suit concept integrates evolved human anthropomorphic and physiological adaptations into its functionality, using commercially available bio-medical technologies to address the shortcomings of conventional gas pressure suits and the impracticalities of mechanical counterpressure (MCP) suits. The prototype HEE space suit explored integumentary homeostasis, thermal control and mobility using advanced bio-medical materials technology and construction concepts. The goal was a space suit that functions as an enhanced, multi-functional bio-mimic of the human epidermal layer, working in attunement with the wearer rather than as a separate system. In addressing human physiological requirements for the design and construction of the HEE suit, testing regimes were devised and integrated into the prototype, which was then subjected to a series of detailed tests using both anatomical reproduction methods and human subjects.

  4. An approach to developing user interfaces for space systems

    Science.gov (United States)

    Shackelford, Keith; McKinney, Karen

    1993-08-01

    Inherent weaknesses in the traditional waterfall model of software development have led to the definition of the spiral model. The spiral model software development lifecycle, however, had not been applied to NASA projects. This paper describes its use in developing real-time user interface software for an Environmental Control and Life Support System (ECLSS) Process Control Prototype at NASA's Marshall Space Flight Center.

  5. Real-space renormalization group approach to driven diffusive systems

    Energy Technology Data Exchange (ETDEWEB)

    Hanney, T [SUPA and School of Physics, University of Edinburgh, Mayfield Road, Edinburgh, EH9 3JZ (United Kingdom); Stinchcombe, R B [Theoretical Physics, 1 Keble Road, Oxford, OX1 3NP (United Kingdom)

    2006-11-24

    We introduce a real-space renormalization group procedure for driven diffusive systems which predicts both steady state and dynamic properties. We apply the method to the boundary driven asymmetric simple exclusion process and recover exact results for the steady state phase diagram, as well as the crossovers in the relaxation dynamics for each phase.

  6. Real-space renormalization group approach to driven diffusive systems

    International Nuclear Information System (INIS)

    Hanney, T; Stinchcombe, R B

    2006-01-01

    We introduce a real-space renormalization group procedure for driven diffusive systems which predicts both steady state and dynamic properties. We apply the method to the boundary driven asymmetric simple exclusion process and recover exact results for the steady state phase diagram, as well as the crossovers in the relaxation dynamics for each phase

  7. Quantitative approach to measuring the cerebrospinal fluid space with CT

    Energy Technology Data Exchange (ETDEWEB)

    Zeumer, H.; Hacke, W.; Hartwich, P.

    1982-01-01

    A method for measuring the subarachnoid space using an independent CT evaluation unit is described. Normal values have been calculated for patients according to age, and three examples are presented demonstrating reversible decrease of brain volume in patients suffering from anorexia nervosa and chronic alcoholism.

  8. Long-Term Memory: A State-Space Approach

    Science.gov (United States)

    Kiss, George R.

    1972-01-01

    Some salient concepts derived from the information sciences and currently used in theories of human memory are critically reviewed. The application of automata theory is proposed as a new approach in this field. The approach is illustrated by applying it to verbal memory. (Author)

  9. A Conceptual Approach for Optimising Bus Stop Spacing

    Science.gov (United States)

    Johar, Amita; Jain, S. S.; Garg, P. K.

    2017-06-01

    An efficient public transportation system is essential for any country. The growth, development and shape of urban areas are strongly influenced by the availability of good transportation (Shah et al. in Inst Town Plan India J 5(3):50-59, 1). In developing countries like India, travel by local bus within a city is very common. Accidents, congestion, pollution and the appropriate location of bus stops are major problems in metropolitan cities. Among all the metropolitan cities in India, Delhi has the highest growth rate of population and vehicles. It is therefore important to adopt efficient and effective ways to improve mobility in metropolitan cities, in order to overcome these problems and to reduce the number of private vehicles on the road. The primary objective of this paper is to present a methodology for developing a model for optimum bus stop spacing (OBSS). It describes the evaluation of an existing urban bus route, data collection, development of a model for optimizing the urban bus route, and application of the model. In this work, the bus passenger generalized cost method is used to optimize the spacing between bus stops. In the first phase, the applicability of the model was evaluated with data from a Delhi Transport Corporation (DTC) urban bus route in a spreadsheet; an implementation in C++ is proposed as a next step. The developed model is expected to be useful to transport planners for the rational design of bus stop spacing, saving travel time and generalized operating cost. The analysis finds that the optimal spacing between bus stops lies between 250 and 500 m. The proposed spacing also ensures that stops do not fall too close to metro/rail stations, flyover entries or exits, or traffic signals.
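
    A hedged sketch of a generalized-cost evaluation over candidate stop spacings, in the spirit of the method described: walking (access) cost falls as stops get closer while stop-delay cost rises. All parameter values are illustrative assumptions, not the paper's calibrated ones.

```python
import numpy as np

def generalized_cost(spacing_m, trip_len_m=4000.0, walk_speed=1.2,
                     dwell_s=20.0, value_walk=2.0, value_ride=1.0):
    """Per-passenger generalized cost (weighted seconds) for one spacing."""
    walk_time = (spacing_m / 4.0) / walk_speed       # mean walk to the nearest stop
    stop_delay = (trip_len_m / spacing_m) * dwell_s  # extra in-vehicle time from stops
    return value_walk * walk_time + value_ride * stop_delay

spacings = np.arange(100, 1001, 25)
best = spacings[int(np.argmin([generalized_cost(s) for s in spacings]))]
print(best)   # 450 m with these assumed values, inside the 250-500 m range above
```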

  10. An introduction to data reduction: space-group determination, scaling and intensity statistics.

    Science.gov (United States)

    Evans, Philip R

    2011-04-01

    This paper presents an overview of how to run the CCP4 programs for data reduction (SCALA, POINTLESS and CTRUNCATE) through the CCP4 graphical interface ccp4i and points out some issues that need to be considered, together with a few examples. It covers determination of the point-group symmetry of the diffraction data (the Laue group), which is required for the subsequent scaling step, examination of systematic absences, which in many cases will allow inference of the space group, putting multiple data sets on a common indexing system when there are alternatives, the scaling step itself, which produces a large set of data-quality indicators, estimation of |F| from intensity and finally examination of intensity statistics to detect crystal pathologies such as twinning. An appendix outlines the scoring schemes used by the program POINTLESS to assign probabilities to possible Laue and space groups.
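
    As an illustration of the intensity statistics mentioned above, the second moment ⟨I²⟩/⟨I⟩² of acentric reflections is close to 2 for untwinned data and drops toward 1.5 for a perfect twin (Wilson statistics); synthetic exponential intensities stand in for real data here, and this is not CTRUNCATE's implementation.

```python
import numpy as np

# Acentric intensities follow an exponential (Wilson) distribution; a
# perfect twin averages two independent intensities, halving the spread.
rng = np.random.default_rng(1)
I_untwinned = rng.exponential(scale=1.0, size=100_000)
I_twinned = 0.5 * (rng.exponential(1.0, 100_000) + rng.exponential(1.0, 100_000))

for name, I in [("untwinned", I_untwinned), ("perfect twin", I_twinned)]:
    print(name, (I**2).mean() / I.mean()**2)   # ~2.0 vs. ~1.5
```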

  11. Approaches to Outdoor Thermal Comfort Thresholds through Public Space Design: A Review

    Directory of Open Access Journals (Sweden)

    Andre Santos Nouri

    2018-03-01

    Full Text Available Based on the Köppen Geiger (KG) classification system, this review article examines existing studies and projects that have endeavoured to address local outdoor thermal comfort thresholds through Public Space Design (PSD). The review is divided into two sequential stages, whereby (1) overall existing approaches to pedestrian thermal comfort thresholds are reviewed within both quantitative and qualitative spectrums; and (2) the different techniques and measures are reviewed and framed into four Measure Review Frameworks (MRFs), in which each type of PSD measure is presented alongside its respective local scale urban specificities/conditions and their resulting thermal attenuation outcomes. The result of this review article is the assessment of how current practices of PSD within three specific subcategories of the KG 'Temperate' group have addressed microclimatic aggravations such as elevated urban temperatures and Urban Heat Island (UHI) effects. Based upon a bottom-up approach, the interdisciplinary practice of PSD is hence approached as a means to address existing and future thermal risk factors within the urban public realm in an era of potential climate change.

  12. SparseLeap: Efficient Empty Space Skipping for Large-Scale Volume Rendering

    KAUST Repository

    Hadwiger, Markus

    2017-08-28

    Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activate a different set of objects, which makes it unfeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.
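
    A CPU-level sketch of the core idea at the level of one ray: given the per-pixel list of non-empty segments (produced in SparseLeap by rasterising object bounding boxes), sampling leaps directly between segments instead of traversing a hierarchy. The volume lookup is mocked, and this is a simplification of the GPU method, not its implementation.

```python
def cast_ray(segments, step, sample):
    """segments: sorted [(t_enter, t_exit), ...] of non-empty ray intervals."""
    accum = 0.0
    for t0, t1 in segments:         # leap directly to the next non-empty segment
        t = t0
        while t < t1:               # sample only inside non-empty space
            accum += sample(t) * step
            t += step
    return accum

# Example with a mock constant density field along the ray:
print(cast_ray([(0.2, 0.3), (0.7, 0.9)], 0.01, lambda t: 1.0))  # ~0.3
```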

  13. SparseLeap: Efficient Empty Space Skipping for Large-Scale Volume Rendering

    KAUST Repository

    Hadwiger, Markus; Al-Awami, Ali K.; Beyer, Johanna; Agus, Marco; Pfister, Hanspeter

    2017-01-01

    Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activate a different set of objects, which makes it unfeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.

  14. Scale-space for empty catheter segmentation in PCI fluoroscopic images.

    Science.gov (United States)

    Bacchuwar, Ketan; Cousty, Jean; Vaillant, Régis; Najman, Laurent

    2017-07-01

    In this article, we present a method for empty guiding catheter segmentation in fluoroscopic X-ray images. The guiding catheter being a commonly visible landmark, its segmentation is an important and difficult building block for Percutaneous Coronary Intervention (PCI) procedure modeling. In a number of clinical situations, the catheter is empty and appears as a low-contrast structure with two parallel and partially disconnected edges. To segment it, we work on the level-set scale-space of the image, the min tree, to extract curve blobs. We then propose a novel structural scale-space, a hierarchy built on these curve blobs. The deep connected component, i.e. the cluster of curve blobs on this hierarchy, that maximizes the likelihood of being an empty catheter is retained as the final segmentation. We evaluate the performance of the algorithm on a database of 1250 fluoroscopic images from 6 patients. As a result, we obtain very good qualitative and quantitative segmentation performance, with mean precision and recall of 80.48% and 63.04%, respectively. We develop a novel structural scale-space to segment a structured object, the empty catheter, in challenging situations where the information content is very sparse in the images. Fully automatic empty catheter segmentation in X-ray fluoroscopic images is an important preliminary step in PCI procedure modeling, as it aids in tagging the arrival and removal locations of other interventional tools.

  15. Embodied Space: a Sensorial Approach to Spatial Experience

    Science.gov (United States)

    Durão, Maria João

    2009-03-01

    A reflection is presented on the significance of the role of the body in the interpretation and future creation of spatial living structures. The paper draws on the body as cartography of sensorial meaning that includes vision, touch, smell, hearing, orientation and movement to discuss possible relationships with psychological and sociological parameters of 'sensorial space'. The complex dynamics of body-space is further explored from the standpoint of perceptual variables such as color, light, materialities, texture and their connections with design, technology, culture and symbology. Finally, the paper discusses the integration of knowledge and experimentation in the design of future habitats where body-sensitive frameworks encompass flexibility, communication, interaction and cognitive-driven solutions.

  16. A Continuum Mechanical Approach to Geodesics in Shape Space

    Science.gov (United States)

    2010-01-01


  17. Analytical Approach to Space- and Time-Fractional Burgers Equations

    International Nuclear Information System (INIS)

    Yıldırım, Ahmet; Mohyud-Din, Syed Tauseef

    2010-01-01

    A scheme is developed to study the numerical solution of the space- and time-fractional Burgers equations under initial conditions by the homotopy analysis method. The fractional derivatives are considered in the Caputo sense. The solutions are given in the form of series with easily computable terms. Numerical solutions are calculated for the fractional Burgers equation to show the nature of the solution as the fractional derivative parameter is changed.

  18. Stage I surface crack formation in thermal fatigue: A predictive multi-scale approach

    International Nuclear Information System (INIS)

    Osterstock, S.; Robertson, C.; Sauzay, M.; Aubin, V.; Degallaix, S.

    2010-01-01

    A multi-scale numerical model is developed to predict the formation of stage I cracks under thermal fatigue loading conditions. The proposed approach comprises two distinct calculation steps. Firstly, the number of cycles to micro-crack initiation is determined in individual grains. The adopted initiation model depends on local stress-strain conditions, relative to sub-grain plasticity, grain orientation and grain deformation incompatibilities. Secondly, the formation of surface cracks two to four grains long (stage I) is predicted by accounting for micro-crack coalescence in three dimensions. The method described in this paper is applied to a 500-grain aggregate loaded under representative thermal fatigue conditions. Preliminary results provide quantitative insight regarding the position, density, spacing and orientation of stage I surface cracks and the subsequent formation of crack networks. The proposed method is fully deterministic, provided all grain crystallographic orientations and micro-crack linking thresholds are specified. (authors)

  19. Modelling an industrial anaerobic granular reactor using a multi-scale approach

    DEFF Research Database (Denmark)

    Feldman, Hannah; Flores Alsina, Xavier; Ramin, Pedram

    2017-01-01

    The objective of this paper is to show the results of an industrial project dealing with modelling of anaerobic digesters. A multi-scale mathematical approach is developed to describe reactor hydrodynamics, granule growth/distribution and microbial competition/inhibition for substrate/space within...... the biofilm. The main biochemical and physico-chemical processes in the model are based on the Anaerobic Digestion Model No 1 (ADM1) extended with the fate of phosphorus (P), sulfur (S) and ethanol (Et-OH). Wastewater dynamic conditions are reproduced and data frequency increased using the Benchmark...... simulations show the effects on the overall process performance when operational (pH) and loading (S:COD) conditions are modified. Lastly, the effect of intra-granular precipitation on the overall organic/inorganic distribution is assessed at: 1) different times; and, 2) reactor heights. Finally...

  20. Quantum scaling in many-body systems an approach to quantum phase transitions

    CERN Document Server

    Continentino, Mucio

    2017-01-01

    Quantum phase transitions are strongly relevant in a number of fields, ranging from condensed matter to cold atom physics and quantum field theory. This book, now in its second edition, approaches the problem of quantum phase transitions from a new and unifying perspective. Topics addressed include the concepts of scale and time invariance and their significance for quantum criticality, as well as brand new chapters on superfluid and superconductor quantum critical points, and quantum first order transitions. The renormalisation group in real and momentum space is also established as the proper language to describe the behaviour of systems close to a quantum phase transition. These phenomena introduce a number of theoretical challenges which are of major importance for driving new experiments. Being strongly motivated and oriented towards understanding experimental results, this is an excellent text for graduates, as well as theorists, experimentalists and those with an interest in quantum criticality.

  1. Modeling and control of a large nuclear reactor. A three-time-scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Shimjith, S.R. [Indian Institute of Technology Bombay, Mumbai (India); Bhabha Atomic Research Centre, Mumbai (India); Tiwari, A.P. [Bhabha Atomic Research Centre, Mumbai (India); Bandyopadhyay, B. [Indian Institute of Technology Bombay, Mumbai (India). IDP in Systems and Control Engineering

    2013-07-01

    This monograph presents recent research on the modeling and control of a large nuclear reactor using a three-time-scale approach, written by leading experts in the field. Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of the several complex dynamic phenomena existing in a reactor. Quite often, the models developed are of prohibitively large order, non-linear, and of complex structure not readily amenable to control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting the direct application of standard control techniques. This monograph introduces a technique for the mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller-order model in standard state-space form, thus overcoming these difficulties. It further brings in innovative methods of controller design for systems exhibiting the multi-time-scale property, with emphasis on three-time-scale systems.
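
    The key property such methods exploit, well-separated groups of eigenvalues in a state-space model, can be illustrated with a toy linear system; the matrix below is invented for illustration and is not a reactor model.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.diag([-0.01, -0.02,     # slow modes
             -1.0, -1.5,       # intermediate modes
             -100.0, -120.0])  # fast modes
T = rng.normal(size=(6, 6))
A = T @ A @ np.linalg.inv(T)   # mix the states so the structure is hidden

rates = np.sort(np.abs(np.linalg.eigvals(A).real))
print("time constants (s):", np.round(1.0 / rates, 3))
# Large jumps in neighbouring |Re(lambda)| ratios mark where the model can be
# split into slow/intermediate/fast subsystems for separate controller design.
print("separation ratios:", np.round(rates[1:] / rates[:-1], 1))
```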

  2. Approaches to recreational landscape scaling of mountain resorts

    Science.gov (United States)

    Chalaya, Elena; Efimenko, Natalia; Povolotskaia, Nina; Slepih, Vladimir

    2013-04-01

    In mountain resorts (MR) the climate and the landscape are natural medical resources which are very sensitive to anthropogenic influences [EGU2011-6740-3; EGU2012-6103]. Positive experience with climatic and landscape treatment at the MR of the North Caucasus allowed us to establish a fundamental interrelation between the quality of recreational landscapes (RL), climatic conditions and the efficiency of medical rehabilitation of people at the MR on the basis of rational use of natural medical resources. The following bioclimatic distinctions and physiological responses were registered in recipients suffering from high disadaptation, according to the results of complex route medical and geophysical studies in urban and park landscapes. We observed heat discomfort in the open space of the urban territory under extremely hot anticyclonic weather: a thermal balance (TB) higher than +840 W/sq.m, an extreme risk of solar erythema burn (UVI higher than 11), a low content of natural anions (lower than 260 ions/cm3), a high coefficient of ion unipolarity (CIU) of 2.16, and a high temperature of the underlying surface (asphalt) of 46.4°C. At the same time, in the resort park under the vegetation association Bétula péndula (50 years) TB was significantly lower (+480 W/sq.m), there was no risk of erythema burn (UVI 4), the level of natural anions was optimal at 840 ions/cm3, the CIU value was 0.98, grass and soil temperature was +29°C, and there was a favourable background of evaporating metabolites. Under such favourable bioclimatic change the patients were registered to show a reduction of the vegetative index (from 640 to 380; N-150), an increase in the efficiency of neurohumoral regulation (from 0.12 to 0.34; N 0.50), a 16% decrease in the excitability of brain activity in the range of delta waves (0-0.4 Hz), and an increase in brain activity in the range of theta (4-8 Hz) and alpha (8-13 Hz) waves, beta 13

  3. Development of Indigenous Basic Interest Scales: Re-Structuring the Icelandic Interest Space

    Science.gov (United States)

    Einarsdottir, Sif; Eyjolfsdottir, Katrin Osk; Rounds, James

    2013-01-01

    The present investigation used an emic approach to develop a set of Icelandic indigenous basic interest scales. An indigenous item pool that is representative of the Icelandic labor market was administered to three samples (N = 1043, 1368, and 2218) of upper secondary and higher education students in two studies. A series of item level cluster and…

  4. Space Acquisitions: Challenges Facing DOD as it Changes Approaches to Space Acquisitions

    Science.gov (United States)

    2016-03-09

    alternatives to support decisions about the future of space programs: preliminary work suggests there are gaps in cost and other data needed to weigh the pros and cons of changes to space systems. Second, most changes...

  5. EXPERIMENTAL STUDIES ON DIFFICULTY OF EVACUATION FROM UNDERGROUND SPACES UNDER INUNDATED SITUATIONS USING REAL SCALE MODELS

    Science.gov (United States)

    Baba, Yasuyuki; Ishigaki, Taisuke; Toda, Keiichi; Nakagawa, Hajime

    Many urbanized cities in Japan are located in alluvial plains, and the vulnerability of urbanized areas to flood disaster is highlighted by floods due to heavy rainfall or typhoons. Underground spaces located in urbanized areas are flood-prone, and the intrusion of flood water into underground space has inflicted severe damage on urban functions and infrastructure. In a similar way, low-lying areas like "bowl-shaped" depressions and underpasses under highway and railroad bridges are also prone to floods. Underpasses are common sites of accidents involving submerged vehicles, and severe damage, including human casualties, occasionally occurs under flooding conditions. To reduce the damage due to inundation of underground space, needless to say, early evacuation is one of the most important countermeasures. This paper presents experimental results of evacuation tests from underground spaces under inundated conditions. The difficulty of evacuation from underground space has been investigated using real scale models (door, staircase and vehicle), and the limit for safe evacuation is discussed. From the results, it is found that a water depth of 0.3-0.4 m would be a critical situation for evacuation from underground space through staircases and doors, and that a depth of 0.7-0.8 m on the ground would likewise be critical for safe evacuation through the doors of a vehicle. These criteria may vary under different inundation situations, and they are also influenced by individual variation, such as differences in physical strength. This means that these criteria require a cautious stance in use, although they provide an index of the limits for safe evacuation from underground space.

  6. Learning the Task Management Space of an Aircraft Approach Model

    Science.gov (United States)

    Krall, Joseph; Menzies, Tim; Davies, Misty

    2014-01-01

    Validating models of airspace operations is a particular challenge. These models are often aimed at finding and exploring safety violations, and must be accurate representations of real-world behavior. However, the rules governing that behavior are quite complex: nonlinear physics, operational modes, human behavior, and stochastic environmental concerns all determine the responses of the system. In this paper, we present a study of aircraft runway approaches as modeled in Georgia Tech's Work Models that Compute (WMC) simulation. We use a new learner, Genetic-Active Learning for Search-Based Software Engineering (GALE), to discover the Pareto frontiers defined by cognitive structures. These cognitive structures organize the prioritization and assignment of each pilot's tasks during approaches. We discuss the benefits of our approach, as well as the future work necessary to enable uncertainty quantification.
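
    GALE itself is a specialized search-based learner; the minimal sketch below only illustrates the end product it searches for, a Pareto frontier, using a generic non-dominated filter over two hypothetical objectives.

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated rows (all objectives minimised)."""
    pts = np.asarray(points, dtype=float)
    front = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            front.append(i)
    return front

# Two invented objectives per candidate cognitive structure, e.g. pilot
# workload vs. deviation from the nominal approach profile.
candidates = np.random.default_rng(2).uniform(size=(200, 2))
front = pareto_front(candidates)
print(f"{len(front)} of {len(candidates)} candidates are Pareto-optimal")
```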

  7. Coset Space Dimensional Reduction approach to the Standard Model

    International Nuclear Information System (INIS)

    Farakos, K.; Kapetanakis, D.; Koutsoumbas, G.; Zoupanos, G.

    1988-01-01

    We present a unified theory in ten dimensions based on the gauge group E(8), which is dimensionally reduced to the Standard Model SU(3)_c × SU(2)_L × U(1), which breaks further spontaneously to SU(3)_c × U(1)_em. The model gives similar predictions for sin²θ_W and proton decay as the minimal SU(5) G.U.T., while a natural choice of the coset space radii predicts light Higgs masses a la Coleman-Weinberg

  8. On approach to double asymptotic scaling at low x

    International Nuclear Information System (INIS)

    Choudhury, D.K.

    1994-10-01

    We obtain the finite x corrections to the gluon structure function which exhibits double asymptotic scaling at low x. The technique used is the GLAP equation for gluons, approximated at low x by a Taylor expansion. (author). 27 refs

  9. Coarse-Grain Bandwidth Estimation Techniques for Large-Scale Space Network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, Esther

    2013-01-01

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.
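
    A minimal sketch of the top-down sizing idea, assuming each data type contributes its volume spread over its allowed delivery window to every ground link it traverses; the traffic figures and link names are invented.

```python
# Each tuple: (data type, Gbit per delivery window, window/deadline in s, links).
traffic = [
    ("science bulk",  800.0,  86400.0, ["station-A:NOC"]),            # daily dump
    ("housekeeping",    0.02,   300.0, ["station-A:NOC"]),            # 5-min cadence
    ("emergency cmd",   0.001,    5.0, ["station-A:NOC", "NOC:MOC"]), # near-real-time
]

required = {}
for _name, gbit, deadline_s, links in traffic:
    rate_mbps = gbit * 1e3 / deadline_s   # Gbit -> Mbit, spread over the deadline
    for link in links:
        required[link] = required.get(link, 0.0) + rate_mbps

for link, mbps in sorted(required.items()):
    print(f"{link}: {mbps:.2f} Mbit/s before margin")
```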

  10. Application of a computationally efficient geostatistical approach to characterizing variably spaced water-table data

    International Nuclear Information System (INIS)

    Quinn, J.J.

    1996-01-01

    Geostatistical analysis of hydraulic head data is useful in producing unbiased contour plots of head estimates and relative errors. However, at most sites being characterized, monitoring wells are generally present at different densities, with clusters of wells in some areas and few wells elsewhere. The problem that arises when kriging data at different densities is in achieving adequate resolution of the grid while maintaining computational efficiency and working within software limitations. For the site considered, 113 data points were available over a 14-mi² study area, including 57 monitoring wells within an area of concern of 1.5 mi². Variogram analyses of the data indicate a linear model with a negligible nugget effect. The geostatistical package used in the study allows a maximum grid of 100 by 100 cells. Two-dimensional kriging was performed for the entire study area with a 500-ft grid spacing, while the smaller zone was modeled separately with a 100-ft spacing. In this manner, grid cells for the dense area and the sparse area remained small relative to the well separation distances, and the maximum dimensions of the program were not exceeded. The spatial head results for the detailed zone were then nested into the regional output by use of a graphical, object-oriented database that performed the contouring of the geostatistical output. This study benefited from the two-scale approach and from very fine geostatistical grid spacings relative to typical data separation distances. The combining of the sparse, regional results with those from the finer-resolution area of concern yielded contours that honored the actual data at every measurement location. The method applied in this study can also be used to generate reproducible, unbiased representations of other types of spatial data.
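
    For reference, the kriging step itself is compact. Below is a minimal ordinary-kriging sketch with a linear, nugget-free variogram (the model found for this site); the well data are invented, and the two grids would simply call this estimator once per cell.

```python
import numpy as np

def krige(xy, z, x0, slope=1.0):
    """Ordinary-kriging estimate of head at x0 from wells xy with heads z."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = slope * d            # gamma(h) = slope * h, no nugget
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    A[n, n] = 0.0
    b = np.empty(n + 1)
    b[:n] = slope * np.linalg.norm(xy - x0, axis=1)
    b[n] = 1.0                       # unbiasedness constraint
    w = np.linalg.solve(A, b)
    return w[:n] @ z                 # exact at data points: honours every well

wells = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [400.0, 450.0]])  # ft
heads = np.array([100.0, 98.5, 101.2, 99.0])                                # ft
print(round(krige(wells, heads, np.array([250.0, 250.0])), 2))
```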

  11. Scale-dependent Patterns in One-dimensional Fracture Spacing and Aperture Data

    Science.gov (United States)

    Roy, A.; Perfect, E.

    2013-12-01

    One-dimensional scanline data on fracture spacing and size attributes such as aperture or length are mostly considered in separate studies that compute the cumulative frequency of these attributes without regard to their actual spatial sequence. In a previous study, we showed that spacing data can be analyzed using lacunarity to identify whether fractures occur in clusters. However, to determine whether such clusters also contain the largest fractures in terms of a size attribute such as aperture, it is imperative that data about the size attribute be integrated with information about fracture spacing. While some researchers have, for example, considered aperture in conjunction with spacing, their analyses were either applicable only to a specific type of data (e.g. multifractal) or failed to characterize the data at different scales. Lacunarity is a technique for analyzing multi-scale non-binary data and is ideally suited for characterizing scanline data with spacing and aperture values. We present a technique that can statistically delineate the relationship between size attributes and spatial clustering. We begin by building a model scanline that has complete partitioning of fractures with small and large apertures between the intercluster regions and clusters. We demonstrate that the ratio of lacunarity for this model to that of its counterpart for a completely randomized sequence of apertures can be used to determine whether large-aperture fractures preferentially occur next to each other. The technique is then applied to two natural fracture scanline datasets, one with most of the large apertures occurring in fracture clusters, and the other with more randomly spaced fractures without any specific ordering of aperture values. The lacunarity ratio clearly discriminates between these two datasets and, in the case of the first example, is also able to identify the range of scales over which the widest fractures are clustered. The technique thus developed for
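
    A gliding-box lacunarity for non-binary 1-D data, and the model-to-randomized ratio described above, can be sketched in a few lines; the synthetic scanline below stands in for real aperture data.

```python
import numpy as np

def lacunarity(seq, box):
    """Gliding-box lacunarity of a non-binary 1-D sequence at window size `box`."""
    masses = np.convolve(seq, np.ones(box), mode="valid")  # gliding-box sums
    return np.mean(masses**2) / np.mean(masses)**2

rng = np.random.default_rng(3)
apertures = np.zeros(400)
apertures[50:60] = rng.uniform(5.0, 10.0, 10)                 # clustered wide fractures
apertures[rng.choice(400, 30)] = rng.uniform(0.1, 1.0, 30)    # scattered background

for box in (5, 20, 80):
    shuffled = rng.permutation(apertures)                     # randomized counterpart
    ratio = lacunarity(apertures, box) / lacunarity(shuffled, box)
    print(f"box={box:3d}  lacunarity ratio={ratio:.2f}")      # >1 suggests clustering
```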

  12. Validity of the Neuromuscular Recovery Scale: a measurement model approach.

    Science.gov (United States)

    Velozo, Craig; Moorhouse, Michael; Ardolino, Elizabeth; Lorenz, Doug; Suter, Sarah; Basso, D Michele; Behrman, Andrea L

    2015-08-01

    Objective: To determine how well the Neuromuscular Recovery Scale (NRS) items fit the Rasch, 1-parameter, partial-credit measurement model. Design: Confirmatory factor analysis (CFA) and principal components analysis (PCA) of residuals were used to determine dimensionality. The Rasch, 1-parameter, partial-credit rating scale model was used to determine rating scale structure, person/item fit, point-measure item correlations, item discrimination, and measurement precision. Setting: Seven NeuroRecovery Network clinical sites. Participants: Outpatients (N=188) with spinal cord injury. Interventions: Not applicable. Main outcome measure: NRS. Results: While the NRS met 1 of 3 CFA criteria, the PCA revealed that the Rasch measurement dimension explained 76.9% of the variance. Ten of 11 items and 91% of the patients fit the Rasch model, with 9 of 11 items showing high discrimination. Sixty-nine percent of the ratings met criteria. The items showed a logical item-difficulty order, with Stand retraining as the easiest item and Walking as the most challenging item. The NRS showed no ceiling or floor effects and separated the sample into almost 5 statistically distinct strata; individuals with an American Spinal Injury Association Impairment Scale (AIS) D classification showed the most ability, and those with an AIS A classification showed the least ability. Items not meeting the rating scale criteria appear to be related to the low frequency counts. Conclusions: The NRS met many of the Rasch model criteria for construct validity. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  13. Biased Tracers in Redshift Space in the EFT of Large-Scale Structure

    Energy Technology Data Exchange (ETDEWEB)

    Perko, Ashley [Stanford U., Phys. Dept.; Senatore, Leonardo [KIPAC, Menlo Park; Jennings, Elise [Chicago U., KICP; Wechsler, Risa H. [Stanford U., Phys. Dept.

    2016-10-28

    The Effective Field Theory of Large-Scale Structure (EFTofLSS) provides a novel formalism that is able to accurately predict the clustering of large-scale structure (LSS) in the mildly non-linear regime. Here we provide the first computation of the power spectrum of biased tracers in redshift space at one loop order, and we make the associated code publicly available. We compare the multipoles $\ell=0,2$ of the redshift-space halo power spectrum, together with the real-space matter and halo power spectra, with data from numerical simulations at $z=0.67$. For the samples we compare to, which have number densities of $\bar n=3.8 \cdot 10^{-2}\,(h\,{\rm Mpc}^{-1})^3$ and $\bar n=3.9 \cdot 10^{-4}\,(h\,{\rm Mpc}^{-1})^3$, we find that the calculation at one-loop order matches numerical measurements to within a few percent up to $k\simeq 0.43\,h\,{\rm Mpc}^{-1}$, a significant improvement with respect to former techniques. By performing the so-called IR-resummation, we find that the Baryon Acoustic Oscillation peak is accurately reproduced. Based on the results presented here, long-wavelength statistics that are routinely observed in LSS surveys can finally be computed in the EFTofLSS. This formalism is thus ready to be compared directly to observational data.
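
    The multipoles compared in the paper are plain Legendre projections of P(k, mu). The sketch below extracts l = 0 and 2 by Gauss-Legendre quadrature from a Kaiser-like toy spectrum, which stands in for (and is much simpler than) the one-loop EFTofLSS prediction.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def multipole(P_kmu, k, ell, n_mu=32):
    """P_ell(k) = (2*ell+1)/2 * integral over mu in [-1,1] of P(k,mu) L_ell(mu)."""
    mu, w = leggauss(n_mu)
    L = Legendre.basis(ell)(mu)
    return 0.5 * (2 * ell + 1) * np.sum(w * L * P_kmu(k[:, None], mu[None, :]), axis=1)

b1, f = 2.0, 0.8                                     # placeholder bias and growth rate
P_lin = lambda k: 1e4 * k / (1.0 + (k / 0.02) ** 2)  # toy linear power spectrum
P_kmu = lambda k, mu: (b1 + f * mu**2) ** 2 * P_lin(k)

k = np.array([0.05, 0.1, 0.2, 0.4])                  # h/Mpc
print("P0:", np.round(multipole(P_kmu, k, 0), 1))    # monopole
print("P2:", np.round(multipole(P_kmu, k, 2), 1))    # quadrupole
```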

  14. Tracking and visualization of space-time activities for a micro-scale flu transmission study.

    Science.gov (United States)

    Qi, Feng; Du, Fei

    2013-02-07

    Infectious diseases pose increasing threats to public health with increasing population density and ever more sophisticated social networks. While efforts continue in studying the large scale dissemination of contagious diseases, individual-based activity and behaviour studies benefit not only disease transmission modelling but also control, containment, and prevention decision making at the local scale. The potential for using tracking technologies to capture detailed space-time trajectories and model individual behaviour is increasing rapidly, as technological advances enable the manufacture of small, lightweight, highly sensitive, and affordable receivers, and the routine use of location-aware devices has become widespread (e.g., smart cellular phones). The use of low-cost tracking devices in medical research has also been proven effective by a growing number of studies. This study describes the use of tracking devices to collect space-time trajectory data and the spatiotemporal processing of such data to facilitate a micro-scale flu transmission study. We also report preliminary findings on activity patterns related to chances of influenza infection in a pilot study. Specifically, this study employed A-GPS tracking devices to collect data on a university campus. Spatiotemporal processing was conducted for data cleaning and segmentation. The processed data were validated with traditional activity diaries. The A-GPS data set was then used for visual explorations, including density surface visualization and connection analysis, to examine space-time activity patterns in relation to chances of influenza infection. When compared to the diary data, the segmented tracking data proved to be an effective alternative, showing greater accuracy in time as well as in the details of the routes taken by participants. A comparison of space-time activity patterns between participants who caught seasonal influenza and those who did not revealed interesting patterns. This study

  15. Approach to design space from retrospective quality data.

    Science.gov (United States)

    Puñal Peces, Daniel; García-Montoya, Encarna; Manich, Albert; Suñé-Negre, Josep Maria; Pérez-Lozano, Pilar; Miñarro, Montse; Ticó, Josep Ramon

    2016-01-01

    Nowadays, the entire manufacturing process is based on the current GMPs, which emphasize the reproducibility of the process, and companies have a large amount of recorded data about their processes. The objective was the establishment of the design space (DS) from retrospective data for a wet compression process. A design of experiments (DoE) with historical data from 4 years of industrial production was carried out, using as experimental factors the results of the previous risk analysis and eight key parameters (quality specifications) that encompassed process and quality control data. The software Statgraphics 5.0 was applied, and the data were processed to obtain eight DS as well as their safe and working ranges. Experience shows that it is possible to determine the DS retrospectively, the greatest difficulty being the handling and processing of large amounts of data; however, the practicality of this study is very interesting, as it yields the DS with minimal investment in experiments, since actual production batch data are processed statistically.

  16. Space and Time as Relations: The Theoretical Approach of Leibniz

    Directory of Open Access Journals (Sweden)

    Basil Evangelidis

    2018-04-01

    Full Text Available The epistemological rupture of Copernicus, the laws of planetary motions of Kepler, the comprehensive physical observations of Galileo and Huygens, the conception of relativity, and the physical theory of Newton were components of an extremely fertile and influential cognitive environment that prompted the restless Leibniz to shape an innovative theory of space and time. This theory expressed some of the concerns and intuitions of the scientific community of the seventeenth century, in particular the scientific group of the Academy of Sciences of Paris, but remained relatively unknown until the twentieth century. After Einstein, however, the relational theory of Leibniz gained wider respect and fame. The aim of this article is to explain how Leibniz foresaw relativity, through his critique of contemporary mechanistic philosophy.

  17. Advanced free space optics (FSO) a systems approach

    CERN Document Server

    Majumdar, Arun K

    2015-01-01

    This book provides a comprehensive, unified tutorial covering the most recent advances in the technology of free-space optics (FSO). It is an all-inclusive source of information on the fundamentals of FSO as well as up-to-date information on the state-of-the-art in technologies available today. This text is intended for graduate students, and will also be useful for research scientists and engineers with an interest in the field. FSO communication is a practical solution for creating a three dimensional global broadband communications grid, offering bandwidths far beyond what is possible in the Radio Frequency (RF) range. However, the attributes of atmospheric turbulence and scattering impose perennial limitations on availability and reliability of FSO links. From a systems point-of-view, this groundbreaking book provides a thorough understanding of channel behavior, which can be used to design and evaluate optimum transmission techniques that operate under realistic atmospheric conditions. Topics addressed...

  18. Finite frequency shear wave splitting tomography: a model space search approach

    Science.gov (United States)

    Mondal, P.; Long, M. D.

    2017-12-01

    Observations of seismic anisotropy provide key constraints on past and present mantle deformation. A common method for characterizing upper mantle anisotropy is to measure shear wave splitting parameters (delay time and fast direction). However, the interpretation is not straightforward, because splitting measurements represent an integration of structure along the ray path. A tomographic approach that allows for localization of anisotropy is desirable; however, tomographic inversion for anisotropic structure is a daunting task, since 21 parameters are needed to describe general anisotropy. Such a large parameter space does not allow a straightforward application of tomographic inversion. Building on previous work on finite frequency shear wave splitting tomography, this study aims to develop a framework for SKS splitting tomography with a new parameterization of anisotropy and a model space search approach. We reparameterize the full elastic tensor, reducing the number of parameters to three (a measure of strength based on symmetry considerations for olivine, plus the dip and azimuth of the fast symmetry axis). We compute Born-approximation finite frequency sensitivity kernels relating model perturbations to splitting intensity observations. The strong dependence of the sensitivity kernels on the starting anisotropic model, and thus the strong non-linearity of the inverse problem, makes a linearized inversion infeasible. Therefore, we implement a Markov Chain Monte Carlo technique in the inversion procedure. We have performed tests with synthetic data sets to evaluate computational costs and infer the resolving power of our algorithm for synthetic models with multiple anisotropic layers. Our technique can resolve anisotropic parameters on length scales of ~50 km for realistic station and event configurations for dense broadband experiments. We are proceeding towards applications to real data sets, with an initial focus on the High Lava Plains of Oregon.
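
    The model space search can be illustrated with a bare-bones Metropolis-Hastings walker over the three parameters (strength, dip, azimuth); the forward model below is a toy stand-in, not the Born-kernel prediction of splitting intensity, and strength and dip partly trade off in it.

```python
import numpy as np

rng = np.random.default_rng(4)
AZI = np.linspace(0.0, np.pi, 20)             # back-azimuths of the observations

def forward(m):
    """Toy splitting-intensity predictor for (strength, dip, azimuth)."""
    strength, dip, azi = m
    return strength * np.cos(dip) * np.sin(2.0 * (AZI - azi))

data = forward(np.array([1.2, 0.3, 0.8])) + rng.normal(0.0, 0.05, AZI.size)

def log_like(m, sigma=0.05):
    return -0.5 * np.sum(((data - forward(m)) / sigma) ** 2)

m, chain = np.array([0.5, 0.0, 0.0]), []
for _ in range(20000):                         # Metropolis-Hastings random walk
    prop = m + rng.normal(0.0, 0.02, 3)
    if np.log(rng.uniform()) < log_like(prop) - log_like(m):
        m = prop
    chain.append(m.copy())
print("posterior mean:", np.round(np.mean(chain[5000:], axis=0), 2))
```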

  19. Assessing a Top-Down Modeling Approach for Seasonal Scale Snow Sensitivity

    Science.gov (United States)

    Luce, C. H.; Lute, A.

    2017-12-01

    Mechanistic snow models are commonly applied to assess changes to snowpacks in a warming climate. Such assessments involve a number of assumptions about details of weather at daily to sub-seasonal time scales. Models of season-scale behavior can provide contrast for evaluating behavior at time scales more in concordance with climate warming projections. Such top-down models, however, involve a degree of empiricism, with attendant caveats about the potential of a changing climate to affect calibrated relationships. We estimated the sensitivity of snowpacks from 497 Snowpack Telemetry (SNOTEL) stations in the western U.S. based on differences in climate between stations (spatial analog). We examined the sensitivity of April 1 snow water equivalent (SWE) and mean snow residence time (SRT) to variations in Nov-Mar precipitation and average Nov-Mar temperature using multivariate local-fit regressions. We tested the modeling approach using a leave-one-out cross-validation as well as targeted two-fold non-random cross-validations contrasting, for example, warm vs. cold years, dry vs. wet years, and north vs. south stations. Nash-Sutcliffe Efficiency (NSE) values for the validations were strong for April 1 SWE, ranging from 0.71 to 0.90, and still reasonable, but weaker, for SRT, in the range of 0.64 to 0.81. From these ranges, we exclude validations where the training data do not represent the range of target data. A likely reason for differences in validation between the two metrics is that the SWE model reflects the influence of conservation of mass while using temperature as an indicator of the season-scale energy balance; in contrast, SRT depends more strongly on the energy balance aspects of the problem. Model forms with lower numbers of parameters generally validated better than more complex model forms, with the caveat that pseudoreplication could encourage selection of more complex models when validation contrasts were weak. Overall, the split sample validations
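
    The validation score used above, Nash-Sutcliffe Efficiency, and a leave-one-out loop are simple to state precisely; the snippet below uses synthetic SNOTEL-like data and an ordinary linear regression as a placeholder for the paper's multivariate local-fit regressions.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 minus error variance over observed variance."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(5)
n = 100
precip = rng.uniform(200.0, 1500.0, n)   # Nov-Mar precipitation, mm (synthetic)
temp = rng.uniform(-8.0, 4.0, n)         # Nov-Mar mean temperature, C (synthetic)
swe = np.where(temp < 0, 0.6, 0.2) * precip + rng.normal(0.0, 40.0, n)

X = np.column_stack([np.ones(n), precip, temp])
pred = np.empty(n)
for i in range(n):                       # leave-one-out cross-validation
    keep = np.arange(n) != i
    beta, *_ = np.linalg.lstsq(X[keep], swe[keep], rcond=None)
    pred[i] = X[i] @ beta
print(f"LOO NSE = {nse(swe, pred):.2f}")
```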

  20. Robust mode space approach for atomistic modeling of realistically large nanowire transistors

    Science.gov (United States)

    Huang, Jun Z.; Ilatikhameneh, Hesameddin; Povolotskyi, Michael; Klimeck, Gerhard

    2018-01-01

    Nanoelectronic transistors have reached 3D length scales in which the number of atoms is countable. Truly atomistic device representations are needed to capture the essential functionalities of the devices. Atomistic quantum transport simulations of realistically extended devices are, however, computationally very demanding. The widely used mode space (MS) approach can significantly reduce the numerical cost, but a good MS basis is usually very hard to obtain for atomistic full-band models. In this work, a robust and parallel algorithm is developed to optimize the MS basis for atomistic nanowires. This enables engineering-level, reliable tight binding non-equilibrium Green's function simulation of nanowire metal-oxide-semiconductor field-effect transistor (MOSFET) with a realistic cross section of 10 nm × 10 nm using a small computer cluster. This approach is applied to compare the performance of InGaAs and Si nanowire n-type MOSFETs (nMOSFETs) with various channel lengths and cross sections. Simulation results with full-band accuracy indicate that InGaAs nanowire nMOSFETs have no drive current advantage over their Si counterparts for cross sections up to about 10 nm × 10 nm.
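
    The core operation of any mode-space scheme is a basis projection, H_ms = U^T H U. The sketch below applies it to a toy 1-D tight-binding chain; real atomistic schemes must construct and optimize U per cross-sectional slice, which is the hard part this paper addresses.

```python
import numpy as np

# Toy 1-D tight-binding chain (400 sites); not an atomistic nanowire model.
n, t = 400, 1.0
H = 2 * t * np.eye(n) - t * np.eye(n, k=1) - t * np.eye(n, k=-1)

vals, vecs = np.linalg.eigh(H)
U = vecs[:, :12]            # mode-space basis: the 12 lowest-energy modes
H_ms = U.T @ H @ U          # 12x12 reduced Hamiltonian instead of 400x400

# The reduced model reproduces the retained spectrum exactly here; the
# difficulty in atomistic full-band models is choosing U so this keeps
# holding across the whole energy window of interest.
print(np.allclose(np.linalg.eigvalsh(H_ms), vals[:12]))
```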

  1. Research on Optimal Observation Scale for Damaged Buildings after Earthquake Based on Optimal Feature Space

    Science.gov (United States)

    Chen, J.; Chen, W.; Dou, A.; Li, W.; Sun, Y.

    2018-04-01

    A new information extraction method for damaged buildings, rooted in an optimal feature space, is put forward on the basis of the traditional object-oriented method. In this new method, the ESP (estimate of scale parameter) tool is used to optimize the segmentation of the image. Then the distance matrix and minimum separation distance of all kinds of surface features are calculated through sample selection to find the optimal feature space, which is finally applied to extract damaged buildings after an earthquake. The overall extraction accuracy reaches 83.1%, with a kappa coefficient of 0.813. The new information extraction method greatly improves extraction accuracy and efficiency compared with the traditional object-oriented method, and holds good promise for wider use in the information extraction of damaged buildings. In addition, the new method can be applied to images of damaged buildings at different resolutions, and thus can be used to seek the optimal observation scale of damaged buildings through accuracy evaluation. The results suggest that the optimal observation scale of damaged buildings is between 1 m and 1.2 m, which provides a reference for future information extraction of damaged buildings.
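
    One way to read the "minimum separation distance" criterion: for each candidate feature subset, compute the distances between class-mean feature vectors and keep the subset whose closest pair of classes is farthest apart. The sketch below uses synthetic stand-ins for segmented image objects, and the class names are invented.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
n_feat = 6
# 40 synthetic objects per class in a 6-feature space; class names invented.
classes = {c: rng.normal(rng.uniform(-2.0, 2.0, n_feat), 1.0, size=(40, n_feat))
           for c in ["collapsed", "damaged", "intact", "shadow"]}

def min_separation(feat_idx):
    cols = list(feat_idx)
    means = np.array([v[:, cols].mean(axis=0) for v in classes.values()])
    d = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=2)
    return d[np.triu_indices(len(means), k=1)].min()

# Exhaustive search over 3-feature subsets for the most separable feature space.
best = max(combinations(range(n_feat), 3), key=min_separation)
print("best 3-feature subset:", best,
      "min class separation:", round(min_separation(best), 2))
```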

  2. Accessibility of green space in urban areas: an examination of various approaches to measure it

    OpenAIRE

    Zhang, Xin

    2007-01-01

    In the present research, we attempt to improve the methods used for measuring the accessibility of green spaces by combining two components of accessibility: distance, and demand relative to supply. Three modified approaches (the Joseph and Bantock gravity model measure, the two-step floating catchment area measure, and a measure based on kernel densities) will be applied for measuring accessibility to green spaces. We select parks and public open spaces (metropolitan open land) of south London as a cas...
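
    Of the three measures, the two-step floating catchment area (2SFCA) is the most algorithmic, so a sketch may help: step 1 assigns each green space a supply-to-demand ratio over its catchment; step 2 sums those ratios over the spaces reachable from each location. Coordinates, populations, and park areas below are invented.

```python
import numpy as np

def two_step_fca(pop_xy, pop, park_xy, park_supply, d0):
    """2SFCA accessibility of each population point to green-space supply."""
    d = np.linalg.norm(pop_xy[:, None, :] - park_xy[None, :, :], axis=2)
    within = d <= d0
    # Step 1: supply-to-demand ratio of each park over its catchment.
    demand = (pop[:, None] * within).sum(axis=0)
    R = np.where(demand > 0, park_supply / np.maximum(demand, 1e-9), 0.0)
    # Step 2: sum the ratios of all parks reachable from each location.
    return within @ R

tracts = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 3.0]])   # invented coordinates (km)
pop = np.array([5000.0, 8000.0, 2000.0])
parks = np.array([[0.5, 0.5], [2.5, 2.5]])
area_ha = np.array([12.0, 4.0])                            # park "supply" in hectares
print(two_step_fca(tracts, pop, parks, area_ha, d0=2.0))   # hectares per person
```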

  3. A state space approach for the eigenvalue problem of marine risers

    KAUST Repository

    Alfosail, Feras; Nayfeh, Ali H.; Younis, Mohammad I.

    2017-01-01

    A numerical state-space approach is proposed to examine the natural frequencies and critical buckling limits of marine risers. A large axial tension in the riser model causes numerical limitations. These limitations are overcome by using

  4. Distributed Model Predictive Control over Multiple Groups of Vehicles in Highway Intelligent Space for Large Scale System

    Directory of Open Access Journals (Sweden)

    Tang Xiaofeng

    2014-01-01

    Full Text Available The paper presents three time-based warning distances for the safe driving of multiple groups of vehicles in a highway tunnel environment, treated as a large scale system and based on a distributed model predictive control approach. Generally speaking, the system includes two parts. First, the multiple vehicles are divided into multiple groups, and the distributed model predictive control approach is used to calculate the information framework of each group. The optimization of each group considers both local optimization and the optimization characteristics of the neighbouring subgroups, which ensures global optimization performance. Second, the three time warning distances are studied based on the basic principles used for highway intelligent space (HIS), and the information framework concept is developed for the multiple groups of vehicles. A mathematical model is built to avoid chain collisions between vehicles. The results demonstrate that the proposed highway intelligent space method can effectively ensure the driving safety of multiple groups of vehicles in fog, rain, or snow.

  5. ANALYSIS OF RADAR AND OPTICAL SPACE BORNE DATA FOR LARGE SCALE TOPOGRAPHICAL MAPPING

    Directory of Open Access Journals (Sweden)

    W. Tampubolon

    2015-03-01

    Full Text Available Normally, in order to provide high resolution 3-dimensional (3D) geospatial data, large scale topographical mapping needs input from conventional airborne campaigns, which in Indonesia are bureaucratically complicated, especially during legal administration procedures, i.e. security clearance from the military/defense ministry. This often causes additional time delays besides technical constraints such as weather and limited aircraft availability for airborne campaigns. The geospatial data quality is, of course, an important issue for many applications. The increasing demand for geospatial data nowadays consequently requires high resolution datasets as well as a sufficient level of accuracy. Therefore an integration of different technologies is required in many cases to obtain the expected result, especially in the context of disaster preparedness and emergency response. Another important issue in this context is the fast delivery of relevant data, which is expressed by the term "Rapid Mapping". In this paper we present first results of on-going research to integrate different data sources such as space borne radar and optical platforms. Initially the orthorectification of Very High Resolution Satellite (VHRS) imagery, i.e. SPOT-6, has been done as a continuous process to the DEM generation using TerraSAR-X/TanDEM-X data. The role of Ground Control Points (GCPs) from GNSS surveys is mandatory in order to fulfil geometrical accuracy. In addition, this research aims at providing a suitable processing algorithm of space borne data for large scale topographical mapping, as described in section 3.2. Recently, radar space borne data have been used for medium scale topographical mapping, e.g. for the 1:50,000 map scale in Indonesian territories. The goal of this on-going research is to increase the accuracy of remote sensing data by different activities, e.g. the integration of different data sources (optical and radar) or the usage of the GCPs in both, the optical and the

  6. Mapping social values for urban green spaces using Public Participation GIS: the influence of spatial scale and implications for landscape planning.

    Science.gov (United States)

    Ives, Christopher

    2015-04-01

    Measuring social values for landscapes is an emerging field of research and is critical to the successful management of urban ecosystems. Green open space planning has traditionally relied on rigid standards and metrics without considering the physical requirements of green spaces that are valued for different reasons and by different people. Relating social landscape values to key environmental variables provides a much stronger evidence base for planning landscapes that are both socially desirable and environmentally sustainable. This study spatially quantified residents' values for green space in the Lower Hunter Valley of New South Wales, Australia by enabling participants to mark their values for specific open spaces on interactive paper maps. The survey instrument was designed to evaluate the effect of spatial scale by providing maps of residents' local area at both suburb and municipality scales. The importance of open space values differed depending on whether they were indicated via marker dots or reported on in a general aspatial sense. This suggests that certain open space functions were inadequately provided for in the local area (specifically, cultural significance and health/therapeutic value). Additionally, all value types recorded a greater abundance of marker dots at the finer (suburb) scale compared to the coarser (municipality) scale, but this pattern was more pronounced for some values than others (e.g. physical exercise value). Finally, significant relationships were observed between the abundance of value marker dots in parks and their environmental characteristics (e.g. percentage of vegetation). These results have interesting implications when considering the compatibility between different functions of green spaces and how planners can incorporate information about social values with more traditional approaches to green space planning.

  7. Scaling production and improving efficiency in DEA: an interactive approach

    Science.gov (United States)

    Rödder, Wilhelm; Kleine, Andreas; Dellnitz, Andreas

    2017-10-01

    DEA models help a DMU to detect its (in)efficiency and to improve its activities, if necessary. Efficiency, however, is only one economic aim for a decision-maker; up- or downsizing might be a second one. Improving efficiency is the main topic in DEA; the long-term strategy towards the right production size should attract our attention as well. The management of a DMU does not always focus primarily on technical efficiency; rather, it may be interested in gaining scale effects. In this paper, a formula for returns to scale (RTS) is developed, and this formula is applicable even to interior points of the technology. Technically and scale inefficient DMUs in particular need sophisticated instruments to improve their situation. Considering RTS as well as efficiency, we give advice in this paper for each DMU on finding an economically reliable path from its actual situation to better activities and finally, perhaps, to most productive scale size (mpss). For realizing this path, we propose an interactive algorithm, thus harmonizing the scientific findings and the interests of the management. Small numerical examples illustrate such paths for selected DMUs; an empirical application in theatre management completes the contribution.
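
    The DEA building block underneath such an RTS analysis is a small linear program per DMU. Below is a standard input-oriented CCR envelopment model solved with scipy, on an invented three-DMU example; it is generic DEA, not the paper's interactive algorithm.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0], [4.0, 2.0], [4.0, 6.0]]).T   # inputs, one column per DMU
Y = np.array([[1.0], [1.0], [1.0]]).T                   # outputs, one column per DMU

def ccr_efficiency(j0):
    """Input-oriented CCR: min theta s.t. X@lam <= theta*x0, Y@lam >= y0, lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                         # variables: [theta, lam]
    A_ub = np.block([[-X[:, [j0]], X],
                     [np.zeros((s, 1)), -Y]])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0.0, None)] * n)
    return res.fun

for j in range(X.shape[1]):
    print(f"DMU {j}: CCR efficiency = {ccr_efficiency(j):.3f}")
```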

  8. A Belief-Space Approach to Integrated Intelligence - Research Area 10.3: Intelligent Networks

    Science.gov (United States)

    2017-12-05

    A Belief-Space Approach to Integrated Intelligence - Research Area 10.3: Intelligent Networks. Institution: Massachusetts Institute of Technology (MIT). Students presented progress and received feedback from the research group, wrote papers on their research, and submitted them to leading conferences.

  9. Some applications of nanometer scale structures for current and future X-ray space research

    DEFF Research Database (Denmark)

    Christensen, Finn Erland; Abdali, S; Frederiksen, P K

    1994-01-01

    Nanometer scale structures such as multilayers, gratings and natural crystals are playing an increasing role in spectroscopic applications for X-ray astrophysics. A few examples are briefly described as an introduction to current and planned applications pursued at the Danish Space Research...... Institute in collaboration with the FOM Institute for Plasma Physics, Nieuwegein, the Max-Planck-Institut für Extraterrestrische Physik, Aussenstelle Berlin, the Space Research Institute, Russian Academy of Sciences, the Smithsonian Astrophysical Observatory, Ovonics Synthetic Materials Company and Lawrence...... Livermore National Laboratory. These examples include: 1. the application of multilayered Si crystals for simultaneous spectroscopy in two energy bands, one centred around the S K-emission near 2.45 keV and the other below the C K absorption edge at 0.284 keV; 2. the use of in-depth graded period multilayer

  10. Coordination between Subway and Urban Space: A Networked Approach

    Directory of Open Access Journals (Sweden)

    Lei Mao

    2014-05-01

    Full Text Available This paper selects Changsha as a case study and constructs models of the subway network and the urban spatial network using planning data. In the network models, the districts of Changsha are regarded as nodes and the connections between each pair of districts are regarded as edges. The method is based on quantitative analysis of the node weights and the edge weights, which are defined in complex network theory, and the structures of the subway and of urban space are visualized in the form of networks. Then, by analyzing the discrepancy coefficients of the corresponding nodes and edges, the paper carries out a comparison between the two networks to evaluate their coordination. The results indicate that only 21.4% of districts and 13.2% of district connections have a rational coordination. Finally, strategies are put forward for optimization, which suggest adjusting subway transit density, regulating land-use intensity and planning new mass transit for the uncoordinated parts.

  11. Polygonal approximation and scale-space analysis of closed digital curves

    CERN Document Server

    Ray, Kumar S

    2013-01-01

    This book covers the most important topics in the area of pattern recognition, object recognition, computer vision, robot vision, medical computing, computational geometry, and bioinformatics systems. Students and researchers will find a comprehensive treatment of polygonal approximation and its real life applications. The book not only explains the theoretical aspects but also presents applications with detailed design parameters. The systematic development of the concept of polygonal approximation of digital curves and its scale-space analysis are useful and attractive to scholars in many fi

  12. NASTRAN analysis of the 1/8-scale space shuttle dynamic model

    Science.gov (United States)

    Bernstein, M.; Mason, P. W.; Zalesak, J.; Gregory, D. J.; Levy, A.

    1973-01-01

    The space shuttle configuration has more complex structural dynamic characteristics than previous launch vehicles, primarily because of the high modal density at low frequencies and the high degree of coupling between the lateral and longitudinal motions. An accurate analytical representation of these characteristics is a primary means for treating structural dynamics problems during the design phase of the shuttle program. The 1/8-scale model program was developed to explore the adequacy of available analytical modeling technology and to provide the means for investigating problems which are more readily treated experimentally. The basic objectives of the 1/8-scale model program are: (1) to provide early verification of analytical modeling procedures on a shuttle-like structure, (2) to demonstrate important vehicle dynamic characteristics of a typical shuttle design, (3) to disclose any previously unanticipated structural dynamic characteristics, and (4) to provide for development and demonstration of cost effective prototype testing procedures.

  13. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.1)

    Energy Technology Data Exchange (ETDEWEB)

    Hukerikar, Saurabh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Engelmann, Christian [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-12-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. The resilience challenge for extreme-scale HPC systems therefore requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. Also, due to practical limits on power consumption in HPC systems, future systems are likely to embrace innovative architectures, increasing the levels of hardware and software complexity. As a result, the techniques that seek to improve resilience must navigate the complex trade-off space between resilience and the overheads to power consumption and performance. While the HPC community has developed various resilience solutions, including application-level techniques as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance and power efficiency across the system stack. Additionally, few of the current approaches are portable to the newer architectures and software environments that will be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience using the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. Each established solution is described in the form of a pattern that

  14. A Cost Effective System Design Approach for Critical Space Systems

    Science.gov (United States)

    Abbott, Larry Wayne; Cox, Gary; Nguyen, Hai

    2000-01-01

    NASA-JSC required an avionics platform capable of serving a wide range of applications in a cost-effective manner. In part, making the avionics platform cost effective means adhering to open standards and supporting the integration of COTS products with custom products. Inherently, operation in space requires low power, mass, and volume while retaining high performance, reconfigurability, scalability, and upgradability. The Universal Mini-Controller project is based on a modified PC/104-Plus architecture while maintaining full compatibility with standard COTS PC/104 products. The architecture consists of a library of building block modules, which can be mixed and matched to meet a specific application. A set of NASA-developed core building blocks, a processor card, an analog input/output card, and a Mil-Std-1553 card, have been constructed to meet critical functions and unique interfaces. The design of the processor card is based on the PowerPC architecture. This architecture provides an excellent balance between power consumption and performance, and has an upgrade path to the forthcoming radiation-hardened PowerPC processor. The processor card, which makes extensive use of surface mount technology, has a 166 MHz PowerPC 603e processor, 32 Mbytes of error-detected-and-corrected RAM, 8 Mbytes of Flash, and 1 Mbyte of EPROM on a single PC/104-Plus card. Similar densities have been achieved with the quad-channel Mil-Std-1553 card and the analog input/output cards. The power management built into the processor and its peripheral chip allows the power and performance of the system to be adjusted to meet the requirements of the application, adding another dimension to the flexibility of the Universal Mini-Controller. Unique mechanical packaging allows the Universal Mini-Controller to accommodate standard COTS and custom oversized PC/104-Plus cards. This mechanical packaging also provides thermal management via conductive cooling of COTS boards, which are typically

  15. Feasibility analysis of large length-scale thermocapillary flow experiment for the International Space Station

    Science.gov (United States)

    Alberts, Samantha J.

    The investigation of microgravity fluid dynamics emerged out of necessity with the advent of space exploration. In particular, capillary research took a leap forward in the 1960s with regard to liquid settling and interfacial dynamics. Due to inherent temperature variations in large spacecraft liquid systems, such as fuel tanks, forces develop on gas-liquid interfaces which induce thermocapillary flows. To date, thermocapillary flows have been studied in small, idealized research geometries, usually under terrestrial conditions. The 1 to 3 m lengths of current and future large tanks and hardware are designed based on hardware rather than research, which leaves spaceflight systems designers without the technological tools to effectively create safe and efficient designs. This thesis focused on the design and feasibility of a large length-scale thermocapillary flow experiment, which utilizes temperature variations to drive a flow. The design of a helical channel geometry ranging from 1 to 2.5 m in length permits a large length-scale thermocapillary flow experiment to fit in a seemingly small International Space Station (ISS) facility such as the Fluids Integrated Rack (FIR). An initial investigation determined that the proposed experiment produces measurable data while adhering to the FIR facility limitations. The computational portion of this thesis focused on the investigation of functional geometries of fuel tanks and depots using Surface Evolver. This work outlines the design of a large length-scale thermocapillary flow experiment for the ISS FIR. The results from this work improve the understanding of thermocapillary flows and thus the technological tools for predicting heat and mass transfer in large length-scale thermocapillary flows. Without the tools to understand the thermocapillary flows in these systems, engineers are forced to design larger, heavier vehicles to assure safety and mission success.

  16. Using Citizen Science Observations to Model Species Distributions Over Space, Through Time, and Across Scales

    Science.gov (United States)

    Kelling, S.

    2017-12-01

    The goal of biodiversity research is to identify, explain, and predict why a species' distribution and abundance vary through time, space, and with features of the environment. Measuring these patterns and predicting their responses to change are not exercises in curiosity. Today, they are essential tasks for understanding the profound effects that humans have on earth's natural systems, and for developing science-based environmental policies. Gaining insight into species' distribution patterns requires studying natural systems at appropriate scales, yet studies of ecological processes continue to be compromised by inadequate attention to scale issues. How spatial and temporal patterns in nature change with scale often reflects fundamental laws of physics, chemistry, or biology, and we can identify such basic, governing laws only by comparing patterns over a wide range of scales. This presentation will provide several examples that integrate bird observations made by volunteers with NASA Earth imagery, using Big Data analysis techniques to analyze the temporal patterns of bird occurrence across scales, from hemisphere-wide views of bird distributions to the impact of powerful city lights on bird migration.

  17. Unified Approach to Modeling and Simulation of Space Communication Networks and Systems

    Science.gov (United States)

    Barritt, Brian; Bhasin, Kul; Eddy, Wesley; Matthews, Seth

    2010-01-01

    Network simulator software tools are often used to model the behaviors and interactions of applications, protocols, packets, and data links in terrestrial communication networks. Other software tools that model the physics, orbital dynamics, and RF characteristics of space systems have matured to allow for rapid, detailed analysis of space communication links. However, the absence of a unified toolset that integrates the two modeling approaches has encumbered the systems engineers tasked with the design, architecture, and analysis of complex space communication networks and systems. This paper presents the unified approach and describes the motivation, challenges, and our solution: the customization of the network simulator to integrate with astronautical analysis software tools for high-fidelity end-to-end simulation. Keywords: space; communication; systems; networking; simulation; modeling; QualNet; STK; integration; space networks

  18. Methods for assessment of climate variability and climate changes in different time-space scales

    International Nuclear Information System (INIS)

    Lobanov, V.; Lobanova, H.

    2004-01-01

    climate change indexes of such a classification have been developed, which include: the statistical significance or non-significance of climate changes; the direction of the climate change tendency, given its statistical significance; an assessment of its contribution; and the form of the tendency, if it is sufficiently complex over time. In the detected homogeneous regions a spatial generalization is fulfilled, which includes different approaches depending on the regularities of the spatial features: averaging, the development of spatial distribution functions, or spatial simulation. A new spatial linear model has been developed and suggested which includes two coefficients connected with the gradient and the level of the space field, and one parameter which characterizes the internal inhomogeneity of the field. The last step of the suggested methodology is the use of the detected point and field climate changes for the determination of design hydrological values. Traditional design characteristics (one random event in each year) as well as new ones (POT, rare extremes, characteristics of cycles of climate variability), which can occur more or less often than once per year, have been chosen. An approach and methods for the use of detected climate changes in hydrological computations have been developed. The application of the developed methods is shown with examples of different hydrometeorological characteristics (floods, low flow, annual runoff, monthly and annual temperature and precipitation) in regions with different climatic conditions. (Author)

  19. Quantifying space, understanding minds: A visual summary approach

    Directory of Open Access Journals (Sweden)

    Mark Simpson

    2017-06-01

    Full Text Available This paper presents an illustrated, validated taxonomy of research that compares spatial measures to human behavior. Spatial measures quantify the spatial characteristics of environments, such as the centrality of intersections in a street network or the accessibility of a room in a building from all the other rooms. While spatial measures have been of interest to spatial sciences, they are also of importance in the behavioral sciences for use in modeling human behavior. A high correlation between values for spatial measures and specific behaviors can provide insights into an environment's legibility, and contribute to a deeper understanding of human spatial cognition. Research in this area takes place in several domains, which makes a full understanding of existing literature difficult. To address this challenge, we adopt a visual summary approach. Literature is analyzed, and recurring topics are identified and validated with independent inter-rater agreement tasks in order to create a robust taxonomy for spatial measures and human behavior. The taxonomy is then illustrated with a visual representation that allows for at-a-glance visual access to the content of individual research papers in a corpus. A public web interface has been created that allows interested researchers to add to the database and create visual summaries for their research papers using our taxonomy.

  20. A rank-based approach for correcting systematic biases in spatial disaggregation of coarse-scale climate simulations

    Science.gov (United States)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2017-07-01

    Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation, at finer spatial scales biases are re-introduced by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias, especially for both low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank-space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages, especially in the simulation of particularly low and high downscaled precipitation amounts.
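
    One plausible reading of the rank-space idea can be sketched directly: instead of applying a cell's mean multiplicative anomaly, apply the historical anomaly whose coarse value occupies the matching rank. All series below are synthetic, and the pairing rule is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
coarse_hist = rng.gamma(2.0, 10.0, 1000)                 # disaggregated GCM history
fine_hist = coarse_hist * rng.lognormal(0.1, 0.4, 1000)  # co-located fine-scale obs

order = np.argsort(coarse_hist)
coarse_sorted = coarse_hist[order]
anom_by_rank = (fine_hist / coarse_hist)[order]  # anomaly of the k-th ranked coarse value

def downscale(coarse_new):
    """Apply the anomaly at the matching rank instead of the mean anomaly."""
    idx = np.clip(np.searchsorted(coarse_sorted, coarse_new),
                  0, len(coarse_sorted) - 1)
    return coarse_new * anom_by_rank[idx]

print(np.round(downscale(np.array([1.0, 20.0, 80.0])), 2))
```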

  1. A multi-scale approach to monitor urban carbon-dioxide emissions in the atmosphere over Vancouver, Canada

    Science.gov (United States)

    Christen, A.; Crawford, B.; Ketler, R.; Lee, J. K.; McKendry, I. G.; Nesic, Z.; Caitlin, S.

    2015-12-01

    Measurements of long-lived greenhouse gases in the urban atmosphere are potentially useful to constrain and validate urban emission inventories, or space-borne remote-sensing products. We summarize and compare three different approaches, operating at different scales, that directly or indirectly identify, attribute and quantify emissions (and uptake) of carbon dioxide (CO2) in urban environments. All three approaches are illustrated using in-situ measurements in the atmosphere in and over Vancouver, Canada. Mobile sensing may be a promising way to quantify and map CO2 mixing ratios at fine scales across heterogeneous and complex urban environments. We developed a system for monitoring CO2 mixing ratios at street level using a network of mobile CO2 sensors deployable on vehicles and bikes. A total of 5 prototype sensors were built and simultaneously used in a measurement campaign across a range of urban land use types and densities within a short time frame (3 hours). The dataset is used to aid in fine-scale emission mapping in combination with simultaneous tower-based flux measurements. Overall, calculated CO2 emissions are realistic when compared against a spatially disaggregated scale emission inventory. The second approach is based on mass flux measurements of CO2 using a tower-based eddy covariance (EC) system. We present a continuous 7-year long dataset of CO2 fluxes measured by EC at the 28 m tall flux tower 'Vancouver-Sunset'. We show how this dataset can be combined with turbulent source area models to quantify and partition different emission processes at the neighborhood-scale. The long-term EC measurements are within 10% of a spatially disaggregated scale emission inventory. Thirdly, at the urban scale, we present a dataset of CO2 mixing ratios measured using a tethered balloon system in the urban boundary layer above Vancouver. Using a simple box model, net city-scale CO2 emissions can be determined using measured rate of change of CO2 mixing ratios
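    The simple box model mentioned at the end can be illustrated as follows: treating the urban boundary layer as a well-mixed box of height h, the net surface flux is proportional to the observed rate of change of the mixing ratio. All numbers below are hypothetical, and neglecting advection and entrainment is an assumption of this sketch.

```python
import numpy as np

h = 500.0                    # assumed boundary-layer (box) height [m]
n_air = 41.6                 # molar density of air [mol m^-3] at ~15 C, 1 atm
co2_ppm = np.array([410.0, 410.8, 411.9, 413.2])  # hypothetical mixing ratios
t_hours = np.array([0.0, 1.0, 2.0, 3.0])

# Rate of change of the CO2 mole fraction [mol CO2 / mol air / s].
dC_dt = np.gradient(co2_ppm * 1e-6, t_hours * 3600.0)

# Net surface flux into the well-mixed box [mol CO2 m^-2 s^-1].
flux = h * n_air * dC_dt
print(flux * 44.0 * 3600.0)  # converted to g CO2 m^-2 h^-1
```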

  2. A behavioral approach to shared mapping of peripersonal space between oneself and others.

    Science.gov (United States)

    Teramoto, Wataru

    2018-04-03

    Recent physiological studies have shown that some visuotactile brain areas respond to others' peripersonal spaces (PPS) as they would to their own. This study investigates this PPS remapping phenomenon in terms of human behavior. Participants placed their left hands on a tabletop screen where visual stimuli were projected. A vibrotactile stimulator was attached to the tip of their index finger. While a white disk approached or receded from the hand in the participant's near or far space, the participant was instructed to quickly detect a target (vibrotactile stimulation, a change in the moving disk's color, or both). When performing this task alone, the participants exhibited shorter detection times when the disk approached the hand in their near space. In contrast, when performing the task with a partner across the table, the participants exhibited shorter detection times both when the disk approached their own hand in their near space and when it approached the partner's hand in the partner's near space but the participants' far space. This phenomenon was also observed when the body parts from which the visual stimuli approached/receded differed between the participant and partner. These results suggest that humans can share PPS representations and/or body-derived attention/arousal mechanisms with others.

  3. Quantitative approach to small-scale nonequilibrium systems

    DEFF Research Database (Denmark)

    Dreyer, Jakob K; Berg-Sørensen, Kirstine; Oddershede, Lene B

    2006-01-01

    In a nano-scale system out of thermodynamic equilibrium, it is important to account for thermal fluctuations. Typically, the thermal noise contributes fluctuations, e.g., of distances that are substantial in comparison to the size of the system and typical distances measured. If the thermal...... propose an approximate but quantitative way of dealing with such an out-of-equilibrium system. The limits of this approximate description of the escape process are determined through optical tweezers experiments and comparison to simulations. Also, this serves as a recipe for how to use the proposed...

  4. Combining Statistical Methodologies in Water Quality Monitoring in a Hydrological Basin - Space and Time Approaches

    OpenAIRE

    Costa, Marco; A. Manuela Gonçalves

    2012-01-01

    This work discusses some statistical approaches that combine multivariate statistical techniques and time series analysis in order to describe and model spatial patterns and temporal evolution of hydrological series of water quality variables recorded in time and space. These approaches are illustrated with a data set collected in the River Ave hydrological basin, located in the Northwest region of Portugal.

  5. An approach to an acute emotional stress reference scale.

    Science.gov (United States)

    Garzon-Rey, J M; Arza, A; de-la-Camara, C; Lobo, A; Armario, A; Aguilo, J

    2017-06-16

    The clinical diagnosis aims to identify the degree of affectation of the psycho-physical state of the patient as a guide to therapeutic intervention. In stress, the lack of a measurement tool based on a reference makes it difficult to assess this degree of affectation quantitatively. The objective is to define and perform a primary assessment of a standard reference to measure acute emotional stress from the markers identified as indicators of its degree. Psychometric tests and biochemical variables are, in general, the stress measurements most accepted by the scientific community. Each one of them probably responds to different and complementary processes related to the reaction to a stress stimulus. The proposed reference is a weighted mean of these indicators, with relative weights assigned in accordance with a principal components analysis. An experimental study was conducted on 40 healthy young people subjected to the psychosocial stress stimulus of the Trier Social Stress Test in order to perform a primary assessment and consistency check of the proposed reference. The proposed scale clearly differentiates between the induced relax and stress states. Accepting the subjectivity of the definition and the lack of a subsequent validation with new experimental data, the proposed standard differentiates between a relaxed state and an emotional stress state triggered by a moderate stress stimulus, such as the Trier Social Stress Test. The scale is robust: although variations in the percentage composition slightly affect the score, they do not affect the valid differentiation between states.
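    A minimal sketch of the weighting step described above, assuming standardized indicator columns and using the loadings of the first principal component as relative weights; the authors' exact procedure may differ.

```python
import numpy as np

def pca_weighted_score(indicators):
    """Composite score: standardize markers (e.g. psychometric scores,
    biochemical variables), then weight them by the loadings of the
    leading principal component. Illustrative sketch only.

    indicators : array of shape (n_subjects, n_markers)
    """
    z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
    cov = np.cov(z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    w = np.abs(eigvecs[:, -1])               # loadings of leading component
    w /= w.sum()                             # normalize to relative weights
    return z @ w                             # weighted mean of the indicators

rng = np.random.default_rng(0)
markers = rng.normal(size=(40, 5))           # e.g. 40 subjects, 5 stress markers
print(pca_weighted_score(markers)[:5])
```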

  6. Dynamic simulation of a pilot scale vacuum gas oil hydrocracking unit by the space-time CE/SE method

    Energy Technology Data Exchange (ETDEWEB)

    Sadighi, S.; Ahmad, A. [Institute of Hydrogen Economy, Universiti Teknologi Malaysia, Johor Bahru (Malaysia); Shirvani, M. [Faculty of Chemical Engineering, University of Science and Technology, Tehran (Iran, Islamic Republic of)

    2012-05-15

    This work introduces a modified space-time conservation element/solution element (CE/SE) method for the simulation of the dynamic behavior of a pilot-scale hydrocracking reactor. With this approach, a four-lump dynamic model including vacuum gas oil (VGO), middle distillate, naphtha and gas is solved. The proposed method is capable of handling the stiffness of the partial differential equations resulting from the hydrocracking reactions. To have a better judgment, the model is also solved by the finite difference method (FDM), and the results from both approaches are compared. Initially, the absolute average deviation of the cold dynamic simulation using the CE/SE approach is 8.98 %, which is better than that obtained using the FDM. Then, the stability analysis proves that for achieving an appropriate response from the dynamic model, the Courant number, which is a function of the time step size, mesh size and volume flow rate through the catalytic bed, should be less than 1. Finally, it is found that, following a careful selection of these parameters, the CE/SE solutions to the hydrocracking model can produce higher accuracy than the FDM results. (Copyright © 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
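    The stability condition quoted above can be checked in a few lines. The velocity, time step and mesh size below are hypothetical pilot-scale values, and the simple advective definition of the Courant number is an assumption of this sketch.

```python
def courant_number(u, dt, dx):
    """CFL (Courant) number for an advective transport step.
    u  : interstitial velocity through the catalytic bed [m/s]
         (volume flow rate / free cross-section, assumed here)
    dt : time step [s]
    dx : mesh size [m]
    """
    return u * dt / dx

# Hypothetical values, for illustration only.
u, dt, dx = 2.0e-4, 5.0, 0.002
c = courant_number(u, dt, dx)
assert c < 1.0, "stability condition violated: refine dt or coarsen dx"
print(f"Courant number = {c:.2f}")  # -> 0.50
```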

  7. Redshift space correlations and scale-dependent stochastic biasing of density peaks

    Science.gov (United States)

    Desjacques, Vincent; Sheth, Ravi K.

    2010-01-01

    We calculate the redshift space correlation function and the power spectrum of density peaks of a Gaussian random field. Our derivation, which is valid on linear scales k ≲ 0.1 h Mpc⁻¹, is based on the peak biasing relation given by Desjacques [Phys. Rev. D 78, 103503 (2008), doi:10.1103/PhysRevD.78.103503]. In linear theory, the redshift space power spectrum is P_pk^s(k,μ) = exp(−f²σ_vel²k²μ²)[b_pk(k) + b_vel(k)fμ²]²P_δ(k), where μ is the angle with respect to the line of sight, σ_vel is the one-dimensional velocity dispersion, f is the growth rate, and b_pk(k) and b_vel(k) are k-dependent linear spatial and velocity bias factors. For peaks, the value of σ_vel depends upon the functional form of b_vel. When the k dependence is absent from the square brackets and b_vel is set to unity, the resulting expression is assumed to describe models where the bias is linear and deterministic, but the velocities are unbiased. The peak model is remarkable because it has unbiased velocities in this same sense—peak motions are driven by dark matter flows—but, in order to achieve this, b_vel must be k dependent. We speculate that this is true in general: k dependence of the spatial bias will lead to k dependence of b_vel even if the biased tracers flow with the dark matter. Because of the k dependence of the linear bias parameters, standard manipulations applied to the peak model will lead to k-dependent estimates of the growth factor that could erroneously be interpreted as a signature of modified dark energy or gravity. We use the Fisher formalism to show that the constraint on the growth rate f is degraded by a factor of 2 if one allows for a k-dependent velocity bias of the peak type. Our analysis also demonstrates that the Gaussian smoothing term is part and parcel of linear theory. We discuss a simple estimate of nonlinear evolution and illustrate the effect of the peak bias on the redshift space multipoles. For k ≲ 0.1 h Mpc⁻¹, the peak bias is deterministic but k dependent.
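    The quoted linear-theory expression is straightforward to evaluate numerically. In the sketch below, b_pk, b_vel and P_delta are user-supplied callables with toy forms; all of them are assumptions of this sketch, to be replaced by the actual peak-model functions.

```python
import numpy as np

def P_pk_s(k, mu, f, sigma_vel, b_pk, b_vel, P_delta):
    # exp(-f^2 sigma_vel^2 k^2 mu^2) [b_pk(k) + b_vel(k) f mu^2]^2 P_delta(k)
    damping = np.exp(-((f * sigma_vel * k * mu) ** 2))
    return damping * (b_pk(k) + b_vel(k) * f * mu ** 2) ** 2 * P_delta(k)

# Toy ingredients, for illustration only.
k = np.logspace(-3, -1, 50)          # wavenumbers [h/Mpc], linear scales
print(P_pk_s(k, mu=0.5, f=0.5, sigma_vel=3.0,
             b_pk=lambda k: 1.5 + 10.0 * k**2,    # hypothetical k-dependent bias
             b_vel=lambda k: 1.0 - 5.0 * k**2,    # hypothetical velocity bias
             P_delta=lambda k: 2.0e4 * k / (1.0 + (k / 0.02) ** 3))[:3])
```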

  8. A micromechanical approach of suffusion based on a length scale analysis of the grain detachment and grain transport processes.

    Science.gov (United States)

    Wautier, Antoine; Bonelli, Stéphane; Nicot, François

    2017-06-01

    Suffusion is the selective erosion of the finest particles of a soil subjected to an internal flow. Among the four types of internal erosion and piping identified today, suffusion is the least understood. Indeed, there is a lack of micromechanical approaches for identifying the critical microstructural parameters responsible for this process. Based on a discrete element modeling of non-cohesive granular assemblies, specific micromechanical tools are developed in a unified framework to account for the first two steps of suffusion, namely the grain detachment and the grain transport processes. Thanks to the use of an enhanced force chain definition and autocorrelation functions, the typical length scales associated with grain detachment are characterized. From the definition of transport paths based on a graph description of the pore space, the typical length scales associated with grain transport are recovered. For a uniform grain size distribution, a separation of scales between these two processes exists for the finest particles of a soil.

  9. Space, Scale and Languages: Identity Construction of Cross-Boundary Students in a Multilingual University in Hong Kong

    Science.gov (United States)

    Gu, Mingyue Michelle; Tong, Ho Kin

    2012-01-01

    Drawing on the notions of scale and space, this paper investigates identity construction among a group of mainland Chinese cross-boundary students by analysing their language choices and linguistic practices in a multilingual university in Hong Kong. The research illustrates how movement across spaces by these students produces varying index…

  10. New Li-Yau-Hamilton Inequalities for the Ricci Flow via the Space-Time Approach

    OpenAIRE

    Chow, Bennett; Knopf, Dan

    2002-01-01

    We generalize Hamilton's matrix Li-Yau-type Harnack estimate for the Ricci flow by considering the space of all LYH (Li-Yau-Hamilton) quadratics that arise as curvature tensors of space-time connections satisfying the Ricci flow with respect to the natural space-time degenerate metric. As a special case, we employ scaling arguments to derive a linear-type matrix LYH estimate. The new LYH quadratics obtained in this way are associated to the system of the Ricci flow coupled to a 1-form and a 2-form...

  11. Xanthophyll Cycle In Chromophyte Algae: Variations Over Different Temporal and Space Scales and Their Ecological Implications.

    Science.gov (United States)

    Brunet, C.

    As a response to excess light, algae present photoprotective reactions, resulting in a reduction of the light harvesting efficiency. One of these reactions involves the so-called xanthophyll cycle between the diadinoxanthin (Dd) and diatoxanthin (Dt) pigments in chl c-containing brown algae, the latter acting as a photoprotectant that avoids photooxidation of the LHC. Presence and concentrations of these two xanthophylls are valuable indicators of the light history of algae in the natural environment and can be used to obtain ecological information at different time and space scales. Data are presented from the Mediterranean Sea and the English Channel. At mesoscale, significant relationships between Dt and Dd and physical (light, salinity) or biological (Fv/Fm ratio) data can be drawn, suggesting that they strictly reflect water mass characteristics and behavior. In the Gulf of Naples (Med. Sea), from vertical profiles of the photoadaptive index (ratio between Dt and Dd), we can estimate a mixing rate of 0.07 cm s⁻¹ in the upper layer. From this velocity, we are able to infer kinetic coefficients for different photophysiological parameters reacting over different time scales within the mixed layer. At the diel scale, this photoadaptive index follows significant oscillations in the upper water column, and equations are found expressing them as functions of light and time. Also in this case, mixing rates are estimated, lying around 0.05 cm s⁻¹.

  12. Towards a More Biologically-meaningful Climate Characterization: Variability in Space and Time at Multiple Scales

    Science.gov (United States)

    Christianson, D. S.; Kaufman, C. G.; Kueppers, L. M.; Harte, J.

    2013-12-01

    fine-spatial scales (sub-meter to 10-meter) shows greater temperature variability with warmer mean temperatures. This is inconsistent with the inherent assumption made in current species distribution models that fine-scale variability is static, implying that current projections of future species ranges may be biased -- the direction and magnitude requiring further study. While we focus our findings on the cross-scaling characteristics of temporal and spatial variability, we also compare the mean-variance relationship between 1) experimental climate manipulations and observed conditions and 2) temporal versus spatial variance, i.e., variability in a time-series at one location vs. variability across a landscape at a single time. The former informs the rich debate concerning the ability to experimentally mimic a warmer future. The latter informs space-for-time study design and analyses, as well as species persistence via a combined spatiotemporal probability of suitable future habitat.

  13. Distribution function approach to redshift space distortions. Part II: N-body simulations

    International Nuclear Information System (INIS)

    Okumura, Teppei; Seljak, Uroš; McDonald, Patrick; Desjacques, Vincent

    2012-01-01

    Measurement of redshift-space distortions (RSD) offers an attractive method to directly probe the cosmic growth history of density perturbations. A distribution function approach where RSD can be written as a sum over density weighted velocity moment correlators has recently been developed. In this paper we use results of N-body simulations to investigate the individual contributions and convergence of this expansion for dark matter. If the series is expanded as a function of powers of μ, the cosine of the angle between the Fourier mode and the line of sight, then there are a finite number of terms contributing at each order. We present these terms and investigate their contribution to the total as a function of wavevector k. For μ² the correlation between density and momentum dominates on large scales. Higher order corrections, which act as a Finger-of-God (FoG) term, contribute 1% at k ∼ 0.015 h Mpc⁻¹ and 10% at k ∼ 0.05 h Mpc⁻¹ at z = 0, while for k > 0.15 h Mpc⁻¹ they dominate and make the total negative. These higher order terms are dominated by density-energy density correlations, which contribute negatively to the power, while the contribution from the vorticity part of the momentum density auto-correlation adds to the total power, but is an order of magnitude lower. For the μ⁴ term the dominant term on large scales is the scalar part of the momentum density auto-correlation, while higher order terms dominate for k > 0.15 h Mpc⁻¹. For μ⁶ and μ⁸ we find very little power for k ≲ … h Mpc⁻¹, shooting up by 2–3 orders of magnitude between k ≲ … h Mpc⁻¹ and k ∼ … h Mpc⁻¹. We also compare the expansion to the full 2-d P^ss(k,μ), as well as to the monopole, quadrupole, and hexadecapole integrals of P^ss(k,μ). For these statistics an infinite number of terms contribute, and we find that the expansion achieves percent level accuracy for kμ ≲ … h Mpc⁻¹ at 6th order, but breaks down on smaller scales because the series is no longer perturbative. We explore resummation of the terms into FoG…

  14. A dynamically adaptive wavelet approach to stochastic computations based on polynomial chaos - capturing all scales of random modes on independent grids

    International Nuclear Information System (INIS)

    Ren Xiaoan; Wu Wenquan; Xanthis, Leonidas S.

    2011-01-01

    Highlights: → New approach for stochastic computations based on polynomial chaos. → Development of dynamically adaptive wavelet multiscale solver using space refinement. → Accurate capture of steep gradients and multiscale features in stochastic problems. → All scales of each random mode are captured on independent grids. → Numerical examples demonstrate the need for different space resolutions per mode. - Abstract: In stochastic computations, or uncertainty quantification methods, the spectral approach based on the polynomial chaos expansion in random space leads to a coupled system of deterministic equations for the coefficients of the expansion. The size of this system increases drastically when the number of independent random variables and/or order of polynomial chaos expansions increases. This is invariably the case for large scale simulations and/or problems involving steep gradients and other multiscale features; such features are variously reflected on each solution component or random/uncertainty mode requiring the development of adaptive methods for their accurate resolution. In this paper we propose a new approach for treating such problems based on a dynamically adaptive wavelet methodology involving space-refinement on physical space that allows all scales of each solution component to be refined independently of the rest. We exemplify this using the convection-diffusion model with random input data and present three numerical examples demonstrating the salient features of the proposed method. Thus we establish a new, elegant and flexible approach for stochastic problems with steep gradients and multiscale features based on polynomial chaos expansions.
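    As a one-variable illustration of the expansion underlying this kind of solver, the sketch below projects a function of a standard Gaussian variable onto probabilists' Hermite polynomials by Gauss-Hermite quadrature. It shows the polynomial chaos projection step only, not the coupled deterministic system or the adaptive wavelet solver of the paper.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coefficients(g, order):
    """Coefficients c_n in g(xi) ~ sum_n c_n He_n(xi), xi ~ N(0,1)."""
    # Gauss-Hermite(e) rule: sum w_i f(x_i) ~ integral f(x) exp(-x^2/2) dx.
    x, w = hermegauss(2 * order + 1)
    coeffs = []
    for n in range(order + 1):
        basis = np.zeros(n + 1)
        basis[n] = 1.0
        Hn = hermeval(x, basis)        # probabilists' Hermite polynomial He_n
        # c_n = E[g He_n] / E[He_n^2], with E[He_n^2] = n! under N(0,1).
        c_n = np.sum(w * g(x) * Hn) / (math.sqrt(2 * math.pi) * math.factorial(n))
        coeffs.append(c_n)
    return np.array(coeffs)

# Example: expand exp(xi); the exact coefficients are sqrt(e)/n!.
print(pce_coefficients(np.exp, 4))
```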

  15. Keeping it real: revisiting a real-space approach to running ensembles of cosmological N-body simulations

    International Nuclear Information System (INIS)

    Orban, Chris

    2013-01-01

    In setting up initial conditions for ensembles of cosmological N-body simulations there are, fundamentally, two choices: either maximizing the correspondence of the initial density field to the assumed Fourier-space clustering or, instead, matching to real-space statistics and allowing the DC mode (i.e. overdensity) to vary from box to box as it would in the real universe. As a stringent test of both approaches, I perform ensembles of simulations using a power law and a 'power law times a bump' model inspired by baryon acoustic oscillations (BAO), exploiting the self-similarity of these initial conditions to quantify the accuracy of the matter-matter two-point correlation results. The real-space method, which was originally proposed by Pen 1997 [1] and implemented by Sirko 2005 [2], performed well in producing the expected self-similar behavior and corroborated the non-linear evolution of the BAO feature observed in conventional simulations, even in the strongly-clustered regime (σ_8 ≳ 1). In revisiting the real-space method championed by [2], it was also noticed that this earlier study overlooked an important integral constraint correction to the correlation function in results from the conventional approach that can be important in ΛCDM simulations with L_box ∼ h⁻¹ Gpc and on scales r ≳ L_box/10. Rectifying this issue shows that the Fourier-space and real-space methods are about equally accurate and efficient for modeling the evolution and growth of the correlation function, contrary to previous claims. An appendix provides a useful independent-of-epoch analytic formula for estimating the importance of the integral constraint bias on correlation function measurements in ΛCDM simulations.

  16. PATTERN CLASSIFICATION APPROACHES TO MATCHING BUILDING POLYGONS AT MULTIPLE SCALES

    Directory of Open Access Journals (Sweden)

    X. Zhang

    2012-07-01

    Full Text Available Matching of building polygons with different levels of detail is crucial in the maintenance and quality assessment of multi-representation databases. Two general problems need to be addressed in the matching process: (1) Which criteria are suitable? (2) How to effectively combine different criteria to make decisions? This paper mainly focuses on the second issue and views data matching as a supervised pattern classification. Several classifiers (i.e. decision trees, Naive Bayes and support vector machines) are evaluated for the matching task. Four criteria (i.e. position, size, shape and orientation) are used to extract information for these classifiers. Evidence shows that these classifiers outperformed the weighted average approach.
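    A minimal sketch of the classification setup described above, using one of the named classifiers (a decision tree) on the four criteria; the feature values and labels below are synthetic, not the paper's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Columns: position distance, size ratio, shape similarity, orientation
# difference [deg]. Rows are candidate polygon pairs (synthetic examples).
X = np.array([
    [0.05, 0.97, 0.91, 2.0],    # pair that matches
    [0.40, 0.55, 0.48, 35.0],   # pair that does not
    [0.08, 0.92, 0.88, 5.0],
    [0.55, 0.60, 0.52, 50.0],
])
y = np.array([1, 0, 1, 0])      # 1 = match, 0 = non-match

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[0.07, 0.95, 0.90, 3.0]]))   # -> [1]
```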

  17. Micro scale spatial relationships in urban studies : The relationship between private and public space and its impact on street life

    NARCIS (Netherlands)

    Van Nes, A.; Lopez, M.J.J.

    2007-01-01

    Research on urban environment by means of space syntax theory and methods tends to focus on macro scale spatial conditions. However, micro scale conditions should not be neglected. In research on street life and dispersal of crime in urban areas, it became inevitable to pay attention to the

  18. Modelling an industrial anaerobic granular reactor using a multi-scale approach.

    Science.gov (United States)

    Feldman, H; Flores-Alsina, X; Ramin, P; Kjellberg, K; Jeppsson, U; Batstone, D J; Gernaey, K V

    2017-12-01

    The objective of this paper is to show the results of an industrial project dealing with modelling of anaerobic digesters. A multi-scale mathematical approach is developed to describe reactor hydrodynamics, granule growth/distribution and microbial competition/inhibition for substrate/space within the biofilm. The main biochemical and physico-chemical processes in the model are based on the Anaerobic Digestion Model No 1 (ADM1) extended with the fate of phosphorus (P), sulfur (S) and ethanol (Et-OH). Wastewater dynamic conditions are reproduced and data frequency increased using the Benchmark Simulation Model No 2 (BSM2) influent generator. All models are tested using two plant data sets corresponding to different operational periods (#D1, #D2). Simulation results reveal that the proposed approach can satisfactorily describe the transformation of organics, nutrients and minerals, the production of methane, carbon dioxide and sulfide and the potential formation of precipitates within the bulk (average deviation between computer simulations and measurements for both #D1 and #D2 is around 10%). Model predictions suggest a stratified structure within the granule which is the result of: 1) applied loading rates, 2) mass transfer limitations and 3) specific (bacterial) affinity for substrate. Hence, inerts (X_I) and methanogens (X_ac) are situated in the inner zone, and this fraction lowers as the radius increases, favouring the presence of acidogens (X_su, X_aa, X_fa) and acetogens (X_c4, X_pro). Additional simulations show the effects on the overall process performance when operational (pH) and loading (S:COD) conditions are modified. Lastly, the effect of intra-granular precipitation on the overall organic/inorganic distribution is assessed at: 1) different times; and, 2) reactor heights. Finally, the possibilities and opportunities offered by the proposed approach for conducting engineering optimization projects are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Space Station Freedom - Configuration management approach to supporting concurrent engineering and total quality management. [for NASA Space Station Freedom Program

    Science.gov (United States)

    Gavert, Raymond B.

    1990-01-01

    Some experiences of NASA configuration management in providing concurrent engineering support to the Space Station Freedom program for the achievement of life cycle benefits and total quality are discussed. Three change decision experiences involving tracing requirements and automated information systems of the electrical power system are described. The potential benefits of concurrent engineering and total quality management include improved operational effectiveness, reduced logistics and support requirements, prevention of schedule slippages, and life cycle cost savings. It is shown how configuration management can influence the benefits attained through disciplined approaches and innovations that compel consideration of all the technical elements of engineering and quality factors that apply to the program development, transition to operations and in operations. Configuration management experiences involving the Space Station program's tiered management structure, the work package contractors, international partners, and the participating NASA centers are discussed.

  20. Facing the scaling problem: A multi-methodical approach to simulate soil erosion at hillslope and catchment scale

    Science.gov (United States)

    Schmengler, A. C.; Vlek, P. L. G.

    2012-04-01

    Modelling soil erosion requires a holistic understanding of the sediment dynamics in a complex environment. As most erosion models are scale-dependent and their parameterization is spatially limited, their application often requires special care, particularly in data-scarce environments. This study presents a hierarchical approach to overcome the limitations of a single model by using various quantitative methods and soil erosion models to cope with the issues of scale. At hillslope scale, the physically-based Water Erosion Prediction Project (WEPP) model is used to simulate soil loss and deposition processes. Model simulations of soil loss vary between 5 and 50 t ha⁻¹ yr⁻¹ depending on the spatial location on the hillslope and have only limited correspondence with the results of the 137Cs technique. These differences in absolute soil loss values could be due either to internal shortcomings of each approach or to external scale-related uncertainties. Pedo-geomorphological soil investigations along a catena confirm that estimations by the 137Cs technique are more appropriate in reflecting both the spatial extent and magnitude of soil erosion at hillslope scale. In order to account for sediment dynamics at a larger scale, the spatially-distributed WaTEM/SEDEM model is used to simulate soil erosion at catchment scale and to predict sediment delivery rates into a small water reservoir. Predicted sediment yield rates are compared with results gained from a bathymetric survey and sediment core analysis. Results show that specific sediment rates of 0.6 t ha⁻¹ yr⁻¹ by the model are in close agreement with observed sediment yield calculated from stratigraphical changes and downcore variations in 137Cs concentrations. Sediment erosion rates averaged over the entire catchment of 1 to 2 t ha⁻¹ yr⁻¹ are significantly lower than results obtained at hillslope scale, confirming an inverse correlation between the magnitude of erosion rates and the spatial scale of the model.

  1. A phase-space approach to atmospheric dynamics based on observational data. Theory and applications

    International Nuclear Information System (INIS)

    Wang Risheng.

    1994-01-01

    This thesis is an attempt to systematically develop a phase-space approach to atmospheric dynamics based on the theoretical achievements and application experience in nonlinear time-series analysis. In particular, it is concerned with the derivation of quantities for describing the geometrical structure of the observed dynamics in phase-space (dimension estimation) and with the examination of the observed atmospheric fluctuations in the light of phase-space representation. The thesis is therefore composed of three major parts, i.e. a general survey of the theory of statistical approaches to dynamic systems, the methodology designed for the present study, and specific applications with respect to dimension estimation and to a phase-space analysis of the tropical stratospheric quasi-biennial oscillation. (orig./KW)
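    The dimension-estimation step mentioned here is commonly done with the Grassberger-Procaccia correlation sum on a delay-embedded series; the slope of log C(r) versus log r then estimates the correlation dimension. The sketch below is a generic version of that technique, not necessarily the thesis' exact procedure.

```python
import numpy as np

def correlation_sum(series, dim, delay, radii):
    """Grassberger-Procaccia correlation sum C(r) for a delay-embedded
    scalar series (generic sketch; O(n^2) pairwise distances)."""
    n = len(series) - (dim - 1) * delay
    # Delay embedding: rows are points in the reconstructed phase space.
    emb = np.column_stack([series[i * delay:i * delay + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    pair_d = dists[np.triu_indices(n, k=1)]       # distinct pairs only
    return np.array([(pair_d < r).mean() for r in radii])

# Example with a noisy sine, for illustration only.
t = np.linspace(0, 40 * np.pi, 1200)
x = np.sin(t) + 0.01 * np.random.default_rng(0).normal(size=t.size)
print(correlation_sum(x, dim=3, delay=10, radii=[0.1, 0.3, 1.0]))
```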

  2. Climatic and physiographic controls on catchment-scale nitrate loss at different spatial scales: insights from a top-down model development approach

    Science.gov (United States)

    Shafii, Mahyar; Basu, Nandita; Schiff, Sherry; Van Cappellen, Philippe

    2017-04-01

    The dramatic increase in nitrogen circulating in the biosphere due to anthropogenic activities has resulted in impaired water quality in groundwater and surface water, causing eutrophication in coastal regions. Understanding the fate and transport of nitrogen from landscape to coastal areas requires exploring the drivers of nitrogen processes in both time and space, as well as the identification of appropriate flow pathways. Conceptual models can be used as diagnostic tools to provide insights into such controls. However, diagnostic evaluation of coupled hydrological-biogeochemical models is challenging. This research proposes a top-down methodology utilizing hydrochemical signatures to develop conceptual models for simulating the integrated streamflow and nitrate responses while taking into account dominant controls on nitrate variability (e.g., climate, soil water content, etc.). Our main objective is to seek appropriate model complexity that sufficiently reproduces multiple hydrological and nitrate signatures. Having developed a suitable conceptual model for a given watershed, we employ it in sensitivity studies to demonstrate the dominant process controls that contribute to the nitrate response at scales of interest. We apply the proposed approach to nitrate simulation in a range of small to large sub-watersheds in the Grand River Watershed (GRW) located in Ontario. Such a multi-basin modeling experiment will enable us to address process scaling and investigate the consequences of lumping processes in terms of models' predictive capability. The proposed methodology can be applied to the development of large-scale models that can help decision-making associated with nutrient management at the regional scale.

  3. Symbols, spaces and materiality: a transmission-based approach to Aegean Bronze Age ritual.

    OpenAIRE

    Briault, C.

    2005-01-01

    This thesis explores the transmission of ritual practices in the second millennium BC Aegean. In contrast to previous approaches, which often overlook gaps in the diachronic record, emphasising continuity in cult practice over very long timescales, it is argued here that through charting the spatial and temporal distributions of three broad material types (cult symbols, spaces and objects), it is possible to document the spread of cult practice over time and space, and, crucially, to monitor ...

  4. Overview of Small and Large-Scale Space Solar Power Concepts

    Science.gov (United States)

    Potter, Seth; Henley, Mark; Howell, Joe; Carrington, Connie; Fikes, John

    2006-01-01

    poles to search for water ice and other frozen volatiles. Near such craters are mountain peaks and highlands that are in near permanent sunlight. Power can be beamed from a collector on a sunlit mountain or crater rim to a rover inside a crater. Near-term applications of space solar power technology can therefore pave the way toward large-scale commercial power from space.

  5. Solar chimney: A sustainable approach for ventilation and building space conditioning

    Directory of Open Access Journals (Sweden)

    Lal, S.,

    2013-03-01

    Full Text Available The demand for residential and commercial buildings increases with a rapidly growing population. This leads to vertical growth of buildings, which require proper ventilation and day-lighting. Natural air ventilation does not work effectively in conventional structures, so fans and air conditioners are required to provide proper ventilation and space conditioning. Globally, the building sector consumes the most energy, with the largest share going to heating, ventilation and space conditioning. This load can be reduced by the application of solar chimneys and integrated approaches in buildings for heating, ventilation and space conditioning, a sustainable approach for these applications. The authors review the concept, various methods of evaluation and modeling, the performance of solar chimney variables, applications and integrated approaches.

  6. OBJECT-ORIENTED CHANGE DETECTION BASED ON MULTI-SCALE APPROACH

    Directory of Open Access Journals (Sweden)

    Y. Jia

    2016-06-01

    Full Text Available Change detection in remote sensing images means quantitatively analysing change information and recognizing the types of change in surface coverage data at different time phases. With the advent of high-resolution remote sensing imagery, object-oriented change detection methods have emerged. In this paper, we investigate a multi-scale approach for high-resolution images, which includes multi-scale segmentation, multi-scale feature selection and multi-scale classification. Experimental results show that this method has a clear advantage over the traditional single-scale method for high-resolution remote sensing image change detection.

  7. Multiscale registration of medical images based on edge preserving scale space with application in image-guided radiation therapy

    Science.gov (United States)

    Li, Dengwang; Li, Hongsheng; Wan, Honglin; Chen, Jinhu; Gong, Guanzhong; Wang, Hongjun; Wang, Liming; Yin, Yong

    2012-08-01

    Mutual information (MI) is a well-accepted similarity measure for image registration in medical systems. However, MI-based registration faces the challenges of high computational complexity and a high likelihood of being trapped into local optima due to an absence of spatial information. In order to solve these problems, multi-scale frameworks can be used to accelerate registration and improve robustness. Traditional Gaussian pyramid representation is one such technique but it suffers from contour diffusion at coarse levels which may lead to unsatisfactory registration results. In this work, a new multi-scale registration framework called edge preserving multiscale registration (EPMR) was proposed based upon an edge preserving total variation L1 norm (TV-L1) scale space representation. TV-L1 scale space is constructed by selecting edges and contours of images according to their size rather than the intensity values of the image features. This ensures more meaningful spatial information with an EPMR framework for MI-based registration. Furthermore, we design an optimal estimation of the TV-L1 parameter in the EPMR framework by training and minimizing the transformation offset between the registered pairs for automated registration in medical systems. We validated our EPMR method on both simulated mono- and multi-modal medical datasets with ground truth and clinical studies from a combined positron emission tomography/computed tomography (PET/CT) scanner. We compared our registration framework with other traditional registration approaches. Our experimental results demonstrated that our method outperformed other methods in terms of the accuracy and robustness for medical images. EPMR can always achieve a small offset value, which is closer to the ground truth both for mono-modality and multi-modality, and the speed can be increased 5-8% for mono-modality and 10-14% for multi-modality registration under the same condition. Furthermore, clinical application by adaptive
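    For reference, the similarity measure driving this kind of registration can be written compactly. The sketch below computes histogram-based mutual information between two equally shaped images; it is a generic illustration of the MI measure, not the EPMR implementation.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based MI between two images of equal shape (generic sketch)."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()                     # joint intensity distribution
    p_a = p_ab.sum(axis=1, keepdims=True)          # marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)          # marginal of image B
    nz = p_ab > 0                                  # avoid log(0)
    return np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz]))

rng = np.random.default_rng(0)
a = rng.random((64, 64))
# MI of an image with itself is high; with unrelated noise it is near zero.
print(mutual_information(a, a), mutual_information(a, rng.random((64, 64))))
```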

  8. Multiscale registration of medical images based on edge preserving scale space with application in image-guided radiation therapy

    International Nuclear Information System (INIS)

    Li Dengwang; Wan Honglin; Li Hongsheng; Chen Jinhu; Gong Guanzhong; Yin Yong; Wang Hongjun; Wang Liming

    2012-01-01

    Mutual information (MI) is a well-accepted similarity measure for image registration in medical systems. However, MI-based registration faces the challenges of high computational complexity and a high likelihood of being trapped into local optima due to an absence of spatial information. In order to solve these problems, multi-scale frameworks can be used to accelerate registration and improve robustness. Traditional Gaussian pyramid representation is one such technique but it suffers from contour diffusion at coarse levels which may lead to unsatisfactory registration results. In this work, a new multi-scale registration framework called edge preserving multiscale registration (EPMR) was proposed based upon an edge preserving total variation L1 norm (TV-L1) scale space representation. TV-L1 scale space is constructed by selecting edges and contours of images according to their size rather than the intensity values of the image features. This ensures more meaningful spatial information with an EPMR framework for MI-based registration. Furthermore, we design an optimal estimation of the TV-L1 parameter in the EPMR framework by training and minimizing the transformation offset between the registered pairs for automated registration in medical systems. We validated our EPMR method on both simulated mono- and multi-modal medical datasets with ground truth and clinical studies from a combined positron emission tomography/computed tomography (PET/CT) scanner. We compared our registration framework with other traditional registration approaches. Our experimental results demonstrated that our method outperformed other methods in terms of the accuracy and robustness for medical images. EPMR can always achieve a small offset value, which is closer to the ground truth both for mono-modality and multi-modality, and the speed can be increased 5–8% for mono-modality and 10–14% for multi-modality registration under the same condition. Furthermore, clinical application by

  9. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.0)

    Energy Technology Data Exchange (ETDEWEB)

    Hukerikar, Saurabh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Engelmann, Christian [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-10-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Practical limits on power consumption in HPC systems will require future systems to embrace innovative architectures, increasing the levels of hardware and software complexity. The resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. These techniques must seek to improve resilience at reasonable overheads to power consumption and performance. While the HPC community has developed various solutions, application-level as well as system-based, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance and power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software ecosystems, which are expected to be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience based on the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. The catalog of resilience design patterns provides designers with reusable design elements. We define a design framework that enhances our understanding of the important

  10. The Faster, Better, Cheaper Approach to Space Missions: An Engineering Management Assessment

    Science.gov (United States)

    Hamaker, Joe

    2000-01-01

    This paper describes, in viewgraph form, the faster, better, cheaper approach to space missions. The topics include: 1) What drives "Faster, Better, Cheaper"? 2) Why Space Programs are Costly; 3) Background; 4) Aerospace Project Management (Old Culture); 5) Aerospace Project Management (New Culture); 6) Scope of Analysis Limited to Engineering Management Culture; 7) Qualitative Analysis; 8) Some Basic Principles of the New Culture; 9) Cause and Effect; 10) "New Ways of Doing Business" Survey Results; 11) Quantitative Analysis; 12) Recent Space System Cost Trends; 13) Spacecraft Dry Weight Trend; 14) Complexity Factor Trends; 15) Cost Normalization; 16) Cost Normalization Algorithm; 17) Unnormalized Cost vs. Normalized Cost; and 18) Concluding Observations.

  11. Coherent Structures and Spectral Energy Transfer in Turbulent Plasma: A Space-Filter Approach

    Science.gov (United States)

    Camporeale, E.; Sorriso-Valvo, L.; Califano, F.; Retinò, A.

    2018-03-01

    Plasma turbulence at scales of the order of the ion inertial length is mediated by several mechanisms, including linear wave damping, magnetic reconnection, the formation and dissipation of thin current sheets, and stochastic heating. It is now understood that the presence of localized coherent structures enhances the dissipation channels and the kinetic features of the plasma. However, no formal way of quantifying the relationship between scale-to-scale energy transfer and the presence of spatial structures has been presented so far. In this Letter we quantify such a relationship by analyzing the results of a two-dimensional high-resolution Hall magnetohydrodynamic simulation. In particular, we employ the technique of space filtering to derive a spectral energy flux term which defines, in any point of the computational domain, the signed flux of spectral energy across a given wave number. The characterization of coherent structures is performed by means of a traditional two-dimensional wavelet transformation. By studying the correlation between the spectral energy flux and the wavelet amplitude, we demonstrate the strong relationship between scale-to-scale transfer and coherent structures. Furthermore, by conditioning one quantity with respect to the other, we are able for the first time to quantify the inhomogeneity of the turbulence cascade induced by topological structures in the magnetic field. Taking into account the low space-filling factor of coherent structures (i.e., they cover a small portion of space), it emerges that 80% of the spectral energy transfer (both in the direct and inverse cascade directions) is localized in about 50% of space, and 50% of the energy transfer is localized in only 25% of space.
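    The space-filtering idea can be illustrated crudely: low-pass the field at a chosen scale and examine what lives below the filter scale. The sketch below forms the pointwise subfilter kinetic energy of a synthetic 2-D velocity field as a simple proxy; it is not the Letter's signed spectral energy flux term, and the Gaussian filter and random field are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def subfilter_kinetic_energy(u, v, scale):
    # Low-pass (space-filter) each velocity component at the given scale.
    u_f = gaussian_filter(u, sigma=scale)
    v_f = gaussian_filter(v, sigma=scale)
    # Kinetic energy per unit mass held below the filter scale, pointwise.
    return 0.5 * ((u - u_f) ** 2 + (v - v_f) ** 2)

rng = np.random.default_rng(2)
u, v = rng.normal(size=(2, 128, 128))      # synthetic 2-D velocity field
print(subfilter_kinetic_energy(u, v, scale=4.0).mean())
```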

  12. Various approaches to the modelling of large scale 3-dimensional circulation in the Ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Shaji, C.; Bahulayan, N.; Rao, A.D.; Dube, S.K.

    In this paper, the three different approaches to the modelling of large scale 3-dimensional flow in the ocean such as the diagnostic, semi-diagnostic (adaptation) and the prognostic are discussed in detail. Three-dimensional solutions are obtained...

  13. City and sea margins. Porto’s Marginal as scale and measure of new spaces

    Directory of Open Access Journals (Sweden)

    Giuseppe Parità

    2014-06-01

    Full Text Available The city has always confronted its own end at the beginning of the water system. Among the different kinds of margin areas, the ones that border cities on their watersides are particularly interesting. These liminal territories are rich in variety and differences and are made up of several elements with different morphologies that should be carefully read and interpreted: the need to re-think the morphological elements that mark an urban edge leads to the identification of several shapes and forms of the water borderlands. Borders, limits, boundaries, edges, margin areas - usually considered as an obstacle to the construction of the city - turn themselves into possible new “design materials” for building that ambiguous distance between the city and the sea. The article focuses on the case study of Porto’s Marginal, which illustrates in how many ways a city can live with its water edges. On a large scale, it is configured as a strip of 15 kilometers of public space. Within this continuity, the varying extent of the distance between city and water leads us to reflect on the different types of relationships (and therefore projects) between the end of one side and the beginning of another. For Porto, these are not only urban parts, but also different geographical parts (sea, rivers, topography) that distance puts in relation through the design sometimes of the line, sometimes of the border or of a surface. So, the analysis of these heterogeneous but continuous projects aims to focus on the several techniques of urban composition used to build contemporary public spaces. On the one hand they give form to a continuous “public figure”; on the other hand each one of the projects can be considered as part of an “atlas” of liminal places, giving form to public spaces.

  14. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    Science.gov (United States)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and
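    The images-versus-points trade-off can be explored with a small Monte Carlo sketch like the one below, where between-image variance is the assumed driver of precision; the distributions and parameters are illustrative, not the paper's simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cover_estimate(true_cover, n_images, n_points, between_image_sd=0.05):
    """One simulated survey: n_images image-level covers drawn around the
    true cover, n_points random points scored per image (sketch only)."""
    img_cover = np.clip(rng.normal(true_cover, between_image_sd, n_images), 0, 1)
    hits = rng.binomial(n_points, img_cover)     # points landing on the biota
    return hits.sum() / (n_images * n_points)

# Compare: many images with few points vs few images with many points.
reps = 2000
a = [cover_estimate(0.2, n_images=50, n_points=5) for _ in range(reps)]
b = [cover_estimate(0.2, n_images=5, n_points=50) for _ in range(reps)]
print(np.std(a), np.std(b))   # (a) is typically the more precise design
```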

  15. Lateral skull base approaches in the management of benign parapharyngeal space tumors.

    Science.gov (United States)

    Prasad, Sampath Chandra; Piccirillo, Enrico; Chovanec, Martin; La Melia, Claudio; De Donato, Giuseppe; Sanna, Mario

    2015-06-01

    To evaluate the role of lateral skull base approaches in the management of benign parapharyngeal space tumors and to propose an algorithm for their surgical approach. Retrospective study of patients with benign parapharyngeal space tumors. The clinical features, radiology and preoperative management of skull base neurovasculature, the surgical approaches and overall results were recorded. 46 patients presented with 48 tumors. 12 were prestyloid and 36 poststyloid. 19 (39.6%) tumors were paragangliomas, 15 (31.25%) were schwannomas and 11 (23%) were pleomorphic adenomas. Preoperative embolization was performed in 19, stenting of the internal carotid artery in 4 and permanent balloon occlusion in 2 patients. 19 tumors were approached by the transcervical, 13 by transcervical-transparotid, 5 by transcervical-transmastoid, 6, 1 and 2 tumors by the infratemporal fossa approach types A, B and D, respectively. Total radical tumor removal was achieved in 46 (96%) of the cases. Lateral skull base approaches have an advantage over other approaches in the management of benign tumors of the parapharyngeal space due to the fact that they provide excellent exposure with less morbidity. The use of microscope combined with bipolar cautery reduces morbidity. Stenting of internal carotid artery gives a chance for complete tumor removal with arterial preservation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  16. Scale-model Experiment of Magnetoplasma Sail for Future Deep Space Missions

    International Nuclear Information System (INIS)

    Funaki, Ikkoh; Yamakawa, Hiroshi; Ueno, Kazuma; Kimura, Toshiyuki; Ayabe, Tomohiro; Horisawa, Hideyuki

    2008-01-01

    When a magnetic sail (MagSail) spacecraft is operated in space, the supersonic solar wind plasma flow is blocked by an artificially produced magnetic cavity, accelerating the spacecraft in the direction away from the Sun. To evaluate the momentum transfer process from the solar wind to the coil onboard the MagSail spacecraft, we arranged a laboratory experiment of a MagSail spacecraft. Based on scaling considerations, a solenoidal coil was immersed into the plasma flow from a magnetoplasmadynamic arcjet in a quasi-steady mode of about 1 ms duration. In this setup, it is confirmed that a magnetic cavity, similar to that of the geomagnetic field, was formed around the coil to produce thrust in the ion-Larmor-scale interaction. Also, the controllability of the magnetic cavity size by a plasma jet from inside the coil of the MagSail is demonstrated, although the thrust characteristics of the MagSail with a plasma jet, the so-called plasma sail, remain to be clarified in our next step.

  17. Pre-Big Bang, space-time structure, asymptotic Universe. Spinorial space-time and a new approach to Friedmann-like equations

    Science.gov (United States)

    Gonzalez-Mestres, Luis

    2014-04-01

    Planck and other recent data in Cosmology and Particle Physics can open the way to controversial analyses concerning the early Universe and its possible ultimate origin. Alternatives to standard cosmology include pre-Big Bang approaches, new space-time geometries and new ultimate constituents of matter. Basic issues related to a possible new cosmology along these lines clearly deserve further exploration. The Planck collaboration reports an age of the Universe t close to 13.8 Gyr and a present ratio H between relative speeds and distances at cosmic scale around 67.3 km/s/Mpc. The product of these two measured quantities is then slightly below 1 (about 0.95), while it can be exactly 1 in the absence of matter and cosmological constant in patterns based on the spinorial space-time we have considered in previous papers. In this description of space-time, which we first suggested in 1996-97, the cosmic time t is given by the modulus of a SU(2) spinor and the Lundmark-Lemaître-Hubble (LLH) expansion law turns out to be of purely geometric origin previous to any introduction of standard matter and relativity. Such a fundamental geometry, inspired by the role of half-integer spin in Particle Physics, may reflect an equilibrium between the dynamics of the ultimate constituents of matter and the deep structure of space and time. Taking into account the observed cosmic acceleration, the present situation suggests that the value of 1 can be a natural asymptotic limit for the product H t in the long-term evolution of our Universe up to possible small corrections. In the presence of a spinorial space-time geometry, no ad hoc combination of dark matter and dark energy would in any case be needed to get an acceptable value of H and an evolution of the Universe compatible with observation. The use of a spinorial space-time naturally leads to unconventional properties for the space curvature term in Friedmann-like equations. It therefore suggests a major modification of the standard
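    For concreteness, the quoted product H t can be checked directly from the two Planck numbers cited above:

```latex
H \simeq \frac{67.3\ \mathrm{km\,s^{-1}\,Mpc^{-1}}}{3.086\times 10^{19}\ \mathrm{km\,Mpc^{-1}}}
  \simeq 2.18\times 10^{-18}\ \mathrm{s^{-1}}, \qquad
t \simeq 13.8\times 10^{9}\ \mathrm{yr} \times 3.156\times 10^{7}\ \mathrm{s\,yr^{-1}}
  \simeq 4.36\times 10^{17}\ \mathrm{s},
\qquad H\,t \simeq 0.95 .
```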

  18. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    Science.gov (United States)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described and examples of state-of-the-art calculations in space science, in particular, the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies and where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory bandwidth limited with low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered carry over to other multi-core and accelerated (e.g., via GPU) platforms and we modified VPIC with flexibility in mind. These will be summarized and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.

  19. Edge preserving smoothing and segmentation of 4-D images via transversely isotropic scale-space processing and fingerprint analysis

    International Nuclear Information System (INIS)

    Reutter, Bryan W.; Algazi, V. Ralph; Gullberg, Grant T.; Huesman, Ronald H.

    2004-01-01

    Enhancements are described for an approach that unifies edge preserving smoothing with segmentation of time sequences of volumetric images, based on differential edge detection at multiple spatial and temporal scales. Potential applications of these 4-D methods include segmentation of respiratory gated positron emission tomography (PET) transmission images to improve accuracy of attenuation correction for imaging heart and lung lesions, and segmentation of dynamic cardiac single photon emission computed tomography (SPECT) images to facilitate unbiased estimation of time-activity curves and kinetic parameters for left ventricular volumes of interest. Improved segmentation of lung surfaces in simulated respiratory gated cardiac PET transmission images is achieved with a 4-D edge detection operator composed of edge preserving 1-D operators applied in various spatial and temporal directions. Smoothing along the axis of a 1-D operator is driven by structure separation seen in the scale-space fingerprint, rather than by image contrast. Spurious noise structures are reduced with use of small-scale isotropic smoothing in directions transverse to the 1-D operator axis. Analytic expressions are obtained for directional derivatives of the smoothed, edge preserved image, and the expressions are used to compose a 4-D operator that detects edges as zero-crossings in the second derivative in the direction of the image intensity gradient. Additional improvement in segmentation is anticipated with use of multiscale transversely isotropic smoothing and a novel interpolation method that improves the behavior of the directional derivatives. The interpolation method is demonstrated on a simulated 1-D edge and incorporation of the method into the 4-D algorithm is described.
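
    Reduced to one dimension, the underlying differential edge-detection idea can be sketched as follows (a minimal illustration of Gaussian scale-space edge detection, not the authors' 4-D operator; the signal, scale and threshold are ours):

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      # Detect an edge as a zero-crossing of the second derivative of a
      # Gaussian-smoothed signal, gated by the gradient magnitude.
      signal = np.r_[np.zeros(50), np.ones(50)] + 0.05 * np.random.randn(100)
      sigma = 3.0
      d1 = gaussian_filter1d(signal, sigma, order=1)   # smoothed 1st derivative
      d2 = gaussian_filter1d(signal, sigma, order=2)   # smoothed 2nd derivative

      zc = np.diff(np.sign(d2)) != 0                   # zero-crossings of d2
      strong = np.abs(d1[:-1]) > 0.5 * np.abs(d1).max()
      print(np.where(zc & strong)[0])                  # indices near the step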

  20. Urban Multisensory Laboratory, an Approach to Model Urban Space Human Perception

    Science.gov (United States)

    González, T.; Sol, D.; Saenz, J.; Clavijo, D.; García, H.

    2017-09-01

    An urban sensory lab (USL, or LUS by its Spanish acronym) is a new and avant-garde approach for studying and analyzing a city. The construction of this approach allows the development of new methodologies to identify the emotional response of public space users. The laboratory combines qualitative analysis proposed by urbanists and quantitative measures managed by data analysis applications. USL is a new approach to go beyond the borders of urban knowledge. The design thinking strategy allows us to implement methods to understand the results provided by our technique. In this first approach, the interpretation is made by hand. However, our goal is to combine design thinking and machine learning in order to analyze the qualitative and quantitative data automatically. Currently, the results are being used by students of the Urbanism and Architecture courses to gain a better understanding of public spaces in Puebla, Mexico, and their interaction with people.

  1. Comparing Laser Interferometry and Atom Interferometry Approaches to Space-Based Gravitational-Wave Measurement

    Science.gov (United States)

    Baker, John; Thorpe, Ira

    2012-01-01

    Thoroughly studied classic space-based gravitational-wave mission concepts such as the Laser Interferometer Space Antenna (LISA) are based on laser-interferometry techniques. Ongoing developments in atom-interferometry techniques have spurred recently proposed alternative mission concepts. These different approaches can be understood on a common footing. We present a comparative analysis of how each type of instrument responds to some of the noise sources which may limit gravitational-wave mission concepts. Sensitivity to laser frequency instability is essentially the same for either approach. Spacecraft acceleration reference stability sensitivities are different, allowing smaller spacecraft separations in the atom interferometry approach, but acceleration noise requirements are nonetheless similar. Each approach has distinct additional measurement noise issues.

  2. A Multi-Model Stereo Similarity Function Based on Monogenic Signal Analysis in Poisson Scale Space

    Directory of Open Access Journals (Sweden)

    Jinjun Li

    2011-01-01

    A stereo similarity function based on local multi-model monogenic image feature descriptors (LMFD) is proposed to match interest points and estimate the disparity map for stereo images. Local multi-model monogenic image features include the local orientation and instantaneous phase of the gray monogenic signal, the local color phase of the color monogenic signal, and local mean colors in the multiscale color monogenic signal framework. The gray monogenic signal, which is the extension of the analytic signal to gray-level images using the Dirac operator and Laplace equation, consists of the local amplitude, local orientation, and instantaneous phase of a 2D image signal. The color monogenic signal is the extension of the monogenic signal to color images based on Clifford algebras. The local color phase can be estimated by computing the geometric product between the color monogenic signal and a unit reference vector in RGB color space. Experimental results on synthetic and natural stereo images show the performance of the proposed approach.
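
    The gray monogenic signal can be sketched in a few lines via the frequency-domain Riesz transform, under the usual definitions (a hedged sketch; band-pass pre-filtering and the Clifford-algebra color extension are omitted):

      import numpy as np

      def monogenic(img):
          """Gray monogenic signal via the Riesz transform in Fourier space."""
          F = np.fft.fft2(img)
          u = np.fft.fftfreq(img.shape[0])[:, None]
          v = np.fft.fftfreq(img.shape[1])[None, :]
          q = np.sqrt(u**2 + v**2)
          q[0, 0] = 1.0                                 # avoid divide-by-zero at DC
          r1 = np.real(np.fft.ifft2(-1j * u / q * F))   # first Riesz component
          r2 = np.real(np.fft.ifft2(-1j * v / q * F))   # second Riesz component
          amplitude = np.sqrt(img**2 + r1**2 + r2**2)   # local amplitude
          phase = np.arctan2(np.hypot(r1, r2), img)     # instantaneous phase
          orientation = np.arctan2(r2, r1)              # local orientation
          return amplitude, phase, orientation

      amp, ph, ori = monogenic(np.random.rand(64, 64))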

  3. Development of a large scale Chimera grid system for the Space Shuttle Launch Vehicle

    Science.gov (United States)

    Pearce, Daniel G.; Stanley, Scott A.; Martin, Fred W., Jr.; Gomez, Ray J.; Le Beau, Gerald J.; Buning, Pieter G.; Chan, William M.; Chiu, Ing-Tsau; Wulf, Armin; Akdag, Vedat

    1993-01-01

    The application of CFD techniques to large problems has dictated the need for large team efforts. This paper offers an opportunity to examine the motivations, goals, needs, and problems, as well as the methods, tools, and constraints that defined NASA's development of a 111-grid, 16-million-point grid system model for the Space Shuttle Launch Vehicle. The Chimera approach used for domain decomposition encouraged separation of the complex geometry into several major components, each of which was modeled by an autonomous team. ICEM-CFD, a CAD-based grid generation package, simplified the geometry and grid topology definition by providing mature CAD tools and patch-independent meshing. The resulting grid system has, on average, a four-inch resolution along the surface.

  4. Effective modelling of percolation at the landscape scale using data-based approaches

    Science.gov (United States)

    Selle, Benny; Lischeid, Gunnar; Huwe, Bernd

    2008-06-01

    Process-based models have been extensively applied to assess the impact of land-use change on water quantity and quality at landscape scales. However, the routine application of those models suffers from large computational efforts, lack of transparency and the requirement of many input parameters. Data-based models such as Feed-Forward Multilayer Perceptrons (MLP) and Classification and Regression Trees (CART) may be used as effective models, i.e. simple approximations of complex process-based models. These data-based approaches can subsequently be applied for scenario analysis and as a transparent management tool, provided climatic boundary conditions and the basic model assumptions of the process-based models do not change dramatically. In this study, we apply MLP, CART and Multiple Linear Regression (LR) to model the spatially distributed and spatially aggregated percolation in soils using weather, groundwater and soil data. The percolation data are obtained via numerical experiments with Hydrus1D. Thus, the complex process-based model is approximated using simpler data-based approaches. The MLP model explains most of the percolation variance in time and space without using any soil information. This reflects the effective dimensionality of the process-based model and suggests that percolation in the study area may be modelled much more simply than with Hydrus1D. The CART model shows that soil properties play a negligible role for percolation under wet climatic conditions. However, they become more important if the conditions turn drier. The LR method does not yield satisfactory predictions for the spatially distributed percolation; however, the spatially aggregated percolation is well approximated. This may indicate that the soils behave more simply (i.e. more linearly) when percolation dynamics are upscaled.
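
    The surrogate idea, in miniature: fit an MLP and a CART to input-output pairs generated by a process model, then use the cheap fits in place of the expensive model. The sketch below uses scikit-learn on a synthetic stand-in for the Hydrus1D output (the predictors, coefficients and seeds are hypothetical):

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.tree import DecisionTreeRegressor
      from sklearn.model_selection import train_test_split

      # Synthetic stand-in for process-model output: percolation as an
      # unknown function of weather/groundwater predictors.
      rng = np.random.default_rng(0)
      X = rng.random((500, 3))          # e.g. rainfall, ET, water table depth
      y = (2 * X[:, 0] - X[:, 1] + 0.3 * np.sin(6 * X[:, 2])
           + 0.05 * rng.standard_normal(500))

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000).fit(X_tr, y_tr)
      cart = DecisionTreeRegressor(max_depth=5).fit(X_tr, y_tr)
      print(mlp.score(X_te, y_te), cart.score(X_te, y_te))   # R^2 of surrogates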

  5. State-space approach for evaluating the soil-plant-atmosphere system

    International Nuclear Information System (INIS)

    Timm, L.C.; Reichardt, K.; Cassaro, F.A.M.; Tominaga, T.T.; Bacchi, O.O.S.; Oliveira, J.C.M.; Dourado-Neto, D.

    2004-01-01

    Using as examples one sugarcane and one forage oat experiment, both carried out in the State of Sao Paulo, Brazil, this chapter presents recent state-space approaches used to evaluate the relation between soil and plant properties. A contrast is made between classical statistics methodologies that do not take into account the sampling position coordinates, and the more recently used methodologies which include the position coordinates and allow a better interpretation of the field-sampled data. Classical concepts are first introduced, followed by spatially referenced methodologies like the autocorrelation function, the cross-correlation function, and the state-space approach. Two variations of the state-space approach are given: one emphasizes the evolution of the state system, while the other, based on the Bayesian formulation, emphasizes the evolution of the estimated observations. It is concluded that these state-space analyses using dynamic regression models improve data analyses and are therefore recommended for analyzing time and space data series related to the performance of a given soil-plant-atmosphere system. (author)
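
    The spatially referenced diagnostics mentioned above can be sketched directly; the snippet below estimates sample autocorrelation and cross-correlation along a transect (the series are synthetic stand-ins, and the full state-space model would normally be fitted with dedicated software):

      import numpy as np

      def autocorr(z, lag):
          """Sample autocorrelation of a spatial series at a given lag."""
          z = (z - z.mean()) / z.std()
          return np.mean(z[:-lag] * z[lag:]) if lag else 1.0

      def crosscorr(z, w, lag):
          """Sample cross-correlation between two co-located series."""
          z = (z - z.mean()) / z.std()
          w = (w - w.mean()) / w.std()
          return np.mean(z[:len(z) - lag] * w[lag:])

      soil = np.cumsum(np.random.randn(100))      # hypothetical soil transect
      plant = 0.7 * soil + np.random.randn(100)   # hypothetical plant response
      print([round(autocorr(soil, k), 2) for k in range(4)])
      print(round(crosscorr(soil, plant, 0), 2))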

  6. National-Scale Hydrologic Classification & Agricultural Decision Support: A Multi-Scale Approach

    Science.gov (United States)

    Coopersmith, E. J.; Minsker, B.; Sivapalan, M.

    2012-12-01

    Classification frameworks can help organize catchments exhibiting similarity in hydrologic and climatic terms. Focusing this assessment of "similarity" upon specific hydrologic signatures, in this case the annual regime curve, can facilitate the prediction of hydrologic responses. Agricultural decision-support over a diverse set of catchments throughout the United States depends upon successful modeling of the wetting/drying process without necessitating separate model calibration at every site where such insights are required. To this end, a holistic classification framework is developed to describe both climatic variability (humid vs. arid, winter rainfall vs. summer rainfall) and the draining, storing, and filtering behavior of any catchment, including ungauged or minimally gauged basins. At the national scale, over 400 catchments from the MOPEX database are analyzed to construct the classification system, with over 77% of these catchments ultimately falling into only six clusters. At individual locations, soil moisture models, receiving only rainfall as input, produce correlation values in excess of 0.9 with respect to observed soil moisture measurements. By deploying physical models for predicting soil moisture exclusively from precipitation that are calibrated at gauged locations, overlaying machine learning techniques to improve these estimates, then generalizing the calibration parameters for catchments in a given class, agronomic decision-support becomes available where it is needed rather than only where sensing data are located. [Figure: classifications of 428 U.S. catchments on the basis of hydrologic regime data, Coopersmith et al., 2012.]
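
    The clustering step behind such a classification might look like the following scikit-learn sketch, which groups synthetic annual regime curves into six classes (the six-cluster count comes from the abstract; the curves and parameters here are made up):

      import numpy as np
      from sklearn.cluster import KMeans

      # Hypothetical annual regime curves: 12 monthly values per catchment,
      # with randomly shifted seasonal peaks.
      rng = np.random.default_rng(1)
      months = np.arange(12)
      curves = np.array([np.roll(np.sin(np.pi * months / 12), rng.integers(12))
                         + 0.1 * rng.standard_normal(12) for _ in range(400)])

      labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(curves)
      print(np.bincount(labels))   # catchments per hydrologic class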

  7. Approaching control for tethered space robot based on disturbance observer using super twisting law

    Science.gov (United States)

    Hu, Yongxin; Huang, Panfeng; Meng, Zhongjie; Wang, Dongke; Lu, Yingbo

    2018-05-01

    Approaching control is a key task for the tethered space robot (TSR) in performing space debris removal missions. However, uncertainties in the TSR, such as changes in model parameters, have an important effect on the approaching mission. Considering the space tether and the attitude of the gripper, the dynamic model of the TSR is derived using the Lagrange method. Then a disturbance observer is designed to estimate the uncertainty, based on the super-twisting (STW) control method. Using the disturbance observer, a controller is designed, and its performance is compared with that of a dynamic-inverse controller; the comparison shows that the proposed controller performs better. Numerical simulation validates the feasibility of the proposed controller for position and attitude tracking of the TSR.
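
    For orientation, the super-twisting family the observer builds on can be sketched generically as follows (illustrative first-order dynamics and gains, not the paper's TSR model or observer):

      import numpy as np

      # Generic super-twisting law on a sliding variable s.
      def super_twisting(s, w, k1, k2, dt):
          u = -k1 * np.sqrt(abs(s)) * np.sign(s) + w   # continuous term
          w = w - k2 * np.sign(s) * dt                 # integral twisting term
          return u, w

      # Toy plant: e_dot = d + u with an unknown constant disturbance d.
      e, w, d, dt = 1.0, 0.0, 0.3, 0.01
      for _ in range(2000):
          u, w = super_twisting(e, w, k1=1.5, k2=1.1, dt=dt)
          e += (d + u) * dt
      print(round(e, 2), round(w, 2))   # e near 0, w near -d: disturbance rejected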

  8. An Effective Approach Control Scheme for the Tethered Space Robot System

    Directory of Open Access Journals (Sweden)

    Zhongjie Meng

    2014-09-01

    The tethered space robot system (TSR), which is composed of a platform, a gripper and a space tether, has great potential in future space missions. Given the relative motion among the platform, tether, gripper and the target, an integrated approach model is derived. Then, a novel coordinated approach control scheme is presented, in which the tether tension, thrusters and the reaction wheel are all utilized. It contains the open-loop trajectory optimization, the feedback trajectory control and attitude control. The numerical simulation results show that the rendezvous between the TSR and the target can be realized by the proposed coordinated control scheme, and the propellant consumption is efficiently reduced. Moreover, the control scheme performs well in the presence of perturbations of the initial state, actuator characteristics and sensor errors.

  9. Activity markers and household space in Swahili urban contexts: An integrated geoarchaeological approach

    DEFF Research Database (Denmark)

    Wynne-Jones, Stephanie; Sulas, Federica

    This paper draws from recent work at a Swahili urban site to illustrate the potential and challenges of an integrated geoarchaeological approach to the study of household space. The site of Songo Mnara (14th–16th c. AD) thrived as a Swahili stonetown off the coast of Tanzania. Here, our work has concentrated...

  10. Learning in Earth and Space Science: A Review of Conceptual Change Instructional Approaches

    Science.gov (United States)

    Mills, Reece; Tomas, Louisa; Lewthwaite, Brian

    2016-01-01

    In response to calls for research into effective instruction in the Earth and space sciences, and to identify directions for future research, this systematic review of the literature explores research into instructional approaches designed to facilitate conceptual change. In total, 52 studies were identified and analyzed. Analysis focused on the…

  11. Concept of Draft International Standard for a Unified Approach to Space Program Quality Assurance

    Science.gov (United States)

    Stryzhak, Y.; Vasilina, V.; Kurbatov, V.

    2002-01-01

    In the absence of a unified approach to guaranteeing space project and product quality assurance, implementation of many international space programs has become a challenge. Globalization of the aerospace industry, and the participation of various international ventures with diverse quality assurance requirements in big international space programs, urgently call for the generation of unified international standards in this field. To ensure successful fulfillment of space missions, aerospace companies should design and produce reliable and safe products with properties complying with or exceeding the user's (or customer's) requirements. The quality of products designed or produced by subcontractors (or other suppliers) should also be in compliance with the main user's (customer's) requirements. Implementation of this involved set of unified requirements will be made possible by creating and approving a system (series) of international standards under the generic title Space Product Quality Assurance, based on a system consensus principle. Conceptual features of the baseline standard in this system (series) should comprise: - Procedures for ISO 9000, CEN and ECSS requirements adaptation and introduction into space product creation, design, manufacture, testing and operation; - Procedures for quality assurance at the initial (design) phases of space programs, with a decision on the end product made based on the principle of independence; - Procedures to arrange incoming inspection of products delivered by subcontractors (including testing, audit of supplier's procedures, review of supplier's documentation), and space product certification; - Procedures to identify materials and primary products applied; - Procedures for quality system audit at the component part, primary product and materials supplier facilities; - Unified procedures to form a list of basic performances to be under configuration management; - Unified procedures to form a list of critical space product components, and unified

  13. State space model extraction of thermohydraulic systems – Part I: A linear graph approach

    International Nuclear Information System (INIS)

    Uren, K.R.; Schoor, G. van

    2013-01-01

    Thermohydraulic simulation codes are increasingly making use of graphical design interfaces. The user can quickly and easily design a thermohydraulic system by placing symbols on the screen resembling system components. These components can then be connected to form a system representation. Such system models may then be used to obtain detailed simulations of the physical system. Usually these simulation models are too complex and not ideal for control system design. Therefore, a need exists for automated techniques to extract lumped-parameter models useful for control system design. The goal of this first paper, in a two-part series, is to propose a method that utilises a graphical representation of a thermohydraulic system, and a lumped-parameter modelling approach, to extract state space models. In this methodology each physical domain of the thermohydraulic system is represented by a linear graph. These linear graphs capture the interaction between all components within and across energy domains – hydraulic, thermal and mechanical. These linear graphs are analysed using a graph-theoretic approach to derive reduced-order state space models. These models capture the dominant dynamics of the thermohydraulic system and are ideal for control system design purposes. The proposed state space model extraction method is demonstrated by considering a U-tube system. A non-linear state space model is extracted representing both the hydraulic and thermal domain dynamics of the system. The simulated state space model is compared with a Flownex ® model of the U-tube. Flownex ® is a validated systems thermal-fluid simulation software package. - Highlights: • A state space model extraction methodology based on graph-theoretic concepts. • An energy-based approach to consider multi-domain systems in a common framework. • Allow extraction of transparent (white-box) state space models automatically. • Reduced order models containing only independent state
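
    In the same lumped-parameter spirit (though not the paper's graph-theoretic extraction), a minimal hydraulic example shows what an extracted state space model looks like: two connected tanks reduced to matrices A and B, with the levels as states; the areas and valve coefficient below are invented:

      import numpy as np

      # Two connected tanks with areas A1, A2 and a linearized valve k.
      # States: levels h1, h2; input: inflow q_in to tank 1.
      A1, A2, k = 2.0, 1.0, 0.5
      A = np.array([[-k / A1,  k / A1],
                    [ k / A2, -k / A2]])     # dh/dt = A @ h + B * q_in
      B = np.array([[1.0 / A1], [0.0]])

      # Forward-Euler simulation of a step inflow q_in = 0.2.
      h, dt = np.zeros((2, 1)), 0.01
      for _ in range(1000):
          h = h + dt * (A @ h + B * 0.2)
      print(h.ravel())   # h1 - h2 approaches q_in/k = 0.4 as both levels rise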

  14. Limitations Of The Current State Space Modelling Approach In Multistage Machining Processes Due To Operation Variations

    Science.gov (United States)

    Abellán-Nebot, J. V.; Liu, J.; Romero, F.

    2009-11-01

    The State Space modelling approach has been recently proposed as an engineering-driven technique for part quality prediction in Multistage Machining Processes (MMP). Current State Space models incorporate fixture and datum variations in the multi-stage variation propagation, without explicitly considering common operation variations such as machine-tool thermal distortions, cutting-tool wear, cutting-tool deflections, etc. This paper shows the limitations of the current State Space model through an experimental case study where the effects of spindle thermal expansion, cutting-tool flank wear and locator errors are introduced. The paper also discusses the extension of the current State Space model to include operation variations and its potential benefits.
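
    For reference, the State Space model referred to here is commonly written in the variation-propagation literature as follows (notation differs between papers, and the stage-k matrices are process-specific):

      x_k = A_k x_{k-1} + B_k u_k + w_k , \qquad
      y_k = C_k x_k + v_k ,

    where x_k is the part deviation state after stage k, u_k collects the fixture and datum errors introduced at stage k, y_k is the measured deviation, and w_k, v_k absorb unmodelled variations (such as the operation variations discussed above) and measurement noise.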

  15. Resilience Design Patterns: A Structured Approach to Resilience at Extreme Scale

    International Nuclear Information System (INIS)

    Engelmann, Christian; Hukerikar, Saurabh

    2017-01-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space remains fragmented. There are no formal methods and metrics to integrate the various HPC resilience techniques into composite solutions, nor are there methods to holistically evaluate the adequacy and efficacy of such solutions in terms of their protection coverage, and their performance & power efficiency characteristics. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this paper, we develop a structured approach to the design, evaluation and optimization of HPC resilience using the concept of design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the problems caused by various types of faults, errors and failures in HPC systems and the techniques used to deal with these events. Each well-known solution that addresses a specific HPC resilience challenge is described in the form of a pattern. We develop a complete catalog of such resilience design patterns, which may be used by system architects, system software and tools developers, application programmers, as well as users and operators as essential building blocks when designing and deploying resilience solutions. We also develop a design framework that enhances a designer's understanding of the opportunities for integrating multiple patterns across layers of the system stack and the important constraints during implementation of the individual patterns. It is also useful for defining mechanisms and interfaces to coordinate flexible fault management across

  16. Swamp Works: A New Approach to Develop Space Mining and Resource Extraction Technologies at the National Aeronautics and Space Administration (NASA) Kennedy Space Center (KSC)

    Science.gov (United States)

    Mueller, R. P.; Sibille, L.; Leucht, K.; Smith, J. D.; Townsend, I. I.; Nick, A. J.; Schuler, J. M.

    2015-01-01

    environment and methodology, with associated laboratories, that uses lean development methods and creativity-enhancing processes to invent and develop new solutions for space exploration. This paper will discuss the Swamp Works approach to developing space mining and resource extraction systems and the vision of space development it serves. The ultimate goal of the Swamp Works is to expand human civilization into the solar system via the use of local resources. By mining and using local resources in situ, it is conceivable that one day the logistics supply train from Earth can be eliminated, enabling the Earth independence of space-based communities.

  17. Tuneable resolution as a systems biology approach for multi-scale, multi-compartment computational models.

    Science.gov (United States)

    Kirschner, Denise E; Hunt, C Anthony; Marino, Simeone; Fallahi-Sichani, Mohammad; Linderman, Jennifer J

    2014-01-01

    The use of multi-scale mathematical and computational models to study complex biological processes is becoming increasingly productive. Multi-scale models span a range of spatial and/or temporal scales and can encompass multi-compartment (e.g., multi-organ) models. Modeling advances are enabling virtual experiments to explore and answer questions that are problematic to address in the wet-lab. Wet-lab experimental technologies now allow scientists to observe, measure, record, and analyze experiments focusing on different system aspects at a variety of biological scales. We need the technical ability to mirror that same flexibility in virtual experiments using multi-scale models. Here we present a new approach, tuneable resolution, which can begin providing that flexibility. Tuneable resolution involves fine- or coarse-graining existing multi-scale models at the user's discretion, allowing adjustment of the level of resolution specific to a question, an experiment, or a scale of interest. Tuneable resolution expands options for revising and validating mechanistic multi-scale models, can extend the longevity of multi-scale models, and may increase computational efficiency. The tuneable resolution approach can be applied to many model types, including differential equation, agent-based, and hybrid models. We demonstrate our tuneable resolution ideas with examples relevant to infectious disease modeling, illustrating key principles at work.

  18. A state space approach for the eigenvalue problem of marine risers

    KAUST Repository

    Alfosail, Feras

    2017-10-05

    A numerical state-space approach is proposed to examine the natural frequencies and critical buckling limits of marine risers. A large axial tension in the riser model causes numerical limitations. These limitations are overcome by using the modified Gram–Schmidt orthonormalization process as an intermediate step during the numerical integration process with the fourth-order Runge–Kutta scheme. The obtained results are validated against those obtained with other numerical methods, such as the finite-element, Galerkin, and power-series methods, and are found to be in good agreement. The state-space approach is shown to be computationally more efficient than the other methods. Also, we investigate the effect of a high applied tension, a high apparent weight, and higher-order modes on the accuracy of the numerical scheme. We demonstrate that, by applying the orthonormalization process, the stability and convergence of the approach are significantly improved.
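
    The orthonormalization step can be sketched generically; the function below applies modified Gram-Schmidt to a set of solution vectors, as one might between integration steps (a generic sketch, not the authors' implementation):

      import numpy as np

      def modified_gram_schmidt(V):
          """Orthonormalize the columns of V, column by column."""
          Q = V.astype(float).copy()
          for j in range(Q.shape[1]):
              Q[:, j] /= np.linalg.norm(Q[:, j])
              for k in range(j + 1, Q.shape[1]):   # remove the j-th direction
                  Q[:, k] -= (Q[:, j] @ Q[:, k]) * Q[:, j]
          return Q

      V = np.random.rand(6, 3)                     # e.g. homogeneous solutions
      Q = modified_gram_schmidt(V)
      print(np.round(Q.T @ Q, 10))                 # identity: columns orthonormal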

  19. A Declarative Design Approach to Modeling Traditional and Non-Traditional Space Systems

    Science.gov (United States)

    Hoag, Lucy M.

    The space system design process is known to be laborious, complex, and computationally demanding. It is highly multi-disciplinary, involving several interdependent subsystems that must be both highly optimized and reliable due to the high cost of launch. Satellites must also be capable of operating in harsh and unpredictable environments, so integrating high-fidelity analysis is important. To address each of these concerns, a holistic design approach is necessary. However, while the sophistication of space systems has evolved significantly in the last 60 years, improvements in the design process have been comparatively stagnant. Space systems continue to be designed using a procedural, subsystem-by-subsystem approach. This method is inadequate since it generally requires extensive iteration and relies on limited or heuristic-based search, which can be slow, labor-intensive, and inaccurate. The use of a declarative design approach can potentially address these inadequacies. In the declarative programming style, the focus of a problem is placed on what the objective is, and not necessarily how it should be achieved. In the context of design, this entails knowledge expressed as a declaration of statements that are true about the desired artifact instead of explicit instructions on how to implement it. A well-known technique is constraint-based reasoning, where a design problem is represented as a network of rules and constraints that are reasoned across by a solver to dynamically discover the optimal candidate(s). This enables implicit instantiation of the tradespace and allows for automatic generation of all feasible design candidates. As such, this approach also appears to be well-suited to modeling adaptable space systems, which generally have large tradespaces and possess configurations that are not well-known a priori. This research applied a declarative design approach to holistic satellite design and to tradespace exploration for adaptable space systems.

  20. A new approach to designing reduced scale thermal-hydraulic experiments

    International Nuclear Information System (INIS)

    Lapa, Celso M.F.; Sampaio, Paulo A.B. de; Pereira, Claudio M.N.A.

    2004-01-01

    Reduced scale experiments are often employed in engineering because they are much cheaper than real scale testing. Unfortunately, though, it is difficult to design a thermal-hydraulic circuit or equipment in reduced scale capable of reproducing, both accurately and simultaneously, all the physical phenomena that occur at real scale and operating conditions. This paper presents a methodology for designing thermal-hydraulic experiments in reduced scale based on setting up a constrained optimization problem that is solved using genetic algorithms (GAs). In order to demonstrate the application of the proposed methodology, we performed some investigations in the design of a heater aimed at simulating the transport of heat and momentum in the core of a pressurized water reactor (PWR) at 100% of nominal power and non-accident operating conditions. The results obtained show that the proposed methodology is a promising approach for designing reduced scale experiments.
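
    The constrained-matching idea can be illustrated with a toy version that uses SciPy's differential evolution in place of the authors' GA: choose a diameter and velocity for the mock-up so that two dimensionless groups hit illustrative targets (the targets and bounds below are invented, not the paper's PWR heater constraints):

      import numpy as np
      from scipy.optimize import differential_evolution

      # Match target Reynolds and Froude numbers with a reduced-scale design.
      nu, g = 1.0e-6, 9.81               # water kinematic viscosity, gravity
      Re_t, Fr_t = 5.0e5, 2.0            # illustrative similarity targets

      def mismatch(p):
          D, U = p                       # diameter [m], velocity [m/s]
          Re, Fr = U * D / nu, U / np.sqrt(g * D)
          return np.log(Re / Re_t) ** 2 + np.log(Fr / Fr_t) ** 2

      best = differential_evolution(mismatch,
                                    bounds=[(0.01, 0.5), (0.1, 20.0)], seed=0)
      print(best.x, best.fun)            # (D, U) of the reduced-scale design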

  1. Collaborative Approaches in Developing Environmental and Safety Management Systems for Commercial Space Transportation

    Science.gov (United States)

    Zee, Stacey; Murray, D.

    2009-01-01

    The Federal Aviation Administration (FAA), Office of Commercial Space Transportation (AST) licenses and permits U.S. commercial space launch and reentry activities, and licenses the operation of non-federal launch and reentry sites. AST's mission is to ensure the protection of the public, property, and the national security and foreign policy interests of the United States during commercial space transportation activities and to encourage, facilitate, and promote U.S. commercial space transportation. AST faces the unique challenge of ensuring the protection of public health and safety while facilitating and promoting U.S. commercial space transportation. AST has developed an Environmental Management System (EMS) and a Safety Management System (SMS) to help meet its mission. Although the EMS and SMS were developed independently, the systems share similar elements. Both systems follow a Plan-Do-Check-Act model in identifying potential environmental aspects or public safety hazards, assessing significance in terms of severity and likelihood of occurrence, developing approaches to reduce risk, and verifying that the risk is reduced. This paper will describe the similarities between AST's EMS and SMS elements and how AST is building a collaborative approach to environmental and safety management to reduce impacts on the environment and risks to the public.

  2. Statistical inference and visualization in scale-space for spatially dependent images

    KAUST Repository

    Vaughan, Amy

    2012-03-01

    SiZer (SIgnificant ZERo crossing of the derivatives) is a graphical scale-space visualization tool that allows for statistical inferences. In this paper we develop a spatial SiZer for finding significant features and conducting goodness-of-fit tests for spatially dependent images. The spatial SiZer utilizes a family of kernel estimates of the image and provides not only exploratory data analysis but also statistical inference with spatial correlation taken into account. It is also capable of comparing the observed image with a specific null model being tested by adjusting the statistical inference using an assumed covariance structure. Pixel locations having statistically significant differences between the image and a given null model are highlighted by arrows. The spatial SiZer is compared with the existing independent SiZer via the analysis of simulated data with and without signal on both planar and spherical domains. We apply the spatial SiZer method to the decadal temperature change over some regions of the Earth.
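
    A bare-bones one-dimensional SiZer, assuming independent noise, conveys the basic mechanism; the paper's contribution is precisely the adjustment of the variance term below for spatial correlation (the data and bandwidths are illustrative):

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      # For each bandwidth h, flag where the kernel-smoothed slope is
      # significantly non-zero under i.i.d. Gaussian noise.
      sigma_noise = 0.3
      x = np.linspace(0.0, 1.0, 200)
      y = np.sin(2 * np.pi * x) + sigma_noise * np.random.randn(200)

      for h in (2, 5, 10, 20):                      # bandwidths in grid units
          slope = gaussian_filter1d(y, h, order=1)  # kernel slope estimate
          impulse = np.zeros(201); impulse[100] = 1.0
          weights = gaussian_filter1d(impulse, h, order=1)
          se = sigma_noise * np.sqrt(np.sum(weights ** 2))  # std of the slope
          significant = np.abs(slope) > 1.96 * se
          print(h, int(significant.sum()), "grid points with significant slope")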

  3. Papaya Tree Detection with UAV Images Using a GPU-Accelerated Scale-Space Filtering Method

    Directory of Open Access Journals (Sweden)

    Hao Jiang

    2017-07-01

    The use of unmanned aerial vehicles (UAVs) can allow individual tree detection for forest inventories in a cost-effective way. The scale-space filtering (SSF) algorithm is commonly used and has the capability of detecting trees of different crown sizes. In this study, we made two improvements with regard to the existing method and implementations. First, we incorporated SSF with a Lab color transformation to reduce the over-detection problems associated with the original luminance image. Second, we ported four of the most time-consuming processes to the graphics processing unit (GPU) to improve computational efficiency. The proposed method was implemented using PyCUDA, which enabled access to NVIDIA's compute unified device architecture (CUDA) through high-level scripting in the Python language. Our experiments were conducted using two images captured by the DJI Phantom 3 Professional and a recent NVIDIA GTX 1080 GPU. The resulting accuracy was high, with an F-measure larger than 0.94. The speedup achieved by our parallel implementation was 44.77 and 28.54 for the first and second test image, respectively. For each 4000 × 3000 image, the total runtime was less than 1 s, which was sufficient for real-time performance and interactive application.
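
    The crown-detection core of scale-space filtering resembles Laplacian-of-Gaussian blob detection across scales; a CPU sketch with scikit-image is shown below (the Lab transform and GPU port are the paper's additions, and the synthetic image stands in for a canopy):

      import numpy as np
      from skimage.feature import blob_log

      # Bright disks of different radii stand in for tree crowns.
      img = np.zeros((200, 200))
      yy, xx = np.mgrid[:200, :200]
      for cy, cx, r in [(50, 60, 8), (120, 150, 15), (160, 40, 5)]:
          img[(yy - cy) ** 2 + (xx - cx) ** 2 < r ** 2] = 1.0

      blobs = blob_log(img, min_sigma=2, max_sigma=20, num_sigma=10,
                       threshold=0.1)
      print(blobs)   # rows: (row, col, sigma); crown radius ~ sigma * sqrt(2)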

  4. Effective use of integrated hydrological models in basin-scale water resources management: surrogate modeling approaches

    Science.gov (United States)

    Zheng, Y.; Wu, B.; Wu, X.

    2015-12-01

    Integrated hydrological models (IHMs) consider surface water and subsurface water as a unified system, and have been widely adopted in basin-scale water resources studies. However, due to IHMs' mathematical complexity and high computational cost, it is difficult to implement them in an iterative model evaluation process (e.g., Monte Carlo Simulation, simulation-optimization analysis, etc.), which diminishes their applicability for supporting decision-making in real-world situations. Our studies investigated how to effectively use complex IHMs to address real-world water issues via surrogate modeling. Three surrogate modeling approaches were considered, including 1) DYCORS (DYnamic COordinate search using Response Surface models), a well-established response surface-based optimization algorithm; 2) SOIM (Surrogate-based Optimization for Integrated surface water-groundwater Modeling), a response surface-based optimization algorithm that we developed specifically for IHMs; and 3) Probabilistic Collocation Method (PCM), a stochastic response surface approach. Our investigation was based on a modeling case study in the Heihe River Basin (HRB), China's second largest endorheic river basin. The GSFLOW (Coupled Ground-Water and Surface-Water Flow Model) model was employed. Two decision problems were discussed. One is to optimize, both in time and in space, the conjunctive use of surface water and groundwater for agricultural irrigation in the middle HRB region; and the other is to cost-effectively collect hydrological data based on a data-worth evaluation. Overall, our study results highlight the value of incorporating an IHM in making decisions of water resources management and hydrological data collection. An IHM like GSFLOW can provide great flexibility to formulating proper objective functions and constraints for various optimization problems. On the other hand, it has been demonstrated that surrogate modeling approaches can pave the path for such incorporation in real

  5. Automated Reconstruction of Building LoDs from Airborne LiDAR Point Clouds Using an Improved Morphological Scale Space

    Directory of Open Access Journals (Sweden)

    Bisheng Yang

    2016-12-01

    Reconstructing building models at different levels of detail (LoDs) from airborne laser scanning point clouds is urgently needed for wide application, as this approach can balance the user's requirements against economic costs. Previous methods reconstruct building LoDs from the finest 3D building models rather than from point clouds, resulting in heavy costs and inflexible adaptivity. Scale space is a sound theory for the multi-scale representation of an object from a coarser level to a finer level. Therefore, this paper proposes a novel method to reconstruct buildings at different LoDs from airborne Light Detection and Ranging (LiDAR) point clouds based on an improved morphological scale space. The proposed method first extracts building candidate regions following the separation of ground and non-ground points. For each building candidate region, the proposed method generates a scale space by iteratively applying the improved morphological reconstruction with increasing scale, and constructs the corresponding topological relationship graphs (TRGs) across scales. Secondly, the proposed method robustly extracts building points by using features based on the TRG. Finally, the proposed method reconstructs each building at different LoDs according to the TRG. The experiments demonstrate that the proposed method robustly extracts buildings with details (e.g., door eaves and roof furniture) and illustrates good performance in distinguishing buildings from vegetation or other objects, while automatically reconstructing building LoDs from the finest building points.
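
    The building block of such a morphological scale space, opening-by-reconstruction with a growing structuring element, can be sketched with scikit-image (an illustrative height map, not LiDAR data):

      import numpy as np
      from skimage.morphology import disk, erosion, reconstruction

      # Opening-by-reconstruction at growing scales removes small, high
      # structures while preserving the contours of what remains.
      z = np.zeros((100, 100))
      z[40:60, 40:60] = 10.0        # large "building"
      z[20:24, 70:74] = 8.0         # small object that vanishes at coarse scale

      scales = []
      for r in (1, 3, 8):
          seed = erosion(z, disk(r))   # marker lies below the original surface
          scales.append(reconstruction(seed, z, method='dilation'))
      print([float(s[22, 72]) for s in scales])   # small object survives only
                                                  # the finest scale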

  6. Space-time trajectories of wind power generation: Parameterized precision matrices under a Gaussian copula approach

    DEFF Research Database (Denmark)

    Tastu, Julija; Pinson, Pierre; Madsen, Henrik

    2015-01-01

    Emphasis is placed on generating space-time trajectories of wind power generation, consisting of paths sampled from high-dimensional joint predictive densities, describing wind power generation at a number of contiguous locations and successive lead times. A modelling approach taking advantage [...] -correlations. Estimation is performed in a maximum likelihood framework. Based on a test case application in Denmark, with spatial dependencies over 15 areas and temporal ones for 43 hourly lead times (hence, for a dimension of n = 645), it is shown that accounting for space-time effects is crucial for generating skilful...
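
    The trajectory-sampling step can be shown in miniature: draw correlated standard normals from a space-time correlation matrix and map them through marginal predictive distributions, which is the Gaussian-copula mechanism (the separable correlation, its parameters and the marginal transform below are all hypothetical, not the paper's parameterized precision matrix):

      import numpy as np
      from scipy.stats import norm

      # Miniature Gaussian-copula trajectory sampler: 3 sites x 5 lead times.
      sites, leads = 3, 5
      idx_s, idx_t = np.meshgrid(np.arange(sites), np.arange(leads),
                                 indexing="ij")
      s, t = idx_s.ravel(), idx_t.ravel()

      # Separable exponential space-time correlation (illustrative ranges).
      C = np.exp(-np.abs(s[:, None] - s[None, :]) / 2.0
                 - np.abs(t[:, None] - t[None, :]) / 4.0)

      L = np.linalg.cholesky(C)
      z = L @ np.random.randn(sites * leads, 100)   # 100 latent trajectories
      u = norm.cdf(z)                               # uniform margins (copula)
      power = u ** 0.5                              # hypothetical marginal
      print(power.shape)                            # inverse-CDF transform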

  7. Innovative Approaches to Space-Based Manufacturing and Rapid Prototyping of Composite Materials

    Science.gov (United States)

    Hill, Charles S.

    2012-01-01

    The ability to deploy large habitable structures, and to construct and service exploration vehicles in low Earth orbit, will be an enabling capability for continued human exploration of the solar system. It is evident that advanced manufacturing methods to fabricate replacement parts and to re-utilize launch vehicle structural mass by converting it to different uses will be necessary to minimize costs and allow flexibility to remote crews engaged in space travel. Recent conceptual developments and the combination of inter-related approaches to low-cost manufacturing of composite materials and structures are described in context, leading to the possibility of on-orbit and space-based manufacturing.

  8. Space-time uncertainty and approaches to D-brane field theory

    International Nuclear Information System (INIS)

    Yoneya, Tamiaki

    2008-01-01

    In connection with the space-time uncertainty principle which gives a simple qualitative characterization of non-local or non-commutative nature of short-distance space-time structure in string theory, the author's recent approaches toward field theories for D-branes are briefly outlined, putting emphasis on some key ideas lying in the background. The final section of the present report is devoted partially to a tribute to Yukawa on the occasion of the centennial of his birth. (author)

  9. THE PRINCIPLES AND METHODS OF INFORMATION AND EDUCATIONAL SPACE SEMANTIC STRUCTURING BASED ON ONTOLOGIC APPROACH REALIZATION

    Directory of Open Access Journals (Sweden)

    Yurij F. Telnov

    2014-01-01

    This article presents principles for the semantic structuring of an information and educational space of knowledge objects and scientific-educational services, using methods of ontological engineering. The novelty of the offered approach is the interfacing of a content ontology with an ontology of scientific and educational services, which allows effective composition of services and knowledge objects according to models of professional competences and the requirements of trainees. Applying these semantic structuring methods enables educational institutions to make integrated use of diverse, distributed scientific and educational content for carrying out scientific research, methodical development and training.

  10. A perturbative approach to the redshift space correlation function: beyond the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Bose, Benjamin; Koyama, Kazuya, E-mail: benjamin.bose@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk [Institute of Cosmology and Gravitation, University of Portsmouth, Burnaby Road, Portsmouth, Hampshire, PO1 3FX (United Kingdom)

    2017-08-01

    We extend our previous redshift space power spectrum code to the redshift space correlation function. Here we focus on the Gaussian Streaming Model (GSM). Again, the code accommodates a wide range of modified gravity and dark energy models. For the non-linear real space correlation function used in the GSM we use the Fourier transform of the RegPT 1-loop matter power spectrum. We compare predictions of the GSM for a Vainshtein screened and Chameleon screened model as well as GR. These predictions are compared to the Fourier transform of the Taruya, Nishimichi and Saito (TNS) redshift space power spectrum model which is fit to N-body data. We find very good agreement between the Fourier transform of the TNS model and the GSM predictions, with ≤ 6% deviations in the first two correlation function multipoles for all models for redshift space separations in 50 Mpc/h ≤ s ≤ 180 Mpc/h. Excellent agreement is found in the differences between the modified gravity and GR multipole predictions for both approaches to the redshift space correlation function, highlighting their matched ability in picking up deviations from GR. We elucidate the timeliness of such non-standard templates at the dawn of stage-IV surveys and discuss necessary preparations and extensions needed for upcoming high quality data.
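
    For reference, the GSM maps the real-space correlation function into redshift space as follows (the standard form in the Gaussian streaming literature; conventions may differ slightly from the paper's):

      1 + \xi^s(s_\perp, s_\parallel)
        = \int \frac{\mathrm{d}y}{\sqrt{2\pi\,\sigma_{12}^2(r,\mu)}}\,
          \bigl[1 + \xi(r)\bigr]
          \exp\!\left\{-\frac{\bigl[s_\parallel - y - \mu\,v_{12}(r)\bigr]^2}
                             {2\,\sigma_{12}^2(r,\mu)}\right\},
      \qquad r^2 = s_\perp^2 + y^2, \quad \mu = y/r,

    where v_{12}(r) is the mean pairwise infall velocity and \sigma_{12}^2(r,\mu) is the pairwise velocity dispersion.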

  12. Astronaut Ross Approaches Assembly Concept for Construction of Erectable Space Structure (ACCESS)

    Science.gov (United States)

    1999-01-01

    The crew assigned to the STS-61B mission included Bryan D. O'Connor, pilot; Brewster H. Shaw, commander; Charles D. Walker, payload specialist; mission specialists Jerry L. Ross, Mary L. Cleave, and Sherwood C. Spring; and Rodolfo Neri Vela, payload specialist. Launched aboard the Space Shuttle Atlantis November 28, 1985 at 7:29:00 pm (EST), the STS-61B mission's primary payload included three communications satellites: MORELOS-B (Mexico); AUSSAT-2 (Australia); and SATCOM KU-2 (RCA Americom). Two experiments were conducted to test assembling erectable structures in space: EASE (Experimental Assembly of Structures in Extravehicular Activity), and ACCESS (Assembly Concept for Construction of Erectable Space Structure). In a joint venture between NASA/Langley Research Center in Hampton, Virginia, and the Marshall Space Flight Center (MSFC), EASE and ACCESS were developed and demonstrated at MSFC's Neutral Buoyancy Simulator (NBS). In this STS-61B onboard photo, astronaut Ross, perched on the Manipulator Foot Restraint (MFR), approaches the erected ACCESS. The primary objective of these experiments was to test the structural assembly concepts for suitability as the framework for larger space structures and to identify ways to improve the productivity of space construction.

  13. Understanding and Mitigating Scale Formation on Membranes Used for Membrane Distillation of Wastewater During Space Travel

    Data.gov (United States)

    National Aeronautics and Space Administration — Water sustains life, and on space missions this resource is a vital commodity that must be safeguarded. For short-term missions it is most reliable and...

  14. Space and place concepts analysis based on semiology approach in residential architecture

    Directory of Open Access Journals (Sweden)

    Mojtaba Parsaee

    2015-12-01

    Space and place are among the fundamental concepts in architecture; they have been widely discussed, and their complexity and importance are well recognized. This research introduces an approach to a better understanding of these architectural concepts based on the theory and method of semiology in linguistics. First, the research investigates the concepts of space and place and explains their characteristics in architecture. It then reviews semiology theory and explores its concepts and ideas. After obtaining the principles and the method of semiology, they are redefined for an architectural system by an adaptive method. Finally, the research offers a conceptual model, called the semiology approach, which treats the architectural system as a system of signs. The approach can be used to decode the content of meanings and forms and to analyze the architectural mechanism in order to obtain its meanings and concepts. On this basis, the residential architecture of the traditional city of Bushehr, Iran, was analyzed as a case study and its concepts were extracted. The results of this research demonstrate the effectiveness of this approach in detecting the structure and identity of an architectural system. Moreover, this approach can be used in processes of sustainable development and can also serve as a basis for the deconstruction of architectural texts. The research methods of this study are qualitative, based on comparative and descriptive analyses.

  15. Heat and mass transfer intensification and shape optimization a multi-scale approach

    CERN Document Server

    2013-01-01

    Is heat and mass transfer intensification a new paradigm of process engineering, or is it just a common and old idea renamed and given the current taste? Where might intensification occur? How is intensification to be achieved? How does the shape optimization of thermal and fluidic devices lead to intensified heat and mass transfers? To answer these questions, Heat & Mass Transfer Intensification and Shape Optimization: A Multi-scale Approach clarifies the definition of intensification by highlighting the potential role of multi-scale structures, the specific interfacial area, the distribution of driving force, the modes of energy supply and the temporal aspects of processes. A reflection on the methods of process intensification or heat and mass transfer enhancement in multi-scale structures is provided, including porous media, heat exchangers, fluid distributors, mixers and reactors. A multi-scale approach to achieve intensification and shape optimization is developed and clearly expla...

  16. A Dynamical System Approach Explaining the Process of Development by Introducing Different Time-scales.

    Science.gov (United States)

    Hashemi Kamangar, Somayeh Sadat; Moradimanesh, Zahra; Mokhtari, Setareh; Bakouie, Fatemeh

    2018-06-11

    A developmental process can be described as changes through time within a complex dynamic system. The self-organized changes and emergent behaviour during development can be described and modeled as a dynamical system. We propose a dynamical system approach to address the main question in human cognitive development, i.e., whether the changes during development happen continuously or in discontinuous stages. Within this approach there is a concept, the size of time-scales, which can be used to address the aforementioned question. We introduce a framework, built on the concept of time-scale, in which "fast" and "slow" are defined by the size of time-scales. According to our suggested model, the overall pattern of development can be seen as one continuous function, with different time-scales in different time intervals.

  17. Classical and statistical mechanics of celestial-scale spinning strings: Rotating space elevators

    Science.gov (United States)

    Golubović, L.; Knudsen, S.

    2009-05-01

    We introduce a novel and unique class of dynamical systems, Rotating Space Elevators (RSE). The RSEs are multiply rotating systems of strings reaching into outer space. Objects sliding along RSE strings do not require internal engines or propulsion to be transported from the Earth's surface into outer space. The RSEs exhibit interesting nonlinear dynamics and statistical physics phenomena.

  18. Computer-aided detection of lung nodules via 3D fast radial transform, scale space representation, and Zernike MIP classification.

    Science.gov (United States)

    Riccardi, Alessandro; Petkov, Todor Sergueev; Ferri, Gianluca; Masotti, Matteo; Campanini, Renato

    2011-04-01

    The authors presented a novel system for automated nodule detection in lung CT exams. The approach is based on (1) a lung tissue segmentation preprocessing step, composed of histogram thresholding, seeded region growing, and mathematical morphology; (2) a filtering step, whose aim is the preliminary detection of candidate nodules (via 3D fast radial filtering) and estimation of their geometrical features (via scale space analysis); and (3) a false positive reduction (FPR) step, comprising a heuristic FPR, which applies thresholds based on geometrical features, and a supervised FPR, which is based on support vector machine classification and is, in turn, enhanced by a feature extraction algorithm based on maximum intensity projection processing and Zernike moments. The system was validated on 154 chest axial CT exams provided by the lung image database consortium public database. The authors obtained correct detection of 71% of nodules marked by all radiologists, with a false positive rate of 6.5 false positives per patient (FP/patient). A higher specificity of 2.5 FP/patient was reached with a sensitivity of 60%. An independent test on the ANODE09 competition database obtained an overall score of 0.310. The system shows a novel approach to the problem of lung nodule detection in CT scans: It relies on filtering techniques, image transforms, and descriptors rather than region growing and nodule segmentation, and the results are comparable to those of other recent systems in the literature and show little dependency on the different types of nodules, which is a good sign of robustness.

  19. El Naschie's ε (∞) space-time and scale relativity theory in the topological dimension D = 4

    International Nuclear Information System (INIS)

    Agop, M.; Murgulet, C.

    2007-01-01

    In the topological dimension D = 4 of the scale relativity theory, the self-structuring of a coherent quantum fluid implies the Golden mean renormalization group. Then, the transfinite set of El Naschie's ε (∞) space-time becomes the background of a new physics (the transfinite physics)

  20. Approaches in the determination of plant nutrient uptake and distribution in space flight conditions

    Science.gov (United States)

    Heyenga, A. G.; Forsman, A.; Stodieck, L. S.; Hoehn, A.; Kliss, M.

    2000-01-01

    The effective growth and development of vascular plants rely on the adequate availability of water and nutrients. Inefficiencies in the initial absorption, transportation, or distribution of these elements are factors that impinge on plant structure and metabolic integrity. The potential effect of space flight and microgravity conditions on the efficiency of these processes is unclear. Limitations in the available quantity of space-grown plant material and the sensitivity of routine analytical techniques have made an evaluation of these processes impractical. However, the recent introduction of new plant cultivation methodologies supporting the application of radionuclide elements and subsequent autoradiography techniques provides a highly sensitive investigative approach amenable to space flight studies. Experiments involving the use of gel-based 'nutrient packs' and the radionuclides calcium-45 and iron-59 were conducted on the Shuttle mission STS-94. Uptake rates of the radionuclides in ground and flight plant material appeared comparable.

  1. Researching on Hawking Effect in a Kerr Space Time via Open Quantum System Approach

    International Nuclear Information System (INIS)

    Liu, Wen-Biao; Liu, Xian-Ming

    2014-01-01

    It has been proposed that Hawking radiation from a Schwarzschild or a de Sitter spacetime can be understood as the manifestation of thermalization phenomena in the framework of an open quantum system. Through examining the time evolution of a detector interacting with vacuum massless scalar fields, it is found that the detector would spontaneously excite with a probability the same as that of thermal radiation at the Hawking temperature. Following these proposals, the Hawking effect in a Kerr space-time is investigated in the framework of open quantum systems. It is shown that the Hawking effect of the Kerr space-time can also be understood as the manifestation of thermalization phenomena via the open quantum system approach. Furthermore, it is found that near-horizon local conformal symmetry plays the key role in the quantum effect of the Kerr space-time.
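
    For reference, the temperature to which the detector thermalizes is the Hawking temperature of the Kerr horizon; in standard notation (a reminder of the textbook formula, not a result specific to this paper):

    ```latex
    % Hawking temperature of a Kerr black hole (G = c = \hbar = k_B = 1),
    % with horizon radii r_\pm = M \pm \sqrt{M^2 - a^2} and spin parameter a = J/M.
    T_H = \frac{\kappa}{2\pi} = \frac{r_+ - r_-}{4\pi \left( r_+^2 + a^2 \right)}
    ```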

  2. Mentoring SFRM: A New Approach to International Space Station Flight Control Training

    Science.gov (United States)

    Huning, Therese; Barshi, Immanuel; Schmidt, Lacey

    2009-01-01

    The Mission Operations Directorate (MOD) of the Johnson Space Center is responsible for providing continuous operations support for the International Space Station (ISS). Operations support requires flight controllers who are skilled in team performance as well as the technical operations of the ISS. Space Flight Resource Management (SFRM), a NASA adapted variant of Crew Resource Management (CRM), is the competency model used in the MOD. ISS flight controller certification has evolved to include a balanced focus on development of SFRM and technical expertise. The latest challenge the MOD faces is how to certify an ISS flight controller (Operator) to a basic level of effectiveness in 1 year. SFRM training uses a two-pronged approach to expediting operator certification: 1) embed SFRM skills training into all Operator technical training and 2) use senior flight controllers as mentors. This paper focuses on how the MOD uses senior flight controllers as mentors to train SFRM skills.

  3. Zeta-function regularization approach to finite temperature effects in Kaluza-Klein space-times

    International Nuclear Information System (INIS)

    Bytsenko, A.A.; Vanzo, L.; Zerbini, S.

    1992-01-01

    In the framework of the heat-kernel approach to zeta-function regularization, the one-loop effective potential at finite temperature is evaluated in this paper for scalar and spinor fields on Kaluza-Klein space-times of the form M^p x M_c^n, where M^p is a p-dimensional Minkowski space-time. In particular, when the compact manifold is M_c^n = H^n/Γ, the Selberg trace formula associated with a discrete torsion-free group Γ of the n-dimensional Lobachevsky space H^n is used. An explicit representation for the thermodynamic potential valid for arbitrary temperature is found. As a result, a complete high-temperature expansion is presented and the roles of zero modes and topological contributions are discussed
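
    As a reminder of the machinery the abstract invokes (standard definitions, not formulas taken from this paper), the spectral zeta function of an elliptic operator A is defined through its heat kernel, and the regularized one-loop contribution follows from its behaviour at the origin:

    ```latex
    % Spectral zeta function of an elliptic operator A with eigenvalues \lambda_n,
    % and the zeta-regularized one-loop effective potential (V is the spatial
    % volume, \mu the renormalization scale).
    \zeta_A(s) = \sum_n \lambda_n^{-s}
               = \frac{1}{\Gamma(s)} \int_0^\infty \! dt \, t^{s-1} \, \mathrm{Tr}\, e^{-tA},
    \qquad
    V_{\mathrm{eff}} = -\frac{1}{2V} \left[ \zeta_A'(0) + \zeta_A(0) \ln \mu^2 \right]
    ```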

  4. An integrated mission approach to the space exploration initiative will ensure success

    Science.gov (United States)

    Coomes, Edmund P.; Dagle, Jefferey E.; Bamberger, Judith A.; Noffsinger, Kent E.

    1991-01-01

    The direction of the American space program, as defined by President Bush and the National Commission on Space, is to expand human presence into the solar system. Landing an American on Mars by the 50th anniversary of the Apollo 11 lunar landing is the goal. This challenge has produced a level of excitement among young Americans not seen for nearly three decades. The exploration and settlement of the space frontier will occupy the creative thoughts and energies of generations of Americans well into the next century. The return of Americans to the moon and beyond must be viewed as a national effort with strong public support if it is to become a reality. Key to making this an actuality is the mission approach selected. Developing a permanent presence in space requires a continual stepping outward from Earth in a logical, progressive manner. If we seriously plan to go and to stay, then not only must we plan what we are to do and how we are to do it, we must address the logistic support infrastructure that will allow us to stay there once we arrive. A fully integrated approach to mission planning is needed if the Space Exploration Initiative (SEI) is to be successful. Only in this way can a permanent human presence in space be sustained. An integrated infrastructure approach would reduce the number of new systems and technologies requiring development. The resultant horizontal commonality of systems and hardware would reduce the direct economic impact of SEI, while an early return on investment through technology spin-offs would greatly enhance our international technical competitiveness. If the exploration, development, and colonization of space is to be affordable and acceptable, careful consideration must be given to the "return on investment" and "commercial product potential" of the technologies developed. This integrated approach will win the Congressional support needed to secure the financial backing necessary to assure success.

  5. a Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks

    Science.gov (United States)

    Bottacin-Busolin, A.; Worman, A. L.

    2013-12-01

    A main challenge for the planning and management of water resources is the development of multiobjective strategies for operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to the Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year causes a large proliferation of mosquitos, which is a major problem for the people living in the surroundings. Chemical pesticides are currently used as a preventive countermeasure, but they do not provide an effective solution to the problem and have adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance
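
    The ADP scheme described here alternates policy evaluation over sampled inflow scenarios with greedy policy improvement against an approximate value function. A minimal single-reservoir sketch, in which the dynamics, reward, and feature choices are all illustrative assumptions rather than the authors' model:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def features(s):
        # Polynomial features for a linear value-function approximator.
        return np.array([1.0, s, s ** 2])

    def step(s, release, inflow, cap=100.0):
        # Toy reservoir dynamics: hydropower revenue minus a flood penalty.
        s_next = float(np.clip(s - release + inflow, 0.0, cap))
        reward = release - 5.0 * max(s + inflow - cap, 0.0)
        return s_next, reward

    actions = np.linspace(0.0, 20.0, 5)      # candidate releases
    w = np.zeros(3)                          # value-function weights
    gamma = 0.95

    for _ in range(30):                      # approximate policy iteration
        X, y = [], []
        for _ in range(200):                 # stochastic inflow scenarios
            s = rng.uniform(0.0, 100.0)
            inflow = rng.gamma(2.0, 5.0)
            # Policy improvement: act greedily w.r.t. the current value estimate.
            q = []
            for a in actions:
                s2, r = step(s, a, inflow)
                q.append(r + gamma * features(s2) @ w)
            a = actions[int(np.argmax(q))]
            s2, r = step(s, a, inflow)
            X.append(features(s))
            y.append(r + gamma * features(s2) @ w)   # sampled Bellman target
        # Policy evaluation: refit the approximator by least squares.
        w = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)[0]
    ```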

  6. Approaches to large scale unsaturated flow in heterogeneous, stratified, and fractured geologic media

    International Nuclear Information System (INIS)

    Ababou, R.

    1991-08-01

    This report develops a broad review and assessment of quantitative modeling approaches and data requirements for large-scale subsurface flow in radioactive waste geologic repositories. The data review includes discussions of controlled field experiments, existing contamination sites, and site-specific hydrogeologic conditions at Yucca Mountain. Local-scale constitutive models for the unsaturated hydrodynamic properties of geologic media are analyzed, with particular emphasis on the effect of structural characteristics of the medium. The report further reviews and analyzes large-scale hydrogeologic spatial variability from aquifer data, unsaturated soil data, and fracture network data gathered from the literature. Finally, various modeling strategies toward large-scale flow simulations are assessed, including direct high-resolution simulation, and coarse-scale simulation based on auxiliary hydrodynamic models such as single equivalent continuum and dual-porosity continuum. The roles of anisotropy, fracturing, and broad-band spatial variability are emphasized. 252 refs

  7. Pedestrian detection in thermal images: An automated scale based region extraction with curvelet space validation

    Science.gov (United States)

    Lakshmi, A.; Faheema, A. G. J.; Deodhare, Dipti

    2016-05-01

    Pedestrian detection is a key problem in night vision processing, with dozens of applications that will positively impact the performance of autonomous systems. Despite significant progress, our study shows that the performance of state-of-the-art thermal image pedestrian detectors still has much room for improvement. The purpose of this paper is to overcome the challenge faced by thermal image pedestrian detectors that employ intensity-based Region Of Interest (ROI) extraction followed by feature-based validation. The most striking disadvantage of the first module, ROI extraction, is the failed detection of cloth-insulated parts. To overcome this setback, this paper employs a region-growing algorithm tuned to the scale of the pedestrian. The statistics subtended by the pedestrian vary drastically with scale, and a deviation-from-normality approach facilitates scale detection. Further, the paper offers an adaptive mathematical threshold to resolve the problem of subtracting the background while still extracting cloth-insulated parts. The inherent false positives of the ROI extraction module are limited by the choice of good features in the pedestrian validation step. One such feature is the curvelet feature, which has been used extensively in optical images but has as yet no reported results in thermal images. This has been used to arrive at a pedestrian detector with a reduced false positive rate. This work is the first venture made to scrutinize the utility of curvelets for characterizing pedestrians in thermal images. An attempt has also been made to improve the speed of the curvelet transform computation. The classification task is realized through the well-known methodology of Support Vector Machines (SVMs). The proposed method is substantiated with qualified evaluation methodologies that permit us to carry out probing and informative comparisons across state-of-the-art features, including deep learning methods, with six
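
    The scale-tuned region growing used for ROI extraction builds on the generic seeded scheme; a minimal, generic sketch (with a fixed intensity tolerance instead of the paper's adaptive, scale-dependent threshold):

    ```python
    import numpy as np
    from collections import deque

    def region_grow(img, seed, tol):
        """Seeded region growing on a 2-D intensity image: accept 4-connected
        neighbours whose intensity stays within `tol` of the seed value."""
        h, w = img.shape
        mask = np.zeros((h, w), dtype=bool)
        ref = float(img[seed])
        q = deque([seed])
        while q:
            r, c = q.popleft()
            if not (0 <= r < h and 0 <= c < w) or mask[r, c]:
                continue
            if abs(float(img[r, c]) - ref) > tol:
                continue
            mask[r, c] = True
            q.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
        return mask

    thermal = np.random.rand(120, 160) * 255.0   # stand-in for a thermal frame
    blob = region_grow(thermal, seed=(60, 80), tol=40.0)
    ```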

  8. The Multi-Scale Model Approach to Thermohydrology at Yucca Mountain

    International Nuclear Information System (INIS)

    Glascoe, L; Buscheck, T A; Gansemer, J; Sun, Y

    2002-01-01

    The Multi-Scale Thermo-Hydrologic (MSTH) process model is a modeling abstraction of the thermal hydrology (TH) of the potential Yucca Mountain repository at multiple spatial scales. The MSTH model as described herein was used for the Supplemental Science and Performance Analyses (BSC, 2001) and is documented in detail in CRWMS M and O (2000) and Glascoe et al. (2002). The model has been validated against a nested grid model in Buscheck et al. (In Review). The MSTH approach is necessary for modeling thermal hydrology at Yucca Mountain for two reasons: (1) varying levels of detail are necessary at different spatial scales to capture important TH processes and (2) a fully coupled TH model of the repository which includes the necessary spatial detail is computationally prohibitive. The MSTH model consists of six "submodels" which are combined in a manner that reduces the complexity of the modeling where appropriate. The coupling of these models allows for appropriate consideration of mountain-scale thermal hydrology along with the thermal hydrology of drift-scale discrete waste packages of varying heat load. Two stages are involved in the MSTH approach: first, the execution of submodels, and second, the assembly of submodels using the Multi-scale Thermohydrology Abstraction Code (MSTHAC). MSTHAC assembles the submodels in a five-step process culminating in the TH model output of discrete waste packages, including the mountain-scale influence.

  9. The balance space approach to multicriteria decision making—involving the decision maker

    OpenAIRE

    Ehrgott, M.

    2002-01-01

    The balance space approach (introduced by Galperin in 1990) provides a new view on multicriteria optimization. Looking at deviations from global optimality of the different objectives, balance points and balance numbers are defined when either different or equal deviations for each objective are allowed. Apportioned balance numbers allow the specification of proportions among the deviations. Through this concept the decision maker can be involved in the decision process. In this paper we prov...

  10. Approaching the new reality. [changes in NASA space programs due to US economy

    Science.gov (United States)

    Diaz, Al V.

    1993-01-01

    The focus on more frequent access to space through smaller, less costly missions, and on NASA's role as a source of technological advance within the U.S. economy is discussed. The Pluto fast flyby mission is examined as an illustration of this approach. Testbeds are to be developed to survive individual programs, becoming permanent facilities, to allow for technological upgrades on an ongoing basis.

  11. A novel approach to the automatic control of scale model airplanes

    OpenAIRE

    Hua, Minh-Duc; Pucci, Daniele; Hamel, Tarek; Morin, Pascal; Samson, Claude

    2014-01-01

    This paper explores a new approach to the control of scale model airplanes as an extension of previous studies addressing the case of vehicles presenting a symmetry of revolution about the thrust axis. The approach is intrinsically nonlinear and, with respect to other contributions on aircraft nonlinear control, no small attack angle assumption is made in order to enlarge the controller's operating domain. Simulation results conducted on a simplified, but not overly ...

  12. A large-scale multi-objective flights conflict avoidance approach supporting 4D trajectory operation

    OpenAIRE

    Guan, Xiangmin; Zhang, Xuejun; Lv, Renli; Chen, Jun; Weiszer, Michal

    2017-01-01

    Recently, long-term conflict avoidance approaches based on large-scale flight scheduling have attracted much attention due to their ability to provide solutions from a global point of view. However, the current approaches, which focus only on a single objective with the aim of minimizing the total delay and the number of conflicts, cannot provide controllers with a variety of optional solutions representing different trade-offs. Furthermore, the flight track error is often overlooked i...

  13. A research on the excavation, support, and environment control of large scale underground space

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Pil Chong; Kwon, Kwang Soo; Jeong, So Keul [Korea Institute of Geology Mining and Materials, Taejon (Korea, Republic of)

    1995-12-01

    With the growing necessity of underground space due to the deficiency of above-ground space, the size and shape of underground structures tend to be complex and diverse. This complexity and variety force the development of new techniques for rock mass classification, excavation and supporting of underground space, and monitoring and control of the underground environment. All these techniques should be applied together to make the underground space comfortable. To achieve this, efforts have been made in 5 different areas: research on underground space design and stability analysis, research on techniques for excavation of rock by controlled blasting, research on the development of monitoring systems to forecast the rock behaviour of underground space, research on environment inspection systems in closed space, and research on dynamic analysis of airflow and environmental control in large geo-spaces. The 5 main achievements are improvement of the existing structure analysis program (EXCRACK) to consider the deformation and failure characteristics of rock joints, development of a new blasting design (SK-cut), prediction of ground vibration through the newly proposed wave propagation equation, development and in-situ application of a rock mass deformation monitoring system and data acquisition software, and trial manufacture of the environment inspection system in closed space. Should these techniques be applied to the development of underground space, they will bring about prevention of industrial disasters, reduction of construction costs, domestic production of monitoring systems, improvement of tunnel stability, curtailment of royalties, and an upgrade of domestic technologies. (Abstract Truncated)

  14. A Programmatic and Engineering Approach to the Development of a Nuclear Thermal Rocket for Space Exploration

    Science.gov (United States)

    Bordelon, Wayne J., Jr.; Ballard, Rick O.; Gerrish, Harold P., Jr.

    2006-01-01

    With the announcement of the Vision for Space Exploration on January 14, 2004, there has been a renewed interest in nuclear thermal propulsion. Nuclear thermal propulsion is a leading candidate for in-space propulsion for human Mars missions; however, the cost to develop a nuclear thermal rocket engine system is uncertain. Key to determining the engine development cost will be the engine requirements, the technology used in the development and the development approach. The engine requirements and technology selection have not been defined and are awaiting definition of the Mars architecture and vehicle definitions. The paper discusses an engine development approach in light of top-level strategic questions and considerations for nuclear thermal propulsion and provides a suggested approach based on work conducted at the NASA Marshall Space Flight Center to support planning and requirements for the Prometheus Power and Propulsion Office. This work is intended to help support the development of a comprehensive strategy for nuclear thermal propulsion, to help reduce the uncertainty in the development cost estimate, and to help assess the potential value of and need for nuclear thermal propulsion for a human Mars mission.

  15. Review of NASA approach to space radiation risk assessments for Mars exploration.

    Science.gov (United States)

    Cucinotta, Francis A

    2015-02-01

    Long duration space missions present unique radiation protection challenges due to the complexity of the space radiation environment, which includes high charge and energy particles and other highly ionizing radiation such as neutrons. Based on a recommendation by the National Council on Radiation Protection and Measurements, a 3% lifetime risk of exposure-induced death for cancer has been used as a basis for risk limitation by the National Aeronautics and Space Administration (NASA) for low-Earth orbit missions. NASA has developed a risk-based approach to radiation exposure limits that accounts for individual factors (age, gender, and smoking history) and assesses the uncertainties in risk estimates. New radiation quality factors with associated probability distribution functions to represent the quality factor's uncertainty have been developed based on track structure models and recent radiobiology data for high charge and energy particles. The current radiation dose limits are reviewed for spaceflight and the various qualitative and quantitative uncertainties that impact the risk of exposure-induced death estimates using the NASA Space Cancer Risk (NSCR) model. NSCR estimates of the number of "safe days" in deep space to be within exposure limits and risk estimates for a Mars exploration mission are described.

  16. Measures for minimizing radiation hazardous to the environment in the advent of large-scale space commercialization

    International Nuclear Information System (INIS)

    Murthy, S.N.

    1990-01-01

    The nature of hazardous effects from radio-frequency (RF), light, infrared, and nuclear radiation on humans and other biological species in the advent of large-scale space commercialization is considered. Attention is focused on RF/microwave radiation from earth antennas and domestic picture phone communication links, exposure to microwave radiation from space solar-power satellites, and the continuous transmission of information from spacecraft, as well as laser radiation from space. Measures for preventing and/or reducing these effects are suggested, including the use of interlocks for cutting off radiation toward the ground, off-pointing microwave energy beams in cases of attitude failure, limiting the satellite off-axis gain data-rate product, the use of reflective materials on buildings and in personnel clothing to protect from space-borne lasers, and underwater colonies in cases of high-power lasers. For nuclear-power satellites, deposition at stable points in the solar system is proposed. 12 refs

  17. On the necessary conditions of the regular minimum of the scale factor of the co-moving space

    International Nuclear Information System (INIS)

    Agakov, V.G.

    1980-01-01

    Within the framework of a homogeneous cosmological model, the behaviour of a co-moving volume element filled with a barotropic medium without energy fluxes is studied. The necessary conditions are presented under which a regular finite minimum of the scale factor of the co-moving space may take place. It is found that, for values of the cosmological constant Λ <= 0, the presence of two of the three anisotropy factors is necessary for such a minimum, and anisotropy of space deformation should be one of them. In the case of Λ <= 0 the regular minimum is also possible if all three anisotropy factors are equal to zero. However, if none of the anisotropy factors F_i, A_ik is equal to zero, the presence of space deformation anisotropy is necessary for a finite regular minimum to appear

  18. Innovative Approaches to Development and Ground Testing of Advanced Bimodal Space Power and Propulsion Systems

    International Nuclear Information System (INIS)

    Hill, T.; Noble, C.; Martinell, J.; Borowski, S.

    2000-01-01

    The last major development effort for nuclear power and propulsion systems ended in 1993. Currently, there is not an initiative at either the National Aeronautics and Space Administration (NASA) or the U.S. Department of Energy (DOE) that requires the development of new nuclear power and propulsion systems. Studies continue to show nuclear technology as a strong technical candidate to lead the way toward human exploration of adjacent planets or provide power for deep space missions, particularly a 15,000 lbf bimodal nuclear system with 115 kW power capability. The development of nuclear technology for space applications would require technology development in some areas and a major flight qualification program. The last major ground test facility considered for nuclear propulsion qualification was the U.S. Air Force/DOE Space Nuclear Thermal Propulsion Project. Seven years have passed since that effort, and the question remains the same: how to qualify nuclear power and propulsion systems for future space flight. It can be reasonably assumed that much of the nuclear testing required to qualify a nuclear system for space application will be performed at DOE facilities, as demonstrated by the Nuclear Rocket Engine Reactor Experiment (NERVA) and Space Nuclear Thermal Propulsion (SNTP) programs. The nuclear infrastructure to support testing in this country is aging and getting smaller, though facilities still exist to support many of the technology development needs. By renewing efforts, an innovative approach to qualifying these systems through the use of existing facilities either in the U.S. (DOE's Advanced Test Reactor, High Flux Irradiation Facility and the Contained Test Facility) or overseas should be possible.

  19. Innovation Approaches to Development and Ground Testing of Advanced Bimodal Space Power and Propulsion Systems

    Energy Technology Data Exchange (ETDEWEB)

    Hill, T.; Noble, C.; Martinell, J. (INEEL); Borowski, S. (NASA Glenn Research Center)

    2000-07-14

    The last major development effort for nuclear power and propulsion systems ended in 1993. Currently, there is not an initiative at either the National Aeronautics and Space Administration (NASA) or the U.S. Department of Energy (DOE) that requires the development of new nuclear power and propulsion systems. Studies continue to show nuclear technology as a strong technical candidate to lead the way toward human exploration of adjacent planets or provide power for deep space missions, particularly a 15,000 lbf bimodal nuclear system with 115 kW power capability. The development of nuclear technology for space applications would require technology development in some areas and a major flight qualification program. The last major ground test facility considered for nuclear propulsion qualification was the U.S. Air Force/DOE Space Nuclear Thermal Propulsion Project. Seven years have passed since that effort, and the question remains the same: how to qualify nuclear power and propulsion systems for future space flight. It can be reasonably assumed that much of the nuclear testing required to qualify a nuclear system for space application will be performed at DOE facilities, as demonstrated by the Nuclear Rocket Engine Reactor Experiment (NERVA) and Space Nuclear Thermal Propulsion (SNTP) programs. The nuclear infrastructure to support testing in this country is aging and getting smaller, though facilities still exist to support many of the technology development needs. By renewing efforts, an innovative approach to qualifying these systems through the use of existing facilities either in the U.S. (DOE's Advanced Test Reactor, High Flux Irradiation Facility and the Contained Test Facility) or overseas should be possible.

  20. Innovative Approaches to Development and Ground Testing of Advanced Bimodal Space Power and Propulsion Systems

    Energy Technology Data Exchange (ETDEWEB)

    Hill, Thomas Johnathan; Noble, Cheryl Ann; Noble, C.; Martinell, John Stephen; Borowski, S.

    2000-07-01

    The last major development effort for nuclear power and propulsion systems ended in 1993. Currently, there is not an initiative at either the National Aeronautics and Space Administration (NASA) or the U.S. Department of Energy (DOE) that requires the development of new nuclear power and propulsion systems. Studies continue to show nuclear technology as a strong technical candidate to lead the way toward human exploration of adjacent planets or provide power for deep space missions, particularly a 15,000 lbf bimodal nuclear system with 115 kW power capability. The development of nuclear technology for space applications would require technology development in some areas and a major flight qualification program. The last major ground test facility considered for nuclear propulsion qualification was the U.S. Air Force/DOE Space Nuclear Thermal Propulsion Project. Seven years have passed since that effort, and the question remains the same: how to qualify nuclear power and propulsion systems for future space flight. It can be reasonably assumed that much of the nuclear testing required to qualify a nuclear system for space application will be performed at DOE facilities, as demonstrated by the Nuclear Rocket Engine Reactor Experiment (NERVA) and Space Nuclear Thermal Propulsion (SNTP) programs. The nuclear infrastructure to support testing in this country is aging and getting smaller, though facilities still exist to support many of the technology development needs. By renewing efforts, an innovative approach to qualifying these systems through the use of existing facilities either in the U.S. (DOE's Advanced Test Reactor, High Flux Irradiation Facility and the Contained Test Facility) or overseas should be possible.

  1. Responses of Cloud Type Distributions to the Large-Scale Dynamical Circulation: Water Budget-Related Dynamical Phase Space and Dynamical Regimes

    Science.gov (United States)

    Wong, Sun; Del Genio, Anthony; Wang, Tao; Kahn, Brian; Fetzer, Eric J.; L'Ecuyer, Tristan S.

    2015-01-01

    Goals: develop a water budget-related dynamical phase space; connect large-scale dynamical conditions to the atmospheric water budget (including precipitation); connect the atmospheric water budget to cloud type distributions.

  2. Simulation of the space debris environment in LEO using a simplified approach

    Science.gov (United States)

    Kebschull, Christopher; Scheidemann, Philipp; Hesselbach, Sebastian; Radtke, Jonas; Braun, Vitali; Krag, H.; Stoll, Enrico

    2017-01-01

    Several numerical approaches exist to simulate the evolution of the space debris environment. These simulations usually rely on the propagation of a large population of objects in order to determine the collision probability for each object. Explosion and collision events are triggered randomly using a Monte-Carlo (MC) approach, so in different scenarios different objects are fragmented and contribute to a different version of the space debris environment. The results of the single Monte-Carlo runs therefore represent the whole spectrum of possible evolutions of the space debris environment. For the comparison of different scenarios, in general the average of all MC runs together with its standard deviation is used. This method is computationally very expensive due to the propagation of thousands of objects over long timeframes and the application of the MC method. At the Institute of Space Systems (IRAS) a model capable of describing the evolution of the space debris environment has been developed and implemented. The model is based on source and sink mechanisms, where yearly launches as well as collisions and explosions are considered as sources. The natural decay and post-mission disposal measures are the only sink mechanisms. This method reduces the computational costs tremendously. In order to achieve this benefit, a few simplifications have been applied. The model partitions the Low Earth Orbit (LEO) region into altitude shells. Only two kinds of objects are considered, intact bodies and fragments, which are also divided into diameter bins. As an extension of a previously presented model, the eccentricity has additionally been taken into account with 67 eccentricity bins. While the set of differential equations has been implemented in a generic manner, the Euler method was chosen to integrate the equations for a given time span. For this paper, parameters have been derived so that the model is able to reflect the results of the numerical MC
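
    The source-sink formulation reduces to a small set of ordinary differential equations per shell. A toy single-shell version with made-up coefficients, integrated with the explicit Euler method as in the paper:

    ```python
    # Toy source-sink model for one LEO altitude shell; every coefficient here
    # is an illustrative assumption, not a value from the paper.
    years, dt = 100, 0.1                       # horizon and Euler step [years]
    intact, frags = 1000.0, 10000.0            # initial populations
    launch_rate = 50.0                         # intact objects added per year
    decay_intact, decay_frag = 0.01, 0.05      # natural decay rates [1/year]
    coll_rate = 1e-8                           # collision rate coefficient
    frags_per_coll = 100.0                     # fragments created per collision

    for _ in range(int(years / dt)):
        collisions = coll_rate * intact * (intact + frags)
        d_intact = launch_rate - decay_intact * intact - collisions
        d_frags = frags_per_coll * collisions - decay_frag * frags
        intact += dt * d_intact                # explicit Euler update
        frags += dt * d_frags

    print(f"after {years} yr: {intact:.0f} intact objects, {frags:.0f} fragments")
    ```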

  3. Extreme robustness of scaling in sample space reducing processes explains Zipf’s law in diffusion on directed networks

    International Nuclear Information System (INIS)

    Corominas-Murtra, Bernat; Hanel, Rudolf; Thurner, Stefan

    2016-01-01

    It has been shown recently that a specific class of path-dependent stochastic processes, which reduce their sample space as they unfold, lead to exact scaling laws in frequency and rank distributions. Such sample space reducing processes (SSRPs) offer an alternative new mechanism to understand the emergence of scaling in countless processes. The corresponding power law exponents were shown to be related to noise levels in the process. Here we show that the emergence of scaling is not limited to the simplest SSRPs, but holds for a huge domain of stochastic processes that are characterised by non-uniform prior distributions. We demonstrate mathematically that in the absence of noise the scaling exponents converge to −1 (Zipf's law) for almost all prior distributions. As a consequence it becomes possible to fully understand targeted diffusion on weighted directed networks and its associated scaling laws in node visit distributions. The presence of cycles can be properly interpreted as playing the same role as noise in SSRPs and, accordingly, determines the scaling exponents. The result that Zipf's law emerges as a generic feature of diffusion on networks, regardless of its details, and that the exponent of visiting times is related to the number of cycles in a network could be relevant for a series of applications in traffic, transport and supply chain management.
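
    The basic noise-free SSRP is easy to simulate: start at state N, repeatedly jump to a uniformly chosen strictly lower state until reaching 1, and tally visits; the visit distribution follows Zipf's law with exponent -1. A minimal sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, runs = 1000, 20000
    visits = np.zeros(N + 1)

    for _ in range(runs):
        state = N
        while state > 1:
            visits[state] += 1
            state = int(rng.integers(1, state))   # uniform jump to a lower state
        visits[1] += 1

    # Visit frequencies should scale as p(k) ~ 1/k; check the log-log slope
    # (excluding the always-visited starting state N).
    k = np.arange(1, N)
    slope = np.polyfit(np.log(k), np.log(visits[1:N] + 1e-9), 1)[0]
    print(f"fitted exponent ~ {slope:.2f} (Zipf's law predicts -1)")
    ```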

  4. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid; Quintin, Jean-Noë l; Lastovetsky, Alexey

    2014-01-01

    Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in the 1990s a system with a few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems.

  5. A multiscale analytical approach for bone remodeling simulations : linking scales from collagen to trabeculae

    NARCIS (Netherlands)

    Colloca, M.; Blanchard, R.; Hellmich, C.; Ito, K.; Rietbergen, van B.

    2014-01-01

    Bone is a dynamic and hierarchical porous material whose spatial and temporal mechanical properties can vary considerably due to differences in its microstructure and due to remodeling. Hence, a multiscale analytical approach, which combines bone structural information at multiple scales to the

  6. How efficient is sliding-scale insulin therapy? Problems with a 'cookbook' approach in hospitalized patients.

    Science.gov (United States)

    Katz, C M

    1991-04-01

    Sliding-scale insulin therapy is seldom the best way to treat hospitalized diabetic patients. In the few clinical situations in which it is appropriate, close attention to details and solidly based scientific principles is absolutely necessary. Well-organized alternative approaches to insulin therapy usually offer greater efficiency and effectiveness.

  7. Biocultural approaches to well-being and sustainability indicators across scales

    Science.gov (United States)

    Eleanor J. Sterling; Christopher Filardi; Anne Toomey; Amanda Sigouin; Erin Betley; Nadav Gazit; Jennifer Newell; Simon Albert; Diana Alvira; Nadia Bergamini; Mary Blair; David Boseto; Kate Burrows; Nora Bynum; Sophie Caillon; Jennifer E. Caselle; Joachim Claudet; Georgina Cullman; Rachel Dacks; Pablo B. Eyzaguirre; Steven Gray; James Herrera; Peter Kenilorea; Kealohanuiopuna Kinney; Natalie Kurashima; Suzanne Macey; Cynthia Malone; Senoveva Mauli; Joe McCarter; Heather McMillen; Pua’ala Pascua; Patrick Pikacha; Ana L. Porzecanski; Pascale de Robert; Matthieu Salpeteur; Myknee Sirikolo; Mark H. Stege; Kristina Stege; Tamara Ticktin; Ron Vave; Alaka Wali; Paige West; Kawika B. Winter; Stacy D. Jupiter

    2017-01-01

    Monitoring and evaluation are central to ensuring that innovative, multi-scale, and interdisciplinary approaches to sustainability are effective. The development of relevant indicators for local sustainable management outcomes, and the ability to link these to broader national and international policy targets, are key challenges for resource managers, policymakers, and...

  8. A perturbative approach to the redshift space power spectrum: beyond the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Bose, Benjamin; Koyama, Kazuya, E-mail: benjamin.bose@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk [Institute of Cosmology and Gravitation, University of Portsmouth, Burnaby Road, Portsmouth, Hampshire, PO1 3FX (United Kingdom)

    2016-08-01

    We develop a code to produce the power spectrum in redshift space based on standard perturbation theory (SPT) at 1-loop order. The code can be applied to a wide range of modified gravity and dark energy models using a recently proposed numerical method by A. Taruya to find the SPT kernels. This includes Horndeski's theory with a general potential, which accommodates both chameleon and Vainshtein screening mechanisms and provides a non-linear extension of the effective theory of dark energy up to third order. The focus is on a recent non-linear model of the redshift space power spectrum which has been shown to model the anisotropy very well at the scales relevant for the SPT framework, as well as capturing relevant non-linear effects typical of modified gravity theories. We provide consistency checks of the code against established results and elucidate its application in light of upcoming high-precision RSD data.
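
    For orientation, the 1-loop SPT expansion such a code evaluates has the standard structure below (real-space version shown; conventions for the factors of 2 vary, and the redshift-space spectrum adds the RSD mapping on top):

    ```latex
    % Standard 1-loop SPT matter power spectrum, with P_L the linear spectrum
    % and F_2 the second-order SPT kernel entering P_22.
    P_{\mathrm{1\text{-}loop}}(k) = P_L(k) + P_{22}(k) + 2\,P_{13}(k),
    \qquad
    P_{22}(k) = 2 \int \frac{d^3 q}{(2\pi)^3}
                \left[ F_2(\mathbf{q}, \mathbf{k}-\mathbf{q}) \right]^2
                P_L(q)\, P_L(|\mathbf{k}-\mathbf{q}|)
    ```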

  9. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2014-03-04

    Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in the 1990s a system with a few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems. The idea is to reduce the communication cost by introducing hierarchy, and hence more parallelism, in the communication scheme. We apply this approach to SUMMA, the state-of-the-art parallel algorithm for matrix–matrix multiplication, and demonstrate both theoretically and experimentally that the modified Hierarchical SUMMA significantly improves the communication cost and the overall performance on large-scale platforms.
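
    The panel structure that SUMMA parallelizes can be shown serially: the product is accumulated from outer products of column panels of A and row panels of B, and each panel is exactly the unit of communication that the hierarchical variant groups. A single-process illustration (the real algorithm distributes these panels over a process grid):

    ```python
    import numpy as np

    def summa_serial(A, B, panel=64):
        # Serial illustration of SUMMA's structure: C accumulates outer products
        # of column panels of A and row panels of B. In the parallel algorithm
        # each panel is broadcast along process rows/columns; the hierarchical
        # variant groups these broadcasts to cut communication cost.
        n, k = A.shape
        _, m = B.shape
        C = np.zeros((n, m))
        for p in range(0, k, panel):
            C += A[:, p:p + panel] @ B[p:p + panel, :]
        return C

    A = np.random.rand(256, 256)
    B = np.random.rand(256, 256)
    assert np.allclose(summa_serial(A, B), A @ B)
    ```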

  10. Approaching a universal scaling relationship between fracture stiffness and fluid flow

    Science.gov (United States)

    Pyrak-Nolte, Laura J.; Nolte, David D.

    2016-02-01

    A goal of subsurface geophysical monitoring is the detection and characterization of fracture alterations that affect the hydraulic integrity of a site. Achievement of this goal requires a link between the mechanical and hydraulic properties of a fracture. Here we present a scaling relationship between fluid flow and fracture-specific stiffness that approaches universality. Fracture-specific stiffness is a mechanical property dependent on fracture geometry that can be monitored remotely using seismic techniques. A Monte Carlo numerical approach demonstrates that a scaling relationship exists between flow and stiffness for fractures with strongly correlated aperture distributions, and continues to hold for fractures deformed by applied stress and by chemical erosion as well. This new scaling relationship provides a foundation for simulating changes in fracture behaviour as a function of stress or depth in the Earth and will aid risk assessment of the hydraulic integrity of subsurface sites.

  11. A Confirmatory Factor Analysis on the Attitude Scale of Constructivist Approach for Science Teachers

    Directory of Open Access Journals (Sweden)

    E. Evrekli

    2010-11-01

    Underlining the importance of teachers for the constructivist approach, the present study attempts to develop an "Attitude Scale of Constructivist Approach for Science Teachers (ASCAST)". The pre-applications of the scale were administered to a total of 210 science teachers; however, the data obtained from 5 teachers were excluded from the analysis. As a result of the analysis of the data obtained from the pre-applications, it was found that the scale could have a single-factor structure, which was tested using confirmatory factor analysis. As a result of the initial confirmatory factor analysis, the fit values were examined and found to be low. Subsequently, by examining the modification indices, error covariance was added between items 23 and 24 and the model was tested once again. The added error covariance led to a significant improvement in the model, producing fit values within acceptable limits. Thus, it was concluded that the scale could be employed with a single factor. The explained variance for the scale developed with a single-factor structure was calculated to be 50.43% and its reliability was found to be .93. The results obtained suggest that the scale possesses reliable and valid characteristics and could be used in further studies.

  12. Tree-space statistics and approximations for large-scale analysis of anatomical trees

    DEFF Research Database (Denmark)

    Feragen, Aasa; Owen, Megan; Petersen, Jens

    2013-01-01

    parametrize the relevant parts of tree-space well. Using the developed approximate statistics, we illustrate how the structure and geometry of airway trees vary across a population and show that airway trees with Chronic Obstructive Pulmonary Disease come from a different distribution in tree-space than...

  13. A feasible approach to implement a commercial scale CANDU fuel manufacturing plant in Egypt

    International Nuclear Information System (INIS)

    El-Shehawy, I.; El-Sharaky, M.; Yasso, K.; Selim, I.; Graham, N.; Newington, D.

    1995-01-01

    Many planning scenarios have been examined to assess and evaluate the economic estimates for implementing a commercial scale CANDU fuel manufacturing plant in Egypt. The cost estimates indicated a strong influence of the annual capital costs on the total fuel manufacturing cost; this is particularly evident in a small initial plant where the proposed design output is only sufficient to supply reload fuel for a single CANDU-6 reactor. A modular approach is investigated as a possible way to reduce the capital costs for a small initial fuel plant. In this approach the plant would perform fuel assembly operations only, and the remainder of the plant would be constructed and equipped in stages, when higher production volumes can justify the capital expenses. Such an approach seems economically feasible for implementing a small scale CANDU fuel manufacturing plant in developing countries such as Egypt, and further improvement could be achieved over the years of operation. (author)

  14. A real-space renormalization approach to the Kubo-Greenwood formula in mirror Fibonacci systems

    International Nuclear Information System (INIS)

    Sanchez, Vicenta; Wang Chumin

    2006-01-01

    An exact real-space renormalization method is developed to address the electronic transport in mirror Fibonacci chains at a macroscopic scale by means of the Kubo-Greenwood formula. The results show that the mirror symmetry induces a large number of transparent states in the dc conductivity spectra, contrary to the simple Fibonacci case. A length scaling analysis over ten orders of magnitude reveals the existence of critically localized states, and their ac conduction spectra show a highly oscillating behaviour. For multidimensional quasiperiodic systems, a novel renormalization-plus-convolution method is proposed. This combined method has shown extremely high computational efficiency, being able to calculate the electrical conductance of a three-dimensional non-crystalline solid with 10^30 atoms. Finally, the dc and ac conductances of mirror Fibonacci nanowires are also investigated, where a quantized dc-conductance variation with the Fermi energy is found, as observed in gold nanowires.
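
    The type of calculation described can be illustrated, without the renormalization speed-up, by a direct Landauer-transmission evaluation for a short mirror Fibonacci chain; all parameter values below are illustrative assumptions:

    ```python
    import numpy as np

    def fibonacci_energies(n_gen, ea=0.5, eb=-0.5):
        # On-site energies from the Fibonacci substitution A -> AB, B -> A.
        seq = ["A"]
        for _ in range(n_gen):
            seq = [s for c in seq for s in (("A", "B") if c == "A" else ("A",))]
        return np.array([ea if c == "A" else eb for c in seq])

    def transmission(eps, E):
        # Landauer transmission of a tight-binding chain (hopping t = 1) coupled
        # to two semi-infinite leads, via the retarded Green's function with
        # lead self-energy sigma = exp(-ik), where E = 2 cos k.
        N = len(eps)
        k = np.arccos(np.clip(E / 2.0, -1.0, 1.0))
        sigma = np.exp(-1j * k)
        H = (np.diag(eps.astype(complex))
             - np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1))
        H[0, 0] += sigma                      # absorb the leads into H
        H[-1, -1] += sigma
        G = np.linalg.inv(E * np.eye(N) - H)
        gamma = 2.0 * np.sin(k)               # lead broadening
        return float(gamma ** 2 * abs(G[0, -1]) ** 2)

    eps = fibonacci_energies(10)                   # 144-site Fibonacci chain
    eps_mirror = np.concatenate([eps, eps[::-1]])  # mirror-symmetric chain
    print(transmission(eps, 0.1), transmission(eps_mirror, 0.1))
    ```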

  15. A GOCE-only global gravity field model by the space-wise approach

    DEFF Research Database (Denmark)

    Migliaccio, Frederica; Reguzzoni, Mirko; Gatti, Andrea

    2011-01-01

    The global gravity field model computed by the spacewise approach is one of three official solutions delivered by ESA from the analysis of the GOCE data. The model consists of a set of spherical harmonic coefficients and the corresponding error covariance matrix. The main idea behind this approach...... the orbit to reduce the noise variance and correlation before gridding the data. In the first release of the space-wise approach, based on a period of about two months, some prior information coming from existing gravity field models entered into the solution especially at low degrees and low orders...... degrees; the second is an internally computed GOCE-only prior model to be used in place of the official quick-look model, thus removing the dependency on EIGEN5C especially in the polar gaps. Once the procedure to obtain a GOCE-only solution has been outlined, a new global gravity field model has been...

  16. Thyroid Cartilage Window Approach to Extract a Foreign Body after Migration into the Paraglottic Space

    Directory of Open Access Journals (Sweden)

    Sheikha Alkhudher

    2018-01-01

    We report a case of fish bone impaction in the paraglottic space, which caused palsy of the left vocal cord. The patient was a 45-year-old man who presented with throat pain and hoarseness of voice for approximately one week. The diagnosis was made after careful history taking and confirmed by computed tomography scan, as the fish bone was not visible endoscopically under local and general anaesthesia. The patient underwent a thyroid cartilage window approach, and the fish bone was retrieved. His symptoms improved significantly, and he did not require tracheostomy. Other cases have reported the removal of foreign bodies by other techniques such as laryngofissure and the posterolateral approach. Our case is different in that we used a modification of the thyroplasty type 1 technique, as it has fewer reported complications than other approaches published in the literature.

  17. Planning additional drilling campaign using two-space genetic algorithm: A game theoretical approach

    Science.gov (United States)

    Kumral, Mustafa; Ozer, Umit

    2013-03-01

    Grade and tonnage are the most important technical uncertainties in mining ventures because of the use of estimations/simulations, which are mostly generated from drill data. Open pit mines are planned and designed on the basis of the blocks representing the entire orebody. Each block has a different estimation/simulation variance, reflecting uncertainty to some extent. The estimation/simulation realizations are submitted to the mine production scheduling process. However, the use of a block model with varying estimation/simulation variances will lead to serious risk in the scheduling. Given multiple simulations, the dispersion variances of blocks can be thought of as capturing technical uncertainty. However, the dispersion variance cannot handle the uncertainty associated with varying estimation/simulation variances of blocks. This paper proposes an approach that generates the configuration of the best additional drilling campaign so as to produce more homogeneous estimation/simulation variances of blocks. In other words, the objective is to find the best drilling configuration in such a way as to minimize grade uncertainty under a budget constraint. The uncertainty measure of the optimization process in this paper is the interpolation variance, which considers data locations and grades. The problem is expressed as a minmax problem, which focuses on finding the best worst-case performance, i.e., minimizing the interpolation variance of the block generating the maximum interpolation variance. Since the optimization model requires computing the interpolation variances of the blocks being simulated/estimated in each iteration, the problem cannot be solved by standard optimization tools. This motivates the use of a two-space genetic algorithm (GA) approach to solve the problem. The technique has two spaces: feasible drill hole configurations, over which the interpolation variance is minimized, and drill hole simulations, over which the interpolation variance is maximized. The two spaces interact to find a minmax solution.
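
    A toy version of the minmax idea, with a simple GA minimizing over drill configurations while the inner "space" is reduced to an exhaustive maximum over blocks; the variance proxy and GA details are illustrative assumptions, not the authors' formulation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    blocks = rng.uniform(0, 100, size=(50, 2))       # block centroids
    candidates = rng.uniform(0, 100, size=(30, 2))   # candidate drill sites
    budget = 5                                       # holes the budget allows

    def worst_block_variance(mask):
        # Proxy for interpolation variance: squared distance from each block
        # to its nearest selected hole; the inner space maximizes over blocks.
        d = np.linalg.norm(blocks[:, None, :] - candidates[mask][None, :, :], axis=2)
        return (d.min(axis=1) ** 2).max()

    def random_mask():
        m = np.zeros(len(candidates), dtype=bool)
        m[rng.choice(len(candidates), budget, replace=False)] = True
        return m

    pop = [random_mask() for _ in range(40)]
    for _ in range(200):                             # outer GA minimizes the max
        pop.sort(key=worst_block_variance)
        child = pop[rng.integers(10)].copy()         # pick one of the best parents
        on, off = np.flatnonzero(child), np.flatnonzero(~child)
        child[rng.choice(on)] = False                # mutation: swap one hole
        child[rng.choice(off)] = True
        pop[-1] = child                              # replace the worst member

    pop.sort(key=worst_block_variance)
    print("best worst-case variance:", worst_block_variance(pop[0]))
    ```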

  18. A hybrid approach to estimating national scale spatiotemporal variability of PM2.5 in the contiguous United States.

    Science.gov (United States)

    Beckerman, Bernardo S; Jerrett, Michael; Serre, Marc; Martin, Randall V; Lee, Seung-Jae; van Donkelaar, Aaron; Ross, Zev; Su, Jason; Burnett, Richard T

    2013-07-02

    Airborne fine particulate matter exhibits spatiotemporal variability at multiple scales, which presents challenges to estimating exposures for health effects assessment. Here we created a model to predict ambient particulate matter less than 2.5 μm in aerodynamic diameter (PM2.5) across the contiguous United States to be applied to health effects modeling. We developed a hybrid approach combining a land use regression model (LUR) selected with a machine learning method, and Bayesian Maximum Entropy (BME) interpolation of the LUR space-time residuals. The PM2.5 data set included 104,172 monthly observations at 1464 monitoring locations, with approximately 10% of locations reserved for cross-validation. LUR models were based on remote sensing estimates of PM2.5, land use and traffic indicators. Normalized cross-validated R² values for LUR were 0.63 and 0.11 with and without remote sensing, respectively, suggesting remote sensing is a strong predictor of ground-level concentrations. In the models including the BME interpolation of the residuals, the cross-validated R² was 0.79 for both configurations; the model without remotely sensed data described more fine-scale variation than the model including remote sensing. Our results suggest that our modeling framework can predict ground-level concentrations of PM2.5 at multiple scales over the contiguous U.S.
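
    The two-stage hybrid logic (regression on covariates, then spatial interpolation of the residuals) can be sketched compactly; here a smoothing RBF interpolator stands in for the BME step, and all data are synthetic stand-ins:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    # Hypothetical monitoring data: site coordinates, LUR covariates
    # (remote-sensing PM2.5, land use, traffic), and observed PM2.5.
    xy = rng.uniform(0, 1000, size=(200, 2))
    X = rng.normal(size=(200, 3))
    y = 10 + X @ np.array([3.0, 1.5, -0.8]) + rng.normal(0, 1, 200)

    lur = LinearRegression().fit(X, y)             # stage 1: land use regression
    resid = y - lur.predict(X)

    # Stage 2: spatially interpolate the residuals (RBF in place of BME).
    resid_field = RBFInterpolator(xy, resid, kernel="thin_plate_spline",
                                  smoothing=1.0)

    xy_new = rng.uniform(0, 1000, size=(5, 2))
    X_new = rng.normal(size=(5, 3))
    pm25_hat = lur.predict(X_new) + resid_field(xy_new)   # hybrid prediction
    ```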

  19. College students with Internet addiction decrease fewer Behavior Inhibition Scale and Behavior Approach Scale when getting online.

    Science.gov (United States)

    Ko, Chih-Hung; Wang, Peng-Wei; Liu, Tai-Ling; Yen, Cheng-Fang; Chen, Cheng-Sheng; Yen, Ju-Yu

    2015-09-01

    The aim of the study was to compare reinforcement sensitivity between online and offline interaction. The effects of gender, Internet addiction, depression, and online gaming on the difference in reinforcement sensitivity between online and offline conditions were also evaluated. The subjects were 2,258 college students (1,066 men and 1,192 women). They completed the Behavior Inhibition Scale and Behavior Approach Scale (BIS/BAS) according to their experience online and offline. Internet addiction, depression, and Internet activity type were evaluated simultaneously. The results showed that reinforcement sensitivity was lower when interacting online than when interacting offline. College students with Internet addiction showed smaller decreases in BIS and BAS scores after getting online than others did. Higher reward and aversion sensitivity were associated with the risk of Internet addiction. Fun seeking online might contribute to the maintenance of Internet addiction. This suggests that reinforcement sensitivity changes after getting online and contributes to the risk and maintenance of Internet addiction.

  20. A multi-scale metrics approach to forest fragmentation for Strategic Environmental Impact Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Eunyoung, E-mail: eykim@kei.re.kr [Korea Environment Institute, 215 Jinheungno, Eunpyeong-gu, Seoul 122-706 (Korea, Republic of); Song, Wonkyong, E-mail: wksong79@gmail.com [Suwon Research Institute, 145 Gwanggyo-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do 443-270 (Korea, Republic of); Lee, Dongkun, E-mail: dklee7@snu.ac.kr [Department of Landscape Architecture and Rural System Engineering, Seoul National University, 599 Gwanakro, Gwanak-gu, Seoul 151-921 (Korea, Republic of); Research Institute for Agriculture and Life Sciences, Seoul National University, Seoul 151-921 (Korea, Republic of)

    2013-09-15

    Forests are becoming severely fragmented as a result of land development. South Korea has responded to changing community concerns about environmental issues. The nation has developed and is extending a broad range of tools for use in environmental management. Although legally mandated environmental compliance requirements in South Korea have been implemented to predict and evaluate the impacts of land-development projects, these legal instruments are often insufficient to assess the subsequent impact of development on the surrounding forests. It is especially difficult to examine impacts on multiple (e.g., regional and local) scales in detail. Forest configuration and size, including forest fragmentation by land development, are considered on a regional scale. Moreover, forest structure and composition, including biodiversity, are considered on a local scale in the Environmental Impact Assessment process. Recently, the government amended the Environmental Impact Assessment Act, including the SEA, EIA, and small-scale EIA, to require an integrated approach. Therefore, the purpose of this study was to establish an impact assessment system that minimizes the impacts of land development using an approach that is integrated across multiple scales. This study focused on forest fragmentation due to residential development and road construction sites in selected Congestion Restraint Zones (CRZs) in the Greater Seoul Area of South Korea. Based on a review of multiple-scale impacts, this paper integrates models that assess the impacts of land development on forest ecosystems. The applicability of the integrated model for assessing impacts on forest ecosystems through the SEIA process is considered. On a regional scale, it is possible to evaluate the location and size of a land-development project by considering aspects of forest fragmentation, such as the stability of the forest structure and the degree of fragmentation. On a local scale, land-development projects should

  1. A study of coronary artery rotational motion with dense scale-space optical flow in intravascular ultrasound

    Energy Technology Data Exchange (ETDEWEB)

    Danilouchkine, M G; Mastik, F; Steen, A F W van der [Department of Biomedical Engineering, Erasmus Medical Center, Ee2302, PO Box 2040, 3000 CA, Rotterdam (Netherlands)], E-mail: m.danilouchkine@ErasmusMC.nl, E-mail: f.mastik@ErasmusMC.nl, E-mail: a.vandersteen@ErasmusMC.nl

    2009-03-21

    This paper describes a novel method for estimating tissue motion in two-dimensional intravascular ultrasound (IVUS) images of a coronary artery. It is based on the classical Lucas-Kanade (LK) algorithm for optical flow (OF). The OF vector field quantifies the amount of misalignment between two consecutive frames in a sequence of images. From the theoretical standpoint, two fundamental improvements are proposed in this paper. First, using a simplified representation of the vessel wall as a medium with randomly distributed scatterers, it was shown that the OF equation satisfies the integral brightness conservation law. Second, a scale-space embedding for the OF equation was derived under the assumption of spatial consistency in IVUS acquisitions. The spatial coherence is equivalent to a locally affine motion model. The latter effectively captures and appropriately describes the complex deformation pattern of the coronary vessel wall under varying physiological conditions (i.e. pulsatile blood pressure). The accuracy of OF tracking was estimated on tissue-mimicking phantoms subjected to a controlled amount of angular deviation. Moreover, the performance of the classical LK and the proposed approach was compared using simulated IVUS images with an atherosclerotic lesion. The experimental results showed robust and reliable performance for up to 5° of rotation, which is within the plausible range of circumferential displacement of the coronary arteries. Subsequently, the algorithm was used to analyze vessel wall motion in 18 IVUS pullbacks from 16 patients. The in vivo experiments revealed that the motion of coronary arteries is primarily determined by the cardiac contraction.
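
    Coarse-to-fine (pyramidal) Lucas-Kanade tracking of the kind described is available off the shelf; a minimal sketch on synthetic frames, where OpenCV's image pyramid plays the role of a scale-space embedding (the paper's affine, scale-space formulation is not reproduced here):

    ```python
    import numpy as np
    import cv2

    prev = (np.random.rand(256, 256) * 255).astype(np.uint8)   # stand-in frame
    curr = np.roll(prev, shift=(2, 3), axis=(0, 1))            # known motion: dx=3, dy=2

    pts = cv2.goodFeaturesToTrack(prev, maxCorners=100,
                                  qualityLevel=0.01, minDistance=8)
    nxt, status, err = cv2.calcOpticalFlowPyrLK(
        prev, curr, pts, None, winSize=(21, 21), maxLevel=3)

    flow = (nxt - pts)[status.ravel() == 1].reshape(-1, 2)     # per-point (dx, dy)
    print("median flow:", np.median(flow, axis=0))             # expect about (3, 2)
    ```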

  2. A multi-scale approach of fluvial biogeomorphic dynamics using photogrammetry.

    Science.gov (United States)

    Hortobágyi, Borbála; Corenblit, Dov; Vautier, Franck; Steiger, Johannes; Roussel, Erwan; Burkart, Andreas; Peiry, Jean-Luc

    2017-11-01

    Over the last twenty years, significant technical advances have turned photogrammetry into a relevant tool for the integrated analysis of biogeomorphic cross-scale interactions within vegetated fluvial corridors, which will largely contribute to the development and improvement of self-sustainable river restoration efforts. Here, we propose a cost-effective, easily reproducible approach based on stereophotogrammetry and the Structure from Motion (SfM) technique to study feedbacks between fluvial geomorphology and riparian vegetation at different nested spatiotemporal scales. We combined different photogrammetric methods and thus were able to investigate biogeomorphic feedbacks at all three spatial scales (i.e., corridor, alluvial bar and micro-site) and at three different temporal scales, i.e., present, recent past and long-term evolution, on a diversified riparian landscape mosaic. We evaluate the performance and the limits of photogrammetric methods by targeting a set of fundamental parameters necessary to study biogeomorphic feedbacks at each of the three nested spatial scales and, when possible, propose appropriate solutions. The RMSE varies between 0.01 and 2 m depending on the spatial scale and photogrammetric method. Despite some remaining difficulties in properly applying these methods with current technologies under all circumstances in fluvial biogeomorphic studies, e.g. the detection of vegetation density or landform topography under a dense vegetation canopy, we suggest that photogrammetry is a promising instrument for the quantification of biogeomorphic feedbacks at nested spatial scales within river systems and for developing appropriate river management tools and strategies.

  3. A new scaling approach for the mesoscale simulation of magnetic domain structures using Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Radhakrishnan, B., E-mail: radhakrishnb@ornl.gov; Eisenbach, M.; Burress, T.A.

    2017-06-15

    Highlights: • Developed new scaling technique for dipole–dipole interaction energy. • Developed new scaling technique for exchange interaction energy. • Used scaling laws to extend atomistic simulations to micrometer length scale. • Demonstrated transition from mono-domain to vortex magnetic structure. • Simulated domain wall width and transition length scale agree with experiments. - Abstract: A new scaling approach has been proposed for the spin exchange and the dipole–dipole interaction energy as a function of the system size. The computed scaling laws are used in atomistic Monte Carlo simulations of magnetic moment evolution to predict the transition from a single domain to a vortex structure as the system size increases. The width of a 180° domain wall extracted from the simulated structures is in close agreement with experimental values for an Fe–Si alloy. The transition size from a single domain to a vortex structure is also in close agreement with theoretically predicted and experimentally measured values for Fe.
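
    As an illustration of the underlying machinery (not the paper's derived scaling laws), a coarse-grained Metropolis Monte Carlo step for mesoscale moments might look as follows; the linear form of scaled_exchange is a placeholder assumption, and the dipole–dipole term is omitted for brevity:

        import numpy as np

        rng = np.random.default_rng(0)

        def scaled_exchange(J_atomic, cell_size_nm):
            # Placeholder scaling law: the paper derives such laws; the
            # linear form used here is illustrative, not the published result.
            return J_atomic * cell_size_nm

        def metropolis_sweep(spins, J_eff, T):
            """One Metropolis sweep of an Ising grid whose cells represent
            mesoscale magnetic moments coupled by a scaled exchange J_eff
            (temperature T in units of k_B = 1)."""
            L = spins.shape[0]
            for _ in range(L * L):
                i, j = rng.integers(0, L, size=2)
                nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                      + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                dE = 2.0 * J_eff * spins[i, j] * nn   # energy cost of a flip
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    spins[i, j] *= -1
            return spins

        spins = rng.choice([-1, 1], size=(32, 32))
        for sweep in range(200):
            metropolis_sweep(spins, scaled_exchange(1.0, 4.0), T=2.0)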

  4. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
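
    A minimal sketch of the weighted binary matrix sampling loop, assuming a PLS sub-model and a top-10% elite fraction (illustrative choices; the authors' exact settings and their Matlab implementation differ):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        def vissa_like(X, y, n_sub=500, n_iter=20, rng=np.random.default_rng(1)):
            """Iterative variable-space shrinkage in the spirit of VISSA:
            weighted binary sampling generates sub-models; each variable's
            sampling weight is updated to its inclusion frequency among the
            best-performing sub-models."""
            p = X.shape[1]
            w = np.full(p, 0.5)                        # inclusion probabilities
            for _ in range(n_iter):
                B = rng.random((n_sub, p)) < w         # weighted binary matrix
                B[:, w >= 1.0] = True                  # locked-in variables stay
                scores = np.array([
                    cross_val_score(PLSRegression(n_components=2),
                                    X[:, m], y, cv=5,
                                    scoring="neg_root_mean_squared_error").mean()
                    if m.sum() >= 2 else -np.inf
                    for m in B])
                top = B[np.argsort(scores)[-n_sub // 10:]]  # best 10% of sub-models
                w = top.mean(axis=0)                   # new weights
            return np.where(w > 0.5)[0]                # selected variable indices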

  5. Tools in the orbit space approach to the study of invariant functions: rational parametrization of strata

    International Nuclear Information System (INIS)

    Sartori, G; Valente, G

    2003-01-01

    Functions which are equivariant or invariant under the transformations of a compact linear group G acting in a Euclidean space R^n can profitably be studied as functions defined in the orbit space of the group. The orbit space is the union of a finite set of strata, which are semialgebraic manifolds formed by the G-orbits with the same orbit-type. In this paper, we provide a simple recipe to obtain rational parametrizations of the strata. Our results can be easily exploited, in many physical contexts where the study of equivariant or invariant functions is important, for instance in the determination of patterns of spontaneous symmetry breaking, in the analysis of phase spaces and structural phase transitions (Landau theory), in equivariant bifurcation theory, in crystal field theory and in most areas where use is made of symmetry-adapted functions. A physically significant example of utilization of the recipe is given, related to spontaneous polarization in chiral biaxial liquid crystals, where the advantages with respect to previous heuristic approaches are shown.

  6. Tools in the orbit space approach to the study of invariant functions: rational parametrization of strata

    Energy Technology Data Exchange (ETDEWEB)

    Sartori, G; Valente, G [Dipartimento di Fisica, Universita di Padova and INFN, Sezione di Padova, I-35131 Padova (Italy)

    2003-02-21

    Functions which are equivariant or invariant under the transformations of a compact linear group G acting in a Euclidean space R^n can profitably be studied as functions defined in the orbit space of the group. The orbit space is the union of a finite set of strata, which are semialgebraic manifolds formed by the G-orbits with the same orbit-type. In this paper, we provide a simple recipe to obtain rational parametrizations of the strata. Our results can be easily exploited, in many physical contexts where the study of equivariant or invariant functions is important, for instance in the determination of patterns of spontaneous symmetry breaking, in the analysis of phase spaces and structural phase transitions (Landau theory), in equivariant bifurcation theory, in crystal field theory and in most areas where use is made of symmetry-adapted functions. A physically significant example of utilization of the recipe is given, related to spontaneous polarization in chiral biaxial liquid crystals, where the advantages with respect to previous heuristic approaches are shown.

  7. Space-Hotel Early Bird - An Educational and Public Outreach Approach

    Science.gov (United States)

    Amekrane, R.; Holze, C.

    2002-01-01

    education and public outreach can be combined and how a cooperation among an association, industry and academia can work successfully. Representatives of the DGLR and academia developed a method to spread space-related knowledge in a short time to a motivated working group. The project was a great success, both in involving other disciplines in space-related topics through interdisciplinary work and in terms of public and educational outreach. With more than 2.3 million contacts, the DGLR e.V. promoted space and the vision of living (in) space to the public. The task of this paper is mainly to describe the approach and the experience gained with the organization, lectures, financing and outreach efforts, with respect to similar future international outreach activities planned for the 54th International Astronautical Congress in Bremen, Germany. www.spacehotel.org

  8. An Architecture, System Engineering, and Acquisition Approach for Space System Software Resiliency

    Science.gov (United States)

    Phillips, Dewanne Marie

    Software intensive space systems can harbor defects and vulnerabilities that may enable external adversaries or malicious insiders to disrupt or disable system functions, risking mission compromise or loss. Mitigating this risk demands a sustained focus on the security and resiliency of the system architecture, including software, hardware, and other components. Robust software engineering practices contribute to the foundation of a resilient system so that the system "can take a hit to a critical component and recover in a known, bounded, and generally acceptable period of time". Software resiliency must be a priority and must be addressed early in life cycle development to contribute to a secure and dependable space system. Those who develop, implement, and operate software intensive space systems must determine the factors and systems engineering practices to address when investing in software resiliency. This dissertation offers methodical approaches for improving space system resiliency through software architecture design, systems engineering, and increased software security, thereby reducing the risk of latent software defects and vulnerabilities. By providing greater attention to the early life cycle phases of development, we can alter the engineering process to help detect, eliminate, and avoid vulnerabilities before space systems are delivered. To achieve this objective, this dissertation identifies knowledge, techniques, and tools that engineers and managers can utilize to help them recognize how vulnerabilities are produced and discovered, so that they can learn to circumvent them in future efforts. We conducted a systematic review of existing architectural practices, standards, security and coding practices, various threats, defects, and vulnerabilities that impact space systems from hundreds of relevant publications and interviews of subject matter experts. We expanded on the system-level body of knowledge for resiliency and identified a new software

  9. Wavelet Space-Scale-Decomposition Analysis of QSO's Ly$\\alpha$ Absorption Lines: Spectrum of Density Perturbations

    OpenAIRE

    Pando, Jesus; Fang, Li-Zhi

    1995-01-01

    A method for measuring the spectrum of a density field by a discrete wavelet space-scale decomposition (SSD) has been studied. We show how the power spectrum can effectively be described by the father function coefficients (FFC) of the wavelet SSD. We demonstrate that the features of the spectrum, such as the magnitude, the index of a power law, and the typical scales, can be determined with high precision by the FFC reconstructed spectrum. This method does not require the mean density, which...
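
    A sketch of the core measurement, assuming PyWavelets and a Daubechies wavelet as illustrative stand-ins for the paper's analysis wavelet:

        import numpy as np
        import pywt

        def ffc_band_power(delta, wavelet="db4", levels=6):
            """Scale-by-scale power of a 1-D density-fluctuation field from
            its father-function (approximation) coefficients: the mean
            squared FFC at each dyadic level traces the shape of the power
            spectrum without requiring the mean density."""
            return [np.mean(pywt.downcoef("a", delta, wavelet, level=j) ** 2)
                    for j in range(1, levels + 1)]

    Calling ffc_band_power on a fluctuation field returns one band power per dyadic scale, smallest scale first.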

  10. A multi-scale relevance vector regression approach for daily urban water demand forecasting

    Science.gov (United States)

    Bai, Yun; Wang, Pu; Li, Chuan; Xie, Jingjing; Wang, Yin

    2014-09-01

    Water is one of the most important resources for economic and social development. Daily water demand forecasting is an effective measure for scheduling urban water facilities. This work proposes a multi-scale relevance vector regression (MSRVR) approach to forecast daily urban water demand. The approach uses the stationary wavelet transform to decompose historical time series of daily water supplies into different scales. At each scale, the wavelet coefficients are used to train a machine-learning model using the relevance vector regression (RVR) method. The estimated coefficients of the RVR outputs for all of the scales are employed to reconstruct the forecasting result through the inverse wavelet transform. To better facilitate the MSRVR forecasting, the chaos features of the daily water supply series are analyzed to determine the input variables of the RVR model. In addition, an adaptive chaos particle swarm optimization algorithm is used to find the optimal combination of the RVR model parameters. The MSRVR approach is evaluated using real data collected from two waterworks and is compared with recently reported methods. The results show that the proposed MSRVR method forecasts daily urban water demand much more precisely in terms of the normalized root-mean-square error, correlation coefficient, and mean absolute percentage error criteria.
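
    A sketch of the decompose/forecast/reconstruct pipeline, with scikit-learn's ARDRegression standing in for relevance vector regression (both are sparse Bayesian regressors; the RVR itself is not in scikit-learn) and illustrative lag, level and wavelet choices; the chaos-based input selection and swarm-optimized hyperparameters are omitted:

        import numpy as np
        import pywt
        from sklearn.linear_model import ARDRegression

        def msrvr_forecast(series, lags=4, level=3, wavelet="db2"):
            """One-step forecast sketch: SWT decomposition, one sparse
            Bayesian regressor per coefficient band, inverse SWT of the
            one-step-advanced bands to reconstruct the forecast."""
            n = len(series) - len(series) % 2**level   # SWT needs 2**level | n
            coeffs = pywt.swt(np.asarray(series[-n:], float), wavelet,
                              level=level)
            shifted = []
            for cA, cD in coeffs:
                bands = []
                for band in (cA, cD):
                    X = np.stack([band[i:i + lags]
                                  for i in range(len(band) - lags)])
                    y = band[lags:]
                    model = ARDRegression().fit(X, y)
                    nxt = model.predict(band[-lags:].reshape(1, -1))[0]
                    bands.append(np.append(band[1:], nxt))  # advance one step
                shifted.append(tuple(bands))
            return pywt.iswt(shifted, wavelet)[-1]     # reconstructed forecast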

  11. Practice-oriented optical thin film growth simulation via multiple scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Turowski, Marcus, E-mail: m.turowski@lzh.de [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); Jupé, Marco [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); QUEST: Centre of Quantum Engineering and Space-Time Research, Leibniz Universität Hannover (Germany); Melzig, Thomas [Fraunhofer Institute for Surface Engineering and Thin Films IST, Bienroder Weg 54e, Braunschweig 30108 (Germany); Moskovkin, Pavel [Research Centre for Physics of Matter and Radiation (PMR-LARN), University of Namur (FUNDP), 61 rue de Bruxelles, Namur 5000 (Belgium); Daniel, Alain [Centre for Research in Metallurgy, CRM, 21 Avenue du bois Saint Jean, Liège 4000 (Belgium); Pflug, Andreas [Fraunhofer Institute for Surface Engineering and Thin Films IST, Bienroder Weg 54e, Braunschweig 30108 (Germany); Lucas, Stéphane [Research Centre for Physics of Matter and Radiation (PMR-LARN), University of Namur (FUNDP), 61 rue de Bruxelles, Namur 5000 (Belgium); Ristau, Detlev [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); QUEST: Centre of Quantum Engineering and Space-Time Research, Leibniz Universität Hannover (Germany)

    2015-10-01

    Simulation of the coating process is a very promising approach for understanding thin film formation. Nevertheless, this complex matter cannot be covered by a single simulation technique. To consider all mechanisms and processes influencing the optical properties of the growing thin films, various common theoretical methods have been combined into a multi-scale model approach. The simulation techniques have been selected in order to describe all processes in the coating chamber, especially the various mechanisms of thin film growth, and to enable the analysis of the resulting structural as well as optical and electronic layer properties. All methods are merged with adapted communication interfaces to achieve optimum compatibility of the different approaches and to generate physically meaningful results. The present contribution offers an approach for the full simulation of an Ion Beam Sputtering (IBS) coating process, combining direct simulation Monte Carlo, classical molecular dynamics, kinetic Monte Carlo, and density functional theory. The simulation is performed, as an example, for an existing IBS coating plant in order to validate the developed multi-scale approach. Finally, the modeled results are compared to experimental data. - Highlights: • A model approach for simulating an Ion Beam Sputtering (IBS) process is presented. • In order to combine the different techniques, optimized interfaces are developed. • The transport of atomic species in the coating chamber is calculated. • We modeled structural and optical film properties based on simulated IBS parameters. • The modeled and the experimental refractive index data fit very well.

  12. A new approach for modeling and analysis of molten salt reactors using SCALE

    Energy Technology Data Exchange (ETDEWEB)

    Powers, J. J.; Harrison, T. J.; Gehin, J. C. [Oak Ridge National Laboratory, PO Box 2008, Oak Ridge, TN 37831-6172 (United States)

    2013-07-01

    The Office of Fuel Cycle Technologies (FCT) of the DOE Office of Nuclear Energy is performing an evaluation and screening of potential fuel cycle options to provide information that can support future research and development decisions based on the more promising fuel cycle options. [1] A comprehensive set of fuel cycle options are put into evaluation groups based on physics and fuel cycle characteristics. Representative options for each group are then evaluated to provide the quantitative information needed to support the valuation of criteria and metrics used for the study. Included in this set of representative options are Molten Salt Reactors (MSRs), the analysis of which requires several capabilities that are not adequately supported by the current version of SCALE or other neutronics depletion software packages (e.g., continuous online feed and removal of materials). A new analysis approach was developed for MSR analysis using SCALE by taking user-specified MSR parameters and performing a series of SCALE/TRITON calculations to determine the resulting equilibrium operating conditions. This paper provides a detailed description of the new analysis approach, including the modeling equations and radiation transport models used. Results for an MSR fuel cycle option of interest are also provided to demonstrate the application to a relevant problem. The current implementation is through a utility code that uses the two-dimensional (2D) TRITON depletion sequence in SCALE 6.1 but could be readily adapted to three-dimensional (3D) TRITON depletion sequences or other versions of SCALE. (authors)

  13. A new approach for modeling and analysis of molten salt reactors using SCALE

    International Nuclear Information System (INIS)

    Powers, J. J.; Harrison, T. J.; Gehin, J. C.

    2013-01-01

    The Office of Fuel Cycle Technologies (FCT) of the DOE Office of Nuclear Energy is performing an evaluation and screening of potential fuel cycle options to provide information that can support future research and development decisions based on the more promising fuel cycle options. [1] A comprehensive set of fuel cycle options are put into evaluation groups based on physics and fuel cycle characteristics. Representative options for each group are then evaluated to provide the quantitative information needed to support the valuation of criteria and metrics used for the study. Included in this set of representative options are Molten Salt Reactors (MSRs), the analysis of which requires several capabilities that are not adequately supported by the current version of SCALE or other neutronics depletion software packages (e.g., continuous online feed and removal of materials). A new analysis approach was developed for MSR analysis using SCALE by taking user-specified MSR parameters and performing a series of SCALE/TRITON calculations to determine the resulting equilibrium operating conditions. This paper provides a detailed description of the new analysis approach, including the modeling equations and radiation transport models used. Results for an MSR fuel cycle option of interest are also provided to demonstrate the application to a relevant problem. The current implementation is through a utility code that uses the two-dimensional (2D) TRITON depletion sequence in SCALE 6.1 but could be readily adapted to three-dimensional (3D) TRITON depletion sequences or other versions of SCALE. (authors)

  14. Robust control of uncertain dynamic systems a linear state space approach

    CERN Document Server

    Yedavalli, Rama K

    2014-01-01

    This textbook aims to provide a clear understanding of the various tools of analysis and design for robust stability and performance of uncertain dynamic systems. In model-based control design and analysis, mathematical models can never completely represent the “real world” system that is being modeled, and thus it is imperative to incorporate and accommodate a level of uncertainty into the models. This book directly addresses these issues from a deterministic uncertainty viewpoint and focuses on the interval parameter characterization of uncertain systems. Various tools of analysis and design are presented in a consolidated manner. This volume fills a current gap in published works by explicitly addressing the subject of control of dynamic systems from a linear state-space framework, namely using a time-domain, matrix-theory-based approach. This book also: Presents and formulates the robustness problem in a linear state-space model framework Illustrates various systems level methodologies with examples and...

  15. A New Approach to Space Situational Awareness using Small Ground-Based Telescopes

    Energy Technology Data Exchange (ETDEWEB)

    Anheier, Norman C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Chen, Cliff S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-12-01

    This report discusses a new SSA approach evaluated by Pacific Northwest National Laboratory (PNNL) that may lead to highly scalable, small telescope observing stations designed to help manage the growing space surveillance burden. Using the methods and observing tools described in this report, the team was able to acquire and track very faint satellites (near Pluto’s apparent brightness). Photometric data was collected and used to correlate object orbital position as a function of atomic clock-derived time. Object apparent brightness was estimated by image analysis and nearby star calibration. The measurement performance was only limited by weather conditions, object brightness, and the sky glow at the observation site. In the future, these new SSA technologies and techniques may be utilized to protect satellite assets, detect and monitor orbiting debris fields, and support Outer Space Treaty monitoring and transparency.

  16. Semiclassical moment of inertia shell-structure within the phase-space approach

    International Nuclear Information System (INIS)

    Gorpinchenko, D V; Magner, A G; Bartel, J; Blocki, J P

    2015-01-01

    The moment of inertia for nuclear collective rotations is derived within a semiclassical approach based on the cranking model and the Strutinsky shell-correction method by using the non-perturbative periodic-orbit theory in the phase-space variables. This moment of inertia for adiabatic (statistical-equilibrium) rotations can be approximated by the generalized rigid-body moment of inertia accounting for the shell corrections of the particle density. A semiclassical phase-space trace formula allows us to express the shell components of the moment of inertia quite accurately in terms of the free-energy shell corrections for integrable and partially chaotic Fermi systems, which is in good agreement with the corresponding quantum calculations. (paper)

  17. A semiclassical approach to many-body interference in Fock-space

    Energy Technology Data Exchange (ETDEWEB)

    Engl, Thomas

    2015-11-01

    Many-body systems draw ever more physicists' attention. Such an increase of interest often comes along with the development of new theoretical methods. In this thesis, a non-perturbative semiclassical approach is developed which allows analytic study of many-body interference effects in both bosonic and fermionic Fock space and is expected to be applicable to many research areas in physics, ranging from Quantum Optics and Ultracold Atoms to Solid State Theory and maybe even High Energy Physics. After the derivation of the semiclassical approximation, which is valid in the limit of a large total number of particles, first applications manifesting the presence of many-body interference effects are shown. Some of them are confirmed numerically, thus verifying the semiclassical predictions. Among these results are coherent back-/forward-scattering in bosonic and fermionic Fock space, as well as a many-body spin echo, to name only the two most important ones.

  18. Scale-Dependence of Processes Structuring Dung Beetle Metacommunities Using Functional Diversity and Community Deconstruction Approaches

    Science.gov (United States)

    da Silva, Pedro Giovâni; Hernández, Malva Isabel Medina

    2015-01-01

    Community structure is driven by mechanisms linked to environmental, spatial and temporal processes, which have been successfully addressed using metacommunity framework. The relative importance of processes shaping community structure can be identified using several different approaches. Two approaches that are increasingly being used are functional diversity and community deconstruction. Functional diversity is measured using various indices that incorporate distinct community attributes. Community deconstruction is a way to disentangle species responses to ecological processes by grouping species with similar traits. We used these two approaches to determine whether they are improvements over traditional measures (e.g., species composition, abundance, biomass) for identification of the main processes driving dung beetle (Scarabaeinae) community structure in a fragmented mainland-island landscape in southern Brazilian Atlantic Forest. We sampled five sites in each of four large forest areas, two on the mainland and two on the island. Sampling was performed in 2012 and 2013. We collected abundance and biomass data from 100 sampling points distributed over 20 sampling sites. We studied environmental, spatial and temporal effects on dung beetle community across three spatial scales, i.e., between sites, between areas and mainland-island. The γ-diversity based on species abundance was mainly attributed to β-diversity as a consequence of the increase in mean α- and β-diversity between areas. Variation partitioning on abundance, biomass and functional diversity showed scale-dependence of processes structuring dung beetle metacommunities. We identified two major groups of responses among 17 functional groups. In general, environmental filters were important at both local and regional scales. Spatial factors were important at the intermediate scale. Our study supports the notion of scale-dependence of environmental, spatial and temporal processes in the distribution

  19. Parameter retrieval of chiral metamaterials based on the state-space approach.

    Science.gov (United States)

    Zarifi, Davoud; Soleimani, Mohammad; Abdolali, Ali

    2013-08-01

    This paper introduces an approach for the electromagnetic characterization of homogeneous chiral layers. The proposed method is based on the state-space approach and the properties of a 4×4 state transition matrix. First, the forward-problem analysis through the state-space method is reviewed, and properties of the state transition matrix of a chiral layer are presented and proved as two theorems. The formulation of the proposed electromagnetic characterization method is then presented. In this method, scattering data for a linearly polarized plane wave incident normally on a homogeneous chiral slab are combined with properties of the state transition matrix, providing a powerful characterization method. The main difference with respect to other well-established retrieval procedures based on the use of the scattering parameters lies in the direct computation of the transfer matrix of the slab, as opposed to the conventional calculation of the propagation constant and impedance of the modes supported by the medium. The proposed approach avoids the nonlinearity of the problem but requires enough equations to fulfill the task; these are obtained from the properties of the state transition matrix. To demonstrate the applicability and validity of the method, the constitutive parameters of two well-known dispersive chiral metamaterial structures at microwave frequencies are retrieved. The results show that the proposed method is robust and reliable.
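
    The backbone of such methods can be sketched in a few lines: for a homogeneous layer the tangential-field state vector satisfies dPsi/dz = A Psi, so the state transition matrix over thickness d is the matrix exponential exp(A d). The coupling matrix A below is a random placeholder, not a derived chiral-medium matrix:

        import numpy as np
        from scipy.linalg import expm

        def state_transition(A, d):
            """4x4 state transition matrix of a homogeneous layer of
            thickness d, given its state (coupling) matrix A."""
            return expm(A * d)

        # Layers cascade by multiplying their transition matrices.
        rng = np.random.default_rng(0)
        A = 1j * rng.standard_normal((4, 4))    # placeholder coupling matrix
        M = state_transition(A, 2e-3) @ state_transition(A, 1e-3)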

  20. A NEW FRAMEWORK FOR OBJECT-BASED IMAGE ANALYSIS BASED ON SEGMENTATION SCALE SPACE AND RANDOM FOREST CLASSIFIER

    Directory of Open Access Journals (Sweden)

    A. Hadavand

    2015-12-01

    Full Text Available In this paper a new object-based framework is developed for automated scale selection in image segmentation. The quality of image objects has an important impact on further analyses. Due to the strong dependency of segmentation results on the scale parameter, choosing the best value of this parameter for each class becomes a main challenge in object-based image analysis. We propose a new framework which employs a pixel-based land cover map to estimate the initial scale dedicated to each class. These scales are used to build a segmentation scale space (SSS), a hierarchy of image objects. Optimization of the SSS with respect to NDVI and DSM values in each super-object is used to find the best scale in local regions of the image scene. The optimized SSS segmentations are finally classified to produce the final land cover map. A very high resolution aerial image and a digital surface model provided by the ISPRS 2D semantic labelling dataset are used in our experiments. The result of our proposed method is comparable to that of the ESP tool, a well-known method for estimating the segmentation scale, and marginally improved the overall accuracy of classification from 79% to 80%.
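
    A rough sketch of scanning a segmentation scale space for homogeneous objects, using scikit-image's Felzenszwalb segmentation as a stand-in for multiresolution segmentation and an ESP-style local-variance criterion (illustrative assumptions throughout, not the paper's exact pipeline):

        import numpy as np
        from skimage.segmentation import felzenszwalb

        def mean_object_variance(image, labels):
            """Mean within-object intensity variance (homogeneity proxy)."""
            return np.mean([image[labels == l].var()
                            for l in np.unique(labels)])

        def scan_scale_space(image, scales=(50, 100, 200, 400, 800)):
            """Segment at several candidate scales and track the local
            variance curve; drops in its rate of change flag scales at
            which objects stop gaining internal homogeneity."""
            lv = [mean_object_variance(image, felzenszwalb(image, scale=s))
                  for s in scales]
            roc = np.abs(np.diff(lv)) / np.array(lv[:-1])
            return list(scales), lv, roc.tolist()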

  1. Scales

    Science.gov (United States)

    Scales are a visible peeling or flaking of outer skin layers. These layers are called the stratum ... Scales may be caused by dry skin, certain inflammatory skin conditions, or infections. Examples of disorders that ...

  2. Micro-Scale Gallium Nitride Pressure Sensors for Advanced Harsh Environment Space Technology

    Data.gov (United States)

    National Aeronautics and Space Administration — The goal of this research is to study the high-temperature response of the 2-dimesional electron gas (2DEG) that occurs at the interface of aluminum gallium nitride...

  3. Modeling Impact-induced Failure of Polysilicon MEMS: A Multi-scale Approach.

    Science.gov (United States)

    Mariani, Stefano; Ghisi, Aldo; Corigliano, Alberto; Zerbini, Sarah

    2009-01-01

    Failure of packaged polysilicon micro-electro-mechanical systems (MEMS) subjected to impacts involves phenomena occurring at several length-scales. In this paper we present a multi-scale finite element approach to properly allow for: (i) the propagation of stress waves inside the package; (ii) the dynamics of the whole MEMS; (iii) the spreading of micro-cracking in the failing part(s) of the sensor. Through Monte Carlo simulations, some effects of polysilicon micro-structure on the failure mode are elucidated.

  4. Time-dependent approach to collisional ionization using exterior complex scaling

    International Nuclear Information System (INIS)

    McCurdy, C. William; Horner, Daniel A.; Rescigno, Thomas N.

    2002-01-01

    We present a time-dependent formulation of the exterior complex scaling method that has previously been used to treat electron-impact ionization of the hydrogen atom accurately at low energies. The time-dependent approach solves a driven Schrödinger equation, and scales more favorably with the number of electrons than the original formulation. The method is demonstrated in calculations for breakup processes in two dimensions (2D) and three dimensions for systems involving short-range potentials and in 2D for electron-impact ionization in the Temkin-Poet model for electron-hydrogen atom collisions.

  5. Modified parity space averaging approaches for online cross-calibration of redundant sensors in nuclear reactors

    Directory of Open Access Journals (Sweden)

    Moath Kassim

    2018-05-01

    Full Text Available To maintain the safety and reliability of reactors, redundant sensors are usually used to measure critical variables and estimate their averaged time-dependency. Unhealthy sensors can badly influence the estimation of the process variable. Since online condition monitoring was introduced, the online cross-calibration method has been widely used to detect anomalies in sensor readings within the redundant group. The cross-calibration method has four main averaging techniques: simple averaging, band averaging, weighted averaging, and parity space averaging (PSA). PSA weighs redundant signals based on their error bounds and their band consistency. Using the consistency weighting factor (C), PSA assigns more weight to consistent signals that have shared bands, based on how many bands they share, and gives inconsistent signals very low weight. In this article, three approaches are introduced for improving the PSA technique: the first is to add another consistency factor, called trend consistency (TC), to account for the preservation of any characteristic edge that reflects the behavior of the equipment/component measured by the process parameter; the second approach proposes replacing the error-bound/accuracy-based weighting factor (Wa) with a weighting factor based on the Euclidean distance (Wd); and the third approach proposes applying Wd, TC, and C all together. Cold neutron source data sets of four redundant hydrogen pressure transmitters from a research reactor were used to perform the validation and verification. Results showed that the second and third modified approaches lead to reasonable improvement of the PSA technique. All approaches implemented in this study were similar in that they have the capability to (1) identify and isolate a drifted sensor that should undergo calibration, (2) identify a faulty sensor due to a long and continuous range of missing data, and (3) identify a healthy sensor. Keywords: Nuclear Reactors
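
    A compact sketch of the third modified approach (Wd, TC, and C combined); the weighting formulas below are an illustrative reading of the abstract, not the authors' published expressions:

        import numpy as np

        def fused_estimate(x, acc, prev, alpha=1.0):
            """Fuse redundant sensor readings x (one sample per sensor)
            using (i) band consistency C: how many other sensors' error
            bands contain each reading, (ii) a Euclidean-distance weight Wd
            replacing the accuracy-based weight, and (iii) a trend
            consistency factor TC comparing each sensor's increment with
            the median increment."""
            x, acc, prev = map(np.asarray, (x, acc, prev))
            lo, hi = x - acc, x + acc
            C = np.array([np.sum((lo <= x[i]) & (x[i] <= hi)) - 1
                          for i in range(len(x))])              # shared bands
            d = np.abs(x - np.median(x))
            Wd = 1.0 / (1.0 + d / (np.median(acc) + 1e-12))     # distance weight
            trend = x - prev
            TC = np.exp(-alpha * np.abs(trend - np.median(trend)))
            w = C * Wd * TC
            return np.sum(w * x) / np.sum(w) if w.sum() else np.median(x)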

  6. A probabilistic approach to quantifying spatial patterns of flow regimes and network-scale connectivity

    Science.gov (United States)

    Garbin, Silvia; Alessi Celegon, Elisa; Fanton, Pietro; Botter, Gianluca

    2017-04-01

    The temporal variability of river flow regime is a key feature structuring and controlling fluvial ecological communities and ecosystem processes. In particular, streamflow variability induced by climate/landscape heterogeneities or other anthropogenic factors significantly affects the connectivity between streams with notable implication for river fragmentation. Hydrologic connectivity is a fundamental property that guarantees species persistence and ecosystem integrity in riverine systems. In riverine landscapes, most ecological transitions are flow-dependent and the structure of flow regimes may affect ecological functions of endemic biota (i.e., fish spawning or grazing of invertebrate species). Therefore, minimum flow thresholds must be guaranteed to support specific ecosystem services, like fish migration, aquatic biodiversity and habitat suitability. In this contribution, we present a probabilistic approach aiming at a spatially-explicit, quantitative assessment of hydrologic connectivity at the network-scale as derived from river flow variability. Dynamics of daily streamflows are estimated based on catchment-scale climatic and morphological features, integrating a stochastic, physically based approach that accounts for the stochasticity of rainfall with a water balance model and a geomorphic recession flow model. The non-exceedance probability of ecologically meaningful flow thresholds is used to evaluate the fragmentation of individual stream reaches, and the ensuing network-scale connectivity metrics. A multi-dimensional Poisson Process for the stochastic generation of rainfall is used to evaluate the impact of climate signature on reach-scale and catchment-scale connectivity. The analysis shows that streamflow patterns and network-scale connectivity are influenced by the topology of the river network and the spatial variability of climatic properties (rainfall, evapotranspiration). The framework offers a robust basis for the prediction of the impact of
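
    The reach-level building block reduces to a non-exceedance probability. A minimal sketch, assuming the gamma distribution of daily flows that such stochastic water-balance frameworks typically predict (parameterized here by mean flow and coefficient of variation, both illustrative):

        from scipy import stats

        def non_exceedance(q_threshold, mean_q, cv_q):
            """P(Q < q_threshold) for gamma-distributed daily flows with
            the given mean and coefficient of variation."""
            shape = 1.0 / cv_q**2                 # gamma shape from CV
            scale = mean_q / shape                # gamma scale from mean
            return stats.gamma.cdf(q_threshold, a=shape, scale=scale)

        # A reach would be flagged as fragmented when the flow stays below
        # an ecological threshold too often:
        p_dry = non_exceedance(q_threshold=0.5, mean_q=2.0, cv_q=1.2)  # m3/s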

  7. Subjective evaluation with FAA criteria: A multidimensional scaling approach. [ground track control management

    Science.gov (United States)

    Kreifeldt, J. G.; Parkin, L.; Wempe, T. E.; Huff, E. F.

    1975-01-01

    Perceived orderliness in the ground tracks of five A/C during their simulated flights was studied. Dynamically developing ground tracks for five A/C from 21 separate runs were reproduced from computer storage and displayed on CRTs to professional pilots and controllers for their evaluations and preferences under several criteria. The ground tracks were developed in 20 seconds, as opposed to the 5 minutes of simulated flight, using speedup techniques for display. Metric and nonmetric multidimensional scaling techniques are being used to analyze the subjective responses in an effort to: (1) determine the meaningfulness of basing decisions on such complex subjective criteria; (2) compare pilot/controller perceptual spaces; (3) determine the dimensionality of the subjects' perceptual spaces; and thereby (4) determine objective measures suitable for comparing alternative traffic management simulations.
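
    The analysis step can be sketched with scikit-learn's nonmetric MDS on a precomputed dissimilarity matrix; the random matrix below is a placeholder for the pilots' and controllers' judgments, and a stress-versus-dimensionality scan is a standard way to probe the dimensionality question:

        import numpy as np
        from sklearn.manifold import MDS

        rng = np.random.default_rng(0)
        D = rng.random((21, 21))                  # placeholder judgments
        D = (D + D.T) / 2.0
        np.fill_diagonal(D, 0.0)

        # Stress at increasing dimensionality suggests how many perceptual
        # dimensions the subjects are using.
        for dim in (1, 2, 3):
            mds = MDS(n_components=dim, metric=False,
                      dissimilarity="precomputed", random_state=0)
            mds.fit(D)
            print(dim, round(mds.stress_, 3))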

  8. A potential theory approach to an algorithm of conceptual space partitioning

    Directory of Open Access Journals (Sweden)

    Roman Urban

    2017-12-01

    Full Text Available A potential theory approach to an algorithm of conceptual space partitioning. This paper proposes a new classification algorithm for the partitioning of a conceptual space. All the algorithms which have been used until now have mostly been based on the theory of Voronoi diagrams. This paper proposes an approach based on potential theory, with the criteria for measuring similarities between objects in the conceptual space being based on the Newtonian potential function. The notion of a fuzzy prototype, which generalizes the previous definition of a prototype, is introduced. Furthermore, the necessary conditions that a natural concept must meet are discussed. Instead of convexity, as proposed by Gärdenfors, the notion of geodesically convex sets is used. Thus, if a concept corresponds to a set which is geodesically convex, it is a natural concept. This definition applies, for example, if the conceptual space is a Euclidean space. As a by-product of the construction of the algorithm, an extension of the conceptual space to d-dimensional Riemannian manifolds is obtained.
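
    A minimal sketch of the classification rule, assuming point-cloud prototypes in R^3 and the 1/r Newtonian kernel (the geodesic/Riemannian generalization and fuzzy membership weights are left out):

        import numpy as np

        def classify(x, prototypes, eps=1e-9):
            """Assign point x to the prototype exerting the largest total
            Newtonian potential; each prototype is a (k, 3) point cloud,
            and eps softens the kernel near r = 0."""
            scores = [np.sum(1.0 / (np.linalg.norm(P - x, axis=1) + eps))
                      for P in prototypes]
            return int(np.argmax(scores))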

  9. Neutrino masses, scale-dependent growth, and redshift-space distortions

    Energy Technology Data Exchange (ETDEWEB)

    Hernández, Oscar F., E-mail: oscarh@physics.mcgill.ca [Marianopolis College, 4873 Westmount Ave., Westmount, QC H3Y 1X9 (Canada)

    2017-06-01

    Massive neutrinos leave a unique signature in the large scale clustering of matter. We investigate the wavenumber dependence of the growth factor arising from neutrino masses and use a Fisher analysis to determine the aspects of a galaxy survey needed to measure this scale dependence.

  10. Simulating space-time uncertainty in continental-scale gridded precipitation fields for agrometeorological modelling

    NARCIS (Netherlands)

    Wit, de A.J.W.; Bruin, de S.

    2006-01-01

    Previous analyses of the effects of uncertainty in precipitation fields on the output of EU Crop Growth Monitoring System (CGMS) demonstrated that the influence on simulated crop yield was limited at national scale, but considerable at local and regional scales. We aim to propagate uncertainty due

  11. Playing the Scales: Regional Transformations and the Differentiation of Rural Space in the Chilean Wine Industry

    Science.gov (United States)

    Overton, John; Murray, Warwick E.

    2011-01-01

    Globalization and industrial restructuring transform rural places in complex and often contradictory ways. These involve both quantitative changes, increasing the size and scope of operation to achieve economies of scale, and qualitative shifts, sometimes leading to a shift up the quality/price scale, towards finer spatial resolution and…

  12. Global forward-predicting dynamic routing for traffic concurrency space stereo multi-layer scale-free network

    International Nuclear Information System (INIS)

    Xie Wei-Hao; Zhou Bin; Liu En-Xiao; Lu Wei-Dang; Zhou Ting

    2015-01-01

    Many real communication networks, such as oceanic monitoring networks and land environment observation networks, can be described by a space stereo multi-layer structure, and the traffic in these networks is concurrent. It is therefore necessary to understand how traffic dynamics depend on these real communication networks and to find an effective routing strategy that fits the circumstance of traffic concurrency and enhances network performance. In this light, we propose a traffic model for space stereo multi-layer complex networks and introduce two kinds of global forward-predicting dynamic routing strategies, the global forward-predicting hybrid minimum queue (HMQ) routing strategy and the global forward-predicting hybrid minimum degree and queue (HMDQ) routing strategy, for traffic concurrency space stereo multi-layer scale-free networks. By applying forward-predicting strategies, the proposed routing strategies achieve better performance in traffic concurrency space stereo multi-layer scale-free networks. Compared with the efficient routing strategy and the global dynamic routing strategy, the HMDQ and HMQ routing strategies can optimize the traffic distribution, effectively alleviate the number of congested packets and reach much higher network capacity. (paper)
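
    The hybrid rule at the heart of such strategies can be sketched as a next-hop selection; a genuinely forward-predicting version would use the queue length anticipated at the packet's arrival time rather than the current one, and the weight h and linear mix below are illustrative:

        import networkx as nx

        def next_hop(G, current, queues, h=0.5):
            """HMDQ-flavored choice: among neighbors of the current node,
            pick the one minimizing a mix of queue length and degree."""
            return min(G.neighbors(current),
                       key=lambda n: h * queues[n] + (1 - h) * G.degree(n))

        G = nx.barabasi_albert_graph(200, 3, seed=1)   # one scale-free layer
        queues = {n: 0 for n in G}                     # packets queued per node
        print(next_hop(G, current=0, queues=queues))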

  13. An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling

    Directory of Open Access Journals (Sweden)

    Theodore W. Manikas

    2011-02-01

    Full Text Available Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. We accomplish the incorporation of disaster tolerance within a system by simulating various threats to the system operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to the large computational complexity requirements. To address this limitation, an axiomatic approach that decomposes a large-scale system into smaller subsystems is developed that allows the subsystems to be independently modeled. This approach is implemented using a data communications network system example. The results indicate that the decomposition approach produces simulation responses that are similar to the full system approach, but with greatly reduced simulation time.

  14. Orotracheal Intubation Using the Retromolar Space: A Reliable Alternative Intubation Approach to Prevent Dental Injury

    Directory of Open Access Journals (Sweden)

    Linh T. Nguyen

    2016-01-01

    Full Text Available Despite recent advances in airway management, perianesthetic dental injury remains one of the most common anesthesia-related adverse events and cause for malpractice litigation against anesthesia providers. Recommended precautions for prevention of dental damage may not always be effective because these techniques involve contact and pressure exerted on vulnerable teeth. We describe a novel approach using the retromolar space to insert a flexible fiberscope for tracheal tube placement as a reliable method to achieve atraumatic tracheal intubation. Written consent for publication has been obtained from the patient.

  15. A study of space shuttle energy management, approach and landing analysis

    Science.gov (United States)

    Morth, R.

    1973-01-01

    The steering system of the space shuttle vehicle is presented for the several hundred miles of flight preceding landing. The guidance scheme is characterized by a spiral turn to dissipate excess potential energy (altitude) prior to a standard straight-in final approach. In addition, the system features pilot oriented control, drag brakes, phugoid damping, and a navigational capacity founded upon an inertial measurement unit and an on-board computer. Analytic formulas are used to calculate, represent, and insure the workability of the system's specifications

  16. Polymer density functional theory approach based on scaling second-order direct correlation function.

    Science.gov (United States)

    Zhou, Shiqi

    2006-06-01

    A second-order direct correlation function (DCF), obtained by solving the polymer-RISM integral equation, is scaled up or down by an equation of state for the bulk polymer; the resulting scaled second-order DCF is in better agreement with the corresponding simulation results than the unscaled one. When the scaled second-order DCF is imported into a recently proposed LTDFA-based polymer DFT approach, an originally adjustable but mathematically meaningless parameter becomes mathematically meaningful, i.e., its numerical value now lies between 0 and 1. When the adjustable-parameter-free version of the LTDFA is used instead, i.e., the adjustable parameter is fixed at 0.5, the resulting parameter-free version of the scaling LTDFA-based polymer DFT is also in good agreement with the corresponding simulation data for density profiles. The parameter-free version of the scaling LTDFA-based polymer DFT is employed to investigate the density profiles of a freely jointed tangent hard-sphere chain near a variable-sized central hard sphere; again the predictions accurately reproduce the simulation results. The importance of the present adjustable-parameter-free version lies in its combination with a recently proposed universal theoretical way; in the resulting formalism, the contact theorem is still met by the adjustable parameter associated with the theoretical way.

  17. Environmental Remediation Full-Scale Implementation: Back to Simple Microbial Massive Culture Approaches

    Directory of Open Access Journals (Sweden)

    Agung Syakti

    2010-10-01

    Full Text Available Bioaugmentation and biostimulation approaches to the bioremediation of contaminated soil were investigated and implemented at field scale. We combined these approaches by massively culturing petrophilic indigenous microorganisms from chronically contaminated soil enriched with mixed manure. With these methods, bioremediation performance showed promising results in removing petroleum hydrocarbons, compared with approaches relying on metabolic by-products such as biosurfactants, specific enzymes and other extracellular products, which are considered difficult to produce and will increase costs.

  18. A new approach to reduce uncertainties in space radiation cancer risk predictions.

    Directory of Open Access Journals (Sweden)

    Francis A Cucinotta

    Full Text Available The prediction of space radiation induced cancer risk carries large uncertainties, with two of the largest being radiation quality and dose-rate effects. In risk models, the ratio of the quality factor (QF) to the dose and dose-rate reduction effectiveness factor (DDREF) is used to scale organ doses for cosmic ray protons and high charge and energy (HZE) particles to a hazard rate for γ-rays derived from human epidemiology data. In previous work, particle track structure concepts were used to formulate a space radiation QF function that is dependent on particle charge number Z and kinetic energy per atomic mass unit, E. QF uncertainties were represented by subjective probability distribution functions (PDFs) for the three QF parameters that describe its maximum value and the shape parameters for the Z and E dependences. Here I report on an analysis of a maximum QF parameter and its uncertainty using mouse tumor induction data. Because experimental data for risks at low doses of γ-rays are highly uncertain, which impacts estimates of maximum values of relative biological effectiveness (RBEmax), I developed an alternate QF model, denoted QFγAcute, where QFs are defined relative to higher acute γ-ray doses (0.5 to 3 Gy). The alternate model reduces the dependence of risk projections on the DDREF; however, a DDREF is still needed for risk estimates for high-energy protons and other primary or secondary sparsely ionizing space radiation components. Risk projections (upper confidence levels, CL) for space missions show a reduction of about 40% (CL ∼50%) using the QFγAcute model compared to the QFs based on RBEmax, and about 25% (CL ∼35%) compared to previous estimates. In addition, I discuss how a possible qualitative difference leading to increased tumor lethality for HZE particles compared to low-LET radiation and background tumors remains a large uncertainty in risk estimates.

  19. The Universal Patient Centredness Questionnaire: scaling approaches to reduce positive skew

    Directory of Open Access Journals (Sweden)

    Bjertnaes O

    2016-11-01

    Full Text Available Oyvind Bjertnaes, Hilde Hestad Iversen, Andrew M Garratt Unit for Patient-Reported Quality, Norwegian Institute of Public Health, Oslo, Norway Purpose: Surveys of patients’ experiences typically show results that are indicative of positive experiences. Unbalanced response scales have reduced positive skew for responses to items within the Universal Patient Centeredness Questionnaire (UPC-Q). The objective of this study was to compare the unbalanced response scale with another unbalanced approach to scaling to assess whether the positive skew might be further reduced. Patients and methods: The UPC-Q was included in a patient experience survey conducted at the ward level at six hospitals in Norway in 2015. The postal survey included two reminders to nonrespondents. For patients in the first month of inclusion, UPC-Q items had standard scaling: poor, fairly good, good, very good, and excellent. For patients in the second month, the scaling was more positive: poor, good, very good, exceptionally good, and excellent. The effect of scaling on UPC-Q scores was tested with independent-samples t-tests and multilevel linear regression analysis, the latter controlling for the hierarchical structure of the data and known predictors of patient-reported experiences. Results: The response rate was 54.6% (n=4,970). Significantly lower scores were found for all items of the more positively worded scale: the UPC-Q total score difference was 7.9 (P<0.001), on a scale from 0 to 100 where 100 is the best possible score. Differences between the four items of the UPC-Q ranged from 7.1 (P<0.001) to 10.4 (P<0.001). Multivariate multilevel regression analysis confirmed the difference between the response groups, after controlling for other background variables; the UPC-Q total score difference estimate was 8.3 (P<0.001). Conclusion: The more positively worded scaling significantly lowered the mean scores, potentially increasing the sensitivity of the UPC-Q to identify differences over

  20. BRST quantization of Yang-Mills theory: A purely Hamiltonian approach on Fock space

    Science.gov (United States)

    Öttinger, Hans Christian

    2018-04-01

    We develop the basic ideas and equations for the BRST quantization of Yang-Mills theories in an explicit Hamiltonian approach, without any reference to the Lagrangian approach at any stage of the development. We present a new representation of ghost fields that combines desirable self-adjointness properties with canonical anticommutation relations for ghost creation and annihilation operators, thus enabling us to characterize the physical states on a well-defined Fock space. The Hamiltonian is constructed by piecing together simple BRST invariant operators to obtain a minimal invariant extension of the free theory. It is verified that the evolution equations implied by the resulting minimal Hamiltonian provide a quantum version of the classical Yang-Mills equations. The modifications and requirements for the inclusion of matter are discussed in detail.

  1. Researcher’s Academic Culture in the Educational Space of the University: Linguo-Axiological Approach

    Directory of Open Access Journals (Sweden)

    Olena Semenog

    2017-06-01

    Full Text Available The article is devoted to the nature of the concepts “classic University”, “cultural and educational space of the University”, “research activity of future professional” and “researcher’s academic culture”, and to an approach to academic culture as the basis of research culture in a university. The concept of academic culture is shown to be complex. It encompasses, in general, the culture of the university: values, traditions, norms and rules of scientific research; the culture of the scientific language; the culture of spirituality and morality; the culture of communication between science tutors and students; and the culture of the unique pedagogical action of the master and of his social and moral responsibility for the results of study. The formation of academic culture and of one’s own style is best developed from the standpoints of the personal-activity, competence, axiological, cultural and acmeological approaches.

  2. Implementation of an Open-Scenario, Long-Term Space Debris Simulation Approach

    Science.gov (United States)

    Nelson, Bron; Yang Yang, Fan; Carlino, Roberto; Dono Perez, Andres; Faber, Nicolas; Henze, Chris; Karacalioglu, Arif Goktug; O'Toole, Conor; Swenson, Jason; Stupl, Jan

    2015-01-01

    This paper provides a status update on the implementation of a flexible, long-term space debris simulation approach. The motivation is to build a tool that can assess the long-term impact of various options for debris-remediation, including the LightForce space debris collision avoidance concept that diverts objects using photon pressure [9]. State-of-the-art simulation approaches that assess the long-term development of the debris environment use either completely statistical approaches, or they rely on large time steps on the order of several days if they simulate the positions of single objects over time. They cannot be easily adapted to investigate the impact of specific collision avoidance schemes or de-orbit schemes, because the efficiency of a collision avoidance maneuver can depend on various input parameters, including ground station positions and orbital and physical parameters of the objects involved in close encounters (conjunctions). Furthermore, maneuvers take place on timescales much smaller than days. For example, LightForce only changes the orbit of a certain object (aiming to reduce the probability of collision), but it does not remove entire objects or groups of objects. In the same sense, it is also not straightforward to compare specific de-orbit methods in regard to potential collision risks during a de-orbit maneuver. To gain flexibility in assessing interactions with objects, we implement a simulation that includes every tracked space object in Low Earth Orbit (LEO) and propagates all objects with high precision and variable time-steps as small as one second. It allows the assessment of the (potential) impact of physical or orbital changes to any object. The final goal is to employ a Monte Carlo approach to assess the debris evolution during the simulation time-frame of 100 years and to compare a baseline scenario to debris remediation scenarios or other scenarios of interest. To populate the initial simulation, we use the entire space

  3. The management approach to the NASA space station definition studies at the Manned Spacecraft Center

    Science.gov (United States)

    Heberlig, J. C.

    1972-01-01

    The overall management approach to the NASA Phase B definition studies for space stations, which were initiated in September 1969 and completed in July 1972, is reviewed with particular emphasis placed on the management approach used by the Manned Spacecraft Center. The internal working organizations of the Manned Spacecraft Center and its prime contractor, North American Rockwell, are delineated along with the interfacing techniques used for the joint Government and industry study. Working interfaces with other NASA centers, industry, and Government agencies are briefly highlighted. The controlling documentation for the study (such as guidelines and constraints, bibliography, and key personnel) is reviewed. The historical background and content of the experiment program prepared for use in this Phase B study are outlined and management concepts that may be considered for future programs are proposed.

  4. Approaches to 30 Percent Energy Savings at the Community Scale in the Hot-Humid Climate

    Energy Technology Data Exchange (ETDEWEB)

    Thomas-Rees, S. [Building America Partnership for Improved Residential Construction (BA-PIRC), Cocoa, FL (United States); Beal, D. [Building America Partnership for Improved Residential Construction (BA-PIRC), Cocoa, FL (United States); Martin, E. [Building America Partnership for Improved Residential Construction (BA-PIRC), Cocoa, FL (United States)

    2013-03-01

    BA-PIRC has worked with several community-scale builders within the hot-humid climate zone to improve performance of production, or community-scale, housing. Tommy Williams Homes (Gainesville, FL), Lifestyle Homes (Melbourne, FL), and Habitat for Humanity (various locations, FL) have all been continuous partners of the Building America program and are the subjects of this report, which documents achievement of the Building America goal of 30% whole-house energy savings with packages adopted at the community scale. Key aspects of this research include determining how to evolve existing energy efficiency packages to produce replicable target savings, identifying builders' technical assistance needs for implementation and working with them to create sustainable quality assurance mechanisms, and documenting commercial viability through neutral cost analysis and market acceptance. This report documents certain barriers builders overcame and the approaches they implemented in order to accomplish Building America (BA) Program goals that have not already been documented in previous reports.

  5. FEM × DEM: a new efficient multi-scale approach for geotechnical problems with strain localization

    Directory of Open Access Journals (Sweden)

    Nguyen Trung Kien

    2017-01-01

    Full Text Available The paper presents a multi-scale approach to modeling Boundary Value Problems (BVPs) involving cohesive-frictional granular materials within the FEM × DEM multi-scale framework. On the DEM side, a 3D model is defined based on the interactions of spherical particles. This DEM model is built through a numerical homogenization process applied to a Volume Element (VE). It is then paired with a Finite Element code. Using this numerical tool, which combines the two scales within the same framework, we conducted simulations of biaxial and pressuremeter tests on a cohesive-frictional granular medium. In these cases, it is known that strain localization occurs at the macroscopic level, but since FEMs suffer from severe mesh dependency as soon as a shear band starts to develop, the second gradient regularization technique has been used. As a consequence, the objectivity of the computation with respect to mesh dependency is restored.
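
    A heavily simplified picture of the two-scale loop is sketched below: at each macro integration point, a stand-in volume-element routine returns stress and tangent for the local strain, playing the role of the DEM homogenization. The 1D bilinear law and all numbers are invented for illustration.

```python
import numpy as np

# Toy 1D analogue of the FEM x DEM loop described above.
def ve_response(strain):
    """Stand-in for the DEM volume-element homogenization: returns
    stress and tangent stiffness for a given macro strain. The bilinear
    softening law mimics cohesive-frictional behaviour (invented)."""
    E, strain_y = 10.0e9, 1.0e-3
    if abs(strain) <= strain_y:
        return E * strain, E                              # elastic branch
    return np.sign(strain) * 0.5 * E * strain_y, 0.1 * E  # softened branch

n_elem, length, area = 4, 1.0, 1.0e-4      # trivial macro discretization
u_end = 0.0
for step in range(10):
    u_end += 2.0e-4 * length               # prescribed end displacement
    strain = u_end / length                # uniform in this trivial toy
    # "Gauss point" loop: ask the micro model for the local stress.
    stresses = [ve_response(strain)[0] for _ in range(n_elem)]
    force = np.mean(stresses) * area
    print(f"step {step}: strain={strain:.1e}, axial force={force:.1f} N")
```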

  6. “HABITAT MAPPING” GEODATABASE, AN INTEGRATED INTERDISCIPLINARY AND MULTI-SCALE APPROACH FOR DATA MANAGEMENT

    OpenAIRE

    Grande, Valentina; Angeletti, Lorenzo; Campiani, Elisabetta; Conese, Ilaria; Foglini, Federica; Leidi, Elisa; Mercorella, Alessandra; Taviani, Marco

    2016-01-01

    Historically, a number of different key concepts and methods dealing with marine habitat classification and mapping have been developed. The EU CoCoNET project provides a new attempt at establishing an integrated approach to the definition of habitats. This scheme combines multi-scale geological and biological data; it consists of three levels (Geomorphological level, Substrate level and Biological level), which in turn are divided into several h...

  7. A multiple-time-scale approach to the control of ITBs on JET

    Energy Technology Data Exchange (ETDEWEB)

    Laborde, L.; Mazon, D.; Moreau, D. [EURATOM-CEA Association (DSM-DRFC), CEA Cadarache, 13 - Saint Paul lez Durance (France); Moreau, D. [Culham Science Centre, EFDA-JET, Abingdon, OX (United Kingdom); Ariola, M. [EURATOM/ENEA/CREATE Association, Univ. Napoli Federico II, Napoli (Italy); Cordoliani, V. [Ecole Polytechnique, 91 - Palaiseau (France); Tala, T. [EURATOM-Tekes Association, VTT Processes (Finland)

    2005-07-01

    The simultaneous real-time control of the current and temperature gradient profiles could lead to the steady-state sustainment of an internal transport barrier (ITB) and thus to a stationary optimized plasma regime. Recent experiments on JET have demonstrated significant progress in achieving such control: different current and temperature gradient target profiles have been reached and sustained for several seconds using a controller based on a static linear model. It is worth noting that the inverse safety factor profile evolves on a slow time scale (the resistive time) while the normalized electron temperature gradient reacts on a faster one (the confinement time). Moreover, these experiments have shown that the controller was sensitive to rapid plasma events, such as transient ITBs during the safety factor profile evolution or MHD instabilities, which modify the pressure profiles on the confinement time scale. In order to take into account the different dynamics of the controlled profiles and to react better to rapid plasma events, the control technique is being improved by using a multiple-time-scale approximation. The paper describes the theoretical analysis and closed-loop simulations using a control algorithm based on a two-time-scale state-space model. These closed-loop simulations, in which the full dynamic (but linear) model used for the controller design also simulates the plasma response, demonstrate that the new controller allows the normalized electron temperature gradient target profile to be reached faster than with the controller used in previous experiments. (A.C.)
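
    To make the time-scale separation concrete, here is a schematic two-state linear state-space toy (illustrative constants only, not JET parameters): the slow state mimics the resistive evolution of the safety factor profile, the fast state the confinement-time response of the temperature gradient.

```python
import numpy as np

# Two-time-scale linear state-space toy; all values are invented.
tau_slow, tau_fast = 10.0, 0.1               # seconds
A = np.diag([-1.0 / tau_slow, -1.0 / tau_fast])
B = np.array([0.5, 1.0])

dt, t_end = 0.01, 30.0
x = np.zeros(2)
u = 1.0                                      # constant actuator demand
for _ in range(int(t_end / dt)):
    x = x + dt * (A @ x + B * u)             # forward-Euler integration

# The fast state settled long ago at 0.1; the slow state is still
# approaching its steady value of 5.0.
print("[slow, fast] after 30 s:", x)
```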

  8. A multiple-time-scale approach to the control of ITBs on JET

    International Nuclear Information System (INIS)

    Laborde, L.; Mazon, D.; Moreau, D.; Moreau, D.; Ariola, M.; Cordoliani, V.; Tala, T.

    2005-01-01

    The simultaneous real-time control of the current and temperature gradient profiles could lead to the steady-state sustainment of an internal transport barrier (ITB) and thus to a stationary optimized plasma regime. Recent experiments on JET have demonstrated significant progress in achieving such control: different current and temperature gradient target profiles have been reached and sustained for several seconds using a controller based on a static linear model. It is worth noting that the inverse safety factor profile evolves on a slow time scale (the resistive time) while the normalized electron temperature gradient reacts on a faster one (the confinement time). Moreover, these experiments have shown that the controller was sensitive to rapid plasma events, such as transient ITBs during the safety factor profile evolution or MHD instabilities, which modify the pressure profiles on the confinement time scale. In order to take into account the different dynamics of the controlled profiles and to react better to rapid plasma events, the control technique is being improved by using a multiple-time-scale approximation. The paper describes the theoretical analysis and closed-loop simulations using a control algorithm based on a two-time-scale state-space model. These closed-loop simulations, in which the full dynamic (but linear) model used for the controller design also simulates the plasma response, demonstrate that the new controller allows the normalized electron temperature gradient target profile to be reached faster than with the controller used in previous experiments. (A.C.)

  9. The systems approach for applying artificial intelligence to space station automation (Invited Paper)

    Science.gov (United States)

    Grose, Vernon L.

    1985-12-01

    The progress of technology is marked by fragmentation -- dividing research and development into ever narrower fields of specialization. Ultimately, specialists know everything about nothing. And hope for integrating those slender slivers of specialty into a whole fades. Without an integrated, all-encompassing perspective, technology becomes applied in a lopsided and often inefficient manner. A decisionary model, developed and applied for NASA's Chief Engineer toward establishment of commercial space operations, can be adapted to the identification, evaluation, and selection of optimum application of artificial intelligence for space station automation -- restoring wholeness to a situation that is otherwise chaotic due to increasing subdivision of effort. Issues such as functional assignments for space station task, domain, and symptom modules can be resolved in a manner understood by all parties rather than just the person with assigned responsibility -- and ranked by overall significance to mission accomplishment. Ranking is based on the three basic parameters of cost, performance, and schedule. This approach has successfully integrated many diverse specialties in situations like worldwide terrorism control, coal mining safety, medical malpractice risk, grain elevator explosion prevention, offshore drilling hazards, and criminal justice resource allocation -- all of which would have otherwise been subject to "squeaky wheel" emphasis and support of decision-makers.

  10. [Optimize dropping process of Ginkgo biloba dropping pills by using design space approach].

    Science.gov (United States)

    Shen, Ji-Chen; Wang, Qing-Qing; Chen, An; Pan, Fang-Lai; Gong, Xing-Chu; Qu, Hai-Bin

    2017-07-01

    In this paper, a design space approach was applied to optimize the dropping process of Ginkgo biloba dropping pills. Firstly, potential critical process parameters and potential process critical quality attributes were determined through literature research and pre-experiments. Secondly, experiments were carried out according to Box-Behnken design. Then the critical process parameters and critical quality attributes were determined based on the experimental results. Thirdly, second-order polynomial models were used to describe the quantitative relationships between critical process parameters and critical quality attributes. Finally, a probability-based design space was calculated and verified. The verification results showed that efficient production of Ginkgo biloba dropping pills can be guaranteed by operating within the design space parameters. The recommended operation ranges for the critical dropping process parameters of Ginkgo biloba dropping pills were as follows: dropping distance of 5.5-6.7 cm, and dropping speed of 59-60 drops per minute, providing a reference for industrial production of Ginkgo biloba dropping pills. Copyright© by the Chinese Pharmaceutical Association.
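
    The workflow described above can be sketched in a few lines: fit a second-order polynomial response surface to design-of-experiments data, then estimate, for each candidate operating point, the probability that the quality attribute meets its specification. The data, variable names, and specification limit below are all invented for illustration.

```python
import numpy as np

# Sketch: quadratic response surface + probability-based design space.
rng = np.random.default_rng(1)
dist = rng.uniform(4.0, 8.0, 15)         # dropping distance, cm
speed = rng.uniform(50.0, 70.0, 15)      # dropping speed, drops/min
# Synthetic quality attribute with an optimum near (6 cm, 60 drops/min).
cqa = 90 - (dist - 6.0)**2 - 0.05*(speed - 60.0)**2 + rng.normal(0, 0.5, 15)

# Second-order polynomial model fitted by least squares.
X = np.column_stack([np.ones_like(dist), dist, speed,
                     dist**2, speed**2, dist*speed])
beta, *_ = np.linalg.lstsq(X, cqa, rcond=None)
resid = cqa - X @ beta
sigma = np.sqrt((resid**2).sum() / (len(cqa) - X.shape[1]))

def prob_in_spec(d, s, spec=88.0, n=2000):
    """Monte Carlo probability that the CQA meets the (invented) spec."""
    x = np.array([1.0, d, s, d*d, s*s, d*s])
    pred = x @ beta + rng.normal(0, sigma, n)
    return (pred >= spec).mean()

# Probability map over a grid; the design space is where p is high enough.
for d in np.linspace(4.5, 7.5, 4):
    print([round(prob_in_spec(d, s), 2) for s in np.linspace(52, 68, 4)])
```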

  11. Third International Scientific and Practical Conference «Space Travel is Approaching Reality» (Successful Event in Difficult Times)

    Directory of Open Access Journals (Sweden)

    Matusevych Tetiana

    2015-02-01

    Full Text Available The article analyzes the presentations of participants of the III International Scientific and Practical Conference «Space Travel – Approaching Reality», held on 6–7 November 2014 in Kharkiv, Ukraine.

  12. COGNITIVE APPROACH TO THE STEREOTYPICAL PLACEMENT OF WOMEN IN VISUAL ADVERTISING SPACE

    Directory of Open Access Journals (Sweden)

    Simona Amankevičiūtė

    2013-10-01

    Full Text Available This article conceptualizes the image of women in the sexist advertisements of the 1950s and 60s and in current advertising discourse by combining the research traditions of cognitive linguistics and semiotic image analysis. The aim of the research is to evaluate how canonical positionings of women in the hyperreality of advertisements may slip into everyday discourse (stereotype space) and to present an interpretation of the creators’ visual lexicon. It is presumed that the traditional approach to sexist advertising, formed by feminist linguists, as an expression of an androcentric worldview in culture may be considered too subjectively critical. This study complements an interpretation of women’s social roles in advertising with cognitive-linguistic insights on the subject’s (woman’s) visualisation and positioning in ad space. The article briefly overviews the feminist approach to women’s place in public discourse and discusses the relevance of Goffman’s gender studies to an investigation of women’s images in advertising. The scholar’s contribution to adapting cognitive frame theory for the investigation of visuals in advertising is also discussed. The analysed ads were divided into three groups according to Goffman’s classification of the concrete visuals used to represent women’s bodies or parts thereof: dismemberment, commodification, and subordination ritual. The classified stereotypical images of women’s bodies are discussed as visual metonymy, visual metaphor, and image schemas.

  13. Contaminant ingress into multizone buildings: An analytical state-space approach

    KAUST Repository

    Parker, Simon

    2013-08-13

    The ingress of exterior contaminants into buildings is often assessed by treating the building interior as a single well-mixed space. Multizone modelling provides an alternative way of representing buildings that can estimate concentration time series in different internal locations. A state-space approach is adopted to represent the concentration dynamics within multizone buildings. Analysis based on this approach is used to demonstrate that the exposure in every interior location is limited to the exterior exposure in the absence of removal mechanisms. Estimates are also developed for the short term maximum concentration and exposure in a multizone building in response to a step-change in concentration. These have considerable potential for practical use. The analytical development is demonstrated using a simple two-zone building with an inner zone and a range of existing multizone models of residential buildings. Quantitative measures are provided of the standard deviation of concentration and exposure within a range of residential multizone buildings. Ratios of the maximum short term concentrations and exposures to single zone building estimates are also provided for the same buildings. © 2013 Tsinghua University Press and Springer-Verlag Berlin Heidelberg.
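
    As a quick illustration of the state-space formulation, consider a toy two-zone example (with invented air-exchange rates, not the paper's buildings): the interior concentration vector C obeys dC/dt = A C + b c_ext, and the response to a step change in exterior concentration can be propagated with the exact discrete-time transition matrix.

```python
import numpy as np
from scipy.linalg import expm

# Two-zone state-space toy: zone 1 exchanges with outside, zone 2 is an
# inner zone. Air-exchange rates (per hour) are invented.
q_ext, q_12 = 1.0, 0.5
A = np.array([[-(q_ext + q_12), q_12],
              [q_12,           -q_12]])
b = np.array([q_ext, 0.0])

c_ext = 1.0                      # step change in exterior concentration
dt, n = 0.05, 200                # 0.05 h steps over 10 hours
C = np.zeros(2)
Phi = expm(A * dt)               # exact discrete-time transition matrix
# Zero-order-hold input matrix: A^-1 (Phi - I) b
G = np.linalg.solve(A, Phi - np.eye(2)) @ b
for _ in range(n):
    C = Phi @ C + G * c_ext

print("interior concentrations after 10 h:", C)  # both approach c_ext
```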

  14. Resolving kinematic redundancy with constraints using the FSP (Full Space Parameterization) approach

    International Nuclear Information System (INIS)

    Pin, F.G.; Tulloch, F.A.

    1996-01-01

    A solution method is presented for the motion planning and control of kinematically redundant serial-link manipulators in the presence of motion constraints such as joint limits or obstacles. Given a trajectory for the end-effector, the approach utilizes the recently proposed Full Space Parameterization (FSP) method to generate a parameterized expression for the entire space of solutions of the unconstrained system. At each time step, a constrained optimization technique is then used to analytically find the specific joint motion solution that satisfies the desired task objective and all the constraints active during the time step. The method is applicable to systems operating in a priori known environments or in unknown environments with sensor-based obstacle detection. The derivation of the analytical solution is first presented for a general type of kinematic constraint and is then applied to the problem of motion planning for redundant manipulators with joint limits and obstacle avoidance. Sample results using planar and 3-D manipulators with various degrees of redundancy are presented to illustrate the efficiency and wide applicability of constrained motion planning using the FSP approach.
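
    For contrast with the FSP formulation, here is a generic null-space redundancy-resolution sketch (not the FSP method itself) for a planar 3-link arm: the Jacobian pseudoinverse solves the end-effector task, and the null-space projector pursues a secondary joint-limit-avoidance objective. Link lengths, gains, and the target are invented.

```python
import numpy as np

L = np.array([0.4, 0.3, 0.2])                 # link lengths, m
q = np.array([0.3, 0.4, 0.5])                 # joint angles, rad
q_mid = np.zeros(3)                           # "away from limits" posture

def fk(q):
    """End-effector position of the planar 3-link arm."""
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)),
                     np.sum(L * np.sin(angles))])

def jacobian(q):
    angles = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(angles[i:]))
        J[1, i] =  np.sum(L[i:] * np.cos(angles[i:]))
    return J

target = np.array([0.5, 0.4])
for _ in range(200):
    J = jacobian(q)
    J_pinv = np.linalg.pinv(J)
    err = target - fk(q)
    # Primary task + null-space motion toward joint-range midpoints.
    dq = J_pinv @ err + (np.eye(3) - J_pinv @ J) @ (0.1 * (q_mid - q))
    q = q + 0.2 * dq

print("residual position error:", np.linalg.norm(target - fk(q)))
```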

  15. Mentoring SFRM: A New Approach to International Space Station Flight Controller Training

    Science.gov (United States)

    Huning, Therese; Barshi, Immanuel; Schmidt, Lacey

    2008-01-01

    The Mission Operations Directorate (MOD) of the Johnson Space Center is responsible for providing continuous operations support for the International Space Station (ISS). Operations support requires flight controllers who are skilled in team performance as well as in the technical operations of the ISS. Space Flight Resource Management (SFRM), a NASA-adapted variant of Crew Resource Management (CRM), is the competency model used in the MOD. ISS flight controller certification has evolved to include a balanced focus on the development of SFRM and technical expertise. The latest challenge the MOD faces is how to certify an ISS flight controller (operator) to a basic level of effectiveness in one year. SFRM training uses a two-pronged approach to expediting operator certification: 1) imbed SFRM skills training into all operator technical training and 2) use senior flight controllers as mentors. This paper focuses on how the MOD uses senior flight controllers as mentors to train SFRM skills. Methods: A mentor works with an operator throughout the training flow. Inserted into the training flow are guided-discussion sessions and on-the-job observation opportunities focusing on specific SFRM skills, including: situational leadership, conflict management, stress management, cross-cultural awareness, self-care and team care while on console, communication, workload management, and situation awareness. The mentor and operator discuss the science and art behind the skills, cultural effects on skills application, recognition of good and bad skills application, recognition of how skills application changes subtly in different situations, and individual goals and techniques for improving skills. Discussion: This mentoring program provides an additional means of transferring SFRM knowledge compared to traditional CRM training programs. Our future endeavors in training SFRM skills (as well as other organizations') may benefit from adding team performance skills mentoring. This paper

  16. Role of jet spacing and strut geometry on the formation of large scale structures and mixing characteristics

    Science.gov (United States)

    Soni, Rahul Kumar; De, Ashoke

    2018-05-01

    The present study primarily focuses on the effect of jet spacing and strut geometry on the evolution and structure of the large-scale vortices which play a key role in the mixing characteristics of turbulent supersonic flows. Numerically simulated results corresponding to varying parameters such as strut geometry and jet spacing (Xn = nDj, with n = 2, 3, and 5) for a square jet of height Dj = 0.6 mm are presented. The work also establishes local quasi-two-dimensionality for the X2 (2Dj) jet spacing; the same does not hold for larger jet spacings. Further, the tapered strut (TS) section is modified into a straight strut (SS) for investigation, and a remarkable difference in flow physics is revealed between the two configurations at the same jet spacing (X2: 2Dj). The instantaneous density and vorticity contours reveal structures of varying scales undergoing different evolutions in the different configurations. The effect of local spanwise rollers is clearly manifested in the mixing efficiency and the jet spreading rate. The SS configuration exhibits excellent near-field mixing behavior among all the arrangements. In the TS cases, however, only the X2 (2Dj) configuration performs better, owing to the presence of local spanwise rollers. The qualitative and quantitative analysis reveals that near-field mixing is strongly affected by the two-dimensional rollers, while the early onset of the wake mode is another crucial parameter for improved mixing. Modal decomposition performed for the SS arrangement sheds light on the spatial and temporal coherence of the structures, where the most dominant structures are found to be the von Kármán street vortices in the wake region.

  17. Small-Scale Design Experiments as Working Space for Larger Mobile Communication Challenges

    Science.gov (United States)

    Lowe, Sarah; Stuedahl, Dagny

    2014-01-01

    In this paper, a design experiment using Instagram as a cultural probe is submitted as a method for analyzing the challenges that arise when considering the implementation of social media within a distributed communication space. It outlines how small, iterative investigations can reveal deeper research questions relevant to the education of…

  18. Task-space separation principle: a force-field approach to motion planning for redundant manipulators.

    Science.gov (United States)

    Tommasino, Paolo; Campolo, Domenico

    2017-02-03

    In this work, we address human-like motor planning in redundant manipulators. Specifically, we want to capture postural synergies such as Donders' law, experimentally observed in humans during kinematically redundant tasks, and infer a minimal set of parameters to implement similar postural synergies in a kinematic model. For the model itself, although the focus of this paper is to solve redundancy by implementing postural strategies derived from experimental data, we also want to ensure that such postural control strategies do not interfere with other possible forms of motion control (in the task space), i.e. solving the posture/movement problem. The redundancy problem is framed as a constrained optimization problem, traditionally solved via the method of Lagrange multipliers. The posture/movement problem can be tackled via the separation principle which, derived from experimental evidence, posits that the brain processes static torques (i.e. posture-dependent, such as gravitational torques) separately from dynamic torques (i.e. velocity-dependent). The separation principle has traditionally been applied at a joint torque level. Our main contribution is to apply the separation principle to Lagrange multipliers, which act as task-space force fields, leading to a task-space separation principle. In this way, we can separate postural control (implementing Donders' law) from various types of task-space movement planners. As an example, the proposed framework is applied to the (redundant) task of pointing with the human wrist. Nonlinear inverse optimization (NIO) is used to fit the model parameters and to capture motor strategies displayed by six human subjects during pointing tasks. The novelty of our NIO approach is that (i) the fitted motor strategy, rather than raw data, is used to filter and down-sample human behaviours; (ii) our framework is used to efficiently simulate model behaviour iteratively, until it converges towards the experimental human strategies.
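
    Schematically, the constrained-optimization framing can be written as follows (a generic sketch, not the paper's exact notation): minimizing a postural cost V(q) subject to the task constraint f(q) = x gives a Lagrangian whose stationarity condition exposes the multipliers as a task-space force field.

```latex
% Schematic: V(q) is a postural cost, f(q) the task map, J = \partial f / \partial q.
\min_{q}\; V(q) \quad \text{subject to} \quad f(q) = x
\;\;\Rightarrow\;\;
\mathcal{L}(q,\lambda) = V(q) + \lambda^{\top}\bigl(x - f(q)\bigr),
\qquad
\nabla_{q} V(q) = J(q)^{\top}\lambda .
```

    Since J(q)^T λ has the form of a task-space force mapped into joint space, the multipliers λ act as a task-space force field; splitting λ into static (posture-dependent) and dynamic (velocity-dependent) parts then yields the task-space separation principle described above.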

  19. A Scale-up Approach for Film Coating Process Based on Surface Roughness as the Critical Quality Attribute.

    Science.gov (United States)

    Yoshino, Hiroyuki; Hara, Yuko; Dohi, Masafumi; Yamashita, Kazunari; Hakomori, Tadashi; Kimura, Shin-Ichiro; Iwao, Yasunori; Itai, Shigeru

    2018-04-01

    Scale-up approaches for the film coating process have been established over several decades for each type of film coating equipment, based on thermodynamic and mechanical analyses. The objective of the present study was to establish a versatile scale-up approach for the film coating process, applicable to commercial production, that is based on a critical quality attribute (CQA) using the Quality by Design (QbD) approach and is independent of the equipment used. Experiments at the pilot scale using the Design of Experiments (DoE) approach were performed to identify a suitable CQA from among surface roughness, contact angle, color difference, and coating film properties measured by terahertz spectroscopy. Surface roughness was determined to be a suitable CQA from a quantitative appearance evaluation. When surface roughness was fixed as the CQA, the water content of the film-coated tablets was determined to be the critical material attribute (CMA), a parameter that does not depend on scale or equipment. Finally, to verify the scale-up approach determined from the pilot scale, experiments at the commercial scale were performed. The good correlation between surface roughness (CQA) and water content (CMA) identified at the pilot scale was retained at the commercial scale, indicating that our proposed method should be useful as a scale-up approach for the film coating process.

  20. Approaches to 30% Energy Savings at the Community Scale in the Hot-Humid Climate

    Energy Technology Data Exchange (ETDEWEB)

    Thomas-Rees, S.; Beal, D.; Martin, E.; Fonorow, K.

    2013-03-01

    BA-PIRC has worked with several community-scale builders within the hot-humid climate zone to improve the performance of production, or community-scale, housing. Tommy Williams Homes (Gainesville, FL), Lifestyle Homes (Melbourne, FL), and Habitat for Humanity (various locations, FL) have all been continuous partners of the BA Program and are the subjects of this report, which documents achievement of the Building America goal of 30% whole-house energy savings with packages adopted at the community scale. The scope of this report is to demonstrate achievement of these goals through the documentation of production-scale homes built cost-effectively at the community scale and modeled to reduce whole-house energy use by 30% in the Hot-Humid climate region. Key aspects of this research include determining how to evolve existing energy-efficiency packages to produce replicable target savings, identifying builders' technical assistance needs for implementation and working with them to create sustainable quality-assurance mechanisms, and documenting commercial viability through neutral cost analysis and market acceptance. This report documents barriers builders overcame, and the approaches they implemented to accomplish Building America (BA) Program goals, that have not already been documented in previous reports.

  1. Linking biogeomorphic feedbacks from ecosystem engineer to landscape scale: a panarchy approach

    Science.gov (United States)

    Eichel, Jana

    2017-04-01

    Scale is a fundamental concept in both ecology and geomorphology. Therefore, scale-based approaches are a valuable tool to bridge the disciplines and improve the understanding of feedbacks between geomorphic processes, landforms, material and organisms and ecological processes in biogeomorphology. Yet, linkages between biogeomorphic feedbacks on different scales, e.g. between ecosystem engineering and landscape scale patterns and dynamics, are not well understood. A panarchy approach sensu Holling et al. (2002) can help to close this research gap and explain how structure and function are created in biogeomorphic ecosystems. Based on results from previous biogeomorphic research in Turtmann glacier foreland (Switzerland; Eichel, 2017; Eichel et al. 2013, 2016), a panarchy concept is presented for lateral moraine slope biogeomorphic ecosystems. It depicts biogeomorphic feedbacks on different spatiotemporal scales as a set of nested adaptive cycles and links them by 'remember' and 'revolt' connections. On a small scale (cm2 - m2; seconds to years), the life cycle of the ecosystem engineer Dryas octopetala L. is considered as an adaptive cycle. Biogeomorphic succession within patches created by geomorphic processes represents an intermediate scale adaptive cycle (m2 - ha, years to decades), while geomorphic and ecologic pattern development at a landscape scale (ha - km2, decades to centuries) can be illustrated by an adaptive cycle of 'biogeomorphic patch dynamics' (Eichel, 2017). In the panarchy, revolt connections link the smaller scale adaptive cycles to larger scale cycles: on lateral moraine slopes, the development of ecosystem engineer biomass and cover controls the engineering threshold of the biogeomorphic feedback window (Eichel et al., 2016) and therefore the onset of the biogeomorphic phase during biogeomorphic succession. In this phase, engineer patches and biogeomorphic structures can be created in the patch mosaic of the landscape. Remember connections

  2. Digital Cellular Solid Pressure Vessels: A Novel Approach for Human Habitation in Space

    Science.gov (United States)

    Cellucci, Daniel; Jenett, Benjamin; Cheung, Kenneth C.

    2017-01-01

    It is widely assumed that human exploration beyond Earth's orbit will require vehicles capable of providing long duration habitats that simulate an Earth-like environment - consistent artificial gravity, breathable atmosphere, and sufficient living space - while requiring the minimum possible launch mass. This paper examines how the qualities of digital cellular solids - high performance, repairability, reconfigurability, tunable mechanical response - allow the accomplishment of long-duration habitat objectives at a fraction of the mass required for traditional structural technologies. To illustrate the impact digital cellular solids could make as a replacement to conventional habitat subsystems, we compare recent proposed deep space habitat structural systems with a digital cellular solids pressure vessel design that consists of a carbon fiber reinforced polymer (CFRP) digital cellular solid cylindrical framework that is lined with an ultra-high molecular weight polyethylene (UHMWPE) skin. We use the analytical treatment of a linear specific modulus scaling cellular solid to find the minimum mass pressure vessel for a structure and find that, for equivalent habitable volume and appropriate safety factors, the use of digital cellular solids provides clear methods for producing structures that are not only repairable and reconfigurable, but also higher performance than their conventionally manufactured counterparts.
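
    The mass scaling at stake can be illustrated with the standard thin-walled pressure-vessel estimate (a simplification of the paper's cellular-solid analysis, with invented numbers): the hoop stress σ = pr/t sets the wall thickness, so the shell mass grows with pV divided by the specific strength σ/ρ.

```python
import numpy as np

# Standard thin-wall estimate (illustrative, not the paper's analysis):
# sigma = p r / t  ->  t = sf p r / sigma, and for a long cylinder the
# shell mass scales as m ~ 2 sf p V rho / sigma.
p = 101_325.0            # internal pressure, Pa (1 atm)
r, length = 4.0, 10.0    # habitat radius and length, m
V = np.pi * r**2 * length

def shell_mass(sigma, rho, sf=4.0):
    t = sf * p * r / sigma                  # wall thickness, with safety factor
    return 2 * np.pi * r * length * t * rho

# Made-up representative material properties (strength Pa, density kg/m^3):
print("Al alloy  :", round(shell_mass(300e6, 2700.0)), "kg")
print("CFRP-like :", round(shell_mass(600e6, 1600.0)), "kg")
```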

  3. Trajectory approach to dissipative quantum phase space dynamics: Application to barrier scattering

    International Nuclear Information System (INIS)

    Hughes, Keith H.; Wyatt, Robert E.

    2004-01-01

    The Caldeira-Leggett master equation, expressed in Lindblad form, has been used in the numerical study of the effect of a thermal environment on the dynamics of the scattering of a wave packet from a repulsive Eckart barrier. The dynamics are studied in terms of phase space trajectories associated with the distribution function, W(q,p,t). The equations of motion for the trajectories include quantum terms that introduce nonlocality into the motion, which imply that an ensemble of correlated trajectories needs to be propagated. However, use of the derivative propagation method (DPM) allows each trajectory to be propagated individually. This is achieved by deriving equations of motion for the partial derivatives of W(q,p,t) that appear in the master equation. The effects of dissipation on the trajectories are studied and results are shown for the transmission probability. On short time scales, decoherence is demonstrated by a swelling of trajectories into momentum space. For a nondissipative system, the DPM is compared with the 'exact' transmission probability obtained from a fixed-grid calculation.

  4. A space and time scale-dependent nonlinear geostatistical approach for downscaling daily precipitation and temperature

    KAUST Repository

    Jha, Sanjeev Kumar; Mariethoz, Gregoire; Evans, Jason; McCabe, Matthew; Sharma, Ashish

    2015-01-01

    … precipitation and daily temperature over several years. Here, the training image consists of daily rainfall and temperature outputs from the Weather Research and Forecasting (WRF) model at 50 km and 10 km resolution for a twenty-year period ranging from 1985 …

  5. Nonlocal multi-scale traffic flow models: analysis beyond vector spaces

    Directory of Open Access Journals (Sweden)

    Peter E. Kloeden

    2016-08-01

    Full Text Available Realistic models of traffic flow are nonlinear and involve nonlocal effects in balance laws. Flow characteristics of different types of vehicles, such as cars and trucks, need to be described differently. Two alternatives are used here: L^p-valued Lebesgue measurable density functions and signed Radon measures. The resulting solution spaces are metric spaces that do not have a linear structure, so the usual convenient methods of functional analysis are no longer applicable. Instead, ideas from mutational analysis will be used; in particular, the method of Euler compactness will be applied to establish the well-posedness of the nonlocal balance laws. This involves the concatenation of solutions of piecewise linear systems on successive time subintervals, obtained by freezing the nonlinear nonlocal coefficients to their values at the start of each subinterval. Various compactness criteria lead to a convergent subsequence. Careful estimates of the linear systems are needed to implement this program.

  6. Path integral approach for superintegrable potentials on spaces of non-constant curvature. Pt. 2. Darboux spaces D_III and D_IV

    Energy Technology Data Exchange (ETDEWEB)

    Grosche, C. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Pogosyan, G.S. [Joint Inst. of Nuclear Research, Moscow (Russian Federation). Bogoliubov Lab. of Theoretical Physics; Guadalajara Univ., Jalisco (Mexico). Dept. de Matematicas CUCEI]; Sissakian, A.N. [Joint Inst. of Nuclear Research, Moscow (Russian Federation). Bogoliubov Lab. of Theoretical Physics]

    2006-08-15

    This is the second paper on the path integral approach to superintegrable systems on Darboux spaces, spaces of non-constant curvature. In the spaces D_III and D_IV we analyze five and four superintegrable potentials, respectively, which were first given by Kalnins et al. We are able to evaluate the path integral in most of the separating coordinate systems, leading to expressions for the Green functions, the discrete and continuous wave-functions, and the discrete energy spectra. In some cases, however, the discrete spectrum cannot be stated explicitly, because it is determined by a higher-order polynomial equation. We show that the free motion in Darboux space of type III can also contain bound states, provided the boundary conditions are appropriate. We state the energy spectrum and the wave-functions, respectively. (orig.)

  7. Perceptual scale expansion: an efficient angular coding strategy for locomotor space.

    Science.gov (United States)

    Durgin, Frank H; Li, Zhi

    2011-08-01

    Whereas most sensory information is coded on a logarithmic scale, linear expansion of a limited range may provide a more efficient coding for the angular variables important to precise motor control. In four experiments, we show that the perceived declination of gaze, like the perceived orientation of surfaces, is coded on a distorted scale. The distortion seems to arise from a nearly linear expansion of the angular range close to horizontal/straight ahead and is evident in explicit verbal and nonverbal measures (Experiments 1 and 2), as well as in implicit measures of perceived gaze direction (Experiment 4). The theory is advanced that this scale expansion (by a factor of about 1.5) may serve a functional goal of coding efficiency for angular perceptual variables. The scale expansion of perceived gaze declination is accompanied by a corresponding expansion of perceived optical slants in the same range (Experiments 3 and 4). These dual distortions can account for the explicit misperception of distance typically obtained by direct report and exocentric matching, while allowing for accurate spatial action to be understood as the result of calibration.

  8. Expanding the scale of forest management: allocating timber harvests in time and space

    Science.gov (United States)

    Eric J. Gustafson

    1996-01-01

    This study examined the effect of clustering timber harvest zones and of changing the land use categories of zones (dynamic zoning) over varying temporal and spatial scales. Focusing on the Hoosier National Forest (HNF) in Indiana, USA as a study area, I used a timber harvest allocation model to simulate four management alternatives. In the static zoning alternative,...

  9. Working Bibliography on Scaling Methods Appropriate for Analysis of Space Preferences. Exchange Bibliography No. 514.

    Science.gov (United States)

    Ewing, Gordon O.; And Others

    This bibliography draws together literature from a number of disciplines dealing with problems of scaling and measuring stimuli. Substantive areas of application are behavioral geography, marketing, mathematical psychology, urban planning, consumer research, and subjective appraisals of objects. Citations are organized into four separate sections.…

  10. Exploring Children's Face-Space: A Multidimensional Scaling Analysis of the Mental Representation of Facial Identity

    Science.gov (United States)

    Nishimura, Mayu; Maurer, Daphne; Gao, Xiaoqing

    2009-01-01

    We explored differences in the mental representation of facial identity between 8-year-olds and adults. The 8-year-olds and adults made similarity judgments of a homogeneous set of faces (individual hair cues removed) using an "odd-man-out" paradigm. Multidimensional scaling (MDS) analyses were performed to represent perceived similarity of faces…
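
    The analysis step can be sketched as follows, assuming scikit-learn and a synthetic dissimilarity matrix standing in for the similarity judgments described above:

```python
import numpy as np
from sklearn.manifold import MDS

# Sketch of the MDS step: embed faces in a low-dimensional "face space"
# from a (synthetic) matrix of perceived dissimilarities.
rng = np.random.default_rng(2)
n_faces = 10
latent = rng.normal(size=(n_faces, 2))        # hidden "true" face space
diss = np.linalg.norm(latent[:, None] - latent[None, :], axis=-1)
diss += rng.normal(0, 0.05, diss.shape)       # judgment noise
diss = (diss + diss.T) / 2                    # enforce symmetry
np.fill_diagonal(diss, 0.0)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(diss)              # recovered configuration
print("stress:", round(mds.stress_, 3))       # goodness of the embedding
```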

  11. Disordering scaling and generalized nearest-neighbor approach in the thermodynamics of Lennard-Jones systems

    International Nuclear Information System (INIS)

    Vorob'ev, V.S.

    2003-01-01

    We suggest a concept of multiple disordering scaling of the crystalline state. Such a scaling procedure applied to a crystal leads to the liquid and (in the low-density limit) gas states. This approach provides an explanation for the high value of the configurational (common) entropy of liquefied noble gases, which can be deduced from experimental data. We use the generalized nearest-neighbor approach to calculate the free energy and pressure of Lennard-Jones systems after performing this scaling procedure. These thermodynamic functions depend only on one parameter, which characterizes the disordering. Condensed states of the system (liquid and solid) correspond to small values of this parameter. When this parameter tends to unity, we obtain an asymptotically exact equation of state for a gas involving the second virial coefficient. A reasonable choice of values for the disordering parameter (ranging between zero and unity) allows us to find the lines of coexistence between different phase states of the Lennard-Jones systems, which are in good agreement with the available experimental data.

  12. The Stokes number approach to support scale-up and technology transfer of a mixing process.

    Science.gov (United States)

    Willemsz, Tofan A; Hooijmaijers, Ricardo; Rubingh, Carina M; Frijlink, Henderik W; Vromans, Herman; van der Voort Maarschalk, Kees

    2012-09-01

    Transferring processes between different scales and types of mixers is a common operation in industry. Challenges within this operation include considerable differences in blending conditions between mixer scales and types. Obtaining the correct blending conditions is crucial for the ability to break up agglomerates in order to achieve the desired blend uniformity. Agglomerate break-up is often an abrasion process. In this study, the abrasion-rate potential of agglomerates is described by the Stokes abrasion (St(Abr)) number of the system. The St(Abr) number equals the ratio between the kinetic energy density of the moving powder bed and the work of fracture of the agglomerate. In this study, the St(Abr) approach proves to be a useful tool to predict the abrasion of agglomerates during blending when technology is transferred between mixer scales/types. Applying the St(Abr) approach revealed a transition point between the parameters that determined agglomerate abrasion. This study gave evidence that (1) below this transition point, agglomerate abrasion is determined by a combination of impeller effects and the kinetic energy density of the powder blend, whereas (2) above this transition point, agglomerate abrasion is mainly determined by the kinetic energy density of the powder blend.
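
    A minimal sketch of how the number could be evaluated, under one plausible reading of the definition above (kinetic energy density taken as ½ρv²; the velocity/density choices and all property values are invented assumptions):

```python
# Stokes abrasion number sketch: ratio of the kinetic energy density of
# the moving powder bed to the agglomerate's work of fracture. All
# definitions and numbers below are illustrative assumptions.
def st_abr(rho_bulk, v_bed, work_of_fracture):
    """rho_bulk [kg/m^3], v_bed [m/s], work_of_fracture [J/m^3]."""
    return 0.5 * rho_bulk * v_bed**2 / work_of_fracture

# Same blend in a small and a large mixer: matching the bed velocity on
# scale-up keeps the energy density, and hence the predicted abrasion
# rate, comparable across scales.
for scale, v in [("lab", 2.0), ("production", 2.2)]:
    print(scale, round(st_abr(500.0, v, 1.5e3), 3))
```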

  13. Long-Time Behavior and Critical Limit of Subcritical SQG Equations in Scale-Invariant Sobolev Spaces

    Science.gov (United States)

    Coti Zelati, Michele

    2018-02-01

    We consider the subcritical SQG equation in its natural scale-invariant Sobolev space and prove the existence of a global attractor of optimal regularity. The proof is based on a new energy estimate in Sobolev spaces to bootstrap the regularity to the optimal level, derived by means of nonlinear lower bounds on the fractional Laplacian. This estimate appears to be new in the literature and allows a sharp use of the subcritical nature of the L^∞ bounds for this problem. As a by-product, we obtain attractors for weak solutions as well. Moreover, we study the critical limit of the attractors and prove their stability and upper semicontinuity with respect to the strength of the diffusion.

  14. The Space-Time Conservative Schemes for Large-Scale, Time-Accurate Flow Simulations with Tetrahedral Meshes

    Science.gov (United States)

    Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung

    2016-01-01

    Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured, or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the space-time conservation element solution element (CESE) numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework is assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.

  15. Technical and scale efficiency in public and private Irish nursing homes - a bootstrap DEA approach.

    Science.gov (United States)

    Ni Luasa, Shiovan; Dineen, Declan; Zieba, Marta

    2016-10-27

    This article provides methodological and empirical insights into the estimation of technical efficiency in the nursing home sector. Focusing on long-stay care and using primary data, we examine technical and scale efficiency in 39 public and 73 private Irish nursing homes by applying an input-oriented data envelopment analysis (DEA). We employ robust bootstrap methods to validate our nonparametric DEA scores and to integrate the effects of potential determinants in estimating the efficiencies. Both the homogenous and two-stage double bootstrap procedures are used to obtain confidence intervals for the bias-corrected DEA scores. Importantly, the application of the double bootstrap approach affords true DEA technical efficiency scores after adjusting for the effects of ownership, size, case-mix, and other determinants such as location, and quality. Based on our DEA results for variable returns to scale technology, the average technical efficiency score is 62 %, and the mean scale efficiency is 88 %, with nearly all units operating on the increasing returns to scale part of the production frontier. Moreover, based on the double bootstrap results, Irish nursing homes are less technically efficient, and more scale efficient than the conventional DEA estimates suggest. Regarding the efficiency determinants, in terms of ownership, we find that private facilities are less efficient than the public units. Furthermore, the size of the nursing home has a positive effect, and this reinforces our finding that Irish homes produce at increasing returns to scale. Also, notably, we find that a tendency towards quality improvements can lead to poorer technical efficiency performance.
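
    For readers unfamiliar with the envelopment form, the sketch below sets up the input-oriented, variable-returns-to-scale DEA linear program on synthetic data. It computes the point estimates only, omitting the bootstrap bias-correction step, and the data are invented stand-ins for nursing-home inputs and outputs.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n = 12
X = rng.uniform(10, 50, (n, 2))      # inputs, e.g. staff hours, beds
Y = rng.uniform(5, 30, (n, 1))       # outputs, e.g. resident-days

def dea_input_vrs(o):
    """Technical efficiency of unit o: min theta over [theta, lambdas]."""
    c = np.r_[1.0, np.zeros(n)]
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):      # sum_j lam_j x_ij <= theta * x_io
        A_ub.append(np.r_[-X[o, i], X[:, i]]); b_ub.append(0.0)
    for r in range(Y.shape[1]):      # sum_j lam_j y_rj >= y_ro
        A_ub.append(np.r_[0.0, -Y[:, r]]); b_ub.append(-Y[o, r])
    A_eq = [np.r_[0.0, np.ones(n)]]  # VRS: lambdas sum to one
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=[1.0], bounds=(0, None))
    return res.fun

scores = [round(dea_input_vrs(o), 3) for o in range(n)]
print(scores)                        # 1.0 marks frontier units
```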

  16. Tensor representation techniques for full configuration interaction: A Fock space approach using the canonical product format.

    Science.gov (United States)

    Böhm, Karl-Heinz; Auer, Alexander A; Espig, Mike

    2016-06-28

    In this proof-of-principle study, we apply tensor decomposition techniques to the Full Configuration Interaction (FCI) wavefunction in order to approximate the wavefunction parameters efficiently and to reduce the overall computational effort. For this purpose, the wavefunction ansatz is formulated in an occupation number vector representation that ensures antisymmetry. If the canonical product format tensor decomposition is then applied, the Hamiltonian and the wavefunction can be cast into a multilinear product form. As a consequence, the number of wavefunction parameters does not scale exponentially with the number of particles (or orbitals) but depends on the rank of the approximation and linearly on the number of particles. The degree of approximation can be controlled by a single threshold for the rank reduction procedure required in the algorithm. We demonstrate that, using this approximation, the FCI Hamiltonian matrix can be stored with N^5 scaling. The error introduced by the approximation is below a millihartree for a threshold of ϵ = 10^-4, and no convergence problems are observed when solving the FCI equations iteratively in the new format. While promising conceptually, all effort of the algorithm is shifted to the rank reduction procedure required after the contraction of the Hamiltonian with the coefficient tensor. At the current state, this crucial step is the bottleneck of our approach, and even by an optimistic estimate the algorithm scales beyond N^10, so future work has to be directed towards reduction-free algorithms.
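
    The storage argument can be made concrete with a generic canonical-product (CP) example, independent of the FCI machinery itself: a rank-R tensor in CP format keeps d·n·R numbers instead of n^d.

```python
import numpy as np

# Canonical-product illustration on a generic tensor (not the FCI
# wavefunction): a rank-R sum of outer products replaces the full array.
rng = np.random.default_rng(4)
d, n, R = 4, 6, 3                      # order, mode size, CP rank
factors = [rng.normal(size=(n, R)) for _ in range(d)]

# Reconstruct the full tensor T[i,j,k,l] = sum_r U1[i,r] U2[j,r] ...
T = np.zeros((n,) * d)
for r in range(R):
    outer = factors[0][:, r]
    for f in factors[1:]:
        outer = np.multiply.outer(outer, f[:, r])
    T += outer

full_params = n ** d                   # parameters in the full tensor
cp_params = d * n * R                  # parameters in the CP format
print(f"full: {full_params} numbers, CP format: {cp_params} numbers")
```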

  17. A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks.

    Science.gov (United States)

    Röhl, Annika; Bockmayr, Alexander

    2017-01-03

    Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousand reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. Minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method by Burgard et al. does not allow specifying the different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks when several exist. This allows identifying common reactions that are present in all subnetworks, as well as reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice, so it may become necessary to reduce the size of the network while keeping important functionalities. We propose an MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not only one, but all minimum subnetworks satisfying the required properties.
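
    A minimal sketch of the MILP idea on an invented three-metabolite toy network, assuming the PuLP library (this is not the authors' implementation): binary indicators switch reactions off, big-M constraints tie fluxes to the indicators, and the objective counts active reactions.

```python
import pulp

# Toy network: uptake -> A, r1: A -> B, then r2 or bypass: B -> C,
# biomass consumes C. Stoichiometry, bounds, and names are invented.
S = {  # metabolite -> {reaction: coefficient}
    "A": {"uptake": 1, "r1": -1},
    "B": {"r1": 1, "r2": -1, "bypass": -1},
    "C": {"r2": 1, "bypass": 1, "biomass": -1},
}
reactions = ["uptake", "r1", "r2", "bypass", "biomass"]
M, v_min = 100.0, 1.0

prob = pulp.LpProblem("min_subnetwork", pulp.LpMinimize)
v = {r: pulp.LpVariable(f"v_{r}", -M, M) for r in reactions}
y = {r: pulp.LpVariable(f"y_{r}", cat="Binary") for r in reactions}

prob += pulp.lpSum(y.values())                 # fewest active reactions
for met, row in S.items():                     # steady state: S v = 0
    prob += pulp.lpSum(c * v[r] for r, c in row.items()) == 0
for r in reactions:                            # v_r = 0 unless y_r = 1
    prob += v[r] <= M * y[r]
    prob += v[r] >= -M * y[r]
prob += v["biomass"] >= v_min                  # growth requirement

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([r for r in reactions if y[r].value() > 0.5])
```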

  18. A multi-scale spatial approach to address environmental effects of small hydropower development.

    Science.gov (United States)

    McManamay, Ryan A; Samu, Nicole; Kao, Shih-Chieh; Bevelhimer, Mark S; Hetrick, Shelaine C

    2015-01-01

    Hydropower development continues to grow worldwide in developed and developing countries. While the ecological and physical responses to dam construction have been well documented, translating this information into planning for hydropower development is extremely difficult. Very few studies have conducted environmental assessments to guide site-specific or widespread hydropower development. Herein, we propose a spatial approach for estimating the environmental effects of hydropower development at multiple scales, as opposed to individual site-by-site assessments (e.g., environmental impact assessment). Because the complex, process-driven effects of future hydropower development may be uncertain or, at best, limited by available information, we invested considerable effort in describing novel approaches to represent environmental concerns using spatial data and in developing the spatial footprint of hydropower infrastructure. We then use two case studies in the US, one at the scale of the conterminous US and another within two adjoining river basins, to examine how environmental concerns can be identified and related to areas of varying energy capacity. We use combinations of reserve-design planning and multi-metric ranking to visualize tradeoffs among environmental concerns and potential energy capacity. Spatial frameworks, like the one presented, are not meant to replace more in-depth environmental assessments, but to identify information gaps and to measure the sustainability of multi-development scenarios so as to inform policy decisions at the basin or national level. Most importantly, the approach should foster discussions among environmental scientists and stakeholders regarding solutions that optimize energy development and environmental sustainability.

  19. Pesticide fate at regional scale: Development of an integrated model approach and application

    Science.gov (United States)

    Herbst, M.; Hardelauf, H.; Harms, R.; Vanderborght, J.; Vereecken, H.

    As a result of agricultural practice, many soils and aquifers are contaminated with pesticides. In order to quantify the side-effects of these anthropogenic impacts on groundwater quality at regional scale, a process-based, integrated model approach was developed. The Richards’ equation based numerical model TRACE calculates the three-dimensional saturated/unsaturated water flow. For the modeling of regional scale pesticide transport we linked TRACE with the plant module SUCROS and with 3DLEWASTE, a hybrid Lagrangian/Eulerian approach to solve the convection/dispersion equation. We used measurements, standard methods like pedotransfer-functions or parameters from literature to derive the model input for the process model. A first-step application of TRACE/3DLEWASTE to the 20 km2 test area ‘Zwischenscholle’ for the period 1983-1993 reveals the behaviour of the pesticide isoproturon. The selected test area is characterised by an intense agricultural use and shallow groundwater, resulting in a high vulnerability of the groundwater to pesticide contamination. The model results stress the importance of the unsaturated zone for the occurrence of pesticides in groundwater. Remarkable isoproturon concentrations in groundwater are predicted for locations with thin layered and permeable soils. For four selected locations we used measured piezometric heads to validate predicted groundwater levels. In general, the model results are consistent and reasonable. Thus the developed integrated model approach is seen as a promising tool for the quantification of the impact of agricultural practice on groundwater quality.

  20. A multi-scale approach for high cycle anisotropic fatigue resistance: Application to forged components

    International Nuclear Information System (INIS)

    Milesi, M.; Chastel, Y.; Hachem, E.; Bernacki, M.; Loge, R.E.; Bouchard, P.O.

    2010-01-01

    Forged components exhibit good mechanical strength, particularly in terms of high cycle fatigue properties. This is due to the specific microstructure resulting from large plastic deformation, as in a forging process. The goal of this study is to account for critical phenomena such as the anisotropy of the fatigue resistance in order to perform high cycle fatigue simulations on industrial forged components. Standard high cycle fatigue criteria usually give good results for isotropic behaviors but are not suitable for components with anisotropic features. The aim is to represent this anisotropy explicitly at a scale lower than the process scale and to determine the local coefficients needed to simulate a real case. We developed a multi-scale approach by considering the statistical morphology and mechanical characteristics of the microstructure to represent each element explicitly. From stochastic experimental data, realistic microstructures were reconstructed in order to perform high cycle fatigue simulations on them with different orientations. The meshing was improved by local refinement of each interface, and simulations were performed on each representative elementary volume. The local mechanical anisotropy is taken into account through the distribution of particles. Fatigue parameters identified at the microscale can then be used at the macroscale on the forged component. The link between these data and the process scale is the fiber vector and the deformation state, which are used to calculate the global mechanical anisotropy. Numerical results reveal the expected behavior compared with experimental tendencies. We have numerically demonstrated the dependence of the endurance limit evolution on the anisotropy direction and the deformation state.

  1. A multi-scaled approach to evaluating the fish assemblage structure within southern Appalachian streams USA.

    Science.gov (United States)

    Kirsch, Joseph; Peterson, James T.

    2014-01-01

    There is considerable uncertainty about the relative roles of stream habitat and landscape characteristics in structuring stream-fish assemblages. We evaluated the relative importance of environmental characteristics on fish occupancy at the local and landscape scales within the upper Little Tennessee River basin of Georgia and North Carolina. Fishes were sampled using a quadrat sample design at 525 channel units within 48 study reaches during two consecutive years. We evaluated species–habitat relationships (local and landscape factors) by developing hierarchical, multispecies occupancy models. Modeling results suggested that fish occupancy within the Little Tennessee River basin was primarily influenced by stream topology and topography, urban land coverage, and channel unit types. Landscape scale factors (e.g., urban land coverage and elevation) largely controlled the fish assemblage structure at a stream-reach level, and local-scale factors (i.e., channel unit types) influenced fish distribution within stream reaches. Our study demonstrates the utility of a multi-scaled approach and the need to account for hierarchy and the interscale interactions of factors influencing assemblage structure prior to monitoring fish assemblages, developing biological management plans, or allocating management resources throughout a stream system.

  2. LIDAR-based urban metabolism approach to neighbourhood scale energy and carbon emissions modelling

    Energy Technology Data Exchange (ETDEWEB)

    Christen, A. [British Columbia Univ., Vancouver, BC (Canada). Dept. of Geography; Coops, N. [British Columbia Univ., Vancouver, BC (Canada). Dept. of Forest Sciences; Canada Research Chairs, Ottawa, ON (Canada); Kellet, R. [British Columbia Univ., Vancouver, BC (Canada). School of Architecture and Landscape Architecture

    2010-07-01

    A remote sensing technology was used to model neighbourhood scale energy and carbon emissions in a case study set in Vancouver, British Columbia (BC). The study was used to compile and aggregate atmospheric carbon flux, urban form, and energy and emissions data in a replicable neighbourhood-scale approach. The study illustrated methods of integrating diverse emission and uptake processes on a range of scales and resolutions, and benchmarked comparisons of modelled estimates with measured energy consumption data obtained over a 2-year period from a research tower located in the study area. The study evaluated carbon imports, carbon exports and sequestration, and relevant emissions processes. Fossil fuel emissions produced in the neighbourhood were also estimated. The study demonstrated that remote sensing technologies such as LIDAR and multispectral satellite imagery can be an effective means of generating and extracting urban form and land cover data at fine scales. Data from the study were used to develop several emissions reduction and energy conservation scenarios. 6 refs.

  3. Scale dependence of halo and galaxy bias: Effects in real space

    International Nuclear Information System (INIS)

    Smith, Robert E.; Scoccimarro, Roman; Sheth, Ravi K.

    2007-01-01

    We examine the scale dependence of dark matter halo and galaxy clustering on very large scales (0.01 < k [h Mpc^-1] < 0.15), due to nonlinear biasing. High mass haloes only show amplification on smaller scales, whereas low mass haloes show strong, ∼5%-10%, suppression over the range 0.05 < k [h Mpc^-1] < 0.15. These results were primarily established through the use of the cross-power spectrum of dark matter and haloes, which circumvents the thorny issue of shot-noise correction. The halo-halo power spectrum, however, is highly sensitive to the shot-noise correction; we show that halo exclusion effects make this sub-Poissonian, and a new correction is presented. Our results have special relevance for studies of the baryon acoustic oscillation features in the halo power spectra. Nonlinear mode-mode coupling: (i) damps these features on progressively larger scales as halo mass increases; (ii) produces small shifts in the positions of the peaks and troughs which depend on halo mass. We show that these effects on halo clustering are important over the redshift range relevant to such studies (0 < z < 2), and so will need to be accounted for when extracting information from precision measurements of galaxy clustering. Our analytic model is described in the language of the "halo model." The halo-halo clustering term is propagated into the nonlinear regime using "1-loop" perturbation theory and a nonlinear halo bias model. Galaxies are then inserted into haloes through the halo occupation distribution. We show that, with nonlinear bias parameters derived from simulations, this model produces predictions that are qualitatively in agreement with our numerical results. We then use it to show that the power spectra of red and blue galaxies depend differently on scale, thus underscoring the fact that proper modeling of nonlinear bias parameters will be crucial to derive reliable cosmological constraints. In addition to showing that the bias on very large scales is not simply linear, the model also shows that the halo-halo and halo…

  4. FOREWORD: Heterogenous nucleation and microstructure formation—a scale- and system-bridging approach

    Science.gov (United States)

    Emmerich, H.

    2009-11-01

    Scope and aim of this volume. Nucleation and initial microstructure formation play an important role in almost all aspects of materials science [1-5]. The relevance of the prediction and control of nucleation and the subsequent microstructure formation is fully accepted across many areas of modern surface and materials science and technology. One reason is that a large range of material properties, from mechanical ones such as ductility and hardness to electrical and magnetic ones such as electric conductivity and magnetic hardness, depend largely on the specific crystalline structure that forms during nucleation and the subsequent initial microstructure growth. A telling example of the latter is the so-called bamboo structure of interconnects in integrated circuits, in which an alignment of grain boundaries perpendicular to the direction of current flow is most favorable for resistance against electromigration [6]. Despite the great relevance of predicting and controlling nucleation and the subsequent microstructure formation, and despite significant progress in the experimental analysis of the later stages of crystal growth in line with new theoretical computer simulation concepts [7], details of the initial stages of solidification are still far from being satisfactorily understood. This is particularly true when the nucleation event occurs as heterogeneous nucleation. The Priority Program SPP 1296 'Heterogenous Nucleation and Microstructure Formation—a Scale- and System-Bridging Approach' [8], sponsored by the German Research Foundation (DFG), intends to contribute to this open issue via a six-year research program that enables approximately twenty research groups in Germany to work together in an interdisciplinary manner towards this goal. Moreover, it enables the participants to embed themselves in the international community focused on this issue via internationally open joint workshops, conferences and summer schools. An outline of such activities can be found

  5. Analysis and test for space shuttle propellant dynamics (1/10th scale model test results). Volume 1: Technical discussion

    Science.gov (United States)

    Berry, R. L.; Tegart, J. R.; Demchak, L. J.

    1979-01-01

    Space shuttle propellant dynamics during ET/Orbiter separation in the RTLS (return to launch site) mission abort sequence were investigated in a test program conducted in the NASA KC-135 "Zero G" aircraft using a 1/10th-scale model of the ET LOX tank. Low-g parabolas were flown, from which thirty tests were selected for evaluation. Data on the nature of low-g propellant reorientation in the ET LOX tank, and measurements of the forces exerted on the tank by the moving propellant, will provide a basis for correlation with an analytical model of the slosh phenomenon.

  6. Large-Scale Testing and High-Fidelity Simulation Capabilities at Sandia National Laboratories to Support Space Power and Propulsion

    International Nuclear Information System (INIS)

    Dobranich, Dean; Blanchat, Thomas K.

    2008-01-01

    Sandia National Laboratories, as a Department of Energy, National Nuclear Security Administration laboratory, has major responsibility for ensuring the safety and security of nuclear weapons. As such, with an experienced research staff, Sandia maintains a spectrum of modeling and simulation capabilities integrated with experimental and large-scale test capabilities. This expertise and these capabilities offer considerable resources for addressing issues of interest to the space power and propulsion communities. This paper presents Sandia's capability to perform thermal qualification (analysis, test, modeling and simulation) using a representative weapon system as an example demonstrating the potential to support NASA's Lunar Reactor System

  7. Infinite-mode squeezed coherent states and non-equilibrium statistical mechanics (phase-space-picture approach)

    International Nuclear Information System (INIS)

    Yeh, L.

    1992-01-01

    The phase-space-picture approach to quantum non-equilibrium statistical mechanics via the characteristic function of infinite- mode squeezed coherent states is introduced. We use quantum Brownian motion as an example to show how this approach provides an interesting geometrical interpretation of quantum non-equilibrium phenomena

  8. A multifractal approach to space-filling recovery for PET quantification

    Energy Technology Data Exchange (ETDEWEB)

    Willaime, Julien M. Y., E-mail: julien.willaime@siemens.com; Aboagye, Eric O. [Comprehensive Cancer Imaging Centre, Imperial College London, Hammersmith Hospital, London W12 0NN (United Kingdom); Tsoumpas, Charalampos [Division of Medical Physics, University of Leeds, LS2 9JT (United Kingdom); Turkheimer, Federico E. [Department of Neuroimaging, Institute of Psychiatry, King’s College London, London SE5 8AF (United Kingdom)

    2014-11-01

    Purpose: A new image-based methodology is developed for estimating the apparent space-filling properties of an object of interest in PET imaging without need for a robust segmentation step and used to recover accurate estimates of total lesion activity (TLA). Methods: A multifractal approach and the fractal dimension are proposed to recover the apparent space-filling index of a lesion (tumor volume, TV) embedded in nonzero background. A practical implementation is proposed, and the index is subsequently used with mean standardized uptake value (SUV_mean) to correct TLA estimates obtained from approximate lesion contours. The methodology is illustrated on fractal and synthetic objects contaminated by partial volume effects (PVEs), validated on realistic ¹⁸F-fluorodeoxyglucose PET simulations and tested for its robustness using a clinical ¹⁸F-fluorothymidine PET test–retest dataset. Results: TLA estimates were stable for a range of resolutions typical in PET oncology (4–6 mm). By contrast, the space-filling index and intensity estimates were resolution dependent. TLA was generally recovered within 15% of ground truth on postfiltered PET images affected by PVEs. Volumes were recovered within 15% variability in the repeatability study. Results indicated that TLA is a more robust index than other traditional metrics such as SUV_mean or TV measurements across imaging protocols. Conclusions: The fractal procedure reported here is proposed as a simple and effective computational alternative to existing methodologies which require the incorporation of image preprocessing steps (i.e., partial volume correction and automatic segmentation) prior to quantification.
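
    The space-filling index at the heart of this method is a fractal dimension. As a rough illustration of the idea (not the authors' implementation, which is multifractal and operates on grey-level PET data), a box-counting dimension of a binary object can be estimated as below; the function name, the 2D setting, and the box sizes are all assumptions:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a 2D binary mask.

    Illustrative stand-in for the space-filling index the abstract
    describes; a sketch only, under the assumptions stated above.
    """
    counts = []
    for s in sizes:
        n = 0
        for i in range(0, mask.shape[0], s):
            for j in range(0, mask.shape[1], s):
                if mask[i:i + s, j:j + s].any():  # box intersects the object
                    n += 1
        counts.append(n)
    # Slope of log N(s) versus log(1/s) estimates the box-counting dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Example: a filled disc should give a dimension close to 2.
yy, xx = np.mgrid[:128, :128]
disc = (xx - 64) ** 2 + (yy - 64) ** 2 < 40 ** 2
print(box_counting_dimension(disc))
```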

  9. High-Payoff Space Transportation Design Approach with a Technology Integration Strategy

    Science.gov (United States)

    McCleskey, C. M.; Rhodes, R. E.; Chen, T.; Robinson, J.

    2011-01-01

    A general architectural design sequence is described to create a highly efficient, operable, and supportable design that achieves an affordable, repeatable, and sustainable transportation function. The paper covers the following aspects of this approach in more detail: (1) vehicle architectural concept considerations (including important strategies for greater reusability); (2) vehicle element propulsion system packaging considerations; (3) vehicle element functional definition; (4) external ground servicing and access considerations; and, (5) simplified guidance, navigation, flight control and avionics communications considerations. Additionally, a technology integration strategy is forwarded that includes: (a) ground and flight test prior to production commitments; (b) parallel stage propellant storage, such as concentric-nested tanks; (c) high thrust, LOX-rich, LOX-cooled first stage earth-to-orbit main engine; (d) non-toxic, day-of-launch-loaded propellants for upper stages and in-space propulsion; (e) electric propulsion and aero stage control.

  10. Solar pumping of solid state lasers for space mission: a novel approach

    Science.gov (United States)

    Boetti, N. G.; Lousteau, J.; Negro, D.; Mura, E.; Scarpignato, G. C.; Perrone, G.; Milanese, D.; Abrate, S.

    2017-11-01

    Solar pumped lasers (SPL) can find wide applications in space missions, especially long-lasting ones. In this paper a new technological approach to the realization of an SPL based on fiber laser technology is proposed. We present a preliminary study, focused on evaluating the performance of the active material, towards the realization of a Nd³⁺-doped fiber laser made of phosphate glass, emitting at 1.06 μm. For this research several Nd³⁺-doped phosphate glass samples were fabricated, with Nd³⁺ concentrations up to 10 mol%. Physical and thermal properties of the glasses were measured and their spectroscopic properties are described. The effect of Nd³⁺ doping concentration on emission spectra and lifetimes was investigated in order to study the concentration quenching effect on luminescence performance.

  11. Groups, matrices, and vector spaces a group theoretic approach to linear algebra

    CERN Document Server

    Carrell, James B

    2017-01-01

    This unique text provides a geometric approach to group theory and linear algebra, bringing to light the interesting ways in which these subjects interact. Requiring few prerequisites beyond understanding the notion of a proof, the text aims to give students a strong foundation in both geometry and algebra. Starting with preliminaries (relations, elementary combinatorics, and induction), the book then proceeds to the core topics: the elements of the theory of groups and fields (Lagrange's Theorem, cosets, the complex numbers and the prime fields), matrix theory and matrix groups, determinants, vector spaces, linear mappings, eigentheory and diagonalization, Jordan decomposition and normal form, normal matrices, and quadratic forms. The final two chapters consist of a more intensive look at group theory, emphasizing orbit stabilizer methods, and an introduction to linear algebraic groups, which enriches the notion of a matrix group. Applications involving symmetry groups, determinants, linear coding theory ...

  12. Reliability modeling of a hard real-time system using the path-space approach

    International Nuclear Information System (INIS)

    Kim, Hagbae

    2000-01-01

    A hard real-time system, such as a fly-by-wire system, fails catastrophically (e.g. losing stability) if its control inputs are not updated by its digital controller computer within a certain timing constraint called the hard deadline. To assess and validate those systems' reliabilities by using a semi-Markov model that explicitly contains the deadline information, we propose a path-space approach deriving the upper and lower bounds of the probability of system failure. These bounds are derived by using only simple parameters, and they are especially suitable for highly reliable systems which should recover quickly. Analytical bounds are derived for both the exponential and Weibull failure distributions commonly encountered, which have proven effective through numerical examples, while considering three repair strategies: repair-as-good-as-new, repair-as-good-as-old, and repair-better-than-old
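
    The failure mechanism being bounded here, the system not recovering within the hard deadline, can be illustrated with a simple Monte Carlo estimate for the two failure-time distributions the abstract treats. This is a hedged sketch of the concept only, not the paper's analytic bounds; all names and parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_missed_deadline(deadline, n=100_000, dist="exponential",
                      rate=1.0, shape=1.5, scale=1.0):
    """Monte Carlo estimate of the probability that a recovery time
    exceeds the hard deadline; parameters are illustrative only."""
    if dist == "exponential":
        recovery = rng.exponential(1.0 / rate, size=n)
    else:  # Weibull, the other distribution treated in the paper
        recovery = scale * rng.weibull(shape, size=n)
    return (recovery > deadline).mean()

print(p_missed_deadline(deadline=0.5))                  # exponential recovery
print(p_missed_deadline(deadline=0.5, dist="weibull"))  # Weibull recovery
```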

  13. Phase-space description of wave packet approach to electronic transport in nanoscale systems

    International Nuclear Information System (INIS)

    Szydłowski, D; Wołoszyn, M; Spisak, B J

    2013-01-01

    The dynamics of conduction electrons in resonant tunnelling nanosystems is studied within the phase-space approach based on the Wigner distribution function. The time evolution of the distribution function is calculated from the time-dependent quantum kinetic equation for which an effective numerical method is presented. Calculations of the transport properties of a double-barrier resonant tunnelling diode are performed to illustrate the proposed techniques. Additionally, analysis of the transient effects in the nanosystem is carried out and it is shown that for some range of the bias voltage the temporal variations of electronic current can take negative values. The explanation of this effect is based on the analysis of the time changes of the Wigner distribution function. The decay time of the temporal current oscillations in the nanosystem as a function of the bias voltage is determined. (paper)
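
    For reference, the Wigner distribution function on which this phase-space approach rests has the standard definition below (one spatial dimension; normalization conventions vary):

```latex
W(x,p,t) = \frac{1}{2\pi\hbar}\int_{-\infty}^{\infty}
\psi^{*}\!\left(x+\tfrac{y}{2},t\right)\,
\psi\!\left(x-\tfrac{y}{2},t\right)\, e^{\,ipy/\hbar}\, dy
```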

  14. Truncated Hilbert Space Approach for the 1+1D phi^4 Theory

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    (an informal seminar, not a regular string seminar) We used the massive analogue of the truncated conformal space approach to study the broken phase of the 1+1 dimensional scalar phi^4 model in finite volume, similarly to the work of S. Rychkov and L. Vitale. In our work, the finite-size spectrum was determined numerically using an effective eigensolver routine, followed by a simple extrapolation in the cutoff energy. We analyzed both the periodic and antiperiodic sectors. The results were compared with semiclassical and Bethe-Yang results as well as perturbation theory. We obtained the coupling dependence of the infinite-volume breather and kink masses for moderate couplings. The results fit well with semiclassical and perturbative estimates, and confirm Mussardo's conjecture that at most two neutral excitations can exist in the spectrum. We believe that improving our method with the renormalization procedure of Rychkov et al. enables the measurement of further interesting quantities such as decay ra...
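
    Schematically, the truncated Hilbert space method described here diagonalizes the finite-volume Hamiltonian on the low-energy subspace of the free massive theory. The form below is a generic schematic of such constructions, with normalizations assumed rather than taken from the talk:

```latex
% Free massive Hamiltonian plus the normal-ordered quartic perturbation
% on a circle of circumference L, restricted to states below E_cut:
H = H_{0}(m) + g \int_{0}^{L} :\!\phi^{4}(x)\!:\, dx ,
\qquad
\mathcal{H}_{E_{\mathrm{cut}}} = \operatorname{span}\{\, |n\rangle : E_{n} \le E_{\mathrm{cut}} \,\}
% H is diagonalized numerically on this subspace and the spectrum is
% then extrapolated in E_cut, as the abstract describes.
```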

  15. A Conditional Fourier-Feynman Transform and Conditional Convolution Product with Change of Scales on a Function Space II

    Directory of Open Access Journals (Sweden)

    Dong Hyun Cho

    2017-01-01

    Using a simple formula for conditional expectations over continuous paths, we will evaluate conditional expectations which are types of analytic conditional Fourier-Feynman transforms and conditional convolution products of generalized cylinder functions and the functions in a Banach algebra which is the space of generalized Fourier transforms of the measures on the Borel class of L2[0,T]. We will then investigate their relationships. Particularly, we prove that the conditional transform of the conditional convolution product can be expressed by the product of the conditional transforms of each function. Finally we will establish change of scale formulas for the conditional transforms and the conditional convolution products. In these evaluation formulas and change of scale formulas, we use multivariate normal distributions so that the conditioning function does not contain present positions of the paths.

  16. A new approach for the evaluation of the effective electrode spacing in spherical ion chambers

    Energy Technology Data Exchange (ETDEWEB)

    Maghraby, Ahmed M., E-mail: maghrabism@yahoo.com [National Institute of Standards (NIS), Ionizing Radiation Metrology Laboratory, Tersa Street 12211, Giza P.O. Box: 136 (Egypt); Shqair, Mohammed [Physics Department, Faculty of Science and Humanities, Sattam Bin Abdul Aziz University, Alkharj (Saudi Arabia)

    2016-10-21

    Proper determination of the effective electrode spacing (d_eff) of an ion chamber ensures proper determination of its collection efficiency in either continuous or pulsed radiation, in addition to proper evaluation of the transit time. Boag's method for the determination of d_eff assumes a spherical internal electrode in spherical ion chambers, which is not always true; except in some cases, the common shape is cylindrical. The current work provides a new approach for the evaluation of the effective electrode spacing in spherical ion chambers considering the cylindrical shape of the internal electrode. Results indicated that d_eff values obtained through the current work are less than those obtained using Boag's method by factors ranging from 12.1% to 26.9%. The current method also impacts the numerically evaluated collection efficiency (f), where values obtained differ by factors up to 3% at low potential (V) values, while at high V values minor differences were noticed. Additionally, impacts on the evaluation of the transit time (τ_i) were obtained. It is concluded that approximating the internal electrode as a sphere may result in false values of d_eff, f, and τ_i.
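
    For context, the equivalent plate separation that Boag's method assigns to a fully spherical geometry (outer-electrode radius a, spherical inner electrode of radius b) is commonly quoted in the form below. This is given here as the standard textbook form the abstract revisits, from memory rather than from the paper, so it is worth verifying against Boag's original work:

```latex
d_{\mathrm{eff}} = (a-b)\,K_{\mathrm{sph}},
\qquad
K_{\mathrm{sph}} = \left[\frac{1}{3}\left(\frac{a}{b} + 1 + \frac{b}{a}\right)\right]^{1/2}
```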

  17. Space station electrical power distribution analysis using a load flow approach

    Science.gov (United States)

    Emanuel, Ervin M.

    1987-01-01

    The space station's electrical power system will evolve and grow in a manner much like present terrestrial electrical power utilities. The initial baseline reference configuration will contain more than 50 nodes or buses, inverters, transformers, overcurrent protection devices, distribution lines, solar arrays, and/or solar dynamic power generating sources. The system is designed to manage and distribute 75 kW of power, single phase or three phase at 20 kHz, and grow to a level of 300 kW steady state, and must be capable of operating at a peak of 450 kW for 5 to 10 min. In order to plan far into the future and keep pace with load growth, a load flow power system analysis approach must be developed and utilized. This method is a well-known energy assessment and management tool that is widely used throughout the electrical power utility industry. The results of a comprehensive evaluation and assessment of an Electrical Distribution System Analysis Program (EDSA) are discussed. Its potential use as an analysis and design tool for the 20 kHz space station electrical power system is addressed.
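
    Load flow analysis of the kind EDSA performs solves the nonlinear power-balance equations at every bus. A minimal Gauss-Seidel sketch for PQ buses is shown below; it illustrates the iteration such tools perform internally and is not EDSA's actual algorithm. Bus data and impedances are invented:

```python
import numpy as np

def gauss_seidel_load_flow(Y, S, V0, slack=0, tol=1e-8, max_iter=500):
    """Minimal per-unit Gauss-Seidel load flow for PQ buses (a sketch).

    Y  : complex bus admittance matrix (n x n)
    S  : complex power injections at each bus (generation minus load)
    V0 : initial complex voltage guesses; V0[slack] is held fixed.
    """
    V = V0.astype(complex).copy()
    n = len(V)
    for _ in range(max_iter):
        V_prev = V.copy()
        for i in range(n):
            if i == slack:
                continue
            # Standard update: V_i = [ (S_i/V_i)^* - sum_{k!=i} Y_ik V_k ] / Y_ii
            coupling = Y[i] @ V - Y[i, i] * V[i]
            V[i] = (np.conj(S[i]) / np.conj(V[i]) - coupling) / Y[i, i]
        if np.max(np.abs(V - V_prev)) < tol:
            break
    return V

# Two-bus example: slack bus plus one load bus drawing 0.5 + 0.2j pu.
y = 1.0 / (0.01 + 0.05j)                   # line admittance
Y = np.array([[y, -y], [-y, y]])
S = np.array([0.0 + 0.0j, -(0.5 + 0.2j)])  # load modelled as negative injection
print(gauss_seidel_load_flow(Y, S, np.array([1.0 + 0j, 1.0 + 0j])))
```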

  18. Ethical approach to digital skills. Sense and use in virtual educational spaces

    Directory of Open Access Journals (Sweden)

    Juan GARCÍA-GUTIÉRREZ

    2013-12-01

    In the context of technology and cyberspace, should we do everything we can do? The answer usually given to this question is not ethical but political: safety. Safety and security are overshadowing the ethical question about the meaning of technology. Cyberspace imposes a "new logic" and new forms of "ownership". When it comes to children and the Internet, a logic of accountability towards cyberspace is not always adopted, even though the Internet is a space that is not only technical but also ethical. We talk about a safe Internet, a healthy Internet, an Internet fit for children... why not talk about an ethical Internet? With this work we approach digital skills as those skills that help us position and orient ourselves in cyberspace, something that is not possible without ethical skills as well. In this article we therefore try to build and propose a model for analyzing virtual learning spaces (and cyberspace in general) based on the categories of "use" and "sense" as different levels of appropriation that indicate the types of competences needed to access cyberspace.

  19. Scattering in quantum field theory: the M.P.S.A. approach in complex momentum space

    International Nuclear Information System (INIS)

    Bros, J.

    1981-02-01

    In this course, we intend to show how 'Many-Particle Structure Analysis' (M.P.S.A.) can be worked out in the standard field-theoretical framework, by using integral relations in complex momentum space involving 'l-particle irreducible kernels'. The ultimate purpose of this approach is to obtain the best possible knowledge of the singularities (location, nature, type of ramification) and of the ambient holomorphy (or meromorphy) domains of the n-point Green functions and scattering amplitudes, and at the same time to derive analytic structural equations for them which display the global organization of these singularities. The generation of Landau singularities for integrals and Fredholm resolvents, taken on cycles in complex space, will be explained on the basis of the Picard-Lefschetz formula (presented and used in simple situations). Among various results described, we present and analyse a structural equation for the six-point function (and for the 3 → 3 particle scattering function), valid in a domain containing the three-particle normal threshold

  20. Modeling solvation effects in real-space and real-time within density functional approaches

    Energy Technology Data Exchange (ETDEWEB)

    Delgado, Alain [Istituto Nanoscienze - CNR, Centro S3, via Campi 213/A, 41125 Modena (Italy); Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear, Calle 30 # 502, 11300 La Habana (Cuba); Corni, Stefano; Pittalis, Stefano; Rozzi, Carlo Andrea [Istituto Nanoscienze - CNR, Centro S3, via Campi 213/A, 41125 Modena (Italy)

    2015-10-14

    The Polarizable Continuum Model (PCM) can be used in conjunction with Density Functional Theory (DFT) and its time-dependent extension (TDDFT) to simulate the electronic and optical properties of molecules and nanoparticles immersed in a dielectric environment, typically liquid solvents. In this contribution, we develop a methodology to account for solvation effects in real-space (and real-time) (TD)DFT calculations. The boundary element method is used to calculate the solvent reaction potential in terms of the apparent charges that spread over the van der Waals solute surface. In a real-space representation, this potential may exhibit a Coulomb singularity at grid points that are close to the cavity surface. We propose a simple approach to regularize such singularity by using a set of spherical Gaussian functions to distribute the apparent charges. We have implemented the proposed method in the OCTOPUS code and present results for the solvation free energies and solvatochromic shifts for a representative set of organic molecules in water.
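
    The regularization idea is that each point-like apparent charge is replaced by a narrow spherical Gaussian, whose electrostatic potential is finite everywhere. Under that assumption (the width convention for σ_i is assumed here, not taken from the paper), the reaction potential sampled on the real-space grid takes the familiar error-function form:

```latex
% Apparent charges q_i at surface points s_i, each smeared into a
% spherical Gaussian of width sigma_i:
v_{\mathrm{reac}}(\mathbf{r}) = \sum_i q_i\,
\frac{\operatorname{erf}\!\big(|\mathbf{r}-\mathbf{s}_i|/\sigma_i\big)}{|\mathbf{r}-\mathbf{s}_i|}
% Near a charge, erf(r/sigma)/r -> 2/(sqrt(pi) sigma), so the potential
% stays finite on every grid point instead of diverging as 1/r.
```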

  1. A novel method for creating working space during endoscopic thyroidectomy via bilateral areolar approach.

    Science.gov (United States)

    Tan, Yi-Hong; Du, Guo-Neng; Xiao, Yu-Gen; Qiu, Wan-Shou; Wu, Tao

    2013-12-01

    Endoscopic thyroidectomy (ET) can be performed through the bilateral areolar approach (BAA). A working space (WS) is typically created on the surface of the pectoral fascia in the chest wall and in the subplatysmal space in the neck. There are several limitations of using this WS. The aim of this study was to establish a new WS for ET. A retrospective review was performed on 85 patients with benign thyroid nodules who had undergone ET through a BAA. A WS was created between the anterior and posterior layers of the superficial pectoral fascia (SPF) in the chest and underneath the deep layer of the investing layer (IL) in the neck. The time for creating the WS was 7.2 ± 2.1 (range, 5-12) minutes. No hemorrhage occurred during the procedure. Fat liquefaction occurred in 2 patients. Edema of the neck skin flap presented as lack of a suprasternal notch. No skin numbness occurred. No patient required postoperative pain medication. All patients were extremely satisfied with the cosmetic results. This new method of establishing a WS between the two layers of the SPF and underneath the IL is simple and fast, provides good exposure, yields less postoperative pain, and has a lower risk of skin burn.

  2. Large-scaled biomonitoring of trace-element air pollution: goals and approaches

    International Nuclear Information System (INIS)

    Wolterbeek, H.T.

    2000-01-01

    Biomonitoring is often used in multi-parameter approaches, especially in larger-scale surveys. The information obtained may consist of thousands of data points, which can be processed in a variety of mathematical routines to permit a condensed and strongly-smoothed presentation of results and conclusions. Although reports on larger-scale biomonitoring surveys are 'easy-to-read' and often include far-reaching interpretations, it is not possible to obtain an insight into the real meaningfulness or quality of the survey performed. In any set-up, the aims of the survey should be put forward as clearly as possible. Is the survey to provide information on atmospheric element levels, or on total, wet and dry deposition? What should be the time or geographical scale and resolution of the survey? Which elements should be determined? Is the survey to give information on emission or immission characteristics? Answers to all these questions are of paramount importance, not only regarding the choice of the biomonitoring species or necessary handling/analysis techniques, but also with respect to planning and personnel, and, not to forget, the expected/available means of data interpretation. In considering a survey set-up, rough survey dimensions may follow directly from the goals; in practice, however, they will be governed by other aspects such as available personnel, handling means/capacity, costs, etc. In what sense and to what extent these factors may cause the survey to drift away from the pre-set goals should receive ample attention: in extreme cases the survey should not be carried out. Bearing in mind the above considerations, the present paper focuses on goals, quality and approaches of larger-scale biomonitoring surveys of trace-element air pollution. The discussion comprises practical problems, options, decisions, analytical means, quality measures, and eventual survey results. (author)

  3. Multi-scale approach for predicting fish species distributions across coral reef seascapes.

    Directory of Open Access Journals (Sweden)

    Simon J Pittman

    Two of the major limitations to effective management of coral reef ecosystems are a lack of information on the spatial distribution of marine species and a paucity of data on the interacting environmental variables that drive distributional patterns. Advances in marine remote sensing, together with the novel integration of landscape ecology and advanced niche modelling techniques, provide an unprecedented opportunity to reliably model and map marine species distributions across many kilometres of coral reef ecosystems. We developed a multi-scale approach using three-dimensional seafloor morphology and across-shelf location to predict spatial distributions for five common Caribbean fish species. Seascape topography was quantified from high resolution bathymetry at five spatial scales (5-300 m radii) surrounding fish survey sites. Model performance and map accuracy were assessed for two high-performing machine-learning algorithms: Boosted Regression Trees (BRT) and Maximum Entropy Species Distribution Modelling (MaxEnt). The three most important predictors were geographical location across the shelf, followed by a measure of topographic complexity. Predictor contribution differed among species, yet rarely changed across spatial scales. BRT provided 'outstanding' model predictions (AUC > 0.9) for three of five fish species. MaxEnt provided 'outstanding' model predictions for two of five species, with the remaining three models considered 'excellent' (AUC = 0.8-0.9). In contrast, MaxEnt spatial predictions were markedly more accurate (92% map accuracy) than BRT (68% map accuracy). We demonstrate that reliable spatial predictions for a range of key fish species can be achieved by modelling the interaction between the geographical location across the shelf and the topographic heterogeneity of seafloor structure. This multi-scale, analytic approach is an important new cost-effective tool to accurately delineate essential fish habitat and support…
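
    The modelling pattern described here (boosted trees on cross-shelf position plus multi-scale topographic predictors, scored by AUC) can be sketched compactly with scikit-learn. This is a self-contained illustration on synthetic data, not the study's data or code; all feature names, coefficients, and prevalences are assumptions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
shelf = rng.uniform(0, 10, n)               # cross-shelf location (km)
rug_5m = rng.uniform(0, 1, n)               # rugosity at 5 m radius
rug_300m = rng.uniform(0, 1, n)             # rugosity at 300 m radius
X = np.column_stack([shelf, rug_5m, rug_300m])

# Synthetic presence/absence driven mostly by cross-shelf location,
# mimicking the abstract's finding that location dominates.
logit = -2 + 0.4 * shelf + 2.0 * rug_300m
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
brt = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05,
                                 max_depth=3).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, brt.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.2f}")  # AUC > 0.9 is 'outstanding' on the study's scale
```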

  4. Prediction of scaling physics laws for proton acceleration with extended parameter space of the NIF ARC

    Science.gov (United States)

    Bhutwala, Krish; Beg, Farhat; Mariscal, Derek; Wilks, Scott; Ma, Tammy

    2017-10-01

    The Advanced Radiographic Capability (ARC) laser at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory is the world's most energetic short-pulse laser. It comprises four beamlets, each of substantial energy (∼1.5 kJ), extended short-pulse duration (10-30 ps), and large focal spot (≥50% of energy in a 150 µm spot). This allows ARC to achieve proton and light ion acceleration via the Target Normal Sheath Acceleration (TNSA) mechanism, but it is yet unknown how proton beam characteristics scale with ARC-regime laser parameters. As theory has also not yet been validated for laser-generated protons at ARC-regime laser parameters, we attempt to formulate the scaling physics of proton beam characteristics as a function of laser energy, intensity, focal spot size, pulse length, target geometry, etc. through a review of relevant proton acceleration experiments from laser facilities across the world. These predicted scaling laws should then guide target design and future diagnostics for desired proton beam experiments on the NIF ARC. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and funded by the LLNL LDRD program under tracking code 17-ERD-039.
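
    A common starting point for TNSA parameter studies of this kind is the ponderomotive ("Wilks") hot-electron scaling, quoted below as a standard reference form rather than as this work's own result; the proton cutoff energy is then typically parametrized as a multiple of T_hot that grows with pulse duration and spot size:

```latex
T_{\mathrm{hot}} \simeq m_e c^2\left(\sqrt{1 + a_0^2/2} - 1\right),
\qquad
a_0 \simeq 0.85\,\sqrt{\frac{I\,\lambda_{\mu\mathrm{m}}^2}{10^{18}\ \mathrm{W\,cm^{-2}}}}
% I: laser intensity; lambda_um: wavelength in microns (linear polarization).
```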

  5. Modelling airborne gravity data by means of adapted Space-Wise approach

    Science.gov (United States)

    Sampietro, Daniele; Capponi, Martina; Hamdi Mansi, Ahmed; Gatti, Andrea

    2017-04-01

    Regional gravity field modelling by means of the remove-restore procedure is nowadays widely applied to predict grids of gravity anomalies (Bouguer, free-air, isostatic, etc.) in gravimetric geoid determination as well as in exploration geophysics. Considering this last application, due to the required accuracy and resolution, airborne gravity observations are generally adopted. However, due to the relatively high acquisition velocity, the presence of atmospheric turbulence, aircraft vibration, instrumental drift, etc., airborne data are contaminated by a very high observation error. For this reason, a proper procedure to filter the raw observations at both low and high frequencies should be applied to recover valuable information. In this work, a procedure to predict a grid or a set of filtered along-track gravity anomalies, by merging a GGM and the airborne dataset, is presented. The proposed algorithm, like the Space-Wise approach developed by Politecnico di Milano in the framework of GOCE data analysis, is based on a combination of an along-track Wiener filter and a Least Squares Collocation adjustment, and properly considers the different altitudes of the gravity observations. Among the main differences with respect to the satellite application of the Space-Wise approach there is the fact that, while in processing GOCE data the stochastic characteristics of the observation error can be considered a priori well known, in airborne gravimetry, due to the complex environment in which the observations are acquired, these characteristics are unknown and should be retrieved from the dataset itself. Some innovative theoretical aspects, focusing in particular on the theoretical covariance modelling, are presented too. In the end, the goodness of the procedure is evaluated by means of a test on real data, recovering the gravitational signal with a predicted accuracy of about 0.25 mGal.
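
    The along-track Wiener filtering step can be sketched in the frequency domain: each spectral component of the observed profile is weighted by the ratio of assumed signal power to total power. This is a minimal illustration of the filtering concept only; in the adapted Space-Wise approach the signal and noise statistics are estimated from the data themselves, and all arrays and PSD shapes below are assumptions:

```python
import numpy as np

def along_track_wiener(obs, signal_psd, noise_psd):
    """Frequency-domain Wiener filter of one along-track gravity profile."""
    F = np.fft.rfft(obs)
    gain = signal_psd / (signal_psd + noise_psd)  # Wiener gain per frequency
    return np.fft.irfft(F * gain, n=len(obs))

# Synthetic track: smooth anomaly plus white observation noise.
x = np.linspace(0, 1, 512)
truth = 5 * np.sin(2 * np.pi * 3 * x)
obs = truth + np.random.default_rng(0).normal(0, 2, x.size)

f = np.fft.rfftfreq(x.size)
sig_psd = 25 * np.exp(-(f / 0.02) ** 2) + 1e-6  # assumed low-frequency signal
noise_psd = np.full_like(f, 4.0)                # assumed white noise level
print(along_track_wiener(obs, sig_psd, noise_psd)[:5])
```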

  6. Design Space Approach for Preservative System Optimization of an Anti-Aging Eye Fluid Emulsion.

    Science.gov (United States)

    Lourenço, Felipe Rebello; Francisco, Fabiane Lacerda; Ferreira, Márcia Regina Spuri; Andreoli, Terezinha De Jesus; Löbenberg, Raimar; Bou-Chacra, Nádia

    2015-01-01

    The use of preservatives must be optimized in order to ensure the efficacy of an antimicrobial system as well as the product safety. Despite the wide variety of preservatives, the synergistic or antagonistic effects of their combinations are not well established, and this is still an issue in the development of pharmaceutical and cosmetic products. The purpose of this paper was to establish a design space using a simplex-centroid approach to achieve the lowest effective concentration of 3 preservatives (methylparaben, propylparaben, and imidazolidinyl urea) and EDTA for an emulsion cosmetic product. Twenty-two emulsion formulae differing only in imidazolidinyl urea (A: 0.00 to 0.30% w/w), methylparaben (B: 0.00 to 0.20% w/w), propylparaben (C: 0.00 to 0.10% w/w) and EDTA (D: 0.00 to 0.10% w/w) concentrations were prepared. They were tested alone and in binary, ternary and quaternary combinations. Aliquots of these formulae were inoculated with several microorganisms. An electrochemical method was used to determine microbial burden immediately after inoculation and after 2, 4, 8, 12, 24, 48, and 168 h. An optimization strategy was used to obtain the concentrations of preservatives and EDTA resulting in the most effective preservative system against all microorganisms simultaneously. The use of preservatives and EDTA in combination has the advantage of exhibiting a potential synergistic effect against a wider spectrum of microorganisms. Based on graphic and optimization strategies, we proposed a new formula containing a quaternary combination (A: 55%; B: 30%; C: 5% and D: 10% w/w), which complies with the specification of a conventional challenge test. A design space approach was successfully employed in the optimization of concentrations of preservatives and EDTA in an emulsion cosmetic product.
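
    A simplex-centroid design for q mixture components consists of every nonempty subset blended in equal proportions, 2^q - 1 = 15 canonical runs for the four factors here; the study's 22 formulae presumably add further points. The sketch below generates only the canonical runs, with proportions that would then be scaled to the stated concentration ranges (an illustration, not the study's design matrix):

```python
from itertools import combinations

def simplex_centroid(components):
    """Generate the canonical runs of a simplex-centroid mixture design:
    every nonempty subset of components in equal proportions."""
    runs = []
    for r in range(1, len(components) + 1):
        for subset in combinations(components, r):
            share = 1.0 / len(subset)
            runs.append({c: (share if c in subset else 0.0)
                         for c in components})
    return runs

design = simplex_centroid(["imidazolidinyl urea", "methylparaben",
                           "propylparaben", "EDTA"])
print(len(design))   # 15 blends for 4 components (2^4 - 1)
print(design[-1])    # quaternary centroid: 25% of each component
```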

  7. Quantum harmonic Brownian motion in a general environment: A modified phase-space approach

    International Nuclear Information System (INIS)

    Yeh, L.

    1993-01-01

    After extensive investigations over three decades, the linear-coupling model and its equivalents have become the standard microscopic models for quantum harmonic Brownian motion, in which a harmonically bound Brownian particle is coupled to a quantum dissipative heat bath of general type modeled by infinitely many harmonic oscillators. The dynamics of these models have been studied by many authors using the quantum Langevin equation, the path-integral approach, quasi-probability distribution functions (e.g., the Wigner function), etc. However, the quantum Langevin equation is only applicable to some special problems, while other approaches all involve complicated calculations due to the inevitable reduction (i.e., contraction) operation for ignoring/eliminating the degrees of freedom of the heat bath. In this dissertation, the author proposes an improved methodology via a modified phase-space approach which employs the characteristic function (the symplectic Fourier transform of the Wigner function) as the representative of the density operator. This representative is claimed to be the most natural one for performing the reduction, not only because of its simplicity but also because of its manifestation of geometric meaning. Accordingly, it is particularly convenient for studying the time evolution of the Brownian particle with an arbitrary initial state. The power of this characteristic function is illuminated through a detailed study of several physically interesting problems, including the environment-induced damping of quantum interference, the exact quantum Fokker-Planck equations, and the relaxation of non-factorizable initial states. All derivations and calculations are shown to be much simplified in comparison with other approaches. In addition to dynamical problems, a novel derivation of the fluctuation-dissipation theorem which is valid for all quantum linear systems is presented
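
    The representative used in this approach, the symplectic Fourier transform of the Wigner function, is equivalently an expectation value of a displacement-type operator. The standard definitions are (conventions vary by signs and factors of 2π):

```latex
\chi(\lambda,\mu) = \operatorname{Tr}\!\left[\hat\rho\,
e^{\,i(\lambda \hat x + \mu \hat p)}\right]
= \iint W(x,p)\, e^{\,i(\lambda x + \mu p)}\, dx\, dp
```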

  8. Disease severity, not operative approach, drives organ space infection after pediatric appendectomy.

    Science.gov (United States)

    Kelly, Kristin N; Fleming, Fergal J; Aquina, Christopher T; Probst, Christian P; Noyes, Katia; Pegoli, Walter; Monson, John R T

    2014-09-01

    This study examines patient and operative factors associated with organ space infection (OSI) in children after appendectomy, specifically focusing on the role of operative approach. Although controversy exists regarding the risk of increased postoperative intra-abdominal infections after laparoscopic appendectomy, this approach has been largely adopted in the treatment of pediatric acute appendicitis. Children aged 2 to 18 years undergoing open or laparoscopic appendectomy for acute appendicitis were selected from the 2012 American College of Surgeons Pediatric National Surgical Quality Improvement Program database. Univariate analysis compared patient and operative characteristics with 30-day OSI and incisional complication rates. Factors with a P value of less than 0.1 and clinical importance were included in the multivariable logistic regression models. A P value less than 0.05 was considered significant. For 5097 children undergoing appendectomy, 4514 surgical procedures (88.6%) were performed laparoscopically. OSI occurred in 155 children (3%), with half of these infections developing postdischarge. Significant predictors for OSI included complicated appendicitis, preoperative sepsis, wound class III/IV, and longer operative time. Although 5.2% of patients undergoing open surgery developed OSI (odds ratio = 1.82; 95% confidence interval, 1.21-2.76; P = 0.004), operative approach was not associated with increased relative odds of OSI (odds ratio = 0.99; confidence interval, 0.64-1.55; P = 0.970) after adjustment for other risk factors. Overall, the model had excellent predictive ability (c-statistic = 0.837). This model suggests that disease severity, not operative approach, as previously suggested, drives OSI development in children. Although 88% of appendectomies in this population were performed laparoscopically, these findings support utilization of the surgeon's preferred surgical technique and may help guide postoperative counsel in high-risk children.
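
    The adjusted odds ratios and c-statistic reported here come from multivariable logistic regression. Below is a hedged, self-contained sketch of that analysis pattern on synthetic data; variable names mirror the abstract, but coefficients and prevalences are invented, and the synthetic 'laparoscopic' effect is deliberately null to echo the study's adjusted finding:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
df = pd.DataFrame({
    "laparoscopic": rng.random(n) < 0.886,  # ~88.6% laparoscopic, as reported
    "complicated": rng.random(n) < 0.25,
    "preop_sepsis": rng.random(n) < 0.10,
})
# OSI risk driven by disease severity only; approach has no true effect here.
logit = -4 + 1.5 * df.complicated + 1.0 * df.preop_sepsis
df["osi"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(
    df[["laparoscopic", "complicated", "preop_sepsis"]].astype(float))
fit = sm.Logit(df["osi"].astype(float), X).fit(disp=0)
print(np.exp(fit.params))  # adjusted odds ratios; 'laparoscopic' is near 1
```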

  9. Serbian translation of the 20-item toronto alexithymia scale: Psychometric properties and the new methodological approach in translating scales

    Directory of Open Access Journals (Sweden)

    Trajanović Nikola N.

    2013-01-01

    Introduction. Since the inception of the alexithymia construct in the 1970s, there has been a continuous effort to improve both its theoretical postulates and its clinical utility through the development, standardization and validation of assessment scales. Objective. The aim of this study was to validate the Serbian translation of the 20-item Toronto Alexithymia Scale (TAS-20) and to propose a new method of translation of scales with a property of temporal stability. Methods. The scale was expertly translated by bilingual medical professionals and a linguist, and given to a sample of bilingual participants from the general population who completed both the English and the Serbian version of the scale one week apart. Results. The findings showed that the Serbian version of the TAS-20 had good internal consistency reliability for the total scale (α=0.86) and acceptable reliability of the three factors (α=0.71-0.79). Conclusion. The analysis confirmed the validity and consistency of the Serbian translation of the scale, with an observed weakness of the factorial structure consistent with studies in other languages. The results also showed that the method of utilizing a self-controlled bilingual subject is a useful alternative to the back-translation method, particularly in cases of linguistically and structurally sensitive scales, or in cases where a larger sample is not available. This method, dubbed 'forth-translation', could be used to translate psychometric scales measuring properties which have temporal stability over a period of at least several weeks.

  10. Hybrid approaches to nanometer-scale patterning: Exploiting tailored intermolecular interactions

    International Nuclear Information System (INIS)

    Mullen, Thomas J.; Srinivasan, Charan; Shuster, Mitchell J.; Horn, Mark W.; Andrews, Anne M.; Weiss, Paul S.

    2008-01-01

    In this perspective, we explore hybrid approaches to nanometer-scale patterning, where the precision of molecular self-assembly is combined with the sophistication and fidelity of lithography. Two areas - improving existing lithographic techniques through self-assembly and fabricating chemically patterned surfaces - will be discussed in terms of their advantages, limitations, applications, and future outlook. The creation of such chemical patterns enables new capabilities, including the assembly of biospecific surfaces to be recognized by, and to capture analytes from, complex mixtures. Finally, we speculate on the potential impact and upcoming challenges of these hybrid strategies.

  11. Object-Based Change Detection in Urban Areas: The Effects of Segmentation Strategy, Scale, and Feature Space on Unsupervised Methods

    Directory of Open Access Journals (Sweden)

    Lei Ma

    2016-09-01

    Object-based change detection (OBCD) has recently been receiving increasing attention as a result of rapid improvements in the resolution of remote sensing data. However, some OBCD issues relating to the segmentation of high-resolution images remain to be explored. For example, segmentation units derived using different segmentation strategies, segmentation scales, feature spaces, and change detection methods have rarely been assessed. In this study, we have tested four common unsupervised change detection methods using different segmentation strategies and a series of segmentation scale parameters on two WorldView-2 images of urban areas. We have also evaluated the effect of adding extra textural and Normalized Difference Vegetation Index (NDVI) information instead of using only spectral information. Our results indicated that change detection methods performed better at a medium scale than at a fine scale close to the pixel size. Multivariate Alteration Detection (MAD) always outperformed the other methods tested, at the same confidence level. The overall accuracy appeared to benefit from using a two-date segmentation strategy rather than single-date segmentation. Adding textural and NDVI information appeared to reduce detection accuracy, but the magnitude of this reduction was not consistent across the different unsupervised methods and segmentation strategies. We conclude that a two-date segmentation strategy is useful for change detection in high-resolution imagery, but that the optimization of thresholds is critical for unsupervised change detection methods. Advanced methods need to be explored that can take advantage of additional textural or other parameters.

  12. Modeling and Control of a Large Nuclear Reactor A Three-Time-Scale Approach

    CERN Document Server

    Shimjith, S R; Bandyopadhyay, B

    2013-01-01

    Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable to control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state-space form, thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property,...
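
    The "three-time-scale" structure referred to in the title is conventionally captured by a singularly perturbed state-space model with two small parameters. The generic form below is a schematic of that class, not the monograph's specific reactor model:

```latex
% Slow core dynamics x and two progressively faster variable groups z_1, z_2:
\dot{x} = f(x, z_1, z_2, u), \qquad
\varepsilon_1\,\dot{z}_1 = g_1(x, z_1, z_2, u), \qquad
\varepsilon_2\,\dot{z}_2 = g_2(x, z_1, z_2, u),
\qquad 0 < \varepsilon_2 \ll \varepsilon_1 \ll 1
```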

  13. Scaling strength distributions in quasi-brittle materials from micro- to macro-scales: A computational approach to modeling Nature-inspired structural ceramics

    International Nuclear Information System (INIS)

    Genet, Martin; Couegnat, Guillaume; Tomsia, Antoni P.; Ritchie, Robert O.

    2014-01-01

    This paper presents an approach to predict the strength distribution of quasi-brittle materials across multiple length-scales, with emphasis on Nature-inspired ceramic structures. It permits the computation of the failure probability of any structure under any mechanical load, solely based on considerations of the microstructure and its failure properties by naturally incorporating the statistical and size-dependent aspects of failure. We overcome the intrinsic limitations of single periodic unit-based approaches by computing the successive failures of the material components and associated stress redistributions on arbitrary numbers of periodic units. For large size samples, the microscopic cells are replaced by a homogenized continuum with equivalent stochastic and damaged constitutive behavior. After establishing the predictive capabilities of the method, and illustrating its potential relevance to several engineering problems, we employ it in the study of the shape and scaling of strength distributions across differing length-scales for a particular quasi-brittle system. We find that the strength distributions display a Weibull form for samples of size approaching the periodic unit; however, these distributions become closer to normal with further increase in sample size before finally reverting to a Weibull form for macroscopic sized samples. In terms of scaling, we find that the weakest link scaling applies only to microscopic, and not macroscopic scale, samples. These findings are discussed in relation to failure patterns computed at different size-scales. (authors)
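
    The Weibull form that the strength distributions take at the smallest and largest sizes, and the weakest-link scaling the paper tests, are conventionally written as below (standard notation, assumed here rather than quoted from the paper):

```latex
% Failure probability of a sample of volume V under uniform stress sigma;
% sigma_0 and m are the scale and shape parameters of a reference volume V_0:
P_f(\sigma, V) = 1 - \exp\!\left[-\frac{V}{V_0}
\left(\frac{\sigma}{\sigma_0}\right)^{m}\right]
```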

  14. A simulation based optimization approach to model and design life support systems for manned space missions

    Science.gov (United States)

    Aydogan, Selen

    This dissertation considers the problem of process synthesis and design of life-support systems for manned space missions. A life-support system is a set of technologies to support human life during short- and long-term spaceflights by providing the basic life-support elements, such as oxygen, potable water, and food. The design of the system needs to meet the crewmember demand for the basic life-support elements (products of the system) and it must process the loads generated by the crewmembers. The system is subject to a myriad of uncertainties because most of the technologies involved are still under development. The result is high levels of uncertainty in the estimates of the model parameters, such as recovery rates or process efficiencies. Moreover, due to the high recycle rates within the system, the uncertainties are amplified and propagated within the system, resulting in a complex problem. In this dissertation, two algorithms have been successfully developed to help make design decisions for life-support systems. The algorithms utilize a simulation-based optimization approach that combines a stochastic discrete-event simulation and a deterministic mathematical programming approach to generate multiple, unique realizations of the controlled evolution of the system. The timelines are analyzed using time series data mining techniques and statistical tools to determine the necessary technologies, their deployment schedules and capacities, and the necessary basic life-support element amounts to support crew life and activities for the mission duration.

  15. Application of a Systems Engineering Approach to Support Space Reactor Development

    International Nuclear Information System (INIS)

    Wold, Scott

    2005-01-01

    In 1992, approximately 25 Russian and 12 U.S. engineers and technicians were involved in the transport, assembly, inspection, and testing of over 90 tons of Russian equipment associated with the Thermionic System Evaluation Test (TSET) Facility. The entire Russian Baikal Test Stand, consisting of a 5.79 m tall vacuum chamber and related support equipment, was reassembled and tested at the TSET facility in less than four months. In November 1992, the first non-nuclear operational test of a complete thermionic power reactor system in the U.S. was accomplished three months ahead of schedule and under budget. A major factor in this accomplishment was the application of a disciplined top-down systems engineering approach and application of a spiral development model to achieve the desired objectives of the TOPAZ International Program (TIP). Systems Engineering is a structured discipline that helps programs and projects conceive, develop, integrate, test and deliver products and services that meet customer requirements within cost and schedule. This paper discusses the impact of Systems Engineering and a spiral development model on the success of the TOPAZ International Program and how the application of a similar approach could help ensure the success of future space reactor development projects

  16. Revealing the correlation between real-space structure and chiral magnetic order at the atomic scale

    Science.gov (United States)

    Hauptmann, Nadine; Dupé, Melanie; Hung, Tzu-Chao; Lemmens, Alexander K.; Wegner, Daniel; Dupé, Bertrand; Khajetoorians, Alexander A.

    2018-03-01

    We image simultaneously the geometric, the electronic, and the magnetic structures of a buckled iron bilayer film that exhibits chiral magnetic order. We achieve this by combining spin-polarized scanning tunneling microscopy and magnetic exchange force microscopy (SPEX) to independently characterize the geometric as well as the electronic and magnetic structures of nonflat surfaces. This new SPEX imaging technique reveals the geometric height corrugation of the reconstruction lines resulting from strong strain relaxation in the bilayer, enabling the decomposition of the real-space from the electronic structure at the atomic level and the correlation with the resultant spin-spiral ground state. By additionally utilizing adatom manipulation, we reveal the chiral magnetic ground state of portions of the unit cell that were not previously imaged with spin-polarized scanning tunneling microscopy alone. Using density functional theory, we investigate the structural and electronic properties of the reconstructed bilayer and identify the favorable stoichiometry regime in agreement with our experimental result.

  17. Urban open spaces: the relationship of uses within the neighborhood scale

    Directory of Open Access Journals (Sweden)

    Wilza Gomes Reis Lopes

    2007-06-01

    In urban residential areas, the population needs spaces with which they can identify: open spaces that reflect the identity of their inhabitants. The square is thus regarded as a public good of the city, open to all and endowed with functions that meet the physical and mental needs of its inhabitants. The objective of this work was to present the diverse uses found in the squares of the Mocambinho quarter, located in the north zone of the city of Teresina, the ways in which users appropriate these squares, and their interrelations within the context in which they are inserted.

  18. Life-Space Assessment scale to assess mobility: validation in Latin American older women and men.

    Science.gov (United States)

    Curcio, Carmen-Lucia; Alvarado, Beatriz E; Gomez, Fernando; Guerra, Ricardo; Guralnik, Jack; Zunzunegui, Maria Victoria

    2013-10-01

    The Life-Space Assessment (LSA) instrument of the University of Alabama and Birmingham study is a useful and innovative measure of mobility in older populations. The purpose of this article was to assess the reliability, construct validity and convergent validity of the LSA in Latin American older populations. In a cross-sectional study, a total of 150 women and 150 men, aged 65-74 years, were recruited from seniors' community centers in Manizales, Colombia and Natal, Brazil. The LSA questionnaire summarizes where people travel (5 levels from room to places outside of town), how often and any assistance needed. Four LSA variables were obtained according to the maximum life space achieved and the level of independence. As correlates of LSA, education, perception of income sufficiency, depression, cognitive function, and functional measures (objectively and subjectively measured) were explored. The possible modifying effect of the city on correlates of LSA was examined. Reliability for the composite LSA score was substantial (ICC = 0.70; 95 % CI 0.49-0.83) in Manizales. Average levels of LSA scores were higher in those with better functional performance and those who reported less mobility difficulties. Low levels of education, insufficient income, depressive symptoms, and low scores of cognitive function were all significantly related to lower LSA scores. Women in both cities were more likely to be restricted to their neighborhood and had lower LSA scores. This study provides evidence for the validity of LSA in two Latin American populations. Our results suggest that LSA is a good measure of mobility that reflects the interplay of physical functioning with gender and the social and physical environment.

  19. A large-scale view of Space Technology 5 magnetometer response to solar wind drivers.

    Science.gov (United States)

    Knipp, D J; Kilcommons, L M; Gjerloev, J; Redmon, R J; Slavin, J; Le, G

    2015-04-01

    In this data report we discuss reprocessing of the Space Technology 5 (ST5) magnetometer database for inclusion in NASA's Coordinated Data Analysis Web (CDAWeb) virtual observatory. The mission consisted of three spacecraft flying in elliptical orbits, from 27 March to 27 June 2006. Reprocessing includes (1) transforming the data into the Modified Apex Coordinate System for projection to a common reference altitude of 110 km, (2) correcting gain jumps, and (3) validating the results. We display the averaged magnetic perturbations as a keogram, which allows direct comparison of the full-mission data with the solar wind values and geomagnetic indices. With the data referenced to a common altitude, we find the following: (1) Magnetic perturbations that track the passage of corotating interaction regions and high-speed solar wind; (2) unexpectedly strong dayside perturbations during a solstice magnetospheric sawtooth oscillation interval characterized by a radial interplanetary magnetic field (IMF) component that may have enhanced the accompanying modest southward IMF; and (3) intervals of reduced magnetic perturbations or "calms," associated with periods of slow solar wind, interspersed among variable-length episodic enhancements. These calms are most evident when the IMF is northward or projects with a northward component onto the geomagnetic dipole. The reprocessed ST5 data are in very good agreement with magnetic perturbations from the Defense Meteorological Satellite Program (DMSP) spacecraft, which we also map to 110 km. We briefly discuss the methods used to remap the ST5 data and the means of validating the results against DMSP. Our methods form the basis for future intermission comparisons of space-based magnetometer data.

  20. Proportional and scale change models to project failures of mechanical components with applications to space station

    Science.gov (United States)

    Taneja, Vidya S.

    1996-01-01

    In this paper we develop the mathematical theory of proportional and scale change models to perform reliability analysis. The results obtained will be applied to the Reaction Control System (RCS) thruster valves on an orbiter. With the advent of extended EVAs associated with PROX OPS (ISSA & MIR) and docking, the loss of a thruster valve now takes on an expanded safety significance. Previous studies assume a homogeneous population of components, with each component having the same failure rate. However, as various components experience different stresses and are exposed to different environments, their failure rates change with time. In this paper we model the reliability of the thruster valves by treating these valves as a censored repairable system. The model for each valve will take the form of a nonhomogeneous process with an intensity function that is treated either as a proportional hazards model or as a scale-change random-effects hazards model. Each component has an associated z, an independent realization of the random variable Z from a distribution G(z). This unobserved quantity z can be used to describe heterogeneity systematically. For various models, methods for estimating the model parameters using censored data will be developed. Available field data (from previously flown flights) is from non-renewable systems. The estimated failure rate using such data will need to be modified for renewable systems such as thruster valves.
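
    The two intensity models named in the abstract can be written compactly with the unobserved frailty z per valve; the forms below are the standard frailty formulations of these models (notation assumed, not quoted from the paper):

```latex
% Nonhomogeneous process with baseline intensity \lambda_0(t) and frailty
% z ~ G(z) drawn independently for each valve:
\lambda(t \mid z) = z\,\lambda_0(t) \quad \text{(proportional model)},
\qquad
\lambda(t \mid z) = z\,\lambda_0(z t) \quad \text{(scale-change model)}
```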

  1. Flexible feature-space-construction architecture and its VLSI implementation for multi-scale object detection

    Science.gov (United States)

    Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans

    2018-04-01

    Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.
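
    As a software analogue of the configurable block-based L1-norm stage, the sketch below normalizes HOG cell histograms over sliding blocks. The cell grid, block size, and histogram bins are toy assumptions; the paper's contribution is the hardware (VLSI) realization of this kind of computation, not this Python code.

    ```python
    import numpy as np

    def l1_normalize_blocks(cell_hists, block=2, eps=1e-6):
        """L1-normalize HOG cell histograms over sliding blocks of cells,
        making the descriptor robust to illumination and contrast changes.
        Cell size and block size are configurable, as in the circuit."""
        rows, cols, nbins = cell_hists.shape
        out = []
        for r in range(rows - block + 1):
            for c in range(cols - block + 1):
                v = cell_hists[r:r + block, c:c + block].ravel()
                out.append(v / (np.abs(v).sum() + eps))  # L1 normalization
        return np.array(out)

    # Toy input: an 8x8 grid of cells with 9 orientation bins each
    feats = l1_normalize_blocks(np.random.rand(8, 8, 9))
    ```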

  2. The ESI scale, an ethical approach to the evaluation of seismic hazards

    Science.gov (United States)

    Porfido, Sabina; Nappi, Rosa; De Lucia, Maddalena; Gaudiosi, Germana; Alessio, Giuliana; Guerrieri, Luca

    2015-04-01

    The dissemination of correct information about seismic hazard is an ethical duty of the scientific community worldwide. A proper assessment of an earthquake's severity and impact should not ignore the evaluation of its intensity, taking into account the effects on humans and man-made structures as well as on the natural environment. We illustrate the new macroseismic scale that measures intensity taking into account the effects of earthquakes on the environment: the ESI 2007 (Environmental Seismic Intensity) scale (Michetti et al., 2007), ratified by INQUA (International Union for Quaternary Research) during the XVII Congress in Cairns (Australia). The ESI scale integrates and completes the traditional macroseismic scales, of which it represents the evolution, allowing the intensity parameter to be assessed even where buildings are absent or damage-based diagnostic elements saturate. Each degree reflects the corresponding strength of an earthquake and the role of ground effects, evaluating the intensity on the basis of the characteristics and size of primary effects (e.g., surface faulting and tectonic uplift/subsidence) and secondary effects (e.g., ground cracks, slope movements, liquefaction phenomena, hydrological changes, anomalous waves, tsunamis, tree shaking, dust clouds and jumping stones). This approach can be considered "ethical" because it helps to define the real scenario of an earthquake, regardless of a country's socio-economic conditions and level of development. Here lies the value and relevance of macroseismic scales even today, one hundred years after the death of Giuseppe Mercalli, who conceived the homonymous scale for the evaluation of earthquake intensity. For an appropriate mitigation strategy in seismic areas, it is fundamental to consider the role played by seismically induced ground effects, such as active faults (their length and displacement) and secondary effects (the total affected area). With these perspectives two different cases

  3. The use of scale-invariance feature transform approach to recognize and retrieve incomplete shoeprints.

    Science.gov (United States)

    Wei, Chia-Hung; Li, Yue; Gwo, Chih-Ying

    2013-05-01

    Shoeprints left at a crime scene provide valuable information in criminal investigation due to the distinctive patterns in the sole. Those shoeprints are often incomplete and noisy. In this study, the scale-invariant feature transform (SIFT) is proposed and evaluated for recognition and retrieval of partial and noisy shoeprint images. The proposed method first constructs different scale spaces to detect local extrema in the underlying shoeprint images. Those local extrema are treated as useful key points in the image. Next, features are extracted to represent the local patterns around those key points. Then, the system computes the cross-correlation between the query image and each shoeprint image in the database. Experimental results show that full-size prints and prints from the toe area perform best among all shoeprints. Furthermore, the system also demonstrates robustness against noise, as there is only a very slight difference in performance between original and noisy shoeprints. © 2013 American Academy of Forensic Sciences.
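
    The keypoint detection and matching steps described above can be sketched with OpenCV's SIFT implementation. This is a minimal stand-in, not the authors' system: the image paths are hypothetical, the ratio test replaces their cross-correlation scoring, and the database search loop is omitted.

    ```python
    import cv2

    sift = cv2.SIFT_create()
    # Hypothetical file paths for a partial query print and one database print
    query = cv2.imread("query_print.png", cv2.IMREAD_GRAYSCALE)
    ref = cv2.imread("db_print.png", cv2.IMREAD_GRAYSCALE)

    # Keypoints come from local extrema detected across SIFT's scale spaces
    kq, dq = sift.detectAndCompute(query, None)
    kr, dr = sift.detectAndCompute(ref, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(dq, dr, k=2)
    # Lowe's ratio test keeps only distinctive matches
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    score = len(good) / max(len(kq), 1)   # crude similarity for ranking
    ```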

  4. Multi-scale approach in numerical reservoir simulation; Uma abordagem multiescala na simulacao numerica de reservatorios

    Energy Technology Data Exchange (ETDEWEB)

    Guedes, Solange da Silva

    1998-07-01

    Advances in petroleum reservoir descriptions have provided an amount of data that cannot be handled directly during numerical simulations. This detailed geological information must be incorporated into a coarser model during multiphase fluid flow simulations by means of some upscaling technique. The most common approach is the use of pseudo relative permeabilities, and the most widely used method is that of Kyte and Berry (1975). In this work, a multi-scale computational model for multiphase flow is proposed that treats the upscaling implicitly, without using pseudo functions. By solving a sequence of local problems on subdomains of the refined scale, it is possible to achieve results on a coarser grid without the expensive computations of a fine-grid model. The main advantage of this new procedure is that it treats the upscaling step implicitly in the solution process, overcoming some practical difficulties related to the use of traditional pseudo functions. Results of two-dimensional two-phase flow simulations considering homogeneous porous media are presented. Some examples compare the results of this approach with those of the commercial upscaling program PSEUDO, a module of the reservoir simulation software ECLIPSE. (author)
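
    To make the idea of replacing fine-grid detail with an equivalent coarse property concrete, the sketch below computes the harmonic average, which is the exact coarse permeability for one-dimensional flow through layers in series. This illustrates upscaling in its simplest explicit form only; the thesis instead handles the coarse-fine coupling implicitly by solving local subdomain problems during the simulation, and the permeability values here are invented.

    ```python
    import numpy as np

    def upscale_perm_1d(k_fine):
        """Harmonic average: the exact equivalent coarse permeability for
        1-D flow through layers in series. Replaces fine-grid detail with
        a single coarse-grid property."""
        k_fine = np.asarray(k_fine, dtype=float)
        return len(k_fine) / np.sum(1.0 / k_fine)

    k_fine = [100.0, 5.0, 50.0, 1.0]   # toy fine-scale values, mD
    k_coarse = upscale_perm_1d(k_fine)  # dominated by the low-perm layer
    ```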

  5. Evaluation of low impact development approach for mitigating flood inundation at a watershed scale in China.

    Science.gov (United States)

    Hu, Maochuan; Sayama, Takahiro; Zhang, Xingqi; Tanaka, Kenji; Takara, Kaoru; Yang, Hong

    2017-05-15

    Low impact development (LID) has attracted growing attention as an important approach for urban flood mitigation. Most studies evaluating LID performance for mitigating floods focus on changes in peak flow and runoff volume. This paper assessed the performance of LID practices for mitigating flood inundation hazards as retrofitting technologies in an urbanized watershed in Nanjing, China. The findings indicate that LID practices are effective for flood inundation mitigation at the watershed scale, and especially for reducing inundated areas with a high flood hazard risk. Various scenarios of LID implementation levels can reduce total inundated areas by 2%-17% and areas with a high flood hazard level by 6%-80%. Permeable pavement shows better performance than rainwater harvesting at mitigating urban waterlogging. The most efficient scenario combines rainwater harvesting on rooftops with a cistern capacity of 78.5 mm and permeable pavement installed on 75% of non-busy roads and other impervious surfaces. Inundation modeling is an effective approach to obtaining the information necessary to guide decision-making for designing LID practices at watershed scales. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Applying the system engineering approach to devise a master’s degree program in space technology in developing countries

    Science.gov (United States)

    Jazebizadeh, Hooman; Tabeshian, Maryam; Taheran Vernoosfaderani, Mahsa

    2010-11-01

    Although more than half a century has passed since space technology was first developed, developing countries are just beginning to enter the arena, focusing mainly on educating professionals. Space technology is an interdisciplinary science, is costly, and is developing at a fast pace. Moreover, a fruitful education system needs to remain dynamic if the quality of education is the main concern, which makes it a complicated system. This paper makes use of the system engineering approach and the experiences of developed countries in this area, while incorporating the needs of the developing countries, to devise a comprehensive program in space engineering at the Master's level. The needs of the developing countries as regards space technology education may broadly be put into two categories: to raise their knowledge of space technology, which requires hard work and teamwork skills, and to transfer and domesticate space technology while minimizing the costs and maximizing its effectiveness. The requirements of such a space education system, which include research facilities, courses, and student projects, are then defined using a model drawn from the space education systems of universities in North America and Europe, modified to include the above-mentioned needs. Three design concepts have been considered and synthesized through functional analysis. The first is Modular and Detail Study, which helps students specialize in a particular area of space technology. The second is referred to as Integrated and Interdisciplinary Study, which focuses on understanding and development of space systems. Finally, the third concept, which has been chosen for the purpose of this study, is a combination of the other two, categorizing the required curriculum into seven modules, setting aside space applications. This helps students not only to specialize in one of these modules but also to get hands-on experience in a real space project through participation in summer group

  7. A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.

    Science.gov (United States)

    Budinich, Marko; Bourdon, Jérémie; Larhlimi, Abdelhalim; Eveillard, Damien

    2017-01-01

    Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at the community level, were emphasized. We expect this approach to be used for integrating genomic information in microbial ecosystems. The resulting models will provide insight into behaviors (including diversity) that take place at the ecosystem scale.
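
    A weighted-sum toy version of MO-FBA conveys the core idea: trade off two species' growth objectives under a shared steady-state stoichiometric constraint. The stoichiometric matrix, flux bounds, and objective indices below are invented for illustration, and sweeping a scalar weight only approximates the Pareto front that the paper's MO-FBA characterizes.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy community model: S v = 0 enforces steady-state mass balance over
    # shared metabolites; v[2] and v[3] stand in for two species' biomass
    # fluxes. All numbers are assumptions, not data from the paper.
    S = np.array([[1, -1, 0, -1],
                  [0,  1, -1, 0]], dtype=float)
    bounds = [(0, 10)] * 4
    biomass_a, biomass_b = 2, 3

    pareto = []
    for w in np.linspace(0.0, 1.0, 11):
        c = np.zeros(4)
        c[biomass_a], c[biomass_b] = -w, -(1 - w)   # linprog minimizes
        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
        pareto.append((res.x[biomass_a], res.x[biomass_b]))
    ```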

  8. An Integrated Assessment Approach to Address Artisanal and Small-Scale Gold Mining in Ghana

    Directory of Open Access Journals (Sweden)

    Niladri Basu

    2015-09-01

    Full Text Available Artisanal and small-scale gold mining (ASGM) is growing in many regions of the world, including Ghana. The problems in these communities are complex and multi-faceted. To help increase understanding of such problems, and to enable consensus-building and effective translation of scientific findings to stakeholders, help inform policies, and ultimately improve decision making, we utilized an Integrated Assessment approach to study artisanal and small-scale gold mining activities in Ghana. Though Integrated Assessments have been used in the fields of environmental science and sustainable development, their use in addressing specific matters in public health, and in particular environmental and occupational health, is quite limited despite their many benefits. The aim of the current paper was to describe the specific activities undertaken, how they were organized, and the outputs and outcomes of our activity. In brief, three disciplinary workgroups (Natural Sciences, Human Health, Social Sciences and Economics) were formed, with 26 researchers from a range of Ghanaian institutions plus international experts. The workgroups conducted activities in order to address the following question: What are the causes, consequences and correctives of small-scale gold mining in Ghana? More specifically: What alternatives are available in resource-limited settings in Ghana that allow gold mining to occur in a manner that maintains ecological health and human health without hindering near- and long-term economic prosperity? Several response options were identified and evaluated, and are currently being disseminated to various stakeholders within Ghana and internationally.

  9. A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.

    Science.gov (United States)

    Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang

    2016-04-01

    Graph-based ranking has been extensively studied and frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all of this information to improve the ranking performance has become a new and challenging problem. Previous methods utilize only part of such information and attempt to rank graph nodes according to link-based methods, whose ranking performance is severely affected by several well-known issues, e.g., over-fitting or high computational complexity, especially when the scale of the graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit the rich heterogeneous information of the graph to improve the ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach to parameterize the derived information within a unified semi-supervised learning framework (SSLF-GR), and then simultaneously optimize the parameters and the ranking scores of the graph nodes. Experiments on real-world large-scale graphs demonstrate that our method significantly outperforms algorithms that consider such graph information only partially.
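
    The link-based propagation that SSP builds on is ordinary PageRank, sketched below as plain power iteration on a toy adjacency matrix. The semi-supervised part of the paper, which parameterizes transition weights with node and edge features and learns them jointly with the scores, is not reproduced here.

    ```python
    import numpy as np

    def pagerank(A, d=0.85, tol=1e-10):
        """Power-iteration PageRank on adjacency matrix A: repeatedly
        propagate rank mass along out-links with damping factor d."""
        n = A.shape[0]
        P = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)  # row-stochastic
        r = np.full(n, 1.0 / n)
        while True:
            r_new = (1 - d) / n + d * (P.T @ r)
            if np.abs(r_new - r).sum() < tol:
                return r_new
            r = r_new

    # Toy 3-node graph (invented links)
    A = np.array([[0, 1, 1],
                  [1, 0, 0],
                  [0, 1, 0]], dtype=float)
    scores = pagerank(A)
    ```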

  10. A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.

    Directory of Open Access Journals (Sweden)

    Marko Budinich

    Full Text Available Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at the community level, were emphasized. We expect this approach to be used for integrating genomic information in microbial ecosystems. The resulting models will provide insight into behaviors (including diversity) that take place at the ecosystem scale.

  11. An Integrated Assessment Approach to Address Artisanal and Small-Scale Gold Mining in Ghana.

    Science.gov (United States)

    Basu, Niladri; Renne, Elisha P; Long, Rachel N

    2015-09-17

    Artisanal and small-scale gold mining (ASGM) is growing in many regions of the world, including Ghana. The problems in these communities are complex and multi-faceted. To help increase understanding of such problems, and to enable consensus-building and effective translation of scientific findings to stakeholders, help inform policies, and ultimately improve decision making, we utilized an Integrated Assessment approach to study artisanal and small-scale gold mining activities in Ghana. Though Integrated Assessments have been used in the fields of environmental science and sustainable development, their use in addressing specific matters in public health, and in particular environmental and occupational health, is quite limited despite their many benefits. The aim of the current paper was to describe the specific activities undertaken, how they were organized, and the outputs and outcomes of our activity. In brief, three disciplinary workgroups (Natural Sciences, Human Health, Social Sciences and Economics) were formed, with 26 researchers from a range of Ghanaian institutions plus international experts. The workgroups conducted activities in order to address the following question: What are the causes, consequences and correctives of small-scale gold mining in Ghana? More specifically: What alternatives are available in resource-limited settings in Ghana that allow gold mining to occur in a manner that maintains ecological health and human health without hindering near- and long-term economic prosperity? Several response options were identified and evaluated, and are currently being disseminated to various stakeholders within Ghana and internationally.

  12. Wine consumers’ preferences in Spain: an analysis using the best-worst scaling approach

    Directory of Open Access Journals (Sweden)

    Tiziana de-Magistris

    2014-06-01

    Full Text Available Research on wine consumers' preferences has been extensively explored in the academic literature, and the importance of wine attributes has typically been measured with rating or ranking scales. However, the most recent literature on wine preferences has applied the best-worst scaling approach to avoid the biased outcomes derived from using rating or ranking scales in surveys. This study investigates premium red wine consumers' preferences in Spain by applying best-worst scaling. To achieve this goal, a random parameter logit model is applied to assess the impact of wine attributes on the probability of choosing premium quality red wine, using data from an ad-hoc survey conducted in a medium-sized Spanish city. The results suggest that some wine attributes related to past experience (i.e., it matches food), followed by some related to personal knowledge (i.e., the designation of origin), are valued as the most important, whereas other attributes related to the image of the New World (i.e., label or brand name) are perceived as the least important or indifferent.
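
    Before fitting a choice model, best-worst data are often summarized with simple count-based scores. The sketch below computes the standard B-W score, the number of times an item is chosen best minus the times it is chosen worst, divided by the number of respondents, on invented toy choices; the paper itself estimates a random parameter logit rather than stopping at counts.

    ```python
    from collections import Counter

    def bw_scores(choices, n_respondents):
        """Aggregate best-worst choices into B-W scores:
        (# times best - # times worst) / number of respondents."""
        best = Counter(c["best"] for c in choices)
        worst = Counter(c["worst"] for c in choices)
        items = set(best) | set(worst)
        return {i: (best[i] - worst[i]) / n_respondents for i in items}

    # Toy choice sets with hypothetical attribute names
    data = [{"best": "matches food", "worst": "brand name"},
            {"best": "designation of origin", "worst": "label"},
            {"best": "matches food", "worst": "label"}]
    print(bw_scores(data, n_respondents=3))
    ```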

  13. A cross-scale approach to understand drought-induced variability of sagebrush ecosystem productivity

    Science.gov (United States)

    Assal, T.; Anderson, P. J.

    2016-12-01

    Sagebrush (Artemisia spp.) mortality has recently been reported in the Upper Green River Basin (Wyoming, USA) of the sagebrush steppe of western North America. Numerous causes have been suggested, but recent drought (2012-13) is the likely mechanism of mortality in this water-limited ecosystem which provides critical habitat for many species of wildlife. An understanding of the variability in patterns of productivity with respect to climate is essential to exploit landscape scale remote sensing for detection of subtle changes associated with mortality in this sparse, uniformly vegetated ecosystem. We used the standardized precipitation index to characterize drought conditions and Moderate Resolution Imaging Spectroradiometer (MODIS) satellite imagery (250-m resolution) to characterize broad characteristics of growing season productivity. We calculated per-pixel growing season anomalies over a 16-year period (2000-2015) to identify the spatial and temporal variability in productivity. Metrics derived from Landsat satellite imagery (30-m resolution) were used to further investigate trends within anomalous areas at local scales. We found evidence to support an initial hypothesis that antecedent winter drought was most important in explaining reduced productivity. The results indicate drought effects were inconsistent over space and time. MODIS derived productivity deviated by more than four standard deviations in heavily impacted areas, but was well within the interannual variability in other areas. Growing season anomalies highlighted dramatic declines in productivity during the 2012 and 2013 growing seasons. However, large negative anomalies persisted in other areas during the 2014 growing season, indicating lag effects of drought. We are further investigating if the reduction in productivity is mediated by local biophysical properties. Our analysis identified spatially explicit patterns of ecosystem properties altered by severe drought which are consistent with
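
    The per-pixel growing-season anomaly computation described above reduces to standardizing each pixel's time series against its own 16-year record. The sketch below shows that step on a random stand-in stack; the four-standard-deviation threshold echoes the deviation reported for heavily impacted areas, while band choice, compositing, and masking details are omitted.

    ```python
    import numpy as np

    def growing_season_anomalies(stack):
        """Standardized per-pixel anomalies from a (years, rows, cols) stack
        of growing-season composites: (value - pixel mean) / pixel std."""
        mean = stack.mean(axis=0)
        std = stack.std(axis=0)
        return (stack - mean) / np.where(std == 0, np.nan, std)

    stack = np.random.rand(16, 100, 100)   # synthetic 2000-2015 stand-in
    anoms = growing_season_anomalies(stack)
    severe = np.abs(anoms) > 4             # pixels beyond four std devs
    ```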

  14. "Non-cold" dark matter at small scales: a general approach

    Science.gov (United States)

    Murgia, R.; Merle, A.; Viel, M.; Totzauer, M.; Schneider, A.

    2017-11-01

    Structure formation at small cosmological scales provides an important frontier for dark matter (DM) research. Scenarios with small DM particle masses, large momenta or hidden interactions tend to suppress the gravitational clustering at small scales. The details of this suppression depend on the DM particle nature, allowing for a direct link between DM models and astrophysical observations. However, most of the astrophysical constraints obtained so far refer to a very specific shape of the power suppression, corresponding to thermal warm dark matter (WDM), i.e., candidates with a Fermi-Dirac or Bose-Einstein momentum distribution. In this work we introduce a new analytical fitting formula for the power spectrum, which is simple yet flexible enough to reproduce the clustering signal of large classes of non-thermal DM models, which are not at all adequately described by the oversimplified notion of WDM. We show that the formula is able to fully cover the parameter space of sterile neutrinos (whether resonantly produced or from particle decay), mixed cold and warm models, fuzzy dark matter, as well as other models suggested by effective theory of structure formation (ETHOS). Based on this fitting formula, we perform a large suite of N-body simulations and we extract important nonlinear statistics, such as the matter power spectrum and the halo mass function. Finally, we present first preliminary astrophysical constraints, based on linear theory, from both the number of Milky Way satellites and the Lyman-α forest. This paper is a first step towards a general and comprehensive modeling of small-scale departures from the standard cold DM model.
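
    The fitting formula is of the transfer-function type, relating a non-cold model's linear power spectrum to the cold DM one. The sketch below implements an {α, β, γ} parametrization of the kind the paper proposes, T(k) = [1 + (αk)^β]^γ with P_nCDM(k) = T(k)² P_CDM(k); the specific parameter values are illustrative placeholders, not fits from the paper.

    ```python
    import numpy as np

    def transfer(k, alpha, beta, gamma):
        """Generic small-scale suppression of the linear power spectrum:
        T(k) = [1 + (alpha*k)**beta]**gamma, so P_nCDM = T**2 * P_CDM."""
        return (1.0 + (alpha * k) ** beta) ** gamma

    k = np.logspace(-1, 2, 200)                  # wavenumbers, h/Mpc
    # Placeholder parameters chosen only to show a WDM-like cutoff shape
    suppression = transfer(k, alpha=0.05, beta=2.5, gamma=-5.0) ** 2
    ```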

  15. A watershed-scale goals approach to assessing and funding wastewater infrastructure.

    Science.gov (United States)

    Rahm, Brian G; Vedachalam, Sridhar; Shen, Jerry; Woodbury, Peter B; Riha, Susan J

    2013-11-15

    Capital needs during the next twenty years for public wastewater treatment, piping, combined sewer overflow correction, and storm-water management are estimated to be approximately $300 billion for the USA. Financing these needs is a significant challenge, as Federal funding for the Clean Water Act has been reduced by 70% during the last twenty years. There is an urgent need for new approaches to assist states and other decision makers to prioritize wastewater maintenance and improvements. We present a methodology for performing an integrated quantitative watershed-scale goals assessment for sustaining wastewater infrastructure. We applied this methodology to ten watersheds of the Hudson-Mohawk basin in New York State, USA that together are home to more than 2.7 million people, cover 3.5 million hectares, and contain more than 36,000 km of streams. We assembled data on 183 POTWs treating approximately 1.5 million m³ of wastewater per day. For each watershed, we analyzed eight metrics: Growth Capacity, Capacity Density, Soil Suitability, Violations, Tributary Length Impacted, Tributary Capital Cost, Volume Capital Cost, and Population Capital Cost. These metrics were integrated into three goals for watershed-scale management: Tributary Protection, Urban Development, and Urban-Rural Integration. Our results demonstrate that the methodology can be implemented using widely available data, although some verification of data is required. Furthermore, we demonstrate substantial differences in character, need, and the appropriateness of different management strategies among the ten watersheds. These results suggest that it is feasible to perform watershed-scale goals assessment to augment existing approaches to wastewater infrastructure analysis and planning. Copyright © 2013 Elsevier Ltd. All rights reserved.
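
    Integrating several metrics into a goal score can be illustrated with a simple normalize-and-weight step. The abstract does not give the authors' actual weighting scheme, so the min-max normalization and equal weights below are explicit assumptions, with invented numbers standing in for watershed metric values.

    ```python
    import numpy as np

    def goal_score(metrics, weights):
        """Min-max normalize each metric across watersheds, then combine
        into one goal score per watershed via a weighted sum. The equal
        weighting is an illustrative assumption, not the paper's method."""
        m = np.asarray(metrics, dtype=float)
        norm = (m - m.min(axis=0)) / (np.ptp(m, axis=0) + 1e-12)
        return norm @ weights

    # Rows: watersheds; columns: e.g. Growth Capacity, Violations, Capital Cost
    metrics = [[0.8, 12, 3.1],
               [0.3, 4, 1.2],
               [0.6, 9, 2.5]]
    scores = goal_score(metrics, weights=np.full(3, 1 / 3))
    ```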

  16. Space-Time Dynamics of Soil Moisture and Temperature: Scale issues

    Science.gov (United States)

    Mohanty, Binayak P.; Miller, Douglas A.; van Genuchten, M. Th.

    2003-01-01

    The goal of this project is to gain further understanding of soil moisture/temperature dynamics at different spatio-temporal scales and of the physical controls/parameters involved. We created a comprehensive GIS database, which has been accessed extensively by NASA Land Surface Hydrology investigators (and others); it is located at the following URL: http://www.essc.psu.edu/nasalsh. For soil moisture field experiments such as SGP97, SGP99, SMEX02, and SMEX03, cartographic products were designed for multiple applications, both pre- and post-mission. Pre-mission applications included flight line planning and field operations logistics, as well as general insight into the extent and distribution of soil, vegetation, and topographic properties for the study areas. The cartographic products were created from original spatial information resources that were imported into Adobe Illustrator, where the maps were created and PDF versions were made for distribution and download.

  17. Across Space and Time: Social Responses to Large-Scale Biophysical Systems

    Science.gov (United States)

    Macmynowski, Dena P.

    2007-06-01

    The conceptual rubric of ecosystem management has been widely discussed and deliberated in conservation biology, environmental policy, and land/resource management. In this paper, I argue that two critical aspects of the ecosystem management concept require greater attention in policy and practice. First, although emphasis has been placed on the “space” of systems, the “time”—or rates of change—associated with biophysical and social systems has received much less consideration. Second, discussions of ecosystem management have often neglected the temporal disconnects between changes in biophysical systems and the response of social systems to management issues and challenges. The empirical basis of these points is a case study of the “Crown of the Continent Ecosystem,” an international transboundary area of the Rocky Mountains that surrounds Glacier National Park (USA) and Waterton Lakes National Park (Canada). This project assessed the experiences and perspectives of 1) middle- and upper-level government managers responsible for interjurisdictional cooperation, and 2) environmental nongovernment organizations with an international focus. I identify and describe 10 key challenges to increasing the extent and intensity of transboundary cooperation in land/resource management policy and practice. These issues are discussed in terms of their political, institutional, cultural, information-based, and perceptual elements. Analytic techniques include a combination of environmental history, semistructured interviews with 48 actors, and text analysis in a systematic qualitative framework. The central conclusion of this work is that the rates of response of human social systems must be better integrated with the rates of ecological change. This challenge is equal to or greater than the well-recognized need to adapt the spatial scale of human institutions to large-scale ecosystem processes and transboundary wildlife.

  18. Phase-space densities and effects of resonance decays in a hydrodynamic approach to heavy ion collisions

    International Nuclear Information System (INIS)

    Akkelin, S.V.; Sinyukov, Yu.M.

    2004-01-01

    A method allowing analysis of the overpopulation of phase space in heavy ion collisions in a model-independent way is proposed within the hydrodynamic approach. It makes it possible to extract the chemical potential of thermal pions at freeze-out, irrespective of the form of the freeze-out (isothermal) hypersurface in Minkowski space and of the transverse flows on it. The contributions of resonance (with masses up to 2 GeV) decays to spectra, interferometry volumes, and phase-space densities are calculated and discussed in detail. Estimates of the average phase-space densities and chemical potentials of thermal pions are obtained for SPS and RHIC energies. They demonstrate that multibosonic phenomena at those energies might be considered a correction factor rather than a significant physical effect. The analysis of the evolution of the pion average phase-space density in chemically frozen hadron systems shows that it is almost constant or slightly increases with time, while the particle density and the phase-space density at each space point decrease rapidly during the system's expansion. We found that, unlike the particle density, the average phase-space density has no direct link to the freeze-out criterion and final thermodynamic parameters, being connected rather to the initial phase-space density of hadronic matter formed in relativistic nucleus-nucleus collisions.

  19. Gene prediction in metagenomic fragments: A large scale machine learning approach

    Directory of Open Access Journals (Sweden)

    Morgenstern Burkhard

    2008-04-01

    Full Text Available Abstract Background Metagenomics is an approach to the characterization of microbial genomes via the direct isolation of genomic sequences from the environment without prior cultivation. The amount of metagenomic sequence data is growing fast, while computational methods for metagenome analysis are still in their infancy. In contrast to genomic sequences of single species, which can usually be assembled and analyzed by many available methods, a large proportion of metagenome data remains as unassembled anonymous sequencing reads. One of the aims of all metagenomic sequencing projects is the identification of novel genes. The short length of the fragments (Sanger sequencing yields fragments of on average 700 bp) and the unknown phylogenetic origin of most fragments require approaches to gene prediction that are different from the currently available methods for genomes of single species. In particular, the large size of metagenomic samples requires fast and accurate methods with small numbers of false positive predictions. Results We introduce a novel gene prediction algorithm for metagenomic fragments based on a two-stage machine learning approach. In the first stage, we use linear discriminants for monocodon usage, dicodon usage and translation initiation sites to extract features from DNA sequences. In the second stage, an artificial neural network combines these features with open reading frame length and fragment GC-content to compute the probability that the open reading frame encodes a protein. This probability is used for the classification and scoring of gene candidates. With large scale training, our method provides fast single-fragment predictions with good sensitivity and specificity on artificially fragmented genomic DNA. Additionally, this method is able to predict translation initiation sites accurately and distinguishes complete from incomplete genes with high reliability. Conclusion Large scale machine learning methods are well-suited for gene
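
    The two-stage structure, discriminant features followed by a neural combiner, can be mocked up in a few lines of scikit-learn. Everything below runs on random stand-in data: the codon-usage matrix, ORF lengths, GC values, and labels are synthetic, and the real method's separate discriminants for dicodon usage and translation initiation sites are collapsed into a single LDA feature for brevity.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    codon_counts = rng.random((500, 64))            # monocodon usage per fragment
    orf_len = rng.integers(100, 700, 500)[:, None]  # ORF length in bp
    gc = rng.random((500, 1))                       # fragment GC content
    labels = rng.integers(0, 2, 500)                # 1 = ORF encodes a protein

    # Stage 1: compress codon-usage counts with a linear discriminant
    lda = LinearDiscriminantAnalysis(n_components=1)
    disc = lda.fit_transform(codon_counts, labels)

    # Stage 2: neural network combines discriminant score, length and GC
    X = np.hstack([disc, orf_len, gc])
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, labels)
    p_coding = net.predict_proba(X)[:, 1]           # gene-candidate scores
    ```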

  20. Traffic sign recognition based on a context-aware scale-invariant feature transform approach

    Science.gov (United States)

    Yuan, Xue; Hao, Xiaoli; Chen, Houjin; Wei, Xueye

    2013-10-01

    A new context-aware scale-invariant feature transform (CASIFT) approach is proposed, which is designed for use in traffic sign recognition (TSR) systems. The following issues remain in previous works in which SIFT is used for matching or recognition: (1) SIFT is unable to provide color information; (2) SIFT only focuses on local features while ignoring the distribution of global shapes; (3) selecting the template with the maximum number of matching points as the final result is unstable, especially for images with simple patterns; and (4) SIFT is liable to result in errors when different images share the same local features. In order to resolve these problems, a new CASIFT approach is proposed. The contributions of the work are as follows: (1) color angular patterns are used to provide color-distinguishing information; (2) a CASIFT which effectively combines local and global information is proposed; and (3) a method for computing the similarity between two images is proposed, which focuses on the distribution of the matching points, rather than using the traditional SIFT approach of selecting the template with the maximum number of matching points as the final result. The proposed approach is particularly effective in dealing with traffic signs which have rich colors and varied global shape distribution. Experiments are performed to validate the effectiveness of the proposed approach in TSR systems, and the experimental results are satisfying even for images containing traffic signs that have been rotated, damaged, altered in color, have undergone affine transformations, or images which were photographed under different weather or illumination conditions.

  1. A Disciplined Architectural Approach to Scaling Data Analysis for Massive, Scientific Data

    Science.gov (United States)

    Crichton, D. J.; Braverman, A. J.; Cinquini, L.; Turmon, M.; Lee, H.; Law, E.

    2014-12-01

    Data collections across remote sensing and ground-based instruments in astronomy, Earth science, and planetary science are outpacing scientists' ability to analyze them. Furthermore, the distribution, structure, and heterogeneity of the measurements themselves pose challenges that limit the scalability of data analysis using traditional approaches. Methods for developing science data processing pipelines, distributing scientific datasets, and performing analysis will require innovative approaches that integrate cyber-infrastructure, algorithms, and data into more systematic approaches that can more efficiently compute and reduce data, particularly distributed data. This requires the integration of computer science, machine learning, statistics and domain expertise to identify scalable architectures for data analysis. The size of data returned from Earth Science observing satellites and the magnitude of data from climate model output is predicted to grow into the tens of petabytes, challenging current data analysis paradigms. This same kind of growth is present in astronomy and planetary science data. One of the major challenges in data science and related disciplines is defining new approaches to scaling systems and analysis in order to increase scientific productivity and yield. Specific needs include: 1) identification of optimized system architectures for analyzing massive, distributed data sets; 2) algorithms for systematic analysis of massive data sets in distributed environments; and 3) the development of software infrastructures that are capable of performing massive, distributed data analysis across a comprehensive data science framework. NASA/JPL has begun an initiative in data science to address these challenges. Our goal is to evaluate how scientific productivity can be improved through optimized architectural topologies that identify how to deploy and manage the access, distribution, computation, and reduction of massive, distributed data, while

  2. Large-Scale Urban Projects, Production of Space and Neo-liberal Hegemony: A Comparative Study of Izmir

    Directory of Open Access Journals (Sweden)

    Mehmet PENPECİOĞLU

    2013-04-01

    Full Text Available With the rise of neo-liberalism, large-scale urban projects (LDPs) have become a powerful mechanism of urban policy. Creating spaces of neo-liberal urbanization such as central business districts, tourism centers, gated residences and shopping malls, LDPs play a role not only in the reproduction of capital accumulation relations but also in the shift of urban political priorities towards the construction of neo-liberal hegemony. The construction of neo-liberal hegemony, and the role played by LDPs in this process, cannot be investigated solely through the analysis of capital accumulation. For such an investigation, the role of state and civil society actors in LDPs and their collaborative and conflictual relationships should be researched, and their functions in hegemony should be revealed. In the case of Izmir’s two LDPs, namely the New City Center (NCC) and Inciraltı Tourism Center (ITC) projects, this study analyzes the relationship between the production of space and neo-liberal hegemony. In the NCC project, local governments, investors, local capital organizations and professional chambers collaborated and disseminated hegemonic discourse, which provided social support for the project. Through these relationships and discourses, the NCC project has become a hegemonic project for producing space and constructed neo-liberal hegemony over urban political priorities. In contrast to the NCC project, the ITC project saw no collaboration between state and organized civil society actors. The social opposition against the ITC project, initiated by professional chambers, has brought legal action against the ITC development plans in order to prevent their implementation. As a result, the ITC project did not acquire the consent of organized social groups and failed to become a hegemonic project for producing space.

  3. Scaling laws for oxygen transport across the space-filling system of respiratory membranes in the human lung

    Science.gov (United States)

    Hou, Chen

    Space-filling fractal surfaces play a fundamental role in how organisms function at various levels and in how structure determines function at different levels. In this thesis, we develop a quantitative theory of oxygen transport to and across the surface of the highly branched, space-filling system of alveoli, the fundamental gas exchange unit (acinar airways), in the human lung. Oxygen transport in the acinar airways is by diffusion, and we treat the two steps---diffusion through the branched airways, and transfer across the alveolar membranes---as a stationary diffusion-reaction problem, taking into account that there may be steep concentration gradients between the entrance and remote alveoli (screening). We develop a renormalization treatment of this screening effect and derive an analytic formula for the oxygen current across the cumulative alveolar membrane surface, modeled as a fractal, space-filling surface. The formula predicts the current from a minimum of morphological data of the acinus and appropriate values of the transport parameters, through a number of power laws (scaling laws). We find that the lung at rest operates near the borderline between partial screening and no screening; that it switches to no screening under exercise; and that the computed currents agree with measured values within experimental uncertainties. From an analysis of the computed current as a function of membrane permeability, we find that the space-filling structure of the gas exchanger is simultaneously optimal with respect to five criteria. The exchanger (i) generates a maximum oxygen current at minimum permeability; (ii) 'wastes' a minimum of surface area; (iii) maintains a minimum residence time of oxygen in the acinar airways; (iv) has a maximum fault tolerance to loss of permeability; and (v) generates a maximum current increase when switching from rest to exercise.

  4. Integrating macro and micro scale approaches in the agent-based modeling of residential dynamics

    Science.gov (United States)

    Saeedi, Sara

    2018-06-01

    With the advancement of computational modeling and simulation (M&S) methods as well as data collection technologies, urban dynamics modeling has improved substantially over the last several decades. Complex urban dynamics processes are most effectively modeled not at the macro-scale but following a bottom-up approach, by simulating the decisions of individual entities, or residents. Agent-based modeling (ABM) provides the key to a dynamic M&S framework that is able to integrate socioeconomic with environmental models and to operate at both micro and macro geographical scales. In this study, a multi-agent system is proposed to simulate residential dynamics by considering spatiotemporal land use changes. In the proposed ABM, macro-scale land use change prediction is modeled by an Artificial Neural Network (ANN) and deployed as the agent environment, while micro-scale residential dynamics behaviors are autonomously implemented by household agents. These two levels of simulation interact and jointly drive the urbanization process in an urban area of Tehran, Iran. The model simulates the behavior of individual households in finding ideal locations to dwell. The household agents are divided into three main groups based on their income rank, and they are further classified into different categories based on a number of attributes. These attributes determine the households' preferences for finding new dwellings and change with time. The ABM environment is represented by a land-use map in which the properties of the land parcels change dynamically over the simulation time. The outputs of this model are a set of maps showing the pattern of different groups of households in the city. These patterns can be used by city planners to find optimum locations for building new residential units or adding new services to the city. The simulation results show that combining macro- and micro-level simulation can give full play to the potential of the ABM to understand the driving
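
    A household agent's location decision can be captured in a few lines. The class below is an illustrative toy, not the study's implementation: the attribute names, the affordability rule, and the linear utility are all assumptions, and the ANN-driven land-use environment is reduced to a static list of parcels.

    ```python
    class Household:
        """Toy household agent: scores candidate parcels by weighted
        preferences and relocates to the best affordable one."""
        def __init__(self, income_rank, prefs):
            self.income_rank = income_rank   # e.g. 0 = low, 1 = mid, 2 = high
            self.prefs = prefs               # weights over parcel attributes

        def utility(self, parcel):
            return sum(w * parcel[a] for a, w in self.prefs.items())

        def choose(self, parcels):
            # Simple affordability rule: price bounded by income rank
            affordable = [p for p in parcels if p["price"] <= self.income_rank + 1]
            return max(affordable, key=self.utility, default=None)

    # Hypothetical parcels with assumed attributes
    parcels = [{"price": 1, "access": 0.4, "services": 0.7},
               {"price": 2, "access": 0.9, "services": 0.5}]
    agent = Household(1, {"access": 0.6, "services": 0.4})
    home = agent.choose(parcels)
    ```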

  5. Data-Driven Approach for Analyzing Hydrogeology and Groundwater Quality Across Multiple Scales.

    Science.gov (United States)

    Curtis, Zachary K; Li, Shu-Guang; Liao, Hua-Sheng; Lusch, David

    2017-08-29

    Recent trends of assimilating water well records into statewide databases provide a new opportunity for evaluating the spatial dynamics of groundwater quality and quantity. However, these datasets are rarely analyzed rigorously to address larger scientific problems because they are massive and of relatively low quality. We develop an approach for utilizing well databases to analyze physical and geochemical aspects of groundwater systems, and apply it to a multiscale investigation of the sources and dynamics of chloride (Cl⁻) in the near-surface groundwater of the Lower Peninsula of Michigan. Nearly 500,000 static water levels (SWLs) were critically evaluated, extracted, and analyzed to delineate long-term, average groundwater flow patterns using a nonstationary kriging technique at the basin scale (i.e., across the entire peninsula). Two regions identified as major basin-scale discharge zones, the Michigan and Saginaw Lowlands, were further analyzed with regional- and local-scale SWL models. Groundwater valleys ("discharge" zones) and mounds ("recharge" zones) were identified for all models, and the proportions of wells with elevated Cl⁻ concentrations in each zone were calculated, visualized, and compared. Concentrations in discharge zones, where groundwater is expected to flow primarily upwards, are consistently and significantly higher than those in recharge zones. A synoptic sampling campaign in the Michigan Lowlands revealed that concentrations generally increase with depth, a trend noted in previous studies of the Saginaw Lowlands. These strong, consistent SWL and Cl⁻ distribution patterns across multiple scales suggest that a deep source (i.e., Michigan brines) is the primary cause of the elevated chloride concentrations observed in discharge areas across the peninsula. © 2017, National Ground Water Association.
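
    Delineating flow patterns from scattered static water levels is at heart an interpolation problem. The sketch below uses ordinary kriging from the pykrige package as a simplified stand-in for the study's nonstationary kriging, on synthetic well coordinates and levels; groundwater mounds and valleys then correspond to local extrema of the kriged surface.

    ```python
    import numpy as np
    from pykrige.ok import OrdinaryKriging

    rng = np.random.default_rng(7)
    # Synthetic well locations (m) and static water levels (m above datum)
    x = rng.uniform(0, 50_000, 200)
    y = rng.uniform(0, 50_000, 200)
    swl = 200 + 0.001 * x - 0.0005 * y + rng.normal(0, 2, 200)

    # Ordinary kriging with a spherical variogram (simplified assumption;
    # the paper uses a nonstationary kriging technique)
    ok = OrdinaryKriging(x, y, swl, variogram_model="spherical")
    gridx = np.linspace(0, 50_000, 100)
    gridy = np.linspace(0, 50_000, 100)
    surface, variance = ok.execute("grid", gridx, gridy)
    # Mounds ("recharge") and valleys ("discharge") = local extrema of surface
    ```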

  6. Mean-cluster approach indicates cell sorting time scales are determined by collective dynamics

    Science.gov (United States)

    Beatrici, Carine P.; de Almeida, Rita M. C.; Brunnet, Leonardo G.

    2017-03-01

    Cell migration is essential to cell segregation, playing a central role in tissue formation, wound healing, and tumor evolution. Considering random mixtures of two cell types, it is still not clear which cell characteristics define clustering time scales. The mass of diffusing clusters merging with one another is expected to grow as t^(d/(d+2)) when the diffusion constant scales with the inverse of the cluster mass. Cell segregation experiments deviate from that behavior. Explanations for that could arise from specific microscopic mechanisms or from collective effects, typical of active matter. Here we consider a power law connecting diffusion constant and cluster mass to propose an analytic approach to model cell segregation where we explicitly take into account finite-size corrections. The results are compared with active matter model simulations and experiments available in the literature. To investigate the role played by different mechanisms we considered different hypotheses describing cell-cell interaction: differential adhesion hypothesis and different velocities hypothesis. We find that the simulations yield normal diffusion for long time intervals. Analytic and simulation results show that (i) cluster evolution clearly tends to a scaling regime, disrupted only at finite-size limits; (ii) cluster diffusion is greatly enhanced by cell collective behavior, such that for high enough tendency to follow the neighbors, cluster diffusion may become independent of cluster size; (iii) the scaling exponent for cluster growth depends only on the mass-diffusion relation, not on the detailed local segregation mechanism. These results apply for active matter systems in general and, in particular, the mechanisms found underlying the increase in cell sorting speed certainly have deep implications in biological evolution as a selection mechanism.

  7. Solution approach for a large scale personnel transport system for a large company in Latin America

    Energy Technology Data Exchange (ETDEWEB)

    Garzón-Garnica, Eduardo-Arturo; Caballero-Morales, Santiago-Omar; Martínez-Flores, José-Luis

    2017-07-01

    The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico, where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: This system was modelled as a VRP model considering the use of real-world transit times and the fact that routes start at the farthest point from the destination center. Experiments were performed on different-sized sets of service points. As the size of the instances was increased, the performance of the heuristic method was assessed in comparison with the results of an exact algorithm, with the two remaining very close. When the size of the instance was full-scale and the exact algorithm took too much time to solve the problem, the heuristic algorithm provided a feasible solution. Supported by the validation with smaller-scale instances, where the difference between the two solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of the cities in Latin America is unlike that of other regions in the world. The general layout of the large cities in this region includes a small town center, usually antique, and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. The use of a heuristic VRP combined with the results of an exact VRP allowed an improved routing plan, specific to the requirements of the region, to be obtained.

  8. Solution approach for a large scale personnel transport system for a large company in Latin America

    International Nuclear Information System (INIS)

    Garzón-Garnica, Eduardo-Arturo; Caballero-Morales, Santiago-Omar; Martínez-Flores, José-Luis

    2017-01-01

    The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico, where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: This system was modelled as a VRP model considering the use of real-world transit times and the fact that routes start at the farthest point from the destination center. Experiments were performed on different-sized sets of service points. As the size of the instances was increased, the performance of the heuristic method was assessed in comparison with the results of an exact algorithm, with the two remaining very close. When the size of the instance was full-scale and the exact algorithm took too much time to solve the problem, the heuristic algorithm provided a feasible solution. Supported by the validation with smaller-scale instances, where the difference between the two solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of the cities in Latin America is unlike that of other regions in the world. The general layout of the large cities in this region includes a small town center, usually antique, and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. The use of a heuristic VRP combined with the results of an exact VRP allowed an improved routing plan, specific to the requirements of the region, to be obtained.

  9. Solution approach for a large scale personnel transport system for a large company in Latin America

    Directory of Open Access Journals (Sweden)

    Eduardo-Arturo Garzón-Garnica

    2017-10-01

    Full Text Available Purpose: The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico, where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: This system was modelled as a VRP model considering the use of real-world transit times and the fact that routes start at the farthest point from the destination center. Experiments were performed on different-sized sets of service points. As the size of the instances was increased, the performance of the heuristic method was assessed in comparison with the results of an exact algorithm, with the two remaining very close. When the size of the instance was full-scale and the exact algorithm took too much time to solve the problem, the heuristic algorithm provided a feasible solution. Supported by the validation with smaller-scale instances, where the difference between the two solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of the cities in Latin America is unlike that of other regions in the world. The general layout of the large cities in this region includes a small town center, usually antique, and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. The use of a heuristic VRP combined with the results of an exact VRP allowed an improved routing plan, specific to the requirements of the region, to be obtained.
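
    As an illustration of the kind of constructive heuristic such systems build on, the sketch below grows a single route with the nearest-neighbor rule, starting from the point farthest from the destination center as the paper's routes do. It is a toy stand-in for the authors' heuristic VRP: Euclidean distance replaces real-world transit times, and capacity and multi-route logic are omitted.

    ```python
    import math

    def nearest_neighbor_route(points, start):
        """Build one route by repeatedly visiting the nearest unvisited
        point. A toy construction heuristic, not the authors' solver."""
        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])
        route, remaining = [start], set(points) - {start}
        while remaining:
            nxt = min(remaining, key=lambda p: dist(route[-1], p))
            route.append(nxt)
            remaining.remove(nxt)
        return route

    center = (0.0, 0.0)                  # destination center
    stops = [(3, 4), (10, 2), (6, 9), (1, 1)]
    farthest = max(stops, key=lambda p: math.hypot(*p))  # start farthest out
    route = nearest_neighbor_route(stops, farthest) + [center]
    ```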

  10. Real-space grids and the Octopus code as tools for the development of new simulation approaches for electronic systems

    Science.gov (United States)

    Andrade, Xavier; Strubbe, David; De Giovannini, Umberto; Larsen, Ask Hjorth; Oliveira, Micael J. T.; Alberdi-Rodriguez, Joseba; Varas, Alejandro; Theophilou, Iris; Helbig, Nicole; Verstraete, Matthieu J.; Stella, Lorenzo; Nogueira, Fernando; Aspuru-Guzik, Alán; Castro, Alberto; Marques, Miguel A. L.; Rubio, Angel

    Real-space grids are a powerful alternative for the simulation of electronic systems. One of the main advantages of the approach is the flexibility and simplicity of working directly in real space, where the different fields are discretized on a grid, combined with competitive numerical performance and great potential for parallelization. These properties constitute a great advantage when implementing and testing new physical models. Based on our experience with the Octopus code, in this article we discuss how the real-space approach has allowed for the recent development of new ideas for the simulation of electronic systems. Among these applications are approaches to calculate response properties, modeling of photoemission, optimal control of quantum systems, simulation of plasmonic systems, and the exact solution of the Schrödinger equation for low-dimensionality systems.
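
    The essence of the real-space approach, discretizing operators on a grid, fits in a short example. The sketch below builds a one-dimensional finite-difference Hamiltonian for a harmonic potential and diagonalizes it; this is a generic toy in the spirit of real-space methods, not Octopus code, which works on 3-D grids with far more sophisticated machinery.

    ```python
    import numpy as np

    n, L = 400, 20.0
    x = np.linspace(-L / 2, L / 2, n)
    h = x[1] - x[0]

    # Second-order finite-difference Laplacian on the real-space grid
    lap = (np.diag(np.full(n - 1, 1.0), -1)
           - 2.0 * np.eye(n)
           + np.diag(np.full(n - 1, 1.0), 1)) / h**2
    V = 0.5 * x**2                      # harmonic potential, atomic units
    H = -0.5 * lap + np.diag(V)         # Hamiltonian: kinetic + potential

    energies, states = np.linalg.eigh(H)
    print(energies[:3])                 # ~0.5, 1.5, 2.5 in atomic units
    ```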

  11. Real-space local polynomial basis for solid-state electronic-structure calculations: A finite-element approach

    International Nuclear Information System (INIS)

    Pask, J.E.; Klein, B.M.; Fong, C.Y.; Sterne, P.A.

    1999-01-01

    We present an approach to solid-state electronic-structure calculations based on the finite-element method. In this method, the basis functions are strictly local, piecewise polynomials. Because the basis is composed of polynomials, the method is completely general and its convergence can be controlled systematically. Because the basis functions are strictly local in real space, the method allows for variable resolution in real space; produces sparse, structured matrices, enabling the effective use of iterative solution methods; and is well suited to parallel implementation. The method thus combines the significant advantages of both real-space-grid and basis-oriented approaches and so promises to be particularly well suited for large, accurate ab initio calculations. We develop the theory of our approach in detail, discuss advantages and disadvantages, and report initial results, including electronic band structures and details of the convergence of the method. © 1999 The American Physical Society

  12. An Interdisciplinary Approach to Developing Renewable Energy Mixes at the Community Scale

    Science.gov (United States)

    Gormally, Alexandra M.; Whyatt, James D.; Timmis, Roger J.; Pooley, Colin G.

    2013-04-01

    Renewable energy has risen on the global political agenda due to concerns over climate change and energy security. The European Union (EU) currently has a target of 20% renewable energy by the year 2020, and there is increasing focus on the ways in which these targets can be achieved. Here we focus on the UK context, which could be considered to be lagging behind other EU countries in terms of targets and implementation. The UK has a lower overall target of 15% renewable energy by 2020 and in 2011 reached only 3.8% (DUKES, 2012), one of the lowest progressions compared to other EU Member States (European Commission, 2012). The reticence of the UK to reach such targets could in part be due to its dependence on its current energy mix and a highly centralised electricity grid system, which does not lend itself easily to the adoption of renewable technologies. Additionally, increasing levels of demand and the need to raise energy awareness are key concerns in terms of achieving energy security in the UK. There is also growing concern from the public about increasing fuel and energy bills. One possible solution to some of these problems could be the adoption of small-scale distributed renewable schemes implemented at the community scale with local ownership or involvement, for example through energy co-operatives. The notion of the energy co-operative is well understood elsewhere in Europe but unfamiliar to many UK residents due to the country's centralised approach to energy provision. There are many benefits associated with engaging in distributed renewable energy systems. In addition to financial benefits, participation may raise energy awareness and can lead to positive responses towards renewable technologies. Here we briefly explore how a mix of small-scale renewables, including wind, hydro-power and solar PV, have been implemented and managed by a small island community in the Scottish Hebrides to achieve over 90% of their electricity needs from renewable

  13. A Self-Organizing Spatial Clustering Approach to Support Large-Scale Network RTK Systems.

    Science.gov (United States)

    Shen, Lili; Guo, Jiming; Wang, Lei

    2018-06-06

    The network real-time kinematic (RTK) technique can provide centimeter-level real-time positioning solutions and plays a key role in geo-spatial infrastructure. With ever-increasing popularity, network RTK systems will face issues in supporting large numbers of concurrent users. In the past, high-precision positioning services were oriented towards professionals and only supported a few concurrent users. Currently, precise positioning provides a spatial foundation for artificial intelligence (AI), and countless smart devices (autonomous cars, unmanned aerial vehicles (UAVs), robotic equipment, etc.) require precise positioning services. The development of approaches to support large-scale network RTK systems is therefore urgent. In this study, we propose a self-organizing spatial clustering (SOSC) approach that automatically clusters online users to reduce the computational load on the network RTK system's server side. The experimental results indicate that both the SOSC algorithm and the grid algorithm can reduce the computational load efficiently, while the SOSC algorithm gives a more elastic and adaptive clustering solution across different datasets: SOSC determines the cluster number and the mean distance to cluster center (MDTCC) from the data set, whereas these are predefined in the grid approaches. The side effects of the clustering algorithms on the user side are analyzed with real global navigation satellite system (GNSS) data sets. The experimental results indicate that 10 km can safely be used as the cluster radius threshold for the SOSC algorithm without significantly reducing the positioning precision and reliability on the user side.
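
    The abstract does not spell out the SOSC update rules, so the following is only a hedged, leader-style sketch of self-organizing spatial clustering: each online user joins the nearest existing cluster whose center lies within the 10 km radius threshold reported above, otherwise a new cluster is opened, and each center tracks the running mean of its members.

```python
import numpy as np

# Hedged sketch of self-organizing spatial clustering (the paper's
# exact SOSC algorithm may differ). Planar coordinates in km are used
# for simplicity; a production system would use geodetic distances.
RADIUS_KM = 10.0  # cluster radius threshold reported in the abstract

def cluster_users(positions_km):
    """positions_km: (n, 2) array of planar user coordinates in km."""
    centers, members = [], []
    for p in positions_km:
        if centers:
            d = np.linalg.norm(np.asarray(centers) - p, axis=1)
            i = int(np.argmin(d))
            if d[i] <= RADIUS_KM:
                members[i].append(p)
                centers[i] = np.mean(members[i], axis=0)  # update center
                continue
        centers.append(p.copy())      # open a new cluster
        members.append([p])
    return np.asarray(centers), members

rng = np.random.default_rng(0)
users = rng.uniform(0, 50, size=(200, 2))  # synthetic 50 km x 50 km region
centers, members = cluster_users(users)
print(len(centers), "clusters for", len(users), "users")
```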

  14. A Large-Scale Design Integration Approach Developed in Conjunction with the Ares Launch Vehicle Program

    Science.gov (United States)

    Redmon, John W.; Shirley, Michael C.; Kinard, Paul S.

    2012-01-01

    This paper presents a method for performing large-scale design integration, taking a classical 2D drawing envelope-and-interface approach and applying it to modern three-dimensional computer-aided design (3D CAD) systems. Today, the paradigm often used when performing design integration with 3D models involves a digital mockup of an overall vehicle in the form of a massive, fully detailed CAD assembly, thereby adding unnecessary burden and overhead to design and product data management processes. While fully detailed data may yield a broad depth of design detail, pertinent integration features are often obscured beneath the excessive amount of information, making them difficult to discern. In contrast, the envelope-and-interface method reduces both the amount and the complexity of the information necessary for design integration while yielding significant savings in time and effort when applied to today's complex design integration projects. This approach, combining classical and modern methods, proved advantageous during the complex design integration activities of the Ares I vehicle. Downstream processes that benefited from this approach, through reduced development and design cycle times, include the creation of analysis models for the aerodynamics discipline, vehicle-to-ground interface development, and documentation development for the vehicle assembly.

  15. An objective and parsimonious approach for classifying natural flow regimes at a continental scale

    Science.gov (United States)

    Archfield, S. A.; Kennen, J.; Carlisle, D.; Wolock, D.

    2013-12-01

    Hydroecological stream classification--the process of grouping streams by similar hydrologic responses and, thereby, similar aquatic habitat--has been widely accepted and is often one of the first steps towards developing ecological flow targets. Despite its importance, the last national classification of streamgauges was completed about 20 years ago. A new classification of 1,534 streamgauges in the contiguous United States is presented, using a novel and parsimonious approach to understanding similarity in ecological streamflow response. This new classification approach uses seven fundamental daily streamflow statistics (FDSS) rather than winnowing down an uncorrelated subset from the 200 or more ecologically relevant streamflow statistics (ERSS) commonly used in hydroecological classification studies. The results of this investigation demonstrate that the distributions of 33 tested ERSS are consistently different among the classes derived from the seven FDSS. It is further shown that classification based solely on the 33 ERSS generally does a poorer job of grouping similar streamgauges than the classification based on the seven FDSS. The new approach has the additional advantages of overcoming some of the subjectivity associated with the selection of classification variables and of providing a set of robust continental-scale classes of US streamgauges.
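
    As a hedged illustration of classifying gauges from a handful of daily streamflow statistics (the seven FDSS are not enumerated in the abstract, so the four statistics below are illustrative stand-ins, not the paper's variables), one can summarize each gauge's daily record by a short feature vector and cluster the gauges hierarchically:

```python
import numpy as np
from scipy import stats
from scipy.cluster.hierarchy import linkage, fcluster

def daily_flow_stats(q):
    """Summarize one gauge's daily discharge record (illustrative only)."""
    lq = np.log(q + 1e-6)                   # log flows tame the skew
    return np.array([
        np.mean(lq),                        # magnitude
        np.std(lq),                         # variability
        stats.skew(lq),                     # skewness
        np.corrcoef(lq[:-1], lq[1:])[0, 1], # day-to-day persistence
    ])

# Synthetic stand-in for ~10 years of daily flows at 50 gauges.
rng = np.random.default_rng(1)
gauges = [np.exp(rng.normal(loc, 0.5, 3650))
          for loc in rng.uniform(0.5, 3.0, 50)]
X = np.array([daily_flow_stats(q) for q in gauges])
X = (X - X.mean(0)) / X.std(0)              # standardize before clustering

Z = linkage(X, method="ward")               # hierarchical clustering
classes = fcluster(Z, t=5, criterion="maxclust")
print(np.bincount(classes)[1:])             # gauges per class
```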

  16. EMAPS: An Efficient Multiscale Approach to Plasma Systems with Non-MHD Scale Effects

    Energy Technology Data Exchange (ETDEWEB)

    Omelchenko, Yuri A. [Trinum Research, Inc., San Diego, CA (United States)

    2016-08-08

    Global interactions of energetic ions with magnetoplasmas and neutral gases lie at the core of many space and laboratory plasma phenomena, ranging from solar-wind entry into and transport within planetary magnetospheres and exospheres, to fast-ion-driven instabilities in fusion devices, to astrophysics-in-the-lab experiments. The ability of computational models to properly account for the physical effects that underlie such interactions, namely ion kinetic, ion cyclotron, Hall, collisional, and ionization processes, is important for the success and planning of experimental research in plasma physics. Understanding the physics of energetic ions, in particular their nonlinear resonant interactions with Alfvén waves, is central to improving the heating performance of magnetically confined plasmas for future energy generation. Fluid models are not adequate for high-beta plasmas because they cannot fully capture ion kinetic and cyclotron physics (e.g., ion behavior in the presence of magnetic nulls, shock structures, plasma interpenetration, etc.). Recent results from global reconnection simulations show that even in an MHD-like regime there may be significant differences between kinetic and MHD simulations. Therefore, kinetic modeling becomes essential for meeting modern-day challenges in plasma physics. The hybrid approximation is intermediate between the fluid and fully kinetic approximations: it eliminates light waves, removes the electron inertial temporal and spatial scales from the problem, and enables full-orbit ion kinetics. As a result, hybrid codes have become effective tools for exploring ion-scale phenomena associated with ion beams, shocks, reconnection, and turbulence that control the large-scale behavior of laboratory and space magnetoplasmas. A number of numerical issues, however, make three-dimensional (3D) large-scale hybrid simulations of inhomogeneous magnetized plasmas prohibitively expensive or even impossible. To resolve these difficulties
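
    A core ingredient of the full-orbit ion kinetics that hybrid codes retain is the standard Boris integrator. The sketch below (generic, not EMAPS source code) advances a single ion in normalized units with q/m = 1 and demonstrates that the magnetic rotation conserves speed exactly.

```python
import numpy as np

def boris_push(v, E, B, dt):
    """Advance one ion velocity v by dt in fields E, B (all 3-vectors),
    normalized so q/m = 1, using the standard Boris scheme."""
    v_minus = v + 0.5 * dt * E                 # first half acceleration
    t = 0.5 * dt * B                           # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)    # magnetic rotation
    return v_plus + 0.5 * dt * E               # second half acceleration

# Gyration test in a uniform field B = z_hat with E = 0: the ion
# circles in the x-y plane and its speed is conserved to round-off.
v = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
E = np.zeros(3)
for _ in range(1000):
    v = boris_push(v, E, B, dt=0.05)
print(np.linalg.norm(v))  # stays ~1.0 (energy-conserving rotation)
```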

  17. A computationally inexpensive CFD approach for small-scale biomass burners equipped with enhanced air staging

    International Nuclear Information System (INIS)

    Buchmayr, M.; Gruber, J.; Hargassner, M.; Hochenauer, C.

    2016-01-01

    Highlights:
    • Time-efficient CFD model to predict biomass boiler performance.
    • Boundary conditions for numerical modeling are provided by measurements.
    • Tars in the product from primary combustion were considered.
    • Simulation results were validated by experiments on a real-scale reactor.
    • Very good agreement between experimental and simulation results.

    Abstract: Computational Fluid Dynamics (CFD) is an emerging technique for the optimization and design of biomass combustion systems. So far, an accurate simulation of biomass combustion can only be achieved with high computational effort. This work presents an accurate, time-efficient CFD approach for small-scale biomass combustion systems equipped with enhanced air staging. The model can handle the large amount of biomass tars in the primary combustion product at very low primary air ratios. Gas-phase combustion in the freeboard was modeled with the Steady Flamelet Model (SFM) together with a detailed heptane combustion mechanism. The advantage of the SFM is that complex combustion chemistry can be taken into account at low computational cost, because only two additional transport equations have to be solved to describe the chemistry in the reacting flow. Boundary conditions for the primary combustion product composition were obtained from fuel-bed experiments, and the fuel-bed data were used as the fuel inlet boundary condition for the gas-phase combustion model. The numerical and experimental investigations were performed for different operating conditions and varying wood-chip moisture on a specially designed real-scale reactor. The numerical predictions were validated against the experimental results, and very good agreement was found. With the presented approach, accurate results can be obtained within 24 h on a standard six-core Central Processing Unit (CPU). Case studies, e.g. for improving the combustion geometry, can therefore be carried out effectively thanks to the short calculation time.
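
    For reference, the "two additional transport equations" of the Steady Flamelet Model are, in the standard textbook form used by common CFD codes (the paper's exact closure terms and constants may differ), the Favre-averaged mixture fraction and its variance:

```latex
% Standard steady-flamelet transport equations (textbook form; the
% paper's closure terms and model constants may differ):
\begin{align}
  \frac{\partial}{\partial t}\big(\bar{\rho}\,\tilde{Z}\big)
    + \nabla\cdot\big(\bar{\rho}\,\tilde{\mathbf{u}}\,\tilde{Z}\big)
  &= \nabla\cdot\Big(\frac{\mu_t}{\sigma_t}\,\nabla\tilde{Z}\Big), \\
  \frac{\partial}{\partial t}\big(\bar{\rho}\,\widetilde{Z''^2}\big)
    + \nabla\cdot\big(\bar{\rho}\,\tilde{\mathbf{u}}\,\widetilde{Z''^2}\big)
  &= \nabla\cdot\Big(\frac{\mu_t}{\sigma_t}\,\nabla\widetilde{Z''^2}\Big)
    + C_g\,\mu_t\,\big|\nabla\tilde{Z}\big|^2
    - C_d\,\bar{\rho}\,\frac{\varepsilon}{k}\,\widetilde{Z''^2},
\end{align}
```

    where Z is the mixture fraction, \mu_t the turbulent viscosity, \sigma_t a turbulent Prandtl number, and C_g, C_d model constants. The local thermochemical state (species, temperature, density) is then read from a pre-tabulated flamelet library parameterized by the mean mixture fraction, its variance, and the scalar dissipation rate, which is why the reacting-flow chemistry costs only these two extra equations.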