WorldWideScience

Sample records for efficient space-time geostatistical

  1. Application of a computationally efficient geostatistical approach to characterizing variably spaced water-table data

    International Nuclear Information System (INIS)

    Quinn, J.J.

    1996-01-01

    Geostatistical analysis of hydraulic head data is useful in producing unbiased contour plots of head estimates and relative errors. However, at most sites being characterized, monitoring wells are generally present at different densities, with clusters of wells in some areas and few wells elsewhere. The problem that arises when kriging data at different densities is in achieving adequate resolution of the grid while maintaining computational efficiency and working within software limitations. For the site considered, 113 data points were available over a 14-mi² study area, including 57 monitoring wells within an area of concern of 1.5 mi². Variogram analyses of the data indicate a linear model with a negligible nugget effect. The geostatistical package used in the study allows a maximum grid of 100 by 100 cells. Two-dimensional kriging was performed for the entire study area with a 500-ft grid spacing, while the smaller zone was modeled separately with a 100-ft spacing. In this manner, grid cells for the dense area and the sparse area remained small relative to the well separation distances, and the maximum dimensions of the program were not exceeded. The spatial head results for the detailed zone were then nested into the regional output by use of a graphical, object-oriented database that performed the contouring of the geostatistical output. This study benefitted from the two-scale approach and from very fine geostatistical grid spacings relative to typical data separation distances. The combining of the sparse, regional results with those from the finer-resolution area of concern yielded contours that honored the actual data at every measurement location. The method applied in this study can also be used to generate reproducible, unbiased representations of other types of spatial data.

  2. A space and time scale-dependent nonlinear geostatistical approach for downscaling daily precipitation and temperature

    KAUST Repository

    Jha, Sanjeev Kumar

    2015-07-21

    A geostatistical framework is proposed to downscale daily precipitation and temperature. The methodology is based on multiple-point geostatistics (MPS), where a multivariate training image is used to represent the spatial relationship between daily precipitation and daily temperature over several years. Here, the training image consists of daily rainfall and temperature outputs from the Weather Research and Forecasting (WRF) model at 50 km and 10 km resolution for a twenty-year period ranging from 1985 to 2004. The data are used to predict downscaled climate variables for the year 2005. The result, for each downscaled pixel, is a daily time series of precipitation and temperature that are spatially dependent. Comparison of predicted precipitation and temperature against a reference dataset indicates that both the seasonal average climate response and the temporal variability are well reproduced. The explicit inclusion of time dependence is explored by considering the climate properties of the previous day as an additional variable. Comparison of simulations with and without inclusion of time dependence shows that the temporal dependence only slightly improves the daily prediction because the temporal variability is already well represented in the conditioning data. Overall, the study shows that the multiple-point geostatistics approach is an efficient tool to be used for statistical downscaling to obtain local scale estimates of precipitation and temperature from General Circulation Models. This article is protected by copyright. All rights reserved.
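
    A crude stand-in for the training-image idea, assuming nothing about the actual MPS (direct-sampling) algorithm used in the study: for each coarse value to be downscaled, draw a fine-scale value from the training pairs whose coarse value is most similar. The function name and `k` are hypothetical:

```python
import numpy as np

def analogue_downscale(train_coarse, train_fine, target_coarse, k=5, seed=0):
    """Toy analogue resampling: the (coarse, fine) training pairs play the
    role of the WRF 50 km / 10 km training image; each target coarse value
    is downscaled by sampling among its k nearest coarse analogues."""
    rng = np.random.default_rng(seed)
    out = np.empty(len(target_coarse))
    for i, c in enumerate(target_coarse):
        nearest = np.argsort(np.abs(train_coarse - c))[:k]
        out[i] = train_fine[nearest[rng.integers(len(nearest))]]
    return out
```

    Real MPS additionally conditions on the spatial neighbourhood pattern (and, in the paper's extension, on the previous day), which this one-value analogue lookup omits.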

  3. Quantifying aggregated uncertainty in Plasmodium falciparum malaria prevalence and populations at risk via efficient space-time geostatistical joint simulation.

    Science.gov (United States)

    Gething, Peter W; Patil, Anand P; Hay, Simon I

    2010-04-01

    Risk maps estimating the spatial distribution of infectious diseases are required to guide public health policy from local to global scales. The advent of model-based geostatistics (MBG) has allowed these maps to be generated in a formal statistical framework, providing robust metrics of map uncertainty that enhance their utility for decision-makers. In many settings, decision-makers require spatially aggregated measures over large regions such as the mean prevalence within a country or administrative region, or national populations living under different levels of risk. Existing MBG mapping approaches provide suitable metrics of local uncertainty--the fidelity of predictions at each mapped pixel--but have not been adapted for measuring uncertainty over large areas, due largely to a series of fundamental computational constraints. Here the authors present a new efficient approximating algorithm that can generate for the first time the necessary joint simulation of prevalence values across the very large prediction spaces needed for global scale mapping. This new approach is implemented in conjunction with an established model for P. falciparum allowing robust estimates of mean prevalence at any specified level of spatial aggregation. The model is used to provide estimates of national populations at risk under three policy-relevant prevalence thresholds, along with accompanying model-based measures of uncertainty. By overcoming previously unchallenged computational barriers, this study illustrates how MBG approaches, already at the forefront of infectious disease mapping, can be extended to provide large-scale aggregate measures appropriate for decision-makers.
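
    The distinction between per-pixel and aggregated uncertainty can be illustrated with a small joint Gaussian simulation. The dense Cholesky factorization below is exactly the step whose cost the paper's approximating algorithm avoids at global scale; the covariance model and parameters are illustrative:

```python
import numpy as np

def joint_gaussian_sims(grid_xy, n_sims, sill=1.0, corr_range=5.0, seed=0):
    """Joint (unconditional) Gaussian simulation on a small grid via a dense
    Cholesky factor of an exponential covariance matrix."""
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(grid_xy[:, None] - grid_xy[None, :], axis=-1)
    C = sill * np.exp(-d / corr_range) + 1e-9 * np.eye(len(grid_xy))
    return np.linalg.cholesky(C) @ rng.standard_normal((len(grid_xy), n_sims))

# Aggregated uncertainty is the spread of the regional mean across joint
# realizations -- something per-pixel (marginal) variances alone cannot give,
# because it depends on the spatial correlation between pixels.
grid = np.array([[i, j] for i in range(6) for j in range(6)], float)
sims = joint_gaussian_sims(grid, 4000)
regional_mean_sd = sims.mean(axis=0).std()
```

    With positively correlated pixels, the standard deviation of the regional mean is much larger than the independent-pixel value (here, sqrt(sill / 36)), which is why joint simulation rather than pixel-wise prediction is needed for populations-at-risk estimates.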

  4. Geostatistical Spatio-Time model of crime in el Salvador: Structural and Predictive Analysis

    Directory of Open Access Journals (Sweden)

    Welman Rosa Alvarado

    2011-07-01

    Full Text Available Today, studying geospatial and spatio-temporal phenomena requires statistical tools that can analyze dependence across space, time, and their interactions. The science that studies such problems is geostatistics, whose goal is the prediction of spatial phenomena; it is considered the basis for modelling phenomena that involve interactions between space and time. In the past 10 years, geostatistics has seen great development in areas such as geology, soil science, remote sensing, epidemiology, agriculture, ecology, and economics. In this research, geostatistics is applied to build a predictive map of crime in El Salvador; the joint variability of space and time is studied to generate crime scenarios: crime hot spots are determined and groups vulnerable to crime are identified, in order to improve policy decisions and inform decision makers about insecurity in the country.

  5. Improving imperfect data from health management information systems in Africa using space-time geostatistics.

    Directory of Open Access Journals (Sweden)

    Peter W Gething

    2006-06-01

    Full Text Available Reliable and timely information on disease-specific treatment burdens within a health system is critical for the planning and monitoring of service provision. Health management information systems (HMIS) exist to address this need at national scales across Africa but are failing to deliver adequate data because of widespread underreporting by health facilities. Faced with this inadequacy, vital public health decisions often rely on crudely adjusted regional and national estimates of treatment burdens. This study has taken the example of presumed malaria in outpatients within the largely incomplete Kenyan HMIS database and has defined a geostatistical modelling framework that can predict values for all data that are missing through space and time. The resulting complete set can then be used to define treatment burdens for presumed malaria at any level of spatial and temporal aggregation. Validation of the model has shown that these burdens are quantified to an acceptable level of accuracy at the district, provincial, and national scale. The modelling framework presented here provides, to our knowledge for the first time, reliable information from imperfect HMIS data to support evidence-based decision-making at national and sub-national levels.

  6. Geostatistical models for air pollution

    International Nuclear Information System (INIS)

    Pereira, M.J.; Soares, A.; Almeida, J.; Branquinho, C.

    2000-01-01

    The objective of this paper is to present geostatistical models applied to the spatial characterisation of air pollution phenomena. A concise presentation of the geostatistical methodologies is illustrated with practical examples. The case study was conducted in an underground copper-mine located on the southern of Portugal, where a biomonitoring program using lichens has been implemented. Given the characteristics of lichens as indicators of air pollution it was possible to gather a great amount of data in space, which enabled the development and application of geostatistical methodologies. The advantages of using geostatistical models compared with deterministic models, as environmental control tools, are highlighted. (author)

  7. Application of geostatistics in Beach Placer

    International Nuclear Information System (INIS)

    Sundar, G.

    2016-01-01

    The goal of geostatistics is the prediction of the possible spatial distribution of a property. Application of geostatistics has gained significance in the fields of exploration, evaluation and mining. In the case of beach and inland placer sands exploration, geostatistics can be used to optimise the drill hole spacing, estimate resources of the total heavy minerals (THM), perform estimation on different grid patterns and derive grade-tonnage curves. Steps involved in a geostatistical study are exploratory data analysis, creation of the experimental variogram, variogram model fitting, kriging and cross validation. Basic tools in geostatistics are the variogram and kriging. Characteristics of a variogram are sill, range and nugget. Variogram model fitting is necessary prior to kriging. Commonly used variogram models are spherical, exponential and Gaussian.
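
    The basic tools named above can be sketched as follows: `experimental_variogram` implements the classical (Matheron) estimator, and `spherical` evaluates one of the listed models. The binning choices are illustrative assumptions:

```python
import numpy as np

def experimental_variogram(xy, z, n_lags=10, max_dist=None):
    """Classical estimator: average of 0.5 * (z_i - z_j)^2 per lag-distance bin."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    sv = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)          # each pair once
    d, sv = d[iu], sv[iu]
    if max_dist is None:
        max_dist = d.max() / 2.0
    edges = np.linspace(0.0, max_dist, n_lags + 1)
    lags, gam = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (d > lo) & (d <= hi)
        if m.any():
            lags.append(d[m].mean())
            gam.append(sv[m].mean())
    return np.array(lags), np.array(gam)

def spherical(h, nugget, sill, rng_):
    """Spherical model: rises from the nugget at h = 0 and flattens at the
    sill once the separation h reaches the range rng_."""
    h = np.asarray(h, float)
    g = nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
    return np.where(h >= rng_, sill, g)
```

    A model (spherical, exponential or Gaussian) is fitted to the experimental points, and the fitted parameters then feed the kriging system.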

  8. Geostatistical Characteristic of Space -Time Variation in Underground Water Selected Quality Parameters in Klodzko Water Intake Area (SW Part of Poland)

    Science.gov (United States)

    Namysłowska-Wilczyńska, Barbara

    2016-04-01

    These data were subjected to spatial analyses using statistical and geostatistical methods. The evaluation of basic statistics of the investigated quality parameters, including their histograms of distributions, scatter diagrams between these parameters and also correlation coefficients r were presented in this article. The directional semivariogram function and the ordinary (block) kriging procedure were used to build the 3D geostatistical model. The geostatistical parameters of the theoretical models of directional semivariograms of the studied water quality parameters, calculated along the time interval and along the wells depth (taking into account the terrain elevation), were used in the ordinary (block) kriging estimation. The obtained estimation results, i.e. block diagrams, made it possible to determine the levels of increased values Z* of the studied underground water quality parameters. Analysis of the variability in the selected quality parameters of underground water for the analyzed area in the Klodzko water intake was enriched by referring to the results of geostatistical studies carried out for underground water quality parameters and for treated water in the Klodzko water supply system (iron Fe, manganese Mn, ammonium ion NH4+ contents), discussed in earlier works. Spatial and time variation in the latter-mentioned parameters was analysed on the basis of the data (2007÷2011, 2008÷2011). Generally, the behaviour of the underground water quality parameters has been found to vary in space and time. Thanks to the spatial analyses of the variation in the quality parameters in the Kłodzko underground water intake area some regularities (trends) in the variation in water quality have been identified.

  9. A practical primer on geostatistics

    Science.gov (United States)

    Olea, Ricardo A.

    2009-01-01

    The Challenge—Most geological phenomena are extraordinarily complex in their interrelationships and vast in their geographical extension. Ordinarily, engineers and geoscientists are faced with corporate or scientific requirements to properly prepare geological models with measurements involving a small fraction of the entire area or volume of interest. Exact description of a system such as an oil reservoir is neither feasible nor economically possible. The results are necessarily uncertain. Note that the uncertainty is not an intrinsic property of the systems; it is the result of incomplete knowledge by the observer. The Aim of Geostatistics—The main objective of geostatistics is the characterization of spatial systems that are incompletely known, systems that are common in geology. A key difference from classical statistics is that geostatistics uses the sampling location of every measurement. Unless the measurements show spatial correlation, the application of geostatistics is pointless. Ordinarily the need for additional knowledge goes beyond a few points, which explains the display of results graphically as fishnet plots, block diagrams, and maps. Geostatistical Methods—Geostatistics is a collection of numerical techniques for the characterization of spatial attributes using primarily two tools: probabilistic models, which are used for spatial data in a manner similar to the way in which time-series analysis characterizes temporal data, or pattern recognition techniques. The probabilistic models are used as a way to handle uncertainty in results away from sampling locations, making a radical departure from alternative approaches like inverse distance estimation methods. Differences with Time Series—On dealing with time-series analysis, users frequently concentrate their attention on extrapolations for making forecasts. Although users of geostatistics may be interested in extrapolation, the methods work at their best interpolating. This simple difference

  10. Saving time in a space-efficient simulation algorithm

    NARCIS (Netherlands)

    Markovski, J.

    2011-01-01

    We present an efficient algorithm for computing the simulation preorder and equivalence for labeled transition systems. The algorithm improves an existing space-efficient algorithm and improves its time complexity by employing a variant of the stability condition and exploiting properties of the

  11. Geostatistical evaluation of travel time uncertainties

    International Nuclear Information System (INIS)

    Devary, J.L.

    1983-08-01

    Data on potentiometric head and hydraulic conductivity, gathered from the Wolfcamp Formation of the Permian System, have exhibited tremendous spatial variability as a result of heterogeneities in the media and the presence of petroleum and natural gas deposits. Geostatistical data analysis and error propagation techniques (kriging and conditional simulation) were applied to determine the effect of potentiometric head uncertainties on radionuclide travel paths and travel times through the Wolfcamp Formation. Block-average kriging was utilized to remove measurement error from potentiometric head data. The travel time calculations have been enhanced by the use of an inverse technique to determine the relative hydraulic conductivity along travel paths. In this way, the spatial variability of the hydraulic conductivity corresponding to streamline convergence and divergence may be included in the analysis. 22 references, 11 figures, 1 table
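
    Conditional simulation, used above for error propagation, can be sketched by kriging the data and then adding the fluctuation of an unconditional realization that kriging does not explain (z* + s - s*). The exponential covariance and its parameters are illustrative assumptions, not the study's model:

```python
import numpy as np

def simple_krige(xy, z, targets, sill=1.0, rng_=3.0):
    """Simple kriging (known zero mean) with an exponential covariance."""
    C = sill * np.exp(-np.linalg.norm(xy[:, None] - xy[None, :], axis=-1) / rng_)
    c0 = sill * np.exp(-np.linalg.norm(targets[:, None] - xy[None, :], axis=-1) / rng_)
    return c0 @ np.linalg.solve(C, z)

def conditional_sim(xy, z, targets, seed=0, sill=1.0, rng_=3.0):
    """One conditional realization: krige the data, draw one unconditional
    realization jointly at the data and target points, and add the part of
    the realization that kriging cannot reproduce."""
    rng = np.random.default_rng(seed)
    pts = np.vstack([xy, targets])
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    Cfull = sill * np.exp(-d / rng_) + 1e-8 * np.eye(len(pts))
    s = np.linalg.cholesky(Cfull) @ rng.standard_normal(len(pts))
    s_data, s_targ = s[:len(xy)], s[len(xy):]
    return (simple_krige(xy, z, targets, sill, rng_)
            + s_targ - simple_krige(xy, s_data, targets, sill, rng_))
```

    Each realization honors the data while restoring realistic spatial variability between wells; running many realizations through a travel-time calculation propagates head uncertainty into travel-time uncertainty.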

  12. Efficient coding schemes with power allocation using space-time-frequency spreading

    Institute of Scientific and Technical Information of China (English)

    Jiang Haining; Luo Hanwen; Tian Jifeng; Song Wentao; Liu Xingzhao

    2006-01-01

    An efficient space-time-frequency (STF) coding strategy for multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) systems is presented for high bit rate data transmission over frequency selective fading channels. The proposed scheme is a new approach to space-time-frequency coded OFDM (COFDM) that combines OFDM with space-time coding, linear precoding and adaptive power allocation to provide higher quality of transmission in terms of the bit error rate performance and power efficiency. In addition to exploiting the maximum diversity gain in frequency, time and space, the proposed scheme enjoys high coding advantages and low-complexity decoding. The significant performance improvement of our design is confirmed by corroborating numerical simulations.

  13. Stochastic simulation of time-series models combined with geostatistics to predict water-table scenarios in a Guarani Aquifer System outcrop area, Brazil

    Science.gov (United States)

    Manzione, Rodrigo L.; Wendland, Edson; Tanikawa, Diego H.

    2012-11-01

    Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Monitoring water-level networks can give information about the dynamic of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physical mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied in a case study in a Guarani Aquifer System (GAS) outcrop area located in the southeastern part of Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative, when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage like the GAS.
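
    A minimal sketch of the time-series half of the framework, assuming a simple AR(1) model rather than the transfer-function/noise models typically used for water-table data; the fitted per-well predictions would then be interpolated spatially (e.g., by kriging):

```python
import numpy as np

def fit_ar1(levels):
    """Least-squares fit of the time-series model h_t = c + phi * h_{t-1} + e_t
    to an observed water-level record at one well."""
    y, x = levels[1:], levels[:-1]
    X = np.column_stack([np.ones_like(x), x])
    (c, phi), *_ = np.linalg.lstsq(X, y, rcond=None)
    return c, phi

def forecast_ar1(last_level, c, phi, steps):
    """Deterministic forecast; with phi < 1 it relaxes toward c / (1 - phi)."""
    out, h = [], last_level
    for _ in range(steps):
        h = c + phi * h
        out.append(h)
    return np.array(out)
```

    Stochastic scenarios, as discussed in the abstract, would add simulated noise to each forecast step and summarize the spread over many runs.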

  14. The Use of Geostatistics in the Study of Floral Phenology of Vulpia geniculata (L. Link

    Directory of Open Access Journals (Sweden)

    Eduardo J. León Ruiz

    2012-01-01

    Full Text Available Traditionally phenology studies have been focused on changes through time, but there exist many instances in ecological research where it is necessary to interpolate among spatially stratified samples. The combined use of Geographical Information Systems (GIS) and geostatistics can be an essential tool for spatial analysis in phenological studies. Geostatistics is a family of statistics that describe correlations through space/time and can be used for both quantifying spatial correlation and interpolating unsampled points. In the present work, estimations based upon geostatistics and GIS mapping have enabled the construction of spatial models that reflect the phenological evolution of Vulpia geniculata (L. Link throughout the study area during the sampling season. Ten sampling points, scattered throughout the city and low mountains in the “Sierra de Córdoba”, were chosen to carry out the weekly phenological monitoring during the flowering season. The phenological data were interpolated by applying the traditional geostatistical method of kriging, which was used to elaborate weekly estimates of V. geniculata phenology in unsampled areas. Finally, the application of geostatistics and GIS to create phenological maps could be an essential complement in pollen aerobiological studies, given the increased interest in obtaining automatic aerobiological forecasting maps.

  15. Introduction to Geostatistics

    Science.gov (United States)

    Kitanidis, P. K.

    1997-05-01

    Introduction to Geostatistics presents practical techniques for engineers and earth scientists who routinely encounter interpolation and estimation problems when analyzing data from field observations. Requiring no background in statistics, and with a unique approach that synthesizes classic and geostatistical methods, this book offers linear estimation methods for practitioners and advanced students. Well illustrated with exercises and worked examples, Introduction to Geostatistics is designed for graduate-level courses in earth sciences and environmental engineering.

  16. Modern space/time geostatistics using river distances: data integration of turbidity and E. coli measurements to assess fecal contamination along the Raritan River in New Jersey.

    Science.gov (United States)

    Money, Eric S; Carter, Gail P; Serre, Marc L

    2009-05-15

    Escherichia coli (E. coli) is a widely used indicator of fecal contamination in water bodies. External contact and subsequent ingestion of bacteria coming from fecal contamination can lead to harmful health effects. Since E. coli data are sometimes limited, the objective of this study is to use secondary information in the form of turbidity to improve the assessment of E. coli at unmonitored locations. We obtained all E. coli and turbidity monitoring data available from existing monitoring networks for the 2000-2006 time period for the Raritan River Basin, New Jersey. Using collocated measurements, we developed a predictive model of E. coli from turbidity data. Using this model, soft data are constructed for E. coli given turbidity measurements at 739 space/time locations where only turbidity was measured. Finally, the Bayesian Maximum Entropy (BME) method of modern space/time geostatistics was used for the data integration of monitored and predicted E. coli data to produce maps showing E. coli concentration estimated daily across the river basin. The addition of soft data in conjunction with the use of river distances reduced estimation error by about 30%. Furthermore, based on these maps, up to 35% of river miles in the Raritan Basin had a probability of E. coli impairment greater than 90% on the most polluted day of the study period.
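
    The soft-data construction can be sketched as a log-linear regression of E. coli on turbidity, with the residual spread supplying the uncertainty of each soft datum fed to the space/time estimator (BME in the study). This is a generic least-squares stand-in, not the study's actual predictive model:

```python
import numpy as np

def fit_soft_data_model(turbidity, ecoli):
    """Fit log10(E. coli) = a + b * log10(turbidity) at collocated sites;
    the residual spread sigma becomes the soft-datum uncertainty."""
    X = np.column_stack([np.ones(len(turbidity)), np.log10(turbidity)])
    y = np.log10(ecoli)
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma = (y - X @ np.array([a, b])).std(ddof=2)
    return a, b, sigma

def soft_datum(a, b, sigma, turbidity):
    """Mean and std of predicted log10 E. coli where only turbidity exists:
    a Gaussian 'soft' observation rather than a hard measurement."""
    return a + b * np.log10(turbidity), sigma
```

    The estimator then weights these soft data by their variance alongside the hard E. coli measurements when mapping concentrations along the river network.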

  17. A Bayesian spatio-temporal geostatistical model with an auxiliary lattice for large datasets

    KAUST Repository

    Xu, Ganggang

    2015-01-01

    When spatio-temporal datasets are large, the computational burden can lead to failures in the implementation of traditional geostatistical tools. In this paper, we propose a computationally efficient Bayesian hierarchical spatio-temporal model in which the spatial dependence is approximated by a Gaussian Markov random field (GMRF) while the temporal correlation is described using a vector autoregressive model. By introducing an auxiliary lattice on the spatial region of interest, the proposed method is not only able to handle irregularly spaced observations in the spatial domain, but it is also able to bypass the missing data problem in a spatio-temporal process. Because the computational complexity of the proposed Markov chain Monte Carlo algorithm is of the order O(n) with n the total number of observations in space and time, our method can be used to handle very large spatio-temporal datasets with reasonable CPU times. The performance of the proposed model is illustrated using simulation studies and a dataset of precipitation data from the coterminous United States.
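
    The GMRF-plus-VAR structure can be sketched by building the sparse lattice precision matrix and taking one temporal step. The precision parameterization (`tau`, `kappa`) is an illustrative choice, and the dense Cholesky in `var1_step` is for the sketch only; the paper's point is that sparsity keeps the true MCMC cost near O(n):

```python
import numpy as np
import scipy.sparse as sp

def gmrf_precision(nx, ny, tau=1.0, kappa=0.1):
    """Sparse GMRF precision Q = tau * (kappa*I + L) on an nx-by-ny lattice,
    where L is the graph Laplacian of the 4-neighbour grid."""
    n = nx * ny
    idx = np.arange(n).reshape(ny, nx)
    pairs = [(idx[:, :-1].ravel(), idx[:, 1:].ravel()),   # horizontal edges
             (idx[:-1, :].ravel(), idx[1:, :].ravel())]   # vertical edges
    rows = np.concatenate([a for a, b in pairs] + [b for a, b in pairs])
    cols = np.concatenate([b for a, b in pairs] + [a for a, b in pairs])
    W = sp.coo_matrix((np.ones(rows.size), (rows, cols)), shape=(n, n)).tocsr()
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W
    return tau * (kappa * sp.identity(n) + L)

def var1_step(x_prev, Q, phi=0.8, rng=None):
    """One step of the temporal VAR(1): x_t = phi * x_{t-1} + innovation,
    with innovation ~ N(0, Q^{-1}) (dense Cholesky here, only for the sketch)."""
    rng = rng or np.random.default_rng(0)
    C = np.linalg.inv(Q.toarray())
    return phi * x_prev + np.linalg.cholesky(C) @ rng.standard_normal(Q.shape[0])
```

    Observations at irregular station locations would be linked to the auxiliary lattice through an observation matrix, which is how the model handles irregular spacing and missing data.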

  18. Geostatistics - bloodhound of uranium exploration

    International Nuclear Information System (INIS)

    David, Michel

    1979-01-01

    Geostatistics makes possible the efficient use of the information contained in core samples obtained by diamond drilling. The probability that a core represents the true content of a deposit, and the likely content of an orebody between two core samples can both be estimated using geostatistical methods. A confidence interval can be given for the mean grade of a deposit. The use of a computer is essential in the calculation of the continuity function, the variogram, when as many as 800,000 core samples may be involved. The results may be used to determine where additional samples need to be taken, and to develop a picture of the probable grades throughout the deposit. The basic mathematical model is about 15 years old, but applications to different types of deposit require various adaptations. The Ecole Polytechnique is currently developing methods for uranium deposits. (LL)

  19. Gstat: a program for geostatistical modelling, prediction and simulation

    Science.gov (United States)

    Pebesma, Edzer J.; Wesseling, Cees G.

    1998-01-01

    Gstat is a computer program for variogram modelling, and geostatistical prediction and simulation. It provides a generic implementation of the multivariable linear model with trends modelled as a linear function of coordinate polynomials or of user-defined base functions, and independent or dependent, geostatistically modelled, residuals. Simulation in gstat comprises conditional or unconditional (multi-) Gaussian sequential simulation of point values or block averages, or (multi-) indicator sequential simulation. Besides many of the popular options found in other geostatistical software packages, gstat offers the unique combination of (i) an interactive user interface for modelling variograms and generalized covariances (residual variograms), that uses the device-independent plotting program gnuplot for graphical display, (ii) support for several ascii and binary data and map file formats for input and output, (iii) a concise, intuitive and flexible command language, (iv) user customization of program defaults, (v) no built-in limits, and (vi) free, portable ANSI-C source code. This paper describes the class of problems gstat can solve, and addresses aspects of efficiency and implementation, managing geostatistical projects, and relevant technical details.

  20. 4th International Geostatistics Congress

    CERN Document Server

    1993-01-01

    The contributions in this book were presented at the Fourth International Geostatistics Congress held in Tróia, Portugal, in September 1992. They provide a comprehensive account of the current state of the art of geostatistics, including recent theoretical developments and new applications. In particular, readers will find descriptions and applications of the more recent methods of stochastic simulation together with data integration techniques applied to the modelling of hydrocarbon reservoirs. In other fields there are stationary and non-stationary geostatistical applications to geology, climatology, pollution control, soil science, hydrology and human sciences. The papers also provide an insight into new trends in geostatistics, particularly the increasing interaction with many other scientific disciplines. This book is a significant reference work for practitioners of geostatistics both in academia and industry.

  1. 7th International Geostatistics Congress

    CERN Document Server

    Deutsch, Clayton

    2005-01-01

    The conference proceedings consist of approximately 120 technical papers presented at the Seventh International Geostatistics Congress held in Banff, Alberta, Canada in 2004. All the papers were reviewed by an international panel of leading geostatisticians. The five major sections are: theory, mining, petroleum, environmental and other applications. The first section showcases new and innovative ideas in the theoretical development of geostatistics as a whole; these ideas will have large impact on (1) the directions of future geostatistical research, and (2) the conventional approaches to heterogeneity modelling in a wide range of natural resource industries. The next four sections are focused on applications and innovations relating to the use of geostatistics in specific industries. Historically, mining, petroleum and environmental industries have embraced the use of geostatistics for uncertainty characterization, so these three industries are identified as major application areas. The last section is open...

  2. A geostatistical analysis of geostatistics

    NARCIS (Netherlands)

    Hengl, T.; Minasny, B.; Gould, M.

    2009-01-01

    The bibliometric indices of the scientific field of geostatistics were analyzed using statistical and spatial data analysis. The publications and their citation statistics were obtained from the Web of Science (4000 most relevant), Scopus (2000 most relevant) and Google Scholar (5389). The focus was

  3. 10th International Geostatistics Congress

    CERN Document Server

    Rodrigo-Ilarri, Javier; Rodrigo-Clavero, María; Cassiraga, Eduardo; Vargas-Guzmán, José

    2017-01-01

    This book contains selected contributions presented at the 10th International Geostatistics Congress held in Valencia from 5 to 9 September, 2016. This is a quadrennial congress that serves as the meeting point for any engineer, professional, practitioner or scientist working in geostatistics. The book contains carefully reviewed papers on geostatistical theory and applications in fields such as mining engineering, petroleum engineering, environmental science, hydrology, ecology, and other fields.

  4. Approaches in highly parameterized inversion: bgaPEST, a Bayesian geostatistical approach implementation with PEST: documentation and instructions

    Science.gov (United States)

    Fienen, Michael N.; D'Oria, Marco; Doherty, John E.; Hunt, Randall J.

    2013-01-01

    The application bgaPEST is a highly parameterized inversion software package implementing the Bayesian Geostatistical Approach in a framework compatible with the parameter estimation suite PEST. Highly parameterized inversion refers to cases in which parameters are distributed in space or time and are correlated with one another. The Bayesian aspect of bgaPEST is related to Bayesian probability theory in which prior information about parameters is formally revised on the basis of the calibration dataset used for the inversion. Conceptually, this approach formalizes the conditionality of estimated parameters on the specific data and model available. The geostatistical component of the method refers to the way in which prior information about the parameters is used. A geostatistical autocorrelation function is used to enforce structure on the parameters to avoid overfitting and unrealistic results. The Bayesian Geostatistical Approach is designed to provide the smoothest solution that is consistent with the data. Optionally, users can specify a level of fit or estimate a balance between fit and model complexity informed by the data. Groundwater and surface-water applications are used as examples in this text, but the possible uses of bgaPEST extend to any distributed parameter applications.
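
    The flavor of the approach can be sketched as a linear-Gaussian MAP estimate with a geostatistical prior covariance enforcing structure on the parameters. This is a generic textbook formula, not bgaPEST's interface or algorithm:

```python
import numpy as np

def geostat_inverse(G, d, C_prior, sigma_noise=0.1):
    """MAP estimate for d = G m + noise with geostatistical prior m ~ N(0, C_prior):
    m_hat = C G^T (G C G^T + sigma^2 I)^{-1} d, i.e. the smoothest model
    (as measured by the prior) consistent with the data."""
    S = G @ C_prior @ G.T + sigma_noise**2 * np.eye(len(d))
    return C_prior @ G.T @ np.linalg.solve(S, d)
```

    The prior covariance C_prior is built from a geostatistical autocorrelation function; a tighter noise level pulls the solution toward the data, a looser one toward the smooth prior mean.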

  5. Qualitative and quantitative comparison of geostatistical techniques of porosity prediction from the seismic and logging data: a case study from the Blackfoot Field, Alberta, Canada

    Science.gov (United States)

    Maurya, S. P.; Singh, K. H.; Singh, N. P.

    2018-05-01

    In the present study, three recently developed geostatistical methods (single-attribute analysis, multi-attribute analysis and the probabilistic neural network algorithm) have been used to predict porosity in the inter-well region for the Blackfoot field, an oil field in Alberta, Canada. These techniques make use of seismic attributes generated by model-based inversion and colored inversion techniques. The principal objective of the study is to find the suitable combination of seismic inversion and geostatistical techniques to predict porosity and to identify prospective zones in the 3D seismic volume. The porosity estimated from these geostatistical approaches is corroborated with the well-log porosity. The results suggest that all three implemented geostatistical methods are efficient and reliable for predicting porosity, but the multi-attribute and probabilistic neural network analyses provide more accurate and higher-resolution porosity sections. A low-impedance (6000-8000 m/s g/cc) and high-porosity (> 15%) zone is interpreted from the inverted impedance and porosity sections, respectively, in the 1060-1075 ms time interval and is characterized as a reservoir. The qualitative and quantitative results demonstrate that, of all the employed geostatistical methods, the probabilistic neural network along with model-based inversion is the most efficient method for predicting porosity in the inter-well region.
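
    The multi-attribute transform can be sketched as a least-squares fit at the wells, applied to attributes extracted everywhere in the seismic volume. This linear version stands in for both the multi-attribute and PNN predictors described above; the attribute names and weights are illustrative:

```python
import numpy as np

def fit_multi_attribute(attrs, porosity):
    """Least-squares multi-attribute transform: porosity = w0 + sum_k w_k * attr_k,
    fitted where well-log porosity is known."""
    X = np.column_stack([np.ones(len(porosity)), attrs])
    w, *_ = np.linalg.lstsq(X, porosity, rcond=None)
    return w

def predict_porosity(w, attrs):
    """Apply the fitted transform to attributes anywhere in the volume."""
    return w[0] + attrs @ w[1:]
```

    A PNN replaces this linear map with a kernel-weighted, nonlinear one, which is why it can resolve finer porosity variation at the cost of more training effort.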

  6. Integration of GIS, Geostatistics, and 3-D Technology to Assess the Spatial Distribution of Soil Moisture

    Science.gov (United States)

    Betts, M.; Tsegaye, T.; Tadesse, W.; Coleman, T. L.; Fahsi, A.

    1998-01-01

    The spatial and temporal distribution of near-surface soil moisture is of fundamental importance to many physical, biological, biogeochemical, and hydrological processes. However, knowledge of these space-time dynamics and the processes which control them remains unclear. The integration of geographic information systems (GIS) and geostatistics promises a simple mechanism to evaluate and display the spatial and temporal distribution of this vital hydrologic and physical variable. This research therefore demonstrates the use of geostatistics and GIS to predict and display soil moisture distribution under vegetated and non-vegetated plots. The research was conducted at the Winfred Thomas Agricultural Experiment Station (WTAES), Hazel Green, Alabama. Soil moisture measurements were made on a 10 by 10 m grid in tall fescue grass (GR), alfalfa (AA), bare rough (BR), and bare smooth (BS) plots. Results indicated that the variance associated with soil moisture was higher for vegetated plots than for non-vegetated plots; the presence of vegetation in general contributed to the spatial variability of soil moisture. Integration of geostatistics and GIS can improve the productivity of farm lands and the precision of farming.
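The geostatistical workhorse for quantifying such spatial variability is the empirical semivariogram. A minimal sketch for gridded samples follows; the values are synthetic, not the WTAES data:

```python
import numpy as np

# Empirical semivariogram for samples on a grid:
#   gamma(h) = mean of 0.5*(z_i - z_j)^2 over pairs separated by lag h.
def semivariogram(coords, values, lags, tol):
    coords, values = np.asarray(coords, float), np.asarray(values, float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    gamma = []
    for h in lags:
        i, j = np.where((np.abs(d - h) <= tol) & (d > 0))
        gamma.append(0.5 * np.mean((values[i] - values[j]) ** 2))
    return gamma

# 10 m grid with moisture increasing along x: semivariance grows with lag.
xs, ys = np.meshgrid(np.arange(0, 50, 10), np.arange(0, 50, 10))
coords = np.column_stack([xs.ravel(), ys.ravel()])
vals = 0.1 * coords[:, 0]
g = semivariogram(coords, vals, lags=[10, 20, 30], tol=0.5)
print(g)  # increases with lag for this trend-dominated field
```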

  7. Reducing uncertainty in geostatistical description with well testing pressure data

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, A.C.; He, Nanqun [Univ. of Tulsa, OK (United States); Oliver, D.S. [Chevron Petroleum Technology Company, La Habra, CA (United States)

    1997-08-01

    Geostatistics has proven to be an effective tool for generating realizations of reservoir properties conditioned to static data, e.g., core and log data and geologic knowledge. Due to the lack of closely spaced data in the lateral directions, there will be significant variability in reservoir descriptions generated by geostatistical simulation, i.e., significant uncertainty in the reservoir descriptions. In past work, we have presented procedures based on inverse problem theory for generating reservoir descriptions (rock property fields) conditioned to pressure data and to geostatistical information represented as prior means and variograms for log-permeability and porosity. Although we have shown that incorporating pressure data reduces the uncertainty below the level contained in the geostatistical model based only on static information (the prior model), our previous results did not explicitly account for uncertainties in the prior means and in the parameters defining the variogram model. In this work, we investigate how pressure data can help detect errors in the prior means. If errors in the prior means are large and are not taken into account, realizations conditioned to pressure data represent incorrect samples of the a posteriori probability density function for the rock property fields, whereas, if the uncertainty in the prior mean is incorporated properly into the model, one obtains realistic realizations of the rock property fields.

  8. Time and Space Efficient Algorithms for Two-Party Authenticated Data Structures

    Science.gov (United States)

    Papamanthou, Charalampos; Tamassia, Roberto

    Authentication is increasingly relevant to data management. Data is being outsourced to untrusted servers, and clients want to securely update and query their data. For example, in database outsourcing, a client's database is stored and maintained by an untrusted server. Also, in simple storage systems, clients can store very large amounts of data but, at the same time, want assurance of its integrity when they retrieve it. In this paper, we present a model and protocol for two-party authentication of data structures: a client outsources its data structure and verifies that the answers to its queries have not been tampered with. We provide efficient algorithms to securely outsource a skip list with logarithmic time overhead at the server and client and logarithmic communication cost, thus providing an efficient authentication primitive for outsourced data, both structured (e.g., relational databases) and semi-structured (e.g., XML documents). In our technique, the client stores only a constant amount of space, which is optimal. Our two-party authentication framework can be deployed on top of existing storage applications, thus providing an efficient authentication service. Finally, we present experimental results that demonstrate the practical efficiency and scalability of our scheme.
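The paper authenticates a skip list; the same constant-client-state, logarithmic-proof idea can be illustrated with the simpler Merkle hash tree. This is a simplified stand-in, not the authors' protocol:

```python
import hashlib

# Merkle-tree sketch of outsourced-data authentication: the client keeps
# only one root hash and verifies any element with a log-size proof.
def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build(leaves):  # assumes a power-of-two number of leaves
    level = [h(x) for x in leaves]
    tree = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree  # tree[-1][0] is the root the client stores

def prove(tree, idx):  # server-side: sibling hashes along the path
    proof = []
    for level in tree[:-1]:
        sib = idx ^ 1
        proof.append((level[sib], sib < idx))
        idx //= 2
    return proof

def verify(root, leaf, proof):  # client-side: recompute the root
    acc = h(leaf)
    for sibling, sib_is_left in proof:
        acc = h(sibling + acc) if sib_is_left else h(acc + sibling)
    return acc == root

data = [b"a", b"b", b"c", b"d"]
tree = build(data)
root = tree[-1][0]                             # constant client-side state
assert verify(root, b"c", prove(tree, 2))      # honest answer accepted
assert not verify(root, b"x", prove(tree, 2))  # tampered answer rejected
```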

  9. Geostatistical investigations of rock masses

    International Nuclear Information System (INIS)

    Matar, J.A.; Sarquis, M.A.; Girardi, J.P.; Tabbia, G.H.

    1987-01-01

    The geostatistical techniques applied to select a minimum-fracturing volume at Sierra del Medio make it possible to quantify and qualify the variability of the mechanical characteristics and fracture density, as well as the level of reliability of the estimates. The role of geostatistics in selecting minimum-fracturing blocks, a very important site-selection step, is discussed in this work. The only variable used is jointing density, chosen to detect the principal fracture systems affecting the rock mass; semivariograms of this regionalized variable were computed and interpreted. The fracturing results are compared with deep and shallow geological surveys to obtain two- and three-dimensional models. The ability of the geostatistical techniques to detect local geological phenomena such as faults is discussed. The variability model obtained from the borehole data computations is examined against the vertical Columnar Model of Discontinuity (fractures) hypothesis, derived from geological studies of the spatial behaviour of the joint systems and from geostatistical interpretation. (Author) [es

  10. Time-Space Trade-Offs

    DEFF Research Database (Denmark)

    Pagter, Jakob Illeborg

    . The area of time-space trade-offs deals with both upper and lower bounds and both are interesting, theoretically as well as practically. The viewpoint of this dissertation is theoretical, but we believe that some of our results can find applications in practice as well. The last four years have witnessed...... perspective hierarchical memory layout models are the most interesting. Such models are called external memory models, in contrast to the internal memory models discussed above. Despite the fact that space might be of great relevance when solving practical problems on real computers, no theoretical model...... capturing space (and time simultaneously) has been defined. We introduce such a model and use it to prove so-called IO-space trade-offs for Sorting. Building on the abovementioned techniques for time-space efficient internal memory Sorting, we develop the first IO-space efficient external memory Sorting...

  11. Geostatistical prediction of microbial water quality throughout a stream network using meteorology, land cover, and spatiotemporal autocorrelation.

    Science.gov (United States)

    Holcomb, David Andrew; Messier, Kyle P; Serre, Marc L; Rowny, Jakob G; Stewart, Jill R

    2018-06-11

    Predictive modeling is promising as an inexpensive tool to assess water quality. We developed geostatistical predictive models of microbial water quality that empirically modelled spatiotemporal autocorrelation in measured fecal coliform (FC) bacteria concentrations to improve prediction. We compared five geostatistical models featuring different autocorrelation structures, fit to 676 observations from 19 locations in North Carolina's Jordan Lake watershed using meteorological and land cover predictor variables. Though stream distance metrics (with and without flow-weighting) failed to improve prediction over the Euclidean distance metric, incorporating temporal autocorrelation substantially improved prediction over the space-only models. We predicted FC throughout the stream network daily for one year, designating locations "impaired", "unimpaired", or "unassessed" if the probability of exceeding the state standard was >90%, <10%, or between 10% and 90%, respectively. We could assign impairment status to more of the stream network on days when any FC were measured, suggesting that frequent sample-based monitoring remains necessary, though implementing spatiotemporal predictive models may reduce the number of concurrent sampling locations required to adequately assess water quality. Together, these results suggest that prioritizing sampling at different times and conditions using geographically sparse monitoring networks is adequate to build robust and informative geostatistical models of water quality impairment.
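The three-way impairment call described above reduces to thresholding an exceedance probability. A minimal helper using the thresholds from the abstract (the predictive model supplying the probability is assumed):

```python
# Classify a location from its predicted probability of exceeding the
# state FC standard, per the decision rule in the abstract.
def assess(p_exceed):
    if p_exceed > 0.90:
        return "impaired"
    if p_exceed < 0.10:
        return "unimpaired"
    return "unassessed"  # 10%-90%: too uncertain to call either way

print([assess(p) for p in (0.95, 0.05, 0.50)])
# ['impaired', 'unimpaired', 'unassessed']
```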

  12. Computational system for geostatistical analysis

    Directory of Open Access Journals (Sweden)

    Vendrusculo Laurimar Gonçalves

    2004-01-01

    Geostatistics identifies the spatial structure of variables representing several phenomena, and its use is becoming more intense in agricultural activities. This paper describes a computer program, based on Windows interfaces (Borland Delphi), which performs spatial analyses of datasets through geostatistical tools: classical statistical calculations, averages, cross- and directional semivariograms, simple kriging estimates and jackknifing calculations. A published dataset of soil carbon and nitrogen was used to validate the system. The system streamlined the geostatistical analysis process compared with manipulating the computational routines in an MS-DOS environment. The Windows development approach allowed the user to model the semivariogram graphically with a greater degree of interaction, functionality rarely available in similar programs. Given its characteristics of rapid prototyping and simplicity when incorporating correlated routines, the Delphi environment offers the main advantage of permitting the evolution of this system.

  13. Efficient Geo-Computational Algorithms for Constructing Space-Time Prisms in Road Networks

    Directory of Open Access Journals (Sweden)

    Hui-Ping Chen

    2016-11-01

    The space-time prism (STP) is a key concept in time geography for analyzing human activity-travel behavior under various space-time constraints. Most existing time-geographic studies use a straightforward algorithm to construct STPs in road networks by using two one-to-all shortest path searches. However, this straightforward algorithm can introduce considerable computational overhead, given that the accessible links in an STP are generally a small portion of the whole network. To address this issue, an efficient geo-computational algorithm, called NTP-A*, is proposed. The NTP-A* algorithm employs the A* and branch-and-bound techniques to discard inaccessible links during the two shortest path searches, and thereby improves STP construction performance. Comprehensive computational experiments are carried out to demonstrate the computational advantage of the proposed algorithm. Several implementation techniques, including the label-correcting technique and the hybrid link-node labeling technique, are discussed and analyzed. Experimental results show that the proposed NTP-A* algorithm can significantly improve STP construction performance in large-scale road networks, by a factor of 100 compared with existing algorithms.
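The baseline algorithm the paper improves on can be sketched directly: run one-to-all shortest paths from the origin and (on the reversed graph) to the destination, and keep a node if the two distances fit within the time budget. NTP-A* reaches the same set while pruning with A* and branch-and-bound; this sketch shows only the straightforward version on a toy graph:

```python
import heapq

# One-to-all shortest paths (Dijkstra) over an adjacency dict.
def dijkstra(adj, src):
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# A node v lies inside the STP iff
#   dist(origin, v) + dist(v, destination) <= time budget.
def stp_nodes(adj, radj, origin, dest, budget):
    d_from = dijkstra(adj, origin)
    d_to = dijkstra(radj, dest)  # reversed graph gives distances *to* dest
    return {v for v in d_from
            if v in d_to and d_from[v] + d_to[v] <= budget}

adj = {"A": [("B", 2), ("C", 5)], "B": [("C", 2), ("D", 4)],
       "C": [("D", 1)], "D": []}
radj = {"B": [("A", 2)], "C": [("A", 5), ("B", 2)],
        "D": [("B", 4), ("C", 1)]}
print(sorted(stp_nodes(adj, radj, "A", "D", 5)))  # ['A', 'B', 'C', 'D']
```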

  14. Evaluating the effect of sampling and spatial correlation on ground-water travel time uncertainty coupling geostatistical, stochastic, and first order, second moment methods

    International Nuclear Information System (INIS)

    Andrews, R.W.; LaVenue, A.M.; McNeish, J.A.

    1989-01-01

    Ground-water travel time predictions at potential high-level waste repositories are subject to a degree of uncertainty due to the scale of averaging incorporated in conceptual models of the ground-water flow regime, as well as to the lack of data on the spatial variability of the hydrogeologic parameters. The present study describes the effect of limited observations of a spatially correlated permeability field on the predicted ground-water travel time uncertainty. Varying permeability correlation lengths have been used to investigate the importance of this geostatistical property for the tails of the travel time distribution. This study uses both geostatistical and differential analysis techniques. Following the generation of a spatially correlated permeability field, which is taken to represent reality, semivariogram analyses are performed on small random subsets of the generated field to determine the geostatistical properties of the field represented by the observations. Kriging is then employed to generate a kriged permeability field and the corresponding standard deviation of the estimated field conditioned by the limited observations. Using both the real and kriged fields, the ground-water flow regime is simulated, and ground-water travel paths and travel times are determined for various starting points. These results are used to define the ground-water travel time uncertainty due to path variability. The variance of the ground-water travel time along particular paths, due to the variance of the permeability field estimated using kriging, is then calculated using the first order, second moment method. The uncertainties in predicted travel time due to path and parameter uncertainties are then combined into a single distribution.
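The first order, second moment step can be illustrated for a single path segment. All numbers below are assumed for illustration, not taken from the study:

```python
# FOSM sketch: propagate kriged permeability variance into travel-time
# variance for one segment with T = L * n / (K * i) (Darcy velocity).
L, n_e, i = 100.0, 0.3, 0.01            # path length (m), porosity, gradient
K_hat, var_K = 1e-4, (2e-5) ** 2        # kriged mean and variance of K (m/s)

T = L * n_e / (K_hat * i)               # travel time at the kriged mean (s)
dT_dK = -T / K_hat                      # analytic sensitivity dT/dK
var_T = dT_dK ** 2 * var_K              # first-order variance propagation
print(T, (var_T ** 0.5) / T)            # relative std of T equals that of K
```

For this multiplicative model, the relative standard deviation of the travel time equals that of the permeability, which is why permeability uncertainty dominates the travel-time tails.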

  15. Monte Carlo full-waveform inversion of crosshole GPR data using multiple-point geostatistical a priori information

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Mosegaard, Klaus

    2012-01-01

    We present a general Monte Carlo full-waveform inversion strategy that integrates a priori information described by geostatistical algorithms with Bayesian inverse problem theory. The extended Metropolis algorithm can be used to sample the a posteriori probability density of highly nonlinear...... inverse problems, such as full-waveform inversion. Sequential Gibbs sampling is a method that allows efficient sampling of a priori probability densities described by geostatistical algorithms based on either two-point (e.g., Gaussian) or multiple-point statistics. We outline the theoretical framework......) Based on a posteriori realizations, complicated statistical questions can be answered, such as the probability of connectivity across a layer. (3) Complex a priori information can be included through geostatistical algorithms. These benefits, however, require more computing resources than traditional...

  16. Space-Time Chip Equalization for Maximum Diversity Space-Time Block Coded DS-CDMA Downlink Transmission

    Directory of Open Access Journals (Sweden)

    Petré Frederik

    2004-01-01

    In the downlink of DS-CDMA, frequency-selectivity destroys the orthogonality of the user signals and introduces multiuser interference (MUI). Space-time chip equalization is an efficient tool to restore the orthogonality of the user signals and suppress the MUI. Furthermore, multiple-input multiple-output (MIMO) communication techniques can result in a significant increase in capacity. This paper focuses on space-time block coding (STBC) techniques and aims at combining STBC with the original single-antenna DS-CDMA downlink scheme. This results in the so-called space-time block coded DS-CDMA downlink schemes, many of which have been presented in the past. We focus on a new scheme that enables both the maximum multiantenna diversity and the maximum multipath diversity. Although this maximum diversity can only be collected by maximum likelihood (ML) detection, we pursue suboptimal detection by means of space-time chip equalization, which lowers the computational complexity significantly. To design the space-time chip equalizers, we also propose efficient pilot-based methods. Simulation results show improved performance over the space-time RAKE receiver for the space-time block coded DS-CDMA downlink schemes that have been proposed for the UMTS and IS-2000 W-CDMA standards.
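The building block behind such schemes is the classic two-antenna Alamouti space-time block code. The sketch below shows its encode/decode algebra for a flat-fading, noise-free channel; the paper's chip-level equalizers for frequency-selective channels are beyond this sketch:

```python
import numpy as np

# Alamouti STBC over two antennas and two time slots, one receive antenna.
rng = np.random.default_rng(1)
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)   # QPSK symbols
h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)   # channel gains

# Slot 1: antennas send (s1, s2). Slot 2: (-conj(s2), conj(s1)).
r1 = h1 * s1 + h2 * s2
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)

# Linear combining recovers each symbol with full two-branch diversity.
g = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
print(np.allclose([s1_hat, s2_hat], [s1, s2]))  # True (noise-free)
```

The decoder is purely linear, which is the same complexity motivation the paper gives for preferring chip equalization over full ML detection.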

  17. 2nd European Conference on Geostatistics for Environmental Applications

    CERN Document Server

    Soares, Amílcar; Froidevaux, Roland

    1999-01-01

    The Second European Conference on Geostatistics for Environmental Applications took place in Valencia, November 18-20, 1998. Two years have passed since the first meeting in Lisbon, and the geostatistical community has remained active in the environmental field. In these days of congress inflation, we feel that continuity can only be achieved by ensuring quality in the papers. For this reason, all papers in the book have been reviewed by at least two referees, and care has been taken to ensure that the reviewer comments have been incorporated in the final version of the manuscript. We are thankful to the members of the scientific committee for their timely review of the scripts. All in all, there are three keynote papers from experts in soil science, climatology and ecology, and 43 contributed papers providing a good indication of the status of geostatistics as applied in the environmental field all over the world. We feel now confident that the geoENV conference series, seeded around a coffee table almost six...

  18. Geostatistical Borehole Image-Based Mapping of Karst-Carbonate Aquifer Pores.

    Science.gov (United States)

    Sukop, Michael C; Cunningham, Kevin J

    2016-03-01

    Quantification of the character and spatial distribution of porosity in carbonate aquifers is important as input into computer models used in the calculation of intrinsic permeability and for next-generation, high-resolution groundwater flow simulations. Digital, optical, borehole-wall image data from three closely spaced boreholes in the karst-carbonate Biscayne aquifer in southeastern Florida are used in geostatistical experiments to assess the capabilities of various methods to create realistic two-dimensional models of vuggy megaporosity and matrix-porosity distribution in the limestone that composes the aquifer. When the borehole image data alone were used as the model training image, multiple-point geostatistics failed to detect the known spatial autocorrelation of vuggy megaporosity and matrix porosity among the three boreholes, which were only 10 m apart. Variogram analysis and subsequent Gaussian simulation produced results that showed a realistic conceptualization of horizontal continuity of strata dominated by vuggy megaporosity and matrix porosity among the three boreholes. © 2015, National Ground Water Association.

  19. Using river distance and existing hydrography data can improve the geostatistical estimation of fish tissue mercury at unsampled locations.

    Science.gov (United States)

    Money, Eric S; Sackett, Dana K; Aday, D Derek; Serre, Marc L

    2011-09-15

    Mercury in fish tissue is a major human health concern. Consumption of mercury-contaminated fish poses risks to the general population, including potentially serious developmental defects and neurological damage in young children. Therefore, it is important to accurately identify areas that have the potential for high levels of bioaccumulated mercury. However, due to time and resource constraints, it is difficult to adequately assess fish tissue mercury on a basin wide scale. We hypothesized that, given the nature of fish movement along streams, an analytical approach that takes into account distance traveled along these streams would improve the estimation accuracy for fish tissue mercury in unsampled streams. Therefore, we used a river-based Bayesian Maximum Entropy framework (river-BME) for modern space/time geostatistics to estimate fish tissue mercury at unsampled locations in the Cape Fear and Lumber Basins in eastern North Carolina. We also compared the space/time geostatistical estimation using river-BME to the more traditional Euclidean-based BME approach, with and without the inclusion of a secondary variable. Results showed that this river-based approach reduced the estimation error of fish tissue mercury by more than 13% and that the median estimate of fish tissue mercury exceeded the EPA action level of 0.3 ppm in more than 90% of river miles for the study domain.

  20. The application of geostatistics in erosion hazard mapping

    NARCIS (Netherlands)

    Beurden, S.A.H.A. van; Riezebos, H.Th.

    1988-01-01

    Geostatistical interpolation or kriging of soil and vegetation variables has become an important alternative to other mapping techniques. Although a reconnaissance sampling is necessary and basic requirements of geostatistics have to be met, kriging has the advantage of giving estimates with a

  1. Validating spatial structure in canopy water content using geostatistics

    Science.gov (United States)

    Sanderson, E. W.; Zhang, M. H.; Ustin, S. L.; Rejmankova, E.; Haxo, R. S.

    1995-01-01

    Heterogeneity in ecological phenomena is scale dependent and affects the hierarchical structure of image data. AVIRIS pixels average the reflectance produced by complex absorption and scattering interactions between biogeochemical composition, canopy architecture, view and illumination angles, species distributions, and plant cover, as well as other factors. These scales affect validation of pixel reflectance, typically performed by relating pixel spectra to ground measurements acquired at scales of 1 m(exp 2) or less (e.g., field spectra, foliage and soil samples, etc.). As image analyses become more sophisticated, such as those for detection of canopy chemistry, validation becomes a critical problem. This paper presents a methodology for bridging between point measurements and pixels using geostatistics. Geostatistics has been extensively used in geological and hydrogeological studies but has received little application in ecological studies. The key criterion for kriging estimation is that the phenomenon varies in space and that an underlying controlling process produces spatial correlation between the measured data points. Ecological variation meets this requirement because communities vary along environmental gradients such as soil moisture, nutrient availability, and topography.

  2. A Bayesian Markov geostatistical model for estimation of hydrogeological properties

    International Nuclear Information System (INIS)

    Rosen, L.; Gustafson, G.

    1996-01-01

    A geostatistical methodology based on Markov-chain analysis and Bayesian statistics was developed for probability estimation of hydrogeological and geological properties in the siting process for a nuclear waste repository. The probability estimates have practical use in decision-making on issues such as siting, investigation programs, and construction design. The methodology is nonparametric, which makes it possible to handle information that does not exhibit standard statistical distributions, as is often the case for classified information. Data do not need to meet the requirements of additivity and normality, as with geostatistical methods based on regionalized variable theory, e.g., kriging. The methodology also has a formal way of incorporating professional judgments through the use of Bayesian statistics, which allows prior estimates to be updated to posterior probabilities each time new information becomes available. A Bayesian Markov Geostatistical Model (BayMar) software was developed to implement the methodology in two and three dimensions. This paper gives (1) a theoretical description of the Bayesian Markov Geostatistical Model; (2) a short description of the BayMar software; and (3) an example of application of the model for estimating the suitability for repository establishment with respect to the three parameters of lithology, hydraulic conductivity, and rock quality designation index (RQD) at 400-500 meters below ground surface in an area around the Aespoe Hard Rock Laboratory in southeastern Sweden.
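The prior-to-posterior updating at the heart of such a methodology is plain Bayes' rule over discrete classes. A minimal sketch with illustrative classes and numbers (not BayMar's actual parameters):

```python
# Bayesian update of the kind described: prior class probabilities for a
# rock unit are revised when a new, imperfect observation arrives.
prior = {"granite": 0.6, "diorite": 0.3, "greenstone": 0.1}

# Likelihood of the observation ("logged as granite") given each true
# class, i.e., the assumed reliability of the classification.
likelihood = {"granite": 0.8, "diorite": 0.15, "greenstone": 0.05}

evidence = sum(prior[c] * likelihood[c] for c in prior)
posterior = {c: prior[c] * likelihood[c] / evidence for c in prior}
print(posterior["granite"])  # ~0.906: the prior is sharpened by the data
```

Each new borehole observation simply replaces `prior` with `posterior` and repeats the update, which is the formal mechanism for incorporating successive information described above.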

  3. Local Geostatistical Models and Big Data in Hydrological and Ecological Applications

    Science.gov (United States)

    Hristopulos, Dionissios

    2015-04-01

    The advent of the big data era creates new opportunities for environmental and ecological modelling but also presents significant challenges. The availability of remote sensing images and low-cost wireless sensor networks means that spatiotemporal environmental data now cover larger spatial domains at higher spatial and temporal resolution and for longer time windows. Handling such voluminous data presents several technical and scientific challenges. In particular, the geostatistical methods used to process spatiotemporal data need to overcome the dimensionality curse associated with the need to store and invert large covariance matrices. There are various mathematical approaches for addressing the dimensionality problem, including change of basis, dimensionality reduction, hierarchical schemes, and local approximations. We present a Stochastic Local Interaction (SLI) model that can be used to model local correlations in spatial data. SLI is a random field model suitable for data on discrete supports (i.e., regular lattices or irregular sampling grids). The degree of localization is determined by means of kernel functions and appropriate bandwidths, and the strength of the correlations by means of coefficients. In the "plain vanilla" version, the parameter set involves scale and rigidity coefficients as well as a characteristic length; the latter, in connection with the rigidity coefficient, determines the correlation length of the random field. The SLI model is based on statistical field theory and extends previous research on Spartan spatial random fields [2,3] from continuum spaces to explicitly discrete supports. The SLI kernel functions employ adaptive bandwidths learned from the sampling spatial distribution [1]. The SLI precision matrix is expressed explicitly in terms of the model parameters and the kernel function; hence, covariance matrix inversion is not necessary for parameter inference, which is based on leave-one-out cross validation.

  4. Optimizing Groundwater Monitoring Networks Using Integrated Statistical and Geostatistical Approaches

    Directory of Open Access Journals (Sweden)

    Jay Krishna Thakur

    2015-08-01

    The aim of this work is to investigate new approaches, using methods based on statistics and geostatistics, for the spatio-temporal optimization of groundwater monitoring networks. The formulated and integrated methods were tested on the groundwater quality data set of Bitterfeld/Wolfen, Germany. Spatially, the monitoring network was optimized using geostatistical methods. Temporal optimization of the monitoring network was carried out using Sen's method (1968). For geostatistical network optimization, a geostatistical spatio-temporal algorithm was used to identify redundant wells in the 2- and 2.5-D Quaternary and Tertiary aquifers. The influences of interpolation block width, dimension, contaminant association, groundwater flow direction and aquifer homogeneity on the statistical and geostatistical methods for monitoring network optimization were analysed. The integrated approach shows 37% and 28% redundancy in the monitoring networks of the Quaternary and Tertiary aquifers, respectively. The geostatistical method also recommends 41 and 22 new monitoring wells in the Quaternary and Tertiary aquifers, respectively. In the temporal optimization, an overall optimized sampling interval was recommended in terms of the lower quartile (238 days), median (317 days) and upper quartile (401 days) for the research area of Bitterfeld/Wolfen. The demonstrated methods for improving groundwater monitoring networks can be used in real monitoring network optimization, with due consideration given to the influencing factors.
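Sen's (1968) estimator used for the temporal optimization is simply the median of all pairwise slopes of the time series, which makes it robust to outliers. A minimal sketch with an assumed concentration series:

```python
import statistics

# Sen's slope: median of all pairwise slopes (x_j - x_i) / (t_j - t_i).
def sens_slope(t, x):
    slopes = [(x[j] - x[i]) / (t[j] - t[i])
              for i in range(len(t)) for j in range(i + 1, len(t))
              if t[j] != t[i]]
    return statistics.median(slopes)

# Concentration drifting upward ~0.5 units/step despite one gross outlier.
t = [0, 1, 2, 3, 4, 5]
x = [1.0, 1.6, 2.0, 9.0, 3.1, 3.4]
print(round(sens_slope(t, x), 2))  # 0.5
```

An ordinary least-squares slope on the same series would be dragged far off by the outlier at t=3, which is exactly why a robust trend estimator is preferred for noisy monitoring records.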

  5. A general parallelization strategy for random path based geostatistical simulation methods

    Science.gov (United States)

    Mariethoz, Grégoire

    2010-07-01

    The size of simulation grids used for numerical models has increased by many orders of magnitude in recent years, and this trend is likely to continue. Efficient pixel-based geostatistical simulation algorithms have been developed, but for very large grids and complex spatial models the computational burden remains heavy. As cluster computers become widely available, using parallel strategies is a natural step for increasing the usable grid size and the complexity of the models. These strategies must profit from the possibilities offered by machines with a large number of processors. On such machines, the bottleneck is often the communication time between processors. We present a strategy that distributes grid nodes among all available processors while minimizing communication and latency times. It consists of centralizing the simulation on a master processor that calls other slave processors as if they were functions simulating one node at a time. The key is to decouple the sending and the receiving operations to avoid synchronization. Centralization allows a conflict management system ensuring that nodes being simulated simultaneously do not interfere in terms of neighborhood. The strategy is computationally efficient and versatile enough to be applicable to all random path based simulation methods.
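The conflict-management rule on the master can be sketched independently of the actual message passing: a node is dispatched to a worker only if its simulation neighborhood does not overlap that of any node currently in flight. This is an assumed minimal reading of the strategy, with communication itself omitted:

```python
# Master-side conflict check for a random-path parallel simulation:
# dispatch `node` only if its neighborhood is disjoint from the
# neighborhoods of all nodes currently being simulated by workers.
def can_dispatch(node, in_flight, neighborhood):
    if not in_flight:
        return True
    busy = set().union(*(neighborhood(v) for v in in_flight))
    return not (neighborhood(node) & busy)

# Toy 1-D grid with a +/- 1 node simulation neighborhood.
nbr = lambda v: {v - 1, v, v + 1}
in_flight = {10, 20}
print(can_dispatch(11, in_flight, nbr))  # False: overlaps node 10's data
print(can_dispatch(15, in_flight, nbr))  # True: independent of both
```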

  6. Topsoil moisture mapping using geostatistical techniques under different Mediterranean climatic conditions.

    Science.gov (United States)

    Martínez-Murillo, J F; Hueso-González, P; Ruiz-Sinoga, J D

    2017-10-01

    Soil mapping has been an important factor in the broadening of soil science and in responding to many different environmental questions. Geostatistical techniques, through kriging and co-kriging, have made it possible to improve the understanding of eco-geomorphologic variables, e.g., soil moisture. This study focuses on mapping topsoil moisture using geostatistical techniques under different Mediterranean climatic conditions (humid, dry and semiarid) in three small watersheds, considering topography and soil properties as key factors. A digital elevation model (DEM) with a resolution of 1×1 m was derived from a topographical survey, and soils were sampled to analyze the properties controlling topsoil moisture, which was measured over four years. Afterwards, topographic attributes were derived from the DEM, the soil properties were analyzed in the laboratory, and topsoil moisture was modeled for the entire watersheds by applying three geostatistical techniques: i) ordinary kriging; ii) co-kriging with topographic attributes as co-variates; and iii) co-kriging with topographic attributes and gravel content as co-variates. The results indicated that topsoil moisture was more accurately mapped in the dry and semiarid watersheds when the co-kriging procedure was performed. The study contributes to improving the efficiency and accuracy of studies of the Mediterranean eco-geomorphologic system and soil hydrology under field conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
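The first of the three techniques, ordinary kriging, can be sketched for a single unsampled point. The coordinates, values and exponential variogram below are assumed for illustration, not the study's fitted model:

```python
import numpy as np

# Ordinary kriging at one target point: solve the kriging system built
# from an exponential variogram, with a Lagrange multiplier enforcing
# that the weights sum to one (unbiasedness).
def ok_estimate(coords, values, target, sill=1.0, range_a=30.0):
    coords = np.asarray(coords, float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    gamma = lambda h: sill * (1.0 - np.exp(-h / range_a))
    n = len(values)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0                     # Lagrange row/column
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(coords - np.asarray(target, float), axis=1))
    w = np.linalg.solve(A, b)[:n]     # kriging weights (sum to 1)
    return float(w @ np.asarray(values, float))

coords = [(0, 0), (20, 0), (0, 20), (20, 20)]   # metres
vals = [10.0, 12.0, 11.0, 13.0]                 # moisture (%), synthetic
est = ok_estimate(coords, vals, target=(10, 10))
print(round(est, 2))  # 11.5: the symmetric centre point gets equal weights
```

Co-kriging extends the same system with cross-variograms between moisture and the secondary variables (topographic attributes, gravel content), which is what improved the maps in the drier watersheds.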

  7. Delineating Hydrofacies Spatial Distribution by Integrating Ensemble Data Assimilation and Indicator Geostatistics

    Energy Technology Data Exchange (ETDEWEB)

    Song, Xuehang [Florida State Univ., Tallahassee, FL (United States); Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Chen, Xingyuan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ye, Ming [Florida State Univ., Tallahassee, FL (United States); Dai, Zhenxue [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hammond, Glenn Edward [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This study develops a new framework of facies-based data assimilation for characterizing the spatial distribution of hydrofacies and estimating their associated hydraulic properties. The framework couples ensemble data assimilation with a transition-probability-based geostatistical model via a parameterization based on a level-set function. The nature of ensemble data assimilation makes the framework efficient and flexible enough to integrate various types of observation data. The transition-probability-based geostatistical model keeps the updated hydrofacies distributions under geological constraints. The framework is illustrated with a two-dimensional synthetic study that estimates the hydrofacies spatial distribution and the permeability of each hydrofacies from transient head data. Our results show that the proposed framework can characterize the hydrofacies distribution and associated permeability with adequate accuracy even with limited direct measurements of hydrofacies. Our study provides a promising starting point for hydrofacies delineation in complex real-world problems.
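The level-set idea in this record can be illustrated with a toy one-dimensional example: a smooth scalar field is perturbed by the assimilation update, and facies are recovered by thresholding, so facies boundaries move continuously as the field changes. The sinusoidal field and the scalar `shift` below are purely hypothetical stand-ins for the updated level-set function.

```python
import math

def phi(x, shift=0.0):
    # Toy level-set function: a smooth sinusoid; `shift` stands in for an
    # update applied by the ensemble data-assimilation step.
    return math.sin(0.05 * x) - shift

def facies(x, shift=0.0):
    # Facies 1 (e.g. sand) where the level set is positive, else facies 0.
    return 1 if phi(x, shift) > 0 else 0

before = [facies(x) for x in range(0, 200, 10)]
after = [facies(x, 0.5) for x in range(0, 200, 10)]  # boundaries have moved
```

The discrete facies field changes only through the continuous `phi`, which is what keeps the updated distribution geologically plausible in the full framework.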

  8. Optimal design of sampling and mapping schemes in the radiometric exploration of Chipilapa, El Salvador (Geo-statistics)

    International Nuclear Information System (INIS)

    Balcazar G, M.; Flores R, J.H.

    1992-01-01

    As part of the radiometric surface exploration carried out in the Chipilapa geothermal field, El Salvador, geostatistical parameters were derived from variograms calculated from the field data. The maximum correlation distance of the radon samples along the different observation directions (N-S, E-W, NW-SE, NE-SW) was 121 m, which defines the monitoring grid for future prospecting in the same area. From this, geostatistical techniques yield an optimized (minimum-cost) spacing of the field samples without losing the ability to detect the anomaly. (Author)

  9. Geostatistical Sampling Methods for Efficient Uncertainty Analysis in Flow and Transport Problems

    Science.gov (United States)

    Liodakis, Stylianos; Kyriakidis, Phaedon; Gaganis, Petros

    2015-04-01

    In hydrogeological applications involving flow and transport in heterogeneous porous media, the spatial distribution of hydraulic conductivity is often parameterized as a lognormal random field based on a histogram and variogram model inferred from data and/or synthesized from relevant knowledge. Realizations of simulated conductivity fields are then generated using geostatistical simulation involving simple random (SR) sampling and are subsequently used as inputs to physically based simulators of flow and transport in a Monte Carlo framework for evaluating the uncertainty in the spatial distribution of solute concentration due to the uncertainty in the spatial distribution of hydraulic conductivity [1]. Realistic uncertainty analysis, however, calls for a large number of simulated concentration fields and hence can become expensive in terms of both time and computer resources. A more efficient alternative to SR sampling is Latin hypercube (LH) sampling, a special case of stratified random sampling, which yields a more representative distribution of simulated attribute values with fewer realizations [2]. Here, the term representative implies realizations spanning efficiently the range of possible conductivity values corresponding to the lognormal random field. In this work we investigate the efficiency of alternative methods to classical LH sampling within the context of simulation of flow and transport in a heterogeneous porous medium. More precisely, we consider the stratified likelihood (SL) sampling method of [3], in which attribute realizations are generated using the polar simulation method by exploring the geometrical properties of the multivariate Gaussian distribution function. In addition, we propose a more efficient version of the above method, here termed minimum energy (ME) sampling, whereby a set of N representative conductivity realizations at M locations is constructed by: (i) generating a representative set of N points distributed on the
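The contrast between SR and LH sampling in this record can be shown concretely for a single lognormal conductivity parameter. The sketch below (with made-up log-conductivity parameters) draws N values both ways: SR uses N independent uniforms, while LH places exactly one uniform in each of N equal-probability strata before mapping through the inverse CDF.

```python
import random
from statistics import NormalDist

random.seed(1)
N = 20
mu, sigma = -2.0, 1.0          # illustrative log10-K parameters
nd = NormalDist(mu, sigma)

# SR: N independent uniforms on (0, 1).
u_sr = [random.random() for _ in range(N)]
# LH: one uniform per stratum [i/N, (i+1)/N), then shuffle the order.
u_lh = [(i + random.random()) / N for i in range(N)]
random.shuffle(u_lh)

logk_sr = [nd.inv_cdf(u) for u in u_sr]
logk_lh = [nd.inv_cdf(u) for u in u_lh]
```

By construction every probability stratum is hit exactly once by LH, which is why it spans the conductivity range with fewer realizations than SR.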

  10. Spatial distribution of Munida intermedia and M. sarsi (crustacea: Anomura) on the Galician continental shelf (NW Spain): Application of geostatistical analysis

    Science.gov (United States)

    Freire, J.; González-Gurriarán, E.; Olaso, I.

    1992-12-01

    Geostatistical methodology was used to analyse the spatial structure and distribution of the epibenthic crustaceans Munida intermedia and M. sarsi in data sets collected during three survey cruises carried out on the Galician continental shelf (1983 and 1984). This study investigates the feasibility of using geostatistics on data collected by traditional methods and of enhancing such methodology. The experimental variograms were calculated (pooled variance minus spatial covariance between samples taken one pair at a time vs. distance) and fitted to a 'spherical' model. The spatial structure model was used to estimate the abundance and distribution of the populations studied using the technique of kriging. The species display spatial structures, which are well marked during high-density periods and in some areas (especially the northern shelf). Geostatistical analysis allows identification of density gradients in space as well as patch sizes along the continental shelf of 16-25 km diameter for M. intermedia and 12-20 km for M. sarsi. Patches of both species have a consistent location throughout the different cruises. As in other geographical areas, M. intermedia and M. sarsi usually appear at depths ranging from 200 to 500 m, with the highest densities in the continental shelf area located between Fisterra and Estaca de Bares. Although sampling was not originally designed specifically for geostatistics, this analysis provides a measurement of spatial covariance and shows variograms whose structure varies with population density and geographical area. These ideas are useful in improving the design of future sampling cruises.
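The experimental variogram computation described in this record (half the squared difference between sample pairs, averaged per distance bin) is short enough to sketch directly. The station coordinates and density values below are hypothetical.

```python
import math

def experimental_variogram(pts, vals, lag, nlags):
    """gamma(h) = average of 0.5*(z_i - z_j)^2 per distance bin of width `lag`."""
    sums = [0.0] * nlags
    counts = [0] * nlags
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            h = math.dist(pts[i], pts[j])
            k = int(h // lag)
            if k < nlags:
                sums[k] += 0.5 * (vals[i] - vals[j]) ** 2
                counts[k] += 1
    # Bins with no pairs are reported as None.
    return [(s / c if c else None) for s, c in zip(sums, counts)]

pts = [(0, 0), (5, 0), (10, 0), (0, 5), (5, 5), (20, 20)]
dens = [3.1, 3.0, 2.7, 3.3, 2.9, 1.8]
gam = experimental_variogram(pts, dens, lag=5.0, nlags=6)
```

A model (e.g. the spherical model used in the record) would then be fitted to the non-empty bins of `gam`.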

  11. Data analysis for radiological characterisation: Geostatistical and statistical complementarity

    International Nuclear Information System (INIS)

    Desnoyers, Yvon; Dubot, Didier

    2012-01-01

    Radiological characterisation may cover a large range of evaluation objectives during a decommissioning and dismantling (D and D) project: removal of doubt, delineation of contaminated materials, monitoring of the decontamination work and final survey. At each stage, collecting the data needed to draw sound conclusions is a considerable challenge. In particular, two radiological characterisation stages require an advanced sampling process and data analysis, namely the initial categorisation and optimisation of the materials to be removed and the final survey to demonstrate compliance with clearance levels. The latter is widely used and well developed in national guides and norms, using random sampling designs and statistical data analysis. By contrast, a more complex evaluation methodology has to be implemented for the initial radiological characterisation, both for sampling design and for data analysis. The geostatistical framework is an efficient way to satisfy the radiological characterisation requirements, providing a sound decision-making approach for the decommissioning and dismantling of nuclear premises. The relevance of the geostatistical methodology relies on the presence of spatial continuity in the radiological contamination. Geostatistics thus provides reliable methods for activity estimation, uncertainty quantification and risk analysis, leading to a sound classification of radiological waste (surfaces and volumes). In this way, the radiological characterisation of contaminated premises can be divided into three steps. First, the most exhaustive facility analysis provides historical and qualitative information. Then, a systematic (exhaustive or not) surface survey of the contamination is implemented on a regular grid. Finally, in order to assess activity levels and contamination depths, destructive samples are collected at several locations within the premises (based on the surface survey results) and analysed. 
Combined with

  12. Preliminary evaluation of uranium deposits. A geostatistical study of drilling density in Wyoming solution fronts

    International Nuclear Information System (INIS)

    Sandefur, R.L.; Grant, D.C.

    1976-01-01

    Studies of a roll-front uranium deposit in the Shirley Basin, Wyoming, indicate that preliminary evaluation of the reserve potential of an ore body is possible with less drilling than is currently practiced in industry. Estimating ore reserves from sparse drilling is difficult because most reserve calculation techniques do not quantify the accuracy of the estimate. A study of several deposits with a variety of drilling densities shows that geostatistics consistently provides a method of assessing the accuracy of an ore reserve estimate. Geostatistics provides the geologist with an additional descriptive technique, one which is valuable in the economic assessment of a uranium deposit. Closely spaced drilling on past properties provides both geological and geometric insight into the occurrence of uranium in roll-front type deposits. Just as the geological insight assists in locating new ore bodies and siting preferential drill locations, the geometric insight can be applied mathematically to evaluate the accuracy of a new ore reserve estimate. By expressing the geometry in numerical terms, geostatistics extracts important geological characteristics and uses this information to aid in describing the unknown characteristics of a property. (author)

  13. Space-Time Crystal and Space-Time Group.

    Science.gov (United States)

    Xu, Shenglong; Wu, Congjun

    2018-03-02

    Crystal structures and the Bloch theorem play a fundamental role in condensed matter physics. We extend the static crystal to the dynamic "space-time" crystal characterized by the general intertwined space-time periodicities in D+1 dimensions, which include both the static crystal and the Floquet crystal as special cases. A new group structure dubbed a "space-time" group is constructed to describe the discrete symmetries of a space-time crystal. Compared to space and magnetic groups, the space-time group is augmented by "time-screw" rotations and "time-glide" reflections involving fractional translations along the time direction. A complete classification of the 13 space-time groups in one-plus-one dimensions (1+1D) is performed. The Kramers-type degeneracy can arise from the glide time-reversal symmetry without the half-integer spinor structure, which constrains the winding number patterns of spectral dispersions. In 2+1D, nonsymmorphic space-time symmetries enforce spectral degeneracies, leading to protected Floquet semimetal states. We provide a general framework for further studying topological properties of the (D+1)-dimensional space-time crystal.
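The intertwined periodicities and nonsymmorphic operations described in this abstract can be written out explicitly. The following (1+1)D forms are illustrative notation consistent with the abstract's definitions (lattice constant a and time period T are generic symbols, not values from the paper):

```latex
% A space-time crystal Hamiltonian obeys intertwined space-time periodicities,
H(x, t) = H(x + a,\, t) = H(x,\, t + T),
% and may in addition be invariant under a time-glide reflection
g:\ (x, t) \mapsto (-x,\; t + T/2),
% a nonsymmorphic operation combining a spatial reflection with a
% fractional translation T/2 along the time direction; a time-screw
% rotation analogously combines a spatial rotation with a fractional
% time translation.
```

These fractional time translations are what augment the space-time group relative to ordinary space and magnetic groups.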

  14. A Bayesian geostatistical approach for evaluating the uncertainty of contaminant mass discharges from point sources

    Science.gov (United States)

    Troldborg, M.; Nowak, W.; Binning, P. J.; Bjerg, P. L.

    2012-12-01

    Estimates of mass discharge (mass/time) are increasingly being used when assessing risks of groundwater contamination and designing remedial systems at contaminated sites. Mass discharge estimates are, however, prone to rather large uncertainties as they integrate uncertain spatial distributions of both concentration and groundwater flow velocities. For risk assessments or any other decisions that are based on mass discharge estimates, it is essential to address these uncertainties. We present a novel Bayesian geostatistical approach for quantifying the uncertainty of the mass discharge across a multilevel control plane. The method decouples the flow and transport simulation and has the advantage of avoiding the heavy computational burden of three-dimensional numerical flow and transport simulation coupled with geostatistical inversion. It may therefore be of practical relevance to practitioners compared to existing methods that are either too simple or computationally demanding. The method is based on conditional geostatistical simulation and accounts for i) heterogeneity of both the flow field and the concentration distribution through Bayesian geostatistics (including the uncertainty in covariance functions), ii) measurement uncertainty, and iii) uncertain source zone geometry and transport parameters. The method generates multiple equally likely realizations of the spatial flow and concentration distribution, which all honour the measured data at the control plane. The flow realizations are generated by analytical co-simulation of the hydraulic conductivity and the hydraulic gradient across the control plane. These realizations are made consistent with measurements of both hydraulic conductivity and head at the site. An analytical macro-dispersive transport solution is employed to simulate the mean concentration distribution across the control plane, and a geostatistical model of the Box-Cox transformed concentration data is used to simulate observed
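The core Monte Carlo step behind a mass-discharge uncertainty analysis is simple to sketch: each realization supplies a flux and a concentration field on the control-plane cells, and the discharge integrates their product. The sketch below uses independent lognormal draws with invented parameters purely as stand-ins for the conditional geostatistical realizations the paper describes.

```python
import random
import statistics

random.seed(7)
CELLS = 50        # cells across the control plane
AREA = 1.0        # m^2 per cell
N_REAL = 500      # Monte Carlo realizations

totals = []
for _ in range(N_REAL):
    # Lognormal Darcy flux q (m/d) and concentration c (g/m^3) per cell;
    # a real application would draw these from conditional simulations
    # honouring the measured data at the control plane.
    q = [random.lognormvariate(-3.0, 0.5) for _ in range(CELLS)]
    c = [random.lognormvariate(1.0, 1.0) for _ in range(CELLS)]
    totals.append(sum(qi * ci * AREA for qi, ci in zip(q, c)))  # g/d

md_mean = statistics.fmean(totals)
deciles = statistics.quantiles(totals, n=10)
md_p10, md_p90 = deciles[0], deciles[-1]
```

The spread between `md_p10` and `md_p90` is the kind of uncertainty band a risk assessment would report.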

  15. Geostatistical Study of Precipitation on the Island of Crete

    Science.gov (United States)

    Agou, Vasiliki D.; Varouchakis, Emmanouil A.; Hristopulos, Dionissios T.

    2015-04-01

    precipitation which are fitted locally to a three-parameter probability distribution, based on which a normalized index is derived. We use the Spartan variogram function to model space-time correlations, because it is more flexible than classical models [3]. The performance of the variogram model is tested by means of leave-one-out cross validation. The variogram model is then used in connection with ordinary kriging to generate precipitation maps for the entire island. In the future, we will explore the joint spatiotemporal evolution of precipitation patterns on Crete. References [1] P. Goovaerts. Geostatistical approaches for incorporating elevation into the spatial interpolation of precipitation. Journal of Hydrology, 228(1):113-129, 2000. [2] N. B. Guttman. Accepting the standardized precipitation index: a calculation algorithm. American Water Resource Association, 35(2):311-322, 1999. [3] D. T Hristopulos. Spartan Gibbs random field models for geostatistical applications. SIAM Journal on Scientific Computing, 24(6):2125-2162, 2003. [4] A.G. Koutroulis, A.-E.K. Vrohidou, and I.K. Tsanis. Spatiotemporal characteristics of meteorological drought for the island of Crete. Journal of Hydrometeorology, 12(2):206-226, 2011. [5] T. B. McKee, N. J. Doesken, and J. Kleist. The relationship of drought frequency and duration to time scales. In Proceedings of the 8th Conference on Applied Climatology, page 179-184, Anaheim, California, 1993.
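The normalized index described at the start of this record maps precipitation totals to standard-normal quantiles. As a hedged illustration, the sketch below uses an empirical CDF (rank-based plotting positions) in place of the fitted three-parameter distribution the study uses, and the monthly totals are invented.

```python
from statistics import NormalDist

# Hypothetical monthly precipitation totals (mm) at one station.
monthly_mm = [12.0, 40.0, 55.0, 3.0, 80.0, 22.0, 61.0, 15.0, 33.0, 70.0]
n = len(monthly_mm)

# Empirical CDF via ranks (Weibull plotting positions r / (n + 1)),
# then transform to a standard-normal quantile.
ranks = {v: r for r, v in enumerate(sorted(monthly_mm), start=1)}
spi_like = [NormalDist().inv_cdf(ranks[v] / (n + 1)) for v in monthly_mm]
```

Negative index values mark dry months and positive values wet months, mirroring how the standardized precipitation index of Guttman [2] and McKee et al. [5] is read.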

  16. Sparse RNA folding revisited: space-efficient minimum free energy structure prediction.

    Science.gov (United States)

    Will, Sebastian; Jabbari, Hosna

    2016-01-01

    RNA secondary structure prediction by energy minimization is the central computational tool for the analysis of structural non-coding RNAs and their interactions. Sparsification has been successfully applied to improve the time efficiency of various structure prediction algorithms while guaranteeing the same result; however, for many such folding problems, space efficiency is of even greater concern, particularly for long RNA sequences. So far, space-efficient sparsified RNA folding with fold reconstruction was solved only for simple base-pair-based pseudo-energy models. Here, we revisit the problem of space-efficient free energy minimization. Whereas the space-efficient minimization of the free energy has been sketched before, the reconstruction of the optimum structure has not even been discussed. We show that this reconstruction is not possible in a trivial extension of the method for simple energy models. We then present the time- and space-efficient sparsified free energy minimization algorithm SparseMFEFold that guarantees MFE structure prediction. In particular, this novel algorithm provides efficient fold reconstruction based on dynamically garbage-collected trace arrows. The complexity of our algorithm depends on two parameters, the number of candidates Z and the number of trace arrows T; both are bounded by [Formula: see text], but are typically much smaller. The time complexity of RNA folding is reduced from [Formula: see text] to [Formula: see text]; the space complexity, from [Formula: see text] to [Formula: see text]. Our empirical results show more than 80% space savings over RNAfold [Vienna RNA package] on the long RNAs from the RNA STRAND database (≥2500 bases). The presented technique is designed to be generalizable to complex prediction algorithms; due to their high space demands, algorithms like pseudoknot prediction and RNA-RNA-interaction prediction are expected to profit even more strongly than "standard" MFE folding. SparseMFEFold is free
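For orientation, the "simple base-pair-based pseudo-energy models" this record mentions are exemplified by the classic Nussinov base-pair-maximization recurrence; the sketch below implements that simple model, not SparseMFEFold's full MFE energy model or its sparsified candidate/trace-arrow machinery.

```python
# Watson-Crick plus wobble pairs.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov(seq, min_loop=3):
    """Maximum number of nested base pairs in `seq` (hairpin loops >= min_loop)."""
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = N[i][j - 1]                    # j left unpaired
            for k in range(i, j - min_loop):      # j pairs with k
                if (seq[k], seq[j]) in PAIRS:
                    left = N[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + N[k + 1][j - 1])
            N[i][j] = best
    return N[0][n - 1] if n else 0

best_pairs = nussinov("GGGAAACCC")   # the three G-C stem pairs
```

The O(n^2) table `N` is exactly the space cost that sparsification attacks: most cells never enter an optimal trace, and candidate lists plus garbage-collected trace arrows let the full table be discarded.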

  17. Industrial experience feedback of a geostatistical estimation of contaminated soil volumes - 59181

    International Nuclear Information System (INIS)

    Faucheux, Claire; Jeannee, Nicolas

    2012-01-01

    Geostatistics is attracting growing interest for remediation forecasting at potentially contaminated sites, by providing adapted methods to perform both chemical and radiological pollution mapping, to estimate contaminated volumes, potentially integrating auxiliary information, and to set up adaptive sampling strategies. As part of demonstration studies carried out for GeoSiPol (Geostatistics for Polluted Sites), geostatistics has been applied to the detailed diagnosis of a former oil depot in France. The ability within the geostatistical framework to generate pessimistic / probable / optimistic scenarios for the contaminated volumes allows a quantification of the risks associated with the remediation process: e.g. the financial risk of excavating clean soils, or the sanitary risk of leaving contaminated soils in place. After a first mapping, an iterative approach leads to collecting additional samples in areas previously identified as highly uncertain. Estimated volumes are then updated and compared to the volumes actually excavated. This benchmarking therefore provides practical feedback on the performance of the geostatistical methodology. (authors)
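The pessimistic / probable / optimistic volume scenarios mentioned here are simply quantiles of the contaminated volume over an ensemble of realizations. In the sketch below the "realizations" are independent lognormal concentration fields with invented parameters, standing in for the conditional geostatistical simulations an actual study would use.

```python
import random
import statistics

random.seed(3)
N_REAL, N_CELLS = 200, 400
CELL_M3, THRESH = 2.0, 100.0   # cell volume (m^3) and clean-up level (mg/kg)

volumes = []
for _ in range(N_REAL):
    conc = [random.lognormvariate(4.0, 1.2) for _ in range(N_CELLS)]
    # Volume of cells exceeding the clean-up level in this realization.
    volumes.append(CELL_M3 * sum(c > THRESH for c in conc))

q = statistics.quantiles(volumes, n=20)       # 5% steps
optimistic, pessimistic = q[0], q[-1]         # 5th / 95th percentiles
probable = statistics.median(volumes)
```

Comparing `pessimistic` against the volume actually excavated is the benchmarking step the record describes.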

  18. Modelling Geomechanical Heterogeneity of Rock Masses Using Direct and Indirect Geostatistical Conditional Simulation Methods

    Science.gov (United States)

    Eivazy, Hesameddin; Esmaieli, Kamran; Jean, Raynald

    2017-12-01

    An accurate characterization and modelling of rock mass geomechanical heterogeneity can lead to more efficient mine planning and design. Using deterministic approaches and random field methods for modelling rock mass heterogeneity is known to be limited in simulating the spatial variation and spatial pattern of the geomechanical properties. Although the applications of geostatistical techniques have demonstrated improvements in modelling the heterogeneity of geomechanical properties, geostatistical estimation methods such as Kriging result in estimates of geomechanical variables that are not fully representative of field observations. This paper reports on the development of 3D models for spatial variability of rock mass geomechanical properties using geostatistical conditional simulation method based on sequential Gaussian simulation. A methodology to simulate the heterogeneity of rock mass quality based on the rock mass rating is proposed and applied to a large open-pit mine in Canada. Using geomechanical core logging data collected from the mine site, a direct and an indirect approach were used to model the spatial variability of rock mass quality. The results of the two modelling approaches were validated against collected field data. The study aims to quantify the risks of pit slope failure and provides a measure of uncertainties in spatial variability of rock mass properties in different areas of the pit.

  19. Bayesian Analysis of Geostatistical Models With an Auxiliary Lattice

    KAUST Repository

    Park, Jincheol

    2012-04-01

    The Gaussian geostatistical model has been widely used for modeling spatial data. However, this model suffers from a severe difficulty in computation: it requires users to invert a large covariance matrix. This is infeasible when the number of observations is large. In this article, we propose an auxiliary lattice-based approach for tackling this difficulty. By introducing an auxiliary lattice to the space of observations and defining a Gaussian Markov random field on the auxiliary lattice, our model completely avoids the requirement of matrix inversion. It is remarkable that the computational complexity of our method is only O(n), where n is the number of observations. Hence, our method can be applied to very large datasets with reasonable computational (CPU) times. The numerical results indicate that our model can approximate Gaussian random fields very well in terms of predictions, even for those with long correlation lengths. For real data examples, our model can generally outperform conventional Gaussian random field models in both prediction errors and CPU times. Supplemental materials for the article are available online. © 2012 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.

  20. Use of geostatistics for remediation planning to transcend urban political boundaries

    International Nuclear Information System (INIS)

    Milillo, Tammy M.; Sinha, Gaurav; Gardella, Joseph A.

    2012-01-01

    Soil remediation plans are often dictated by areas of jurisdiction or property lines instead of scientific information. This study exemplifies how geostatistically interpolated surfaces can substantially improve remediation planning. Ordinary kriging, ordinary co-kriging, and inverse distance weighting spatial interpolation methods were compared for analyzing surface and sub-surface soil sample data originally collected by the US EPA and researchers at the University at Buffalo in Hickory Woods, an industrial–residential neighborhood in Buffalo, NY, where both lead and arsenic contamination are present. Past clean-up efforts estimated contamination levels from point samples, but parcel and agency jurisdiction boundaries were used to define remediation sites, rather than geostatistical models estimating the spatial behavior of the contaminants in the soil. Residents were understandably dissatisfied with the arbitrariness of the remediation plan. In this study we show how geostatistical mapping and participatory assessment can make soil remediation scientifically defensible, socially acceptable, and economically feasible. - Highlights: ► Point samples and property boundaries do not appropriately determine the extent of soil contamination. ► Kriging and co-kriging provide the best concentration estimates for mapping soil contamination and refining clean-up sites. ► Maps provide a visual representation of geostatistical results to communities to aid in decision making. ► Incorporating community input into the assessment of neighborhoods is good public policy practice. - Using geostatistical interpolation and mapping results to involve the affected community can substantially improve remediation planning and promote its long-term effectiveness.
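Of the three interpolators compared in this record, inverse distance weighting is the simplest to write down. The sketch below implements plain IDW; the sample coordinates and lead concentrations are invented for illustration.

```python
import math

def idw(pts, vals, target, power=2.0):
    """Inverse-distance-weighted estimate at `target` from (x, y) samples."""
    num = den = 0.0
    for p, v in zip(pts, vals):
        d = math.dist(p, target)
        if d == 0.0:
            return v               # exact at a sampled location
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# Hypothetical sample locations (m) and lead concentrations (ppm).
pts = [(0, 0), (30, 0), (0, 30), (30, 30)]
pb_ppm = [400.0, 150.0, 220.0, 90.0]
est = idw(pts, pb_ppm, (15, 15))
```

Because IDW is a convex combination of the data, estimates never leave the data range; at the equidistant center above, `est` is just the arithmetic mean, 215 ppm. Kriging, by contrast, weights the data through a fitted variogram rather than raw distance.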

  1. Radiation hardened high efficiency silicon space solar cell

    International Nuclear Information System (INIS)

    Garboushian, V.; Yoon, S.; Turner, J.

    1993-01-01

    A silicon solar cell with 19% AM0 beginning-of-life (BOL) efficiency is reported. The cell has demonstrated equal or better radiation resistance when compared to conventional silicon space solar cells, whose performance is generally ∼14% at BOL. The Radiation Hardened High Efficiency Silicon (RHHES) cell is thinned for high specific power (watts/kilogram). The RHHES space cell is compatible with automatic surface-mounting technology, and the cells can be easily combined to provide desired power levels and voltages. The RHHES space cell is also more resistant to mechanical damage from micrometeorites: micrometeorites that impinge upon conventional cells can crack the cell, which in turn may cause string failure, whereas the RHHES, operating in the same environment, can continue to function with a similar crack. The RHHES cell allows for very efficient thermal management, which is essential for space cells generating higher specific power levels, and it eliminates the need for electrical insulation layers that would otherwise increase the thermal resistance of conventional space panels. The RHHES cell can be applied to a space concentrator panel system without abandoning any of the attributes discussed. The power handling capability of the RHHES cell is approximately five times that of conventional space concentrator solar cells.

  2. A geostatistical methodology to assess the accuracy of unsaturated flow models

    Energy Technology Data Exchange (ETDEWEB)

    Smoot, J.L.; Williams, R.E.

    1996-04-01

    The Pacific Northwest National Laboratory (PNNL) has developed a Hydrologic Evaluation Methodology (HEM) to assist the U.S. Nuclear Regulatory Commission in evaluating the potential that infiltrating meteoric water will produce leachate at commercial low-level radioactive waste disposal sites. Two key issues are raised in the HEM: (1) evaluation of mathematical models that predict facility performance, and (2) estimation of the uncertainty associated with these mathematical model predictions. The technical objective of this research is to adapt geostatistical tools commonly used for model parameter estimation to the problem of estimating the spatial distribution of the dependent variable to be calculated by the model. To fulfill this objective, a database describing the spatiotemporal movement of water injected into unsaturated sediments at the Hanford Site in Washington State was used to develop a new method for evaluating mathematical model predictions. Measured water content data were interpolated geostatistically to a 16 x 16 x 36 grid at several time intervals. Then a mathematical model was used to predict water content at the same grid locations at the selected times. Node-by-node comparison of the mathematical model predictions with the geostatistically interpolated values was conducted. The method facilitates a complete accounting and categorization of model error at every node. The comparison suggests that model results generally are within measurement error. The worst model error occurs in silt lenses and is in excess of measurement error.

  3. A geostatistical methodology to assess the accuracy of unsaturated flow models

    International Nuclear Information System (INIS)

    Smoot, J.L.; Williams, R.E.

    1996-04-01

    The Pacific Northwest National Laboratory (PNNL) has developed a Hydrologic Evaluation Methodology (HEM) to assist the U.S. Nuclear Regulatory Commission in evaluating the potential that infiltrating meteoric water will produce leachate at commercial low-level radioactive waste disposal sites. Two key issues are raised in the HEM: (1) evaluation of mathematical models that predict facility performance, and (2) estimation of the uncertainty associated with these mathematical model predictions. The technical objective of this research is to adapt geostatistical tools commonly used for model parameter estimation to the problem of estimating the spatial distribution of the dependent variable to be calculated by the model. To fulfill this objective, a database describing the spatiotemporal movement of water injected into unsaturated sediments at the Hanford Site in Washington State was used to develop a new method for evaluating mathematical model predictions. Measured water content data were interpolated geostatistically to a 16 x 16 x 36 grid at several time intervals. Then a mathematical model was used to predict water content at the same grid locations at the selected times. Node-by-node comparison of the mathematical model predictions with the geostatistically interpolated values was conducted. The method facilitates a complete accounting and categorization of model error at every node. The comparison suggests that model results generally are within measurement error. The worst model error occurs in silt lenses and is in excess of measurement error.

  4. Confronting uncertainty in model-based geostatistics using Markov Chain Monte Carlo simulation

    NARCIS (Netherlands)

    Minasny, B.; Vrugt, J.A.; McBratney, A.B.

    2011-01-01

    This paper demonstrates for the first time the use of Markov Chain Monte Carlo (MCMC) simulation for parameter inference in model-based soil geostatistics. We implemented the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to jointly summarize the posterior
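DREAM is an adaptive multi-chain sampler; as a hedged stand-in for the general MCMC idea in this record, the sketch below runs a plain single-chain random-walk Metropolis sampler inferring the mean of a Gaussian likelihood from made-up "soil property" data (this is not the DREAM algorithm itself).

```python
import math
import random

random.seed(11)
data = [4.1, 3.8, 4.4, 4.0, 3.9, 4.3]   # hypothetical soil measurements
SIGMA = 0.3                              # assumed known measurement sd

def log_post(mu):
    # Flat prior; Gaussian log-likelihood up to an additive constant.
    return -sum((y - mu) ** 2 for y in data) / (2 * SIGMA ** 2)

mu, chain = 0.0, []
for _ in range(5000):
    prop = mu + random.gauss(0.0, 0.2)           # random-walk proposal
    # Metropolis acceptance: accept with prob min(1, posterior ratio).
    if math.log(random.random()) < log_post(prop) - log_post(mu):
        mu = prop
    chain.append(mu)

post = chain[1000:]                # discard burn-in
mu_hat = sum(post) / len(post)     # posterior-mean estimate
```

DREAM improves on this by evolving multiple chains in parallel and adapting proposals from differences between chains, which is what makes it effective for the correlated variogram parameters of model-based geostatistics.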

  5. Geostatistics and GIS: tools for characterizing environmental contamination.

    Science.gov (United States)

    Henshaw, Shannon L; Curriero, Frank C; Shields, Timothy M; Glass, Gregory E; Strickland, Paul T; Breysse, Patrick N

    2004-08-01

    Geostatistics is a set of statistical techniques used in the analysis of georeferenced data that can be applied to environmental contamination and remediation studies. In this study, the 1,1-dichloro-2,2-bis(p-chlorophenyl)ethylene (DDE) contamination at a Superfund site in western Maryland is evaluated. Concern about the site and its future clean-up has triggered interest within the community because residential development surrounds the area. Spatial statistical methods, of which geostatistics is a subset, are becoming increasingly popular, in part due to the availability of geographic information system (GIS) software in a variety of application packages. In this article, the joint use of ArcGIS software and the R statistical computing environment is demonstrated as an approach for comprehensive geostatistical analyses. The spatial regression method kriging is used to provide predictions of DDE levels at unsampled locations both within the site and in the surrounding areas where residential development is ongoing.

  6. Adaptive geostatistical sampling enables efficient identification of malaria hotspots in repeated cross-sectional surveys in rural Malawi.

    Directory of Open Access Journals (Sweden)

    Alinune N Kabaghe

    Full Text Available In the context of malaria elimination, interventions will need to target high burden areas to further reduce transmission. Current tools to monitor and report disease burden lack the capacity to continuously detect fine-scale spatial and temporal variations of disease distribution exhibited by malaria. These tools use random sampling techniques that are inefficient for capturing underlying heterogeneity while health facility data in resource-limited settings are inaccurate. Continuous community surveys of malaria burden provide real-time results of local spatio-temporal variation. Adaptive geostatistical design (AGD improves prediction of outcome of interest compared to current random sampling techniques. We present findings of continuous malaria prevalence surveys using an adaptive sampling design.We conducted repeated cross sectional surveys guided by an adaptive sampling design to monitor the prevalence of malaria parasitaemia and anaemia in children below five years old in the communities living around Majete Wildlife Reserve in Chikwawa district, Southern Malawi. AGD sampling uses previously collected data to sample new locations of high prediction variance or, where prediction exceeds a set threshold. We fitted a geostatistical model to predict malaria prevalence in the area.We conducted five rounds of sampling, and tested 876 children aged 6-59 months from 1377 households over a 12-month period. Malaria prevalence prediction maps showed spatial heterogeneity and presence of hotspots-where predicted malaria prevalence was above 30%; predictors of malaria included age, socio-economic status and ownership of insecticide-treated mosquito nets.Continuous malaria prevalence surveys using adaptive sampling increased malaria prevalence prediction accuracy. Results from the surveys were readily available after data collection. 
The tool can assist local managers to target malaria control interventions in areas with the greatest health impact and is
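The adaptive step described above (sampling new locations of high prediction variance) can be sketched as a toy in Python; this is not the authors' code, and the exponential covariance model, its parameters, and all coordinates are assumptions for illustration only.

```python
import numpy as np

# Illustrative sketch of adaptive geostatistical design (AGD): pick the
# next survey location where the kriging prediction variance is highest.
# The exponential covariance, its parameters, and the coordinates are
# assumptions for the example, not values from the study.

def exp_cov(d, sill=1.0, corr_len=3.0):
    """Exponential covariance model C(d) = sill * exp(-d / corr_len)."""
    return sill * np.exp(-d / corr_len)

def prediction_variance(sampled, candidates, sill=1.0, corr_len=3.0):
    """Simple-kriging variance at each candidate, given sampled sites."""
    d_ss = np.linalg.norm(sampled[:, None] - sampled[None, :], axis=-1)
    K = exp_cov(d_ss, sill, corr_len) + 1e-9 * np.eye(len(sampled))
    d_cs = np.linalg.norm(candidates[:, None] - sampled[None, :], axis=-1)
    k = exp_cov(d_cs, sill, corr_len)       # candidate-to-sample covariances
    w = np.linalg.solve(K, k.T).T           # kriging weights per candidate
    return sill - np.sum(w * k, axis=1)     # sigma^2 = C(0) - k' K^-1 k

gen = np.random.default_rng(0)
sampled = gen.uniform(0, 10, size=(15, 2))      # already-surveyed locations
candidates = gen.uniform(0, 10, size=(200, 2))  # potential new locations

var = prediction_variance(sampled, candidates)
next_site = candidates[np.argmax(var)]          # adaptive design choice
print(next_site)
```

In each survey round the newly sampled site shrinks the variance in its neighbourhood, so subsequent rounds are steered toward poorly characterized (or high-prevalence) areas.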

  7. Soil moisture estimation by assimilating L-band microwave brightness temperature with geostatistics and observation localization.

    Directory of Open Access Journals (Sweden)

    Xujun Han

    Full Text Available Observations can be used to reduce model uncertainties through data assimilation. If the observations cannot cover the whole model area, due to spatial availability or instrument limitations, how should data assimilation be performed at locations not covered by observations? Two commonly used strategies are first described: one is covariance localization (CL); the other is observation localization (OL). Compared with CL, OL is easy to parallelize and more efficient for large-scale analysis. This paper evaluated OL in soil moisture profile characterization, in which a geostatistical semivariogram was used to fit the spatially correlated characteristics of synthetic L-band microwave brightness temperature measurements. The fitted semivariogram model and the local ensemble transform Kalman filter algorithm are combined to weight and assimilate the observations within a local region surrounding each grid cell of the land surface model to be analyzed. Six scenarios were compared: 1_Obs with the one nearest observation assimilated, 5_Obs with no more than five nearest local observations assimilated, and 9_Obs with no more than nine nearest local observations assimilated. Scenarios with no more than 16, 25, and 36 local observations were also compared. The results show that assimilating more local observations improves the estimates, up to a limit of nine observations in this case. This study demonstrates the potential of geostatistical correlation representation in OL to improve data assimilation of catchment-scale soil moisture using synthetic L-band microwave brightness temperature, which cannot cover the study area fully in space due to vegetation effects.

  8. Soil moisture estimation by assimilating L-band microwave brightness temperature with geostatistics and observation localization.

    Science.gov (United States)

    Han, Xujun; Li, Xin; Rigon, Riccardo; Jin, Rui; Endrizzi, Stefano

    2015-01-01

    Observations can be used to reduce model uncertainties through data assimilation. If the observations cannot cover the whole model area, due to spatial availability or instrument limitations, how should data assimilation be performed at locations not covered by observations? Two commonly used strategies are first described: one is covariance localization (CL); the other is observation localization (OL). Compared with CL, OL is easy to parallelize and more efficient for large-scale analysis. This paper evaluated OL in soil moisture profile characterization, in which a geostatistical semivariogram was used to fit the spatially correlated characteristics of synthetic L-band microwave brightness temperature measurements. The fitted semivariogram model and the local ensemble transform Kalman filter algorithm are combined to weight and assimilate the observations within a local region surrounding each grid cell of the land surface model to be analyzed. Six scenarios were compared: 1_Obs with the one nearest observation assimilated, 5_Obs with no more than five nearest local observations assimilated, and 9_Obs with no more than nine nearest local observations assimilated. Scenarios with no more than 16, 25, and 36 local observations were also compared. The results show that assimilating more local observations improves the estimates, up to a limit of nine observations in this case. This study demonstrates the potential of geostatistical correlation representation in OL to improve data assimilation of catchment-scale soil moisture using synthetic L-band microwave brightness temperature, which cannot cover the study area fully in space due to vegetation effects.
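A rough illustration of the observation localization idea (not the paper's implementation): select the nearest local observations around a grid cell and weight them by the spatial correlation implied by a fitted exponential semivariogram. The semivariogram parameters and coordinates below are assumptions.

```python
import numpy as np

# Rough illustration of observation localization (OL), not the paper's
# implementation: select the nearest local observations around a grid
# cell and weight them by the correlation implied by a fitted exponential
# semivariogram. Parameters and coordinates are assumptions.

def exp_semivariogram(d, nugget=0.05, sill=1.0, corr_range=20.0):
    """gamma(d) = nugget + (sill - nugget) * (1 - exp(-3d / range))."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * d / corr_range))

def local_obs_weights(cell_xy, obs_xy, n_local=9, sill=1.0):
    """At most n_local nearest observations, semivariogram-weighted."""
    d = np.linalg.norm(obs_xy - cell_xy, axis=1)
    idx = np.argsort(d)[:n_local]                    # nearest first
    corr = 1.0 - exp_semivariogram(d[idx]) / sill    # correlation proxy
    w = np.clip(corr, 0.0, None)
    return idx, w / w.sum()                          # normalised weights

obs_xy = np.random.default_rng(1).uniform(0, 100, size=(50, 2))
idx, w = local_obs_weights(np.array([50.0, 50.0]), obs_xy, n_local=9)
print(idx, w)
```

With `n_local=9` this mirrors the 9_Obs scenario; changing `n_local` reproduces the other scenarios in the comparison.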

  9. Geostatistics for radiological characterization: overview and application cases

    International Nuclear Information System (INIS)

    Desnoyers, Yvon

    2016-01-01

    The objective of radiological characterization is to find a suitable balance between gathering data (constrained by cost, deadlines, accessibility or radiation) and managing the issues (waste volumes, levels of activity or exposure). It is necessary to have enough information to have confidence in the results without multiplying useless data. Geostatistical processing of data considers all available pieces of information: historical data, non-destructive measurements and laboratory analyses of samples. The spatial structure modelling is then used to produce maps and to estimate the extent of radioactive contamination (surface and depth). Quantifications of local and global uncertainties are powerful decision-making tools for better management of remediation projects at contaminated sites, and for decontamination and dismantling projects at nuclear facilities. They can be used to identify hot spots, estimate contamination of surfaces and volumes, classify radioactive waste according to thresholds, estimate source terms, and so on. The spatial structure of radioactive contamination makes the optimization of sampling (number and position of data points) particularly important. Geostatistical methodology can help determine the initial mesh size and reduce estimation uncertainties. Several case studies are presented to illustrate why and how geostatistics can be applied to a range of radiological characterization problems, where the investigated units can represent very small areas (a few m² or a few m³) or very large sites (at a country scale). The focus is then put on the experience gained over the years in the use of geostatistics and sampling optimization. (author)

  10. Geostatistical regularization operators for geophysical inverse problems on irregular meshes

    Science.gov (United States)

    Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. OA

    2018-05-01

    Irregular meshes make it possible to include complicated subsurface structures in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are defined using only the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and makes it possible to incorporate information about geological structures. We propose an approach to calculate geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D synthetic surface electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D synthetic cross-well ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good agreement with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results than the anisotropic smoothness constraints.
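A minimal numerical sketch of the central idea, under assumed parameters (random cell centres standing in for an irregular mesh, and an exponential correlation model): build the a priori covariance matrix C, eigendecompose it, and form an operator W with Wᵀ W = C⁻¹, so the roughness penalty ‖W m‖² penalizes models inconsistent with the correlation model.

```python
import numpy as np

# Minimal sketch of a geostatistical regularization operator, under
# assumed parameters: random cell centres stand in for an irregular
# mesh, an exponential correlation model supplies the a priori
# covariance C, and eigendecomposition yields W = C^(-1/2), so that
# the roughness penalty ||W m||^2 equals m' C^-1 m.

gen = np.random.default_rng(2)
centres = gen.uniform(0, 1, size=(40, 2))        # irregular cell centres

d = np.linalg.norm(centres[:, None] - centres[None, :], axis=-1)
C = np.exp(-d / 0.3)                             # a priori correlation

eigval, eigvec = np.linalg.eigh(C)               # C is symmetric
eigval = np.clip(eigval, 1e-10, None)            # guard tiny negatives
W = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T  # W' W = C^-1

m = gen.normal(size=40)                          # a model vector
penalty = float(np.sum((W @ m) ** 2))            # geostatistical penalty
print(penalty)
```

The eigendecomposition is the expensive step; the paper's contribution is making its computation tractable for 3-D meshes, which this toy does not address.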

  11. Improved Assimilation of Streamflow and Satellite Soil Moisture with the Evolutionary Particle Filter and Geostatistical Modeling

    Science.gov (United States)

    Yan, Hongxiang; Moradkhani, Hamid; Abbaszadeh, Peyman

    2017-04-01

    Assimilation of satellite soil moisture and streamflow data into hydrologic models has received increasing attention over the past few years. Currently, these observations are increasingly used to improve model streamflow and soil moisture predictions. However, the performance of this land data assimilation (DA) system still suffers from two limitations: 1) satellite data scarcity and quality; and 2) particle weight degeneration. In order to overcome these two limitations, we propose two possible solutions in this study. First, a general Gaussian geostatistical approach is proposed to overcome the limited space/time resolution of satellite soil moisture products, thus improving their accuracy at uncovered/biased grid cells. Second, an evolutionary particle filter approach based on a Genetic Algorithm (GA) and Markov Chain Monte Carlo (MCMC), the so-called EPF-MCMC, is developed to reduce weight degeneration and improve the robustness of the land DA system. This study provides a detailed analysis of the joint and separate assimilation of streamflow and satellite soil moisture into a distributed Sacramento Soil Moisture Accounting (SAC-SMA) model, using the recently developed EPF-MCMC and the general Gaussian geostatistical approach. Performance is assessed over several basins in the USA selected from the Model Parameter Estimation Experiment (MOPEX) and located in different climate regions. The results indicate that: 1) the general Gaussian approach can predict the soil moisture at uncovered grid cells within the expected satellite data quality threshold; 2) assimilation of satellite soil moisture inferred from the general Gaussian model can significantly improve the soil moisture predictions; and 3) in terms of both deterministic and probabilistic measures, the EPF-MCMC can achieve better streamflow predictions. These results recommend that the geostatistical model is a helpful tool to aid the remote sensing technique and the EPF-MCMC is a

  12. Space-Time Chip Equalization for Maximum Diversity Space-Time Block Coded DS-CDMA Downlink Transmission

    NARCIS (Netherlands)

    Leus, G.; Petré, F.; Moonen, M.

    2004-01-01

    In the downlink of DS-CDMA, frequency-selectivity destroys the orthogonality of the user signals and introduces multiuser interference (MUI). Space-time chip equalization is an efficient tool to restore the orthogonality of the user signals and suppress the MUI. Furthermore, multiple-input

  13. Multivariate analysis and geostatistics of the fertility of a humic rhodic hapludox under coffee cultivation

    Directory of Open Access Journals (Sweden)

    Samuel de Assis Silva

    2012-04-01

    Full Text Available The spatial variability of soil and plant properties exerts great influence on the yield of agricultural crops. This study analyzed the spatial variability of the fertility of a Humic Rhodic Hapludox under Arabica coffee, using principal component analysis, cluster analysis and geostatistics in combination. The experiment was carried out in an area under Coffea arabica L., variety Catucai 20/15 - 479. The soil was sampled at a depth of 0.20 m, at 50 points of a sampling grid. The following chemical properties were determined: P, K+, Ca2+, Mg2+, Na+, S, Al3+, pH, H + Al, SB, t, T, V, m, OM, Na saturation index (SSI), remaining phosphorus (P-rem), and micronutrients (Zn, Fe, Mn, Cu and B). The data were analyzed with descriptive statistics, followed by principal component and cluster analyses. Geostatistics was used to check and quantify the degree of spatial dependence of the properties represented by the principal components. The principal component analysis allowed a dimensional reduction of the problem, providing interpretable components with little information loss. Despite the information loss characteristic of principal component analysis, the combination of this technique with geostatistical analysis was efficient for quantifying and determining the structure of spatial dependence of soil fertility. In general, the availability of soil mineral nutrients was low and the levels of acidity and exchangeable Al were high.
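A hedged sketch of that workflow on synthetic data (the soil properties, coordinates, and lag bins below are invented): standardize the variables, take the first principal component via SVD, and compute its experimental variogram to check for spatial dependence.

```python
import numpy as np

# Hedged sketch of the paper's workflow on synthetic data: reduce
# correlated soil-fertility variables with PCA, then compute an
# experimental variogram of the first component to check spatial
# dependence. All data below are invented.

gen = np.random.default_rng(6)
xy = gen.uniform(0, 100, size=(50, 2))               # 50 sampling points
X = gen.normal(size=(50, 8))                         # 8 soil properties
X = (X - X.mean(0)) / X.std(0)                       # standardise

U, s, Vt = np.linalg.svd(X, full_matrices=False)     # PCA via SVD
pc1 = X @ Vt[0]                                      # scores on PC1

def exp_variogram(xy, z, lags):
    """Experimental semivariogram gamma(h) in distance bins."""
    d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    gam = []
    for lo, hi in zip(lags[:-1], lags[1:]):
        msk = (d >= lo) & (d < hi) & (d > 0)         # pairs in this bin
        gam.append(sq[msk].mean() if msk.any() else np.nan)
    return np.array(gam)

gam = exp_variogram(xy, pc1, np.linspace(0, 60, 7))
print(gam)
```

A rising then flattening `gam` would indicate spatial dependence of the component, which could then be modeled and kriged as in the study.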

  14. 3D vadose zone modeling using geostatistical inferences

    International Nuclear Information System (INIS)

    Knutson, C.F.; Lee, C.B.

    1991-01-01

    In developing a 3D model of the 600-ft-thick interbedded basalt and sediment complex that constitutes the vadose zone at the Radioactive Waste Management Complex (RWMC) at the Idaho National Engineering Laboratory (INEL), geostatistical data were captured for 12-15 parameters (e.g., permeability, porosity, and saturation, as well as flow height, flow width, and flow internal zonation). This two-scale data set was generated from studies of subsurface core and geophysical log suites at RWMC and from surface outcrop exposures located at the Box Canyon of the Big Lost River and at the Hell's Half Acre lava field, all in the general RWMC area. Based on these currently available data, it is possible to build a 3D stochastic model that utilizes: cumulative distribution functions obtained from the geostatistical data; backstripping and rebuilding of stratigraphic units; and an "expert" system that incorporates rules based on expert geologic analysis and experimentally derived geostatistics, providing: (a) a structural and isopach map of each layer, (b) a realization of the flow geometry of each basalt flow unit, and (c) a realization of the internal flow parameters (e.g., permeability, porosity, and saturation) for each flow. 10 refs., 4 figs., 1 tab

  15. Comparative study of the geostatistical ore reserve estimation method over the conventional methods

    International Nuclear Information System (INIS)

    Kim, Y.C.; Knudsen, H.P.

    1975-01-01

    Part I contains a comprehensive treatment of the comparative study of the geostatistical ore reserve estimation method against the conventional methods. The conventional methods chosen for comparison were: (a) the polygon method, (b) the inverse of the distance squared method, and (c) a method similar to (b) but allowing different weights in different directions. Briefly, the overall result of this comparative study favors the use of geostatistics in most cases, because the method has lived up to its theoretical claims. A good exposition of the theory of geostatistics, the adopted study procedures, conclusions and recommended future research are given in Part I. Part II of this report contains the results of the second and third study objectives, which are to assess the potential benefits of introducing the geostatistical method into the current state of the art in uranium reserve estimation, and to help generate acceptance of the new method by practitioners through illustrative examples, given its superiority and practicality. These are given in the form of illustrative examples on the use of geostatistics and the accompanying computer program user's guide.
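Conventional method (b) is easy to state compactly; the sketch below (with hypothetical coordinates and grades) estimates a point grade as the inverse-distance-squared weighted average of surrounding samples.

```python
import numpy as np

# Sketch of conventional method (b), inverse distance squared: the
# estimate at a target point is the 1/d^2-weighted average of sample
# grades. Coordinates and grades here are hypothetical.

def idw2(target, xy, grades, eps=1e-12):
    d2 = np.sum((xy - target) ** 2, axis=1)     # squared distances
    if np.any(d2 < eps):                        # target is at a sample
        return float(grades[np.argmin(d2)])
    w = 1.0 / d2                                # inverse-distance-squared
    return float(np.sum(w * grades) / np.sum(w))

xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
grades = np.array([0.12, 0.20, 0.08, 0.16])     # hypothetical grades

est = idw2(np.array([5.0, 5.0]), xy, grades)    # equidistant -> plain mean
print(est)
```

Unlike kriging, the weights here depend only on geometry, not on the spatial correlation structure of the grades, which is the core of the comparison made in the report.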

  16. An Efficient Explicit-time Description Method for Timed Model Checking

    Directory of Open Access Journals (Sweden)

    Hao Wang

    2009-12-01

    Full Text Available Timed model checking, the method to formally verify real-time systems, is attracting increasing attention from both the model checking community and the real-time community. Explicit-time description methods verify real-time systems using general model constructs found in standard un-timed model checkers. Lamport proposed an explicit-time description method using a clock-ticking process (Tick) to simulate the passage of time, together with a group of global variables to model time requirements. Two methods, the Sync-based Explicit-time Description Method using rendezvous synchronization steps, and the Semaphore-based Explicit-time Description Method using only one global variable, were proposed; both achieve better modularity than Lamport's method in modeling real-time systems. In contrast to timed-automata-based model checkers like UPPAAL, explicit-time description methods can access and store the current time instant for future calculations, which is necessary for many real-time systems, especially those with pre-emptive scheduling. However, the Tick process in the above three methods increments the time by one unit in each tick; the state spaces therefore grow relatively fast as the time parameters increase, a problem when the system's time period is relatively long. In this paper, we propose a more efficient method which enables the Tick process to leap multiple time units in one tick. Preliminary experimental results in a high-performance computing environment show that this new method significantly reduces the state space and improves both time and memory efficiency.
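The leaping-Tick idea can be illustrated in a few lines of plain Python (the actual work targets explicit-time models in standard model checkers; this standalone toy is only an analogy): rather than advancing time one unit per tick, the clock leaps directly to the next pending deadline, so far fewer states are visited.

```python
# Toy analogy (not model-checker code) for the leaping-Tick idea:
# instead of incrementing time by one unit per tick, leap directly to
# the next timer deadline, shrinking the explored state space.

def leaping_tick(now, timers):
    """Advance the clock to the nearest pending deadline in one step."""
    pending = [t for t in timers if t > now]
    return min(pending) if pending else now

now = 0
deadlines = [7, 12, 30]      # hypothetical timer deadlines
trace = []
while True:
    nxt = leaping_tick(now, deadlines)
    if nxt == now:           # no pending deadlines left
        break
    now = nxt
    trace.append(now)
print(trace)                 # → [7, 12, 30]
```

A unit-increment Tick would visit 30 clock states here; the leaping Tick visits 3, which is the source of the reported state-space reduction.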

  17. Bridges between multiple-point geostatistics and texture synthesis: Review and guidelines for future research

    Science.gov (United States)

    Mariethoz, Gregoire; Lefebvre, Sylvain

    2014-05-01

    Multiple-Point Simulations (MPS) is a family of geostatistical tools that has received a lot of attention in recent years for the characterization of spatial phenomena in geosciences. It relies on the definition of training images to represent a given type of spatial variability, or texture. We show that the algorithmic tools used are similar in many ways to techniques developed in computer graphics, where there is a need to generate large amounts of realistic textures for applications such as video games and animated movies. Similarly to MPS, these texture synthesis methods use training images, or exemplars, to generate realistic-looking graphical textures. The domains of multiple-point geostatistics and example-based texture synthesis present similarities in their historical development and share similar concepts. These disciplines have, however, remained separate, and as a result significant algorithmic innovations in each discipline have not been universally adopted. Texture synthesis algorithms offer drastically better computational efficiency, pattern reproduction and user control. At the same time, MPS has developed ways to condition models to spatial data and to produce 3D stochastic realizations, which have not been thoroughly investigated in the field of texture synthesis. In this paper we review the possible links between these disciplines and show the potential and limitations of using concepts and approaches from texture synthesis in MPS. We also provide guidelines on how recent developments could benefit both fields of research, and what challenges remain open.

  18. Application of Geostatistical Modelling to Study the Exploration Adequacy of Uniaxial Compressive Strength of Intact Rock along the Behesht-Abad Tunnel Route

    Directory of Open Access Journals (Sweden)

    Mohammad Doustmohammadi

    2014-12-01

    Full Text Available Uniaxial compressive strength (UCS) is one of the most significant factors in the stability of underground excavation projects. Most of the time, this factor is obtained by evaluating exploratory boreholes. Because of the large distance between exploratory boreholes in the majority of geotechnical projects, the application of geostatistical methods as estimators of rock mass properties has increased. The present paper relates the estimation of UCS values of intact rock to the distance between boreholes along the Behesht-Abad tunnel in central Iran, using the SGeMS geostatistical program. Variography showed that UCS estimation of intact rock using geostatistical methods is reasonable. The model was established and validated after an assessment showed it to be trustworthy. Cross-validation proved the high accuracy (98%) and reliability of the model for estimating uniaxial compressive strength. The UCS values were then estimated along the tunnel axis. Moreover, using geostatistical estimation led to better identification of the pros and cons of the geotechnical explorations at each location of the tunnel route.
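The cross-validation mentioned above is typically leave-one-out: each borehole value is re-estimated by kriging from the remaining boreholes and compared with the measured value. The sketch below assumes synthetic coordinates, synthetic UCS values, and an exponential variogram; it is not the study's model.

```python
import numpy as np

# Leave-one-out cross-validation with ordinary kriging: re-estimate each
# borehole's UCS from the remaining boreholes and compare. Coordinates,
# UCS values, and the exponential variogram are synthetic assumptions.

def ok_estimate(xy, z, target, sill=1.0, corr_range=300.0):
    """Ordinary-kriging estimate at borehole `target` from the others."""
    gamma_fn = lambda d: sill * (1.0 - np.exp(-3.0 * d / corr_range))
    mask = np.arange(len(z)) != target
    xs, zs = xy[mask], z[mask]
    n = len(zs)
    d = np.linalg.norm(xs[:, None] - xs[None, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma_fn(d)                  # variogram between samples
    A[n, n] = 0.0                            # Lagrange-multiplier entry
    b = np.ones(n + 1)
    b[:n] = gamma_fn(np.linalg.norm(xs - xy[target], axis=1))
    w = np.linalg.solve(A, b)[:n]            # kriging weights (sum to 1)
    return float(w @ zs)

gen = np.random.default_rng(5)
xy = gen.uniform(0, 1000, size=(20, 2))                      # boreholes
z = 60 + 10 * np.sin(xy[:, 0] / 200) + gen.normal(0, 1, 20)  # UCS, MPa

residuals = [z[i] - ok_estimate(xy, z, i) for i in range(20)]
print(np.mean(residuals), np.std(residuals))
```

Small, centred residuals indicate the borehole spacing is adequate for UCS estimation, which is how such an exploration-adequacy check is usually read.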

  19. Geostatistical simulations for radon indoor with a nested model including the housing factor

    International Nuclear Information System (INIS)

    Cafaro, C.; Giovani, C.; Garavaglia, M.

    2016-01-01

    The definition of radon-prone areas is the subject of much research in radioecology, since radon is considered a leading cause of lung tumours; the authorities therefore ask for support in developing an appropriate health prevention strategy. In this paper, we use geostatistical tools to elaborate a definition that accounts for some of the available information about the dwellings. Co-kriging is the interpolator used in geostatistics to refine predictions by using external covariates. A priori, co-kriging is not guaranteed to improve significantly on the results obtained with common lognormal kriging. Here, instead, such a multivariate approach reduces the cross-validation residual variance to an extent deemed satisfactory. Furthermore, with the application of Monte Carlo simulations, the paradigm provides a more conservative definition of radon-prone areas than the one previously made by lognormal kriging. - Highlights: • The housing class is inserted into co-kriging via an indicator function. • Inserting the housing classes in a co-kriging improves predictions. • The housing class has a structured component in space. • A nested model is implemented into the multigaussian algorithm. • A collection of risk maps is merged into one to create the radon-prone areas (RPA).

  20. Foucauldian diagnostics: space, time, and the metaphysics of medicine.

    Science.gov (United States)

    Bishop, Jeffrey P

    2009-08-01

    This essay places Foucault's work in a philosophical context, recognizing that Foucault is difficult to place, and demonstrates that Foucault remains in the Kantian tradition of philosophy, even if he sits at the margins of that tradition. For Kant, the forms of intuition, space and time, are the a priori conditions of the possibility of human experience and knowledge. For Foucault, the a priori conditions are political space and historical time. Foucault sees political space as central to understanding both the subject and objects of medicine, psychiatry, and the social sciences. Through this analysis one can see that medicine's metaphysics is a metaphysics of efficient causation, where medicine's objects are subjected to mechanisms of efficient control.

  1. Photon Differentials in Space and Time

    DEFF Research Database (Denmark)

    Schjøth, Lars; Frisvad, Jeppe Revall; Erleben, Kenny

    2011-01-01

    We present a novel photon mapping algorithm for animations. We extend our previous work on photon differentials [12] with time differentials. The result is a first-order model of photon cones in space and time that effectively reduces the number of required photons per frame and efficiently reduces temporal aliasing without any need for in-between-frame photon maps.

  2. Geostatistical ore reserve estimation for a roll-front type uranium deposit (practitioner's guide)

    International Nuclear Information System (INIS)

    Kim, Y.C.; Knudsen, H.P.

    1977-01-01

    This report comprises two parts. Part I contains illustrative examples of each phase of a geostatistical study using a roll-front type uranium deposit. Part II contains five computer programs and comprehensive users' manuals for these programs which are necessary to make a practical geostatistical study

  3. A comparison between geostatistical analyses and sedimentological studies at the Hartebeestfontein gold mine

    International Nuclear Information System (INIS)

    Magri, E.J.

    1978-01-01

    For life-of-mine planning, as well as for short- and medium-term planning of grades and mine layouts, it is extremely important to have a clear understanding of the patterns followed by the distribution of gold and uranium within the mining area. This study is an attempt to reconcile the geostatistical approach to the determination of ore-shoot directions, via an analysis of the spatial distribution of gold and uranium values, with the sedimentological approach, which is based on the direct measurement of geological features. For the routine geostatistical estimation of ore reserves, the Hartebeestfontein gold mine was divided into 11 sections. In each of these sections, the ore-shoot directions were calculated for gold and uranium from the anisotropies disclosed by geostatistical variogram analyses. This study presents a comparison of these results with those obtained from direct geological measurements of paleo-current directions. The results suggest that geological and geostatistical studies could be of significant mutual benefit.

  4. Evaluation of geostatistical parameters based on well tests; Estimation de parametres geostatistiques a partir de tests de puits

    Energy Technology Data Exchange (ETDEWEB)

    Gauthier, Y.

    1997-10-20

    Geostatistical tools are increasingly used to model permeability fields in subsurface reservoirs, which are treated as realizations of a random function characterized by geostatistical parameters such as the variance and the correlation length. The first part of the thesis is devoted to the study of the relations existing between the transient well pressure (the well test) and the stochastic permeability field, using the apparent permeability concept. The well test performs a moving permeability average over larger and larger volumes with increasing time. In the second part, the geostatistical parameters are evaluated using well test data; a Bayesian framework is used and the parameters are estimated according to the maximum likelihood principle, by maximizing the probability density function of the well test data with respect to these parameters. This method, which relies on fast evaluation of the well test, provides an estimation of the correlation length and the variance over different realizations of a two-dimensional permeability field.
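The estimation principle of the second part can be mimicked on synthetic point data: maximize the Gaussian likelihood of observed values with respect to the variance and correlation length. The coarse grid search, exponential covariance, and all numbers below are illustrative assumptions standing in for the thesis's fast well-test evaluation.

```python
import numpy as np

# Maximum-likelihood estimation of geostatistical parameters (variance
# and correlation length) from synthetic point data. The exponential
# covariance and the coarse grid search are illustrative assumptions.

def gauss_loglik(y, d, var, corr_len):
    """Log-likelihood of y under a zero-mean Gaussian random field."""
    K = var * np.exp(-d / corr_len) + 1e-8 * np.eye(len(y))
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (logdet + y @ np.linalg.solve(K, y))

gen = np.random.default_rng(3)
x = gen.uniform(0, 10, size=(60, 2))                    # data locations
d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)

true_var, true_len = 2.0, 1.5
L = np.linalg.cholesky(true_var * np.exp(-d / true_len) + 1e-8 * np.eye(60))
y = L @ gen.normal(size=60)                             # synthetic field

grid = [(v, l) for v in np.linspace(0.5, 4.0, 15)
               for l in np.linspace(0.5, 4.0, 15)]
var_hat, len_hat = max(grid, key=lambda p: gauss_loglik(y, d, *p))
print(var_hat, len_hat)
```

In the thesis the "data" are transient well pressures rather than direct field values, so each likelihood evaluation requires a forward well-test simulation; the grid search here only conveys the shape of the estimation problem.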

  5. Space-time neutronic analysis of postulated LOCA's in CANDU reactors

    International Nuclear Information System (INIS)

    Luxat, J.C.; Frescura, G.M.

    1978-01-01

    Space-time neutronic behaviour of CANDU reactors is of importance in the analysis and design of reactor safety systems. A methodology has been developed for simulating CANDU space-time neutronics, with application to the analysis of postulated LOCAs. The approach involves the efficient use of a set of computer codes which provide a capability to perform simulations ranging from detailed, accurate 3-dimensional space-time calculations to low-cost survey calculations using point kinetics with some "effective" spatial content. A new space-time kinetics code based upon a modal expansion approach is described. This code provides an inexpensive and relatively accurate scoping tool for detailed 3-dimensional space-time simulations. (author)

  6. Geostatistical methodology for waste optimization of contaminated premises - 59344

    International Nuclear Information System (INIS)

    Desnoyers, Yvon; Dubot, Didier

    2012-01-01

    This methodological study illustrates a geostatistical approach suitable for radiological evaluation in nuclear premises. The waste characterization is mainly focused on floor concrete surfaces. By modeling the spatial continuity of activities, geostatistics provides sound methods to estimate and map radiological activities, together with their uncertainty. The multivariate approach allows the integration of numerous surface radiation measurements in order to improve the estimation of activity levels from concrete samples. In this way, a sequential and iterative investigation strategy proves to be relevant to fulfill the different evaluation objectives. Waste characterization is performed on risk maps rather than on direct interpolation maps (due to the bias introduced by selection on kriging results). The use of several estimation supports (point, 1 m², room) allows a relevant radiological waste categorization thanks to cost-benefit analysis according to the risk of exceeding a given activity threshold. Global results, mainly total activity, are similarly quantified to guide waste management early in the dismantling and decommissioning project. This paper recalls the principles of geostatistics and demonstrates how the methodology provides innovative tools for the radiological evaluation of contaminated premises. The relevance of this approach relies on the presence of spatial continuity in the radiological contamination. In this case, geostatistics provides reliable activity estimates, uncertainty quantification and risk analysis, which are essential decision-making tools for decommissioning and dismantling projects of nuclear installations. Waste characterization is then performed taking all relevant information into account: historical knowledge, surface measurements and samples. Thanks to the multivariate processing, the different investigation stages can be rationalized as regards quantity and positioning. Waste characterization is finally
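The risk-based categorization step can be illustrated as follows, assuming an ensemble of conditional simulations is already available; the synthetic lognormal ensemble, activity threshold, and acceptable risk level below are invented values.

```python
import numpy as np

# Risk-map-based waste categorization, assuming an ensemble of
# conditional simulations is already available. The lognormal ensemble,
# activity threshold, and acceptable risk level are invented values.

gen = np.random.default_rng(4)
n_sims, n_cells = 500, 100
sims = gen.lognormal(mean=0.0, sigma=1.0, size=(n_sims, n_cells))

threshold = 2.0                               # activity threshold (Bq/g)
p_exceed = (sims > threshold).mean(axis=0)    # per-support exceedance risk

risk_limit = 0.05                             # acceptable residual risk
category = np.where(p_exceed > risk_limit, "radioactive", "conventional")
print(p_exceed[:3], category[:3])
```

Classifying on `p_exceed` rather than on a single kriged map is precisely the point made in the abstract: the selection is made on risk, not on the (selection-biased) interpolated value.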

  7. Technology demonstration: geostatistical and hydrologic analysis of salt areas. Assessment of effectiveness of geologic isolation systems

    International Nuclear Information System (INIS)

    Doctor, P.G.; Oberlander, P.L.; Rice, W.A.; Devary, J.L.; Nelson, R.W.; Tucker, P.E.

    1982-09-01

    The Office of Nuclear Waste Isolation (ONWI) requested Pacific Northwest Laboratory (PNL) to: (1) use geostatistical analyses to evaluate the adequacy of hydrologic data from three salt regions, each of which contains a potential nuclear waste repository site; and (2) demonstrate a methodology that allows quantification of the value of additional data collection. The three regions examined are the Paradox Basin in Utah, the Permian Basin in Texas, and the Mississippi Study Area. Additional and new data became available to ONWI during and following these analyses; therefore, this report must be considered a methodology demonstration: the methodology would apply as illustrated had the complete data sets been available. A combination of geostatistical and hydrologic analyses was used for this demonstration. Geostatistical analyses provided an optimal estimate of the potentiometric surface from the available data, a measure of the uncertainty of that estimate, and a means for selecting and evaluating the location of future data. The hydrologic analyses included the calculation of transmissivities, flow paths, travel times, and ground-water flow rates from hypothetical repository sites. Simulation techniques were used to evaluate the effect of optimally located future data on the potentiometric surface, flow lines, travel times, and flow rates. Data availability, quality, quantity, and conformance with model assumptions differed in each of the salt areas. Report highlights for the three locations are given.

  8. Geostatistical methods applied to field model residuals

    DEFF Research Database (Denmark)

    Maule, Fox; Mosegaard, K.; Olsen, Nils

    consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based...

  9. Satellite Magnetic Residuals Investigated With Geostatistical Methods

    DEFF Research Database (Denmark)

    Fox Maule, Chaterine; Mosegaard, Klaus; Olsen, Nils

    2005-01-01

    (which consists of measurement errors and unmodeled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyze the residuals of the Oersted (09d/04) field model (www.dsri.dk/Oersted/Field models/IGRF 2005 candidates/), which is based...

  10. Space-time interdependence: evidence against asymmetric mapping between time and space.

    Science.gov (United States)

    Cai, Zhenguang G; Connell, Louise

    2015-03-01

    Time and space are intimately related, but what is the real nature of this relationship? Is time mapped metaphorically onto space such that effects are always asymmetric (i.e., space affects time more than time affects space)? Or do the two domains share a common representational format and have the ability to influence each other in a flexible manner (i.e., time can sometimes affect space more than vice versa)? In three experiments, we examined whether spatial representations from haptic perception, a modality of relatively low spatial acuity, would lead the effect of time on space to be substantially stronger than the effect of space on time. Participants touched (but could not see) physical sticks while listening to an auditory note, and then reproduced either the length of the stick or the duration of the note. Judgements of length were affected by concurrent stimulus duration, but not vice versa. When participants were allowed to see as well as touch the sticks, however, the higher acuity of visuohaptic perception caused the effects to converge so length and duration influenced each other to a similar extent. These findings run counter to the spatial metaphor account of time, and rather support the spatial representation account in which time and space share a common representational format and the directionality of space-time interaction depends on the perceptual acuity of the modality used to perceive space. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Massively Parallel Geostatistical Inversion of Coupled Processes in Heterogeneous Porous Media

    Science.gov (United States)

    Ngo, A.; Schwede, R. L.; Li, W.; Bastian, P.; Ippisch, O.; Cirpka, O. A.

    2012-04-01

    The quasi-linear geostatistical approach is an inversion scheme that can be used to estimate the spatial distribution of a heterogeneous hydraulic conductivity field. The estimated parameter field is considered to be a random variable that varies continuously in space, meets the measurements of dependent quantities (such as the hydraulic head, the concentration of a transported solute or its arrival time) and shows the required spatial correlation (described by certain variogram models). This is a method of conditioning a parameter field to observations. Upon discretization, this results in as many parameters as elements of the computational grid. For a full three-dimensional representation of the heterogeneous subsurface, the resolutions achievable on a serial computer (up to one million parameters) are hardly sufficient. The forward problems to be solved within the inversion procedure consist of the elliptic steady-state groundwater flow equation and the formally elliptic but nearly hyperbolic steady-state advection-dominated solute transport equation in a heterogeneous porous medium. Both equations are discretized by Finite Element Methods (FEM) using fully scalable domain decomposition techniques. Whereas standard conforming FEM is sufficient for the flow equation, for the advection-dominated transport equation, which raises well-known numerical difficulties at sharp fronts or boundary layers, we use the streamline diffusion approach. The arising linear systems are solved using efficient iterative solvers with an AMG (algebraic multigrid) preconditioner. During each iteration step of the inversion scheme one needs to solve a multitude of forward and adjoint problems in order to calculate the sensitivities of each measurement and the related cross-covariance matrix of the unknown parameters and the observations. In order to reduce interprocess communications and to improve the scalability of the code on larger clusters
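The conditioning idea at the core of this approach can be illustrated compactly. The following NumPy sketch (1-D grid, exponential covariance, all parameter values hypothetical) conditions an unconditional random field to two point observations by kriging of the residuals; it illustrates only the geostatistical conditioning step, not the FEM-based forward solves or the parallel inversion described above:

```python
import numpy as np

# Hypothetical parameters: exponential model with variance sigma2 and range corr_len
sigma2, corr_len = 1.0, 10.0
n = 30
x = np.arange(n, dtype=float)

# Covariance implied by the exponential variogram gamma(h) = sigma2*(1 - exp(-h/corr_len))
h = np.abs(x[:, None] - x[None, :])
C = sigma2 * np.exp(-h / corr_len)

# Unconditional realization of the (log-)parameter field
rng = np.random.default_rng(0)
Y = np.linalg.cholesky(C + 1e-10 * np.eye(n)) @ rng.standard_normal(n)

# Condition the field to two point observations by simple kriging of the residuals
obs_idx = np.array([5, 20])
obs_val = np.array([0.5, -0.3])
K = C[np.ix_(obs_idx, obs_idx)]   # observation-observation covariance
k = C[:, obs_idx]                 # field-observation cross-covariance
Y_cond = Y + k @ np.linalg.solve(K, obs_val - Y[obs_idx])

assert np.allclose(Y_cond[obs_idx], obs_val)  # conditioned field honors the data
```

The same update, applied with sensitivities of indirect measurements instead of direct point values, is the building block of the quasi-linear scheme.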

  12. The space-time model according to dimensional continuous space-time theory

    International Nuclear Information System (INIS)

    Martini, Luiz Cesar

    2014-01-01

    This article results from the Dimensional Continuous Space-Time Theory, whose theoretical introduction was presented in [1]. A theoretical model of the Continuous Space-Time is presented. The wave equation of time in an absolutely stationary empty space referential will be described in detail. The complex time, that is, the time fixed on the infinite phase time speed referential, is deduced from the New View of Relativity Theory that is being submitted simultaneously with this article in this congress. Finally, considering the inseparable space-time, the wave-particle duality equation is presented.

  13. Assessment and modeling of the groundwater hydrogeochemical quality parameters via geostatistical approaches

    Science.gov (United States)

    Karami, Shawgar; Madani, Hassan; Katibeh, Homayoon; Fatehi Marj, Ahmad

    2018-03-01

    Geostatistical methods are one of the advanced techniques used for interpolation of groundwater quality data. The results obtained from geostatistics will be useful for decision makers to adopt suitable remedial measures to protect the quality of groundwater sources. Data used in this study were collected from 78 wells in the Varamin plain aquifer, located southeast of Tehran, Iran, in 2013. The ordinary kriging method was used in this study to evaluate groundwater quality parameters. Seven main quality parameters (i.e. total dissolved solids (TDS), sodium adsorption ratio (SAR), electrical conductivity (EC), sodium (Na+), total hardness (TH), chloride (Cl-) and sulfate (SO4 2-)) were analyzed and interpreted by statistical and geostatistical methods. After data normalization by the Nscore method in WinGslib software, variography, a geostatistical tool for describing spatial correlation, was carried out and experimental variograms were plotted in GS+ software. Then, the best theoretical model was fitted to each variogram based on the minimum RSS. The cross-validation method was used to determine the accuracy of the estimated data. Eventually, estimation maps of groundwater quality were prepared in WinGslib software, and estimation variance and estimation error maps were presented to evaluate the quality of estimation at each estimated point. Results showed that the kriging method is more accurate than traditional interpolation methods.
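The kriging workflow the abstract describes (fit a variogram model, solve the kriging system, estimate at unsampled points) can be sketched in a few lines. This is a minimal ordinary-kriging toy with a spherical model and made-up well values, not the WinGslib/GS+ procedure used in the study:

```python
import numpy as np

def spherical(h, nugget, sill, rng_a):
    """Spherical semivariogram model gamma(h), with gamma(0) = 0 by convention."""
    h = np.asarray(h, dtype=float)
    g = np.where(h < rng_a,
                 nugget + (sill - nugget) * (1.5 * h / rng_a - 0.5 * (h / rng_a) ** 3),
                 sill)
    return np.where(h == 0, 0.0, g)

def ordinary_kriging(xy, z, grid_xy, model):
    """Solve the ordinary kriging system (with Lagrange multiplier) per target point."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = model(d)
    A[n, n] = 0.0
    est = np.empty(len(grid_xy))
    for i, p in enumerate(grid_xy):
        b = np.ones(n + 1)
        b[:n] = model(np.linalg.norm(xy - p, axis=1))
        w = np.linalg.solve(A, b)[:n]   # kriging weights
        est[i] = w @ z
    return est

# Hypothetical well coordinates and values (e.g. a TDS-like parameter)
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([400.0, 420.0, 410.0, 430.0])
model = lambda h: spherical(h, nugget=0.0, sill=50.0, rng_a=3.0)
est = ordinary_kriging(xy, z, np.array([[0.5, 0.5]]), model)  # symmetric point -> mean of data
```

By symmetry, the estimate at the center of the four wells equals their arithmetic mean; the kriging variance (not computed here) would be derived from the same solve.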

  14. Application of multiple-point geostatistics to simulate the effect of small scale aquifer heterogeneity on the efficiency of Aquifer Thermal Energy Storage (ATES)

    Science.gov (United States)

    Possemiers, Mathias; Huysmans, Marijke; Batelaan, Okke

    2015-04-01

    Adequate aquifer characterization and simulation using heat transport models are indispensable for determining the optimal design for Aquifer Thermal Energy Storage (ATES) systems and wells. Recent model studies indicate that meter scale heterogeneities in the hydraulic conductivity field introduce a considerable uncertainty in the distribution of thermal energy around an ATES system and can lead to a reduction in the thermal recoverability. In this paper, the influence of centimeter scale clay drapes on the efficiency of a doublet ATES system and the distribution of the thermal energy around the ATES wells are quantified. Multiple-point geostatistical simulation of edge properties is used to incorporate the clay drapes in the models. The results show that clay drapes have an influence both on the distribution of thermal energy in the subsurface and on the efficiency of the ATES system. The distribution of the thermal energy is determined by the strike of the clay drapes, with the major axis of anisotropy parallel to the clay drape strike. The clay drapes have a negative impact (3.3 - 3.6%) on the energy output in the models without a hydraulic gradient. In the models with a hydraulic gradient, however, the presence of clay drapes has a positive influence (1.6 - 10.2%) on the energy output of the ATES system. It is concluded that it is important to incorporate small scale heterogeneities in heat transport models to get a better estimate of ATES efficiency and distribution of thermal energy.

  15. Application of multiple-point geostatistics to simulate the effect of small-scale aquifer heterogeneity on the efficiency of aquifer thermal energy storage

    Science.gov (United States)

    Possemiers, Mathias; Huysmans, Marijke; Batelaan, Okke

    2015-08-01

    Adequate aquifer characterization and simulation using heat transport models are indispensable for determining the optimal design for aquifer thermal energy storage (ATES) systems and wells. Recent model studies indicate that meter-scale heterogeneities in the hydraulic conductivity field introduce a considerable uncertainty in the distribution of thermal energy around an ATES system and can lead to a reduction in the thermal recoverability. In a study site in Bierbeek, Belgium, the influence of centimeter-scale clay drapes on the efficiency of a doublet ATES system and the distribution of the thermal energy around the ATES wells are quantified. Multiple-point geostatistical simulation of edge properties is used to incorporate the clay drapes in the models. The results show that clay drapes have an influence both on the distribution of thermal energy in the subsurface and on the efficiency of the ATES system. The distribution of the thermal energy is determined by the strike of the clay drapes, with the major axis of anisotropy parallel to the clay drape strike. The clay drapes have a negative impact (3.3-3.6 %) on the energy output in the models without a hydraulic gradient. In the models with a hydraulic gradient, however, the presence of clay drapes has a positive influence (1.6-10.2 %) on the energy output of the ATES system. It is concluded that it is important to incorporate small-scale heterogeneities in heat transport models to get a better estimate of ATES efficiency and distribution of thermal energy.

  16. A Bayesian geostatistical approach for evaluating the uncertainty of contaminant mass discharges from point sources

    DEFF Research Database (Denmark)

    Troldborg, Mads; Nowak, Wolfgang; Binning, Philip John

    and the hydraulic gradient across the control plane and are consistent with measurements of both hydraulic conductivity and head at the site. An analytical macro-dispersive transport solution is employed to simulate the mean concentration distribution across the control plane, and a geostatistical model of the Box-Cox...... transformed concentration data is used to simulate observed deviations from this mean solution. By combining the flow and concentration realizations, a mass discharge probability distribution is obtained. Tests show that the decoupled approach is both efficient and able to provide accurate uncertainty...

  17. Constrained optimisation of spatial sampling : a geostatistical approach

    NARCIS (Netherlands)

    Groenigen, van J.W.

    1999-01-01

    Aims

    This thesis aims at the development of optimal sampling strategies for geostatistical studies. Special emphasis is on the optimal use of ancillary data, such as co-related imagery, preliminary observations and historic knowledge. Although the object of all studies

  18. Geostatistical interpolation for modelling SPT data in northern Izmir

    Indian Academy of Sciences (India)

    data scatter' stems from the natural randomness of the system under con- ... Geostatistical methods were originally used for ore reserve calculations by the ... ing grain size distribution, plasticity, strength parameters and water content, for ...

  19. Seismic forecast using geostatistics

    International Nuclear Information System (INIS)

    Grecu, Valeriu; Mateiciuc, Doru

    2007-01-01

    The main idea of this research direction consists in the special way of constructing a new type of mathematical function as a correlation between a computed statistical quantity and another physical quantity. This type of function, called a 'position function', was taken up by the authors of this study in the field of seismology with the hope of solving - at least partially - the difficult problem of seismic forecast. The geostatistical method of analysis focuses on the process of energy accumulation in a given seismic area, completing this analysis by a so-called loading function. This function - in fact a temporal function - describes the process of energy accumulation during a seismic cycle in a given seismic area. It was possible to discover a law of evolution of the seismic cycles that was materialized in a so-called characteristic function. This special function will help us to forecast the magnitude and the occurrence moment of the largest earthquake in the analysed area. Since 2000, the authors have moved to a new stage of testing: real-time analysis, in order to verify the quality of the method. Five forecasts of large earthquakes were made. (authors)

  20. Memory Efficient Data Structures for Explicit Verification of Timed Systems

    DEFF Research Database (Denmark)

    Taankvist, Jakob Haahr; Srba, Jiri; Larsen, Kim Guldstrand

    2014-01-01

    Timed analysis of real-time systems can be performed using continuous (symbolic) or discrete (explicit) techniques. The explicit state-space exploration can be considerably faster for models with moderately small constants, however, at the expense of high memory consumption. In the setting of timed......-arc Petri nets, we explore new data structures for lowering the used memory: PTries for efficient storing of configurations and time darts for semi-symbolic description of the state-space. Both methods are implemented as a part of the tool TAPAAL and the experiments document at least one order of magnitude...... of memory savings while preserving comparable verification times....

  1. Geostatistical and adjoint sensitivity techniques applied to a conceptual model of ground-water flow in the Paradox Basin, Utah

    International Nuclear Information System (INIS)

    Metcalfe, D.E.; Campbell, J.E.; RamaRao, B.S.; Harper, W.V.; Battelle Project Management Div., Columbus, OH)

    1985-01-01

    Sensitivity and uncertainty analysis are important components of performance assessment activities for potential high-level radioactive waste repositories. The application of geostatistical and adjoint sensitivity techniques to aid in the calibration of an existing conceptual model of ground-water flow is demonstrated for the Leadville Limestone in Paradox Basin, Utah. The geostatistical method called kriging is used to statistically analyze the measured potentiometric data for the Leadville. This analysis consists of identifying anomalous data and data trends and characterizing the correlation structure between data points. Adjoint sensitivity analysis is then performed to aid in the calibration of a conceptual model of ground-water flow to the Leadville measured potentiometric data. Sensitivity derivatives of the fit between the modeled Leadville potentiometric surface and the measured potentiometric data to model parameters and boundary conditions are calculated by the adjoint method. These sensitivity derivatives are used to determine which model parameter and boundary condition values should be modified to most efficiently improve the fit of modeled to measured potentiometric conditions

  2. Benchmarking a geostatistical procedure for the homogenisation of annual precipitation series

    Science.gov (United States)

    Caineta, Júlio; Ribeiro, Sara; Henriques, Roberto; Soares, Amílcar; Costa, Ana Cristina

    2014-05-01

    The European project COST Action ES0601, Advances in homogenisation methods of climate series: an integrated approach (HOME), has brought to attention the importance of establishing reliable homogenisation methods for climate data. In order to achieve that, a benchmark data set, containing monthly and daily temperature and precipitation data, was created to be used as a comparison basis for the effectiveness of those methods. Several contributions were submitted and evaluated by a number of performance metrics, validating the results against realistic inhomogeneous data. HOME also led to the development of new homogenisation software packages, which incorporated feedback and lessons learned during the project. Preliminary studies have suggested a geostatistical stochastic approach, which uses Direct Sequential Simulation (DSS), as a promising methodology for the homogenisation of precipitation data series. Based on the spatial and temporal correlation between the neighbouring stations, DSS calculates local probability density functions at a candidate station to detect inhomogeneities. The purpose of the current study is to test and compare this geostatistical approach with the methods previously presented in the HOME project, using surrogate precipitation series from the HOME benchmark data set. The benchmark data set contains monthly precipitation surrogate series, from which annual precipitation data series were derived. These annual precipitation series were subject to exploratory analysis and to a thorough variography study. The geostatistical approach was then applied to the data set, based on different scenarios for the spatial continuity. Implementing this procedure also promoted the development of a computer program that aims to assist in the homogenisation of climate data, while minimising user interaction. 
    Finally, in order to compare the effectiveness of this methodology with the homogenisation methods submitted during the HOME project, the obtained results

  3. Applicability of geostatistical methods and optimization of data for assessing hydraulic and geological conditions as a basis for remediation measures in the Ronneburg ore mining district

    International Nuclear Information System (INIS)

    Post, C.

    2001-01-01

    The remediation of the former Wismut mines in Thuringia has been planned and prepared since 1990. Objects of remediation are mines, tailing ponds and waste rock piles. Since more than 40 years of mining have had a great effect on the exploited aquifer, special emphasis is given to groundwater recharge, so that mine flooding is one of the conceivable remedial options. Controlled flooding helps minimise the expanded oxidation zone, which presents an immense pollutant potential, while at the same time the flooding reduces the quantity of acid mine water that has to be treated. One of the main tasks of modelling the flooding progress is to determine and prognosticate the water outlet locations. Due to the inadequacy of the database from the production period, the limited accuracy of the available data and the inherent uncertainty of approximations used in numerical modelling, a stochastic approach is pursued. The flooding predictions, i.e. modelling of hydrodynamical and hydrochemical conditions during and after completion of flooding, predominantly depend on the spatial distribution of the hydraulic conductivity. In order to get a better understanding of the spatial heterogeneity of the Palaeozoic fractured rock aquifer, certain geostatistical interpolation methods are tested to achieve the best approach for describing the hydrogeological parameters in space. This work deals in detail with two selected geostatistical interpolation methods (ordinary and indicator kriging) and discusses their applicability and limitations, including the application to the presented case. Another important target is the specification of the database and the improvement of consistency with statistical standards. The main emphasis lies on the spatial distribution of the measured hydraulic conductivity coefficient, its estimation at non-measured places and the influence of its spatial variability on modelling results. This topic is followed by the calculation of the estimation

  4. NCU-SWIP Space Weather Instrumentation Payload - Intelligent Sensors On Efficient Real-Time Distributed LUTOS

    Science.gov (United States)

    Yeh, Tse-Liang; Dmitriev, Alexei; Chu, Yen-Hsyang; Jiang, Shyh-Biau; Chen, Li-Wu

    The NCU-SWIP - Space Weather Instrumentation Payload is developed for simultaneous in-situ and remote measurement of space weather parameters for cross verification. The measurements include in-situ electron density, electron temperature, magnetic field, the deceleration of the satellite due to neutral wind, and remotely the linear cumulative intensities of oxygen ion airglows at 135.6nm and 630.0nm along the flight path in the forward, nadir, and backward directions for tomographic reconstruction of the electron density distribution underneath. This instrument package is suitable for a micro satellite constellation to establish nominal space weather profiles and, thus, to detect abnormal variations as the signs of ionospheric disturbances induced by severe atmospheric weather, or earthquake and mantle movement through the Lithosphere-Atmosphere-Ionosphere Coupling Mechanism. NCU-SWIP is constructed with intelligent sensor modules connected by a common bus with their functionalities managed by an efficient distributed real-time system, LUTOS. The same hierarchy can be applied at the level of the satellite constellation. For example, for SWIPs in a constellation, in coordination with the GNSS Occultation Experiment TriG planned for the Formosa-7 constellation, data can be cross correlated for verification and refinement for real-time, stable and reliable measurements. A SWIP will be contributed to the construction of a MAI Micro Satellite for verification. The SWIP consists of two separate modules: the SWIP main control module and the SWIP-PMTomo sensor module. They are respectively a 1.5kg W120xL120xH100 (in mm) box with a forward facing 120mmPhi circular disk probe on a boom top edged at 470mm height and a 7.2kg W126xL590x372H (in mm) slab containing 3 legs looking downwards along the flight path, while consuming a maximum electricity of 10W and 12W. The sensors are 1) ETPEDP measuring 16bits floating potentials for electron temperature range of 1000K to 3000K and 24bits electron

  5. SOIL MOISTURE SPACE-TIME ANALYSIS TO SUPPORT IMPROVED CROP MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Bruno Montoani Silva

    2015-02-01

    Full Text Available The knowledge of the water content in the soil profile is essential for efficient management of crop growth and development. This work aimed to use geostatistical techniques in a spatio-temporal study of soil moisture in an Oxisol in order to provide that information for improved crop management. Data were collected in a coffee crop area at São Roque de Minas, in the upper São Francisco River basin, MG state, Brazil. The soil moisture was measured with a multi-sensor capacitance (MCP) probe at 10-, 20-, 30-, 40-, 60- and 100-cm depths between March and December, 2010. After fitting the spherical semivariogram model, selected as the best model by ordinary least squares, the values were interpolated by kriging in order to obtain a continuous surface relating depth x time (CSDT) and the soil water availability to the plant (SWAP). The results allowed additional insight into the dynamics of soil water and its availability to plants, and pointed to the effects of climate on the soil water content. These results also allowed identifying when and where there was greater water consumption by the plants, and the soil layers where water was available and potentially explored by the plant root system.

  6. Tensor-product preconditioners for higher-order space-time discontinuous Galerkin methods

    Science.gov (United States)

    Diosady, Laslo T.; Murman, Scott M.

    2017-02-01

    A space-time discontinuous-Galerkin spectral-element discretization is presented for direct numerical simulation of the compressible Navier-Stokes equations. An efficient solution technique based on a matrix-free Newton-Krylov method is developed in order to overcome the stiffness associated with high solution order. The use of tensor-product basis functions is key to maintaining efficiency at high order. Efficient preconditioning methods are presented which can take advantage of the tensor-product formulation. A diagonalized Alternating-Direction-Implicit (ADI) scheme is extended to the space-time discontinuous Galerkin discretization. A new preconditioner for the compressible Euler/Navier-Stokes equations based on the fast-diagonalization method is also presented. Numerical results demonstrate the effectiveness of these preconditioners for the direct numerical simulation of subsonic turbulent flows.
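The fast-diagonalization idea mentioned above can be illustrated on a model problem: solving a sum-of-Kronecker-products system using only the eigendecompositions of the 1-D factors. The NumPy sketch below (random SPD matrices standing in for per-direction operators) illustrates the technique itself, not the authors' Navier-Stokes preconditioner:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
# SPD 1-D operators standing in for per-direction stiffness matrices
A = rng.standard_normal((n, n)); A = A.T @ A + np.eye(n)
B = rng.standard_normal((n, n)); B = B.T @ B + np.eye(n)

# Fast diagonalization: eigendecompose each 1-D factor once
la, Va = np.linalg.eigh(A)
lb, Vb = np.linalg.eigh(B)

# Solve (A (x) I + I (x) B) vec(X) = vec(F), i.e. A X + X B = F,
# without ever assembling the n^2 x n^2 operator
F = rng.standard_normal((n, n))
G = Va.T @ F @ Vb                    # transform the right-hand side
X = G / (la[:, None] + lb[None, :])  # divide by sums of 1-D eigenvalues
X = Va @ X @ Vb.T                    # transform back

# Check against the explicitly assembled Kronecker operator
M = np.kron(A, np.eye(n)) + np.kron(np.eye(n), B)
assert np.allclose(M @ X.ravel(), F.ravel())
```

The solve costs a few dense n-by-n matrix products instead of factoring an n^2-by-n^2 matrix, which is why tensor-product structure matters at high order.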

  7. Tensor-Product Preconditioners for Higher-Order Space-Time Discontinuous Galerkin Methods

    Science.gov (United States)

    Diosady, Laslo T.; Murman, Scott M.

    2016-01-01

    space-time discontinuous-Galerkin spectral-element discretization is presented for direct numerical simulation of the compressible Navier-Stokes equat ions. An efficient solution technique based on a matrix-free Newton-Krylov method is developed in order to overcome the stiffness associated with high solution order. The use of tensor-product basis functions is key to maintaining efficiency at high order. Efficient preconditioning methods are presented which can take advantage of the tensor-product formulation. A diagonalized Alternating-Direction-Implicit (ADI) scheme is extended to the space-time discontinuous Galerkin discretization. A new preconditioner for the compressible Euler/Navier-Stokes equations based on the fast-diagonalization method is also presented. Numerical results demonstrate the effectiveness of these preconditioners for the direct numerical simulation of subsonic turbulent flows.

  8. Applying MDA to SDR for Space to Model Real-time Issues

    Science.gov (United States)

    Blaser, Tammy M.

    2007-01-01

    NASA space communications systems have the challenge of designing SDRs with highly-constrained Size, Weight and Power (SWaP) resources. A study is being conducted to assess the effectiveness of applying the MDA Platform-Independent Model (PIM) and one or more Platform-Specific Models (PSM) specifically to address NASA space domain real-time issues. This paper will summarize our experiences with applying MDA to SDR for Space to model real-time issues. Real-time issues to be examined, measured, and analyzed are: meeting waveform timing requirements and efficiently applying Real-time Operating System (RTOS) scheduling algorithms, applying safety control measures, and SWaP verification. Real-time waveform algorithms benchmarked with the worst case environment conditions under the heaviest workload will drive the SDR for Space real-time PSM design.

  9. Use of geostatistics on broiler production for evaluation of different minimum ventilation systems during brooding phase

    Directory of Open Access Journals (Sweden)

    Thayla Morandi Ridolfi de Carvalho

    2012-01-01

    Full Text Available The objective of this research was to evaluate different minimum ventilation systems, in relation to air quality and thermal comfort, using geostatistics in the brooding phase. The minimum ventilation systems were: Blue House I: exhaust fans + curtain management (end of the building); Blue House II: exhaust fans + side curtain management; and Dark House: exhaust fans + flag. The climate variables evaluated were: dry bulb temperature, relative humidity, air velocity, carbon dioxide and ammonia concentration, during wintertime, at 9 a.m., at 80 equidistant points in the brooding area. Data were evaluated by geostatistical techniques. The results indicate that wider broiler houses (above 15.0 m in width) present the greatest ammonia and humidity concentration. Blue House II presents the best results in relation to air quality. However, none of the studied broiler houses presents ideal thermal comfort.

  10. A space-time lower-upper symmetric Gauss-Seidel scheme for the time-spectral method

    Science.gov (United States)

    Zhan, Lei; Xiong, Juntao; Liu, Feng

    2016-05-01

    The time-spectral method (TSM) offers the advantage of increased order of accuracy compared to methods using finite-difference in time for periodic unsteady flow problems. Explicit Runge-Kutta pseudo-time marching and implicit schemes have been developed to solve iteratively the space-time coupled nonlinear equations resulting from TSM. Convergence of the explicit schemes is slow because of the stringent time-step limit. Many implicit methods have been developed for TSM. Their computational efficiency is, however, still limited in practice because of delayed implicit temporal coupling, multiple iterative loops, costly matrix operations, or lack of strong diagonal dominance of the implicit operator matrix. To overcome these shortcomings, an efficient space-time lower-upper symmetric Gauss-Seidel (ST-LU-SGS) implicit scheme with multigrid acceleration is presented. In this scheme, the implicit temporal coupling term is split as one additional dimension of space in the LU-SGS sweeps. To improve numerical stability for periodic flows with high frequency, a modification to the ST-LU-SGS scheme is proposed. Numerical results show that fast convergence is achieved using large or even infinite Courant-Friedrichs-Lewy (CFL) numbers for unsteady flow problems with moderately high frequency and with the use of moderately high numbers of time intervals. The ST-LU-SGS implicit scheme is also found to work well in calculating periodic flow problems where the frequency is not known a priori and needs to be determined by using a combined Fourier analysis and gradient-based search algorithm.
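The symmetric forward/backward sweep structure underlying LU-SGS-type schemes can be illustrated on a plain linear system. The sketch below (with a hypothetical diagonally dominant matrix; strong diagonal dominance aids convergence, as the abstract notes) shows only the sweep pattern, not the paper's space-time, multigrid-accelerated scheme:

```python
import numpy as np

def lu_sgs_sweeps(A, b, x0, n_sweeps=50):
    """Symmetric Gauss-Seidel iteration: one forward (lower) and one backward
    (upper) triangular sweep per iteration, the pattern used in LU-SGS schemes."""
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)   # strictly lower part
    U = np.triu(A, 1)    # strictly upper part
    x = x0.astype(float)
    for _ in range(n_sweeps):
        x = np.linalg.solve(D + L, b - U @ x)   # forward sweep
        x = np.linalg.solve(D + U, b - L @ x)   # backward sweep
    return x

# Diagonally dominant test system
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = lu_sgs_sweeps(A, b, np.zeros(3))
assert np.allclose(A @ x, b)
```

In the ST-LU-SGS scheme, the temporal coupling term is folded into these sweeps as if time were one more spatial direction.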

  11. Geostatistical radar-raingauge combination with nonparametric correlograms: methodological considerations and application in Switzerland

    Science.gov (United States)

    Schiemann, R.; Erdin, R.; Willi, M.; Frei, C.; Berenguer, M.; Sempere-Torres, D.

    2011-05-01

    Modelling spatial covariance is an essential part of all geostatistical methods. Traditionally, parametric semivariogram models are fit from available data. More recently, it has been suggested to use nonparametric correlograms obtained from spatially complete data fields. Here, both estimation techniques are compared. Nonparametric correlograms are shown to have a substantial negative bias. Nonetheless, when combined with the sample variance of the spatial field under consideration, they yield an estimate of the semivariogram that is unbiased for small lag distances. This justifies the use of this estimation technique in geostatistical applications. Various formulations of geostatistical combination (Kriging) methods are used here for the construction of hourly precipitation grids for Switzerland based on data from a sparse realtime network of raingauges and from a spatially complete radar composite. Two variants of Ordinary Kriging (OK) are used to interpolate the sparse gauge observations. In both OK variants, the radar data are only used to determine the semivariogram model. One variant relies on a traditional parametric semivariogram estimate, whereas the other variant uses the nonparametric correlogram. The variants are tested for three cases and the impact of the semivariogram model on the Kriging prediction is illustrated. For the three test cases, the method using nonparametric correlograms performs equally well or better than the traditional method, and at the same time offers great practical advantages. Furthermore, two variants of Kriging with external drift (KED) are tested, both of which use the radar data to estimate nonparametric correlograms, and as the external drift variable. The first KED variant has been used previously for geostatistical radar-raingauge merging in Catalonia (Spain). The second variant is newly proposed here and is an extension of the first. 
Both variants are evaluated for the three test cases as well as an extended evaluation
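The estimator the abstract motivates, a semivariogram built from the sample correlogram of a complete field combined with its sample variance, gamma_hat(h) = s^2 * (1 - rho_hat(h)), can be sketched in 1-D with a zero-padded FFT. This toy uses synthetic white noise, not the authors' radar-based implementation:

```python
import numpy as np

def nonparametric_semivariogram(field):
    """Estimate the semivariogram of a complete 1-D field from its sample
    correlogram: gamma_hat(h) = s^2 * (1 - rho_hat(h))."""
    z = field - field.mean()
    n = len(z)
    # Non-circular autocovariance via zero-padded FFT (biased estimator)
    f = np.fft.rfft(z, 2 * n)
    acov = np.fft.irfft(f * np.conj(f))[:n] / n
    rho = acov / acov[0]   # sample correlogram
    s2 = acov[0]           # sample variance of the field
    return s2 * (1.0 - rho)

rng = np.random.default_rng(2)
gamma = nonparametric_semivariogram(rng.standard_normal(500))
assert gamma[0] == 0.0  # semivariogram vanishes at zero lag
```

Combining the (negatively biased) correlogram with the sample variance in this way is what makes the semivariogram estimate unbiased at small lags.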

  12. Quantifying natural delta variability using a multiple-point geostatistics prior uncertainty model

    Science.gov (United States)

    Scheidt, Céline; Fernandes, Anjali M.; Paola, Chris; Caers, Jef

    2016-10-01

    We address the question of quantifying uncertainty associated with autogenic pattern variability in a channelized transport system by means of a modern geostatistical method. This question has considerable relevance for practical subsurface applications as well, particularly those related to uncertainty quantification relying on Bayesian approaches. Specifically, we show how the autogenic variability in a laboratory experiment can be represented and reproduced by a multiple-point geostatistical prior uncertainty model. The latter geostatistical method requires selection of a limited set of training images from which a possibly infinite set of geostatistical model realizations, mimicking the training image patterns, can be generated. To that end, we investigate two methods to determine how many, and which, training images should be provided to reproduce natural autogenic variability. The first method relies on distance-based clustering of overhead snapshots of the experiment; the second relies on a rate-of-change quantification by means of a computer vision algorithm termed the demon algorithm. We show quantitatively that with either training image selection method, we can statistically reproduce the natural variability of the delta formed in the experiment. In addition, we study the nature of the patterns represented in the set of training images as a representation of the "eigenpatterns" of the natural system. The eigenpatterns in the training image sets display patterns consistent with previous physical interpretations of the fundamental modes of this type of delta system: a highly channelized, incisional mode; a poorly channelized, depositional mode; and an intermediate mode between the two.

  13. Monte Carlo Analysis of Reservoir Models Using Seismic Data and Geostatistical Models

    Science.gov (United States)

    Zunino, A.; Mosegaard, K.; Lange, K.; Melnikova, Y.; Hansen, T. M.

    2013-12-01

    We present a study on the analysis of petroleum reservoir models consistent with seismic data and geostatistical constraints, performed on a synthetic reservoir model. Our aim is to invert directly for the structure and rock bulk properties of the target reservoir zone. To infer rock facies, porosity and oil saturation, seismology alone is not sufficient; a rock physics model, which links the unknown properties to the elastic parameters, must also be taken into account. We then combine a rock physics model with a simple convolutional approach for seismic waves to invert the "measured" seismograms. To solve this inverse problem, we employ a Markov chain Monte Carlo (MCMC) method, because it offers the possibility to handle non-linearity and complex, multi-step forward models, and provides realistic estimates of uncertainties. However, for large data sets the MCMC method may be impractical because of its very high computational demand. One strategy to face this challenge is to feed the algorithm with realistic models, hence relying on proper prior information. To this end, we utilize an algorithm drawn from geostatistics to generate geologically plausible models which represent samples of the prior distribution. The geostatistical algorithm learns the multiple-point statistics from prototype models (in the form of training images), then generates thousands of different models which are accepted or rejected by a Metropolis sampler. To further reduce the computation time we parallelize the software and run it on multi-core machines. The solution of the inverse problem is then represented by a collection of reservoir models in terms of facies, porosity and oil saturation, which constitute samples of the posterior distribution. We are finally able to produce probability maps of the properties of interest by performing statistical analysis on the collection of solutions.
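The accept/reject logic of the Metropolis sampler mentioned above can be sketched in a toy setting. The quadratic log-posterior, step size and model dimension below are our illustrative assumptions, not the reservoir forward model:

```python
import numpy as np

rng = np.random.default_rng(42)

def log_posterior(m):
    """Toy target: Gaussian 'likelihood' around 1 plus a weak zero-mean prior."""
    return -0.5 * np.sum((m - 1.0) ** 2) - 0.05 * np.sum(m ** 2)

# Metropolis: perturb the current model, accept with prob min(1, exp(dlogp))
m = np.zeros(3)
samples = []
for _ in range(5000):
    proposal = m + 0.5 * rng.normal(size=3)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(m):
        m = proposal                      # accept: move to the proposal
    samples.append(m.copy())              # reject: keep (and count) current m

# Discard burn-in, then summarize the retained posterior samples
posterior_mean = np.mean(samples[1000:], axis=0)
print(posterior_mean)
```

In the paper the proposals are full geostatistical realizations rather than random-walk perturbations, but the acceptance rule is the same.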

  14. Multivariate time series with linear state space structure

    CERN Document Server

    Gómez, Víctor

    2016-01-01

    This book presents a comprehensive study of multivariate time series with linear state space structure. The emphasis is on both the clarity of the theoretical concepts and efficient algorithms for implementing the theory. In particular, it investigates the relationship between VARMA and state space models, including canonical forms. It also highlights the relationship between Wiener-Kolmogorov and Kalman filtering, both with an infinite and a finite sample. The strength of the book also lies in the numerous algorithms included for state space models that take advantage of the recursive nature of the models. Many of these algorithms can be made robust, fast, reliable and efficient. The book is accompanied by a MATLAB package called SSMMATLAB and a webpage presenting implemented algorithms with many examples and case studies. Though it lays a solid theoretical foundation, the book also focuses on practical application, and includes exercises in each chapter. It is intended for researchers and students wor...

  15. Risk Assessment of Sediment Pollution Using Geostatistical Simulations

    Science.gov (United States)

    Golay, J.; Kanevski, M.

    2012-04-01

    Environmental monitoring networks (EMN) discretely measure the intensities of continuous phenomena (e.g. pollution, temperature, etc.). Spatial prediction models, such as kriging, are then used for modelling. However, they give rise to smooth representations of the phenomena, which leads to overestimation or underestimation of extreme values. Moreover, they do not reproduce the spatial variability of the original data and the corresponding uncertainties. When dealing with risk assessment this is unacceptable, since extreme values must be retrieved and probabilities of exceeding given thresholds must be computed [Kanevski et al., 2009]. In order to overcome these obstacles, geostatistics provides another approach: conditional stochastic simulations. Here, the basic idea is to generate multiple estimates of variable values (e.g. pollution concentration) at every location of interest, which are calculated as stochastic realizations of an unknown random function (see, for example, [Kanevski, 2008], where both theoretical concepts and real data case studies are presented in detail). Many algorithms implement this approach. The most widely used in spatial modeling are sequential Gaussian simulations/cosimulations, sequential indicator simulations/cosimulations and direct simulations. In the present study, several algorithms of geostatistical conditional simulation were applied to real data collected from Lake Geneva. The main objectives were to compare their effectiveness in reproducing global statistics (histograms, variograms) and the way they characterize the variability and uncertainty of the contamination patterns. The dataset is composed of 200 measurements of the contamination of the lake sediments by heavy metals (i.e. Cadmium, Mercury, Zinc, Copper, Titanium and Chromium). The results obtained show some differences, highlighting that risk assessment can be influenced by the algorithm it relies on. Moreover, hybrid models based on machine learning algorithms and
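The key advantage of simulation ensembles over a single kriged map, namely computing threshold-exceedance probabilities, can be sketched as follows. The lognormal "realizations" here are random stand-ins for the output of an actual conditional simulation, and the grid size and threshold are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for conditional-simulation output: 100 equally probable
# realizations of a contaminant concentration at 50 locations.
realizations = rng.lognormal(mean=0.0, sigma=1.0, size=(100, 50))

threshold = 2.0
# Pointwise probability of exceeding the threshold, estimated over realizations
p_exceed = np.mean(realizations > threshold, axis=0)

# A kriged map would smooth away the extremes; the ensemble preserves them,
# so p_exceed is directly usable as a risk map.
print(p_exceed.min(), p_exceed.max())
```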

  16. Geostatistical methods for radiological evaluation and risk analysis of contaminated premises

    International Nuclear Information System (INIS)

    Desnoyers, Y.; Jeannee, N.; Chiles, J.P.; Dubot, D.

    2009-01-01

    Full text: At the end of process equipment dismantling, the complete decontamination of nuclear facilities requires the radiological assessment of residual activity levels of building structures. As stated by the IAEA, 'Segregation and characterization of contaminated materials are the key elements of waste minimization'. From this point of view, setting up an appropriate evaluation methodology is of paramount importance. The radiological characterization of contaminated premises can be divided into three steps. First, an exhaustive facility analysis provides historical, functional and qualitative information. Then, a systematic (exhaustive or not) control of the emergent signal is performed by means of in situ measurement methods, such as a surface control device combined with in situ gamma spectrometry. In addition, in order to assess the contamination depth, samples can be collected from boreholes at several locations within the premises and analyzed. Combined with historical information and emergent signal maps, such data improve and reinforce the preliminary waste zoning. In order to provide reliable estimates while avoiding supplementary investigation costs, there is therefore a crucial need for sampling optimization methods together with appropriate data processing techniques. The relevance of the geostatistical methodology relies on the presence of spatial continuity in the radiological contamination. In this case, geostatistics provides reliable methods for activity estimation, uncertainty quantification and risk analysis, which are essential decision-making tools for decommissioning and dismantling projects of nuclear installations. The ability of this geostatistical framework to provide answers to several key issues that generally occur during the clean-up preparation phase is also discussed: How to optimise the investigation costs? How to deal with data quality issues?
How to consistently take into account auxiliary information such as historical

  17. Spatial Downscaling of TRMM Precipitation Using Geostatistics and Fine Scale Environmental Variables

    Directory of Open Access Journals (Sweden)

    No-Wook Park

    2013-01-01

    Full Text Available A geostatistical downscaling scheme is presented that can generate fine scale precipitation information from coarse scale Tropical Rainfall Measuring Mission (TRMM) data by incorporating auxiliary fine scale environmental variables. Within the geostatistical framework, the TRMM precipitation data are first decomposed into trend and residual components. Quantitative relationships between coarse scale TRMM data and environmental variables are then estimated via regression analysis and used to derive trend components at a fine scale. Next, the residual components, which are the differences between the original TRMM data and the trend components, are downscaled to the target fine scale via area-to-point kriging. The trend and residual components are finally added to generate fine scale precipitation estimates. Stochastic simulation is also applied to the residual components in order to generate multiple alternative realizations and to compute uncertainty measures. In an experiment using a digital elevation model (DEM) and the normalized difference vegetation index (NDVI), the geostatistical downscaling scheme generated results that reflected detailed precipitation characteristics with better predictive performance than downscaling without the environmental variables. Multiple realizations and uncertainty measures from simulation also provided useful information for interpretation and further environmental modeling.
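The trend-plus-residual decomposition at the heart of the scheme above can be sketched with a plain least-squares trend. The synthetic data, coefficients and covariate names (elevation, NDVI) are illustrative assumptions; in the real scheme the residuals would additionally be downscaled by area-to-point kriging before being added back:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-ins: precipitation driven by two fine-scale covariates
n = 300
elev = rng.uniform(0.0, 2000.0, n)               # DEM elevation [m]
ndvi = rng.uniform(0.0, 1.0, n)                  # vegetation index
precip = 5.0 + 0.002 * elev + 3.0 * ndvi + rng.normal(0.0, 0.5, n)

# Step 1: regression trend linking precipitation to environmental variables
X = np.column_stack([np.ones(n), elev, ndvi])
beta, *_ = np.linalg.lstsq(X, precip, rcond=None)
trend = X @ beta

# Step 2: residual = observation minus trend (what the scheme kriges);
# Step 3: recombine. At observation points trend + residual is exact.
residual = precip - trend
estimate = trend + residual

print(np.allclose(estimate, precip))
```

The point of the decomposition is that the trend carries the covariate information to the fine scale, while kriging of the residual restores what the regression cannot explain.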

  18. A conceptual sedimentological-geostatistical model of aquifer heterogeneity based on outcrop studies

    International Nuclear Information System (INIS)

    Davis, J.M.

    1994-01-01

    Three outcrop studies were conducted in deposits of different depositional environments. At each site, permeability measurements were obtained with an air-minipermeameter developed as part of this study. In addition, the geological units were mapped with either surveying, photographs, or both. Geostatistical analysis of the permeability data was performed to estimate the characteristics of the probability distribution function and the spatial correlation structure. The information obtained from the geological mapping was then compared with the results of the geostatistical analysis for any relationships that may exist. The main field site was located in the Albuquerque Basin of central New Mexico at an outcrop of the Pliocene-Pleistocene Sierra Ladrones Formation. The second study was conducted on the walls of waste pits in alluvial fan deposits at the Nevada Test Site. The third study was conducted on an outcrop of an eolian deposit (Miocene) south of Socorro, New Mexico. The results of the three studies were then used to construct a conceptual model relating depositional environment to geostatistical models of heterogeneity. The model presented is largely qualitative but provides a basis for further hypothesis formulation and testing

  19. A conceptual sedimentological-geostatistical model of aquifer heterogeneity based on outcrop studies

    Energy Technology Data Exchange (ETDEWEB)

    Davis, J.M.

    1994-01-01

    Three outcrop studies were conducted in deposits of different depositional environments. At each site, permeability measurements were obtained with an air-minipermeameter developed as part of this study. In addition, the geological units were mapped with either surveying, photographs, or both. Geostatistical analysis of the permeability data was performed to estimate the characteristics of the probability distribution function and the spatial correlation structure. The information obtained from the geological mapping was then compared with the results of the geostatistical analysis for any relationships that may exist. The main field site was located in the Albuquerque Basin of central New Mexico at an outcrop of the Pliocene-Pleistocene Sierra Ladrones Formation. The second study was conducted on the walls of waste pits in alluvial fan deposits at the Nevada Test Site. The third study was conducted on an outcrop of an eolian deposit (Miocene) south of Socorro, New Mexico. The results of the three studies were then used to construct a conceptual model relating depositional environment to geostatistical models of heterogeneity. The model presented is largely qualitative but provides a basis for further hypothesis formulation and testing.

  20. 4th European Conference on Geostatistics for Environmental Applications

    CERN Document Server

    Carrera, Jesus; Gómez-Hernández, José

    2004-01-01

    The fourth edition of the European Conference on Geostatistics for Environmental Applications (geoENV IV) took place in Barcelona, November 27-29, 2002. As proof of the increasing interest in environmental issues within the geostatistical community, the conference attracted over 100 participants, mostly Europeans (up to 10 European countries were represented), but also attendees from other countries around the world. Only 46 contributions, selected out of around 100 submitted papers, were invited to be presented orally during the conference. Additionally, 30 authors were invited to present their work in poster format during a special session. All oral and poster contributors were invited to submit their work to be considered for publication in this Kluwer series. All papers underwent a reviewing process, which consisted of two reviewers for oral presentations and one reviewer for posters. The book opens with one keynote paper by Philippe Naveau. It is followed by 40 papers that correspond to those presented orally d...

  1. The science of space-time

    International Nuclear Information System (INIS)

    Raine, D.J.; Heller, M.

    1981-01-01

    Analyzing the development of the structure of space-time from the theory of Aristotle to the present day, the present work attempts to sketch a science of relativistic mechanics. The concept of relativity is discussed in relation to the way in which space-time splits up into space and time, and in relation to Mach's principle concerning the relativity of inertia. Particular attention is given to the following topics: Aristotelian dynamics; Copernican kinematics; Newtonian dynamics; the space-time of classical dynamics; classical space-time in the presence of gravity; the space-time of special relativity; the space-time of general relativity; solutions and problems in general relativity; Mach's principle and the dynamics of space-time; theories of inertial mass; the integral formulation of general relativity; and the frontiers of relativity.

  2. The use of a genetic algorithm-based search strategy in geostatistics: application to a set of anisotropic piezometric head data

    Science.gov (United States)

    Abedini, M. J.; Nasseri, M.; Burn, D. H.

    2012-04-01

    In any geostatistical study, an important consideration is the choice of an appropriate, repeatable, and objective search strategy that controls which nearby samples are included in the location-specific estimation procedure. Almost all geostatistical software on the market puts the onus on the user to supply search strategy parameters in a heuristic manner. These parameters are solely controlled by geographical coordinates that are defined for the entire area under study, and the user has no guidance as to how to choose them. The main thesis of the current study is that the selection of search strategy parameters has to be driven by the data, both the spatial coordinates and the sample values, and cannot be chosen beforehand. For this purpose, a genetic-algorithm-based ordinary kriging with moving neighborhood technique is proposed. The search capability of a genetic algorithm (GA) is exploited to search the feature space for appropriate, either local or global, search strategy parameters. The radius of a circle/sphere and/or the radii of a standard or rotated ellipse/ellipsoid are considered as the decision variables to be optimized by the GA. The superiority of GA-based ordinary kriging is demonstrated through application to the Wolfcamp Aquifer piezometric head data. Assessment of the numerical results showed that defining search strategy parameters based on both geographical coordinates and sample values improves cross-validation statistics compared with definitions based on geographical coordinates alone. In the case of a variable search neighborhood for each estimation point, optimization via GA of local search strategy parameters for an elliptical support domain, the orientation of which is dictated by the anisotropic axes, was able to capture the dynamics of piezometric head in west Texas/New Mexico in an efficient way.
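The idea of letting the data score candidate search parameters can be sketched without a GA, using leave-one-out cross-validation of a simple inverse-distance estimator over a few candidate search radii. Everything here (the synthetic samples, inverse-distance weighting in place of kriging, the radius grid) is an illustrative simplification of the paper's GA-driven optimization:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 2-D samples with a smooth spatial signal plus noise
xy = rng.uniform(0.0, 10.0, size=(80, 2))
z = np.sin(xy[:, 0]) + 0.1 * rng.normal(size=80)

def loo_rmse(radius):
    """Leave-one-out RMSE of inverse-distance estimates within `radius`."""
    errs = []
    for i in range(len(z)):
        d = np.linalg.norm(xy - xy[i], axis=1)
        mask = (d < radius) & (d > 0)        # neighbors inside the search radius
        if not mask.any():
            continue                          # isolated point: no estimate
        w = 1.0 / d[mask] ** 2                # inverse-distance-squared weights
        errs.append(z[i] - np.sum(w * z[mask]) / np.sum(w))
    return np.sqrt(np.mean(np.square(errs)))

radii = [0.5, 1.0, 2.0, 5.0]
scores = {r: loo_rmse(r) for r in radii}
best = min(scores, key=scores.get)           # data-driven choice of radius
print(best, scores[best])
```

A GA would explore the same kind of objective, but over richer parameterizations (rotated ellipses/ellipsoids) and locally per estimation point.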

  3. Latin hypercube sampling and geostatistical modeling of spatial uncertainty in a spatially explicit forest landscape model simulation

    Science.gov (United States)

    Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu

    2005-01-01

    Geostatistical stochastic simulation is often combined with the Monte Carlo method to quantify the uncertainty in spatial model simulations. However, due to the relatively long running time of spatially explicit forest models resulting from their complexity, it is usually infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...
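Latin hypercube sampling, the paper's remedy for the cost of plain Monte Carlo, stratifies each dimension into equal-probability bins and draws exactly one sample per bin. A minimal numpy-only sketch (function name and sizes are ours):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """One uniform draw per equal-probability stratum, randomly paired across dims."""
    # row k lies in stratum [k/n, (k+1)/n) for every dimension
    u = (np.arange(n_samples)[:, None] + rng.uniform(size=(n_samples, n_dims))) / n_samples
    # independent random permutation per column decouples the strata pairings
    perm = np.argsort(rng.uniform(size=(n_samples, n_dims)), axis=0)
    return np.take_along_axis(u, perm, axis=0)

rng = np.random.default_rng(3)
pts = latin_hypercube(10, 2, rng)

# Verify stratification: each of the 10 bins holds exactly one sample per dim
counts = [np.bincount((pts[:, d] * 10).astype(int), minlength=10) for d in range(2)]
print(counts)
```

Because every marginal stratum is hit exactly once, far fewer runs are needed than with unstratified Monte Carlo to cover the input space.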

  4. Application of Bayesian geostatistics for evaluation of mass discharge uncertainty at contaminated sites

    Science.gov (United States)

    Troldborg, Mads; Nowak, Wolfgang; Lange, Ida V.; Santos, Marta C.; Binning, Philip J.; Bjerg, Poul L.

    2012-09-01

    Mass discharge estimates are increasingly being used when assessing risks of groundwater contamination and designing remedial systems at contaminated sites. Such estimates are, however, rather uncertain as they integrate uncertain spatial distributions of both concentration and groundwater flow. Here a geostatistical simulation method for quantifying the uncertainty of the mass discharge across a multilevel control plane is presented. The method accounts for (1) heterogeneity of both the flow field and the concentration distribution through Bayesian geostatistics, (2) measurement uncertainty, and (3) uncertain source zone and transport parameters. The method generates conditional realizations of the spatial flow and concentration distribution. An analytical macrodispersive transport solution is employed to simulate the mean concentration distribution, and a geostatistical model of the Box-Cox transformed concentration data is used to simulate observed deviations from this mean solution. By combining the flow and concentration realizations, a mass discharge probability distribution is obtained. The method has the advantage of avoiding the heavy computational burden of three-dimensional numerical flow and transport simulation coupled with geostatistical inversion. It may therefore be of practical relevance to practitioners compared to existing methods that are either too simple or computationally demanding. The method is demonstrated on a field site contaminated with chlorinated ethenes. For this site, we show that including a physically meaningful concentration trend and the cosimulation of hydraulic conductivity and hydraulic gradient across the transect helps constrain the mass discharge uncertainty. The number of sampling points required for accurate mass discharge estimation and the relative influence of different data types on mass discharge uncertainty are discussed.
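The final step described above, combining paired flow and concentration realizations into a mass discharge distribution, can be sketched as follows. The lognormal fields, cell geometry and parameter values are arbitrary stand-ins for the paper's conditional realizations:

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-ins for conditional realizations on a control plane of 40 cells:
# specific discharge q [m/d] and concentration c [g/m^3], cell area 1 m^2.
n_real, n_cells = 500, 40
q = rng.lognormal(mean=-2.0, sigma=0.3, size=(n_real, n_cells))
c = rng.lognormal(mean=0.0, sigma=1.0, size=(n_real, n_cells))

# Mass discharge per realization: sum over the plane of flux * concentration * area
md = np.sum(q * c, axis=1)                   # [g/d], one value per realization

# The ensemble yields a full probability distribution, not a single estimate
p10, p50, p90 = np.percentile(md, [10, 50, 90])
print(p50, p90 / p10)
```

Pairing q and c within each realization (rather than multiplying averaged fields) is what lets spatial correlation between flow and concentration propagate into the discharge uncertainty.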

  5. Geostatistical enhancement of european hydrological predictions

    Science.gov (United States)

    Pugliese, Alessio; Castellarin, Attilio; Parajka, Juraj; Arheimer, Berit; Bagli, Stefano; Mazzoli, Paolo; Montanari, Alberto; Blöschl, Günter

    2016-04-01

    Geostatistical Enhancement of European Hydrological Prediction (GEEHP) is a research experiment developed within the EU-funded SWITCH-ON project, which proposes to conduct comparative experiments in a virtual laboratory in order to share water-related information and tackle changes in the hydrosphere for operational needs (http://www.water-switch-on.eu). The main objective of GEEHP is the prediction of streamflow indices and signatures in ungauged basins at different spatial scales. In particular, among several possible hydrological signatures we focus in our experiment on the prediction of flow-duration curves (FDCs) along the stream network, which have attracted increasing scientific attention in recent decades due to the large number of practical and technical applications of the curves (e.g. hydropower potential estimation, riverine habitat suitability and ecological assessments, etc.). We apply a geostatistical procedure based on Top-kriging, which has recently been shown to be a particularly reliable and easy-to-use regionalization approach, employing two different types of streamflow data: pan-European E-HYPE simulations (http://hypeweb.smhi.se/europehype) and observed daily streamflow series collected in two pilot study regions, i.e. Tyrol (merging data from the Austrian and Italian stream gauging networks) and Sweden. The merger of the two study regions results in a rather large area (~450,000 km²) and might be considered a proxy for a pan-European application of the approach. In a first phase, we implement a bidirectional validation, i.e. E-HYPE catchments are set as training sites to predict FDCs at the same sites where observed data are available, and vice-versa. Such a validation procedure reveals (1) the usability of the proposed approach for predicting the FDCs over the entire river network of interest using alternatively observed data and E-HYPE simulations, and (2) the accuracy of E-HYPE-based predictions of FDCs in ungauged sites. In a

  6. Geostatistical inference using crosshole ground-penetrating radar

    DEFF Research Database (Denmark)

    Looms, Majken C; Hansen, Thomas Mejer; Cordua, Knud Skou

    2010-01-01

    of the subsurface are used to evaluate the uncertainty of the inversion estimate. We have explored the full potential of the geostatistical inference method using several synthetic models of varying correlation structures and have tested the influence of different assumptions concerning the choice of covariance...... reflection profile. Furthermore, the inferred values of the subsurface global variance and the mean velocity have been corroborated with moisture-content measurements, obtained gravimetrically from samples collected at the field site.

  7. Geostatistical analysis of prevailing groundwater conditions and potential solute migration at Elstow, Bedfordshire

    International Nuclear Information System (INIS)

    MacKay, R.; Cooper, T.A.; Porter, J.D.; O'Connell, P.E.; Metcalfe, A.V.

    1988-06-01

    A geostatistical approach is applied in a study of the potential migration of contaminants from a hypothetical waste disposal facility near Elstow, Bedfordshire. A deterministic numerical model of groundwater flow in the Kellaways Sands formation and adjacent layers is coupled with geostatistical simulation of the heterogeneous transmissivity field of this principal formation. A particle tracking technique is used to predict the migration pathways for alternative realisations of flow. Alternative statistical descriptions of the spatial structure of the transmissivity field are implemented and the temporal and spatial distributions of escape of contaminants to the biosphere are investigated. (author)
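The coupling of stochastic transmissivity realizations with particle tracking, as in the Elstow study above, can be sketched in one dimension. The lognormal field, gradient, porosity and cell geometry are illustrative assumptions, not site values:

```python
import numpy as np

rng = np.random.default_rng(9)

# Travel time of a particle advected through a 1-D column of 50 cells whose
# transmissivity T is a fresh lognormal realization each time; the spread of
# escape times mirrors the effect of alternative realisations of the flow field.
n_real, n_cells = 200, 50
dx, gradient, porosity = 10.0, 0.01, 0.3      # cell length [m], assumed values

times = []
for _ in range(n_real):
    T = rng.lognormal(mean=0.0, sigma=0.8, size=n_cells)  # stand-in T field
    v = T * gradient / porosity                           # Darcy-like velocity
    times.append(np.sum(dx / v))                          # cell-by-cell time
times = np.array(times)

# The ensemble of escape times is the temporal distribution the study examines
print(times.mean(), times.std())
```

Heavier-tailed transmissivity statistics widen the escape-time distribution, which is exactly why alternative statistical descriptions of the field were compared.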

  8. Estimating Rainfall in Rodrigues by Geostatistics: (A) Theory | Proag ...

    African Journals Online (AJOL)

    This paper introduces the geostatistical method. Originally devised to treat problems that arise when conventional statistical theory is used in estimating changes in ore grade within a mine, it is, however, an abstract theory of statistical behaviour that is applicable to many circumstances in different areas of geology and other ...

  9. Time-Space Topology Optimization

    DEFF Research Database (Denmark)

    Jensen, Jakob Søndergaard

    2008-01-01

    A method for space-time topology optimization is outlined. The space-time optimization strategy produces structures with optimized material distributions that vary in space and in time. The method is demonstrated for one-dimensional wave propagation in an elastic bar that has a time-dependent Young's modulus and is subjected to a transient load. In the example an optimized dynamic structure is demonstrated that compresses a propagating Gauss pulse.

  10. Calculation of the relative efficiency of thermoluminescent detectors to space radiation

    International Nuclear Information System (INIS)

    Bilski, P.

    2011-01-01

    Thermoluminescent (TL) detectors are often used for measurements of radiation doses in space. Space radiation is composed of a mixture of heavy charged particles, and the relative TL efficiency depends on ionization density. The question therefore arises: what is the relative efficiency of TLDs to the radiation present in space? In an attempt to answer this question, the relative TL efficiency of two types of lithium fluoride detectors for space radiation has been calculated, based on theoretical space spectra and experimental values of TL efficiency for ion beams. The TL efficiency of LiF:Mg,Ti detectors for radiation encountered in a typical low-Earth orbit was found to be close to unity, justifying the common application of these TLDs in space dosimetry. The TL efficiency of LiF:Mg,Cu,P detectors is significantly lower. It was found that shielding may have a significant influence on the relative response of TLDs, due to the changes it causes in the radiation spectrum. If TLDs are applied outside the Earth's magnetosphere, one should expect a lower relative efficiency than in low-Earth orbit.
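The calculation described above amounts to folding an efficiency-versus-ionization-density curve with the dose distribution of the radiation field. A sketch with entirely invented numbers (the LET grid, efficiency values and dose fractions below are illustrative, not the paper's measured data):

```python
import numpy as np

# Assumed relative TL efficiency eta tabulated against LET, and the fraction
# of dose delivered in each LET band for some space radiation field.
let = np.array([0.3, 1.0, 10.0, 50.0, 150.0])        # keV/um, assumed grid
eta = np.array([1.0, 1.0, 0.85, 0.55, 0.35])          # assumed efficiencies
dose_frac = np.array([0.55, 0.25, 0.10, 0.07, 0.03])  # assumed dose fractions

# Dose-weighted mean relative efficiency: sum_i eta_i * d_i / sum_i d_i
eta_eff = np.sum(eta * dose_frac) / np.sum(dose_frac)
print(round(eta_eff, 3))
```

Shielding reshapes `dose_frac` (shifting dose between LET bands), which is how it changes the effective efficiency even though `eta` itself is fixed.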

  11. Exploring prediction uncertainty of spatial data in geostatistical and machine learning approaches

    Science.gov (United States)

    Klump, J. F.; Fouedjio, F.

    2017-12-01

    Geostatistical methods such as kriging with external drift as well as machine learning techniques such as quantile regression forest have been intensively used for modelling spatial data. In addition to providing predictions for target variables, both approaches are able to deliver a quantification of the uncertainty associated with the prediction at a target location. Geostatistical approaches are, by essence, adequate for providing such prediction uncertainties and their behaviour is well understood. However, they often require significant data pre-processing and rely on assumptions that are rarely met in practice. Machine learning algorithms such as random forest regression, on the other hand, require less data pre-processing and are non-parametric. This makes the application of machine learning algorithms to geostatistical problems an attractive proposition. The objective of this study is to compare kriging with external drift and quantile regression forest with respect to their ability to deliver reliable prediction uncertainties for spatial data. In our comparison we use both simulated and real-world datasets. Apart from classical performance indicators, the comparisons make use of accuracy plots, probability interval width plots, and visual examination of the uncertainty maps provided by the two approaches. By comparing random forest regression to kriging we found that both methods produced comparable maps of estimated values for our variables of interest. However, the measure of uncertainty provided by random forest seems to be quite different from the measure of uncertainty provided by kriging. In particular, the lack of spatial context can give misleading results in areas without ground truth data. These preliminary results raise questions about assessing the risks associated with decisions based on the predictions from geostatistical and machine learning algorithms in a spatial context, e.g. mineral exploration.
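The ensemble-spread style of uncertainty that forest-type methods deliver can be illustrated with a numpy-only bootstrap ensemble of local-mean predictors. This is a rough stand-in for quantile regression forest (which uses per-leaf sample distributions), with synthetic data and all names ours:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, 400)
y = np.sin(X) + rng.normal(0.0, 0.2, 400)

def knn_predict(Xtr, ytr, x0, k=20):
    """Local mean of the k nearest training points: a tiny 'tree' stand-in."""
    idx = np.argsort(np.abs(Xtr - x0))[:k]
    return ytr[idx].mean()

# Bootstrap ensemble: each member is fit on a resample of the data, and the
# spread of the member predictions serves as the prediction uncertainty.
x0 = 0.0
preds = []
for _ in range(200):
    b = rng.integers(0, 400, 400)            # bootstrap resample indices
    preds.append(knn_predict(X[b], y[b], x0))
lo, med, hi = np.percentile(preds, [5, 50, 95])
print(lo, med, hi)
```

Note that, as the abstract warns, this spread reflects only sampling variability of the training data; far from any training point it carries no spatial context and can be misleadingly narrow.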

  12. Multiobjective design of aquifer monitoring networks for optimal spatial prediction and geostatistical parameter estimation

    Science.gov (United States)

    Alzraiee, Ayman H.; Bau, Domenico A.; Garcia, Luis A.

    2013-06-01

    Effective sampling of hydrogeological systems is essential in guiding groundwater management practices. Optimal sampling of groundwater systems has previously been formulated based on the assumption that heterogeneous subsurface properties can be modeled using a geostatistical approach. Therefore, the monitoring schemes have been developed to concurrently minimize the uncertainty in the spatial distribution of systems' states and parameters, such as the hydraulic conductivity K and the hydraulic head H, and the uncertainty in the geostatistical model of system parameters using a single objective function that aggregates all objectives. However, it has been shown that the aggregation of possibly conflicting objective functions is sensitive to the adopted aggregation scheme and may lead to distorted results. In addition, the uncertainties in geostatistical parameters affect the uncertainty in the spatial prediction of K and H according to a complex nonlinear relationship, which has often been ineffectively evaluated using a first-order approximation. In this study, we propose a multiobjective optimization framework to assist the design of monitoring networks of K and H with the goal of optimizing their spatial predictions and estimating the geostatistical parameters of the K field. The framework stems from the combination of a data assimilation (DA) algorithm and a multiobjective evolutionary algorithm (MOEA). The DA algorithm is based on the ensemble Kalman filter, a Monte-Carlo-based Bayesian update scheme for nonlinear systems, which is employed to approximate the posterior uncertainty in K, H, and the geostatistical parameters of K obtained by collecting new measurements. Multiple MOEA experiments are used to investigate the trade-off among design objectives and identify the corresponding monitoring schemes. The methodology is applied to design a sampling network for a shallow unconfined groundwater system located in Rocky Ford, Colorado. Results indicate that
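The ensemble Kalman filter update used in the DA step above can be sketched for a toy two-parameter state with one observation. The state, observation operator and noise levels are illustrative assumptions, not the groundwater model:

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy EnKF update: 2-parameter state, one observed quantity h(x) = x0 + x1
n_ens = 500
X = rng.normal([1.0, 2.0], [1.0, 1.0], size=(n_ens, 2))   # prior ensemble
Y = X.sum(axis=1, keepdims=True)                          # predicted obs

obs, obs_var = 4.0, 0.1
# Kalman gain from ensemble covariances: K = C_xy (C_yy + R)^-1
Xc, Yc = X - X.mean(0), Y - Y.mean(0)
C_xy = Xc.T @ Yc / (n_ens - 1)            # (2, 1) state-obs covariance
C_yy = Yc.T @ Yc / (n_ens - 1)            # (1, 1) obs covariance
K = C_xy / (C_yy + obs_var)

# Perturbed-observation update of every ensemble member
d = obs + rng.normal(0.0, np.sqrt(obs_var), size=(n_ens, 1))
X_post = X + (d - Y) @ K.T

print(X_post.mean(0), X_post.sum(1).var())
```

The posterior ensemble both shifts toward the observation and shrinks in spread; it is this posterior spread that the MOEA then trades off against other design objectives.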

  13. Integrating address geocoding, land use regression, and spatiotemporal geostatistical estimation for groundwater tetrachloroethylene.

    Science.gov (United States)

    Messier, Kyle P; Akita, Yasuyuki; Serre, Marc L

    2012-03-06

    Geographic information systems (GIS) based techniques are cost-effective and efficient methods used by state agencies and epidemiology researchers for estimating concentration and exposure. However, budget limitations have made statewide assessments of contamination difficult, especially in groundwater media. Many studies have implemented address geocoding, land use regression, and geostatistics independently, but this is the first to examine the benefits of integrating these GIS techniques to address the need for statewide exposure assessments. A novel framework for concentration exposure is introduced that integrates address geocoding, land use regression (LUR), below-detect data modeling, and Bayesian Maximum Entropy (BME). A LUR model was developed for tetrachloroethylene (PCE) that accounts for point sources and flow direction. We then integrate the LUR model into the BME method as a mean trend while also modeling below-detect data as a truncated Gaussian probability distribution function. We increase available PCE data 4.7-fold over previously available databases through multistage geocoding. The LUR model shows significant influence of dry cleaners at short ranges. The integration of the LUR model as mean trend in BME results in a 7.5% decrease in cross-validation mean square error compared to BME with a constant mean trend.
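The truncated-Gaussian treatment of below-detect data can be illustrated with the conditional mean of a normal variable censored at the detection limit, a plausible single-value imputation for a "below detect" record. The numbers below (local mean, standard deviation, detection limit) are purely illustrative:

```python
import math

def censored_mean(mu, sigma, limit):
    """Expected value of N(mu, sigma^2) conditioned on being below `limit`:
    E[X | X < L] = mu - sigma * phi(a) / Phi(a), with a = (L - mu) / sigma."""
    a = (limit - mu) / sigma
    phi = math.exp(-0.5 * a * a) / math.sqrt(2.0 * math.pi)   # standard normal pdf
    Phi = 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))          # standard normal cdf
    return mu - sigma * phi / Phi

# A PCE "measurement" reported only as "< 1.0 ug/L", with an assumed
# fitted local mean of 0.8 and standard deviation of 0.5:
imputed = censored_mean(0.8, 0.5, 1.0)
```

Conditioning on being below the limit always pulls the imputed value below the unconditional mean, which is why naive substitution of the detection limit itself overstates concentrations.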

  14. Integration of dynamical data in a geostatistical model of reservoir; Integration des donnees dynamiques dans un modele geostatistique de reservoir

    Energy Technology Data Exchange (ETDEWEB)

    Costa Reis, L.

    2001-01-01

    We have developed in this thesis a methodology for integrated characterization of heterogeneous reservoirs, from geologic modeling to history matching. This methodology is applied to the PBR reservoir, situated in the Campos Basin, offshore Brazil, which has been producing since June 1979. This work is an extension of two other theses concerning geologic and geostatistical modeling of the PBR reservoir from well data and seismic information. We extended the geostatistical litho-type model to the whole reservoir by using a particular approach to the non-stationary truncated Gaussian simulation method. This approach facilitated the application of the gradual deformation method to history matching. The main stages of the methodology for dynamic data integration in a geostatistical reservoir model are presented. We constructed a reservoir model, and the initial difficulties in the history matching led us to modify some choices in the geological, geostatistical and flow models. These difficulties show the importance of dynamic data integration in reservoir modeling. The petrophysical property assignment within the litho-types was done by using well test data, and we used an inversion procedure to evaluate the petrophysical parameters of the litho-types. Up-scaling is a necessary stage to reduce the flow simulation time. We compared several up-scaling methods and show that the passage from the fine geostatistical model to the coarse flow model should be done very carefully. The choice of the fitting parameter depends on the objective of the study. In the case of the PBR reservoir, where water is injected in order to improve the oil recovery, the water rate of the producing wells is directly related to the reservoir heterogeneity; thus, the water rate was chosen as the fitting parameter. We obtained significant improvements in the history matching of the PBR reservoir, first by using a method we have proposed, called patchwork. This method allows us to build a coherent
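The gradual deformation method mentioned above combines independent Gaussian realizations so that the result remains Gaussian for any value of the deformation parameter, which is what makes it usable inside a history-matching loop. A minimal sketch, not tied to the PBR model:

```python
import numpy as np

def gradual_deformation(z1, z2, t):
    """Combine two independent standard-Gaussian realizations into a new
    realization that is still standard Gaussian for any deformation
    parameter t, because cos^2 + sin^2 = 1 preserves the variance."""
    return z1 * np.cos(np.pi * t) + z2 * np.sin(np.pi * t)

rng = np.random.default_rng(1)
z1 = rng.standard_normal(10000)
z2 = rng.standard_normal(10000)
z = gradual_deformation(z1, z2, 0.3)   # intermediate realization
```

Varying t continuously deforms the realization from z1 (t = 0) toward z2 (t = 0.5), so a flow-simulation mismatch can be minimized over a single scalar parameter at a time.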

  15. Space-Time Quantum Imaging

    Directory of Open Access Journals (Sweden)

    Ronald E. Meyers

    2015-03-01

    We report on an experimental and theoretical investigation of quantum imaging where the images are stored in both space and time. Ghost images of remote objects are produced with either one or two beams of chaotic laser light generated by a rotating ground glass, with two sensors measuring the reference field and bucket field at different space-time points. We further observe that the ghost images translate depending on the time delay between the sensor measurements. The ghost imaging experiments are performed both with and without turbulence. A discussion of the physics of space-time imaging is presented in terms of a quantum nonlocal two-photon analysis to support the experimental results. The theoretical model includes certain phase factors of the rotating ground glass. These experiments demonstrate a means to investigate the time and space aspects of ghost imaging and show that ghost imaging contains more information per measured photon than was previously recognized: multiple ghost images are stored within the same ghost imaging data sets. This suggests new pathways to explore quantum information stored not only in multi-photon coincidence information but also in time-delayed multi-photon interference. The research is applicable to making enhanced space-time quantum images and videos of moving objects where the images are stored in both space and time.
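A purely classical toy model captures the correlation arithmetic behind ghost imaging: the image is the covariance between the per-pixel reference intensity and the single-valued bucket signal. This sketch uses a 1-D "object" and simulated chaotic speckle, not the paper's quantum two-photon treatment:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_shots = 64, 20000
T = np.zeros(n_pix)
T[20:30] = 1.0                              # 1-D "object": a transmissive slit
I = rng.exponential(1.0, (n_shots, n_pix))  # chaotic speckle intensities per shot
B = I @ T                                   # bucket detector: total transmitted light
# Ghost image: covariance of each reference pixel with the bucket signal
ghost = (I * B[:, None]).mean(0) - I.mean(0) * B.mean()
```

The reference arm never sees the object, yet the covariance reconstructs the slit, because only pixels inside the slit contribute correlated fluctuations to the bucket.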

  16. Optimized Field Sampling and Monitoring of Airborne Hazardous Transport Plumes; A Geostatistical Simulation Approach

    International Nuclear Information System (INIS)

    Chen, DI-WEN

    2001-01-01

    Airborne hazardous plumes inadvertently released during nuclear/chemical/biological incidents are mostly of unknown composition and concentration until measurements are taken of post-accident ground concentrations from plume-ground deposition of constituents. Unfortunately, measurements often come days after the incident and rely on hazardous manned air-vehicle sampling. Before this happens, computational plume migration models are the only source of information on the plume characteristics: constituents, concentrations, directions of travel, ground deposition, etc. A mobile "lighter than air" (LTA) system is being developed at Oak Ridge National Laboratory that will be part of the first response in emergency conditions. These interactive and remote unmanned air vehicles will carry light-weight detectors and weather instrumentation to measure conditions during and after a plume release. This requires a cooperative, computationally organized, GPS-controlled set of LTAs that self-coordinate around the objectives of an emergency situation in restricted time frames. A critical step before an optimum and cost-effective field sampling and monitoring program proceeds is the collection of data that provides statistically significant information in a reliable and expeditious manner. Efficient aerial arrangements of the detectors taking the data (for active airborne release conditions) are necessary for plume identification, computational 3-dimensional reconstruction, and source distribution functions. This report describes the application of stochastic, or geostatistical, simulations to delineate the plume for guiding subsequent sampling and monitoring designs. A case study is presented of building digital plume images based on existing "hard" experimental data and "soft" preliminary transport modeling results for the Prairie Grass Trials Site. Markov Bayes Simulation, a coupled Bayesian/geostatistical methodology, quantitatively combines soft information

  17. Recursive evaluation of space-time lattice Green's functions

    International Nuclear Information System (INIS)

    De Hon, Bastiaan P; Arnold, John M

    2012-01-01

    Up to a multiplicative constant, the lattice Green's function (LGF) as defined in condensed matter physics and lattice statistical mechanics is equivalent to the Z-domain counterpart of the finite-difference time-domain Green's function (GF) on a lattice. Expansion of a well-known integral representation for the LGF on a ν-dimensional hyper-cubic lattice in powers of Z −1 and application of the Chu–Vandermonde identity results in ν − 1 nested finite-sum representations for discrete space-time GFs. Due to severe numerical cancellations, these nested finite sums are of little practical use. For ν = 2, the finite sum may be evaluated in closed form in terms of a generalized hypergeometric function. For special lattice points, that representation simplifies considerably, while on the other hand the finite-difference stencil may be used to derive single-lattice-point second-order recurrence schemes for generating 2D discrete space-time GF time sequences on the fly. For arbitrary symbolic lattice points, Zeilberger's algorithm produces a third-order recurrence operator with polynomial coefficients of the sixth degree. The corresponding recurrence scheme constitutes the most efficient numerical method for the majority of lattice points, in spite of the fact that for explicit numeric lattice points the associated third-order recurrence operator is not the minimum recurrence operator. As regards the asymptotic bounds for the possible solutions to the recurrence scheme, Perron's theorem precludes factorial or exponential growth. Along horizontal lattice directions, rapid initial growth does occur, but poses no problems in augmented dynamic-range fixed precision arithmetic. By analysing long-distance wave propagation along a horizontal lattice direction, we have concluded that the chirp-up oscillations of the discrete space-time GF are the root cause of grid dispersion anisotropy. With each factor of ten increase in the lattice distance, one would have to roughly

  18. Time Synchronization and Distribution Mechanisms for Space Networks

    Science.gov (United States)

    Woo, Simon S.; Gao, Jay L.; Clare, Loren P.; Mills, David L.

    2011-01-01

    This work discusses research on the problems of synchronizing and distributing time information between spacecraft based on the Network Time Protocol (NTP), a standard time synchronization protocol widely used in terrestrial networks. The Proximity-1 Space Link Interleaved Time Synchronization (PITS) Protocol was designed and developed for synchronizing spacecraft in proximity, i.e., separated by less than 100,000 km. A particular application is synchronization between a Mars orbiter and rover; lunar scenarios as well as outer-planet deep space mother-ship-probe missions may also apply. The spacecraft with more accurate time information functions as the time server, and the other spacecraft functions as the time client. PITS can be easily integrated into and adapted to the CCSDS Proximity-1 Space Link Protocol with minor modifications. In particular, PITS can take advantage of the timestamping strategy that the underlying link layer provides for accurate time offset calculation. The PITS algorithm achieves time synchronization with eight consecutive space network time packet exchanges between two spacecraft. PITS can detect and avoid possible errors from receiving duplicate and out-of-order packets by comparing them with the current state variables and timestamps. Further, PITS is able to detect error events and autonomously recover from unexpected events that can occur during the time synchronization and distribution process; this capability achieves an additional level of protocol protection on top of CRC or error correction codes. PITS is a lightweight and efficient protocol, eliminating the need for explicit frame sequence numbers and long buffer storage. The PITS protocol is capable of providing time synchronization and distribution services for the more general domain in which multiple entities need to achieve time synchronization using a single point-to-point link.
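NTP-style protocols such as PITS build on a four-timestamp exchange, from which the clock offset and round-trip delay follow directly. A sketch of the standard NTP calculation (the timestamp values are illustrative, with the server clock 5 s ahead and 1 s of one-way light time):

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Clock offset and round-trip delay from one NTP-style exchange.

    t1: client send time, t2: server receive time,
    t3: server send time,  t4: client receive time.
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Server clock 5 s ahead, 1 s one-way light time, 0.5 s server hold time:
offset, delay = ntp_offset_delay(100.0, 106.0, 106.5, 102.5)   # -> (5.0, 2.0)
```

The offset formula assumes symmetric path delays; asymmetry between the two legs of the exchange appears directly as offset error, which is one reason link-layer timestamping (as in PITS) matters.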

  19. Time-lapse analysis of methane quantity in Mary Lee group of coal seams using filter-based multiple-point geostatistical simulation

    Science.gov (United States)

    Karacan, C. Özgen; Olea, Ricardo A.

    2013-01-01

    Coal seam degasification and its success are important for controlling methane, and thus for the health and safety of coal miners. During the course of degasification, properties of coal seams change. Thus, the changes in coal reservoir conditions and in-place gas content as well as methane emission potential into mines should be evaluated by examining time-dependent changes and the presence of major heterogeneities and geological discontinuities in the field. In this work, time-lapsed reservoir and fluid storage properties of the New Castle coal seam, Mary Lee/Blue Creek seam, and Jagger seam of Black Warrior Basin, Alabama, were determined from gas and water production history matching and production forecasting of vertical degasification wellbores. These properties were combined with isotherm and other important data to compute gas-in-place (GIP) and its change with time at borehole locations. Time-lapsed training images (TIs) of GIP and GIP difference corresponding to each coal and date were generated by using these point-wise data and Voronoi decomposition on the TI grid, which included faults as discontinuities for expansion of Voronoi regions. Filter-based multiple-point geostatistical simulations, which were preferred in this study due to anisotropies and discontinuities in the area, were used to predict time-lapsed GIP distributions within the study area. Performed simulations were used for mapping spatial time-lapsed methane quantities as well as their uncertainties within the study area.
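The point-wise gas-in-place values that feed the training images come from a volumetric relationship of the general form GIP = area × thickness × density × gas content. A sketch with illustrative units and numbers, not the study's data:

```python
def gas_in_place(area_m2, thickness_m, coal_density_t_per_m3, gas_content_m3_per_t):
    """Volumetric gas-in-place:
    GIP [m^3] = area [m^2] * thickness [m] * density [t/m^3] * gas content [m^3/t]."""
    return area_m2 * thickness_m * coal_density_t_per_m3 * gas_content_m3_per_t

# 1 km^2 of a 2 m thick seam, density 1.4 t/m^3, 8 m^3 of gas per tonne:
gip = gas_in_place(1.0e6, 2.0, 1.4, 8.0)   # -> 2.24e7 m^3
```

Time-lapse differences in GIP then reduce to differences in the gas-content term, since the geometric and density terms are essentially constant over the degasification period.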

  20. Metric space construction for the boundary of space-time

    International Nuclear Information System (INIS)

    Meyer, D.A.

    1986-01-01

    A distance function between points in space-time is defined and used to consider the manifold as a topological metric space. The properties of the distance function are investigated: conditions under which the metric and manifold topologies agree, the relationship with the causal structure of the space-time and with the maximum lifetime function of Wald and Yip, and in terms of the space of causal curves. The space-time is then completed as a topological metric space; the resultant boundary is compared with the causal boundary and is also calculated for some pertinent examples.

  1. Geostatistics for Mapping Leaf Area Index over a Cropland Landscape: Efficiency Sampling Assessment

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Haro

    2010-11-01

    This paper evaluates the performance of spatial methods for estimating leaf area index (LAI) fields from ground-based measurements at high spatial resolution over a cropland landscape. Three geostatistical variants of the kriging technique are used: ordinary kriging (OK), collocated cokriging (CKC), and kriging with an external drift (KED). The study focused on the influence of the spatial sampling protocol, auxiliary information, and spatial resolution on the estimates. The main advantage of these models lies in the possibility of considering the spatial dependence of the data and, in the case of KED and CKC, the auxiliary information available at each prediction location. A high-resolution NDVI image computed from SPOT TOA reflectance data is used as an auxiliary variable in the LAI predictions. The CKC and KED predictions have proven the relevance of the auxiliary information in reproducing the spatial pattern at local scales, with the KED model proving to be the best estimator when a non-stationary trend is observed. Advantages and limitations of the methods for LAI field prediction under two systematic and two stratified spatial samplings are discussed for high (20 m), medium (300 m) and coarse (1 km) spatial scales. KED exhibited the best observed local accuracy for all the spatial samplings, while the OK model provides comparable results when a well-stratified sampling scheme is considered by land cover.
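Of the three kriging variants, ordinary kriging is the simplest to sketch: solve one linear system containing semivariogram values plus an unbiasedness constraint, then take the weighted sum of the observations. The spherical-variogram parameters below are illustrative, not fitted to any LAI data:

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, sill=1.0, vrange=50.0):
    """Ordinary kriging of one target point with a spherical semivariogram.
    `sill` and `vrange` are illustrative variogram parameters."""
    def gamma(h):                        # spherical semivariogram model
        h = np.minimum(h / vrange, 1.0)
        return sill * (1.5 * h - 0.5 * h ** 3)

    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, :n] = A[:n, n] = 1.0            # Lagrange row/column: weights sum to 1
    A[n, n] = 0.0
    b = np.append(gamma(np.linalg.norm(xy - xy0, axis=1)), 1.0)
    w = np.linalg.solve(A, b)[:n]        # kriging weights
    return w @ z

xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
z = np.array([1.0, 2.0, 2.0, 3.0])
est = ordinary_kriging(xy, z, np.array([5.0, 5.0]))
```

KED extends the same system with rows for the drift variable (here, NDVI would play that role); the observation weights then also reproduce the drift at the target location.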

  2. Overview and technical and practical aspects for use of geostatistics in hazardous-, toxic-, and radioactive-waste-site investigations

    International Nuclear Information System (INIS)

    Bossong, C.R.; Karlinger, M.R.; Troutman, B.M.; Vecchia, A.V.

    1999-01-01

    Technical and practical aspects of applying geostatistics are developed for individuals involved in investigation at hazardous-, toxic-, and radioactive-waste sites. Important geostatistical concepts, such as variograms and ordinary, universal, and indicator kriging, are described in general terms for introductory purposes and in more detail for practical applications. Variogram modeling using measured ground-water elevation data is described in detail to illustrate principles of stationarity, anisotropy, transformations, and cross validation. Several examples of kriging applications are described using ground-water-level elevations, bedrock elevations, and ground-water-quality data. A review of contemporary literature and selected public domain software associated with geostatistics also is provided, as is a discussion of alternative methods for spatial modeling, including inverse distance weighting, triangulation, splines, trend-surface analysis, and simulation.
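Variogram modeling starts from the classical (Matheron) empirical semivariogram, computed directly from scattered data by averaging half squared differences within lag bins. A sketch on synthetic data with a linear trend; the data are illustrative, not the report's ground-water elevations:

```python
import numpy as np

def empirical_semivariogram(xy, z, bins):
    """Classical (Matheron) estimator:
    gamma(h) = mean of 0.5 * (z_i - z_j)^2 over all pairs in each lag bin."""
    i, j = np.triu_indices(len(z), k=1)           # all unordered point pairs
    h = np.linalg.norm(xy[i] - xy[j], axis=1)     # pair separation distances
    sq = 0.5 * (z[i] - z[j]) ** 2
    which = np.digitize(h, bins)
    return np.array([sq[which == k].mean() if np.any(which == k) else np.nan
                     for k in range(1, len(bins))])

rng = np.random.default_rng(3)
xy = rng.uniform(0.0, 100.0, (200, 2))
z = 0.01 * xy[:, 0] + rng.normal(0.0, 0.1, 200)   # linear trend plus noise
gam = empirical_semivariogram(xy, z, np.linspace(0.0, 100.0, 11))
```

An unbounded, steadily climbing semivariogram like the one this trend produces is the classic signature of non-stationarity, which is exactly the situation where universal kriging or prior trend removal is indicated.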

  3. Geostatistics applied to estimation of uranium bearing ore reserves

    International Nuclear Information System (INIS)

    Urbina Galan, L.I.

    1982-01-01

    A computer-assisted method for assessing uranium-bearing ore deposit reserves is analyzed. Determinations of grade-thickness (ore grade multiplied by the thickness of mineralization) were obtained for each drill-hole layer by means of the theory of regionalized variables. Geostatistical results were derived using a Fortran program on a DEC 20/40 system. (author)

  4. A space-time mixed galerkin marching-on-in-time scheme for the time-domain combined field integral equation

    KAUST Repository

    Beghein, Yves

    2013-03-01

    The time domain combined field integral equation (TD-CFIE), which is constructed from a weighted sum of the time domain electric and magnetic field integral equations (TD-EFIE and TD-MFIE) for analyzing transient scattering from closed perfect electrically conducting bodies, is free from spurious resonances. The standard marching-on-in-time technique for discretizing the TD-CFIE uses Galerkin and collocation schemes in space and time, respectively. Unfortunately, the standard scheme is theoretically not well understood: stability and convergence have been proven for only one class of space-time Galerkin discretizations. Moreover, existing discretization schemes are nonconforming, i.e., the TD-MFIE contribution is tested with divergence conforming functions instead of curl conforming functions. We therefore introduce a novel space-time mixed Galerkin discretization for the TD-CFIE. A family of temporal basis and testing functions with arbitrary order is introduced. It is explained how the corresponding interactions can be computed efficiently by existing collocation-in-time codes. The spatial mixed discretization is made fully conforming and consistent by leveraging both Rao-Wilton-Glisson and Buffa-Christiansen basis functions and by applying the appropriate bi-orthogonalization procedures. The combination of both techniques is essential when high accuracy over a broad frequency band is required. © 2012 IEEE.

  5. Geostatistical analysis and kriging of Hexachlorocyclohexane residues in topsoil from Tianjin, China

    International Nuclear Information System (INIS)

    Li, B.G.; Cao, J.; Liu, W.X.; Shen, W.R.; Wang, X.J.; Tao, S.

    2006-01-01

    A previously published data set of HCH isomer concentrations in topsoil samples from Tianjin, China, was subjected to geospatial analysis. Semivariograms were calculated and modeled using geostatistical techniques. Parameters of semivariogram models were analyzed and compared for four HCH isomers. Two-dimensional ordinary block kriging was applied to HCH isomers data set for mapping purposes. Dot maps and gray-scaled raster maps of HCH concentrations were presented based on kriging results. The appropriateness of the kriging procedure for mapping purposes was evaluated based on the kriging errors and kriging variances. It was found that ordinary block kriging can be applied to interpolate HCH concentrations in Tianjin topsoil with acceptable accuracy for mapping purposes. - Geostatistical analysis and kriging were applied to HCH concentrations in topsoil of Tianjin, China for mapping purposes

  6. LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics

    Science.gov (United States)

    Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel

    2017-10-01

    Reservoir modeling is a very important task that permits the representation of a geological region of interest, so as to generate a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of a training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines a technique called Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produces good-quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
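The core LSH idea for Hamming space, bit sampling, can be sketched for binary patterns: the hash key is the pattern restricted to a fixed random subset of positions, so patterns with small Hamming distance tend to land in the same bucket and only that bucket needs an exact comparison. This is a generic sketch, not LSHSIM's actual implementation (which also uses RLE compression):

```python
import numpy as np

rng = np.random.default_rng(4)

def lsh_key(pattern, positions):
    """Bit-sampling LSH key: the pattern's values at fixed sampled positions."""
    return tuple(pattern[positions])

patterns = rng.integers(0, 2, (500, 64))           # 500 binary training patterns
positions = rng.choice(64, size=8, replace=False)  # shared sampled positions
buckets = {}
for idx, p in enumerate(patterns):
    buckets.setdefault(lsh_key(p, positions), []).append(idx)

# Query: pattern 0 with two bits flipped at non-sampled positions,
# so it is guaranteed to hash into the same bucket.
flip = [i for i in range(64) if i not in positions][:2]
query = patterns[0].copy()
query[flip] ^= 1
candidates = buckets.get(lsh_key(query, positions), [])
# Exact Hamming comparison only within the (small) candidate bucket:
best = min(candidates, key=lambda i: int(np.sum(patterns[i] != query)))
```

With 8 sampled bits the expected bucket size here is about 500/256 ≈ 2 patterns, so the exact Hamming check runs on a handful of candidates instead of the full pattern database.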

  7. A geostatistical method applied to the geochemical study of the Chichinautzin Volcanic Field in Mexico

    Science.gov (United States)

    Robidoux, P.; Roberge, J.; Urbina Oviedo, C. A.

    2011-12-01

    The origin of magmatism and the role of the subducted Cocos Plate in the Chichinautzin volcanic field (CVF), Mexico, is still a subject of debate. It has been established that mafic magmas of calc-alkaline type (subduction) and alkaline type (OIB) are produced in the CVF, and the two groups cannot be related by simple fractional crystallization; therefore, many geochemical studies have been done and many models have been proposed. The main goal of the work presented here is to provide a new tool for the visualization and interpretation of geochemical data using geostatistics and geospatial analysis techniques. It contains a complete geodatabase built from referenced samples over the 2500 km2 area of the CVF and its neighbouring stratovolcanoes (Popocatepetl, Iztaccihuatl and Nevado de Toluca). From this database, maps of different geochemical markers were produced to visualize geochemical signatures geographically, to test the statistical distribution with a cartographic technique, and to highlight any spatial correlations. The distribution and regionalization of the geochemical signatures can be viewed in two-dimensional space using specific spatial analysis tools from a Geographic Information System (GIS). The model of spatial distribution is tested with Linear Decrease (LD) and Inverse Distance Weighting (IDW) interpolation techniques because they best represent the geostatistical characteristics of the geodatabase. We found that the ratios Ba/Nb, Nb/Ta and Th/Nb show a first-order tendency, that is, visible spatial variation over a large-scale area. Monogenetic volcanoes in the center of the CVF have distinct values compared to those of the Popocatepetl-Iztaccihuatl polygenetic complex, which are spatially well defined. Inside the Valley of Mexico, a large number of monogenetic cones in the eastern portion of the CVF have ratios similar to the Iztaccihuatl and Popocatepetl complex. Other ratios, such as alkalis vs SiO2, V/Ti, La/Yb and Zr/Y, show different spatial tendencies. In that case, second
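The IDW interpolation used for maps like these weights each sample by an inverse power of its distance to the prediction point. A minimal sketch, with the power of 2 chosen for illustration:

```python
import numpy as np

def idw(xy, z, xy0, power=2.0):
    """Inverse distance weighting: weights fall off as 1 / d**power."""
    d = np.linalg.norm(xy - xy0, axis=1)
    if np.any(d == 0):                   # query coincides with a sample point
        return z[d == 0][0]
    w = 1.0 / d ** power
    return (w @ z) / w.sum()

xy = np.array([[0.0, 0.0], [4.0, 0.0]])
z = np.array([10.0, 20.0])
est = idw(xy, z, np.array([1.0, 0.0]))   # 3x closer to the first sample
```

IDW is an exact interpolator (it honors the data at sample locations) but, unlike kriging, it provides no estimation variance, which limits it to visualization rather than uncertainty assessment.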

  8. Geostatistical integration and uncertainty in pollutant concentration surface under preferential sampling

    Directory of Open Access Journals (Sweden)

    Laura Grisotto

    2016-04-01

    In this paper the focus is on environmental statistics, with the aim of estimating the concentration surface and related uncertainty of an air pollutant. We used air quality data recorded by a network of monitoring stations within a Bayesian framework to overcome difficulties in accounting for prediction uncertainty and to integrate information provided by deterministic models based on emissions, meteorology, and chemico-physical characteristics of the atmosphere. Several authors have proposed such integration, but all the proposed approaches rely on the representativeness and completeness of existing air pollution monitoring networks. We considered the situation in which the spatial process of interest and the sampling locations are not independent. This is known in the literature as the preferential sampling problem, which, if ignored in the analysis, can bias geostatistical inferences. We developed a Bayesian geostatistical model to account for preferential sampling, with the main interest in statistical integration and uncertainty. We used PM10 data arising from the air quality network of the Environmental Protection Agency of Lombardy Region (Italy) and numerical outputs from the deterministic model. We specified an inhomogeneous Poisson process for the sampling location intensities and a shared spatial random component model for the dependence between the spatial locations of monitors and the pollution surface. We found greater differences in predicted standard deviations in areas not properly covered by the air quality network. In conclusion, in this context inferences on prediction uncertainty may be misleading when geostatistical modelling does not take preferential sampling into account.
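The effect of preferential sampling is easy to demonstrate: when monitor locations are drawn with probability increasing in the pollutant field, the naive sample mean is biased upward relative to a uniform design. A toy 1-D sketch, not the PM10 data:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
field = np.linspace(0.0, 10.0, n)          # a simple 1-D "pollution" gradient
# Preferential design: monitors are more likely where pollution is high
p_pref = field / field.sum()
pref_sites = rng.choice(n, size=60, replace=False, p=p_pref)
unif_sites = rng.choice(n, size=60, replace=False)
bias_pref = field[pref_sites].mean() - field.mean()
bias_unif = field[unif_sites].mean() - field.mean()
```

A model that treats the preferentially placed monitors as if they were a uniform design inherits this bias, which is why the paper links the sampling-intensity process and the pollution surface through a shared spatial random component.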

  9. Gauge Gravity and Space-Time

    OpenAIRE

    Wu, Ning

    2012-01-01

    When we discuss problems of gravity, we cannot avoid some fundamental physical questions, such as space-time, inertia, and inertial reference frames. The goal of this paper is to discuss the logic system of gravity theory and the problems of space-time, inertia, and inertial reference frames, and to set up a theory of space-time within the gauge theory of gravity. Based on this theory, it is possible for humankind to manipulate physical space-time on earth, and produce a machin...

  10. Forecasting Interest Rates Using Geostatistical Techniques

    Directory of Open Access Journals (Sweden)

    Giuseppe Arbia

    2015-11-01

    Geostatistical spatial models are widely used in many applied fields to forecast data observed on continuous three-dimensional surfaces. We propose to extend their use to finance and, in particular, to forecasting yield curves. We present the results of an empirical application in which we apply the proposed method to forecast Euro Zero Rates (2003–2014) using the ordinary kriging method based on the anisotropic variogram. Furthermore, a comparison with other recent methods for forecasting yield curves is proposed. The results show that the model is characterized by good predictive accuracy and is competitive with the other forecasting models considered.

  11. Unsupervised classification of multivariate geostatistical data: Two algorithms

    Science.gov (United States)

    Romary, Thomas; Ors, Fabien; Rivoirard, Jacques; Deraisme, Jacques

    2015-12-01

    With the increasing development of remote sensing platforms and the evolution of sampling facilities in the mining and oil industries, spatial datasets are becoming increasingly large, involve a growing number of variables and cover wider and wider areas. Therefore, it is often necessary to split the domain of study to account for radically different behaviors of the natural phenomenon over the domain and to simplify the subsequent modeling step. The definition of these areas can be seen as a problem of unsupervised classification, or clustering, where we try to divide the domain into homogeneous domains with respect to the values taken by the variables at hand. The application of classical clustering methods, designed for independent observations, does not ensure the spatial coherence of the resulting classes. Image segmentation methods, based on e.g. Markov random fields, are not adapted to irregularly sampled data. Other existing approaches, based on mixtures of Gaussian random functions estimated via the expectation-maximization algorithm, are limited to reasonable sample sizes and a small number of variables. In this work, we propose two algorithms based on adaptations of classical algorithms to multivariate geostatistical data. Both algorithms are model free and can handle large volumes of multivariate, irregularly spaced data. The first one proceeds by agglomerative hierarchical clustering. The spatial coherence is ensured by a proximity condition imposed for two clusters to merge. This proximity condition relies on a graph organizing the data in the coordinates space. The hierarchical algorithm can then be seen as a graph-partitioning algorithm. Following this interpretation, a spatial version of the spectral clustering algorithm is also proposed. The performances of both algorithms are assessed on toy examples and a mining dataset.
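The proximity condition for merging can be sketched in one dimension, where "adjacent" clusters are simply neighbours along a line: only spatially contiguous clusters may merge, so every resulting class is spatially connected. A toy sketch of this idea, not the authors' algorithm:

```python
import numpy as np

def spatial_agglomerative(values, n_clusters):
    """Toy 1-D spatially constrained hierarchical clustering: only spatially
    adjacent clusters may merge, which keeps every cluster connected."""
    clusters = [[i] for i in range(len(values))]
    while len(clusters) > n_clusters:
        # merge cost for each pair of neighbouring clusters = |difference of means|
        costs = [abs(np.mean(values[clusters[k]]) - np.mean(values[clusters[k + 1]]))
                 for k in range(len(clusters) - 1)]
        k = int(np.argmin(costs))
        clusters[k:k + 2] = [clusters[k] + clusters[k + 1]]   # merge cheapest pair
    return clusters

# Two value regimes along a transect: the split lands at the regime boundary.
vals = np.array([1.0, 1.1, 0.9, 1.0, 5.0, 5.2, 4.9, 5.1])
parts = spatial_agglomerative(vals, 2)
```

In two or more dimensions the neighbour list generalizes to the adjacency graph mentioned in the abstract (e.g. a Delaunay graph over the sample coordinates), and the same merge loop becomes a graph-partitioning procedure.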

  12. Efficient Means of Detecting Neutral Atoms in Space

    Science.gov (United States)

    Zinicola, W. N.

    2006-12-01

    This summer, the Society of Physics Students granted me the opportunity to participate in an internship with the National Aeronautics and Space Administration (NASA) and the University of Maryland. Our chief interest was analyzing low energy neutral atoms created by random interactions of ions in space plasma. From detecting these neutrals one can project an image of the plasma's composition and of how the plasma changes through interactions with the solar wind. Presently, low energy neutral atom detectors have poor efficiency, typically in the range of 1%; our goal was to increase this efficiency. To detect low energy neutrals we must first convert them from neutral molecules to negatively charged ions. Once converted, these "new" negatively charged ions can be easily detected and completely analyzed, giving us information about their energy, mass, and instantaneous direction. The efficiency of the detector is drastically affected by the surface used for converting these neutrals. My job was first to create thin metal conversion surfaces and then, using an X-ray photoelectron spectrometer, to analyze atomic surface composition and gather work function values. Once the work function values were known, we placed the surfaces in our neutral detector and measured their conversion efficiencies. Finally, a relation between the work function of a metal surface and its conversion efficiency was generated. With this relationship accurately measured, one could use this information to suggest which surface would best increase detection efficiency. If we could increase the efficiency of these low energy neutral atom detectors by even 1%, we would be able to decrease the size of the detector, therefore making it cheaper and more applicable for space exploration.* * A special thanks to Dr. Michael Coplan of the University of Maryland for his support and guidance through all my research.

  13. Geostatistical risk estimation at waste disposal sites in the presence of hot spots

    International Nuclear Information System (INIS)

    Komnitsas, Kostas; Modis, Kostas

    2009-01-01

    The present paper aims to estimate risk by using geostatistics at the wider coal mining/waste disposal site of Belkovskaya, Tula region, in Russia. In this area the presence of hot spots causes a spatial trend in the mean value of the random field and a non-Gaussian data distribution. Prior to application of geostatistics, subtraction of trend and appropriate smoothing and transformation of the data into a Gaussian form were carried out; risk maps were then generated for the wider study area in order to assess the probability of exceeding risk thresholds. Finally, the present paper discusses the need for homogenization of soil risk thresholds regarding hazardous elements that will enhance reliability of risk estimation and enable application of appropriate rehabilitation actions in contaminated areas.
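The Gaussian transformation step mentioned above is often done with a rank-based normal-score transform, which maps any continuous distribution onto standard-Gaussian scores. A sketch using Python's standard-library inverse normal CDF; this is a generic illustration, not the paper's exact procedure:

```python
import numpy as np
from statistics import NormalDist

def normal_score_transform(z):
    """Rank-based transform of skewed data to standard-Gaussian scores,
    a common preprocessing step before kriging non-Gaussian data."""
    ranks = np.argsort(np.argsort(z))      # rank 0..n-1 of each value
    p = (ranks + 0.5) / len(z)             # plotting positions strictly in (0, 1)
    inv = NormalDist().inv_cdf
    return np.array([inv(pi) for pi in p])

# Heavily right-skewed "contamination" data become standard-Gaussian scores:
skewed = np.random.default_rng(6).lognormal(0.0, 1.0, 1000)
scores = normal_score_transform(skewed)
```

The transform is monotone, so kriged scores can be back-transformed through the empirical quantiles; hot spots survive as high ranks but no longer distort the variogram the way raw extreme values do.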

  14. Geostatistical Investigations of Displacements on the Basis of Data from the Geodetic Monitoring of a Hydrotechnical Object

    Science.gov (United States)

    Namysłowska-Wilczyńska, Barbara; Wynalek, Janusz

    2017-12-01

    Geostatistical methods make the analysis of measurement data possible. This article presents the problems directed towards the use of geostatistics in spatial analysis of displacements based on geodetic monitoring. Using methods of applied (spatial) statistics, the research deals with interesting and current issues connected to space-time analysis, modeling displacements and deformations, as applied to any large-area objects on which geodetic monitoring is conducted (e.g., water dams, urban areas in the vicinity of deep excavations, areas at a macro-regional scale subject to anthropogenic influences caused by mining, etc.). These problems are very crucial, especially for safety assessment of important hydrotechnical constructions, as well as for modeling and estimating mining damage. Based on the geodetic monitoring data, a substantial basic empirical material was created, comprising many years of research results concerning displacements of controlled points situated on the crown and foreland of an exemplary earth dam, and used to assess the behaviour and safety of the object during its whole operating period. A research method at a macro-regional scale was applied to investigate some phenomena connected with the operation of the analysed big hydrotechnical construction. Applying a semivariogram function enabled the spatial variability analysis of displacements. Isotropic empirical semivariograms were calculated and then, theoretical parameters of analytical functions were determined, which approximated the courses of the mentioned empirical variability measure. Using ordinary (block) kriging at the grid nodes of an elementary spatial grid covering the analysed object, the values of the Z* estimated means of displacements were calculated together with the accompanying assessment of uncertainty estimation - a standard deviation of estimation σk. 
Raster maps of the distribution of estimated averages Z* and raster maps of deviations of estimation σk (in perspective
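    The semivariogram workflow in this record (isotropic empirical semivariograms, later approximated by theoretical models) rests on binning halved squared differences of paired observations by separation distance. A minimal sketch of the empirical step, with hypothetical names:

    ```python
    import math
    from collections import defaultdict

    def empirical_semivariogram(points, values, lag_width):
        """Isotropic empirical semivariogram: gamma(h) = mean squared difference / 2,
        with point pairs binned by separation distance into lags of width lag_width."""
        sums = defaultdict(float)
        counts = defaultdict(int)
        n = len(points)
        for i in range(n):
            for j in range(i + 1, n):
                h = math.dist(points[i], points[j])
                b = int(h // lag_width)
                sums[b] += (values[i] - values[j]) ** 2
                counts[b] += 1
        # Return {lag-bin midpoint: gamma estimate}
        return {(b + 0.5) * lag_width: sums[b] / (2 * counts[b]) for b in sorted(counts)}
    ```

    A theoretical model (spherical, exponential, etc.) would then be fitted to these lag estimates before kriging.
    
    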

  15. Geostatistical methods for the integrated information; Metodos geoestadisticos para la integracion de informacion

    Energy Technology Data Exchange (ETDEWEB)

    Cassiraga, E F; Gomez-Hernandez, J J [Departamento de Ingenieria Hidraulica y Medio Ambiente, Universidad Politecnica de Valencia, Valencia (Spain)

    1996-10-01

    The main objective of this report is to describe the different geostatistical techniques available for integrating geophysical and hydrological parameters. We analyze the characteristics of the estimation methods used in other studies.

  16. Assessment of effectiveness of geologic isolation systems: geostatistical modeling of pore velocity

    International Nuclear Information System (INIS)

    Devary, J.L.; Doctor, P.G.

    1981-06-01

    A significant part of evaluating a geologic formation as a nuclear waste repository involves the modeling of contaminant transport in the surrounding media in the event the repository is breached. The commonly used contaminant transport models are deterministic. However, the spatial variability of hydrologic field parameters introduces uncertainties into contaminant transport predictions. This paper discusses the application of geostatistical techniques to the modeling of spatially varying hydrologic field parameters required as input to contaminant transport analyses. Kriging estimation techniques were applied to Hanford Reservation field data to calculate hydraulic conductivity and the ground-water potential gradients. These quantities were statistically combined to estimate the groundwater pore velocity and to characterize the pore velocity estimation error. Combining geostatistical modeling techniques with product error propagation techniques results in an effective stochastic characterization of groundwater pore velocity, a hydrologic parameter required for contaminant transport analyses
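    The product error propagation mentioned here can be written with the first-order (delta-method) formula for the Darcy pore velocity v = K·i/n. A sketch under the simplifying assumption that conductivity K and hydraulic gradient i are independent; the names and this particular variance formula are illustrative, not the report's exact procedure:

    ```python
    def pore_velocity_stats(k_mean, k_var, i_mean, i_var, porosity):
        """First-order (delta-method) mean and variance of v = K * i / n,
        treating conductivity K and gradient i as independent random fields."""
        v_mean = k_mean * i_mean / porosity
        # Var(K*i) ~ i^2 Var(K) + K^2 Var(i) to first order
        v_var = (i_mean ** 2 * k_var + k_mean ** 2 * i_var) / porosity ** 2
        return v_mean, v_var
    ```
    
    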

  17. Space and time, matter and mind the relationship between reality and space-time

    CERN Document Server

    1994-01-01

    In principle, the elements of space and time cannot be measured. Therefore, the following question arises: How are reality and space-time related to each other? In this book, it is argued on the basis of many facts that reality is not embedded but projected onto space and time. We can never make statements about the actual reality outside (basic reality), but we can "only" form pictures of it. These are pictures of the same reality on different levels. From this point of view, the "hard" objects (matter) and the products of the mind are similar in character.

  18. Approximate solution of space and time fractional higher order phase field equation

    Science.gov (United States)

    Shamseldeen, S.

    2018-03-01

    This paper is concerned with a class of space and time fractional partial differential equation (STFDE) with Riesz derivative in space and Caputo in time. The proposed STFDE is considered as a generalization of a sixth-order partial phase field equation. We describe the application of the optimal homotopy analysis method (OHAM) to obtain an approximate solution for the suggested fractional initial value problem. An averaged-squared residual error function is defined and used to determine the optimal convergence control parameter. Two numerical examples are studied, considering periodic and non-periodic initial conditions, to justify the efficiency and the accuracy of the adopted iterative approach. The dependence of the solution on the order of the fractional derivative in space and time and model parameters is investigated.
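    For reference, the two fractional operators named in this abstract have the standard definitions below (orders and symbols are generic; the paper's specific sixth-order model is not restated here):

    ```latex
    % Caputo time-fractional derivative of order \alpha, with m-1 < \alpha < m:
    {}^{C}D_{t}^{\alpha} u(t)
      = \frac{1}{\Gamma(m-\alpha)} \int_{0}^{t}
        \frac{u^{(m)}(\tau)}{(t-\tau)^{\alpha+1-m}} \, d\tau

    % Riesz space-fractional derivative of order \beta (\beta \neq 1),
    % via the left and right Riemann--Liouville derivatives:
    \frac{\partial^{\beta} u}{\partial |x|^{\beta}}
      = -\frac{1}{2\cos(\pi\beta/2)}
        \left( {}_{-\infty}D_{x}^{\beta} + {}_{x}D_{+\infty}^{\beta} \right) u
    ```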

  19. Space, time and conservation laws

    International Nuclear Information System (INIS)

    Aronov, R.A.; Ugarov, V.A.

    1978-01-01

    The Noether theorem, establishing a correspondence between conservation laws and symmetry properties (those of space and time in particular), is considered. The theorem is based on one of the possible ways of finding equations of motion for a physical system: from a certain expression (the action functional), equations of motion can be obtained which in principle contain no new physical assertions in comparison with the Newtonian laws. Noether suggested a way of deriving conservation laws by transforming space and time coordinates. The consequences of the Noether theorem raise a number of problems: 1) Are conservation laws (energy, momentum) consequences of the symmetry properties of space and time? 2) Is it possible to obtain conservation laws in a theory neglecting equations of motion? 3) What is of primary importance: equations of motion, conservation laws, or the symmetry properties of space and time? It is shown that the direct Noether theorem does not testify to conservation laws being stipulated by the symmetry properties of space and time rather than by other, non-space-time properties of material systems in objective reality, and it says nothing of whether there is any subordination between symmetry properties and conservation laws

  20. Mercury emissions from coal combustion in Silesia, analysis using geostatistics

    Science.gov (United States)

    Zasina, Damian; Zawadzki, Jaroslaw

    2015-04-01

    Data provided by the UNEP report on mercury [1] show that solid fuel combustion is a significant source of mercury emission to air. Silesia, located in southwestern Poland, is notably affected by mercury emission, being one of the most industrialized Polish regions: a place of coal mining, metal production, stone mining, mineral quarrying and chemical industry. Moreover, Silesia is a region with high population density. People are exposed to severe risk from mercury emitted by both industrial and domestic sources (i.e. small household furnaces). Small sources make a significant contribution to the total emission of mercury. Official statistical analyses, including those prepared for international purposes [2], did not provide data on the spatial distribution of the mercury emitted to air; however, a number of analyses of the Polish public power and energy sector have been prepared so far [3; 4]. The distribution of locations exposed to mercury emission from small domestic sources is an interesting matter, merging information from various sources: statistical, economic and environmental. This paper presents a geostatistical approach to the distribution of mercury emission from coal combustion. The analysed data are organized on two independent levels: individual, bottom-up data derived from the national emission reporting system [5; 6], and top-down regional data calculated on the basis of official statistics [7]. The analysis presented will include a comparison of the spatial distributions of mercury emission using data derived from the sources mentioned above. The investigation covers three voivodeships of Poland: Lower Silesian, Opole and Silesian, using selected geostatistical methodologies including ordinary kriging [8]. References [1] UNEP. Global Mercury Assessment 2013: Sources, Emissions, Releases and Environmental Transport. UNEP Chemicals Branch, Geneva, Switzerland, 2013. [2] NCEM. Poland's Informative Inventory Report 2014. NCEM at the IEP-NRI, 2014. http

  1. Kolmogorov Space in Time Series Data

    OpenAIRE

    Kanjamapornkul, K.; Pinčák, R.

    2016-01-01

    We provide the proof that the space of time series data is a Kolmogorov space with $T_{0}$-separation axiom using the loop space of time series data. In our approach we define a cyclic coordinate of intrinsic time scale of time series data after empirical mode decomposition. A spinor field of time series data comes from the rotation of data around the price and time axes by defining a new extra dimension to time series data. We show that there exist eight hidden dimensions in Kolmogorov space for ...

  2. Potential high efficiency solar cells: Applications from space photovoltaic research

    Science.gov (United States)

    Flood, D. J.

    1986-01-01

    NASA involvement in photovoltaic energy conversion research, development, and applications spans over two decades of continuous progress. Solar cell research and development programs conducted by the Lewis Research Center's Photovoltaic Branch have produced a sound technology base not only for the space program, but for terrestrial applications as well. The fundamental goals which have guided the NASA photovoltaic program are to improve the efficiency and lifetime, and to reduce the mass and cost of photovoltaic energy conversion devices and arrays for use in space. The major efforts in the current Lewis program are on high efficiency, single crystal GaAs planar and concentrator cells, radiation-hard InP cells, and superlattice solar cells. A brief historical perspective of accomplishments in high efficiency space solar cells will be given, and current work in all of the above categories will be described. The applicability of space cell research and technology to terrestrial photovoltaics will be discussed.

  3. Trajectory data analyses for pedestrian space-time activity study.

    Science.gov (United States)

    Qi, Feng; Du, Fei

    2013-02-25

    It is well recognized that human movement in the spatial and temporal dimensions has direct influence on disease transmission(1-3). An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapped activity spaces. Therefore, daily mobility-activity information can be used as an indicator to measure exposures to risk factors of infection. However, a major difficulty and thus the reason for paucity of studies of infectious disease transmission at the micro scale arise from the lack of detailed individual mobility data. Previously in transportation and tourism research detailed space-time activity data often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for the participants and collaboration from the participants greatly affects the quality of data(4). Modern technologies such as GPS and mobile communications have made possible the automatic collection of trajectory data. The data collected, however, is not ideal for modeling human space-time activities, limited by the accuracies of existing devices. There is also no readily available tool for efficient processing of the data for human behavior study. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analyses of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, that could be useful in public health studies such as infectious disease transmission modeling. The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data. 
We introduce an interactive visual pre-processing interface as well as an
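    Trajectory pre-processing of the kind described typically begins by splitting a stream of GPS fixes into segments wherever recording gaps occur. A minimal sketch of that segmentation step; the threshold value and the (timestamp, x, y) record layout are assumptions for illustration, not the paper's interface:

    ```python
    def segment_trajectory(fixes, max_gap_s=300):
        """Split a chronological list of (t_seconds, x, y) GPS fixes into segments
        wherever the gap between consecutive fixes exceeds max_gap_s."""
        segments, current = [], []
        for fix in fixes:
            if current and fix[0] - current[-1][0] > max_gap_s:
                segments.append(current)
                current = []
            current.append(fix)
        if current:
            segments.append(current)
        return segments
    ```

    Downstream steps such as activity-space characterization and density estimation would then operate on each segment.
    
    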

  4. Time: the enigma of space

    Science.gov (United States)

    Yu, Francis T. S.

    2017-08-01

    In this article we draw on the laws of physics to illustrate the enigma of time as creating our physical space (i.e., the universe). We have shown that without time there would be no physical substances, no space and no life. In reference to Einstein's energy equation, we see that energy and mass can be traded, and every mass can be treated as an Energy Reservoir. We have further shown that physical space cannot be embedded in absolute empty space and cannot have any absolute empty subspace in it. Since all physical substances exist with time, our cosmos is created by time, and every substance, including our universe, coexists with time. Although time initiates the creation, it is the physical substances which present to us the existence of time. We are not alone, with almost absolute certainty. Someday we may find the right planet that, once upon a time, had harbored a civilization for a short period of light years.

  5. DEM-based delineation for improving geostatistical interpolation of rainfall in mountainous region of Central Himalayas, India

    Science.gov (United States)

    Kumari, Madhuri; Singh, Chander Kumar; Bakimchandra, Oinam; Basistha, Ashoke

    2017-10-01

    In a mountainous region with heterogeneous topography, geostatistical modeling of rainfall using a global data set may not conform to the intrinsic hypothesis of stationarity. This study was focused on improving the precision of interpolated rainfall maps by spatial stratification in complex terrain. Predictions of the normal annual rainfall data were carried out by ordinary kriging, universal kriging, and co-kriging, using 80-point observations in the Indian Himalayas extending over an area of 53,484 km2. A two-step spatial clustering approach is proposed. In the first step, the study area was delineated into two regions, namely lowland and upland, based on the elevation derived from the digital elevation model. The delineation was based on the natural break classification method. In the next step, the rainfall data were clustered into two groups based on their spatial location in lowland or upland. The terrain ruggedness index (TRI) was incorporated as a co-variable in the co-kriging interpolation algorithm. The precision of the kriged and co-kriged maps was assessed by two accuracy measures, root mean square error and Chatfield's percent better. It was observed that the stratification of rainfall data resulted in a 5-20% increase in the performance efficiency of the interpolation methods. Co-kriging outperformed the kriging models at the annual and seasonal scales. The result illustrates that the stratification of the study area improves the stationarity characteristic of the point data, thus enhancing the precision of the interpolated rainfall maps derived using geostatistical methods.
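    The two accuracy measures used in this record can be sketched as below. The `percent_better` implementation is one common reading of such a skill score (the share of stations at which one method's absolute error beats the other's), offered as an illustration rather than Chatfield's exact formulation:

    ```python
    def rmse(obs, pred):
        """Root mean square error between observed and predicted values."""
        return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

    def percent_better(obs, pred_a, pred_b):
        """Percentage of stations where method A's absolute error is smaller
        than method B's (an illustrative 'percent better' comparison)."""
        wins = sum(abs(o - a) < abs(o - b) for o, a, b in zip(obs, pred_a, pred_b))
        return 100.0 * wins / len(obs)
    ```

    In a cross-validation setting, `obs` would be the held-out gauge values and `pred_a`/`pred_b` the kriged and co-kriged estimates at those locations.
    
    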

  6. How to evaluate the risks of exceeding limits: geostatistical models and their application to air pollution

    International Nuclear Information System (INIS)

    Fouquet, Ch. de; Deraisme, J.; Bobbia, M.

    2007-01-01

    Geo-statistics is increasingly applied to the study of environmental risks in a variety of sectors, especially in the fields of soil decontamination and the evaluation of the risks due to air pollution. Geo-statistics offers a rigorous stochastic modeling approach that makes it possible to answer questions expressed in terms of uncertainty and risk. This article focuses on nonlinear geo-statistical methods, based on the Gaussian random function model, whose essential properties are summarised. We use two examples to characterize situations where direct and thus rapid methods provide appropriate solutions and cases that inevitably require more laborious simulation techniques. Exposure of the population of the Rouen metropolitan area to the risk of NO2 pollution is assessed by simulations, but the surface area where the pollution exceeds the threshold limit can be easily estimated with nonlinear conditional expectation techniques. A second example is used to discuss the bias introduced by direct simulation, here of a percentile of daily SO2 concentration for one year in the city of Le Havre; an operational solution is proposed. (authors)

  7. Space Weather and Real-Time Monitoring

    Directory of Open Access Journals (Sweden)

    S Watari

    2009-04-01

    Recent advances in information and communications technology make it possible to collect a large amount of ground-based and space-based observation data in real time. These real-time data enable nowcasting of space weather. This paper reports on the history of space weather services by the International Space Environment Service (ISES), in association with the International Geophysical Year (IGY), and on the importance of real-time monitoring in space weather.

  8. Geostatistical analyses and hazard assessment on soil lead in Silvermines area, Ireland

    International Nuclear Information System (INIS)

    McGrath, David; Zhang Chaosheng; Carton, Owen T.

    2004-01-01

    Spatial distribution and hazard assessment of soil lead in the mining site of Silvermines, Ireland, were investigated using statistics, geostatistics and geographic information system (GIS) techniques. Positively skewed distribution and possible outlying values of Pb and other heavy metals were observed. Box-Cox transformation was applied in order to achieve normality in the data set and to reduce the effect of outliers. Geostatistical analyses were carried out, including calculation of experimental variograms and model fitting. The ordinary point kriging estimates of Pb concentration were mapped. Kriging standard deviations were regarded as the standard deviations of the interpolated pixel values, and a second map was produced, that quantified the probability of Pb concentration higher than a threshold value of 1000 mg/kg. These maps provide valuable information for hazard assessment and for decision support. - A probability map was produced that was useful for hazard assessment and decision support
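    The Box-Cox step used in this study maps positively skewed concentrations toward normality before variography and kriging. A minimal sketch of the forward and inverse transforms; selection of the exponent λ (e.g. by maximum likelihood) is not shown, and the names are illustrative:

    ```python
    import math

    def box_cox(x, lam):
        """Box-Cox transform of a positive value: (x^lam - 1)/lam, or log(x) at lam = 0."""
        if lam == 0:
            return math.log(x)
        return (x ** lam - 1.0) / lam

    def inv_box_cox(y, lam):
        """Inverse Box-Cox transform, returning to the original measurement scale."""
        if lam == 0:
            return math.exp(y)
        return (lam * y + 1.0) ** (1.0 / lam)
    ```

    Kriging would be performed on the transformed values, with results back-transformed for mapping (noting that a naive back-transform of the kriged mean is biased).
    
    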

  9. Geostatistical analyses and hazard assessment on soil lead in Silvermines area, Ireland

    Energy Technology Data Exchange (ETDEWEB)

    McGrath, David; Zhang Chaosheng; Carton, Owen T

    2004-01-01

    Spatial distribution and hazard assessment of soil lead in the mining site of Silvermines, Ireland, were investigated using statistics, geostatistics and geographic information system (GIS) techniques. Positively skewed distribution and possible outlying values of Pb and other heavy metals were observed. Box-Cox transformation was applied in order to achieve normality in the data set and to reduce the effect of outliers. Geostatistical analyses were carried out, including calculation of experimental variograms and model fitting. The ordinary point kriging estimates of Pb concentration were mapped. Kriging standard deviations were regarded as the standard deviations of the interpolated pixel values, and a second map was produced, that quantified the probability of Pb concentration higher than a threshold value of 1000 mg/kg. These maps provide valuable information for hazard assessment and for decision support. - A probability map was produced that was useful for hazard assessment and decision support.

  10. Two-point versus multiple-point geostatistics: the ability of geostatistical methods to capture complex geobodies and their facies associations—an application to a channelized carbonate reservoir, southwest Iran

    International Nuclear Information System (INIS)

    Hashemi, Seyyedhossein; Javaherian, Abdolrahim; Ataee-pour, Majid; Khoshdel, Hossein

    2014-01-01

    Facies models try to explain facies architectures which have a primary control on the subsurface heterogeneities and the fluid flow characteristics of a given reservoir. In the process of facies modeling, geostatistical methods are implemented to integrate different sources of data into a consistent model. The facies models should describe facies interactions and the shape and geometry of the geobodies as they occur in reality. Two distinct categories of geostatistical techniques are two-point and multiple-point (geo)statistics (MPS). In this study, both of the aforementioned categories were applied to generate facies models. A sequential indicator simulation (SIS) and a truncated Gaussian simulation (TGS) represented the two-point geostatistical methods, and a single normal equation simulation (SNESIM) was selected as the MPS representative. The dataset from an extremely channelized carbonate reservoir located in southwest Iran was applied to these algorithms to analyze their performance in reproducing complex curvilinear geobodies. The SNESIM algorithm needs consistent training images (TI) in which all possible facies architectures that are present in the area are included. The TI model was founded on the data acquired from modern occurrences. These analogues delivered vital information about the possible channel geometries and facies classes that are typically present in those similar environments. The MPS results were conditioned to both soft and hard data. Soft facies probabilities were acquired from a neural network workflow. In this workflow, seismic-derived attributes were implemented as the input data. Furthermore, MPS realizations were conditioned to hard data to guarantee the exact positioning and continuity of the channel bodies. A geobody extraction workflow was implemented to extract the most certain parts of the channel bodies from the seismic data. These extracted parts of the channel bodies were applied to the simulation workflow as hard data
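    Sequential indicator simulation, one of the two-point methods compared above, starts from an indicator (0/1) coding of the facies categories, with a variogram then modelled for each indicator. A minimal sketch of the coding step (function and variable names are hypothetical):

    ```python
    def indicator_transform(facies, categories):
        """Indicator coding used by sequential indicator simulation:
        one 0/1 series per facies category, 1 where the sample belongs to it."""
        return {c: [1 if f == c else 0 for f in facies] for c in categories}
    ```

    Each indicator series gets its own experimental variogram; kriging the indicators at an unsampled node yields the conditional facies probabilities from which SIS draws.
    
    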

  11. Twistor Cosmology and Quantum Space-Time

    International Nuclear Information System (INIS)

    Brody, D.C.; Hughston, L.P.

    2005-01-01

    The purpose of this paper is to present a model of a 'quantum space-time' in which the global symmetries of space-time are unified in a coherent manner with the internal symmetries associated with the state space of quantum-mechanics. If we take into account the fact that these distinct families of symmetries should in some sense merge and become essentially indistinguishable in the unified regime, our framework may provide an approximate description of or elementary model for the structure of the universe at early times. The quantum elements employed in our characterisation of the geometry of space-time imply that the pseudo-Riemannian structure commonly regarded as an essential feature in relativistic theories must be dispensed with. Nevertheless, the causal structure and the physical kinematics of quantum space-time are shown to persist in a manner that remains highly analogous to the corresponding features of the classical theory. In the case of the simplest conformally flat cosmological models arising in this framework, the twistorial description of quantum space-time is shown to be effective in characterising the various physical and geometrical properties of the theory. As an example, a sixteen-dimensional analogue of the Friedmann-Robertson-Walker cosmologies is constructed, and its chronological development is analysed in some detail. More generally, whenever the dimension of a quantum space-time is an even perfect square, there exists a canonical way of breaking the global quantum space-time symmetry so that a generic point of quantum space-time can be consistently interpreted as a quantum operator taking values in Minkowski space. In this scenario, the breakdown of the fundamental symmetry of the theory is due to a loss of quantum entanglement between space-time and internal quantum degrees of freedom. It is thus possible to show in a certain specific sense that the classical space-time description is an emergent feature arising as a consequence of a

  12. Operationally efficient propulsion system study (OEPSS) data book. Volume 6; Space Transfer Propulsion Operational Efficiency Study Task of OEPSS

    Science.gov (United States)

    Harmon, Timothy J.

    1992-01-01

    This document is the final report for the Space Transfer Propulsion Operational Efficiency Study Task of the Operationally Efficient Propulsion System Study (OEPSS) conducted by the Rocketdyne Division of Rockwell International. This study task evaluated and identified design concepts and technologies that minimized launch and in-space operations and optimized in-space vehicle propulsion system operability.

  13. Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model

    Science.gov (United States)

    Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.

    2017-09-01

    The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.
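    Spatial Simulated Annealing, as used in this record, perturbs one gauge location at a time and accepts or rejects the move with a Metropolis criterion against the design criterion. The sketch below substitutes a simple mean nearest-gauge distance for the space-time averaged KED variance (which would require solving the full kriging system); all names, parameters, and the cooling schedule are illustrative:

    ```python
    import math
    import random

    def objective(gauges, grid):
        """Proxy design criterion: mean distance from each prediction node to its
        nearest gauge (a stand-in for the averaged kriging variance)."""
        return sum(min(math.dist(g, s) for s in gauges) for g in grid) / len(grid)

    def ssa(gauges, grid, steps=500, temp=1.0, cooling=0.995, seed=0):
        """Spatial Simulated Annealing: random single-gauge moves with
        Metropolis acceptance and geometric cooling."""
        rng = random.Random(seed)
        gauges = list(gauges)
        cur = best = objective(gauges, grid)
        for _ in range(steps):
            i = rng.randrange(len(gauges))
            old = gauges[i]
            gauges[i] = (old[0] + rng.uniform(-1, 1), old[1] + rng.uniform(-1, 1))
            new = objective(gauges, grid)
            if new < cur or rng.random() < math.exp((cur - new) / temp):
                cur = new
                best = min(best, cur)
            else:
                gauges[i] = old  # reject the move
            temp *= cooling
        return gauges, best
    ```

    In the paper's setting the objective would be recomputed from the KED variance, averaged over space and time, rather than from raw distances.
    
    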

  14. Space-time-modulated stochastic processes

    Science.gov (United States)

    Giona, Massimiliano

    2017-10-01

    Starting from the physical problem associated with the Lorentzian transformation of a Poisson-Kac process in inertial frames, the concept of space-time-modulated stochastic processes is introduced for processes possessing finite propagation velocity. This class of stochastic processes provides a two-way coupling between the stochastic perturbation acting on a physical observable and the evolution of the physical observable itself, which in turn influences the statistical properties of the stochastic perturbation during its evolution. The definition of space-time-modulated processes requires the introduction of two functions: a nonlinear amplitude modulation, controlling the intensity of the stochastic perturbation, and a time-horizon function, which modulates its statistical properties, providing irreducible feedback between the stochastic perturbation and the physical observable influenced by it. The latter property is the peculiar fingerprint of this class of models that makes them suitable for extension to generic curved-space times. Considering Poisson-Kac processes as prototypical examples of stochastic processes possessing finite propagation velocity, the balance equations for the probability density functions associated with their space-time modulations are derived. Several examples highlighting the peculiarities of space-time-modulated processes are thoroughly analyzed.
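    Poisson-Kac processes, the prototypes used in this paper, can be simulated as a telegraph process: motion at a fixed speed whose direction flips at Poisson-distributed switching times. A discrete-time sketch illustrating the finite propagation velocity; the parameter names are generic and the paper's space-time modulation functions are not included:

    ```python
    import random

    def telegraph_path(v, a, dt, steps, seed=0):
        """Sample path of a Poisson-Kac (telegraph) process: position moves at
        speed +/- v, with the sign flipping with probability ~ a*dt per step."""
        rng = random.Random(seed)
        x, s = 0.0, 1
        path = [x]
        for _ in range(steps):
            if rng.random() < a * dt:  # Poisson switching over a short step
                s = -s
            x += s * v * dt
            path.append(x)
        return path
    ```

    A space-time-modulated version would make the switching rate and amplitude functions of the current state, providing the two-way coupling the abstract describes.
    
    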

  15. Optical isolation based on space-time engineered asymmetric photonic band gaps

    Science.gov (United States)

    Chamanara, Nima; Taravati, Sajjad; Deck-Léger, Zoé-Lise; Caloz, Christophe

    2017-10-01

    Nonreciprocal electromagnetic devices play a crucial role in modern microwave and optical technologies. Conventional methods for realizing such systems are incompatible with integrated circuits. With recent advances in integrated photonics, the need for efficient on-chip magnetless nonreciprocal devices has become more pressing than ever. This paper leverages space-time engineered asymmetric photonic band gaps to generate optical isolation. It shows that a properly designed space-time modulated slab is highly reflective/transparent for opposite directions of propagation. The corresponding design is magnetless, accommodates low modulation frequencies, and can achieve very high isolation levels. An experimental proof of concept at microwave frequencies is provided.

  16. A geostatistical approach to the change-of-support problem and variable-support data fusion in spatial analysis

    Science.gov (United States)

    Wang, Jun; Wang, Yang; Zeng, Hui

    2016-01-01

    A key issue to address in synthesizing spatial data with variable support in spatial analysis and modeling is the change-of-support problem. We present an approach for solving the change-of-support and variable-support data fusion problems. This approach is based on geostatistical inverse modeling that explicitly accounts for differences in spatial support. The inverse model is applied here to produce both the best predictions on a target support and prediction uncertainties, based on one or more measurements, while honoring measurements. Spatial data covering large geographic areas often exhibit spatial nonstationarity and can pose computational challenges due to the large data size. We developed a local-window geostatistical inverse modeling approach to accommodate spatial nonstationarity and alleviate the computational burden. We conducted experiments using synthetic and real-world raster data. Synthetic data were generated, aggregated to multiple supports, and downscaled back to the original support to analyze the accuracy of spatial predictions and the correctness of prediction uncertainties. Similar experiments were conducted for real-world raster data. Real-world data with variable support were statistically fused to produce single-support predictions and associated uncertainties. The modeling results demonstrate that geostatistical inverse modeling can produce accurate predictions and associated prediction uncertainties. It is shown that the suggested local-window geostatistical inverse modeling approach offers a practical way to solve the well-known change-of-support and variable-support data fusion problems in spatial analysis and modeling.
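    The change-of-support link in such inverse models is a linear aggregation operator relating fine-support values to coarse-support measurements. A one-dimensional sketch of that operator as simple block averaging (the paper's operator also handles 2-D rasters and misaligned supports; this toy version assumes the series length is a multiple of the block size):

    ```python
    def block_average(fine, block):
        """Aggregate a fine-support 1-D series to coarse support by block means,
        i.e. the linear operator linking the two supports. Assumes len(fine)
        is a multiple of block."""
        return [sum(fine[i:i + block]) / block for i in range(0, len(fine), block)]
    ```

    In the inverse model, predictions on the fine support are constrained so that applying this operator reproduces the coarse-support measurements.
    
    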

  17. A variable-order time-dependent neutron transport method for nuclear reactor kinetics using analytically-integrated space-time characteristics

    International Nuclear Information System (INIS)

    Hoffman, A. J.; Lee, J. C.

    2013-01-01

    A new time-dependent neutron transport method based on the method of characteristics (MOC) has been developed. Whereas most spatial kinetics methods treat time dependence through temporal discretization, this new method treats time dependence by defining the characteristics to span space and time. In this implementation regions are defined in space-time where the thickness of the region in time fulfills an analogous role to the time step in discretized methods. The time dependence of the local source is approximated using a truncated Taylor series expansion with high order derivatives approximated using backward differences, permitting the solution of the resulting space-time characteristic equation. To avoid a drastic increase in computational expense and memory requirements due to solving many discrete characteristics in the space-time planes, the temporal variation of the boundary source is similarly approximated. This allows the characteristics in the space-time plane to be represented analytically rather than discretely, resulting in an algorithm comparable in implementation and expense to one that arises from conventional time integration techniques. Furthermore, by defining the boundary flux time derivative in terms of the preceding local source time derivative and boundary flux time derivative, the need to store angularly-dependent data is avoided without approximating the angular dependence of the angular flux time derivative. The accuracy of this method is assessed through implementation in the neutron transport code DeCART. The method is employed with variable-order local source representation to model a TWIGL transient. The results demonstrate that this method is accurate and more efficient than the discretized method. (authors)
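The time treatment described above, a truncated Taylor expansion whose derivatives are approximated by backward differences of previously computed values, can be illustrated with a minimal numerical sketch (the smooth source history q(t) below is hypothetical, not a transport calculation):

```python
import math

# Sketch (not the DeCART implementation) of the local-source time treatment:
# a truncated Taylor expansion at the latest time point, with derivatives
# approximated by backward differences of earlier source values.
dt = 0.1
t = [0.0, 0.1, 0.2, 0.3]                 # previous time points
q = [math.exp(s) for s in t]             # hypothetical local source history

# First and second derivatives at t[-1] via backward differences
dq  = (q[-1] - q[-2]) / dt
d2q = (q[-1] - 2 * q[-2] + q[-3]) / dt ** 2

def q_taylor(tau):
    """Second-order Taylor extrapolation a time tau beyond t[-1]."""
    return q[-1] + dq * tau + 0.5 * d2q * tau ** 2

approx = q_taylor(dt)                    # extrapolate one "time region" ahead
exact = math.exp(t[-1] + dt)
print(round(approx, 4), round(exact, 4))
```

The extrapolated value stays close to the true one over a single region thickness, which is what allows the space-time characteristic equation to be integrated analytically within each region.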

  18. Hyperbolic statics in space-time

    OpenAIRE

    Pavlov, Dmitry; Kokarev, Sergey

    2014-01-01

    Based on the concept of material event as an elementary material source that is concentrated on metric sphere of zero radius --- light-cone of Minkowski space-time, we deduce the analog of Coulomb's law for hyperbolic space-time field universally acting between the events of space-time. Collective field that enables interaction of world lines of a pair of particles at rest contains a standard 3-dimensional Coulomb's part and logarithmic addendum. We've found that the Coulomb's part depends on...

  19. Spatial analysis of groundwater levels using Fuzzy Logic and geostatistical tools

    Science.gov (United States)

    Theodoridou, P. G.; Varouchakis, E. A.; Karatzas, G. P.

    2017-12-01

The spatial variability evaluation of the water table of an aquifer provides useful information for water resources management plans. Geostatistical methods are often employed to map the free surface of an aquifer. In geostatistical analysis using Kriging techniques, the selection of the optimal variogram is very important for optimal method performance. This work compares three different criteria to assess the theoretical variogram that fits the experimental one: the Least Squares Sum method, the Akaike Information Criterion and Cressie's Indicator. Moreover, distance metrics such as the Euclidean, Minkowski, Manhattan, Canberra and Bray-Curtis are applied to calculate the distance between the observation and prediction points, which affects both the variogram calculation and the Kriging estimator. A Fuzzy Logic System is then applied to define the appropriate neighbors for each estimation point used in the Kriging algorithm. The two criteria used during the Fuzzy Logic process are the distance between observation and estimation points and the groundwater level value at each observation point. The proposed techniques are applied to a data set of 250 hydraulic head measurements distributed over an alluvial aquifer. The analysis showed that the Power-law variogram model and the Manhattan distance metric within ordinary kriging provide the best results when the comprehensive geostatistical analysis process is applied. On the other hand, the Fuzzy Logic approach leads to a Gaussian variogram model and significantly improves the estimation performance. The two different variogram models can be explained in terms of a fractional Brownian motion approach and of aquifer behavior at local scale. Finally, maps of hydraulic head spatial variability and of prediction uncertainty are constructed for the area with the two different approaches, comparing their advantages and drawbacks.
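The interplay between the distance metric and the experimental variogram can be sketched as follows. The point locations and head values are synthetic stand-ins for the 250 measurements; only the binned-semivariance computation and the metric swap are illustrated:

```python
import math, random

# Illustrative only: compute an experimental variogram under two distance
# metrics. The coordinates and hydraulic-head values are synthetic.
random.seed(1)
pts = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(80)]
# hypothetical hydraulic head: a smooth regional trend plus noise
head = [0.5 * x + 0.2 * y + random.gauss(0, 0.1) for x, y in pts]

def dist(p, q, metric):
    dx, dy = abs(p[0] - q[0]), abs(p[1] - q[1])
    if metric == "euclidean":
        return math.hypot(dx, dy)
    if metric == "manhattan":
        return dx + dy
    raise ValueError(metric)

def experimental_variogram(metric, nbins=8, hmax=8.0):
    """Bin half-squared differences by pair separation under the metric."""
    sums = [0.0] * nbins
    counts = [0] * nbins
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            h = dist(pts[i], pts[j], metric)
            if h < hmax:
                b = int(h / hmax * nbins)
                sums[b] += 0.5 * (head[i] - head[j]) ** 2
                counts[b] += 1
    return [s / c if c else float("nan") for s, c in zip(sums, counts)]

gamma_e = experimental_variogram("euclidean")
gamma_m = experimental_variogram("manhattan")
print([round(g, 3) for g in gamma_e])
print([round(g, 3) for g in gamma_m])
```

Because the pair separations change with the metric, both the shape of the experimental variogram and any model fitted to it (power-law, Gaussian, etc.) change as well, which is why the metric choice propagates into the Kriging estimator.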

  20. Applicability of geostatistical procedures for the evaluation of hydrogeological parameters of a fractured aquifer in the Ronneburg mine district

    International Nuclear Information System (INIS)

    Grasshoff, C.; Schetelig, K.; Tomschi, H.

    1998-01-01

The following paper demonstrates how a geostatistical approach can help interpolate hydrogeological parameters over a certain area. The basic elements developed by G. Matheron in the sixties are presented, together with the preconditions and assumptions under which the estimation provides the best results. The variogram, as the most important tool in geostatistics, offers the opportunity to describe the correlation behaviour of a regionalized variable. Some kriging procedures are briefly introduced, which, under varying circumstances, provide estimates of unmeasured values from the theoretical variogram model. In the Ronneburg mine district, 108 screened drill-holes provided coefficients of hydraulic conductivity. These were interpolated with ordinary kriging over the whole investigation area. An error calculation was performed, which confirmed the accuracy of the estimation. A short outlook points out some difficulties in handling geostatistical procedures and makes suggestions for further investigations. (orig.)
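Ordinary kriging with a fitted variogram model, as applied here to the conductivity data, can be sketched on a toy configuration (the well locations, values, and linear-variogram slope below are hypothetical):

```python
import math

# Sketch of ordinary kriging with a linear variogram gamma(h) = b * h
# (illustrative; not the study's code or data).
xy = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]   # well locations
k  = [1.0, 2.0, 1.5, 2.5]        # hypothetical log-conductivity values
b  = 0.8                          # slope of the linear variogram

def gamma(p, q):
    return b * math.hypot(p[0] - q[0], p[1] - q[1])

def solve(A, rhs):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def krige(p0):
    """Ordinary-kriging estimate at p0 with a Lagrange multiplier row."""
    n = len(xy)
    A = [[gamma(xy[i], xy[j]) for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])          # unbiasedness constraint
    rhs = [gamma(xy[i], p0) for i in range(n)] + [1.0]
    w = solve(A, rhs)[:n]                # kriging weights
    return sum(w[i] * k[i] for i in range(n))

print(round(krige((0.5, 0.5)), 3))       # interior estimate
print(round(krige(xy[0]), 3))            # exact interpolation at a well
```

At a measured well the estimate reproduces the datum exactly (kriging is an exact interpolator with no nugget), which is the property that lets contours honor the data at every measurement location.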

  1. Efficient implementation of real-time programs under the VAX/VMS operating system

    Science.gov (United States)

    Johnson, S. C.

    1985-01-01

Techniques for writing efficient real-time programs under the VAX/VMS operating system are presented. Basic operations are presented for executing at real-time priority and for avoiding needless processing delays. A highly efficient technique for accessing physical devices by mapping to the input/output space and accessing the device registers directly is described. To illustrate the application of the technique, examples are included of different uses of the technique on three devices in the Langley Avionics Integration Research Lab (AIRLAB): the KW11-K dual programmable real-time clock, the Parallel Communications Link (PCL11-B) communication system, and the Datacom Synchronization Network. Timing data are included to demonstrate the performance improvements realized with these applications of the technique.
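The register-mapping idea can be sketched in modern terms: map the device's register space into the process once, then read and write registers directly without per-access system calls. In this sketch a temporary file stands in for the physical I/O space and the register offsets are hypothetical; mapping real device registers requires a platform-specific mechanism (e.g. VMS page-frame mapping or /dev/mem):

```python
import mmap
import struct
import tempfile

# Sketch of direct register access via memory mapping. A temporary file
# stands in for the device's register space; the offsets are hypothetical.
REG_CONTROL, REG_STATUS = 0x0, 0x4        # hypothetical register offsets

with tempfile.TemporaryFile() as f:
    f.truncate(4096)                      # one page of "register space"
    mem = mmap.mmap(f.fileno(), 4096)

    # write the control register, then read control and status back directly;
    # after the initial mapping, no system call is needed per access
    struct.pack_into("<I", mem, REG_CONTROL, 0x1)
    ctrl = struct.unpack_from("<I", mem, REG_CONTROL)[0]
    status = struct.unpack_from("<I", mem, REG_STATUS)[0]
    print(hex(ctrl), hex(status))
    mem.close()
```

The performance win the paper measures comes from exactly this pattern: one mapping operation up front, then plain memory loads and stores instead of a driver call per register access.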

  2. Redesigning rain gauges network in Johor using geostatistics and simulated annealing

    International Nuclear Information System (INIS)

    Aziz, Mohd Khairul Bazli Mohd; Yusof, Fadhilah; Daud, Zalina Mohd; Yusop, Zulkifli; Kasno, Mohammad Afif

    2015-01-01

Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific justification. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned to new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November - February) from 1975 until 2008. This study used the combination of a geostatistical method (the variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistical variance-reduction method and simulated annealing is successful in the development of the new optimum rain gauge system.
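The optimization loop combining a geostatistical objective with simulated annealing can be sketched as follows. Note the stand-in objective: the study minimizes the kriging estimation variance, which is replaced here by the mean distance from grid points to the nearest gauge purely to keep the sketch short; the gauge count, move size, and cooling schedule are also hypothetical:

```python
import math, random

# Sketch of the simulated-annealing loop only. The objective below is a
# simple stand-in for the kriging estimation variance minimized in the study.
random.seed(7)
grid = [(x, y) for x in range(10) for y in range(10)]    # estimation points

def objective(gauges):
    return sum(min(math.dist(p, g) for g in gauges) for p in grid) / len(grid)

gauges = [(random.uniform(0, 9), random.uniform(0, 9)) for _ in range(8)]
current = objective(gauges)
initial = current
best = current
T = 1.0
for step in range(2000):
    i = random.randrange(len(gauges))              # perturb one gauge location
    old = gauges[i]
    gauges[i] = (min(9.0, max(0.0, old[0] + random.gauss(0, 1))),
                 min(9.0, max(0.0, old[1] + random.gauss(0, 1))))
    cand = objective(gauges)
    # Metropolis rule: always accept improvements, sometimes accept
    # worse moves early on to escape local minima
    if cand < current or random.random() < math.exp((current - cand) / T):
        current = cand
        best = min(best, current)
    else:
        gauges[i] = old                            # reject, restore
    T *= 0.998                                     # geometric cooling

print(round(initial, 3), round(best, 3))
```

In the study, each candidate network would instead be scored by the kriging variance computed from the fitted variogram, but the accept/reject/cooling skeleton is the same.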

  3. Redesigning rain gauges network in Johor using geostatistics and simulated annealing

    Science.gov (United States)

    Aziz, Mohd Khairul Bazli Mohd; Yusof, Fadhilah; Daud, Zalina Mohd; Yusop, Zulkifli; Kasno, Mohammad Afif

    2015-02-01

Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific justification. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned to new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November - February) from 1975 until 2008. This study used the combination of a geostatistical method (the variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistical variance-reduction method and simulated annealing is successful in the development of the new optimum rain gauge system.

  4. Redesigning rain gauges network in Johor using geostatistics and simulated annealing

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com [Centre of Preparatory and General Studies, TATI University College, 24000 Kemaman, Terengganu, Malaysia and Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Yusof, Fadhilah, E-mail: fadhilahy@utm.my [Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Daud, Zalina Mohd, E-mail: zalina@ic.utm.my [UTM Razak School of Engineering and Advanced Technology, Universiti Teknologi Malaysia, UTM KL, 54100 Kuala Lumpur (Malaysia); Yusop, Zulkifli, E-mail: zulyusop@utm.my [Institute of Environmental and Water Resource Management (IPASA), Faculty of Civil Engineering, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Kasno, Mohammad Afif, E-mail: mafifkasno@gmail.com [Malaysia - Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia, UTM KL, 54100 Kuala Lumpur (Malaysia)

    2015-02-03

Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific justification. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned to new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November - February) from 1975 until 2008. This study used the combination of a geostatistical method (the variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistical variance-reduction method and simulated annealing is successful in the development of the new optimum rain gauge system.

  5. Bayesian Maximum Entropy space/time estimation of surface water chloride in Maryland using river distances.

    Science.gov (United States)

    Jat, Prahlad; Serre, Marc L

    2016-12-01

Widespread contamination of surface water by chloride is an emerging environmental concern. Consequently, accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian Maximum Entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R² by 23.67% over Euclidean BME, and river BME maps are significantly different from Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, corresponding to a marked acceleration of the rate of impairment. Our results support the need for control measures and increased monitoring of unassessed river miles. Copyright © 2016. Published by Elsevier Ltd.
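The key ingredient distinguishing river BME from Euclidean BME is the distance metric fed to the covariance model. A toy contrast between straight-line and along-network distance (hypothetical river geometry, computed with Dijkstra's algorithm) can be sketched as:

```python
import heapq

# Toy river network (hypothetical geometry, not the Maryland data):
# node id -> (x, y); edges follow the channels through a confluence at node 2.
nodes = {0: (0, 0), 1: (0, 2), 2: (0, 4), 3: (2, 6), 4: (-2, 6)}
edges = {0: [1], 1: [0, 2], 2: [1, 3, 4], 3: [2], 4: [2]}

def euclid(a, b):
    (xa, ya), (xb, yb) = nodes[a], nodes[b]
    return ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5

def river_distance(src, dst):
    """Dijkstra over the river graph with Euclidean edge lengths."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v in edges[u]:
            nd = d + euclid(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Two points on different branches: close in space, farther along the river.
print(euclid(3, 4))           # straight-line distance: 4.0
print(river_distance(3, 4))   # along the network, through the confluence
```

Points on different branches are farther apart along the network than in space, so a river-distance covariance correctly weakens their statistical dependence, which is what drives the difference between the two map sets.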

  6. Lightweight High Efficiency Electric Motors for Space Applications

    Science.gov (United States)

    Robertson, Glen A.; Tyler, Tony R.; Piper, P. J.

    2011-01-01

Lightweight high efficiency electric motors are needed across a wide range of space applications, from thrust vector actuator control for launch and flight applications, to general vehicle, base camp habitat and experiment control for various mechanisms, to robotics for various stationary and mobile space exploration missions. QM Power's Parallel Path Magnetic Technology Motors have steadily proven themselves to be a leading motor technology in this area; winning a NASA Phase II for "Lightweight High Efficiency Electric Motors and Actuators for Low Temperature Mobility and Robotics Applications", a US Army Phase II SBIR for "Improved Robot Actuator Motors for Medical Applications", an NSF Phase II SBIR for "Novel Low-Cost Electric Motors for Variable Speed Applications" and a DOE SBIR Phase I for "High Efficiency Commercial Refrigeration Motors". Parallel Path Magnetic Technology obtains the benefits of using permanent magnets while minimizing the historical trade-offs/limitations found in conventional permanent magnet designs. The resulting devices are smaller, lower weight, lower cost and have higher efficiency than competitive permanent magnet and non-permanent magnet designs. QM Power's motors have been extensively tested and successfully validated by multiple commercial and aerospace customers and partners such as Boeing Research and Technology. Prototypes have been made between 0.1 and 10 HP. They are also in the process of scaling motors to over 100 kW with their development partners. In this paper, Parallel Path Magnetic Technology Motors will be discussed; specifically addressing their higher efficiency, higher power density, lighter weight, smaller physical size, higher low-end torque, wider power zone, cooler temperatures, and greater reliability with lower cost and significant environmental benefit for the same peak output power compared to typical motors. A further discussion on the inherent redundancy of these motors for space applications will be provided.

  7. Efficient and accurate nearest neighbor and closest pair search in high-dimensional space

    KAUST Repository

    Tao, Yufei

    2010-07-01

Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii) its query cost should increase sublinearly with the dataset size, regardless of the data and query distributions. Locality-Sensitive Hashing (LSH) is a well-known methodology fulfilling both requirements, but its current implementations either incur expensive space and query cost, or abandon its theoretical guarantee on the quality of query results. Motivated by this, we improve LSH by proposing an access method called the Locality-Sensitive B-tree (LSB-tree) to enable fast, accurate, high-dimensional NN search in relational databases. The combination of several LSB-trees forms an LSB-forest that has strong quality guarantees, but dramatically improves the efficiency of the previous LSH implementation having the same guarantees. In practice, the LSB-tree itself is also an effective index which consumes linear space, supports efficient updates, and provides accurate query results. In our experiments, the LSB-tree was faster than: (i) iDistance (a famous technique for exact NN search) by two orders of magnitude, and (ii) MedRank (a recent approximate method with nontrivial quality guarantees) by one order of magnitude, and meanwhile returned much better results. As a second step, we extend our LSB technique to solve another classic problem, called Closest Pair (CP) search, in high-dimensional space. The long-term challenge for this problem has been to achieve subquadratic running time at very high dimensionalities, which most of the existing solutions fail to do. We show that, using an LSB-forest, CP search can be accomplished in (worst-case) time significantly lower than the quadratic complexity, yet still ensuring very good quality. In practice, accurate answers can be found using just two LSB-trees, thus giving a substantial
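The locality-sensitive hashing principle the LSB-tree builds on can be sketched with a generic random-projection LSH for Euclidean distance (this is the textbook scheme, not the paper's LSB-tree index; the dimensionality, bucket width, and test points are arbitrary):

```python
import random

# Generic random-projection LSH sketch: nearby points tend to fall into the
# same hash buckets, distant points tend not to. Illustrative parameters only.
random.seed(0)
DIM, W = 16, 4.0                     # dimensionality, bucket width

def make_hash():
    a = [random.gauss(0, 1) for _ in range(DIM)]   # random projection vector
    b = random.uniform(0, W)                        # random bucket offset
    return lambda v: int((sum(ai * vi for ai, vi in zip(a, v)) + b) // W)

hashes = [make_hash() for _ in range(8)]

p = [random.gauss(0, 1) for _ in range(DIM)]
near = [x + random.gauss(0, 0.001) for x in p]    # tiny perturbation of p
far  = [random.gauss(0, 1) for _ in range(DIM)]   # unrelated point

same_near = sum(h(p) == h(near) for h in hashes)
same_far  = sum(h(p) == h(far) for h in hashes)
# the near point should share at least as many buckets with p as the far one
print(same_near, same_far)
```

An LSH-based index answers an NN query by examining only the points that collide with the query in enough hash functions; the LSB-tree's contribution is organizing these hash values in a B-tree so the search is both fast and quality-guaranteed inside a relational database.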

  8. Real-time 3-D space numerical shake prediction for earthquake early warning

    Science.gov (United States)

    Wang, Tianyun; Jin, Xing; Huang, Yandan; Wei, Yongxiang

    2017-12-01

In earthquake early warning systems, real-time shake prediction through wave propagation simulation is a promising approach. Compared with traditional methods, it does not suffer from inaccurate estimation of source parameters. For computational efficiency, the wave is assumed to propagate on the 2-D surface of the earth in these methods. In fact, since the seismic wave propagates in the 3-D sphere of the earth, 2-D modeling of wave propagation results in inaccurate wave estimation. In this paper, we propose a 3-D numerical shake prediction method, which simulates wave propagation in 3-D space using radiative transfer theory, and incorporates a data assimilation technique to estimate the distribution of wave energy. The 2011 Tohoku earthquake is studied as an example to show the validity of the proposed model. The 2-D and 3-D space models are compared in this article, and the prediction results show that numerical shake prediction based on the 3-D space model can estimate real-time ground motion precisely, and overprediction is alleviated when using the 3-D space model.

  9. Efficient conservative ADER schemes based on WENO reconstruction and space-time predictor in primitive variables

    Science.gov (United States)

    Zanotti, Olindo; Dumbser, Michael

    2016-01-01

We present a new version of conservative ADER-WENO finite volume schemes, in which both the high order spatial reconstruction as well as the time evolution of the reconstruction polynomials in the local space-time predictor stage are performed in primitive variables, rather than in conserved ones. To obtain a conservative method, the underlying finite volume scheme is still written in terms of the cell averages of the conserved quantities. Therefore, our new approach performs the spatial WENO reconstruction twice: the first WENO reconstruction is carried out on the known cell averages of the conservative variables. The WENO polynomials are then used at the cell centers to compute point values of the conserved variables, which are subsequently converted into point values of the primitive variables. This is the only place where the conversion from conservative to primitive variables is needed in the new scheme. Then, a second WENO reconstruction is performed on the point values of the primitive variables to obtain piecewise high order reconstruction polynomials of the primitive variables. The reconstruction polynomials are subsequently evolved in time with a novel space-time finite element predictor that is directly applied to the governing PDE written in primitive form. The resulting space-time polynomials of the primitive variables can then be directly used as input for the numerical fluxes at the cell boundaries in the underlying conservative finite volume scheme. Hence, the number of necessary conversions from the conserved to the primitive variables is reduced to just one single conversion at each cell center. We have verified the validity of the new approach over a wide range of hyperbolic systems, including the classical Euler equations of gas dynamics, the special relativistic hydrodynamics (RHD) and ideal magnetohydrodynamics (RMHD) equations, as well as the Baer-Nunziato model for compressible two-phase flows. In all cases we have noticed that the new ADER

  10. I/O Management Controller for Time and Space Partitioning Architectures

    Science.gov (United States)

    Lachaize, Jerome; Deredempt, Marie-Helene; Galizzi, Julien

    2015-09-01

    The Integrated Modular Avionics (IMA) concept has been industrialized in the aeronautical domain to enable the independent qualification of application software from different suppliers on the same generic computer, this computer being a single terminal in a deterministic network. This concept makes it possible to distribute the different applications efficiently and transparently across the network, and to size accurately the hardware equipment to be embedded on the aircraft, through the configuration of the virtual computers and the virtual network. This concept has been studied for the space domain and requirements have been issued [D04], [D05]. Experiments in the space domain have been done, at the computer level, through ESA and CNES initiatives [D02], [D03]. One possible IMA implementation may use Time and Space Partitioning (TSP) technology. Studies on Time and Space Partitioning [D02] for controlling access to resources such as CPU and memories, and studies on hardware/software interface standardization [D01], showed that for space-domain technologies where I/O components (or IPs) do not cover advanced features such as buffering, descriptors or virtualization, CPU overhead in terms of performance is mainly due to shared interface management in the execution platform and to the high frequency of I/O accesses, the latter leading to a large number of context switches. This paper will present a solution to reduce this execution overhead with an open, modular and configurable controller.

  11. A simple and fast representation space for classifying complex time series

    International Nuclear Information System (INIS)

    Zunino, Luciano; Olivares, Felipe; Bariviera, Aurelio F.; Rosso, Osvaldo A.

    2017-01-01

In the context of time series analysis considerable effort has been directed towards the implementation of efficient discriminating statistical quantifiers. Very recently, a simple and fast representation space has been introduced, namely the number of turning points versus the Abbe value. It is able to separate time series from stationary and non-stationary processes with long-range dependences. In this work we show that this bidimensional approach is useful for distinguishing complex time series: different sets of financial and physiological data are efficiently discriminated. Additionally, a multiscale generalization that takes into account the multiple time scales often involved in complex systems has also been proposed. This multiscale analysis is essential to reach a higher discriminative power between physiological time series in health and disease. - Highlights: • A bidimensional scheme has been tested for classification purposes. • A multiscale generalization is introduced. • Several practical applications confirm its usefulness. • Different sets of financial and physiological data are efficiently distinguished. • This multiscale bidimensional approach has high potential as a discriminative tool.
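The two coordinates of the representation space, the number of turning points and the Abbe value, can be computed directly (formulas as commonly defined; the white-noise and trending series below are illustrative, not the paper's financial or physiological data sets):

```python
import random

# Compute the two coordinates of the representation space for two toy series.
def turning_points(x):
    """Count interior points where the series changes direction."""
    return sum(1 for i in range(1, len(x) - 1)
               if (x[i] - x[i - 1]) * (x[i + 1] - x[i]) < 0)

def abbe(x):
    """Abbe value: half the mean squared successive difference over the variance."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i + 1] - x[i]) ** 2 for i in range(n - 1)) / (2 * (n - 1))
    den = sum((v - mean) ** 2 for v in x) / n
    return num / den

random.seed(3)
noise = [random.gauss(0, 1) for _ in range(1000)]               # white noise
trend = [0.01 * i + random.gauss(0, 0.1) for i in range(1000)]  # trending series

# White noise sits near Abbe value 1 with roughly 2/3 of interior points
# being turning points; a strong trend drives the Abbe value toward 0.
print(turning_points(noise), round(abbe(noise), 3))
print(turning_points(trend), round(abbe(trend), 3))
```

The pair (turning points, Abbe value) places each series at a point in the plane, and different classes of processes occupy different regions, which is what makes the bidimensional scheme a fast discriminating tool.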

  12. A simple and fast representation space for classifying complex time series

    Energy Technology Data Exchange (ETDEWEB)

    Zunino, Luciano, E-mail: lucianoz@ciop.unlp.edu.ar [Centro de Investigaciones Ópticas (CONICET La Plata – CIC), C.C. 3, 1897 Gonnet (Argentina); Departamento de Ciencias Básicas, Facultad de Ingeniería, Universidad Nacional de La Plata (UNLP), 1900 La Plata (Argentina); Olivares, Felipe, E-mail: olivaresfe@gmail.com [Instituto de Física, Pontificia Universidad Católica de Valparaíso (PUCV), 23-40025 Valparaíso (Chile); Bariviera, Aurelio F., E-mail: aurelio.fernandez@urv.cat [Department of Business, Universitat Rovira i Virgili, Av. Universitat 1, 43204 Reus (Spain); Rosso, Osvaldo A., E-mail: oarosso@gmail.com [Instituto de Física, Universidade Federal de Alagoas (UFAL), BR 104 Norte km 97, 57072-970, Maceió, Alagoas (Brazil); Instituto Tecnológico de Buenos Aires (ITBA) and CONICET, C1106ACD, Av. Eduardo Madero 399, Ciudad Autónoma de Buenos Aires (Argentina); Complex Systems Group, Facultad de Ingeniería y Ciencias Aplicadas, Universidad de los Andes, Av. Mons. Álvaro del Portillo 12.455, Las Condes, Santiago (Chile)

    2017-03-18

In the context of time series analysis considerable effort has been directed towards the implementation of efficient discriminating statistical quantifiers. Very recently, a simple and fast representation space has been introduced, namely the number of turning points versus the Abbe value. It is able to separate time series from stationary and non-stationary processes with long-range dependences. In this work we show that this bidimensional approach is useful for distinguishing complex time series: different sets of financial and physiological data are efficiently discriminated. Additionally, a multiscale generalization that takes into account the multiple time scales often involved in complex systems has also been proposed. This multiscale analysis is essential to reach a higher discriminative power between physiological time series in health and disease. - Highlights: • A bidimensional scheme has been tested for classification purposes. • A multiscale generalization is introduced. • Several practical applications confirm its usefulness. • Different sets of financial and physiological data are efficiently distinguished. • This multiscale bidimensional approach has high potential as a discriminative tool.

  13. Geostatistical analysis of the flood risk perception queries in the village of Navaluenga (Central Spain)

    Science.gov (United States)

    Guardiola-Albert, Carolina; Díez-Herrero, Andrés; Amérigo, María; García, Juan Antonio; María Bodoque, José; Fernández-Naranjo, Nuria

    2017-04-01

Flash floods cause high average mortality as they are usually unexpected events which evolve rapidly and affect relatively small areas. The short time available for minimizing risks requires preparedness and response actions to be put into practice. Therefore, the development of emergency response plans to evacuate and rescue people in the context of a flash-flood hazard is necessary. In this framework, risk management has to integrate the social dimension of flash-flooding and its spatial distribution by understanding the characteristics of local communities in order to enhance community resilience during a flash-flood. In this regard, the flash-flood social risk perception of the village of Navaluenga (Central Spain) has recently been assessed, as well as the level of awareness of civil protection and emergency management strategies (Bodoque et al., 2016). This was done by interviewing 254 adults, representing roughly 12% of the population census. The present study goes further in the analysis of the resulting questionnaires, incorporating the spatial coordinates of respondents' homes in order to characterize the spatial distribution and possible geographical interpretation of flood risk perception. We apply geostatistical methods to analyze the spatial relations of social risk perception and level of awareness with distance to the rivers (Alberche and Chorrerón) or to the flood-prone areas (50-year, 100-year and 500-year flood plains). We want to discover spatial patterns, if any, using correlation functions (variograms). The results of the geostatistical analyses can help either confirm the expected pattern (i.e., less awareness farther from the rivers or for high return periods of flooding) or reveal departures from it. It may also be possible to identify hot spots, cold spots, and spatial outliers. The interpretation of these spatial patterns can give valuable information to define strategies to improve the awareness regarding preparedness and

  14. Time travel in Goedel's space

    International Nuclear Information System (INIS)

    Pfarr, J.

    1981-01-01

    An analysis is presented of the motion of test particles in Goedel's universe. Both geodesical and nongeodesical motions are considered; the accelerations for nongeodesical motions are given. Examples for closed timelike world lines are shown and the dynamical conditions for time travel in Goedel's space-time are discussed. It is shown that these conditions alone do not suffice to exclude time travel in Goedel's space-time. (author)

  15. Quantum fields in curved space-times

    International Nuclear Information System (INIS)

    Ashtekar, A.; Magnon, A.

    1975-01-01

The problem of obtaining a quantum description of the (real) Klein-Gordon system in a given curved space-time is discussed. An algebraic approach is used. The *-algebra of quantum operators is constructed explicitly and the problem of finding its *-representation is reduced to that of selecting a suitable complex structure on the real vector space of the solutions of the (classical) Klein-Gordon equation. Since, in a static space-time, there already exists a satisfactory quantum field theory, in this case one already knows what the 'correct' complex structure is. A physical characterization of this 'correct' complex structure is obtained. This characterization is used to extend quantum field theory to non-static space-times. Stationary space-times are considered first. In this case, the issue of extension is completely straightforward and the resulting theory is the natural generalization of the one in static space-times. General, non-stationary space-times are then considered. In this case the issue of extension is quite complicated and only a plausible extension is presented. Although the resulting framework is well-defined mathematically, the physical interpretation associated with it is rather unconventional. Merits and weaknesses of this framework are discussed. (author)

  16. On the differentiability of space-time

    International Nuclear Information System (INIS)

    Clarke, C.J.S.

    1977-01-01

It is shown that the differentiability of a space-time is implied by that of its Riemann tensor, assuming a priori only boundedness of the first derivatives of the metric. Consequently all the results on space-time singularities proved in earlier papers by the author hold true in C^{2-} space-times. (author)

  17. Sharp Efficiency for Vector Equilibrium Problems on Banach Spaces

    Directory of Open Access Journals (Sweden)

    Si-Huan Li

    2013-01-01

    The concept of a sharp efficient solution for vector equilibrium problems on Banach spaces is proposed. Moreover, the Fermat rules for local efficient solutions of vector equilibrium problems are extended to sharp efficient solutions by means of the Clarke generalized differentiation and the normal cone. As applications, some necessary optimality conditions and sufficient optimality conditions for local sharp efficient solutions of a vector optimization problem with an abstract constraint and a vector variational inequality are obtained, respectively.

  18. Forward modeling of gravity data using geostatistically generated subsurface density variations

    Science.gov (United States)

    Phelps, Geoffrey

    2016-01-01

    Using geostatistical models of density variations in the subsurface, constrained by geologic data, forward models of gravity anomalies can be generated by discretizing the subsurface and calculating the cumulative effect of each cell (pixel). The results of such stochastically generated forward gravity anomalies can be compared with the observed gravity anomalies to find density models that match the observed data. These models have an advantage over forward gravity anomalies generated using polygonal bodies of homogeneous density because generating numerous realizations explores a larger region of the solution space. The stochastic modeling can be thought of as dividing the forward model into two components: that due to the shape of each geologic unit and that due to the heterogeneous distribution of density within each geologic unit. The modeling demonstrates that the internally heterogeneous distribution of density within each geologic unit can contribute significantly to the resulting calculated forward gravity anomaly. Furthermore, the stochastic models match observed statistical properties of geologic units, the solution space is more broadly explored by producing a suite of successful models, and the likelihood of a particular conceptual geologic model can be compared. The Vaca Fault near Travis Air Force Base, California, can be successfully modeled as a normal or strike-slip fault, with the normal fault model being slightly more probable. It can also be modeled as a reverse fault, although this structural geologic configuration is highly unlikely given the realizations we explored.
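The forward calculation described above (discretize the subsurface, sum the gravitational effect of every cell) can be sketched in a few lines. This is our own minimal illustration, not the paper's code: each cell is approximated as a point mass, which is only valid when cell size is small relative to the station-cell distance; the function name and units are assumptions.

```python
# Illustrative sketch of a forward gravity calculation over a discretized
# density model. Each cell is treated as a point mass; its vertical attraction
# at a station is G * m * dz / r^3. Depth z is measured positive downward,
# so buried cells have dz > 0 and attract the station downward (positive gz).
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def forward_gravity_mgal(station, cells):
    """station: (x, y, z) in metres; cells: (x, y, z, volume_m3, density_kg_m3)."""
    gz = 0.0
    for cx, cy, cz, vol, rho in cells:
        dx, dy, dz = cx - station[0], cy - station[1], cz - station[2]
        r = (dx * dx + dy * dy + dz * dz) ** 0.5
        gz += G * rho * vol * dz / r ** 3  # vertical component of attraction
    return gz * 1e5  # convert m/s^2 to mGal
```

In the stochastic workflow of the abstract, the per-cell densities would be drawn from geostatistical realizations and the resulting anomaly compared against the observed data for each realization.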

  19. Labeling RDF Graphs for Linear Time and Space Querying

    Science.gov (United States)

    Furche, Tim; Weinzierl, Antonius; Bry, François

    Indices and data structures for web querying have mostly considered tree-shaped data, reflecting the view of XML documents as tree-shaped. However, for RDF (and when querying ID/IDREF constraints in XML) the data is indisputably graph-shaped. In this chapter, we first study existing indexing and labeling schemes for RDF and other graph data, with a focus on support for efficient adjacency and reachability queries. For XML, labeling schemes are an important part of the widespread adoption of XML, in particular for mapping XML to existing (relational) database technology. However, the existing indexing and labeling schemes for RDF (and graph data in general) sacrifice one of the most attractive properties of XML labeling schemes, the constant-time (and per-node space) test for adjacency (child) and reachability (descendant). In the second part, we introduce the first labeling scheme for RDF data that retains this property and thus achieves linear time and space processing of acyclic RDF queries on a significantly larger class of graphs than previous approaches (which are mostly limited to tree-shaped data). Finally, we show how this labeling scheme can be applied to (acyclic) SPARQL queries to obtain an evaluation algorithm with time and space complexity linear in the number of resources in the queried RDF graph.

  20. A geostatistical estimation of zinc grade in bore-core samples

    International Nuclear Information System (INIS)

    Starzec, A.

    1987-01-01

    Possibilities and preliminary results of geostatistical interpretation of the XRF determination of zinc in bore-core samples are considered. For the spherical model of the variogram, the estimation variance of the grade in a disk-shaped sample (estimated from the grade on the circumference of the sample) is calculated. Variograms of zinc grade in core samples are presented and examples of the grade estimation are discussed. 4 refs., 7 figs., 1 tab. (author)
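The spherical variogram model used in this record has a standard closed form; a minimal sketch (function name and parameterization are ours, using a separate nugget and partial sill):

```python
def spherical_variogram(h, nugget, sill, a):
    """Semivariance at lag distance h for a spherical model with range a.

    Rises as 1.5*(h/a) - 0.5*(h/a)**3 for h < a, then flattens at
    nugget + sill for h >= a. gamma(0) = 0 by convention.
    """
    if h <= 0.0:
        return 0.0
    if h >= a:
        return nugget + sill
    x = h / a
    return nugget + sill * (1.5 * x - 0.5 * x ** 3)
```

Estimation variances like the one computed in the paper are integrals of such a model over the sample geometry (here, a disk estimated from its circumference).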

  1. Matter fields in curved space-time

    International Nuclear Information System (INIS)

    Viet, Nguyen Ai; Wali, Kameshwar C.

    2000-01-01

    We study the geometry of a two-sheeted space-time within the framework of non-commutative geometry. As a prelude to the Standard Model in curved space-time, we present a model of a left- and a right-chiral field living on the two-sheeted space-time and construct the action functionals that describe their interactions.

  2. The manifold model for space-time

    International Nuclear Information System (INIS)

    Heller, M.

    1981-01-01

    Physical processes happen on a space-time arena. It turns out that all contemporary macroscopic physical theories presuppose a common mathematical model for this arena, the so-called manifold model of space-time. The first part of the study is a heuristic introduction to the concept of a smooth manifold, starting with the intuitively clearer concepts of a curve and a surface in Euclidean space. In the second part the definitions of the C^{∞} manifold and of certain structures which arise in a natural way from the manifold concept are given. The role of the enveloping Euclidean space (i.e. of the Euclidean space appearing in the manifold definition) in these definitions is stressed. The Euclidean character of the enveloping space induces local Euclidean (topological and differential) properties on the manifold. A suggestion is made that replacing the enveloping Euclidean space by a discrete non-Euclidean space would be a correct way towards the quantization of space-time. (author)

  3. Modeling nonstationarity in space and time.

    Science.gov (United States)

    Shand, Lyndsay; Li, Bo

    2017-09-01

    We propose to model a spatio-temporal random field that has nonstationary covariance structure in both space and time domains by applying the concept of the dimension expansion method in Bornn et al. (2012). Simulations are conducted for both separable and nonseparable space-time covariance models, and the model is also illustrated with a streamflow dataset. Both simulation and data analyses show that modeling nonstationarity in both space and time can improve the predictive performance over stationary covariance models or models that are nonstationary in space but stationary in time. © 2017, The International Biometric Society.

  4. Geostatistical simulations for radon indoor with a nested model including the housing factor.

    Science.gov (United States)

    Cafaro, C; Giovani, C; Garavaglia, M

    2016-01-01

    The definition of radon-prone areas is the subject of much research in radioecology, since radon is considered a leading cause of lung tumours; the authorities therefore ask for support in developing an appropriate sanitary prevention strategy. In this paper, we use geostatistical tools to elaborate a definition accounting for some of the available information about the dwellings. Co-kriging is the interpolator used in geostatistics to refine predictions by means of external covariates. A priori, co-kriging is not guaranteed to improve significantly on the results obtained by applying the common lognormal kriging. Here, however, the multivariate approach reduces the cross-validation residual variance to an extent deemed satisfactory. Furthermore, with the application of Monte Carlo simulations, the paradigm provides a more conservative radon-prone areas definition than the one previously made by lognormal kriging. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. An Efficient Implicit FEM Scheme for Fractional-in-Space Reaction-Diffusion Equations

    KAUST Repository

    Burrage, Kevin

    2012-01-01

    Fractional differential equations are becoming increasingly used as a modelling tool for processes associated with anomalous diffusion or spatial heterogeneity. However, the presence of a fractional differential operator causes memory (time fractional) or nonlocality (space fractional) issues that impose a number of computational constraints. In this paper we develop efficient, scalable techniques for solving fractional-in-space reaction-diffusion equations using the finite element method on both structured and unstructured grids, via robust techniques for computing the fractional power of a matrix times a vector. Our approach is showcased by solving the fractional Fisher and fractional Allen-Cahn reaction-diffusion equations in two and three spatial dimensions, and analyzing the speed of the traveling wave and the size of the interface in terms of the fractional power of the underlying Laplacian operator. © 2012 Society for Industrial and Applied Mathematics.
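The core primitive named in the abstract, applying a fractional power of a matrix to a vector, can be illustrated with a dense eigendecomposition. This is our own O(n^3) sketch of the idea only; the paper's scalable techniques avoid forming the matrix function explicitly.

```python
import numpy as np

# Hedged illustration: apply a fractional power of a symmetric positive
# semi-definite matrix (e.g. a discrete Laplacian) to a vector, using
# A^alpha v = Q diag(lam**alpha) Q^T v from the eigendecomposition A = Q diag(lam) Q^T.
def frac_matvec(A, v, alpha):
    lam, Q = np.linalg.eigh(A)          # eigendecomposition of symmetric A
    lam = np.clip(lam, 0.0, None)       # guard against tiny negative round-off
    return Q @ (lam ** alpha * (Q.T @ v))
```

A useful sanity check on any such implementation is the semigroup property: applying the half power twice should reproduce one application of the matrix itself.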

  6. High-efficiency pump for space helium transfer. Final Technical Report

    International Nuclear Information System (INIS)

    Hasenbein, R.; Izenson, M.G.; Swift, W.L.; Sixsmith, H.

    1991-12-01

    A centrifugal pump was developed for the efficient and reliable transfer of liquid helium in space. The pump can be used to refill cryostats on orbiting satellites which use liquid helium for refrigeration at extremely low temperatures. The pump meets the head and flow requirements of on-orbit helium transfer: a flow rate of 800 L/hr at a head of 128 J/kg. The overall pump efficiency at the design point is 0.45. The design head and flow requirements are met with zero net positive suction head, which is the condition in an orbiting helium supply Dewar. The mass transfer efficiency calculated for a space transfer operation is 0.99. Steel ball bearings are used with glass fiber-reinforced Teflon retainers to provide solid lubrication. These bearings have demonstrated the longest life in liquid helium endurance tests under simulated pumping conditions. Technology developed in the project also has application for liquid helium circulation in terrestrial facilities and for the transfer of cryogenic rocket propellants in space.

  7. Reachable Distance Space: Efficient Sampling-Based Planning for Spatially Constrained Systems

    KAUST Repository

    Xinyu Tang,

    2010-01-25

    Motion planning for spatially constrained robots is difficult due to additional constraints placed on the robot, such as closure constraints for closed chains or requirements on end-effector placement for articulated linkages. It is usually computationally too expensive to apply sampling-based planners to these problems since it is difficult to generate valid configurations. We overcome this challenge by redefining the robot's degrees of freedom and constraints into a new set of parameters, called reachable distance space (RD-space), in which all configurations lie in the set of constraint-satisfying subspaces. This enables us to directly sample the constrained subspaces with complexity linear in the number of the robot's degrees of freedom. In addition to supporting efficient sampling of configurations, we show that the RD-space formulation naturally supports planning and, in particular, we design a local planner suitable for use by sampling-based planners. We demonstrate the effectiveness and efficiency of our approach for several systems including closed chain planning with multiple loops, restricted end-effector sampling, and on-line planning for drawing/sculpting. We can sample single-loop closed chain systems with 1,000 links in time comparable to open chain sampling, and we can generate samples for 1,000-link multi-loop systems of varying topologies in less than a second. © 2010 The Author(s).

  8. MoisturEC: an R application for geostatistical estimation of moisture content from electrical conductivity data

    Science.gov (United States)

    Terry, N.; Day-Lewis, F. D.; Werkema, D. D.; Lane, J. W., Jr.

    2017-12-01

    Soil moisture is a critical parameter for agriculture, water supply, and management of landfills. Whereas direct data (as from TDR or soil moisture probes) provide localized point scale information, it is often more desirable to produce 2D and/or 3D estimates of soil moisture from noninvasive measurements. To this end, geophysical methods for indirectly assessing soil moisture have great potential, yet are limited in terms of quantitative interpretation due to uncertainty in petrophysical transformations and inherent limitations in resolution. Simple tools to produce soil moisture estimates from geophysical data are lacking. We present a new standalone program, MoisturEC, for estimating moisture content distributions from electrical conductivity data. The program uses an indicator kriging method within a geostatistical framework to incorporate hard data (as from moisture probes) and soft data (as from electrical resistivity imaging or electromagnetic induction) to produce estimates of moisture content and uncertainty. The program features data visualization and output options as well as a module for calibrating electrical conductivity with moisture content to improve estimates. The user-friendly program is written in R - a widely used, cross-platform, open source programming language that lends itself to further development and customization. We demonstrate use of the program with a numerical experiment as well as a controlled field irrigation experiment. Results produced from the combined geostatistical framework of MoisturEC show improved estimates of moisture content compared to those generated from individual datasets. This application provides a convenient and efficient means for integrating various data types and has broad utility to soil moisture monitoring in landfills, agriculture, and other problems.

  9. Chapter J: Issues and challenges in the application of geostatistics and spatial-data analysis to the characterization of sand-and-gravel resources

    Science.gov (United States)

    Hack, Daniel R.

    2005-01-01

    Sand-and-gravel (aggregate) resources are a critical component of the Nation's infrastructure, yet aggregate-mining technologies lag far behind those of metalliferous mining and other sectors. Deposit-evaluation and site-characterization methodologies are antiquated, and few serious studies of the potential applications of spatial-data analysis and geostatistics have been published. However, because of commodity usage and the necessary proximity of a mine to end use, aggregate-resource exploration and evaluation differ fundamentally from comparable activities for metalliferous ores. Acceptable practices, therefore, can reflect this cruder scale. The increasing use of computer technologies is colliding with the need for sand-and-gravel mines to modernize and improve their overall efficiency of exploration, mine planning, scheduling, automation, and other operations. The emergence of megaquarries in the 21st century will also be a contributing factor. Preliminary research into the practical applications of exploratory-data analysis (EDA) have been promising. For example, EDA was used to develop a linear-regression equation to forecast freeze-thaw durability from absorption values for Lower Paleozoic carbonate rocks mined for crushed aggregate from quarries in Oklahoma. Applications of EDA within a spatial context, a method of spatial-data analysis, have also been promising, as with the investigation of undeveloped sand-and-gravel resources in the sedimentary deposits of Pleistocene Lake Bonneville, Utah. Formal geostatistical investigations of sand-and-gravel deposits are quite rare, and the primary focus of those studies that have been completed is on the spatial characterization of deposit thickness and its subsequent effect on ore reserves. A thorough investigation of a gravel deposit in an active aggregate-mining area in central Essex, U.K., emphasized the problems inherent in the geostatistical characterization of particle-size-analysis data. 

  10. Space-Time and Architecture

    Science.gov (United States)

    Field, F.; Goodbun, J.; Watson, V.

    Architects have a role to play in interplanetary space that has barely yet been explored. The architectural community is largely unaware of this new territory, for which there is still no agreed method of practice. There is moreover a general confusion, in scientific and related fields, over what architects might actually do there today. Current extra-planetary designs generally fail to explore the dynamic and relational nature of space-time, and often reduce human habitation to a purely functional problem. This is compounded by a crisis over the representation (drawing) of space-time. The present work returns to first principles of architecture in order to realign them with current socio-economic and technological trends surrounding the space industry. What emerges is simultaneously the basis for an ecological space architecture, and the representational strategies necessary to draw it. We explore this approach through a work of design-based research that describes the construction of Ocean; a huge body of water formed by the collision of two asteroids at the Translunar Lagrange Point (L2), that would serve as a site for colonisation, and as a resource to fuel future missions. Ocean is an experimental model for extra-planetary space design and its representation, within the autonomous discipline of architecture.

  11. The topology of geodesically complete space-times

    International Nuclear Information System (INIS)

    Lee, C.W.

    1983-01-01

    Two theorems are given on the topology of geodesically complete space-times which satisfy the energy condition. Firstly, the condition that a compact embedded 3-manifold in space-time be dentless is defined in terms of causal structure. Then it is shown that a dentless 3-manifold must separate space-time, and that it must enclose a compact portion of space-time. Further, it is shown that if the dentless 3-manifold is homeomorphic to S^3 then the part of space-time that it encloses must be simply connected. (author)

  12. Philosophy of physics space and time

    CERN Document Server

    Maudlin, Tim

    2012-01-01

    This concise book introduces nonphysicists to the core philosophical issues surrounding the nature and structure of space and time, and is also an ideal resource for physicists interested in the conceptual foundations of space-time theory. Tim Maudlin's broad historical overview examines Aristotelian and Newtonian accounts of space and time, and traces how Galileo's conceptions of relativity and space-time led to Einstein's special and general theories of relativity. Maudlin explains special relativity using a geometrical approach, emphasizing intrinsic space-time structure rather than coordinate systems or reference frames. He gives readers enough detail about special relativity to solve concrete physical problems while presenting general relativity in a more qualitative way, with an informative discussion of the geometrization of gravity, the bending of light, and black holes. Additional topics include the Twins Paradox, the physical aspects of the Lorentz-FitzGerald contraction, the constancy of the speed...

  13. Real-time validation of receiver state information in optical space-time block code systems.

    Science.gov (United States)

    Alamia, John; Kurzweg, Timothy

    2014-06-15

    Free space optical interconnect (FSOI) systems are a promising solution to interconnect bottlenecks in high-speed systems. To overcome some sources of diminished FSOI performance caused by close proximity of multiple optical channels, multiple-input multiple-output (MIMO) systems implementing encoding schemes such as space-time block coding (STBC) have been developed. These schemes utilize information pertaining to the optical channel to reconstruct transmitted data. The STBC system is dependent on accurate channel state information (CSI) for optimal system performance. As a result of dynamic changes in optical channels, a system in operation will need to have updated CSI. Therefore, validation of the CSI during operation is a necessary tool to ensure FSOI systems operate efficiently. In this Letter, we demonstrate a method of validating CSI, in real time, through the use of moving averages of the maximum likelihood decoder data, and its capacity to predict the bit error rate (BER) of the system.
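The validation idea described above, a moving average of the maximum likelihood decoder output used to judge whether the stored CSI is still accurate, can be sketched as follows. The class, window size, and threshold are our assumptions for illustration, not the Letter's implementation.

```python
from collections import deque

# Illustrative sketch: track a moving average of a per-symbol decoder error
# metric and flag when it drifts above a threshold, suggesting the stored
# channel state information (CSI) is stale and should be re-estimated.
class CSIMonitor:
    def __init__(self, window=32, threshold=0.5):
        self.buf = deque(maxlen=window)   # oldest samples drop out automatically
        self.threshold = threshold

    def update(self, metric):
        """Add one decoder metric; return True when the windowed mean exceeds the threshold."""
        self.buf.append(metric)
        return sum(self.buf) / len(self.buf) > self.threshold
```

In operation, a True return would trigger CSI re-estimation before the rising metric degrades the bit error rate.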

  14. Semiclassical expanding discrete space-times

    International Nuclear Information System (INIS)

    Cobb, W.K.; Smalley, L.L.

    1981-01-01

    Given the close ties between general relativity and geometry one might reasonably expect that quantum effects associated with gravitation might also be tied to the geometry of space-time, namely, to some sort of discreteness in space-time itself. In particular it is supposed that space-time consists of a discrete lattice of points rather than the usual continuum. Since astronomical evidence seems to suggest that the universe is expanding, the lattice must also expand. Some of the implications of such a model are that the proton should presently be stable, and the universe should be closed although the mechanism for closure is quantum mechanical. (author)

  15. High-Efficiency Reliable Stirling Generator for Space Exploration Missions, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA needs advanced power-conversion technologies to improve the efficiency and reliability of power conversion for space exploration missions. We propose to develop...

  16. Assessing the spatial distribution of Tuta absoluta (Lepidoptera: Gelechiidae) eggs in open-field tomato cultivation through geostatistical analysis.

    Science.gov (United States)

    Martins, Júlio C; Picanço, Marcelo C; Silva, Ricardo S; Gonring, Alfredo Hr; Galdino, Tarcísio Vs; Guedes, Raul Nc

    2018-01-01

    The spatial distribution of insects is due to the interaction between individuals and the environment. Knowledge about the within-field pattern of spatial distribution of a pest is critical to planning control tactics, developing efficient sampling plans, and predicting pest damage. The leaf miner Tuta absoluta (Meyrick) (Lepidoptera: Gelechiidae) is the main pest of tomato crops in several regions of the world. Despite the importance of this pest, the pattern of spatial distribution of T. absoluta in open-field tomato cultivation remains unknown. Therefore, this study aimed to characterize the spatial distribution of T. absoluta in 22 commercial open-field tomato cultivations with plants at three phenological development stages by using geostatistical analysis. Geostatistical analysis revealed strong evidence for spatially dependent (aggregated) T. absoluta eggs in 19 of the 22 sampled tomato cultivations. The maps that were obtained demonstrated the aggregated structure of egg densities at the edges of the crops. Further, T. absoluta was found to accomplish egg dispersal along the rows more frequently than between rows. Our results indicate that the greatest egg densities of T. absoluta occur at the edges of tomato crops. These results are discussed in relation to the behavior of T. absoluta distribution within fields and in terms of their implications for improved sampling guidelines and precision targeting of control methods that are essential for effective pest monitoring and management. © 2017 Society of Chemical Industry.

  17. Fermion systems in discrete space-time

    International Nuclear Information System (INIS)

    Finster, Felix

    2007-01-01

    Fermion systems in discrete space-time are introduced as a model for physics on the Planck scale. We set up a variational principle which describes a non-local interaction of all fermions. This variational principle is symmetric under permutations of the discrete space-time points. We explain how for minimizers of the variational principle, the fermions spontaneously break this permutation symmetry and induce on space-time a discrete causal structure

  18. Fermion systems in discrete space-time

    Energy Technology Data Exchange (ETDEWEB)

    Finster, Felix [NWF I - Mathematik, Universitaet Regensburg, 93040 Regensburg (Germany)

    2007-05-15

    Fermion systems in discrete space-time are introduced as a model for physics on the Planck scale. We set up a variational principle which describes a non-local interaction of all fermions. This variational principle is symmetric under permutations of the discrete space-time points. We explain how for minimizers of the variational principle, the fermions spontaneously break this permutation symmetry and induce on space-time a discrete causal structure.

  19. Fermion Systems in Discrete Space-Time

    OpenAIRE

    Finster, Felix

    2006-01-01

    Fermion systems in discrete space-time are introduced as a model for physics on the Planck scale. We set up a variational principle which describes a non-local interaction of all fermions. This variational principle is symmetric under permutations of the discrete space-time points. We explain how for minimizers of the variational principle, the fermions spontaneously break this permutation symmetry and induce on space-time a discrete causal structure.

  20. Fermion systems in discrete space-time

    Science.gov (United States)

    Finster, Felix

    2007-05-01

    Fermion systems in discrete space-time are introduced as a model for physics on the Planck scale. We set up a variational principle which describes a non-local interaction of all fermions. This variational principle is symmetric under permutations of the discrete space-time points. We explain how for minimizers of the variational principle, the fermions spontaneously break this permutation symmetry and induce on space-time a discrete causal structure.

  1. A Comparison of Traditional, Step-Path, and Geostatistical Techniques in the Stability Analysis of a Large Open Pit

    Science.gov (United States)

    Mayer, J. M.; Stead, D.

    2017-04-01

    With the increased drive towards deeper and more complex mine designs, geotechnical engineers are often forced to reconsider traditional deterministic design techniques in favour of probabilistic methods. These alternative techniques allow for the direct quantification of uncertainties within a risk and/or decision analysis framework. However, conventional probabilistic practices typically discretize geological materials into discrete, homogeneous domains, with attributes defined by spatially constant random variables, despite the fact that geological media display inherent heterogeneous spatial characteristics. This research directly simulates this phenomenon using a geostatistical approach, known as sequential Gaussian simulation. The method utilizes the variogram which imposes a degree of controlled spatial heterogeneity on the system. Simulations are constrained using data from the Ok Tedi mine site in Papua New Guinea and designed to randomly vary the geological strength index and uniaxial compressive strength using Monte Carlo techniques. Results suggest that conventional probabilistic techniques have a fundamental limitation compared to geostatistical approaches, as they fail to account for the spatial dependencies inherent to geotechnical datasets. This can result in erroneous model predictions, which are overly conservative when compared to the geostatistical results.

  2. Geostatistics: a common link between medical geography, mathematical geology, and medical geology.

    Science.gov (United States)

    Goovaerts, P

    2014-08-01

    Since its development in the mining industry, geostatistics has emerged as the primary tool for spatial data analysis in various fields, ranging from earth and atmospheric sciences to agriculture, soil science, remote sensing, and more recently environmental exposure assessment. In the last few years, these tools have been tailored to the field of medical geography or spatial epidemiology, which is concerned with the study of spatial patterns of disease incidence and mortality and the identification of potential 'causes' of disease, such as environmental exposure, diet and unhealthy behaviours, economic or socio-demographic factors. On the other hand, medical geology is an emerging interdisciplinary scientific field studying the relationship between natural geological factors and their effects on human and animal health. This paper provides an introduction to the field of medical geology with an overview of geostatistical methods available for the analysis of geological and health data. Key concepts are illustrated using the mapping of groundwater arsenic concentration across eleven Michigan counties and the exploration of its relationship to the incidence of prostate cancer at the township level.

  3. Can Geostatistical Models Represent Nature's Variability? An Analysis Using Flume Experiments

    Science.gov (United States)

    Scheidt, C.; Fernandes, A. M.; Paola, C.; Caers, J.

    2015-12-01

    The lack of understanding of the Earth's geological and physical processes governing sediment deposition renders subsurface modeling subject to large uncertainty. Geostatistics is often used to model uncertainty because of its capability to stochastically generate spatially varying realizations of the subsurface. These methods can generate a range of realizations of a given pattern - but how representative are these of the full natural variability? And how can we identify the minimum set of images that represent this natural variability? Here we use this minimum set to define the geostatistical prior model: a set of training images that represent the range of patterns generated by autogenic variability in the sedimentary environment under study. The proper definition of the prior model is essential in capturing the variability of the depositional patterns. This work starts with a set of overhead images from an experimental basin that showed ongoing autogenic variability. We use the images to analyze the essential characteristics of this suite of patterns. In particular, our goal is to define a prior model (a minimal set of selected training images) such that geostatistical algorithms, when applied to this set, can reproduce the full measured variability. A necessary prerequisite is to define a measure of variability. In this study, we measure variability using a dissimilarity distance between the images. The distance indicates whether two snapshots contain similar depositional patterns. To reproduce the variability in the images, we apply an MPS algorithm to the set of selected snapshots of the sedimentary basin that serve as training images. The training images are chosen from among the initial set by using the distance measure to ensure that only dissimilar images are chosen. Preliminary investigations show that MPS can reproduce fairly accurately the natural variability of the experimental depositional system. Furthermore, the selected training images provide

  4. Time-REferenced data Kriging (TREK): mapping hydrological statistics given their time of reference

    Science.gov (United States)

    Porcheron, Delphine; Leblois, Etienne; Sauquet, Eric

    2016-04-01

    A major issue in water sciences is to predict runoff parameters at ungauged sites. Estimates can be obtained by various methods. Among them, geostatistical approaches provide interpolation methods that rely on explicit assumptions about the variable of interest. Geostatistical techniques have been applied to precipitation and temperature fields and later extended to estimate runoff features considered as basin-support variates along the river network (e.g. Gottschalk, 1993; Sauquet et al., 2000; Skoien et al., 2006; Gottschalk et al., 2011). To obtain robust estimations, the first step is to collect a relevant dataset. Sauquet et al. (2000) and Sauquet (2006) suggest including a large number of catchments with long and common observation periods to ensure both reliability and temporal consistency in runoff estimates. However, most observation networks evolve with time. Several choices are thus possible to define an optimal reference period maximizing either spatial or temporal overlap, and the constraints usually lead to discarding a significant number of stations. The Time-REferenced data Kriging (TREK) method has been developed to overcome this issue. We propose a method of geostatistical estimation that accounts for the temporal support over which a hydrological statistic has been estimated. This attenuates the loss of data previously caused by the application of a strict reference period, while the time reference remains for the targeted map itself. The kriging weights depend on the observation period of the data included in the dataset and how near this is to the target period. In this presentation, the concepts of TREK will be introduced and thereafter illustrated to map mean annual runoff in France. References Gottschalk, L., 1993, Correlation and covariance of runoff. Stochastic Hydrology and Hydraulics 7(2), 85-101. Sauquet, E., Gottschalk, L. and Leblois, E., 2000, Mapping average annual runoff: a hierarchical approach applying a stochastic interpolation

  5. Possibility of extending space-time coordinates

    International Nuclear Information System (INIS)

    Wang Yongcheng.

    1993-11-01

    It has been shown that one coordinate system can describe a whole space-time region except for some supersurfaces on which there are coordinate singularities. The conditions for extending a coordinate from the real field to the complex field are studied. It has been shown that many-valued coordinate transformations may help us to extend space-time regions, and that many-valued metric functions may allow one coordinate region to describe more than one space-time region. (author). 11 refs

  6. An efficient implementation of maximum likelihood identification of LTI state-space models by local gradient search

    NARCIS (Netherlands)

    Bergboer, N.H.; Verdult, V.; Verhaegen, M.H.G.

    2002-01-01

    We present a numerically efficient implementation of the nonlinear least squares and maximum likelihood identification of multivariable linear time-invariant (LTI) state-space models. This implementation is based on a local parameterization of the system and a gradient search in the resulting

  7. Some Peculiarities of Newton-Hooke Space-Times

    OpenAIRE

    Tian, Yu

    2011-01-01

    Newton-Hooke space-times are the non-relativistic limit of (anti-)de Sitter space-times. We investigate some peculiar facts about the Newton-Hooke space-times, among which the "extraordinary Newton-Hooke quantum mechanics" and the "anomalous Newton-Hooke space-times" are discussed in detail. Analysis on the Lagrangian/action formalism is performed in the discussion of the Newton-Hooke quantum mechanics, where the path integral point of view plays an important role, and the physically measurab...

  8. Tunneling time in space fractional quantum mechanics

    Science.gov (United States)

    Hasan, Mohammad; Mandal, Bhabani Prasad

    2018-02-01

    We calculate the time taken by a wave packet to travel through a classically forbidden region of space in space fractional quantum mechanics. We obtain the closed-form expression of the tunneling time from a rectangular barrier by the stationary phase method. We show that the tunneling time depends upon the width b of the barrier for b → ∞ and therefore the Hartman effect does not exist in space fractional quantum mechanics. Interestingly, we find that the tunneling time decreases monotonically with increasing b. The tunneling time is smaller in space fractional quantum mechanics than in standard quantum mechanics. We recover the Hartman effect of standard quantum mechanics as a special case of space fractional quantum mechanics.

  9. Finiteness principle and the concept of space-time

    International Nuclear Information System (INIS)

    Tati, T.

    1984-01-01

    It is shown that the non-space-time description can be given by a system of axioms under the postulate of a certain number of pre-supposed physical concepts in which space-time is not included. It is found that space-time is a compound concept of presupposed concepts of non-space-time description connected by an additional condition called 'space-time condition'. (L.C.) [pt

  10. Causal boundary for stably causal space-times

    International Nuclear Information System (INIS)

    Racz, I.

    1987-12-01

    The usual boundary constructions for space-times often yield an unsatisfactory boundary set. This problem is reviewed and a new solution is proposed. An explicit identification rule is given on the set of the ideal points of the space-time. This construction leads to a satisfactory boundary point set structure for stably causal space-times. The topological properties of the resulting causal boundary construction are examined. For the stably causal space-times each causal curve has a unique endpoint on the boundary set according to the extended Alexandrov topology. The extension of the space-time through the boundary is discussed. To describe the singularities the defined boundary sets have to be separated into two disjoint sets. (D.Gy.) 8 refs

  11. Stochastic quantization of geometrodynamic curved space-time

    International Nuclear Information System (INIS)

    Prugovecki, E.

    1981-01-01

    It is proposed that quantum rather than classical test particles be used in recent operational definitions of space-time. In the resulting quantum space-time the role of test particle trajectories is taken over by propagators. The introduced co-ordinate values are stochastic rather than deterministic, the afore-mentioned propagators providing probability amplitudes describing fluctuations of measured co-ordinates around their mean values. It is shown that, if a geometrodynamic point of view based on 3 + 1 foliations of space-time is adopted, self-consistent families of propagators for quantum test particles in free fall can be constructed. The resulting formalism for quantum space-time is outlined and the quantization of spatially flat Robertson-Walker space-times is provided as an illustration. (author)

  12. State Space Methods for Timed Petri Nets

    DEFF Research Database (Denmark)

    Christensen, Søren; Jensen, Kurt; Mailund, Thomas

    2001-01-01

    We present two recently developed state space methods for timed Petri nets. The two methods reconcile state space methods with time concepts based on the introduction of a global clock and the association of time stamps with tokens. The first method is based on an equivalence relation on states which makes it possible to condense the usually infinite state space of a timed Petri net into a finite condensed state space without losing analysis power. The second method supports on-the-fly verification of certain safety properties of timed systems. We discuss the application of the two methods in a number...

  13. On Space-Time Resolution of Inflow Representations for Wind Turbine Loads Analysis

    Directory of Open Access Journals (Sweden)

    Lance Manuel

    2012-06-01

    Full Text Available Efficient spatial and temporal resolution of simulated inflow wind fields is important in order to represent wind turbine dynamics and derive load statistics for design. Using Fourier-based stochastic simulation of inflow turbulence, we first investigate loads for a utility-scale turbine in the neutral atmospheric boundary layer. Load statistics, spectra, and wavelet analysis representations for different space and time resolutions are compared. Next, large-eddy simulation (LES is employed with space-time resolutions, justified on the basis of the earlier stochastic simulations, to again derive turbine loads. Extreme and fatigue loads from the two approaches used in inflow field generation are compared. On the basis of simulation studies carried out for three different wind speeds in the turbine’s operating range, it is shown that inflow turbulence described using 10-meter spatial resolution and 1 Hz temporal resolution is adequate for assessing turbine loads. Such studies on the investigation of adequate filtering or resolution of inflow wind fields help to establish efficient strategies for LES and other physical or stochastic simulation needed in turbine loads studies.

  14. Quantum relativity theory and quantum space-time

    International Nuclear Information System (INIS)

    Banai, M.

    1984-01-01

    A quantum relativity theory formulated in terms of Davis's quantum relativity principle is outlined. The first task in this theory, as in classical relativity theory, is to model space-time, the arena of natural processes. It is shown that the quantum space-time models of Banai introduced in another paper are formulated in terms of Davis's quantum relativity. The recently proposed classical relativistic quantum theory of Prugovecki and his corresponding classical relativistic quantum model of space-time open the way to introduce, in a consistent way, the quantum space-time model (the quantum substitute of Minkowski space) of Banai proposed in the paper mentioned. The goal of the quantum mechanics of quantum relativistic particles living in this model of space-time is to predict the rest-mass system properties of classically relativistic (massive) quantum particles (''elementary particles''). The main new aspect of this quantum mechanics is that it provides a true mass eigenvalue problem, and that the excited mass states of quantum relativistic particles can be interpreted as elementary particles. The question of field theory over the quantum relativistic model of space-time is also discussed. Finally it is suggested that ''quarks'' should be considered as quantum relativistic particles. (author)

  15. A space-efficient algorithm for local similarities.

    Science.gov (United States)

    Huang, X Q; Hardison, R C; Miller, W

    1990-10-01

    Existing dynamic-programming algorithms for identifying similar regions of two sequences require time and space proportional to the product of the sequence lengths. Often this space requirement is more limiting than the time requirement. We describe a dynamic-programming local-similarity algorithm that needs only space proportional to the sum of the sequence lengths. The method can also find repeats within a single long sequence. To illustrate the algorithm's potential, we discuss comparison of a 73,360 nucleotide sequence containing the human beta-like globin gene cluster and a corresponding 44,594 nucleotide sequence for rabbit, a problem well beyond the capabilities of other dynamic-programming software.
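
    The space-saving idea can be illustrated by computing the optimal local-alignment (Smith-Waterman) score with a single dynamic-programming row. This minimal sketch recovers only the score; the published algorithm also recovers the similar regions themselves, which needs additional machinery. Scoring parameters are illustrative.

```python
def local_similarity_score(a, b, match=1, mismatch=-1, gap=-1):
    """Best local-alignment score using O(len(b)) space.

    Only one row of the dynamic-programming matrix is kept at a time, so
    space grows with one sequence length rather than with the product of
    both lengths.
    """
    prev = [0] * (len(b) + 1)
    best = 0
    for ca in a:
        curr = [0] * (len(b) + 1)
        for j, cb in enumerate(b, start=1):
            s = match if ca == cb else mismatch
            # Local alignment: scores are clamped at zero so an alignment
            # can restart anywhere.
            curr[j] = max(0, prev[j - 1] + s, prev[j] + gap, curr[j - 1] + gap)
            best = max(best, curr[j])
        prev = curr
    return best

print(local_similarity_score("abcxxx", "zzabc"))  # -> 3 (shared substring "abc")
```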

  16. Balancing creativity and time efficiency in multi-team R&D projects: The alignment of formal and informal networks

    DEFF Research Database (Denmark)

    Kratzer, Jan; Gemuenden, Hans Georg; Lettl, Christopher

    2008-01-01

    ...and their effect on the challenge to balance project creativity and time efficiency. In order to analyse this issue, data on two multi-team R&D projects in the space industry are collected. There are two intriguing findings that partly contradict state-of-the-art knowledge. First, formally ascribed design...... with the team's creativity, whereas it negatively impacts the team's time efficiency....

  17. Improved Resolution Optical Time Stretch Imaging Based on High Efficiency In-Fiber Diffraction.

    Science.gov (United States)

    Wang, Guoqing; Yan, Zhijun; Yang, Lei; Zhang, Lin; Wang, Chao

    2018-01-12

    The most overlooked challenges in ultrafast optical time stretch imaging (OTSI) are sacrificed spatial resolution and higher optical loss. These challenges originate from the optical diffraction devices used in OTSI, which encode images into the spectra of ultrashort optical pulses. Conventional free-space diffraction gratings, widely used in existing OTSI systems, suffer from several inherent drawbacks: limited diffraction efficiency in a non-Littrow configuration due to inherent zeroth-order reflection, high coupling loss between free-space gratings and optical fibers, a bulky footprint, and, more importantly, sacrificed imaging resolution due to non-full-aperture illumination for individual wavelengths. Here we report resolution-improved and diffraction-efficient OTSI using in-fiber diffraction, for the first time to our knowledge. The key to overcoming the existing challenges is a 45° tilted fiber grating (TFG), which serves as a compact in-fiber diffraction device offering improved diffraction efficiency (up to 97%), inherent compatibility with optical fibers, and improved imaging resolution owing to almost full-aperture illumination for all illumination wavelengths. Imaging of a fast-moving object at 46 m/s at 50 million frames per second with improved resolution has been demonstrated. This conceptually new in-fiber diffraction design opens the way towards cost-effective, compact and high-resolution OTSI systems for image-based high-throughput detection and measurement.

  18. The equivalence of perfect fluid space-times and viscous magnetohydrodynamic space-times in general relativity

    International Nuclear Information System (INIS)

    Tupper, B.O.J.

    1983-01-01

    The work of a previous article is extended to show that space-times which are the exact solutions of the field equations for a perfect fluid also may be exact solutions of the field equations for a viscous magnetohydrodynamic fluid. Conditions are found for this equivalence to exist and viscous magnetohydrodynamic solutions are found for a number of known perfect fluid space-times. (author)

  19. A Reparametrization Approach for Dynamic Space-Time Models

    OpenAIRE

    Lee, Hyeyoung; Ghosh, Sujit K.

    2008-01-01

    Researchers in diverse areas such as environmental and health sciences are increasingly working with data collected across space and time. The space-time processes that are generally used in practice are often complicated in the sense that the auto-dependence structure across space and time is non-trivial, often non-separable and non-stationary in space and time. Moreover, the dimension of such data sets across both space and time can be very large leading to computational difficulties due to...

  20. Geostatistical Analysis of Mesoscale Spatial Variability and Error in SeaWiFS and MODIS/Aqua Global Ocean Color Data

    Science.gov (United States)

    Glover, David M.; Doney, Scott C.; Oestreich, William K.; Tullo, Alisdair W.

    2018-01-01

    Mesoscale (10-300 km, weeks to months) physical variability strongly modulates the structure and dynamics of planktonic marine ecosystems via both turbulent advection and environmental impacts upon biological rates. Using structure function analysis (geostatistics), we quantify the mesoscale biological signals within global 13 year SeaWiFS (1998-2010) and 8 year MODIS/Aqua (2003-2010) chlorophyll a ocean color data (Level-3, 9 km resolution). We present geographical distributions, seasonality, and interannual variability of key geostatistical parameters: unresolved variability or noise, resolved variability, and spatial range. Resolved variability is nearly identical for both instruments, indicating that geostatistical techniques isolate a robust measure of biophysical mesoscale variability largely independent of measurement platform. In contrast, unresolved variability in MODIS/Aqua is substantially lower than in SeaWiFS, especially in oligotrophic waters where previous analysis identified a problem for the SeaWiFS instrument likely due to sensor noise characteristics. Both records exhibit a statistically significant relationship between resolved mesoscale variability and the low-pass filtered chlorophyll field horizontal gradient magnitude, consistent with physical stirring acting on large-scale gradient as an important factor supporting observed mesoscale variability. Comparable horizontal length scales for variability are found from tracer-based scaling arguments and geostatistical decorrelation. Regional variations between these length scales may reflect scale dependence of biological mechanisms that also create variability directly at the mesoscale, for example, enhanced net phytoplankton growth in coastal and frontal upwelling and convective mixing regions. Global estimates of mesoscale biophysical variability provide an improved basis for evaluating higher resolution, coupled ecosystem-ocean general circulation models, and data assimilation.
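
    As an illustration of the structure-function (semivariogram) analysis described here, a minimal empirical estimator can be sketched as follows. The 1-D transect, the lag binning, and the synthetic gradient data are our own illustrative choices, not the paper's Level-3 processing; in geostatistical terms the small-lag intercept estimates unresolved variability (nugget), the plateau the resolved variability (sill), and the distance at which the plateau is reached the spatial range.

```python
import numpy as np

def empirical_semivariogram(x, z, lags):
    """First-order structure function gamma(h) = 0.5 * E[(z(x) - z(x+h))^2].

    x: 1-D sample coordinates, z: values, lags: bin edges for separation h.
    Returns bin centres and the mean semivariance per lag bin.
    """
    x = np.asarray(x, float)
    z = np.asarray(z, float)
    i, j = np.triu_indices(len(x), k=1)   # all unordered sample pairs
    h = np.abs(x[i] - x[j])
    sq = 0.5 * (z[i] - z[j]) ** 2
    which = np.digitize(h, lags)
    centres, gamma = [], []
    for b in range(1, len(lags)):
        m = which == b
        if m.any():
            centres.append(0.5 * (lags[b - 1] + lags[b]))
            gamma.append(sq[m].mean())
    return np.array(centres), np.array(gamma)

# Hypothetical transect: a pure smooth gradient, so gamma grows with lag.
x = np.arange(20.0)
z = 0.5 * x
c, g = empirical_semivariogram(x, z, lags=np.array([0.0, 1.5, 3.5, 6.5]))
print(g)  # semivariance increases with separation for a pure gradient
```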

  1. SparseLeap: Efficient Empty Space Skipping for Large-Scale Volume Rendering

    KAUST Repository

    Hadwiger, Markus; Al-Awami, Ali K.; Beyer, Johanna; Agus, Marco; Pfister, Hanspeter

    2017-01-01

    Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activate a different set of objects, which makes it unfeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.
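
    The ray-segment-list idea can be sketched in a few lines of CPU code. This is only a schematic of the per-pixel data structure, assuming hypothetical per-ray entry/exit intervals for the active objects; it is not the GPU rasterization pipeline described in the paper.

```python
def build_ray_segments(boxes):
    """Merge the [entry, exit) intervals of the non-empty bounding boxes hit
    by one ray into a sorted, disjoint segment list (a CPU sketch of the
    per-pixel structure SparseLeap rasterizes on the GPU).
    """
    merged = []
    for t0, t1 in sorted(boxes):
        if merged and t0 <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], t1)  # overlapping: extend
        else:
            merged.append([t0, t1])                 # disjoint: new segment
    return [tuple(s) for s in merged]

def sample_positions(segments, step):
    """Ray-cast only inside non-empty segments, leaping over empty space
    without any hierarchy traversal."""
    pos = []
    for t0, t1 in segments:
        t = t0
        while t < t1:
            pos.append(round(t, 6))
            t += step
    return pos

# Hypothetical ray: three object intervals, two of which overlap.
segs = build_ray_segments([(0.2, 0.4), (0.35, 0.5), (0.8, 0.9)])
print(segs)                        # [(0.2, 0.5), (0.8, 0.9)]
print(sample_positions(segs, 0.1))  # sample positions fall only inside the two segments
```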

  3. Temperature and entropy of Schwarzschild-de Sitter space-time

    International Nuclear Information System (INIS)

    Shankaranarayanan, S.

    2003-01-01

    In the light of recent interest in quantum gravity in de Sitter space, we investigate semiclassical aspects of four-dimensional Schwarzschild-de Sitter space-time using the method of complex paths. The standard semiclassical techniques (such as Bogoliubov coefficients and Euclidean field theory) have been useful for studying quantum effects in space-times with single horizons; however, none of these approaches seem to work for Schwarzschild-de Sitter space-time or, in general, for space-times with multiple horizons. We extend the method of complex paths to space-times with multiple horizons and obtain the spectrum of particles produced in these space-times. We show that the temperature of radiation in these space-times is proportional to the effective surface gravity--the inverse harmonic sum of the surface gravities of the horizons. For the Schwarzschild-de Sitter space-time, we apply the method of complex paths to three different coordinate systems--spherically symmetric, Painleve, and Lemaitre. We show that the equilibrium temperature in Schwarzschild-de Sitter space-time is the harmonic mean of the cosmological and event horizon temperatures. We obtain Bogoliubov coefficients for space-times with multiple horizons by analyzing the mode functions of the quantum fields near the horizons. We propose a new definition of entropy for space-times with multiple horizons, analogous to the entropic definition for space-times with a single horizon. We define entropy for these space-times to be inversely proportional to the square of the effective surface gravity. We show that this definition of entropy for Schwarzschild-de Sitter space-time satisfies the D-bound conjecture
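
    Up to convention-dependent numerical factors, the relations stated in the abstract can be written compactly (with \kappa_i the surface gravity of horizon i; the proportionalities, not the abstract, fix any constants):

```latex
\frac{1}{\kappa_{\mathrm{eff}}} \;=\; \sum_i \frac{1}{\kappa_i},
\qquad
T \;\propto\; \kappa_{\mathrm{eff}},
\qquad
S \;\propto\; \frac{1}{\kappa_{\mathrm{eff}}^{2}} .
```

    For two horizons the first relation gives a harmonic-mean-type combination of the cosmological and event horizon temperatures, as stated for Schwarzschild-de Sitter space-time.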

  4. Classification of Animal Movement Behavior through Residence in Space and Time.

    Science.gov (United States)

    Torres, Leigh G; Orben, Rachael A; Tolkova, Irina; Thompson, David R

    2017-01-01

    Identification and classification of behavior states in animal movement data can be complex, temporally biased, time-intensive, scale-dependent, and unstandardized across studies and taxa. Large movement datasets are increasingly common and there is a need for efficient methods of data exploration that adjust to the individual variability of each track. We present the Residence in Space and Time (RST) method to classify behavior patterns in movement data based on the concept that behavior states can be partitioned by the amount of space and time occupied in an area of constant scale. Using normalized values of Residence Time and Residence Distance within a constant search radius, RST is able to differentiate behavior patterns that are time-intensive (e.g., rest), time & distance-intensive (e.g., area restricted search), and transit (short time and distance). We use grey-headed albatross (Thalassarche chrysostoma) GPS tracks to demonstrate RST's ability to classify behavior patterns and adjust to the inherent scale and individuality of each track. Next, we evaluate RST's ability to discriminate between behavior states relative to other classical movement metrics. We then temporally sub-sample albatross track data to illustrate RST's response to less resolved data. Finally, we evaluate RST's performance using datasets from four taxa with diverse ecology, functional scales, ecosystems, and data-types. We conclude that RST is a robust, rapid, and flexible method for detailed exploratory analysis and meta-analyses of behavioral states in animal movement data based on its ability to integrate distance and time measurements into one descriptive metric of behavior groupings. Given the increasing amount of animal movement data collected, it is timely and useful to implement a consistent metric of behavior classification to enable efficient and comparative analyses. Overall, the application of RST to objectively explore and compare behavior patterns in movement data can
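
    A toy classifier in the spirit of RST's three behaviour groupings can be sketched as below. The thresholds, the normalization, and the labels are our own illustrative choices, not the published procedure, which derives normalized Residence Time and Residence Distance from the track itself within a constant search radius.

```python
def classify_rst(norm_time, norm_dist, thresh=0.5):
    """Toy classification in the spirit of RST: behaviours partitioned by
    how much time and distance a track accumulates inside a constant-scale
    circle (thresholds here are illustrative).
    """
    if norm_time >= thresh and norm_dist < thresh:
        return "rest"                    # time-intensive
    if norm_time >= thresh and norm_dist >= thresh:
        return "area-restricted search"  # time & distance-intensive
    return "transit"                     # short time and distance

print(classify_rst(0.9, 0.1))  # rest
print(classify_rst(0.8, 0.8))  # area-restricted search
print(classify_rst(0.1, 0.2))  # transit
```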

  5. Space-time design of the public city

    CERN Document Server

    Thomaier, Susanne; Könecke, Benjamin; Zedda, Roberto; Stabilini, Stefano

    2013-01-01

    Time has become an increasingly important topic in urban studies and urban planning. The spatial-temporal interplay is not only of relevance for the theory of urban development and urban politics, but also for urban planning and governance. The space-time approach focuses on the human being with its various habits and routines in the city. Understanding and taking those habits into account in urban planning and public policies offers a new way to improve the quality of life in our cities. Adapting the supply and accessibility of public spaces and services to the inhabitants’ space-time needs calls for an integrated approach to the physical design of urban space and to the organization of cities. In the last two decades the body of practical and theoretical work on urban space-time topics has grown substantially. The book offers a state of the art overview of the theoretical reasoning, the development of new analytical tools, and practical experience of the space-time design of public cities in major Europea...

  6. Time and Space in Digital Game Storytelling

    Directory of Open Access Journals (Sweden)

    Huaxin Wei

    2010-01-01

    Full Text Available The design and representation of time and space are important in any narrative form. Not surprisingly there is an extensive literature on specific considerations of space or time in game design. However, there is less attention to more systematic analyses that examine both of these key factors—including their dynamic interrelationship within game storytelling. This paper adapts critical frameworks of narrative space and narrative time drawn from other media and demonstrates their application in the understanding of game narratives. In order to do this we incorporate fundamental concepts from the field of game studies to build a game-specific framework for analyzing the design of narrative time and narrative space. The paper applies this framework against a case analysis in order to demonstrate its operation and utility. This process grounds the understanding of game narrative space and narrative time in broader traditions of narrative discourse and analysis.

  7. The space-time of dark-matter

    International Nuclear Information System (INIS)

    Dey, Dipanjan

    2015-01-01

    Dark matter is a hypothetical form of matter that cannot be seen, yet around 27% of our universe is made of it. Its distribution, its evolution from the early universe to the present, and its particle constituents are great unsolved mysteries of modern cosmology and astrophysics. In this talk I will introduce a special kind of space-time known as Bertrand Space-time (BST). I will show that this space-time interestingly exhibits some dark-matter properties, such as a flat velocity curve, the density profile of dark matter, the total mass of a dark-matter halo, and gravitational lensing; for these reasons we consider BST to be seeded by dark matter, or to be a space-time of dark matter. Finally, I will show, using the modified gravity formalism, the behaviour of the equation-of-state parameter of dark matter and the behaviour of Newton's gravitational constant in the vicinity of the singularity. (author)

  8. From Discrete Space-Time to Minkowski Space: Basic Mechanisms, Methods and Perspectives

    Science.gov (United States)

    Finster, Felix

    This survey article reviews recent results on fermion systems in discrete space-time and corresponding systems in Minkowski space. After a basic introduction to the discrete setting, we explain a mechanism of spontaneous symmetry breaking which leads to the emergence of a discrete causal structure. As methods to study the transition between discrete space-time and Minkowski space, we describe a lattice model for a static and isotropic space-time, outline the analysis of regularization tails of vacuum Dirac sea configurations, and introduce a Lorentz invariant action for the masses of the Dirac seas. We mention the method of the continuum limit, which allows one to analyze interacting systems. Open problems are discussed.

  9. Quantum space-time: a review

    International Nuclear Information System (INIS)

    Namsrai, K.

    1988-01-01

    The review presents systematically the results of studies which develop the idea of quantum properties of space-time in the microworld or near exotic objects (black holes, magnetic monopoles and others). On the basis of this idea the equations of motion of nonrelativistic and relativistic particles are studied. It is shown that introducing the concept of quantum space-time at small distances (or near superdense matter) leads to an additional force giving rise to spiral-like behaviour of a particle along its classical trajectory. The given method is generalized to nonrelativistic quantum mechanics and to the motion of a particle in a gravitational field. In the latter case there appears to be an antigravitational effect in the motion of a particle, leading to different values of the free-fall time (at least for the gravitational field of exotic objects) for particles with different masses. Gravitational consequences of quantum space-time and tensor structures of physical quantities are investigated in detail. From experimental data on tests of relativity and the anisotropy of inertia the estimate L ≤ 10⁻²² cm for the fundamental length is obtained. (author)

  10. A Note on the Problem of Proper Time in Weyl Space-Time

    Science.gov (United States)

    Avalos, R.; Dahia, F.; Romero, C.

    2018-02-01

    We discuss the question of whether or not a general Weyl structure is a suitable mathematical model of space-time. This is an issue that has been in debate since Weyl formulated his unified field theory for the first time. We do not present the discussion from the point of view of a particular unification theory, but instead from a more general standpoint, in which the viability of such a structure as a model of space-time is investigated. Our starting point is the well known axiomatic approach to space-time given by Ehlers, Pirani and Schild (EPS). In this framework, we carry out an exhaustive analysis of what is required for a consistent definition of proper time and show that such a definition leads to the prediction of the so-called "second clock effect". We take the view that if, based on experience, we were to reject space-time models predicting this effect, this could be incorporated as the last axiom in the EPS approach. Finally, we provide a proof that, in this case, we are led to a Weyl integrable space-time as the most general structure that would be suitable to model space-time.

  11. Time takes space: selective effects of multitasking on concurrent spatial processing.

    Science.gov (United States)

    Mäntylä, Timo; Coni, Valentina; Kubik, Veit; Todorov, Ivo; Del Missier, Fabio

    2017-08-01

    Many everyday activities require coordination and monitoring of complex relations of future goals and deadlines. Cognitive offloading may provide an efficient strategy for reducing control demands by representing future goals and deadlines as a pattern of spatial relations. We tested the hypothesis that multiple-task monitoring involves time-to-space transformational processes, and that these spatial effects are selective with greater demands on coordinate (metric) than categorical (nonmetric) spatial relation processing. Participants completed a multitasking session in which they monitored four series of deadlines, running on different time scales, while making concurrent coordinate or categorical spatial judgments. We expected and found that multitasking taxes concurrent coordinate, but not categorical, spatial processing. Furthermore, males showed a better multitasking performance than females. These findings provide novel experimental evidence for the hypothesis that efficient multitasking involves metric relational processing.

  12. Axiomatics of uniform space-time models

    International Nuclear Information System (INIS)

    Levichev, A.V.

    1983-01-01

    The mathematical statement of the space-time axiomatics of the special theory of relativity is given; it postulates that the space-time M is a connected Hausdorff, locally compact, four-dimensional topological space with a given order. The following theorem is proved: if the invariant order in the four-dimensional group M is given by a semi-group P whose contingency K contains inner points, then M is commutative. The analogous theorem holds for groups of dimension two and three

  13. Black Hole Space-time In Dark Matter Halo

    OpenAIRE

    Xu, Zhaoyi; Hou, Xian; Gong, Xiaobo; Wang, Jiancheng

    2018-01-01

    For the first time, we obtain the analytical form of the black hole space-time metric in a dark matter halo for the stationary situation. Using the relation between the rotation velocity (in the equatorial plane) and the spherically symmetric space-time metric coefficient, we obtain the space-time metric for pure dark matter. By considering the dark matter halo in spherically symmetric space-time as part of the energy-momentum tensors in the Einstein field equation, we then obtain the spherical symmetr...

  14. Energy-efficient fault tolerance in multiprocessor real-time systems

    Science.gov (United States)

    Guo, Yifeng

    The recent progress in multiprocessor/multicore systems has important implications for real-time system design and operation. From vehicle navigation to space applications as well as industrial control systems, the trend is to deploy multiple processors in real-time systems: systems with 4 -- 8 processors are common, and it is expected that many-core systems with dozens of processing cores will be available in the near future. For such systems, in addition to the general temporal requirements common to all real-time systems, two additional operational objectives are seen as critical: energy efficiency and fault tolerance. An intriguing dimension of the problem is that energy efficiency and fault tolerance are typically conflicting objectives, due to the fact that tolerating faults (e.g., permanent/transient) often requires extra resources with high energy consumption potential. In this dissertation, various techniques for energy-efficient fault tolerance in multiprocessor real-time systems have been investigated. First, the Reliability-Aware Power Management (RAPM) framework, which can preserve the system reliability with respect to transient faults when Dynamic Voltage Scaling (DVS) is applied for energy savings, is extended to support parallel real-time applications with precedence constraints. Next, the traditional Standby-Sparing (SS) technique for dual-processor systems, which takes both transient and permanent faults into consideration while saving energy, is generalized to support multiprocessor systems with an arbitrary number of identical processors. Observing the inefficient usage of slack time in the SS technique, a Preference-Oriented Scheduling Framework is designed to address the problem where tasks are given preferences for being executed as soon as possible (ASAP) or as late as possible (ALAP). A preference-oriented earliest deadline (POED) scheduler is proposed and its application in multiprocessor systems for energy-efficient fault tolerance is
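The earliest-deadline-first rule at the core of a POED-style scheduler can be sketched in a few lines. The task set, unit-time slots, and bare EDF policy below are illustrative assumptions, not the dissertation's actual algorithm (which adds ASAP/ALAP preferences and energy management):

```python
# Minimal unit-time EDF simulation. Task set is hypothetical.

def edf_schedule(tasks, horizon):
    """Simulate EDF on periodic tasks given as (name, period, wcet)."""
    remaining = {}  # task name -> work left in its current job
    deadline = {}   # task name -> absolute deadline of its current job
    timeline = []
    for t in range(horizon):
        for name, period, wcet in tasks:
            if t % period == 0:          # new job released at t
                remaining[name] = wcet
                deadline[name] = t + period
        ready = [n for n in remaining if remaining[n] > 0]
        if ready:
            run = min(ready, key=lambda n: deadline[n])  # earliest deadline
            remaining[run] -= 1
            timeline.append(run)
        else:
            timeline.append(None)        # idle slot (usable as slack)
    return timeline

timeline = edf_schedule([("A", 4, 1), ("B", 6, 2)], 12)
```

The `None` slots are exactly the slack an SS or POED scheme would exploit for sparing or energy savings.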

  15. Relativistic positioning in Schwarzschild space-time

    International Nuclear Information System (INIS)

    Puchades, Neus; Sáez, Diego

    2015-01-01

    In the Schwarzschild space-time created by an idealized static spherically symmetric Earth, two approaches based on relativistic positioning may be used to estimate the user position from the proper times broadcast by four satellites. In the first approach, satellites move in the Schwarzschild space-time and the photons emitted by the satellites follow null geodesics of the Minkowski space-time asymptotic to the Schwarzschild geometry. This assumption leads to positioning errors, since the photon world lines are not geodesics of any Minkowski geometry. In the second approach, the most coherent one, satellites and photons move in the Schwarzschild space-time. This approach is first order in the dimensionless parameter GM/R (with the speed of light c=1). The two approaches give different inertial coordinates for a given user. The differences are estimated and appropriately represented for users located inside a large region surrounding the Earth. The resulting values (errors) are small enough to justify the use of the first approach, which is the simplest and most manageable one. The satellite evolution mimics that of the GALILEO global navigation satellite system. (paper)

  16. Efficient determination of the Markovian time-evolution towards a steady-state of a complex open quantum system

    Science.gov (United States)

    Jonsson, Thorsteinn H.; Manolescu, Andrei; Goan, Hsi-Sheng; Abdullah, Nzar Rauf; Sitek, Anna; Tang, Chi-Shung; Gudmundsson, Vidar

    2017-11-01

    Master equations are commonly used to describe the time evolution of open systems. We introduce a general, computationally efficient method for calculating a Markovian solution of the Nakajima-Zwanzig generalized master equation. We do so for time-dependent transport of interacting electrons through a complex nanoscale system in a photon cavity. The central system, described by 120 many-body states in a Fock space, is weakly coupled to the external leads. The efficiency of the approach allows us to place the bias window defined by the external leads high into the many-body spectrum of the cavity photon-dressed states of the central system, revealing a cascade of intermediate transitions as the system relaxes to a steady state. The very diverse relaxation times present in the open system, reflecting radiative or non-radiative transitions, require information about the time evolution through many orders of magnitude. In our approach, the generalized master equation is mapped from a many-body Fock space of states to a Liouville space of transitions. We show that this results in a linear equation which is solved exactly through an eigenvalue analysis, which supplies information on the steady state and the time evolution of the system.
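The mapping described above, from the master equation in Liouville space to a linear system solved by eigenvalue analysis, can be illustrated on a toy model. The two-level pure-decay generator below is an assumption made for illustration; the paper's system has 120 many-body states and a far richer Liouvillian:

```python
import numpy as np

# Toy two-level system with decay rate gamma, vectorized in "Liouville
# space" so that drho/dt = L @ rho_vec is an ordinary linear equation.
gamma = 1.0
# Basis ordering: rho_vec = [rho_00, rho_01, rho_10, rho_11] (0 = ground).
# Pure decay 1 -> 0: rho_11 decays at gamma, rho_00 gains it,
# coherences decay at gamma/2.
L = np.array([[0.0, 0.0, 0.0, gamma],
              [0.0, -gamma / 2, 0.0, 0.0],
              [0.0, 0.0, -gamma / 2, 0.0],
              [0.0, 0.0, 0.0, -gamma]])

# Eigenvalue analysis: the steady state spans the kernel (eigenvalue 0).
vals, vecs = np.linalg.eig(L)
k = np.argmin(np.abs(vals))
steady = np.real(vecs[:, k])
steady /= steady[0] + steady[3]          # normalize the trace to 1

def evolve(rho0, t):
    """Exact time evolution via the same eigenbasis: rho(t) = V e^{vals t} V^-1 rho0."""
    c = np.linalg.solve(vecs, rho0)
    return np.real(vecs @ (np.exp(vals * t) * c))

rho0 = np.array([0.0, 0.0, 0.0, 1.0])    # start fully excited
```

One diagonalization supplies both the steady state and the dynamics at any `t`, which is the practical payoff of the Liouville-space formulation.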

  17. Introducing the Dimensional Continuous Space-Time Theory

    International Nuclear Information System (INIS)

    Martini, Luiz Cesar

    2013-01-01

    This article is an introduction to a new theory. The name of the theory is justified by its dimensional description of the continuous space-time of matter, energy and empty space, which gathers all the real things that exist in the universe. The theory presents itself as the consolidation of the classical, quantum and relativity theories. A basic equation that describes the formation of the Universe, relating time, space, matter, energy and movement, is deduced. The four fundamental physical constants (the speed of light in empty space, the gravitational constant, Boltzmann's constant and Planck's constant), the masses of the fundamental particles, the electrical charges, the energies, empty space and time are also obtained from this basic equation. This theory provides a new vision of the Big Bang and of how the galaxies, stars, black holes and planets were formed. Based on it, it is possible to have a perfect comprehension of the wave-particle duality, which is an intrinsic characteristic of matter and energy. It makes it possible to comprehend the formation of orbitals and to derive the equations of atomic orbits. It presents a singular comprehension of the relativity of mass, length and time. It is demonstrated that the continuous space-time is tridimensional, inelastic and temporally instantaneous, eliminating the possibility of spatial folds, slot spaces, wormholes, time travel and parallel universes. It is shown that many concepts, like dark matter and the strong forces that hypothetically keep the cohesion of the atomic nucleons, are without sense.

  18. Space-time and matter in 'prephysics'

    International Nuclear Information System (INIS)

    Terazawa, Hidezumi.

    1985-05-01

    Many fundamental questions concerning space-time and matter are asked and answered in ''prephysics'', a new line of physics (or philosophy, but not metaphysics). They include the following: 1) ''Why is our space-time of 4 dimensions?'', 2) ''What is the ultimate form of matter?'' and 3) ''How was our universe created?''. (author)

  19. Deep fracturing of granite bodies. Literature survey, geostructural and geostatistic investigations

    International Nuclear Information System (INIS)

    Bles, J.L.; Blanchin, R.

    1986-01-01

    This report deals with investigations about deep fracturing of granite bodies, which were performed within two cost-sharing contracts between the Commission of the European Communities, the Commissariat a l'Energie Atomique and the Bureau de Recherches Geologiques et Minieres. The aim of this work was to study the evolution of fracturing in granite from the surface to larger depths, so that guidelines can be identified in order to extrapolate, at depth, the data obtained from surface investigations. These guidelines could eventually be used for feasibility studies about radioactive waste disposal. The results of structural and geostatistic investigations about the St. Sylvestre granite, as well as the literature survey about fractures encountered in two long Alpine galleries (Mont-Blanc tunnel and Arc-Isere water gallery), in the 1000 m deep borehole at Auriat, and in the Bassies granite body (Pyrenees) are presented. These results show that, for radioactive waste disposal feasibility studies: 1. The deep state of fracturing in a granite body can be estimated from results obtained at the surface; 2. Studying only the large fault network would be insufficient, both for surface investigations and for studies in deep boreholes and/or in underground galleries; 3. It is necessary to study orientations and frequencies of small fractures, so that structural mapping and statistical/geostatistical methods can be used in order to identify zones of higher and lower fracturing

  20. On static and radiative space-times

    International Nuclear Information System (INIS)

    Friedrich, H.

    1988-01-01

    The conformal constraint equations on space-like hypersurfaces are discussed near points which represent either time-like or spatial infinity for an asymptotically flat solution of Einstein's vacuum field equations. In the case of time-like infinity a certain 'radiativity condition' is derived which must be satisfied by the data at that point. The case of space-like infinity is analysed in detail for static space-times with non-vanishing mass. It is shown that the conformal structure implied here on a slice of constant Killing time, which extends analytically through infinity, satisfies at spatial infinity the radiativity condition. Thus to any static solution exists a certain 'radiative solution' which has a smooth structure at past null infinity and is regular at past time-like infinity. A characterization of these solutions by their 'free data' is given and non-symmetry properties are discussed. (orig.)

  1. The Dirac equation in the Lobachevsky space-time

    International Nuclear Information System (INIS)

    Paramonov, D.V.; Paramonova, N.N.; Shavokhina, N.S.

    2000-01-01

    The product of the Lobachevsky space and the time axis is termed the Lobachevsky space-time. The Lobachevsky space is considered as a hyperboloid's sheet in the four-dimensional pseudo-Euclidean space. The Dirac-Fock-Ivanenko equation is reduced to the Dirac equation in two special forms by passing from Lame basis in the Lobachevsky space to the Cartesian basis in the enveloping pseudo-Euclidean space

  2. Space-Efficient Re-Pair Compression

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Prezza, Nicola

    2017-01-01

    Re-Pair [5] is an effective grammar-based compression scheme achieving strong compression rates in practice. Let n, σ, and d be the text length, alphabet size, and dictionary size of the final grammar, respectively. In their original paper, the authors show how to compute the Re-Pair grammar in expected linear time and 5n + 4σ² + 4d + √n words of working space on top of the text. In this work, we propose two algorithms improving on the space of their original solution. Our model assumes a memory word of ⌈log₂ n⌉ bits and a re-writable input text composed of n such words. Our first algorithm runs…
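The Re-Pair scheme itself is simple to state: repeatedly replace the most frequent adjacent pair of symbols with a fresh nonterminal until no pair occurs twice. The quadratic-time sketch below illustrates only the grammar construction; it is not the linear-time, space-efficient algorithm the abstract describes:

```python
from collections import Counter

def repair(text):
    """Greedy Re-Pair: replace the most frequent adjacent pair with a
    fresh nonterminal until every pair is unique. Teaching sketch only."""
    seq = list(text)
    rules = {}                     # nonterminal -> (left, right)
    next_id = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:
            break
        nt = f"R{next_id}"
        next_id += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):        # rewrite the sequence left to right
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

def expand(sym, rules):
    """Decompress one symbol by recursively expanding the grammar rules."""
    if sym not in rules:
        return sym
    left, right = rules[sym]
    return expand(left, rules) + expand(right, rules)

seq, rules = repair("abababab")
restored = "".join(expand(s, rules) for s in seq)
```

On `"abababab"` this yields the rules R0 → ab and R1 → R0 R0, compressing the text to the two-symbol sequence R1 R1.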

  3. About the coordinate time for photons in Lifshitz space-times

    International Nuclear Information System (INIS)

    Villanueva, J.R.; Vasquez, Yerko

    2013-01-01

    In this paper we studied the behavior of radial photons from the point of view of the coordinate time in (asymptotically) Lifshitz space-times, and we found a generalization to the result reported in previous works by Cruz et al. (Eur. Phys. J. C 73:7, 2013), Olivares et al. (Astrophys. Space Sci. 347:83-89, 2013), and Olivares et al. arXiv:1306.5285. We demonstrate that all asymptotically Lifshitz space-times characterized by a lapse function f(r) which tends to one when r→∞ present the same behavior, in the sense that an external observer will see that photons arrive at spatial infinity in a finite coordinate time. Also, we show that radial photons in the proper system cannot determine the presence of the black hole in the region r₊ < r < ∞, because the resulting proper time is independent of the lapse function f(r). (orig.)

  4. Effective use of multibeam antenna and space-time multiple access technology in modern mobile communication systems

    OpenAIRE

    Moskalets, N. V.

    2015-01-01

    The possibility of efficient use of the radio-frequency spectrum, and the corresponding increase in the productivity of a mobile communication system with space-time multiple access obtained by using a multibeam base-station antenna, is considered.

  5. The Space-Time Topography of English Speakers

    Science.gov (United States)

    Duman, Steve

    2016-01-01

    English speakers talk and think about Time in terms of physical space. The past is behind us, and the future is in front of us. In this way, we "map" space onto Time. This dissertation addresses the specificity of this physical space, or its topography. Inspired by languages like Yupno (Nunez, et al., 2012) and Bamileke-Dschang (Hyman,…

  6. Physical models on discrete space and time

    International Nuclear Information System (INIS)

    Lorente, M.

    1986-01-01

    The idea of space and time quantum operators with a discrete spectrum has been proposed frequently since the discovery that some physical quantities exhibit measured values that are multiples of fundamental units. This paper first reviews a number of these physical models. They are: the method of finite elements proposed by Bender et al.; the quantum field theory model on discrete space-time proposed by Yamamoto; the finite-dimensional quantum mechanics approach proposed by Santhanam et al.; the idea of space-time as lattices of n-simplices proposed by Kaplunovsky et al.; and the theory of elementary processes proposed by Weizsaecker and his colleagues. The paper then presents a model proposed by the authors and based on the (n+1)-dimensional space-time lattice, where fundamental entities interact among themselves 1 to 2n in order to build up an n-dimensional cubic lattice as a ground field where the physical interactions take place. The space-time coordinates are nothing more than the labelling of the ground field and take only discrete values. 11 references

  7. Ore reserve evaluation, through geostatistical methods, in sector C-09, Pocos de Caldas, MG-Brazil

    International Nuclear Information System (INIS)

    Guerra, P.A.G.; Censi, A.C.; Marques, J.P.M.; Huijbregts, Ch.

    1978-01-01

    In sector C-09, Pocos de Caldas, in the state of Minas Gerais, geostatistical techniques have been used to evaluate the tonnage of U₃O₈ and associated minerals and to delimit ore from sterile areas. The calculation of reserves was based on borehole information, including the results of chemical and/or radiometric analyses. Two- and three-dimensional evaluations were made following the existing geological models. Initially, the evaluation was based on chemical analyses using the more classical geostatistical technique of kriging. This was followed by a second evaluation using the more recent technique of co-kriging, which permitted the incorporation of radiometric information in the calculations. The correlation between ore grade and radiometry was studied using the method of cross-covariance. Following restrictions imposed by mining considerations, a probabilistic selection was made of blocks of appropriate dimensions so as to evaluate the grade-tonnage curve for each panel. (Author) [pt
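Kriging, the estimation technique named above, predicts the value at an unsampled location as a weighted average of nearby samples, with weights chosen from the variogram so that the estimator is unbiased and has minimum variance. A minimal 1-D ordinary-kriging sketch with an assumed linear variogram follows (the coordinates and grades are illustrative, not sector C-09 data):

```python
import numpy as np

def ordinary_kriging(xs, zs, x0, slope=1.0):
    """1-D ordinary kriging with a linear variogram gamma(h) = slope*|h|.
    Returns the estimate at x0. Illustrative sketch only."""
    n = len(xs)
    gamma = lambda h: slope * np.abs(h)
    # Kriging system: [[Gamma, 1], [1^T, 0]] [w; mu] = [gamma0; 1],
    # where mu is the Lagrange multiplier enforcing sum(w) = 1.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(np.subtract.outer(xs, xs))
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.array(xs) - x0)
    w = np.linalg.solve(A, b)[:n]
    return float(w @ zs)

# Kriging is an exact interpolator: estimating at a sampled point
# returns the sampled value.
est = ordinary_kriging([0.0, 1.0, 3.0], [2.0, 4.0, 1.0], 1.0)
```

With a linear variogram in one dimension the interior estimates reduce to linear interpolation between the bracketing samples, a useful sanity check on the solver.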

  8. Statistical geometry and space-time

    International Nuclear Information System (INIS)

    Grauert, H.

    1976-01-01

    In this paper I try to construct a mathematical tool by which the full structure of Lorentz geometry can be given to space-time, and beyond that the background - to speak pictorially, the subsoil - for electromagnetic and matter waves, too. The tool could be useful to describe the connections between various particles, electromagnetism and gravity, and to compute observables which were not theoretically related up to now. Moreover, the tool is simpler than the Riemann tensor: briefly speaking, it consists just of a set S of line segments in space-time. (orig.) [de

  9. Spinor Field Nonlinearity and Space-Time Geometry

    Science.gov (United States)

    Saha, Bijan

    2018-03-01

    Within the scope of Bianchi type VI, VI₀, V, III, I, LRS BI and FRW cosmological models we have studied the role of the nonlinear spinor field in the evolution of the Universe and of the spinor field itself. It was found that, due to the presence of non-trivial non-diagonal components of the energy-momentum tensor of the spinor field in the anisotropic space-time, some severe restrictions occur both on the metric functions and on the components of the spinor field. In this report we have considered a polynomial nonlinearity which is a function of invariants constructed from the bilinear spinor forms. It is found that in the case of a Bianchi type-VI space-time, depending on the sign of the self-coupling constants, the model allows either late-time acceleration or an oscillatory mode of evolution. In the case of a Bianchi type-VI₀ space-time, due to the specific behavior of the spinor field we have two different scenarios. In one case the invariants constructed from bilinear spinor forms become trivial, thus giving rise to a massless and linear spinor field Lagrangian. This case is equivalent to the vacuum solution of the Bianchi type-VI₀ space-time. The second case allows non-vanishing massive and nonlinear terms and, depending on the sign of the coupling constants, gives rise to an accelerating mode of expansion or one that, after attaining some maximum value, contracts and ends in a big crunch, consequently generating a space-time singularity. In the case of a Bianchi type-V model there occur two possibilities. In one case we found that the metric functions are similar to each other. In this case the Universe expands with acceleration if the self-coupling constant is taken to be positive, whereas a negative coupling constant gives rise to a cyclic or periodic solution. In the second case the spinor mass and the spinor field nonlinearity vanish and the Universe expands linearly in time. In the case of a Bianchi type-III model the space-time remains locally rotationally symmetric all the time

  10. Feynman propagator and space-time transformation technique

    International Nuclear Information System (INIS)

    Nassar, A.B.

    1987-01-01

    We evaluate the exact propagator for the time-dependent two-dimensional charged harmonic oscillator in a time-varying magnetic field, by taking direct recourse to the corresponding Schroedinger equation. Through the usage of an appropriate space-time transformation, we show that such a propagator can be obtained from the free propagator in the new space-time coordinate system. (orig.)

  11. A variable timestep generalized Runge-Kutta method for the numerical integration of the space-time diffusion equations

    International Nuclear Information System (INIS)

    Aviles, B.N.; Sutton, T.M.; Kelly, D.J. III.

    1991-09-01

    A generalized Runge-Kutta method has been employed in the numerical integration of the stiff space-time diffusion equations. The method is fourth-order accurate, using an embedded third-order solution to arrive at an estimate of the truncation error for automatic timestep control. The efficiency of the Runge-Kutta method is enhanced by a block-factorization technique that exploits the sparse structure of the matrix system resulting from the space and energy discretized form of the time-dependent neutron diffusion equations. Preliminary numerical evaluation using a one-dimensional finite difference code shows the sparse matrix implementation of the generalized Runge-Kutta method to be highly accurate and efficient when compared to an optimized iterative theta method. 12 refs., 5 figs., 4 tabs
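The embedded-pair idea described above, using a lower-order companion solution to estimate the truncation error and control the timestep, can be sketched compactly. The example below uses the Bogacki-Shampine 3(2) pair on a scalar decay equation as a stand-in for the report's fourth-order method with embedded third-order solution:

```python
import math

def rk23_adaptive(f, t, y, t_end, h=0.1, tol=1e-6):
    """Bogacki-Shampine 3(2) embedded pair with automatic step control.
    Illustrative stand-in for the report's 4th/3rd-order pair."""
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + 3 * h / 4, y + 3 * h / 4 * k2)
        y3 = y + h * (2 * k1 / 9 + k2 / 3 + 4 * k3 / 9)   # 3rd-order step
        k4 = f(t + h, y3)
        y2 = y + h * (7 * k1 / 24 + k2 / 4 + k3 / 3 + k4 / 8)  # embedded 2nd order
        err = abs(y3 - y2)          # truncation-error estimate
        if err <= tol:              # accept the step
            t, y = t + h, y3
        # grow or shrink the step from the error estimate (rejected
        # steps are retried with the smaller h)
        h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** (1 / 3)))
    return y

# dy/dt = -y, y(0) = 1, integrated to t = 1
y1 = rk23_adaptive(lambda t, y: -y, 0.0, 1.0, 1.0)
```

The accept/reject logic is the essence of the "automatic timestep control" in the abstract; the production method additionally exploits the sparse block structure of the diffusion matrices.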

  12. A Geostatistical Approach to Indoor Surface Sampling Strategies

    DEFF Research Database (Denmark)

    Schneider, Thomas; Petersen, Ole Holm; Nielsen, Allan Aasbjerg

    1990-01-01

    Particulate surface contamination is of concern in production industries such as food processing, aerospace, electronics and semiconductor manufacturing. There is also an increased awareness that surface contamination should be monitored in industrial hygiene surveys. A conceptual and theoretical framework for designing sampling strategies is thus developed. The distribution and spatial correlation of surface contamination can be characterized using concepts from geostatistical science, where spatial applications of statistics are most developed. The theory is summarized, and particulate surface contamination, sampled from small areas on a table, has been used to illustrate the method. First, the spatial correlation is modelled and the parameters estimated from the data. Next, it is shown how the contamination at positions not measured can be estimated with kriging, a minimum mean square error method…
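The first step named above, modelling the spatial correlation, typically starts from the classical (Matheron) empirical variogram. A minimal sketch with made-up transect data (not the table-sampling measurements of the study):

```python
import numpy as np

def empirical_variogram(coords, values, bins):
    """Classical (Matheron) estimator: for each lag bin, average
    0.5*(z_i - z_j)^2 over all pairs whose separation falls in the bin."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    n = len(values)
    sums = np.zeros(len(bins) - 1)
    counts = np.zeros(len(bins) - 1)
    for i in range(n):
        for j in range(i + 1, n):
            h = np.linalg.norm(coords[i] - coords[j])
            k = np.searchsorted(bins, h, side="right") - 1
            if 0 <= k < len(sums):
                sums[k] += 0.5 * (values[i] - values[j]) ** 2
                counts[k] += 1
    # Bins with no pairs are reported as NaN.
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

# Evenly spaced 1-D transect, alternating low/high values.
g = empirical_variogram([[0], [1], [2], [3]],
                        [0.0, 1.0, 0.0, 1.0],
                        [0, 1.5, 2.5, 3.5])
```

A variogram model (spherical, exponential, linear, ...) fitted to these binned values then supplies the parameters that the kriging step consumes.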

  13. Space-Time Disarray and Visual Awareness

    Directory of Open Access Journals (Sweden)

    Jan Koenderink

    2012-04-01

    Full Text Available Local space-time scrambling of optical data leads to violent jerks and dislocations. On masking these, visual awareness of the scene becomes cohesive, with dislocations discounted as amodally occluding foreground. Such cohesive space-time of awareness is technically illusory, because the ground truth is jumbled whereas awareness is coherent. Apparently the visual field is a construction rather than a (veridical) perception.

  14. About the coordinate time for photons in Lifshitz space-times

    Energy Technology Data Exchange (ETDEWEB)

    Villanueva, J.R. [Universidad de Valparaiso, Departamento de Fisica y Astronomia, Facultad de Ciencias, Valparaiso (Chile); Centro de Astrofisica de Valparaiso, Valparaiso (Chile); Vasquez, Yerko [Universidad de La Frontera, Departamento de Ciencias Fisicas, Facultad de Ingenieria, Ciencias y Administracion, Temuco (Chile); Universidad de La Serena, Departamento de Fisicas, Facultad de Ciencias, La Serena (Chile)

    2013-10-15

    In this paper we studied the behavior of radial photons from the point of view of the coordinate time in (asymptotically) Lifshitz space-times, and we found a generalization to the result reported in previous works by Cruz et al. (Eur. Phys. J. C 73:7, 2013), Olivares et al. (Astrophys. Space Sci. 347:83-89, 2013), and Olivares et al. arXiv:1306.5285. We demonstrate that all asymptotically Lifshitz space-times characterized by a lapse function f(r) which tends to one when r → ∞ present the same behavior, in the sense that an external observer will see that photons arrive at spatial infinity in a finite coordinate time. Also, we show that radial photons in the proper system cannot determine the presence of the black hole in the region r₊ < r < ∞, because the proper time as a result is independent of the lapse function f(r). (orig.)

  15. Quantum space-time and gravitational consequences

    International Nuclear Information System (INIS)

    Namsrai, K.

    1986-01-01

    Relativistic particle dynamics and the basic physical quantities of the general theory of gravity are reconstructed from a quantum space-time point of view. An additional force caused by quantum space-time appears in the equation of particle motion, giving rise to a reformulation of the equivalence principle up to values of O(L²), where L is the fundamental length. It turns out that quantum space-time leads to quantization of gravity, i.e. the metric tensor g_{uv}(ζ) becomes operator-valued and is not commutative at different points x^μ and y^μ in usual space-time on a large scale, and its commutator, depending on the ''vielbein'' field (gauge-like graviton field), is proportional to L² multiplied by a translation-invariant wave function propagated between the points x^μ and y^μ. In the given scheme, there appears to be an antigravitational effect in the motion of a particle in the gravitational force. This effect depends on the value of the particle mass; when a particle is heavy its free-fall time is long compared to that of a lightweight particle. The problem of the change of time scale and the anisotropy of inertia are discussed. From experimental data from testing of the latter effect it follows that L ≤ 10⁻²² cm

  16. The Texas space flight liability act and efficient regulation for the private commercial space flight era

    Science.gov (United States)

    Johnson, Christopher D.

    2013-12-01

    In the spring of 2011, the American state of Texas passed into law an act limiting the liability of commercial space flight entities. Under it, those companies would not be liable for space flight participant injuries, except in cases of intentional injury or injury proximately caused by the company's gross negligence. An analysis within the framework of international and national space law, but especially informed by the academic discipline of law and economics, discusses the incentives of all relevant parties and attempts to understand whether the law is economically "efficient" (allocating resources so as to yield maximum utility), and suited to further the development of the fledgling commercial suborbital tourism industry. Insights into the Texas law are applicable to other states hoping to foster commercial space tourism and considering space tourism related legislation.

  17. Space time problems and applications

    DEFF Research Database (Denmark)

    Dethlefsen, Claus

    models, cubic spline models and structural time series models. The development of state space theory has interacted with the development of other statistical disciplines. In the first part of the Thesis, we present the theory of state space models, including Gaussian state space models, approximative analysis of non-Gaussian models, simulation based techniques and model diagnostics. The second part of the Thesis considers Markov random field models. These are spatial models applicable in e.g. disease mapping and in agricultural experiments. Recently, the Gaussian Markov random field models were … techniques with importance sampling. The third part of the Thesis contains applications of the theory. First, a univariate time series of count data is analysed. Then, a spatial model is used to compare wheat yields. Weed count data in connection with a project in precision farming is analysed using

  18. High Efficiency Quantum Dot III-V Multijunction Solar Cell for Space Power, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — We are proposing to utilize quantum dots to develop a super high-efficiency multijunction III-V solar cell for space. In metamorphic triple junction space solar...

  19. Evaluation of stationary and non-stationary geostatistical models for inferring hydraulic conductivity values at Aespoe

    International Nuclear Information System (INIS)

    La Pointe, P.R.

    1994-11-01

    This report describes the comparison of stationary and non-stationary geostatistical models for the purpose of inferring block-scale hydraulic conductivity values from packer tests at Aespoe. The comparison between models is made through the evaluation of cross-validation statistics for three experimental designs. The first experiment consisted of a 'Delete-1' test previously used at Finnsjoen. The second test consisted of 'Delete-10%' and the third test was a 'Delete-50%' test. Preliminary data analysis showed that the 3 m and 30 m packer test data can be treated as a sample from a single population for the purposes of geostatistical analyses. Analysis of the 3 m data does not indicate that there are any systematic statistical changes with depth, rock type, fracture zone vs non-fracture zone or other mappable factor. Directional variograms are ambiguous to interpret due to the clustered nature of the data, but do not show any obvious anisotropy that should be accounted for in geostatistical analysis. Stationary analysis suggested that there exists a sizeable spatially uncorrelated component ('Nugget Effect') in the 3 m data, on the order of 60% of the observed variance for the various models fitted. Four different nested models were automatically fit to the data. Results for all models in terms of cross-validation statistics were very similar for the first set of validation tests. Non-stationary analysis established that both the order of drift and the order of the intrinsic random functions is low. This study also suggests that conventional cross-validation studies and automatic variogram fitting are not necessarily evaluating how well a model will infer block scale hydraulic conductivity values. 20 refs, 20 figs, 14 tabs
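The 'Delete-1' test used in the report is ordinary leave-one-out cross-validation: each observation is removed in turn and re-estimated from the remaining data, and the errors are summarized. The sketch below uses an inverse-distance predictor as a stand-in for the kriging estimators actually cross-validated at Aespoe; the data values are illustrative:

```python
import numpy as np

def delete_one_stats(xs, zs, predict):
    """'Delete-1' cross-validation: drop each observation in turn,
    re-estimate it from the rest, and return (mean error, RMSE)."""
    errors = []
    for i in range(len(zs)):
        xs_i = [x for j, x in enumerate(xs) if j != i]
        zs_i = [z for j, z in enumerate(zs) if j != i]
        errors.append(predict(xs_i, zs_i, xs[i]) - zs[i])
    errors = np.array(errors)
    return errors.mean(), float(np.sqrt((errors ** 2).mean()))

def idw(xs, zs, x0, p=2):
    """Inverse-distance weighting, standing in for a kriging estimator."""
    w = np.array([1.0 / max(abs(x - x0), 1e-12) ** p for x in xs])
    return float(w @ np.asarray(zs) / w.sum())

# On a constant field every predictor should score zero bias and RMSE.
bias, rmse = delete_one_stats([0.0, 1.0, 2.0, 3.0],
                              [1.0, 1.0, 1.0, 1.0], idw)
```

Comparing these bias/RMSE summaries across competing variogram or drift models is exactly how the report's Delete-1, Delete-10% and Delete-50% experiments rank the stationary and non-stationary alternatives.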

  20. Space-time structure

    CERN Document Server

    Schrödinger, Erwin

    1985-01-01

    In response to repeated requests this classic book on space-time structure by Professor Erwin Schrödinger is now available in the Cambridge Science Classics series. First published in 1950, and reprinted in 1954 and 1960, this lucid and profound exposition of Einstein's 1915 theory of gravitation still provides valuable reading for students and research workers in the field.

  1. Collision-free gases in spatially homogeneous space-times

    International Nuclear Information System (INIS)

    Maartens, R.; Maharaj, S.D.

    1985-01-01

    The kinematical and dynamical properties of one-component collision-free gases in spatially homogeneous, locally rotationally symmetric (LRS) space-times are analyzed. Following Ray and Zimmerman [Nuovo Cimento B 42, 183 (1977)], it is assumed that the distribution function f of the gas inherits the symmetry of space-time, in order to construct solutions of Liouville's equation. The redundancy of their further assumption that f be based on Killing vector constants of the motion is shown. The Ray and Zimmerman results for Kantowski--Sachs space-time are extended to all spatially homogeneous LRS space-times. It is shown that in all these space-times the kinematic average four-velocity u^i can be tilted relative to the homogeneous hypersurfaces. This differs from the perfect fluid case, in which only one space-time admits tilted u^i, as shown by King and Ellis [Commun. Math. Phys. 31, 209 (1973)]. As a consequence, it is shown that all space-times admit nonzero acceleration and heat flow, while a subclass admits nonzero vorticity. The stress π_{ij} is proportional to the shear σ_{ij} by virtue of the invariance of the distribution function. The evolution of tilt and the existence of perfect fluid solutions are also discussed

  2. Space-Time Turbo Trellis Coded Modulation for Wireless Data Communications

    Directory of Open Access Journals (Sweden)

    Welly Firmanto

    2002-05-01

    Full Text Available This paper presents the design of space-time turbo trellis coded modulation (ST turbo TCM) for improving the bandwidth efficiency and the reliability of future wireless data networks. We present new recursive space-time trellis codes (STTC) which outperform the feedforward STTC proposed by Tarokh et al. (1998) and Baro et al. (2000) on slow and fast fading channels. A substantial improvement in performance can be obtained by constructing ST turbo TCM consisting of concatenated recursive STTC, decoded by an iterative decoding algorithm. The proposed recursive STTC are used as constituent codes in this scheme. They have been designed to satisfy the design criteria for STTC on slow and fast fading channels, derived for systems in which the product of the numbers of transmit and receive antennas is larger than 3. The proposed ST turbo TCM significantly outperforms the best known STTC on both slow and fast fading channels. The capacity of this scheme on fast fading channels is less than 3 dB away from the theoretical capacity bound for multi-input multi-output (MIMO) channels.

  3. Pavel Florensky on space and time

    Directory of Open Access Journals (Sweden)

    Case, Michael

    2015-01-01

    Full Text Available An investigation of the views on space and time of the Russian polymath Pavel Florensky (1882-1937. After a brief account of his life, I study Florensky’s conception of time in The Meaning of Idealism (1914, where he first confronts Einstein’s theory of special relativity, comparing it to Plato’s metaphor of the Cave and Goethe’s myth of the Mothers. Later, in his Analysis of spatiality and time, Florensky speaks of a person’s biography as a four-dimensional unity, in which the temporal coordinate is examined in sections. In On the Imaginaries in Geometry (1922, Florensky argues that the speed of light is not, as in Relativity, an absolute speed limit in the universe. When bodies approach and then surpass the speed of light, they are transformed into unextended, eternal Platonic forms. Beyond this point, time runs in reverse, effects precede their causes, and efficient causality is transformed into final or teleological causality, a concept on which Florensky elaborates in his Iconostasis. Florensky thus transformed the findings of Einsteinian relativity in order to make room for Plato’s intelligible Ideas, the Aristotelian distinction between a changing realm of earth and the immutable realm of the heavens, and the notion of teleology or final causation. His notion that man can approximate God’s vision of past, present and future all at once, as if from above, is reminiscent of Boethius’ ideas.

  4. Just in Time in Space or Space Based JIT

    Science.gov (United States)

    VanOrsdel, Kathleen G.

    1995-01-01

    Our satellite systems are mega-buck items. In today's cost conscious world, we need to reduce the overall costs of satellites if our space program is to survive. One way to accomplish this would be through on-orbit maintenance of parts on the orbiting craft. In order to accomplish maintenance at a low cost I advance the hypothesis of having parts and pieces (spares) waiting. Waiting in the sense of having something when you need it, or just-in-time. The JIT concept can actually be applied to space processes. Its definition has to be changed just enough to encompass the needs of space. Our space engineers tell us which parts and pieces the satellite systems might be needing once in orbit. These items are stored in space for the time of need and can be ready when they are needed -- or Space Based JIT. When a system has a problem, the repair facility is near by and through human or robotics intervention, it can be brought back into service. Through a JIT process, overall system costs could be reduced as standardization of parts is built into satellite systems to facilitate reduced numbers of parts being stored. Launch costs will be contained as fewer spare pieces need to be included in the launch vehicle and the space program will continue to thrive even in this era of reduced budgets. The concept of using an orbiting parts servicer and human or robotics maintenance/repair capabilities would extend satellite life-cycle and reduce system replacement launches. Reductions of this nature throughout the satellite program result in cost savings.

  5. Minkowski space-time is locally extendible

    International Nuclear Information System (INIS)

    Beem, J.K.

    1980-01-01

    An example of a real analytic local extension of Minkowski space-time is given in this note. This local extension is not across points of the b-boundary since Minkowski space-time has an empty b-boundary. Furthermore, this local extension is not across points of the causal boundary. The example indicates that the concept of local inextendibility may be less useful than originally envisioned. (orig.)

  6. On discrete models of space-time

    International Nuclear Information System (INIS)

    Horzela, A.; Kempczynski, J.; Kapuscik, E.; Georgia Univ., Athens, GA; Uzes, Ch.

    1992-02-01

    Analyzing the Einstein radiolocation method we come to the conclusion that results of any measurement of space-time coordinates should be expressed in terms of rational numbers. We show that this property is Lorentz invariant and may be used in the construction of discrete models of space-time different from the models of the lattice type constructed in the process of discretization of continuous models. (author)

  7. Coupling gravity, electromagnetism and space-time for space propulsion breakthroughs

    Science.gov (United States)

    Millis, Marc G.

    1994-01-01

    Spaceflight would be revolutionized if it were possible to propel a spacecraft without rockets using the coupling between gravity, electromagnetism, and space-time (hence called 'space coupling propulsion'). New theories and observations about the properties of space are emerging which offer new approaches to consider this breakthrough possibility. To guide the search, evaluation, and application of these emerging possibilities, a variety of hypothetical space coupling propulsion mechanisms are presented to highlight the issues that would have to be satisfied to enable such breakthroughs. A brief introduction to the emerging opportunities is also presented.

  8. Space-Time Discrete KPZ Equation

    Science.gov (United States)

    Cannizzaro, G.; Matetski, K.

    2018-03-01

    We study a general family of space-time discretizations of the KPZ equation and show that they converge to its solution. The approach we follow makes use of basic elements of the theory of regularity structures (Hairer in Invent Math 198(2):269-504, 2014) as well as its discrete counterpart (Hairer and Matetski in Discretizations of rough stochastic PDEs, 2015. arXiv:1511.06937). Since the discretization is in both space and time and we allow non-standard discretization for the product, the methods mentioned above have to be suitably modified in order to accommodate the structure of the models under study.

  9. Pre-Big Bang, space-time structure, asymptotic Universe. Spinorial space-time and a new approach to Friedmann-like equations

    Science.gov (United States)

    Gonzalez-Mestres, Luis

    2014-04-01

    Planck and other recent data in Cosmology and Particle Physics can open the way to controversial analyses concerning the early Universe and its possible ultimate origin. Alternatives to standard cosmology include pre-Big Bang approaches, new space-time geometries and new ultimate constituents of matter. Basic issues related to a possible new cosmology along these lines clearly deserve further exploration. The Planck collaboration reports an age of the Universe t close to 13.8 Gyr and a present ratio H between relative speeds and distances at cosmic scale around 67.3 km/s/Mpc. The product of these two measured quantities is then slightly below 1 (about 0.95), while it can be exactly 1 in the absence of matter and cosmological constant in patterns based on the spinorial space-time we have considered in previous papers. In this description of space-time, which we first suggested in 1996-97, the cosmic time t is given by the modulus of a SU(2) spinor and the Lundmark-Lemaître-Hubble (LLH) expansion law turns out to be of purely geometric origin, prior to any introduction of standard matter and relativity. Such a fundamental geometry, inspired by the role of half-integer spin in Particle Physics, may reflect an equilibrium between the dynamics of the ultimate constituents of matter and the deep structure of space and time. Taking into account the observed cosmic acceleration, the present situation suggests that the value of 1 can be a natural asymptotic limit for the product H t in the long-term evolution of our Universe, up to possible small corrections. In the presence of a spinorial space-time geometry, no ad hoc combination of dark matter and dark energy would in any case be needed to get an acceptable value of H and an evolution of the Universe compatible with observation. The use of a spinorial space-time naturally leads to unconventional properties for the space curvature term in Friedmann-like equations. It therefore suggests a major modification of the standard

  10. Evaluation of spatial variability of metal bioavailability in soils using geostatistics

    DEFF Research Database (Denmark)

    Owsianiak, Mikolaj; Hauschild, Michael Zwicky; Rosenbaum, Ralph K.

    2012-01-01

    Soil properties show significant spatial variability at local, regional and continental scales. This is a challenge for life cycle impact assessment (LCIA) of metals, because fate, bioavailability and effect factors are controlled by environmental chemistry and can vary orders of magnitude...... is performed using ArcGIS Geostatistical Analyst. Results show that BFs of copper span a range of 6 orders of magnitude, and have significant spatial variability at local and continental scales. The model nugget variance is significantly higher than zero, suggesting the presence of spatial variability...
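The nugget effect mentioned in this record can be made concrete: an empirical semivariogram estimates gamma(h) from squared differences of sample pairs, and a nugget extrapolated to h = 0 that stays well above zero signals short-range variability or measurement error. Below is a minimal sketch on synthetic data; the coordinates, values, and the crude two-bin nugget extrapolation are all hypothetical and are not taken from the study's bioavailability-factor dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic samples: coordinates (km) and a trend plus noise (hypothetical).
coords = rng.uniform(0, 100, size=(200, 2))
values = 0.02 * coords[:, 0] + rng.normal(0, 0.5, size=200)

# Pairwise separation distances and squared value differences (upper triangle).
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
sq = (values[:, None] - values[None, :]) ** 2
iu = np.triu_indices(len(values), k=1)
d, sq = d[iu], sq[iu]

# Empirical semivariogram: gamma(h) = mean of 0.5*(z_i - z_j)^2 per lag bin.
bins = np.linspace(0, 50, 11)
gamma = np.array([
    0.5 * sq[(d >= lo) & (d < hi)].mean()
    for lo, hi in zip(bins[:-1], bins[1:])
])

# Crude nugget estimate: linear extrapolation of the first two bins to h = 0.
nugget = max(0.0, 2 * gamma[0] - gamma[1])
print(gamma.round(3), round(float(nugget), 3))
```

With the noise standard deviation of 0.5 used above, the extrapolated nugget sits near the noise variance of 0.25, which is the behaviour the record describes: a nugget significantly above zero.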

  11. A higher order space-time Galerkin scheme for time domain integral equations

    KAUST Repository

    Pray, Andrew J.; Beghein, Yves; Nair, Naveen V.; Cools, Kristof; Bagci, Hakan; Shanker, Balasubramaniam

    2014-12-01

    Stability of time domain integral equation (TDIE) solvers has remained an elusive goal for many years. Advancement of this research has largely progressed on four fronts: 1) Exact integration, 2) Lubich quadrature, 3) smooth temporal basis functions, and 4) space-time separation of convolutions with the retarded potential. The latter method's efficacy in stabilizing solutions to the time domain electric field integral equation (TD-EFIE) was previously reported for first-order surface descriptions (flat elements) and zeroth-order functions as the temporal basis. In this work, we develop the methodology necessary to extend the scheme to higher order surface descriptions as well as to enable its use with higher order basis functions in both space and time. These basis functions are then used in a space-time Galerkin framework. A number of results are presented that demonstrate convergence in time. The viability of the space-time separation method in producing stable results is demonstrated experimentally for these examples.

  13. The role of geostatistics in medical geology

    Science.gov (United States)

    Goovaerts, Pierre

    2014-05-01

    Since its development in the mining industry, geostatistics has emerged as the primary tool for spatial data analysis in various fields, ranging from earth and atmospheric sciences, to agriculture, soil science, remote sensing, and more recently environmental exposure assessment. In the last few years, these tools have been tailored to the field of medical geography or spatial epidemiology, which is concerned with the study of spatial patterns of disease incidence and mortality and the identification of potential 'causes' of disease, such as environmental exposure, diet and unhealthy behaviors, economic or socio-demographic factors. On the other hand, medical geology is an emerging interdisciplinary scientific field studying the relationship between natural geological factors and their effects on human and animal health. This paper provides an introduction to the field of medical geology with an overview of geostatistical methods available for the analysis of geological and health data. Key concepts are illustrated using the mapping of groundwater arsenic concentrations across eleven Michigan counties and the exploration of its relationship to the incidence of prostate cancer at the township level. Arsenic in drinking-water is a major problem and has received much attention because of the large human population exposed and the extremely high concentrations (e.g. 600 to 700 μg/L) recorded in many instances. Few studies have however assessed the risks associated with exposure to low levels of arsenic (say water in the United States. In the Michigan thumb region, arsenopyrite (up to 7% As by weight) has been identified in the bedrock of the Marshall Sandstone aquifer, one of the region's most productive aquifers. Epidemiologic studies have suggested a possible association between exposure to inorganic arsenic and prostate cancer mortality, including a study of populations residing in Utah. The information available for the present ecological study (i.e. analysis of

  14. High efficiency thin-film solar cells for space applications: challenges and opportunities

    NARCIS (Netherlands)

    Leest, R.H. van

    2017-01-01

    In theory high efficiency thin-film III-V solar cells obtained by the epitaxial lift-off (ELO) technique offer excellent characteristics for application in space solar panels. The thesis describes several studies that investigate the space compatibility of the thin-film solar cell design developed

  15. Space-time clusters for early detection of grizzly bear predation.

    Science.gov (United States)

    Kermish-Wells, Joseph; Massolo, Alessandro; Stenhouse, Gordon B; Larsen, Terrence A; Musiani, Marco

    2018-01-01

    on space-time probability models allows for prompt visits to predation sites. This enables accurate identification of the carcass size and increases fieldwork efficiency in predation studies.

  16. The Cauchy problem for space-time monopole equations in Sobolev spaces

    Science.gov (United States)

    Huh, Hyungjin; Yim, Jihyun

    2018-04-01

    We consider the initial value problem of space-time monopole equations in one space dimension with initial data in the Sobolev space H^s. Observing null structures of the system, we prove local well-posedness in an almost critical space. Unconditional uniqueness and global existence are proved for s ≥ 0. Moreover, we show that the H^1 Sobolev norm grows at a rate of at most c exp(ct^2).

  17. Geostatistical uncertainty of assessing air quality using high-spatial-resolution lichen data: A health study in the urban area of Sines, Portugal.

    Science.gov (United States)

    Ribeiro, Manuel C; Pinho, P; Branquinho, C; Llop, Esteve; Pereira, Maria J

    2016-08-15

    In most studies correlating health outcomes with air pollution, personal exposure assignments are based on measurements collected at air-quality monitoring stations not coinciding with health data locations. In such cases, interpolators are needed to predict air quality in unsampled locations and to assign personal exposures. Moreover, a measure of the spatial uncertainty of exposures should be incorporated, especially in urban areas where concentrations vary at short distances due to changes in land use and pollution intensity. These studies are limited by the lack of literature comparing exposure uncertainty derived from distinct spatial interpolators. Here, we addressed these issues with two interpolation methods: regression Kriging (RK) and ordinary Kriging (OK). These methods were used to generate air-quality simulations with a geostatistical algorithm. For each method, the geostatistical uncertainty was drawn from generalized linear model (GLM) analysis. We analyzed the association between air quality and birth weight. Personal health data (n=227) and exposure data were collected in Sines (Portugal) during 2007-2010. Because air-quality monitoring stations in the city do not offer high-spatial-resolution measurements (n=1), we used lichen data as an ecological indicator of air quality (n=83). We found no significant difference in the fit of GLMs with any of the geostatistical methods. With RK, however, the models tended to fit better more often and worse less often. Moreover, the geostatistical uncertainty results showed a marginally higher mean and precision with RK. Combined with lichen data and land-use data of high spatial resolution, RK is a more effective geostatistical method for relating health outcomes with air quality in urban areas. This is particularly important in small cities, which generally do not have expensive air-quality monitoring stations with high spatial resolution. Further, alternative ways of linking human activities with their
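For readers unfamiliar with the interpolators compared in this record, ordinary kriging reduces to a small linear-algebra problem: the weights solve a system built from the variogram, and the same solve yields the kriging variance used as the spatial-uncertainty measure. The sketch below uses a linear variogram and made-up survey coordinates; it does not reproduce the study's regression-kriging, simulation, or GLM machinery.

```python
import numpy as np

def ordinary_kriging(xy, z, x0, slope=1.0, nugget=0.1):
    """Ordinary kriging at x0 under a linear variogram gamma(h) = nugget + slope*h
    for h > 0 (gamma(0) = 0). Returns the prediction and the kriging variance."""
    n = len(z)

    def gamma(h):
        return np.where(h > 0, nugget + slope * h, 0.0)

    # Kriging system with a Lagrange multiplier enforcing sum(weights) == 1.
    H = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = gamma(H)
    A[:n, n] = A[n, :n] = 1.0
    b = np.append(gamma(np.linalg.norm(xy - x0, axis=1)), 1.0)
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    return w @ z, w @ b[:n] + mu   # prediction, kriging variance

rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, size=(40, 2))             # hypothetical survey sites
z = np.sin(xy[:, 0] / 3) + 0.1 * rng.normal(size=40)
pred, var = ordinary_kriging(xy, z, np.array([5.0, 5.0]))
print(round(float(pred), 3), round(float(var), 3))
```

Regression kriging, as used in the study, would first fit a trend on covariates (here, land-use data) and krige the residuals with the same system.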

  18. On Yang's Noncommutative Space Time Algebra, Holography, Area Quantization and C-space Relativity

    CERN Document Server

    Castro, C

    2004-01-01

    An isomorphism between Yang's Noncommutative space-time algebra (involving two length scales) and the holographic-area-coordinates algebra of C-spaces (Clifford spaces) is constructed via an AdS_5 space-time which is instrumental in explaining the origins of an extra (infrared) scale R in conjunction with the (ultraviolet) Planck scale lambda characteristic of C-spaces. Yang's space-time algebra allowed Tanaka to explain the origins behind the discrete nature of the spectrum for the spatial coordinates and spatial momenta which yields a minimum length-scale lambda (ultraviolet cutoff) and a minimum momentum p = hbar/R (maximal length R, infrared cutoff). The double-scaling limit of Yang's algebra, lambda → 0 and R → infinity, in conjunction with the n → infinity limit, leads naturally to the area quantization condition lambda R = L^2 = n lambda^2 (in Planck area units), given in terms of the discrete angular-momentum eigenvalues n. The generalized Weyl-Heisenberg algebra in C-spaces is ...

  19. A numerical study of adaptive space and time discretisations for Gross-Pitaevskii equations.

    Science.gov (United States)

    Thalhammer, Mechthild; Abhau, Jochen

    2012-08-15

    As a basic principle, benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross-Pitaevskii equation arising in the description of Bose-Einstein condensates in dilute gases. The performance of the Fourier pseudo-spectral method constrained to uniform meshes versus the locally adaptive finite element method and of higher-order exponential operator splitting methods with variable time stepsizes is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations with an enhancement either in efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross-Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter [Formula: see text], especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space constricts the feasibility of numerical computations for both the Fourier pseudo-spectral and the finite element method. Nevertheless, for smaller parameter values locally adaptive time discretisations facilitate determining the time stepsizes sufficiently small in order that
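The a posteriori stepsize control this record describes can be illustrated with a much simpler integrator: step doubling with a classical RK4 scheme applied to a toy linear Schrödinger-type ODE. This is a generic sketch of adaptive time stepping under an a posteriori error estimate, not the operator-splitting or finite-element schemes of the study; the model, tolerances, and stepsize-update constants are all illustrative choices.

```python
import numpy as np

def rk4_step(f, t, y, h):
    # One classical fourth-order Runge-Kutta step.
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def adaptive_rk4(f, y0, t0, t1, h=0.1, tol=1e-8):
    """Step doubling: compare one step of size h with two steps of size h/2;
    the difference gives an a posteriori local error estimate that is used
    to accept or reject the step and to choose the next stepsize."""
    t, y = t0, y0
    while t < t1 - 1e-12:
        h = min(h, t1 - t)
        full = rk4_step(f, t, y, h)
        half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        err = abs(full - half) / 15          # Richardson estimate for order 4
        if err <= tol:
            t, y = t + h, half               # accept the more accurate value
        h *= min(2.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.2))
    return y

# Toy model: i y' = y, i.e. y(t) = exp(-i t) y(0); the modulus is conserved.
y_end = adaptive_rk4(lambda t, y: -1j * y, 1.0 + 0j, 0.0, 2 * np.pi)
print(abs(y_end))
```

The same accept/reject-and-rescale loop is what an embedded splitting pair provides for the Gross-Pitaevskii setting, with the pair's defect replacing the step-doubling difference.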

  20. Space, Time, and Spacetime Physical and Philosophical Implications of Minkowski's Unification of Space and Time

    CERN Document Server

    Petkov, Vesselin

    2010-01-01

    This volume is dedicated to the centennial anniversary of Minkowski's discovery of spacetime. It contains selected papers by physicists and philosophers on the Nature and Ontology of Spacetime. The first six papers, comprising Part I of the book, provide examples of the impact of Minkowski's spacetime representation of special relativity on the twentieth century physics. Part II also contains six papers which deal with implications of Minkowski's ideas for the philosophy of space and time. The last part is represented by two papers which explore the influence of Minkowski's ideas beyond the philosophy of space and time.

  1. Empty space-times with separable Hamilton-Jacobi equation

    International Nuclear Information System (INIS)

    Collinson, C.D.; Fugere, J.

    1977-01-01

    All empty space-times admitting a one-parameter group of motions and in which the Hamilton-Jacobi equation is (partially) separable are obtained. Several different cases of such empty space-times exist and the Riemann tensor is found to be either type D or N. The results presented here complete the search for empty space-times with separable Hamilton-Jacobi equation. (author)

  2. Time-varying value of electric energy efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Mims, Natalie A.; Eckman, Tom; Goldman, Charles

    2017-06-30

    Electric energy efficiency resources save energy and may reduce peak demand. Historically, quantification of energy efficiency benefits has largely focused on the economic value of energy savings during the first year and lifetime of the installed measures. Due in part to the lack of publicly available research on end-use load shapes (i.e., the hourly or seasonal timing of electricity savings) and energy savings shapes, consideration of the impact of energy efficiency on peak demand reduction (i.e., capacity savings) has been more limited. End-use load research and the hourly valuation of efficiency savings are used for a variety of electricity planning functions, including load forecasting, demand-side management and evaluation, capacity and demand response planning, long-term resource planning, renewable energy integration, assessing potential grid modernization investments, establishing rates and pricing, and customer service. This study reviews existing literature on the time-varying value of energy efficiency savings, provides examples in four geographically diverse locations of how consideration of the time-varying value of efficiency savings impacts the calculation of power system benefits, and identifies future research needs to enhance the consideration of the time-varying value of energy efficiency in cost-effectiveness screening analysis. Findings from this study include:
    - The time-varying value of individual energy efficiency measures varies across the locations studied because of the physical and operational characteristics of the individual utility system (e.g., summer or winter peaking, load factor, reserve margin) as well as the time periods during which savings from measures occur.
    - Across the four locations studied, some of the largest capacity benefits from energy efficiency are derived from the deferral of transmission and distribution system infrastructure upgrades; however, the deferred cost of such upgrades also exhibited the greatest range.
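The core arithmetic behind time-varying valuation is a weighted sum of an hourly savings shape against hourly prices, rather than total savings times an average price. A toy sketch with entirely made-up numbers (an evening-heavy lighting measure and a two-tier price):

```python
import numpy as np

hours = np.arange(24)

# Hypothetical end-use savings shape (kWh per hour): evening-heavy lighting.
savings_kwh = np.where((hours >= 17) & (hours <= 22), 1.2, 0.3)

# Hypothetical hourly energy value ($/kWh), higher during the evening peak.
price = np.where((hours >= 17) & (hours <= 21), 0.18, 0.06)

# Flat valuation: total savings times the average price.
flat_value = float(savings_kwh.sum() * price.mean())

# Time-varying valuation: hour-by-hour savings weighted by hourly value.
time_varying_value = float(savings_kwh @ price)

print(round(flat_value, 3), round(time_varying_value, 3))
```

Because the savings shape overlaps the priced peak, the hour-by-hour valuation exceeds the flat one here, which is the effect the study quantifies for real measures and locations.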

  3. Time-space noncommutativity: quantised evolutions

    International Nuclear Information System (INIS)

    Balachandran, Aiyalam P.; Govindarajan, Thupil R.; Teotonio-Sobrinho, Paulo; Martins, Andrey Gomes

    2004-01-01

    In previous work, we developed quantum physics on the Moyal plane with time-space noncommutativity, basing ourselves on the work of Doplicher et al. Here we extend it to certain noncommutative versions of the cylinder, R^3 and R x S^3. In all these models, only discrete time translations are possible, a result known before in the first two cases. One striking consequence of quantised time translations is that even though a time-independent Hamiltonian is an observable, in scattering processes it is conserved only modulo 2π/θ, where θ is the noncommutative parameter. (In contrast, on a one-dimensional periodic lattice of lattice spacing a and length L = Na, only momentum mod 2π/L is observable (and can be conserved).) Suggestions for further study of this effect are made. Scattering theory is formulated and an approach to quantum field theory is outlined. (author)

  4. Is space-time symmetry a suitable generalization of parity-time symmetry?

    International Nuclear Information System (INIS)

    Amore, Paolo; Fernández, Francisco M.; Garcia, Javier

    2014-01-01

    We discuss space-time symmetric Hamiltonian operators of the form H = H_0 + igH′, where H_0 is Hermitian and g real. H_0 is invariant under the unitary operations of a point group G while H′ is invariant under transformation by elements of a subgroup G′ of G. If G exhibits irreducible representations of dimension greater than unity, then it is possible that H has complex eigenvalues for sufficiently small nonzero values of g. In the particular case that H is parity-time symmetric then it appears to exhibit real eigenvalues for all 00. We illustrate the main theoretical results and conclusions of this paper by means of two- and three-dimensional Hamiltonians exhibiting a variety of different point-group symmetries. - Highlights: • Space-time symmetry is a generalization of PT symmetry. • The eigenvalues of a space-time Hamiltonian are either real or appear as pairs of complex conjugate numbers. • In some cases all the eigenvalues are real for some values of a potential-strength parameter g. • At some value of g space-time symmetry is broken and complex eigenvalues appear. • Some multidimensional oscillators exhibit broken space-time symmetry for all values of g

  5. Quaternion wave equations in curved space-time

    Science.gov (United States)

    Edmonds, J. D., Jr.

    1974-01-01

    The quaternion formulation of relativistic quantum theory is extended to include curvilinear coordinates and curved space-time in order to provide a framework for a unified quantum/gravity theory. Six basic quaternion fields are identified in curved space-time, the four-vector basis quaternions are identified, and the necessary covariant derivatives are obtained. Invariant field equations are derived, and a general invertable coordinate transformation is developed. The results yield a way of writing quaternion wave equations in curvilinear coordinates and curved space-time as well as a natural framework for solving the problem of second quantization for gravity.

  6. Time as a geometric property of space

    Directory of Open Access Journals (Sweden)

    James Michael Chappell

    2016-11-01

    Full Text Available The proper description of time remains a key unsolved problem in science. Newton conceived of time as absolute and universal, which 'flows equably without relation to anything external'. In the nineteenth century, the four-dimensional algebraic structure of the quaternions developed by Hamilton inspired him to suggest that they could provide a unified representation of space and time. With the publication of Einstein's theory of special relativity, these ideas led to the generally accepted Minkowski spacetime formulation in 1908. Minkowski, though, rejected the formalism of quaternions suggested by Hamilton and adopted rather an approach using four-vectors. The Minkowski framework is indeed found to provide a versatile formalism for describing the relationship between space and time in accordance with Einstein's relativistic principles, but nevertheless fails to provide more fundamental insights into the nature of time itself. In order to answer this question we begin by exploring the geometric properties of three-dimensional space that we model using Clifford geometric algebra, which is found to contain sufficient complexity to provide a natural description of spacetime. This description using Clifford algebra is found to provide a natural alternative to the Minkowski formulation as well as providing new insights into the nature of time. Our main result is that time is the scalar component of a Clifford space and can be viewed as an intrinsic geometric property of three-dimensional space without the need for the specific addition of a fourth dimension.

  7. Dying in their prime: determinants and space-time risk of adult mortality in rural South Africa

    Science.gov (United States)

    Sartorius, Benn; Kahn, Kathleen; Collinson, Mark A.; Sartorius, Kurt; Tollman, Stephen M.

    2013-01-01

    A longitudinal dataset was used to investigate adult mortality in rural South Africa in order to determine location, trends, high impact determinants and policy implications. Adult (15-59 years) mortality data for the period 1993-2010 were extracted from the health and socio-demographic surveillance system (HDSS) in the rural sub-district of Agincourt. A Bayesian geostatistical frailty survival model was used to quantify significant associations between adult mortality and various multilevel (individual, household and community) variables. It was found that adult mortality significantly increased over time with a reduction observed late in the study period. Non-communicable disease mortality appeared to increase and decrease in parallel with communicable mortality, whilst deaths due to external causes remained constant. Male gender, unemployment, circular (labour) migrant status, age and gender of household heads, partner and/or other household death, low education and low household socioeconomic status (SES) were identified as significant and highly attributable determinants of adult mortality. Health facility remoteness was also a risk for adult mortality and households falling outside a critical buffering zone were identified. Spatial foci of higher adult mortality risk were observed indicating a strong non-random pattern. Communicable diseases differed from non-communicable diseases with respect to spatial distribution of mortality. Areas with significant excess mortality risk (hotspots) were found to be part of a complex interaction of highly attributable factors that continues to drive differential space-time risk patterns of communicable (HIV/AIDS and Tuberculosis) mortality in Agincourt. The impact of HIV mortality and its subsequent lowering due to the introduction of antiretroviral therapy (ART) was found to be clearly evident in this rural population. PMID:23733287

  8. Space-time of class one

    International Nuclear Information System (INIS)

    Villasenor, R.F.; Bonilla, J.L.L.; Zuniga, G.O.; Matos, T.

    1989-01-01

    The authors study space-times embedded in E^5 (that is, pseudo-Euclidean five-dimensional spaces) in the intrinsic rigidity case, i.e., when the second fundamental form b_ij can be determined by the internal geometry of the four-dimensional Riemannian space R^4. They write down the Gauss and Codazzi equations determining the local isometric embedding of R^4 in E^5 and give some consequences of it. They prove that when there exists intrinsic rigidity, then b_ij is a linear combination of the metric and Ricci tensors; some applications are given for the de Sitter and Einstein models.

  9. Geostatistical characterization of the Callovo-Oxfordian clay variability: from conventional and high resolution log data

    International Nuclear Information System (INIS)

    Lefranc, Marie

    2007-01-01

    Andra (National Radioactive Waste Management Agency) has conducted studies in its Meuse/Haute-Marne Underground Research Laboratory located at a depth of about 490 m in a 155-million-year-old argillaceous rock: the Callovo-Oxfordian argillite. The purpose of the present work is to obtain as much information as possible from high-resolution log data and to optimize their analysis to specify and characterize space-time variations of the argillites from the Meuse/Haute-Marne site and subsequently predict the evolution of argillite properties over a 250 km² zone around the underground laboratory (transposition zone). The aim is to outline a methodology to transform depth intervals into geological time intervals and thus to precisely quantify sedimentation rate variations and estimate durations, for example the duration of bio-stratigraphical units or of hiatuses. The latter point is particularly important because a continuous time recording is often assumed in geological modelling. The spatial variations can be studied on various scales. First, well-to-well correlations are established between seven wells at different scales. Relative variations of the thickness are observed locally. Second, FMI (Full-bore Formation Micro-Imager, Schlumberger) data are studied in detail to extract as much information as possible. For example, the analysis of FMI images reveals a clear carbonate - clay inter-bedding which displays cycles. Third, geostatistical tools are used to study these cycles. The variographic analysis of conventional log data shows one-metre cycles. With FMI data, smaller periods can be detected. Variogram modelling and factorial kriging analysis suggest that three spatial periods exist. They vary vertically and laterally in the boreholes but cycle ratios are stable and similar to orbital-cycle ratios (Milankovitch cycles). The three periods correspond to eccentricity, obliquity and precession. 
Since the duration of these orbital cycles is known, depth intervals can
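The depth-to-time conversion sketched in this abstract can be illustrated numerically: once a depth wavelength identified by variogram analysis is attributed to an orbital cycle of known period, the implied sedimentation rate converts any depth interval into a duration. A minimal sketch under that assumption (all numbers below are illustrative, not measurements from the Meuse/Haute-Marne site):

```python
# Convert depth intervals to durations via an orbital (Milankovitch) cycle.
# Illustrative values only; not the actual Callovo-Oxfordian data.

def sedimentation_rate(cycle_thickness_m: float, cycle_period_yr: float) -> float:
    """Sedimentation rate (m/yr) implied by one orbital cycle."""
    return cycle_thickness_m / cycle_period_yr

def interval_duration(thickness_m: float, rate_m_per_yr: float) -> float:
    """Duration (yr) of a depth interval at a constant sedimentation rate."""
    return thickness_m / rate_m_per_yr

# Suppose a ~1 m vertical cycle is attributed to precession (~20,000 yr):
rate = sedimentation_rate(1.0, 20_000.0)   # 5e-05 m/yr
print(interval_duration(10.0, rate))       # a 10 m interval implies ~200,000 yr
```

The same arithmetic, run in reverse, flags hiatuses: an interval whose thickness implies a much shorter duration than its biostratigraphic bounds suggests missing time.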

  10. Aging in a Relativistic Biological Space-Time

    Directory of Open Access Journals (Sweden)

    Davide Maestrini

    2018-05-01

    Full Text Available Here we present a theoretical and mathematical perspective on the process of aging. We extend the concepts of physical space and time to an abstract, mathematically-defined space, which we associate with a concept of “biological space-time” in which biological dynamics may be represented. We hypothesize that biological dynamics, represented as trajectories in biological space-time, may be used to model and study different rates of biological aging. As a consequence of this hypothesis, we show how dilation or contraction of time analogous to relativistic corrections of physical time resulting from accelerated or decelerated biological dynamics may be used to study precipitous or protracted aging. We show specific examples of how these principles may be used to model different rates of aging, with an emphasis on cancer in aging. We discuss how this theory may be tested or falsified, as well as novel concepts and implications of this theory that may improve our interpretation of biological aging.

  11. Natural world physical, brain operational, and mind phenomenal space-time

    Science.gov (United States)

    Fingelkurts, Andrew A.; Fingelkurts, Alexander A.; Neves, Carlos F. H.

    2010-06-01

    Concepts of space and time are widely developed in physics. However, there is a considerable lack of biologically plausible theoretical frameworks that can demonstrate how space and time dimensions are implemented in the activity of the most complex life-system - the brain with a mind. Brain activity is organized both temporally and spatially, thus representing space-time in the brain. Critical analysis of recent research on the space-time organization of the brain's activity pointed to the existence of so-called operational space-time in the brain. This space-time is limited to the execution of brain operations of differing complexity. During each such brain operation a particular short-term spatio-temporal pattern of integrated activity of different brain areas emerges within related operational space-time. At the same time, to have a fully functional human brain one needs to have a subjective mental experience. Current research on the subjective mental experience offers detailed analysis of space-time organization of the mind. According to this research, subjective mental experience (subjective virtual world) has definitive spatial and temporal properties similar to many physical phenomena. Based on systematic review of the propositions and tenets of brain and mind space-time descriptions, our aim in this review essay is to explore the relations between the two. To be precise, we would like to discuss the hypothesis that via the brain operational space-time the mind subjective space-time is connected to otherwise distant physical space-time reality.

  12. Development, Qualification and Production of Space Solar Cells with 30% EOL Efficiency

    Science.gov (United States)

    Guter, Wolfgang; Ebel, Lars; Fuhrmann, Daniel; Kostler, Wolfgang; Meusel, Matthias

    2014-08-01

    AZUR SPACE's latest qualified solar cell product, the 3G30-advanced, provides a high end-of-life (EOL) efficiency of 27.8% for a fluence of 5E14 1 MeV electrons/cm², at low production cost. In order to further reduce mass, the 3G30-advanced was thinned down to 20 μm and tested in space. Next-generation solar cells must exceed the EOL efficiency of the 3G30-advanced and therefore will utilize the excess current of the Ge subcell. This can be achieved by a metamorphic cell concept. While average beginning-of-life efficiencies above 31% have already been demonstrated with upright metamorphic triple-junction cells, AZUR's next-generation product will comprise a metamorphic 4-junction device targeting 30% EOL.

  13. Topology of classical vacuum space-time

    International Nuclear Information System (INIS)

    Cho, Y.M.

    2007-04-01

    We present a topological classification of classical vacuum space-time. Assuming the 3-dimensional space allows a global chart, we show that the static vacuum space-time of Einstein's theory can be classified by the knot topology π₃(S³) = π₃(S²). Viewing Einstein's theory as a gauge theory of the Lorentz group and identifying the gravitational connection as the gauge potential of the Lorentz group, we construct all possible vacuum gravitational connections which give a vanishing curvature tensor. With this we show that the vacuum connection has the knot topology, the same topology which describes the multiple vacua of SU(2) gauge theory. We discuss the physical implications of our result in quantum gravity. (author)

  14. Land use and land cover change based on historical space-time model

    Science.gov (United States)

    Sun, Qiong; Zhang, Chi; Liu, Min; Zhang, Yongjing

    2016-09-01

    Land use and cover change is a leading-edge topic in the current research field of global environmental change, and case studies of typical areas are an important approach to understanding global environmental change. Taking the Qiantang River (Zhejiang, China) as an example, this study explores automatic classification of land use using remote sensing technology and analyzes historical space-time change by remote sensing monitoring. This study combines spectral angle mapping (SAM) with multi-source information to create a convenient, efficient and high-precision computer-automated land use classification method which meets application requirements and is suitable for the complex landform of the studied area. This work analyzes the historical space-time characteristics of land use and cover change in the Qiantang River basin in 2001, 2007 and 2014, in order to (i) verify the feasibility of studying land use change with remote sensing technology, (ii) accurately understand land use and cover change as well as historical space-time evolution trends, (iii) provide a realistic basis for the sustainable development of the Qiantang River basin and (iv) provide strong information support and a new research method for optimizing the Qiantang River land use structure and achieving optimal allocation of land resources and scientific management.

  15. Geostatistical evaluation of integrated marsh management impact on mosquito vectors using before-after-control-impact (BACI) design

    Directory of Open Access Journals (Sweden)

    Dempsey Mary E

    2009-06-01

    Full Text Available Abstract Background In many parts of the world, salt marshes play a key ecological role as the interface between the marine and the terrestrial environments. Salt marshes are also exceedingly important for public health as larval habitat for mosquitoes that are vectors of disease and significant biting pests. Although grid ditching and pesticides have been effective in salt marsh mosquito control, marsh degradation and other environmental considerations compel a different approach. Targeted habitat modification and biological control methods known as Open Marsh Water Management (OMWM) had been proposed as a viable alternative to marsh-wide physical alterations and chemical control. However, traditional larval sampling techniques may not adequately assess the impacts of marsh management on mosquito larvae. To assess the effectiveness of integrated OMWM and marsh restoration techniques for mosquito control, we analyzed the results of a 5-year OMWM/marsh restoration project to determine changes in mosquito larval production using GIS and geostatistical methods. Methods The following parameters were evaluated using "Before-After-Control-Impact" (BACI) design: frequency and geographic extent of larval production, intensity of larval production, changes in larval habitat, and number of larvicide applications. The analyses were performed using Moran's I, Getis-Ord, and Spatial Scan statistics on aggregated before and after data as well as data collected over time. This allowed comparison of control and treatment areas to identify changes attributable to the OMWM/marsh restoration modifications. Results The frequency of finding mosquito larvae in the treatment areas was reduced by 70% resulting in a loss of spatial larval clusters compared to those found in the control areas. This effect was observed directly following OMWM treatment and remained significant throughout the study period. The greatly reduced frequency of finding larvae in the treatment

  16. Geostatistical evaluation of integrated marsh management impact on mosquito vectors using before-after-control-impact (BACI) design.

    Science.gov (United States)

    Rochlin, Ilia; Iwanejko, Tom; Dempsey, Mary E; Ninivaggi, Dominick V

    2009-06-23

    In many parts of the world, salt marshes play a key ecological role as the interface between the marine and the terrestrial environments. Salt marshes are also exceedingly important for public health as larval habitat for mosquitoes that are vectors of disease and significant biting pests. Although grid ditching and pesticides have been effective in salt marsh mosquito control, marsh degradation and other environmental considerations compel a different approach. Targeted habitat modification and biological control methods known as Open Marsh Water Management (OMWM) had been proposed as a viable alternative to marsh-wide physical alterations and chemical control. However, traditional larval sampling techniques may not adequately assess the impacts of marsh management on mosquito larvae. To assess the effectiveness of integrated OMWM and marsh restoration techniques for mosquito control, we analyzed the results of a 5-year OMWM/marsh restoration project to determine changes in mosquito larval production using GIS and geostatistical methods. The following parameters were evaluated using "Before-After-Control-Impact" (BACI) design: frequency and geographic extent of larval production, intensity of larval production, changes in larval habitat, and number of larvicide applications. The analyses were performed using Moran's I, Getis-Ord, and Spatial Scan statistics on aggregated before and after data as well as data collected over time. This allowed comparison of control and treatment areas to identify changes attributable to the OMWM/marsh restoration modifications. The frequency of finding mosquito larvae in the treatment areas was reduced by 70% resulting in a loss of spatial larval clusters compared to those found in the control areas. This effect was observed directly following OMWM treatment and remained significant throughout the study period. 
The greatly reduced frequency of finding larvae in the treatment areas led to a significant decrease (approximately 44%) in
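Global Moran's I, one of the spatial statistics these two records rely on, measures whether high (or low) values cluster in space. A self-contained sketch on a toy grid of larval counts with rook adjacency (illustrative data, not the study's; the helper names are hypothetical):

```python
import numpy as np

def morans_i(values: np.ndarray, weights: np.ndarray) -> float:
    """Global Moran's I: (n / W) * sum_ij w_ij z_i z_j / sum_i z_i^2,
    where z are deviations from the mean and W is the sum of all weights."""
    z = values - values.mean()
    n = values.size
    num = z @ weights @ z          # sum_ij w_ij z_i z_j
    den = (z ** 2).sum()
    return (n / weights.sum()) * num / den

def rook_weights(rows: int, cols: int) -> np.ndarray:
    """Binary rook-adjacency (shared edge) weight matrix for a rows x cols grid."""
    n = rows * cols
    w = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    w[i, rr * cols + cc] = 1.0
    return w

# Clustered counts (a high-production corner) give positive autocorrelation:
grid = np.array([[9, 8, 1, 0],
                 [8, 7, 1, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0]], dtype=float)
print(morans_i(grid.ravel(), rook_weights(4, 4)))  # > 0 for clustered data
```

A loss of clustering after treatment, as reported above, would show up as Moran's I falling toward (or below) its small negative expected value under spatial randomness, -1/(n-1).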

  17. Quantum Space-Time Deformed Symmetries Versus Broken Symmetries

    CERN Document Server

    Amelino-Camelia, G

    2002-01-01

    Several recent studies have concerned the fate of classical symmetries in quantum space-time. In particular, it appears likely that quantum (discretized, noncommutative,...) versions of Minkowski space-time would not enjoy the classical Lorentz symmetries. I compare two interesting cases: the case in which the classical symmetries are "broken", i.e. at the quantum level some classical symmetries are lost, and the case in which the classical symmetries are "deformed", i.e. the quantum space-time has as many symmetries as its classical counterpart but the nature of these symmetries is affected by the space-time quantization procedure. While some general features, such as the emergence of deformed dispersion relations, characterize both the symmetry-breaking case and the symmetry-deformation case, the two scenarios are also characterized by sharp differences, even concerning the nature of the new effects predicted. I illustrate this point with an illustrative calculation concerning the role of space-time symm...

  18. Space-Time Diffeomorphisms in Noncommutative Gauge Theories

    Directory of Open Access Journals (Sweden)

    L. Román Juarez

    2008-07-01

    Full Text Available In previous work [Rosenbaum M. et al., J. Phys. A: Math. Theor. 40 (2007), 10367–10382] we have shown how for canonical parametrized field theories, where space-time is placed on the same footing as the other fields in the theory, the representation of space-time diffeomorphisms provides a very convenient scheme for analyzing the induced twisted deformation of these diffeomorphisms, as a result of the space-time noncommutativity. However, for gauge field theories (and of course also for canonical geometrodynamics) where the Poisson brackets of the constraints explicitly depend on the embedding variables, this Poisson algebra cannot be connected directly with a representation of the complete Lie algebra of space-time diffeomorphisms, because not all the field variables turn out to have a dynamical character [Isham C.J., Kuchar K.V., Ann. Physics 164 (1985), 288–315, 316–333]. Nonetheless, such a homomorphic mapping can be recovered by first modifying the original action and then adding additional constraints in the formalism in order to retrieve the original theory, as shown by Kuchar and Stone for the case of the parametrized Maxwell field in [Kuchar K.V., Stone S.L., Classical Quantum Gravity 4 (1987), 319–328]. Making use of a combination of all of these ideas, we are therefore able to apply our canonical reparametrization approach in order to derive the deformed Lie algebra of the noncommutative space-time diffeomorphisms as well as to consider how gauge transformations act on the twisted algebras of gauge and particle fields. This, hopefully, adds clarification to some outstanding issues in the literature concerning the symmetries of gauge theories in noncommutative space-times.

  19. The philosophy of space and time

    CERN Document Server

    Reichenbach, Hans

    1958-01-01

    With unusual depth and clarity, the author covers the problem of the foundations of geometry, the theory of time, the theory and consequences of Einstein's relativity including: relations between theory and observations, coordinate definitions, relations between topological and metrical properties of space, the psychological problem of the possibility of a visual intuition of non-Euclidean structures, and many other important topics in modern science and philosophy. While some of the book utilizes mathematics of a somewhat advanced nature, the exposition is so careful and complete that most people familiar with the philosophy of science or some intermediate mathematics will understand the majority of the ideas and problems discussed. Partial contents: I. The Problem of Physical Geometry. Universal and Differential Forces. Visualization of Geometries. Spaces with non-Euclidean Topological Properties. Geometry as a Theory of Relations. II. The Difference between Space and Time. Simultaneity. Time Order. Unreal ...

  20. Downscaling remotely sensed imagery using area-to-point cokriging and multiple-point geostatistical simulation

    Science.gov (United States)

    Tang, Yunwei; Atkinson, Peter M.; Zhang, Jingxiong

    2015-03-01

    A cross-scale data integration method was developed and tested based on the theory of geostatistics and multiple-point geostatistics (MPG). The goal was to downscale remotely sensed images while retaining spatial structure by integrating images at different spatial resolutions. During the process of downscaling, a rich spatial correlation model in the form of a training image was incorporated to facilitate reproduction of similar local patterns in the simulated images. Area-to-point cokriging (ATPCK) was used as locally varying mean (LVM) (i.e., soft data) to deal with the change of support problem (COSP) for cross-scale integration, which MPG cannot achieve alone. Several pairs of spectral bands of remotely sensed images were tested for integration within different cross-scale case studies. The experiment shows that MPG can restore the spatial structure of the image at a fine spatial resolution given the training image and conditioning data. The super-resolution image can be predicted using the proposed method, which cannot be realised using most data integration methods. The results show that ATPCK-MPG approach can achieve greater accuracy than methods which do not account for the change of support issue.

  1. Time-Varying Value of Energy Efficiency in Michigan

    Energy Technology Data Exchange (ETDEWEB)

    Mims, Natalie; Eckman, Tom; Schwartz, Lisa C.

    2018-04-02

    Quantifying the time-varying value of energy efficiency is necessary to properly account for all of its benefits and costs and to identify and implement efficiency resources that contribute to a low-cost, reliable electric system. Historically, most quantification of the benefits of efficiency has focused largely on the economic value of annual energy reduction. Due to the lack of statistically representative metered end-use load shape data in Michigan (i.e., the hourly or seasonal timing of electricity savings), the ability to confidently characterize the time-varying value of energy efficiency savings in the state, especially for weather-sensitive measures such as central air conditioning, is limited. Still, electric utilities in Michigan can take advantage of opportunities to incorporate the time-varying value of efficiency into their planning. For example, end-use load research and hourly valuation of efficiency savings can be used for a variety of electricity planning functions, including load forecasting, demand-side management and evaluation, capacity planning, long-term resource planning, renewable energy integration, assessing potential grid modernization investments, establishing rates and pricing, and customer service (KEMA 2012). In addition, accurately calculating the time-varying value of efficiency may help energy efficiency program administrators prioritize existing offerings, set incentive or rebate levels that reflect the full value of efficiency, and design new programs.
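The hourly-valuation idea this record describes can be shown with a toy calculation: weighting hourly savings by hourly prices, rather than multiplying total savings by the annual average price, changes the computed value of a measure whose savings concentrate in high-price hours. All numbers below are illustrative assumptions, not Michigan data:

```python
# Flat vs time-varying valuation of efficiency savings.
# Illustrative 4-hour example; a real analysis would use 8760 hourly values.

prices = [30.0, 40.0, 120.0, 50.0]   # $/MWh in each hour (assumed)
savings = [0.0, 0.1, 0.8, 0.1]       # MWh saved in each hour (assumed)

flat_value = (sum(prices) / len(prices)) * sum(savings)
hourly_value = sum(p * s for p, s in zip(prices, savings))

print(flat_value)    # 60.0  (average price x total savings)
print(hourly_value)  # 105.0 (savings concentrated in the high-price hour)
```

The gap between the two numbers is exactly the value that annual-average accounting misses for measures, like air conditioning efficiency, whose savings align with peak prices.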

  2. Innovative Technologies for Efficient Pharmacotherapeutic Management in Space

    Science.gov (United States)

    Putcha, Lakshmi; Daniels, Vernie

    2014-01-01

    Current and future space exploration missions and extended human presence in space aboard the ISS will expose crew to risks that differ both quantitatively and qualitatively from those encountered before by space travelers and will impose unknown risks to safety and crew health. The technology development challenges for optimizing therapeutics in space must include the development of pharmaceuticals with extended stability, optimal efficacy and bioavailability with minimal toxicity and side effects. Innovative technology development goals may include sustained/chronic delivery of preventive health care products and vaccines; low-cost, high-efficiency, noninvasive, non-oral dosage forms with radio-protective formulation matrices; and dispensing technologies coupled with self-reliant tracking technologies for quality assurance and quality control assessment. These revolutionary advances in pharmaceutical technology will assure human presence in space and healthy living on Earth. Additionally, the Joint Commission on Accreditation of Healthcare Organizations advocates the use of health information technologies to effectively execute all aspects of medication management (prescribing, dispensing, and administration). The advent of personalized medicine and highly streamlined treatment regimens stimulated interest in new technologies for medication management. Intelligent monitoring devices enhance medication accountability and compliance, enable effective drug use, and offer appropriate storage and security conditions for dangerous drug and controlled substance medications in remote sites where traditional pharmacies are unavailable. These features are ideal for Exploration Medical Capabilities. This presentation will highlight current novel commercial off-the-shelf (COTS) intelligent medication management devices for the unique dispensing, therapeutic drug monitoring, medication tracking, and drug delivery demands of exploration space medical operations.

  3. Linking Time and Space Scales in Distributed Hydrological Modelling - a case study for the VIC model

    Science.gov (United States)

    Melsen, Lieke; Teuling, Adriaan; Torfs, Paul; Zappa, Massimiliano; Mizukami, Naoki; Clark, Martyn; Uijlenhoet, Remko

    2015-04-01

    /24 degree, if in the end you only look at monthly runoff? In this study an attempt is made to link time and space scales in the VIC model, to study the added value of a higher spatial resolution-model for different time steps. In order to do this, four different VIC models were constructed for the Thur basin in North-Eastern Switzerland (1700 km²), a tributary of the Rhine: one lumped model, and three spatially distributed models with a resolution of respectively 1x1 km, 5x5 km, and 10x10 km. All models are run at an hourly time step and aggregated and calibrated for different time steps (hourly, daily, monthly, yearly) using a novel Hierarchical Latin Hypercube Sampling Technique (Vořechovský, 2014). For each time and space scale, several diagnostics like Nash-Sutcliffe efficiency, Kling-Gupta efficiency, all the quantiles of the discharge etc., are calculated in order to compare model performance over different time and space scales for extreme events like floods and droughts. Next to that, the effect of time and space scale on the parameter distribution can be studied. In the end we hope to find a link for optimal time and space scale combinations.
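The diagnostics named above have standard closed forms. A minimal sketch of the Nash-Sutcliffe efficiency and the Kling-Gupta efficiency (2009 formulation), assuming the common definitions rather than code from this study:

```python
import numpy as np

def nse(obs: np.ndarray, sim: np.ndarray) -> float:
    """Nash-Sutcliffe efficiency: 1 - SSE / sum of squared deviations
    of the observations from their mean. 1.0 is a perfect fit."""
    return 1.0 - ((sim - obs) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

def kge(obs: np.ndarray, sim: np.ndarray) -> float:
    """Kling-Gupta efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2),
    with r the correlation, alpha the std ratio, beta the mean ratio."""
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([1.0, 3.0, 2.0, 5.0, 4.0])  # illustrative discharge series
print(nse(obs, obs))  # 1.0 for a perfect simulation
print(kge(obs, obs))  # 1.0 for a perfect simulation
```

Aggregating `obs` and `sim` to daily, monthly or yearly means before calling these functions is what makes the scores time-scale dependent, which is precisely the sensitivity the study examines.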

  4. Space-time reference with an optical link

    International Nuclear Information System (INIS)

    Berceau, P; Hollberg, L; Taylor, M; Kahn, J

    2016-01-01

    We describe a concept for realizing a high performance space-time reference using a stable atomic clock in a precisely defined orbit and synchronizing the orbiting clock to high-accuracy atomic clocks on the ground. The synchronization would be accomplished using a two-way lasercom link between ground and space. The basic approach is to take advantage of the highest-performance cold-atom atomic clocks at national standards laboratories on the ground and to transfer that performance to an orbiting clock that has good stability and that serves as a ‘frequency-flywheel’ over time-scales of a few hours. The two-way lasercom link would also provide precise range information and thus precise orbit determination. With a well-defined orbit and a synchronized clock, the satellite could serve as a high-accuracy space-time reference, providing precise time worldwide, a valuable reference frame for geodesy, and independent high-accuracy measurements of GNSS clocks. Under reasonable assumptions, a practical system would be able to deliver picosecond timing worldwide and millimeter orbit determination, and could serve as an enabling subsystem for other proposed space-gravity missions, which are briefly reviewed. (paper)
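The two-way synchronization step admits a simple numerical illustration. With timestamps t1 (ground transmit), t2 (space receive), t3 (space transmit), t4 (ground receive), and assuming a symmetric signal path, the standard two-way time-transfer relations give the clock offset and the one-way delay. A sketch under those assumptions (values illustrative; the relations, not this mission's actual processing, are what is shown):

```python
def two_way_offset_delay(t1: float, t2: float, t3: float, t4: float):
    """Symmetric two-way time transfer:
    offset = ((t2 - t1) - (t4 - t3)) / 2   (space clock minus ground clock)
    delay  = ((t2 - t1) + (t4 - t3)) / 2   (one-way signal delay)"""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Space clock 5 units ahead of ground, true one-way light time 100 units:
t1 = 0.0
t2 = t1 + 100.0 + 5.0   # arrival stamped by the (fast) space clock
t3 = t2 + 1.0           # onboard turnaround
t4 = t3 - 5.0 + 100.0   # arrival stamped back on the ground clock
print(two_way_offset_delay(t1, t2, t3, t4))  # (5.0, 100.0)
```

The same delay measurement doubles as precise ranging, which is why one lasercom link can serve both the synchronization and the orbit-determination roles described above.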

  5. Quantum space-times in the year 2002

    Indian Academy of Sciences (India)

    These ideas of space-time are suggested by developments in fuzzy physics, string theory, and deformation quantization. The review focuses on the ideas coming from fuzzy physics. We find models of quantum space-time like fuzzy S⁴ on which states cannot be localized, but which fluctuate into other manifolds like CP³.

  6. A short history of fractal-Cantorian space-time

    International Nuclear Information System (INIS)

    Marek-Crnjac, L.

    2009-01-01

    The article attempts to give a short historical overview of the discovery of fractal-Cantorian space-time, starting from the 17th century up to the present. In the last 25 years a great number of scientists worked on fractal space-time, notably Garnet Ord in Canada, Laurent Nottale in France and Mohamed El Naschie in England, who gave an exact mathematical procedure for the derivation of the dimensionality and curvature of the fractal space-time fuzzy manifold.

  7. High-Efficiency Rad-Hard Ultra-Thin Si Photovoltaic Cell Technology for Space, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Improvements to solar cell efficiency that is consistent with low cost, high volume fabrication techniques are critical for future NASA space missions. In this...

  8. Geostatistical Analysis Methods for Estimation of Environmental Data Homogeneity

    Directory of Open Access Journals (Sweden)

    Aleksandr Danilov

    2018-01-01

    Full Text Available The methodology for assessing the spatial homogeneity of ecosystems, with the possibility of subsequent zoning of territories in terms of the degree of disturbance of the environment, is considered in the study. The degree of pollution of the water body was reconstructed on the basis of hydrochemical monitoring data and information on the level of the technogenic load in one year. As a result, the greatest environmental stress zones were isolated, and the correctness of the zoning based on geostatistical analysis techniques was demonstrated. The computational algorithms were implemented in the object-oriented programming language C#. A software application was obtained that allows quick assessment of the scale and spatial localization of pollution during the initial analysis of the environmental situation.

  9. The algebraic approach to space-time geometry

    International Nuclear Information System (INIS)

    Heller, M.; Multarzynski, P.; Sasin, W.

    1989-01-01

    A differential manifold can be defined in terms of smooth real functions carried by it. By rejecting the postulate, in such a definition, demanding the local diffeomorphism of a manifold to the Euclidean space, one obtains the so-called differential space concept. Every subset of R n turns out to be a differential space. Extensive parts of differential geometry on differential spaces, developed by Sikorski, are reviewed and adapted to relativistic purposes. Differential space as a new model of space-time is proposed. The Lorentz structure and Einstein's field equations on differential spaces are discussed. 20 refs. (author)

  10. The Liquid Droplet Radiator - an Ultralightweight Heat Rejection System for Efficient Energy Conversion in Space

    Science.gov (United States)

    Mattick, A. T.; Hertzberg, A.

    1984-01-01

    A heat rejection system for space is described which uses a recirculating free stream of liquid droplets in place of a solid surface to radiate waste heat. By using sufficiently small droplets (100 micron diameter) of low vapor pressure liquids, the radiating droplet sheet can be made many times lighter than the lightest solid surface radiators (heat pipes). The liquid droplet radiator (LDR) is less vulnerable to damage by micrometeoroids than solid surface radiators, and may be transported into space far more efficiently. Analyses are presented of LDR applications in thermal and photovoltaic energy conversion which indicate that fluid handling components (droplet generator, droplet collector, heat exchanger, and pump) may comprise most of the radiator system mass. Even the unoptimized models employed yield LDR system masses less than heat pipe radiator system masses, and significant improvement is expected using design approaches that incorporate fluid handling components more efficiently. Technical problems (e.g., spacecraft contamination and electrostatic deflection of droplets) unique to this method of heat rejection are discussed and solutions are suggested.

  11. Detection of Coronal Mass Ejections Using Multiple Features and Space-Time Continuity

    Science.gov (United States)

    Zhang, Ling; Yin, Jian-qin; Lin, Jia-ben; Feng, Zhi-quan; Zhou, Jin

    2017-07-01

    Coronal Mass Ejections (CMEs) release tremendous amounts of energy in the solar system, which has an impact on satellites, power facilities and wireless transmission. To effectively detect a CME in Large Angle Spectrometric Coronagraph (LASCO) C2 images, we propose a novel algorithm to locate the suspected CME regions, using the Extreme Learning Machine (ELM) method and taking into account the features of the grayscale and the texture. Furthermore, space-time continuity is used in the detection algorithm to exclude the false CME regions. The algorithm includes three steps: i) define the feature vector which contains textural and grayscale features of a running difference image; ii) design the detection algorithm based on the ELM method according to the feature vector; iii) improve the detection accuracy rate by using the decision rule of the space-time continuum. Experimental results show the efficiency and the superiority of the proposed algorithm in the detection of CMEs compared with other traditional methods. In addition, our algorithm is insensitive to most noise.
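The Extreme Learning Machine at the core of the detector above trains only the output layer: hidden-layer weights are drawn at random and fixed, and output weights come from a single least-squares solve. A minimal sketch of that idea on toy data (not LASCO imagery; `elm_train`/`elm_predict` are hypothetical helper names):

```python
import numpy as np

def elm_train(x: np.ndarray, y: np.ndarray, hidden: int = 50, seed: int = 0):
    """Fit an ELM: random tanh hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(x.shape[1], hidden))   # fixed random input weights
    b = rng.normal(size=hidden)                 # fixed random biases
    h = np.tanh(x @ w + b)                      # hidden-layer feature matrix
    beta, *_ = np.linalg.lstsq(h, y, rcond=None)  # only trained parameters
    return w, b, beta

def elm_predict(x: np.ndarray, w, b, beta) -> np.ndarray:
    return np.tanh(x @ w + b) @ beta

# XOR-like toy problem with labels +1 / -1 (stand-in for CME / non-CME):
x = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([-1.0, 1.0, 1.0, -1.0])
w, b, beta = elm_train(x, y)
pred = np.sign(elm_predict(x, w, b, beta))
print(pred)  # recovers the training labels given enough random features
```

In the paper's setting, `x` would hold grayscale and texture features of running-difference image regions, and the space-time continuity rule would then filter the ELM's positive detections.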

  12. Action detection by double hierarchical multi-structure space-time statistical matching model

    Science.gov (United States)

    Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang

    2018-03-01

    To address the complexity of the information in videos and the low efficiency of existing detectors, an action detection model based on a neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) for temporal action localization. First, a neighboring Gaussian structure is presented to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to achieve two similarity matrices on both large and small scales, which combines double hierarchical structural constraints in the model by both the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarity is fused and analyzed to detect actions. Besides, the multi-scale composite template extends the model application to multi-view settings. Experimental results of DMSM on the complex visual tracker benchmark data sets and the THUMOS 2014 data sets show promising performance. Compared with other state-of-the-art algorithms, DMSM achieves superior performance.

  13. Time-zero efficiency of European power derivatives markets

    International Nuclear Information System (INIS)

    Peña, Juan Ignacio; Rodriguez, Rosa

    2016-01-01

    We study time-zero efficiency of electricity derivatives markets. By time-zero efficiency we mean that a sequence of prices of derivatives contracts having the same underlying asset but different times to maturity complies with a set of efficiency conditions that prevent profitable time-zero arbitrage opportunities. We investigate whether statistical tests, based on the law of one price, and trading rules, based on price differentials and no-arbitrage violations, are useful for assessing time-zero efficiency. We apply the tests and trading rules to daily data from three European power markets: Germany, France and Spain. In the case of the German market, after considering liquidity availability and transaction costs, results are not inconsistent with time-zero efficiency. However, in the case of the French and Spanish markets, limitations in liquidity and representativeness prevent definite conclusions. Liquidity in the French and Spanish markets should be improved through pricing and marketing incentives. These incentives should attract more participants into the electricity derivatives exchanges and should encourage them to settle OTC trades in clearinghouses. Publication of statistics on prices, volumes and open interest per type of participant should be promoted. - Highlights: •We test time-zero efficiency of derivatives power markets in Germany, France and Spain. •Prices in Germany, considering liquidity and transaction costs, are time-zero efficient. •In France and Spain, limitations in liquidity and representativeness prevent conclusions. •Liquidity in France and Spain should improve by using pricing and marketing incentives. •Incentives attract participants to exchanges and encourage them to settle OTC trades in clearinghouses.
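    One law-of-one-price condition of this kind can be illustrated with a toy check: a yearly contract against the hours-weighted strip of quarterly contracts for the same delivery period. All quotes, hour counts and the transaction-cost bound below are invented for the example, not taken from the paper's data:

```python
# Hypothetical time-zero quotes (EUR/MWh) for one delivery year and its quarters.
year_price = 42.0
quarter_prices = [45.0, 40.5, 39.0, 44.0]   # Q1..Q4
quarter_hours = [2160, 2184, 2208, 2208]    # delivery hours per quarter (non-leap year)

# Law of one price: the yearly contract should trade at the hours-weighted
# average of the quarterly strip, up to transaction costs.
strip = sum(p * h for p, h in zip(quarter_prices, quarter_hours)) / sum(quarter_hours)
spread = year_price - strip

transaction_cost = 0.10                     # assumed round-trip cost, EUR/MWh
profitable_arbitrage = abs(spread) > transaction_cost
```

    A spread outside the transaction-cost band signals a time-zero arbitrage: sell the overpriced side, buy the strip, and hold both to delivery.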

  14. Occupy: New Pedagogy of Space and Time?

    Directory of Open Access Journals (Sweden)

    Sarah Amsler

    2015-12-01

    Full Text Available This paper forms the first part of a project of inquiry to understand the theoretical and practical potentials of Occupy through the recent wave of occupations that have emerged in response to the politics of austerity and precarity around the world. We do this as educators who are seeking to ‘occupy’ spaces of higher education inside and outside of the institutions in which we work. Occupy points to the centrality of space and time as practical concepts through which it is possible to reconfigure revolutionary activity. By dealing with the concept (Occupy) at this fundamental level of space and time through a critical engagement with Henri Lefebvre’s notion of ‘a new pedagogy of space and time’, we hope to open spaces for further revolutionary transformation by extending a critique of the politics of space and time into the institutions and idea of education itself. Lefebvre considers the ‘pedagogy of space and time’ as a basis for a new form of ‘counter-space’. He suggests that ‘deviant or diverted spaces, though initially subordinate, show distinct evidence of a true productive capacity’ (2008: 383), and in doing so reveal the breaking points of everyday life and the ways in which it might be appropriated as exuberant spaces full of enjoyment and hope. In The Production of Space, he identifies the space of leisure as a site within which such a resistance might be contemplated and activated. In our work we replace the principle of leisure with the concept of Occupy. We consider here how attempts to occupy the university curriculum, not as a programme of education but as the production of critical knowledge, may also constitute ‘a new pedagogy of space and time’. We will describe this occupation of higher education with reference to two projects with which we are involved: Student as Producer and the Social Science Centre, the former at the University of Lincoln and the latter across the city of Lincoln.

  15. Convexity and the Euclidean Metric of Space-Time

    Directory of Open Access Journals (Sweden)

    Nikolaos Kalogeropoulos

    2017-02-01

    Full Text Available We address the reasons why the “Wick-rotated”, positive-definite, space-time metric obeys the Pythagorean theorem. An answer is proposed based on the convexity and smoothness properties of the functional spaces purporting to provide the kinematic framework of approaches to quantum gravity. We employ moduli of convexity and smoothness which are eventually extremized by Hilbert spaces. We point out the potential physical significance that functional analytical dualities play in this framework. Following the spirit of the variational principles employed in classical and quantum Physics, such Hilbert spaces dominate in a generalized functional integral approach. The metric of space-time is induced by the inner product of such Hilbert spaces.

  16. Spontaneous symmetry breaking in curved space-time

    International Nuclear Information System (INIS)

    Toms, D.J.

    1982-01-01

    An approach dealing with some of the complications which arise when studying spontaneous symmetry breaking beyond the tree-graph level in situations where the effective potential may not be used is discussed. These situations include quantum field theory on general curved backgrounds or in flat space-times with non-trivial topologies. Examples discussed are a twisted scalar field in S¹ × R³ and instabilities in an expanding universe. From these it is seen that the topology and curvature of a space-time may affect the stability of the vacuum state. There can be critical length scales or times beyond which symmetries may be broken or restored in certain cases. These features are not present in Minkowski space-time and so would not show up in the usual types of early universe calculations. (U.K.)

  17. Pre-Big Bang, space-time structure, asymptotic Universe

    Directory of Open Access Journals (Sweden)

    Gonzalez-Mestres Luis

    2014-04-01

    Full Text Available Planck and other recent data in Cosmology and Particle Physics can open the way to controversial analyses concerning the early Universe and its possible ultimate origin. Alternatives to standard cosmology include pre-Big Bang approaches, new space-time geometries and new ultimate constituents of matter. Basic issues related to a possible new cosmology along these lines clearly deserve further exploration. The Planck collaboration reports an age of the Universe t close to 13.8 Gyr and a present ratio H between relative speeds and distances at cosmic scale around 67.3 km/s/Mpc. The product of these two measured quantities is then slightly below 1 (about 0.95), while it can be exactly 1 in the absence of matter and cosmological constant in patterns based on the spinorial space-time we have considered in previous papers. In this description of space-time, which we first suggested in 1996-97, the cosmic time t is given by the modulus of a SU(2) spinor and the Lundmark-Lemaître-Hubble (LLH) expansion law turns out to be of purely geometric origin, previous to any introduction of standard matter and relativity. Such a fundamental geometry, inspired by the role of half-integer spin in Particle Physics, may reflect an equilibrium between the dynamics of the ultimate constituents of matter and the deep structure of space and time. Taking into account the observed cosmic acceleration, the present situation suggests that the value of 1 can be a natural asymptotic limit for the product H t in the long-term evolution of our Universe, up to possible small corrections. In the presence of a spinorial space-time geometry, no ad hoc combination of dark matter and dark energy would in any case be needed to get an acceptable value of H and an evolution of the Universe compatible with observation. The use of a spinorial space-time naturally leads to unconventional properties for the space curvature term in Friedmann-like equations. It therefore suggests a major modification of

  18. Efficient Neural Network Modeling for Flight and Space Dynamics Simulation

    Directory of Open Access Journals (Sweden)

    Ayman Hamdy Kassem

    2011-01-01

    Full Text Available This paper presents an efficient technique for neural network modeling of flight and space dynamics simulation. The technique frees the neural network designer from guessing the size and structure of the required neural network model and helps to minimize the number of neurons. For linear flight/space dynamics systems, the technique can find the network weights and biases directly by solving a system of linear equations, without the need for training. Nonlinear flight dynamics systems can be easily modeled by training their linearized models while keeping the same network structure. The training is fast, as it uses knowledge of the linear system to speed up the training process. The technique was tested on different flight/space dynamics models and showed promising results.
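    The claim that linear dynamics need no training can be sketched directly: for a purely linear layer, the weights that reproduce x_dot = A x follow from a linear least-squares solve on a few sampled states. The state matrix below (a short-period-pitch-like 2-state system) is an invented placeholder, not taken from the paper:

```python
import numpy as np

# Hypothetical linearized dynamics x_dot = A @ x, with x = [alpha, q]
# (angle of attack, pitch rate); coefficients are illustrative only.
A = np.array([[-1.2,  1.0],
              [-5.0, -2.5]])

# A single linear layer computes x_dot = W @ x exactly, so W is found by
# solving linear equations on sampled (state, derivative) pairs -- no training.
X = np.array([[0.10,  0.00],
              [0.00,  0.10],
              [0.05, -0.02]])          # sampled states
Y = X @ A.T                            # corresponding derivatives
W_t, *_ = np.linalg.lstsq(X, Y, rcond=None)
W = W_t.T                              # recovered "network" weights
```

    Since the sampled system is consistent, least squares recovers W = A exactly; for a nonlinear model, the same structure would serve as the initial network, refined by training on linearized models as the abstract describes.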

  19. Quantum field theory in curved space-time

    International Nuclear Information System (INIS)

    Najmi, A.-H.

    1982-09-01

    The problem of constructing states for quantum field theories in nonstationary background space-times is set out. A formalism in which the problem of constructing states can be attacked more easily than at present is presented. The ansatz of energy-minimization as a means of constructing states is formulated in this formalism and its general solution for the free scalar field is found. It has been known, in specific cases, that such states suffer from the problem of unitary inequivalence (the pathology). An example in Minkowski space-time is presented in which global operators, such as the particle-number operator, do not exist, but all physical observables, such as the renormalized energy density, are finite. This model has two Fock-sectors as its space of physical states. A simple extension of this model, i.e. enlarging the Fock-space of states, is found not to remedy the pathology: in a Robertson-Walker space-time the quantum field acquires an infinite amount of renormalized energy density to the future of the hypersurface on which the energy density is minimized. Finally, the solution of the ansatz of energy minimization for the free, massive Hermitian fermion field is presented. (author)

  20. Geostatistical analysis of space variation in underground water various quality parameters in Kłodzko water intake area (SW part of Poland)

    Directory of Open Access Journals (Sweden)

    Namysłowska-Wilczyńska Barbara

    2016-09-01

    Full Text Available This paper presents selected results of research connected with the development of a (3D) geostatistical hydrogeochemical model of the Kłodzko Drainage Basin, dedicated to the spatial variation in the different quality parameters of underground water in the water intake area (SW part of Poland). The research covers the period 2011-2012. Spatial analyses of the variation in various quality parameters, i.e., the contents of iron, manganese, ammonium, nitrate and phosphate ions, total organic carbon, as well as pH, redox potential and temperature, were carried out on the basis of the chemical determinations of the quality parameters of underground water samples taken from the wells in the water intake area. Spatial variation in the parameters was analyzed on the basis of data obtained (November 2011) from tests of water taken from 14 existing wells with a depth ranging from 9.5 to 38.0 m b.g.l. The latest data (January 2012) were obtained from 3 new piezometers installed at other locations in the relevant area; these piezometers are 9-10 m deep.

  1. Efficient Implementation of a Symbol Timing Estimator for Broadband PLC.

    Science.gov (United States)

    Nombela, Francisco; García, Enrique; Mateos, Raúl; Hernández, Álvaro

    2015-08-21

    Broadband Power Line Communications (PLC) have taken advantage of the research advances in multi-carrier modulations to mitigate frequency selective fading, and their adoption opens up a myriad of applications in the field of sensory and automation systems, multimedia connectivity or smart spaces. Nonetheless, the use of these multi-carrier modulations, such as Wavelet-OFDM, requires a highly accurate symbol timing estimation for reliable recovery of transmitted data. Furthermore, the PLC channel presents some particularities that prevent the direct use of previous synchronization algorithms proposed for wireless communication systems. Therefore, more research effort should be devoted to the design and implementation of novel and robust synchronization algorithms for PLC, thus enabling real-time synchronization. This paper proposes a symbol timing estimator for broadband PLC based on cross-correlation with multilevel complementary sequences or Zadoff-Chu sequences, and its efficient implementation in an FPGA. The obtained results show a 90% success rate in symbol timing estimation for a certain PLC channel model and a reduced resource consumption for its implementation in a Xilinx Kintex FPGA.

  2. Efficient Implementation of a Symbol Timing Estimator for Broadband PLC

    Directory of Open Access Journals (Sweden)

    Francisco Nombela

    2015-08-01

    Full Text Available Broadband Power Line Communications (PLC) have taken advantage of the research advances in multi-carrier modulations to mitigate frequency selective fading, and their adoption opens up a myriad of applications in the field of sensory and automation systems, multimedia connectivity or smart spaces. Nonetheless, the use of these multi-carrier modulations, such as Wavelet-OFDM, requires a highly accurate symbol timing estimation for reliable recovery of transmitted data. Furthermore, the PLC channel presents some particularities that prevent the direct use of previous synchronization algorithms proposed for wireless communication systems. Therefore, more research effort should be devoted to the design and implementation of novel and robust synchronization algorithms for PLC, thus enabling real-time synchronization. This paper proposes a symbol timing estimator for broadband PLC based on cross-correlation with multilevel complementary sequences or Zadoff-Chu sequences, and its efficient implementation in an FPGA. The obtained results show a 90% success rate in symbol timing estimation for a certain PLC channel model and a reduced resource consumption for its implementation in a Xilinx Kintex FPGA.
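    The cross-correlation timing idea can be sketched in a few lines: correlate the received frame against a local Zadoff-Chu replica and take the peak as the symbol start. Sequence length, root, frame length, offset and noise level below are arbitrary illustration values, and the channel is reduced to additive noise rather than a real PLC channel model:

```python
import numpy as np

def zadoff_chu(u, N):
    """Zadoff-Chu sequence of odd length N and root u (gcd(u, N) = 1)."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

N = 63
preamble = zadoff_chu(u=5, N=N)

# Received frame: complex noise, with the preamble embedded at a known offset.
rng = np.random.default_rng(1)
offset = 137
rx = 0.1 * (rng.normal(size=400) + 1j * rng.normal(size=400))
rx[offset:offset + N] += preamble

# Symbol timing estimate: peak of the cross-correlation with the local replica
# (np.correlate conjugates its second argument for complex inputs).
corr = np.abs(np.correlate(rx, preamble, mode="valid"))
estimate = int(np.argmax(corr))
```

    Zadoff-Chu sequences are attractive here because their periodic autocorrelation is ideally peaked, so the correlator output has low sidelobes; in the FPGA the sliding correlation is what dominates resource consumption.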

  3. A composite model of the space-time and 'colors'

    International Nuclear Information System (INIS)

    Terazawa, Hidezumi.

    1987-03-01

    A pregeometric and pregauge model of the space-time and 'colors' in which the space-time metric and 'color' gauge fields are both composite is presented. By the non-triviality of the model, the number of space-time dimensions is restricted to be not larger than the number of 'colors'. The long conjectured space-color correspondence is realized in the model action of the Nambu-Goto type which is invariant under both general-coordinate and local-gauge transformations. (author)

  4. Space-Time Code Designs for Broadband Wireless Communications

    National Research Council Canada - National Science Library

    Xia, Xiang-Gen

    2005-01-01

    The goal of this research is to design new space-time codes, such as complex orthogonal space-time block codes with rates above 1/2 from complex orthogonal designs, for QAM, PSK, and CPM signals...

  5. Exploring space-time structure of human mobility in urban space

    Science.gov (United States)

    Sun, J. B.; Yuan, J.; Wang, Y.; Si, H. B.; Shan, X. M.

    2011-03-01

    Understanding human mobility in urban space benefits the planning and provision of municipal facilities and services. Due to the high penetration of cell phones, mobile cellular networks provide information on urban dynamics with a large spatial extent and continuous temporal coverage, in comparison with traditional approaches. The original data investigated in this paper were collected by cellular networks in a southern city of China, recording the population distribution by dividing the city into thousands of pixels. The space-time structure of urban dynamics is explored by applying Principal Component Analysis (PCA) to the original data, from temporal and spatial perspectives, between which there is a dual relation. Based on the results of the analysis, we have discovered four underlying rules of urban dynamics: low intrinsic dimensionality, three categories of common patterns, dominance of periodic trends, and temporal stability. This implies that the space-time structure can be captured well by remarkably few predictable periodic patterns, temporal or spatial, and that the structure unearthed by PCA evolves stably over time. All these features play a critical role in the applications of forecasting and anomaly detection.
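    The temporal-spatial PCA duality can be sketched on synthetic data: rows are time slots, columns are pixels, and the SVD of the centered matrix gives temporal modes and spatial modes at once. The pixel count, time resolution and single daily pattern below are invented, purely to show how few components such data can need:

```python
import numpy as np

rng = np.random.default_rng(2)
T, P = 96, 50                    # 96 15-min slots in a day, 50 "pixels" (toy sizes)

# Synthetic city: every pixel mixes one daily wave (with a pixel-specific
# amplitude) with a constant base load plus small noise.
t = np.arange(T)
daily = np.sin(2 * np.pi * t / 96)
loads = rng.uniform(0.5, 2.0, size=P)
data = np.outer(daily, loads) + 5.0 + 0.05 * rng.normal(size=(T, P))

# PCA via SVD of the centered matrix: columns of U are temporal patterns,
# rows of Vt are the dual spatial patterns, s**2 gives explained variance.
centered = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
```

    With a single underlying periodic trend, the first component captures almost all variance, which is the "low intrinsic dimensionality" observation in miniature; real cellular data would need a handful of components rather than one.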

  6. Improving primary health care facility performance in Ghana: efficiency analysis and fiscal space implications.

    Science.gov (United States)

    Novignon, Jacob; Nonvignon, Justice

    2017-06-12

    Health centers in Ghana play an important role in health care delivery, especially in deprived communities. They usually serve as the first line of service and meet basic health care needs. Unfortunately, these facilities are faced with inadequate resources. While health policy makers seek to increase resources committed to primary healthcare, it is important to understand the nature of inefficiencies that exist in these facilities. Therefore, the objectives of this study are threefold: (i) estimate efficiency among primary health facilities (health centers), (ii) examine the potential fiscal space from improved efficiency and (iii) investigate the efficiency disparities between public and private facilities. Data were from the 2015 Access Bottlenecks, Cost and Equity (ABCE) project conducted by the Institute for Health Metrics and Evaluation. Stochastic Frontier Analysis (SFA) was used to estimate the efficiency of health facilities, and the efficiency scores were then used to compute potential savings from improved efficiency. The number of outpatient visits was used as the output, while the number of personnel, hospital beds, expenditure on other capital items and administration were used as inputs. Disparities in efficiency between public and private facilities were estimated using the Nopo matching decomposition procedure. The average efficiency score across all health centers included in the sample was estimated to be 0.51. Average efficiency was estimated to be about 0.65 and 0.50 for private and public facilities, respectively. Significant disparities in efficiency were identified across the various administrative regions. With regard to potential fiscal space, we found that, on average, facilities could save about GH₵11,450.70 (US$7633.80) if efficiency were improved. We also found that fiscal space from efficiency gains varies across rural/urban as well as private/public facilities, if best practices are followed. The matching decomposition showed an efficiency gap of 0.29 between private
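    The fiscal-space arithmetic behind such figures is simple: an efficiency score s means roughly a share (1 - s) of spending could be freed at current output. Only the 0.51 average score comes from the abstract; the spending figure below is a hypothetical example, not the study's data:

```python
# Average SFA efficiency score reported for the sampled health centres.
efficiency = 0.51

# Hypothetical annual recurrent spending of one facility, in GH₵ (invented).
spending = 23_000.0

# Rough upper bound on fiscal space: the share of spending that could be
# freed if the same outpatient output were produced at full efficiency.
potential_savings = spending * (1.0 - efficiency)
```

    In practice such savings estimates are aggregated facility by facility from each one's own score and spending, which is why the study can report averages and rural/urban or public/private splits.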

  7. Preconditioned iterative methods for space-time fractional advection-diffusion equations

    Science.gov (United States)

    Zhao, Zhi; Jin, Xiao-Qing; Lin, Matthew M.

    2016-08-01

    In this paper, we propose practical numerical methods for solving a class of initial-boundary value problems of space-time fractional advection-diffusion equations. First, we propose an implicit method based on two-sided Grünwald formulae and discuss its stability and consistency. Then, we develop the preconditioned generalized minimal residual (preconditioned GMRES) method and preconditioned conjugate gradient normal residual (preconditioned CGNR) method with easily constructed preconditioners. Importantly, because the resulting systems are Toeplitz-like, the fast Fourier transform can be applied to significantly reduce the computational cost. We perform numerical experiments to demonstrate the efficiency of our preconditioners, even in cases with variable coefficients.
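    The Toeplitz structure is what makes the FFT useful: a Toeplitz matrix-vector product can be done in O(n log n) by circulant embedding. The sketch below uses illustrative one-sided Grünwald-type weights and an arbitrary coefficient, not the paper's exact two-sided scheme:

```python
import numpy as np

n = 256
alpha = 1.5                      # fractional order, 1 < alpha < 2
c = 10.0                         # lumped step-size/coefficient factor (illustrative)

# Grünwald–Letnikov weights g_k = (-1)^k * binom(alpha, k), by recurrence.
g = np.empty(n + 1)
g[0] = 1.0
for k in range(1, n + 1):
    g[k] = g[k - 1] * (k - 1 - alpha) / k

# First column and row of the (lower-Hessenberg) Toeplitz system matrix.
col = np.empty(n)
col[0] = 1.0 - c * g[1]
col[1:] = -c * g[2:n + 1]
row = np.zeros(n)
row[0] = col[0]
row[1] = -c * g[0]

def toeplitz_matvec(col, row, x):
    """Toeplitz-vector product in O(n log n) via circulant embedding + FFT."""
    m = len(x)
    circ = np.concatenate((col, [0.0], row[:0:-1]))   # first column of 2m circulant
    y = np.fft.ifft(np.fft.fft(circ) * np.fft.fft(x, 2 * m))
    return y[:m].real

x = np.linspace(0.0, 1.0, n)
fast = toeplitz_matvec(col, row, x)
```

    Inside GMRES or CGNR this matvec replaces the O(n²) dense product, and a circulant preconditioner is diagonalized by the same FFTs, which is where the overall cost reduction comes from.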

  8. Distributed space-time coding

    CERN Document Server

    Jing, Yindi

    2014-01-01

    Distributed Space-Time Coding (DSTC) is a cooperative relaying scheme that enables high reliability in wireless networks. This brief presents the basic concept of DSTC, its achievable performance, generalizations, code design, and differential use. Recent results on training design and channel estimation for DSTC and the performance of training-based DSTC are also discussed.

  9. Warped product space-times

    Science.gov (United States)

    An, Xinliang; Wong, Willie Wai Yeung

    2018-01-01

    Many classical results in relativity theory concerning spherically symmetric space-times have easy generalizations to warped product space-times, with a two-dimensional Lorentzian base and arbitrary dimensional Riemannian fibers. We first give a systematic presentation of the main geometric constructions, with emphasis on the Kodama vector field and the Hawking energy; the construction is signature independent. This leads to proofs of general Birkhoff-type theorems for warped product manifolds; our theorems in particular apply to situations where the warped product manifold is not necessarily Einstein, and thus can be applied to solutions with matter content in general relativity. Next we specialize to the Lorentzian case and study the propagation of null expansions under the assumption of the dominant energy condition. We prove several non-existence results relating to the Yamabe class of the fibers, in the spirit of the black-hole topology theorem of Hawking–Galloway–Schoen. Finally we discuss the effect of the warped product ansatz on matter models. In particular we construct several cosmological solutions to the Einstein–Euler equations whose spatial geometry is generally not isotropic.

  10. State-space prediction model for chaotic time series

    Science.gov (United States)

    Alparslan, A. K.; Sayar, M.; Atilgan, A. R.

    1998-08-01

    A simple method for predicting the continuation of a scalar chaotic time series ahead in time is proposed. The false-nearest-neighbors technique, in connection with time-delayed embedding, is employed to reconstruct the state space. A local forecasting model based upon the time evolution of the topological neighborhood in the reconstructed phase space is suggested. A moving root-mean-square error is utilized to monitor the error along the prediction horizon. The model is tested on the convection amplitude of the Lorenz model. The results indicate that, for approximately 100 cycles of training data, the prediction follows the actual continuation very closely for about six cycles. The proposed model, like other state-space forecasting models, captures the long-term behavior of the system due to the use of spatial neighbors in the state space.
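    The embed-then-use-neighbors recipe can be sketched as follows; a logistic map stands in for the Lorenz convection amplitude, the embedding parameters are fixed by hand rather than chosen by the false-nearest-neighbors test, and a single nearest neighbor replaces the paper's local model:

```python
import numpy as np

# Scalar chaotic series: a logistic map, standing in for the Lorenz
# convection amplitude used in the paper.
x = np.empty(1200)
x[0] = 0.3
for i in range(1199):
    x[i + 1] = 3.9 * x[i] * (1.0 - x[i])

# Time-delay embedding: reconstructed states [x(t-2*tau), x(t-tau), x(t)].
m, tau = 3, 1
N = len(x) - (m - 1) * tau - 1
states = np.column_stack([x[i * tau: i * tau + N] for i in range(m)])
targets = x[(m - 1) * tau + 1: (m - 1) * tau + 1 + N]

# Local forecast: next value of the nearest training neighbor in state space.
train = 1000

def predict_next(state):
    d = np.linalg.norm(states[:train] - state, axis=1)
    return targets[np.argmin(d)]

errors = [predict_next(states[j]) - targets[j] for j in range(train, train + 100)]
rmse = float(np.sqrt(np.mean(np.square(errors))))
```

    Iterating `predict_next` on its own output gives a multi-step forecast, whose error grows with the horizon in the way the moving root-mean-square error in the paper is meant to monitor.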

  11. Space, time and the limits of human understanding

    CERN Document Server

    Ghirardi, Giancarlo

    2017-01-01

    In this compendium of essays, some of the world’s leading thinkers discuss their conceptions of space and time, as viewed through the lens of their own discipline. With an epilogue on the limits of human understanding, this volume hosts contributions from six or more diverse fields. It presumes only rudimentary background knowledge on the part of the reader. Time and again, through the prism of intellect, humans have tried to diffract reality into various distinct, yet seamless, atomic, yet holistic, independent, yet interrelated disciplines and have attempted to study it contextually. Philosophers debate the paradoxes, or engage in meditations, dialogues and reflections on the content and nature of space and time. Physicists, too, have been trying to mold space and time to fit their notions concerning micro- and macro-worlds. Mathematicians focus on the abstract aspects of space, time and measurement. While cognitive scientists ponder over the perceptual and experiential facets of our consciousness of spac...

  12. Application of geostatistical simulation to compile seismotectonic provinces based on earthquake databases (case study: Iran)

    Science.gov (United States)

    Jalali, Mohammad; Ramazi, Hamidreza

    2018-04-01

    This article is devoted to the application of a simulation algorithm based on geostatistical methods to compile and update seismotectonic provinces, with Iran chosen as a case study. Traditionally, tectonic maps together with seismological data and information (e.g., earthquake catalogues, earthquake mechanisms, and microseismic data) have been used to update seismotectonic provinces. In many cases, incomplete earthquake catalogues are one of the important challenges in this procedure. To overcome this problem, a geostatistical simulation algorithm, turning bands simulation (TBSIM), was applied to generate synthetic data that complement incomplete earthquake catalogues. Then, the synthetic data were added to the traditional information to study seismicity homogeneity and classify the areas according to tectonic and seismic properties to update seismotectonic provinces. In this paper, (i) different magnitude types in the studied catalogues were homogenized to moment magnitude (Mw), and earthquake declustering was then carried out to remove aftershocks and foreshocks; (ii) a time normalization method was introduced to decrease the uncertainty in the temporal domain prior to starting the simulation procedure; (iii) variography was carried out in each subregion to study spatial regressions (e.g., the west-southwestern area showed a spatial regression from 0.4 to 1.4 decimal degrees; the maximum range was identified at an azimuth of 135° ± 10°); (iv) the TBSIM algorithm was then applied, generating 68,800 synthetic events according to the spatial regressions found in several directions; (v) the simulated events (i.e., magnitudes) were classified based on their intensity in ArcGIS packages and homogeneous seismic zones were determined. Finally, according to the synthetic data, tectonic features, and actual earthquake catalogues, 17 seismotectonic provinces were introduced in four major classes: very high, high, moderate, and low
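    The turning-bands idea itself, independent of TBSIM's specifics, can be sketched in its spectral form: sum random cosines along random line directions to build a 2D correlated Gaussian field. The grid, Gaussian spectrum and line count below are arbitrary illustration choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(4)

def turning_bands(pts, n_lines=200):
    """Spectral turning-bands sketch: each 'band' is a random cosine along a
    random line direction; their normalized sum approximates a standard
    Gaussian random field on the 2D points."""
    z = np.zeros(len(pts))
    for _ in range(n_lines):
        theta = rng.uniform(0.0, np.pi)            # random line direction
        u = np.array([np.cos(theta), np.sin(theta)])
        omega = rng.normal()                       # frequency from a Gaussian spectrum
        phi = rng.uniform(0.0, 2.0 * np.pi)        # random phase
        z += np.cos(omega * (pts @ u) + phi)
    return np.sqrt(2.0 / n_lines) * z

# Candidate locations on a toy grid (stand-ins for epicentre coordinates).
gx, gy = np.meshgrid(np.linspace(0, 10, 30), np.linspace(0, 10, 30))
pts = np.column_stack((gx.ravel(), gy.ravel()))
field = turning_bands(pts)
```

    Conditioning such a field on the variogram ranges and observed events is what turns a generic simulation like this into catalogue-completing synthetic data.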

  13. Conserved quantities for stationary Einstein-Maxwell space-times

    International Nuclear Information System (INIS)

    Esposito, F.P.; Witten, L.

    1978-01-01

    It is shown that every stationary Einstein-Maxwell space-time has eight divergence-free vector fields and these are isolated in general form. The vector fields and associated conserved quantities are calculated for several families of space-times. (Auth.)

  14. Approaching space-time through velocity in doubly special relativity

    International Nuclear Information System (INIS)

    Aloisio, R.; Galante, A.; Grillo, A.F.; Luzio, E.; Mendez, F.

    2004-01-01

    We discuss the definition of velocity as dE/d|p|, where E and p are the energy and momentum of a particle, in doubly special relativity (DSR). If this definition matches dx/dt appropriate for the space-time sector, then space-time can in principle be built consistently with the existence of an invariant length scale. We show that, within different possible velocity definitions, a space-time compatible with momentum-space DSR principles cannot be derived

  15. Geodesics in Goedel-type space-times

    International Nuclear Information System (INIS)

    Calvao, M.O.; Soares, I.D.; Tiomno, J.

    1988-01-01

    The geodesic curves of the homogeneous Goedel-type space-times, which constitute a two-parameter (l and Ω) class of solutions of several theories of gravitation (general relativity, Einstein-Cartan and higher derivative), are investigated. The qualitative properties of those curves are examined by means of an effective potential, and the analytical integration of the equations of motion is then accomplished. It is shown that some of the qualitative features of free motion in Goedel's universe (l² = 2Ω²) are preserved in all the space-times, namely the projections of the geodesics onto the 2-surface (r,ψ) are simple closed curves, and the geodesics for which the ratio υ of azimuthal angular momentum to total energy is equal to zero always cross the origin r = 0. However, two new cases appear: (i) radially unbounded geodesics with υ assuming any (real) value, which may occur only for the causal space-times (l² ≥ 4Ω²), and (ii) geodesics with υ bounded both below and above, which always occur for the circular family

  16. Hybrid state-space time integration of rotating beams

    DEFF Research Database (Denmark)

    Krenk, Steen; Nielsen, Martin Bjerre

    2012-01-01

    An efficient time integration algorithm for the dynamic equations of flexible beams in a rotating frame of reference is presented. The equations of motion are formulated in a hybrid state-space format in terms of local displacements and local components of the absolute velocity. With inspiration...... of the system rotation enter via global operations with the angular velocity vector. The algorithm is based on an integrated form of the equations of motion with energy and momentum conserving properties, if a kinematically consistent non-linear formulation is used. A consistent monotonic scheme for algorithmic...... energy dissipation in terms of local displacements and velocities, typical of structural vibrations, is developed and implemented in the form of forward weighting of appropriate mean value terms in the algorithm. The algorithm is implemented for a beam theory with consistent quadratic non...

  17. Geostatistical investigation into the temporal evolution of spatial structure in a shallow water table

    Directory of Open Access Journals (Sweden)

    S. W. Lyon

    2006-01-01

    Full Text Available Shallow water tables near streams often lead to saturated, overland-flow-generating areas in catchments in humid climates. While these saturated areas are assumed to be principal biogeochemical hot-spots and important for issues such as non-point pollution sources, the spatial and temporal behavior of shallow water tables, and associated saturated areas, is not completely understood. This study demonstrates how geostatistical methods can be used to characterize the spatial and temporal variation of the shallow water table for the near-stream region. Event-based and seasonal changes in the spatial structure of the shallow water table, which influences the spatial pattern of surface saturation and related runoff generation, can be identified and used in conjunction to characterize the hydrology of an area. This is accomplished through semivariogram analysis and indicator kriging, producing maps that combine soft data (i.e., proxy information for the variable of interest) representing general shallow water table patterns with hard data (i.e., actual measurements) that represent variation in the spatial structure of the shallow water table per rainfall event. The area used was a hillslope in the Catskill Mountains region of New York State. The shallow water table was monitored for a 120 m×180 m near-stream region at 44 sampling locations on 15-min intervals. Outflow of the area was measured at the same time interval. These data were analyzed at a short time interval (15 min) and at a long time interval (months) to characterize the changes in the hydrologic behavior of the hillslope. Indicator semivariograms based on binary-transformed ground water table data (i.e., 1 if exceeding the time-variable median depth to the water table and 0 if not) were created for both short and long time intervals. For the short time interval, the indicator semivariograms showed a high degree of spatial structure in the shallow water table for the spring, with increased range
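    The indicator-semivariogram step can be sketched as follows: threshold the depths at the median, then bin squared half-differences of the indicator by separation distance. The grid, trend and noise levels are synthetic stand-ins for the 44 monitored water-table depths:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic water-table depths on a regular grid: a smooth trend plus noise
# gives the field spatial structure (stand-in for the monitored hillslope).
nx, ny = 20, 20
xs, ys = np.meshgrid(np.arange(nx) * 9.0, np.arange(ny) * 9.0)  # ~9 m spacing
depth = 0.01 * xs + 0.02 * ys + 0.3 * rng.normal(size=xs.shape)

# Indicator transform: 1 where depth exceeds the median, 0 otherwise.
ind = (depth > np.median(depth)).astype(float)

pts = np.column_stack((xs.ravel(), ys.ravel()))
vals = ind.ravel()

def indicator_semivariogram(pts, vals, lags, tol):
    """Empirical semivariogram: mean of 0.5*(v_i - v_j)^2 per distance bin."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    iu = np.triu_indices(len(vals), k=1)          # each pair once
    d, dv = d[iu], 0.5 * (vals[iu[0]] - vals[iu[1]]) ** 2
    return np.array([dv[np.abs(d - h) < tol].mean() for h in lags])

lags = np.arange(10.0, 100.0, 10.0)
gamma = indicator_semivariogram(pts, vals, lags, tol=5.0)
```

    For a binary indicator the semivariance is bounded by 0.5; a low value at short lags rising toward longer lags is the "high degree of spatial structure" the study reads off before fitting a model for indicator kriging.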

  18. Quantum field theory in curved space-time

    Energy Technology Data Exchange (ETDEWEB)

    Davies, P.C.W. [King's Coll., London (UK)]

    1976-09-30

    It is stated that recent theoretical developments indicate that the presence of gravity (curved space-time) can give rise to important new quantum effects, such as cosmological particle production and black-hole evaporation. These processes suggest intriguing new relations between quantum theory, thermodynamics and space-time structure and encourage the hope that a better understanding of a full quantum theory of gravity may emerge from this approach.

  19. A Novel Approach of Understanding and Incorporating Error of Chemical Transport Models into a Geostatistical Framework

    Science.gov (United States)

    Reyes, J.; Vizuete, W.; Serre, M. L.; Xu, Y.

    2015-12-01

    The EPA employs a vast monitoring network to measure ambient PM2.5 concentrations across the United States, with one of its goals being to quantify exposure within the population. However, there are several areas of the country with sparse monitoring, both spatially and temporally. One means to fill in these monitoring gaps is to use PM2.5 modeled estimates from Chemical Transport Models (CTMs), specifically the Community Multi-scale Air Quality (CMAQ) model. CMAQ is able to provide complete spatial coverage but is subject to systematic and random error due to model uncertainty. Due to the deterministic nature of CMAQ, these uncertainties are often not quantified. Much effort is employed to quantify the efficacy of these models through different metrics of model performance. Currently, evaluation is specific only to locations with observed data. Multiyear studies across the United States are challenging because the error and model performance of CMAQ are not uniform over such large space/time domains. Error changes regionally and temporally. Because of the complex mix of species that constitute PM2.5, CMAQ error is also a function of increasing PM2.5 concentration. To address this issue we introduce a model performance evaluation for PM2.5 CMAQ that is regionalized and non-linear, leading to error quantification for each CMAQ grid cell, so that areas and time periods of error are better qualified. The regionalized error correction approach is non-linear and is therefore more flexible at characterizing model performance than approaches that rely on linearity assumptions and assume homoscedasticity of CMAQ prediction errors. Corrected CMAQ data are then incorporated into the modern geostatistical framework of Bayesian Maximum Entropy (BME). Through cross-validation it is shown that incorporating error-corrected CMAQ data leads to more accurate estimates than using observed data alone.
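A regionalized, concentration-dependent error correction of the kind described can be illustrated with a toy additive-bias lookup table, binned by modeled concentration within each region. This is a stand-in sketch only; the authors' actual non-linear method and the BME framework are not reproduced here.

```python
import numpy as np

def fit_bias(model, obs, region, edges):
    # lookup table of mean error (model - obs) per region and per
    # modeled-concentration bin: a toy regionalized, non-linear error model
    bins = np.digitize(model, edges)
    table = {}
    for r in np.unique(region):
        for k in np.unique(bins[region == r]):
            m = (region == r) & (bins == k)
            table[(int(r), int(k))] = float(np.mean(model[m] - obs[m]))
    return table

def correct(model, region, edges, table):
    # subtract the estimated bias; unseen (region, bin) pairs are left unchanged
    bins = np.digitize(model, edges)
    bias = np.array([table.get((int(r), int(k)), 0.0) for r, k in zip(region, bins)])
    return model - bias
```

Because the correction varies by region and by concentration bin, it can capture error that is non-uniform in space and non-linear in PM2.5 level, which a single global linear regression cannot.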

  20. Dose rate estimates and spatial interpolation maps of outdoor gamma dose rate with geostatistical methods; A case study from Artvin, Turkey

    International Nuclear Information System (INIS)

    Yeşilkanat, Cafer Mert; Kobya, Yaşar; Taşkin, Halim; Çevik, Uğur

    2015-01-01

    In this study, the performance of geostatistical estimation methods is compared to enable investigation and imaging of natural background radiation using the minimum number of data. Artvin province, which has quite hilly terrain and a wide variety of soils and is located in the north-east of Turkey, is selected as the study area. Outdoor gamma dose rate (OGDR), which is an important determinant of environmental radioactivity level, is measured at 204 stations. The spatial structure of OGDR is determined by anisotropic, isotropic and residual variograms. Ordinary kriging (OK) and universal kriging (UK) interpolation estimates were calculated with the help of model parameters obtained from these variograms. Whereas OK calculations are based only on the positions of the sampling points, the UK technique also includes general soil groups and altitude values, which directly affect OGDR. When the two methods are evaluated based on their performance, the UK model (r = 0.88, p < 0.001) gives considerably better results than the OK model (r = 0.64, p < 0.001). In addition, the maps created at the end of the study illustrate that local changes are better reflected by the UK method than by the OK method, and its error variance is found to be lower. - Highlights: • The spatial dispersion of gamma dose rates in Artvin, which has some of the roughest terrain in Turkey, was studied. • The performance of different geostatistical methods (OK and UK) for the dispersion of gamma dose rates was compared. • Estimates were calculated for unsampled points using the geostatistical model, and the results were mapped. • The general radiological structure was determined in much less time and at lower cost compared to experimental methods. • When the theoretical methods are evaluated, UK is found to give more descriptive results than OK.
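The ordinary kriging step that both abstracts above rely on can be sketched compactly. This is a generic OK solver under an assumed spherical variogram, for illustration only; the study's own variogram models and UK drift terms are not reproduced.

```python
import numpy as np

def spherical(h, nugget, sill, rng):
    # spherical semivariogram model; gamma(0) = 0 by convention
    h = np.asarray(h, float)
    g = np.where(h < rng,
                 nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
                 sill)
    return np.where(h == 0, 0.0, g)

def ordinary_krige(coords, values, target, vario):
    # solve the OK system: semivariogram matrix plus a Lagrange row
    # enforcing that the weights sum to one
    n = len(values)
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = vario(d)
    A[-1, -1] = 0.0
    b = np.ones(n + 1)
    b[:n] = vario(np.linalg.norm(coords - target, axis=-1))
    w = np.linalg.solve(A, b)
    return w[:n] @ values  # kriged estimate at the target location
```

Kriging is an exact interpolator: estimating at an observed station returns that station's value, which is why cross-validation (leave one station out, predict it, compare) is the usual way to obtain performance statistics like the r values quoted above.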

  1. Space-time structure and the origin of physical law

    International Nuclear Information System (INIS)

    Green, M.A.

    1980-01-01

    In the first part of this thesis the author adopts a traditional world view, with space-time a topologically simple geometrical manifold, matter represented by smooth classical fields, and space a Riemannian submanifold of space-time. It is shown how to characterize the space-time geometry in terms of fields defined on three-dimensional space. Accepting a finite number of the fields induced on space as independent initial data, a procedure is given for constructing dynamical and constraint equations which will propagate these fields forward in time. When the initial data are restricted to include only the hypersurface metric and the extrinsic curvature, the resulting equations combine to form the Einstein gravitational field equations with the cosmological term. The synthesis of gravitational and quantum physics is approached by proposing that the objective world underlying the perceived world is a four-dimensional topological manifold w, with no physically significant field structure and an unconstrained and complex global topology. Conventional space-time is then a topologically simple replacement manifold for w. A preliminary outline of the correspondence is presented, based on a similarity between a natural graphical representation of w and the Feynman graphs of quantum field theory

  2. We live in the quantum 4-dimensional Minkowski space-time

    OpenAIRE

    Hwang, W-Y. Pauchy

    2015-01-01

    We try to define "our world" by stating that "we live in the quantum 4-dimensional Minkowski space-time with the force-fields gauge group $SU_c(3) \times SU_L(2) \times U(1) \times SU_f(3)$ built in from the outset". We begin by explaining what "space" and "time" mean for us, i.e. the 4-dimensional Minkowski space-time, before proceeding to the quantum 4-dimensional Minkowski space-time. In our world there are fields, or point-like particles. Particle physics is described by the so-called ...

  3. On the structure of space-time caustics

    International Nuclear Information System (INIS)

    Rosquist, K.

    1983-01-01

    Caustics formed by timelike and null geodesics in a space-time M are investigated. Care is taken to distinguish the conjugate points in the tangent space (T-conjugate points) from conjugate points in the manifold (M-conjugate points). It is shown that most nonspacelike conjugate points are regular, i.e. with all neighbouring conjugate points having the same degree of degeneracy. The regular timelike T-conjugate locus is shown to be a smooth 3-dimensional submanifold of the tangent space. Analogously, the regular null T-conjugate locus is shown to be a smooth 2-dimensional submanifold of the light cone in the tangent space. The smoothness properties of the null caustic are used to show that if an observer sees focusing in all directions, then there will necessarily be a cusp in the caustic. If, in addition, all the null conjugate points have maximal degree of degeneracy (as in the closed Friedmann-Robertson-Walker universes), then the space-time is closed. (orig.)

  4. A 2D multi-term time and space fractional Bloch-Torrey model based on bilinear rectangular finite elements

    Science.gov (United States)

    Qin, Shanlin; Liu, Fawang; Turner, Ian W.

    2018-03-01

    The consideration of diffusion processes in magnetic resonance imaging (MRI) signal attenuation is classically described by the Bloch-Torrey equation. However, many recent works highlight the distinct deviation in MRI signal decay due to anomalous diffusion, which motivates the fractional order generalization of the Bloch-Torrey equation. In this work, we study the two-dimensional multi-term time and space fractional diffusion equation generalized from the time and space fractional Bloch-Torrey equation. By using the Galerkin finite element method with a structured mesh consisting of rectangular elements to discretize in space and the L1 approximation of the Caputo fractional derivative in time, a fully discrete numerical scheme is derived. A rigorous analysis of stability and error estimation is provided. Numerical experiments in the square and L-shaped domains are performed to give an insight into the efficiency and reliability of our method. Then the scheme is applied to solve the multi-term time and space fractional Bloch-Torrey equation, which shows that the extra time derivative terms impact the relaxation process.
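The L1 approximation of the Caputo time-fractional derivative mentioned above can be written down generically as follows. This is an illustration of the standard scalar L1 formula only, not the paper's 2D finite element scheme.

```python
import numpy as np
from math import gamma

def caputo_l1(f_vals, dt, alpha):
    # L1 approximation of the Caputo derivative of order 0 < alpha < 1 at t_n:
    #   D^a f(t_n) ~ dt^(-a) / Gamma(2-a) * sum_{j=0}^{n-1} b_{n-1-j} (f_{j+1} - f_j)
    # with weights b_k = (k+1)^(1-a) - k^(1-a)
    n = len(f_vals) - 1
    k = np.arange(n, dtype=float)
    b = (k + 1.0) ** (1.0 - alpha) - k ** (1.0 - alpha)
    diffs = np.diff(np.asarray(f_vals, dtype=float))
    return dt ** (-alpha) / gamma(2.0 - alpha) * np.sum(b[::-1] * diffs)
```

Because the scheme interpolates f piecewise-linearly, it is exact for linear f, which gives a convenient sanity check: for f(t) = t the Caputo derivative is t^(1-alpha)/Gamma(2-alpha).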

  5. FLRW cosmology in Weyl-integrable space-time

    Energy Technology Data Exchange (ETDEWEB)

    Gannouji, Radouane [Department of Physics, Faculty of Science, Tokyo University of Science, 1–3, Kagurazaka, Shinjuku-ku, Tokyo 162-8601 (Japan); Nandan, Hemwati [Department of Physics, Gurukula Kangri Vishwavidayalaya, Haridwar 249404 (India); Dadhich, Naresh, E-mail: gannouji@rs.kagu.tus.ac.jp, E-mail: hntheory@yahoo.co.in, E-mail: nkd@iucaa.ernet.in [IUCAA, Post Bag 4, Ganeshkhind, Pune 411 007 (India)

    2011-11-01

    We investigate the Weyl space-time extension of general relativity (GR) for studying the FLRW cosmology through focusing and defocusing of the geodesic congruences. We have derived the equations of evolution for expansion, shear and rotation in the Weyl space-time. In particular, we consider the Starobinsky modification, f(R) = R+βR{sup 2}−2Λ, of gravity in the Einstein-Palatini formalism, which turns out to reduce to the Weyl integrable space-time (WIST) with the Weyl vector being a gradient. The modified Raychaudhuri equation takes the form of a Hill-type equation, which is then analysed to study the formation of caustics. In this model, it is possible to have a Big Bang singularity-free cyclic Universe, but unfortunately the periodicity turns out to be extremely short.

  6. A comparison of two efficient nonlinear heat conduction methodologies using a two-dimensional time-dependent benchmark problem

    International Nuclear Information System (INIS)

    Wilson, G.L.; Rydin, R.A.; Orivuori, S.

    1988-01-01

    Two highly efficient nonlinear time-dependent heat conduction methodologies, the nonlinear time-dependent nodal integral technique (NTDNT) and IVOHEAT, are compared using one- and two-dimensional time-dependent benchmark problems. The NTDNT is completely based on newly developed time-dependent nodal integral methods, whereas IVOHEAT is based on finite elements in space and Crank-Nicolson finite differences in time. IVOHEAT contains the geometric flexibility of the finite element approach, whereas the nodal integral method is at present constrained to Cartesian geometry. For test problems where both methods are equally applicable, the nodal integral method is approximately six times more efficient per dimension than IVOHEAT when a comparable overall accuracy is chosen. This translates to a factor of 200 for a three-dimensional problem having relatively homogeneous regions, and to a smaller advantage as the degree of heterogeneity increases

  7. Optimisation of groundwater level monitoring networks using geostatistical modelling based on the Spartan family variogram and a genetic algorithm method

    Science.gov (United States)

    Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.

    2016-04-01

    Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, development of tools which can be used by regulators for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to the Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently established non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area, leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates, followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm method programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the groundwater level mapping using the reduced well network (error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). The solution to the
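A genetic-algorithm search over well subsets of the kind described can be sketched as below. This is a generic binary-mask GA with truncation selection, not the integer GA the authors programmed in MATLAB, and `error_fn` is a placeholder for the 2-norm mapping error defined in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_select(n_wells, n_keep, error_fn, pop=40, gens=60, pmut=0.02):
    # chromosomes are boolean masks with exactly n_keep wells retained
    def random_mask():
        m = np.zeros(n_wells, bool)
        m[rng.choice(n_wells, n_keep, replace=False)] = True
        return m
    def repair(m):
        # restore exactly n_keep retained wells after crossover/mutation
        on, off = np.flatnonzero(m), np.flatnonzero(~m)
        while on.size > n_keep:
            i = rng.choice(on.size); m[on[i]] = False; on = np.delete(on, i)
        while on.size < n_keep:
            i = rng.choice(off.size); m[off[i]] = True
            on = np.append(on, off[i]); off = np.delete(off, i)
        return m
    popu = [random_mask() for _ in range(pop)]
    for _ in range(gens):
        fit = np.array([error_fn(m) for m in popu])
        popu = [popu[i] for i in np.argsort(fit)[: pop // 2]]  # keep the best half
        children = []
        while len(popu) + len(children) < pop:
            a, b = rng.choice(len(popu), 2, replace=False)
            cut = int(rng.integers(1, n_wells))          # single-point crossover
            child = np.concatenate([popu[a][:cut], popu[b][cut:]])
            child ^= rng.random(n_wells) < pmut          # bit-flip mutation
            children.append(repair(child))
        popu += children
    fit = np.array([error_fn(m) for m in popu])
    return popu[int(np.argmin(fit))]
```

In the real tool, `error_fn` would krige the water table from the retained wells and return `np.linalg.norm(Z_full - Z_reduced)` against the 70-well map.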

  8. Nuclear disassembly time scales using space time correlations

    International Nuclear Information System (INIS)

    Durand, D.; Colin, J.; Lecolley, J.F.; Meslin, C.; Aboufirassi, M.; Bougault, R.; Brou, R.; Galin, J.; and others.

    1996-01-01

    The lifetime, τ, with respect to multifragmentation of highly excited nuclei is deduced from the analysis of strongly damped Pb+Au collisions at 29 MeV/u. The method is based on the study of space-time correlations induced by 'proximity' effects between fragments emitted by the two primary products of the reaction and gives the time between the re-separation of the two primary products and the subsequent multifragment decay of one partner. (author)

  9. Quantum energy-momentum tensor in space-time with time-like killing vector

    International Nuclear Information System (INIS)

    Frolov, V.P.; Zel'nikov, A.I.

    1987-01-01

    An approximate expression for the vacuum and thermal average ⟨T{sub μν}⟩{sub ren} of the stress-energy tensor of conformal massless fields in static Ricci-flat space-times is constructed. The application of this approximation to the space-time of a Schwarzschild black hole and its relation to the Page-Brown-Ottewill approximation are briefly discussed. (orig.)

  10. Application of Bayesian geostatistics for evaluation of mass discharge uncertainty at contaminated sites

    DEFF Research Database (Denmark)

    Troldborg, Mads; Nowak, Wolfgang; Lange, Ida Vedel

    2012-01-01

    , and (3) uncertain source zone and transport parameters. The method generates conditional realizations of the spatial flow and concentration distribution. An analytical macrodispersive transport solution is employed to simulate the mean concentration distribution, and a geostatistical model of the Box-Cox...... transformed concentration data is used to simulate observed deviations from this mean solution. By combining the flow and concentration realizations, a mass discharge probability distribution is obtained. The method has the advantage of avoiding the heavy computational burden of three-dimensional numerical...

  11. Spaces of positive and negative frequency solutions of field equations in curved space-times. I. The Klein-Gordon equation in stationary space-times

    International Nuclear Information System (INIS)

    Moreno, C.

    1977-01-01

    In stationary space-times V{sub n} × R with compact space-section manifold without boundary V{sub n}, the Klein-Gordon equation is solved by the one-parameter group of unitary operators generated by the energy operator i{sup -1}T{sup -1} in the Sobolev spaces H{sup l}(V{sub n}) × H{sup l}(V{sub n}). The canonical symplectic and complex structures of the associated dynamical system are calculated. The existence and the uniqueness of the Lichnerowicz kernel are established. The Hilbert spaces of positive and negative frequency-part solutions defined by means of this kernel are constructed

  12. Developing a Collection of Composable Data Translation Software Units to Improve Efficiency and Reproducibility in Ecohydrologic Modeling Workflows

    Science.gov (United States)

    Olschanowsky, C.; Flores, A. N.; FitzGerald, K.; Masarik, M. T.; Rudisill, W. J.; Aguayo, M.

    2017-12-01

    Dynamic models of the spatiotemporal evolution of water, energy, and nutrient cycling are important tools to assess impacts of climate and other environmental changes on ecohydrologic systems. These models require spatiotemporally varying environmental forcings like precipitation, temperature, humidity, windspeed, and solar radiation. These input data originate from a variety of sources, including global and regional weather and climate models, global and regional reanalysis products, and geostatistically interpolated surface observations. Data translation measures, often subsetting in space and/or time and transforming and converting variable units, represent a seemingly mundane but critical step in the application workflows. Translation steps can introduce errors and misrepresentations of data, slow execution, and interrupt data provenance. We leverage a workflow that subsets a large regional dataset derived from the Weather Research and Forecasting (WRF) model and prepares inputs to the ParFlow integrated hydrologic model to demonstrate the impact of translation tool software quality on scientific workflow results and performance. We propose that such workflows will benefit from a community-approved collection of data transformation components. The components should be self-contained, composable units of code. This design pattern enables automated parallelization and software verification, improving performance and reliability. Ensuring that individual translation components are self-contained and target minute tasks increases reliability. The small code size of each component enables effective unit and regression testing. The components can be automatically composed for efficient execution. An efficient data translation framework should be written to minimize data movement. Composing components within a single streaming process reduces data movement.
Each component will typically have a low arithmetic intensity, meaning that it requires about the same number of
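The self-contained, composable translation units described above can be sketched as streaming generator components. This is a hypothetical illustration; the record fields `temp_K` and `lat` and the stage names are invented for the example.

```python
from functools import partial

def kelvin_to_celsius(records):
    # unit-conversion component: one small, self-contained task
    for r in records:
        yield {**r, "temp_C": r["temp_K"] - 273.15}

def subset_bbox(records, lat_min, lat_max):
    # spatial-subsetting component
    for r in records:
        if lat_min <= r["lat"] <= lat_max:
            yield r

def compose(source, *stages):
    # chain generator components into a single streaming process, so records
    # flow through without intermediate files (minimal data movement)
    stream = source
    for stage in stages:
        stream = stage(stream)
    return stream
```

Because each stage is a plain generator, stages can be unit-tested in isolation and composed lazily, e.g. `compose(reader, kelvin_to_celsius, partial(subset_bbox, lat_min=42.0, lat_max=44.0))`.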

  13. Nuclear disassembly time scales using space time correlations

    Energy Technology Data Exchange (ETDEWEB)

    Durand, D.; Colin, J.; Lecolley, J.F.; Meslin, C.; Aboufirassi, M.; Bougault, R.; Brou, R. [Caen Univ., 14 (France). Lab. de Physique Corpusculaire; Bilwes, B.; Cosmo, F. [Strasbourg-1 Univ., 67 (France); Galin, J. [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France); and others

    1996-09-01

    The lifetime, τ, with respect to multifragmentation of highly excited nuclei is deduced from the analysis of strongly damped Pb+Au collisions at 29 MeV/u. The method is based on the study of space-time correlations induced by 'proximity' effects between fragments emitted by the two primary products of the reaction and gives the time between the re-separation of the two primary products and the subsequent multifragment decay of one partner. (author). 2 refs.

  14. Space-Time Fractional Diffusion-Advection Equation with Caputo Derivative

    Directory of Open Access Journals (Sweden)

    José Francisco Gómez Aguilar

    2014-01-01

    An alternative construction of the space-time fractional diffusion-advection equation for sedimentation phenomena is presented. The order of the derivative is considered as 0<β, γ≤1 for the space and time domain, respectively. The fractional derivative of Caputo type is considered. In the spatial case we obtain the fractional solution for the underdamped, undamped, and overdamped cases. In the temporal case we show that the concentration has an amplitude that exhibits an algebraic decay at asymptotically large times, and we also show numerical simulations where both derivatives are taken simultaneously. In order that the equation preserve the physical units of the system, two auxiliary parameters σx and σt are introduced, characterizing the existence of fractional space and time components, respectively. A physical relation between these parameters is reported, and the solutions in space-time are given in terms of the Mittag-Leffler function depending on the parameters β and γ. The generalization of the fractional diffusion-advection equation in space-time exhibits anomalous behavior.
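The Mittag-Leffler function that appears in these solutions can be evaluated from its power series. A minimal sketch, adequate only for moderate arguments; production use would need asymptotic or integral-representation methods for large |z|.

```python
from math import gamma

def mittag_leffler(z, beta, tol=1e-12, max_terms=170):
    # power series E_beta(z) = sum_{k>=0} z^k / Gamma(beta*k + 1),
    # truncated once terms fall below tol
    s, k = 0.0, 0
    while k < max_terms:
        term = z ** k / gamma(beta * k + 1.0)
        s += term
        if k > 0 and abs(term) < tol:
            break
        k += 1
    return s
```

Two special cases make good checks: E_1(z) = exp(z), and E_2(z) = cosh(sqrt(z)); the fractional solutions interpolate between such limiting behaviors as β and γ vary.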

  15. Vector mass in curved space-times

    International Nuclear Information System (INIS)

    Maia, M.D.

    The use of Poincaré symmetry appears to be incompatible with the presence of the gravitational field. The consequent problem of the definition of the mass operator is analysed, and an alternative definition based on constant-curvature tangent spaces is proposed. In the case where the space-time has no Killing vector fields, four independent mass operators can be defined at each point. (Author) [pt

  16. A combined geostatistical-optimization model for the optimal design of a groundwater quality monitoring network

    Science.gov (United States)

    Kolosionis, Konstantinos; Papadopoulou, Maria P.

    2017-04-01

    Monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation due to extensive agricultural activities. In this work, a simulation-optimization framework is developed based on heuristic optimization methodologies and geostatistical modeling approaches to obtain an optimal design for a groundwater quality monitoring network. Groundwater quantity and quality data obtained from 43 existing observation locations at 3 different hydrological periods in the Mires basin in Crete, Greece will be used in the proposed framework, in terms of Regression Kriging, to develop the spatial distribution of nitrate concentration in the aquifer of interest. Based on the existing groundwater quality mapping, the proposed optimization tool will determine a cost-effective observation well network that contributes significant information to water managers and authorities. The elimination of observation wells that add little or no beneficial information to groundwater level and quality mapping of the area can be obtained using estimation uncertainty and statistical error metrics without affecting the assessment of the groundwater quality. Given the high maintenance cost of groundwater monitoring networks, the proposed tool could be used by water regulators in the decision-making process to obtain an efficient network design.

  17. Efficiency of swimming of micro-organism and singularity in shape space

    OpenAIRE

    Kawamura, Masako

    1996-01-01

    Micro-organisms can be classified into three different types according to their size. We study the swimming efficiency of micro-organisms in a two-dimensional fluid as a device to help explain this hierarchy in size. We show that the efficiency of a flagellate becomes unboundedly large, whereas that of a ciliate has an upper bound. The unboundedness is related to a curious feature of the shape space, namely a singularity at the basic shape of the flagellate.

  18. The joint space-time statistics of macroweather precipitation, space-time statistical factorization and macroweather models

    International Nuclear Information System (INIS)

    Lovejoy, S.; Lima, M. I. P. de

    2015-01-01

    Over the range of time scales from about 10 days to 30–100 years, in addition to the familiar weather and climate regimes, there is an intermediate "macroweather" regime characterized by negative temporal fluctuation exponents, implying that fluctuations tend to cancel each other out so that averages tend to converge. We show theoretically and numerically that macroweather precipitation can be modeled by a stochastic weather-climate model (the Climate Extended Fractionally Integrated Flux model, CEFIF) first proposed for macroweather temperatures, and we show numerically that a four-parameter space-time CEFIF model can approximately reproduce the eight or so empirical space-time exponents. In spite of this success, CEFIF is theoretically and numerically difficult to manage. We therefore propose a simplified stochastic model in which the temporal behaviour is modeled as a fractional Gaussian noise but the spatial behaviour as a multifractal (climate) cascade: a spatial extension of the recently introduced ScaLIng Macroweather Model, SLIMM. Both the CEFIF and this spatial SLIMM model have a property often implicitly assumed by climatologists: that climate statistics can be "homogenized" by normalizing them with the standard deviation of the anomalies. Physically, it means that the spatial macroweather variability corresponds to different climate zones that multiplicatively modulate the local, temporal statistics. This simplified macroweather model provides a framework for macroweather forecasting that exploits the system's long-range memory and spatial correlations; for it, the forecasting problem has been solved. We test this factorization property and the model with the help of three centennial, global-scale precipitation products that we analyze jointly in space and in time.

  19. "Free-space" boundary conditions for the time-dependent wave equation

    International Nuclear Information System (INIS)

    Lindman, E.L.

    1975-01-01

    Boundary conditions for the discrete wave equation which act like an infinite region of free space in contact with the computational region can be constructed using projection operators. Propagating and evanescent waves coming from within the computational region generate no reflected waves as they cross the boundary. At the same time arbitrary waves may be launched into the computational region. Well known projection operators for one-dimensional waves may be used for this purpose in one dimension. Extensions of these operators to higher dimensions along with numerically efficient approximations to them are described for higher-dimensional problems. The separation of waves into ingoing and outgoing waves inherent in these boundary conditions greatly facilitates diagnostics

  20. Is the shell-focusing singularity of Szekeres space-time visible?

    International Nuclear Information System (INIS)

    Nolan, Brien C; Debnath, Ujjal

    2007-01-01

    The visibility of the shell-focusing singularity in Szekeres space-time, which represents quasispherical dust collapse, has been studied on numerous occasions in the context of the cosmic censorship conjecture. The various results derived have assumed that there exist radial null geodesics in the space-time. We show that such geodesics do not exist in general, and so previous results on the visibility of the singularity are not generally valid. More precisely, we show that the existence of a radial geodesic in Szekeres space-time implies that the space-time is axially symmetric, with the geodesic along the polar direction (i.e. along the axis of symmetry). If there is a second nonparallel radial geodesic, then the space-time is spherically symmetric, and so is a Lemaître-Tolman-Bondi space-time. For the case of the polar geodesic in an axially symmetric Szekeres space-time, we give conditions on the free functions (i.e. initial data) of the space-time which lead to visibility of the singularity along this direction. Likewise, we give a sufficient condition for censorship of the singularity. We point out the complications involved in addressing the question of visibility of the singularity both for nonradial null geodesics in the axially symmetric case and in the general (non-axially symmetric) case, and suggest a possible approach

  1. Space, time and causality

    International Nuclear Information System (INIS)

    Lucas, J.R.

    1984-01-01

    Originating from lectures given to first year undergraduates reading physics and philosophy or mathematics and philosophy, formal logic is applied to issues and the elucidation of problems in space, time and causality. No special knowledge of relativity theory or quantum mechanics is needed. The text is interspersed with exercises and each chapter is preceded by a suggested 'preliminary reading' and followed by 'further reading' references. (U.K.)

  2. Quantum theory of spinor field in four-dimensional Riemannian space-time

    International Nuclear Information System (INIS)

    Shavokhina, N.S.

    1996-01-01

    The review deals with the spinor field in four-dimensional Riemannian space-time. The field obeys the Dirac-Fock-Ivanenko equation. Principles of quantization of the spinor field in Riemannian space-time are formulated, which in the particular case of flat space-time are equivalent to the canonical rules of quantization. The formulated principles are exemplified by the De Sitter space-time. The study of quantum field theory in the De Sitter space-time is interesting because it itself leads to a method of an invariant well for flat space-time. However, the study of the quantum spinor field theory in an arbitrary Riemannian space-time allows one to take into account the influence of the external gravitational field on the quantized spinor field. 60 refs

  3. Electromagnetic-field equations in the six-dimensional space-time R6

    International Nuclear Information System (INIS)

    Teli, M.T.; Palaskar, D.

    1984-01-01

    Maxwell's equations (without monopoles) for electromagnetic fields are obtained in six-dimensional space-time. The equations possess structural symmetry in space and time, field and source densities. Space-time-symmetric conservation laws and field solutions are obtained. The results are successfully correlated with their four-dimensional space-time counterparts

  4. Point-like Particles in Fuzzy Space-time

    OpenAIRE

    Francis, Charles

    1999-01-01

    This paper is withdrawn as I am no longer using the term "fuzzy space-time" to describe the uncertainty in co-ordinate systems implicit in quantum logic. Nor am I using the interpretation that quantum logic can be regarded as a special case of fuzzy logic. This is because there are sufficient differences between quantum logic and fuzzy logic that the explanation is confusing. I give an interpretation of quantum logic in "A Theory of Quantum Space-time"

  5. Time-dependent Networks as Models to Achieve Fast Exact Time-table Queries

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Jacob, Rico

    2001-01-01

    We consider efficient algorithms for exact time-table queries, i.e. algorithms that find optimal itineraries. We propose to use time-dependent networks as a model and show advantages of this approach over space-time networks as models.
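The time-dependent-network model can be sketched as a Dijkstra variant in which each edge carries a timetable and its cost depends on the arrival time at its tail. This is a generic earliest-arrival illustration under a FIFO assumption, not the authors' algorithm; the graph encoding is invented for the example.

```python
import heapq

def earliest_arrival(graph, source, target, depart):
    # time-dependent Dijkstra on a timetable network:
    # graph maps node -> list of (neighbor, connections), where connections is
    # a list of (dep_time, arr_time) pairs sorted by departure time
    best = {source: depart}
    pq = [(depart, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue  # stale queue entry
        for v, timetable in graph.get(u, []):
            for dep, arr in timetable:
                if dep >= t:  # first usable connection after arriving at u
                    if arr < best.get(v, float("inf")):
                        best[v] = arr
                        heapq.heappush(pq, (arr, v))
                    break
    return None  # target unreachable after the given departure time
```

In contrast to a space-time network, which expands every (station, time) event into its own node, this keeps one node per station and resolves the time dependence on the fly during edge relaxation.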

  6. Combined effects of space charge and energetic disorder on photocurrent efficiency loss of field-dependent organic photovoltaic devices

    International Nuclear Information System (INIS)

    Yoon, Sangcheol; Hwang, Inchan; Park, Byoungchoo

    2015-01-01

    The loss of photocurrent efficiency due to space-charge effects in organic solar cells with energetic disorder was investigated, to account for how energetic disorder interacts with space-charge effects, using a drift-diffusion model with field-dependent charge-pair dissociation and suppressed bimolecular recombination. Energetic disorder, which induces the Poole–Frenkel behavior of charge carrier mobility, is known to decrease the mobility of charge carriers and thus reduce photovoltaic performance. We found that, even if the mobilities are the same in the absence of space-charge effects, the degree of energetic disorder can be an additional parameter affecting photocurrent efficiency when space-charge effects occur. Introducing a field-dependence parameter that reflects the energetic disorder, the behavior of efficiency loss can differ depending on which charge carrier is subject to energetic disorder. While energetic disorder applied to the higher-mobility charge carriers decreases photocurrent efficiency further, the efficiency loss can be suppressed when it is applied to the lower-mobility charge carriers. (paper)

  7. Geostatistical modeling of a fluviodeltaic reservoir in the Huyapari Field, Hamaca area, in the Faja Petrolifera del Orinoco, Venezuela

    Energy Technology Data Exchange (ETDEWEB)

    De Ascencao, Erika M.; Munckton, Toni; Digregorio, Ricardo [Petropiar (Venezuela)

    2011-07-01

    The Huyapari field, situated within the Faja Petrolifera del Orinoco (FPO) of Venezuela, presents unique modeling problems. The field is spread over a wide area and is therefore subject to variable oil quality and complex fluvial facies architecture. Ameriven and PDVSA have been working on characterizing the field's reservoirs since 2000, and the aim of this paper is to present these efforts. Among others, a 3-D seismic survey completed in 1998 and a stratigraphic framework built from 149 vertical wells were used for reservoir characterization. Geostatistical techniques such as sequential Gaussian simulation with locally varying mean and cloud transform were also used. Results showed that these geostatistical methods accurately represented the architecture and properties of the reservoir and its fluid distribution. This paper showed that the application of numerous different techniques in the Hamaca area permitted reservoir complexity to be captured.

  8. Determining site-specific background level with geostatistics for remediation of heavy metals in neighborhood soils

    OpenAIRE

    Tammy M. Milillo; Gaurav Sinha; Joseph A. Gardella Jr.

    2017-01-01

    The choice of a relevant, uncontaminated site for the determination of site-specific background concentrations for pollutants is critical for planning remediation of a contaminated site. The guidelines used to arrive at concentration levels vary from state to state, complicating this process. The residential neighborhood of Hickory Woods in Buffalo, NY is an area where heavy metal concentrations and spatial distributions were measured to plan remediation. A novel geostatistics based decision ...

  9. Indoor radon variations in central Iran and its geostatistical map

    Science.gov (United States)

    Hadad, Kamal; Mokhtari, Javad

    2015-02-01

    We present the results of a 2-year indoor radon survey in 10 cities of Yazd province in Central Iran (covering an area of 80,000 km²). We used passive diffusive samplers with LATEX polycarbonate films as Solid State Nuclear Track Detectors (SSNTDs). The study was carried out in central Iran, where there are major mineral and uranium mines. Our results indicate that, despite a few extraordinarily high concentrations, average annual concentrations of indoor radon are within ICRP guidelines. When the geostatistical spatial distribution of radon was mapped onto geographical features of the province, it was observed that the risk of high radon concentration increases near the cities of Saqand, Bafq, Harat and Abarkooh, depending on elevation and proximity to the ores and mines.

  10. Geostatistical modeling of groundwater properties and assessment of their uncertainties

    International Nuclear Information System (INIS)

    Honda, Makoto; Yamamoto, Shinya; Sakurai, Hideyuki; Suzuki, Makoto; Sanada, Hiroyuki; Matsui, Hiroya; Sugita, Yutaka

    2010-01-01

    The distribution of groundwater properties is important for understanding deep underground hydrogeological environments. This paper proposes a geostatistical system for modeling groundwater properties that correlate with ground resistivity data obtained from widespread and exhaustive surveys. That is, a methodology for integrating resistivity data measured by various methods and a methodology for modeling groundwater properties using the integrated resistivity data have been developed. The proposed system has also been validated using data obtained in the Horonobe Underground Research Laboratory project. Additionally, the quantification of uncertainties in the estimated model was attempted by numerical simulations based on the data. As a result, the uncertainties of the proposed model were estimated to be lower than those of traditional models. (author)

  11. The use of sequential indicator simulation to characterize geostatistical uncertainty

    International Nuclear Information System (INIS)

    Hansen, K.M.

    1992-10-01

    Sequential indicator simulation (SIS) is a geostatistical technique designed to aid in the characterization of uncertainty about the structure or behavior of natural systems. This report discusses a simulation experiment designed to study the quality of uncertainty bounds generated using SIS. The results indicate that, while SIS may produce reasonable uncertainty bounds in many situations, factors like the number and location of available sample data, the quality of variogram models produced by the user, and the characteristics of the geologic region to be modeled, can all have substantial effects on the accuracy and precision of estimated confidence limits. It is recommended that users of SIS conduct validation studies for the technique on their particular regions of interest before accepting the output uncertainty bounds
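To make the sequential-simulation idea concrete, here is a minimal, hedged sketch on a 1-D grid: cells are visited in random order, the conditional probability of the indicator is estimated from already-informed neighbors (inverse-distance weighting stands in for the indicator kriging a real SIS would use), and a drawn value joins the conditioning set. The function name and the 0.5 fallback prior are illustrative assumptions, not the report's implementation.

```python
import random

def sis_1d(n_cells, data, radius=5, seed=0):
    """Toy sequential indicator simulation on a 1-D grid.
    data maps cell index -> hard 0/1 conditioning value."""
    rng = random.Random(seed)
    values = dict(data)                      # grows as cells are simulated
    path = [c for c in range(n_cells) if c not in values]
    rng.shuffle(path)                        # random simulation path
    for c in path:
        num = den = 0.0
        for k, v in values.items():          # informed neighbors in range
            d = abs(k - c)
            if d <= radius:
                num += v / d
                den += 1.0 / d
        # crude stand-in for indicator kriging; 0.5 is an assumed global prior
        p = num / den if den > 0 else 0.5
        values[c] = 1 if rng.random() < p else 0
    return [values[c] for c in range(n_cells)]
```

Running this with several seeds yields an ensemble of realizations honoring the hard data; the cell-wise spread of that ensemble is the kind of uncertainty bound whose quality the report examines.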

  12. Efficient solid rocket propulsion for access to space

    Science.gov (United States)

    Maggi, Filippo; Bandera, Alessio; Galfetti, Luciano; De Luca, Luigi T.; Jackson, Thomas L.

    2010-06-01

    Space launch activity is expected to grow in the next few years, following the current trend of space exploitation for business purposes. Granting high specific thrust and volumetric specific impulse, and counting on decades of intense development, solid rocket propulsion is a good candidate for commercial access to space, even with common propellant formulations. Yet drawbacks such as low theoretical specific impulse, performance losses, and safety issues call for more efficient propulsion systems built on the enhancement of consolidated techniques. Focusing on delivered specific impulse, a consistent fraction of the losses can be ascribed to the multiphase medium inside the nozzle, which in turn is related to agglomeration; a reduction of agglomerate size is therefore desirable. The present paper proposes a model based on heterogeneity characterization capable of describing the agglomeration trend for a standard aluminized solid propellant formulation. The material microstructure is characterized through two statistical descriptors (the pair correlation function and near-contact particles) that probe the mean metal pocket size inside the bulk. Given the real formulation and density of a propellant, a packing code generates a representative material sample, which is then statistically analyzed. Agglomerate predictions are successfully compared with experimental data at 5 bar for four different formulations.

  13. Mach's principle and space-time structure

    International Nuclear Information System (INIS)

    Raine, D.J.

    1981-01-01

    Mach's principle, that inertial forces should be generated by the motion of a body relative to the bulk of matter in the universe, is shown to be related to the structure imposed on space-time by dynamical theories. General relativity theory and Mach's principle are both shown to be well supported by observations. Since Mach's principle is not contained in general relativity this leads to a discussion of attempts to derive Machian theories. The most promising of these appears to be a selection rule for solutions of the general relativistic field equations, in which the space-time metric structure is generated by the matter content of the universe only in a well-defined way. (author)

  14. Ghost neutrinos as test fields in curved space-time

    International Nuclear Information System (INIS)

    Audretsch, J.

    1976-01-01

    Without restricting to empty space-times, it is shown that ghost neutrinos (their energy-momentum tensor vanishes) can only be found in algebraically special space-times with a neutrino flux vector parallel to one of the principal null vectors of the conformal tensor. The optical properties are studied. There are no ghost neutrinos in the Kerr-Newman and in spherically symmetric space-times. The example of a non-vacuum gravitational pp-wave accompanied by a ghost neutrino pp-wave is discussed. (Auth.)

  15. Space Efficient Data Structures for N-gram Retrieval

    Directory of Open Access Journals (Sweden)

    Fotios Kounelis

    2017-10-01

    Full Text Available A significant problem in computer science is the management of large data strings, and a great number of works dealing with this problem have been published in the scientific literature. In this article, we use a technique to store biological sequences efficiently, making use of data structures like suffix trees and inverted files and employing techniques like n-grams, in order to improve on previous constructions. In our approach, we drastically reduce the space needed to store the inverted indexes by representing the substrings that appear more frequently in a more compact inverted index. Our technique is based on n-gram indexing, providing the extra advantage of indexing sequences that cannot be separated into words. Moreover, our technique combines classical one-level with two-level n-gram inverted file indexing. Our results suggest that the new proposed algorithm can compress the data more efficiently than previous attempts.
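As a minimal sketch of one-level n-gram inverted indexing (the article's compact two-level representation is not reproduced; the function names are illustrative):

```python
from collections import defaultdict

def build_ngram_index(sequences, n=3):
    """Map each n-gram to the (sequence id, offset) pairs where it occurs."""
    index = defaultdict(list)
    for sid, seq in enumerate(sequences):
        for i in range(len(seq) - n + 1):
            index[seq[i:i + n]].append((sid, i))
    return index

def find_candidates(index, query, n=3):
    """Ids of sequences containing every n-gram of the query: a candidate
    superset that a real search would verify by direct comparison."""
    grams = [query[i:i + n] for i in range(len(query) - n + 1)]
    if not grams:
        return set()
    candidates = {sid for sid, _ in index.get(grams[0], [])}
    for g in grams[1:]:
        candidates &= {sid for sid, _ in index.get(g, [])}
    return candidates
```

Because the keys are fixed-length n-grams rather than words, the same index works for biological sequences, which have no word boundaries.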

  16. Transversal changes, space closure, and efficiency of conventional and self-ligating appliances : A quantitative systematic review.

    Science.gov (United States)

    Yang, Xianrui; Xue, Chaoran; He, Yiruo; Zhao, Mengyuan; Luo, Mengqi; Wang, Peiqi; Bai, Ding

    2018-01-01

    Self-ligating brackets (SLBs) were compared to conventional brackets (CBs) regarding their effectiveness on transversal changes and space closure, as well as the efficiency of alignment and treatment time. All previously published randomized controlled clinical trials (RCTs) dealing with SLBs and CBs were searched via electronic databases, e.g., MEDLINE, Cochrane Central Register of Controlled Trials, EMBASE, World Health Organization International Clinical Trials Registry Platform, Chinese Biomedical Literature Database, and China National Knowledge Infrastructure. In addition, relevant journals were searched manually. Data extraction was performed independently by two reviewers and assessment of the risk of bias was executed using Cochrane Collaboration's tool. Discrepancies were resolved by discussion with a third reviewer. Meta-analyses were conducted using Review Manager (version 5.3). A total of 976 patients in 17 RCTs were included in the study, of which 11 could be pooled quantitatively and 2 showed a low risk of bias. Meta-analyses were found to favor CBs for mandibular intercanine width expansion, while passive SLBs were more effective in posterior expansion. Moreover, CBs had an apparent advantage during short treatment periods. However, SLBs and CBs did not differ in closing spaces. Based on current clinical evidence obtained from RCTs, SLBs do not show clinical superiority compared to CBs in expanding transversal dimensions, space closure, or orthodontic efficiency. Further high-level randomized controlled clinical trials are warranted to confirm these results.

  17. Space-time modeling of soil moisture

    Science.gov (United States)

    Chen, Zijuan; Mohanty, Binayak P.; Rodriguez-Iturbe, Ignacio

    2017-11-01

    A physically derived space-time mathematical representation of the soil moisture field is carried out via the soil moisture balance equation driven by stochastic rainfall forcing. The model incorporates spatial diffusion and, in its original version, is shown to be unable to reproduce the relatively fast decay in the spatial correlation functions observed in empirical data. This decay, resulting from variations in local topography as well as in local soil and vegetation conditions, is well reproduced via a jitter process acting multiplicatively over the space-time soil moisture field. The jitter is a multiplicative noise acting on the soil moisture dynamics with the objective of deflating its correlation structure at small spatial scales, which are not embedded in the probabilistic structure of the rainfall process that drives the dynamics. These scales, on the order of several meters to several hundred meters, are of great importance in ecohydrologic dynamics. Properties of space-time correlation functions and spectral densities of the model with jitter are explored analytically, and the influence of the jitter parameters, reflecting variabilities of soil moisture at different spatial and temporal scales, is investigated. A case study fitting the derived model to a soil moisture dataset is presented in detail.

  18. Comparing the performance of geostatistical models with additional information from covariates for sewage plume characterization.

    Science.gov (United States)

    Del Monego, Maurici; Ribeiro, Paulo Justiniano; Ramos, Patrícia

    2015-04-01

    In this work, kriging with covariates is used to model and map the spatial distribution of salinity measurements gathered by an autonomous underwater vehicle in a sea outfall monitoring campaign, aiming to distinguish the effluent plume from the receiving waters and characterize its spatial variability in the vicinity of the discharge. Four different geostatistical linear models for salinity were assumed, where the distance to the diffuser, the west-east positioning, and the south-north positioning were used as covariates. Sample variograms were fitted by Matérn models using weighted least squares and maximum likelihood estimation methods as a way to detect possible discrepancies. Typically, the maximum likelihood method estimated very low ranges, which limited the kriging process. So, at least for these data sets, weighted least squares proved to be the most appropriate estimation method for variogram fitting. The kriged maps clearly show the spatial variation of salinity, and it is possible to identify the effluent plume in the area studied. The results obtained provide some guidelines for sewage monitoring if a geostatistical analysis of the data is intended. It is important to treat properly the existence of anomalous values and to adopt a sampling strategy that includes transects parallel and perpendicular to the effluent dispersion.
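A hedged sketch of the weighted-least-squares step: for an exponential variogram (the Matérn model with smoothness 1/2) the model is linear in nugget and partial sill once the range is fixed, so a small grid search over the range plus a weighted linear solve fits all three parameters. This is an illustrative stand-in, not the authors' fitting code.

```python
import numpy as np

def fit_variogram_wls(h, gamma_emp, n_pairs, ranges):
    """Fit gamma(h) = nugget + sill * (1 - exp(-h / a)) by weighted least
    squares, weighting each lag bin by its number of point pairs."""
    w = np.sqrt(n_pairs)[:, None]             # sqrt weights for lstsq
    best = None
    for a in ranges:                          # grid search over the range a
        X = np.column_stack([np.ones_like(h), 1.0 - np.exp(-h / a)])
        coef, *_ = np.linalg.lstsq(X * w, gamma_emp * w[:, 0], rcond=None)
        resid = gamma_emp - X @ coef
        sse = float(np.sum(n_pairs * resid ** 2))
        if best is None or sse < best[0]:
            best = (sse, coef[0], coef[1], a)
    _, nugget, sill, rng = best
    return nugget, sill, rng
```

On an empirical variogram generated by a known model, the fit recovers the nugget, partial sill, and range; with real bins, the pair-count weights keep poorly populated lags from dominating the fit.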

  19. An evaluation of space time cube representation of spatiotemporal patterns.

    Science.gov (United States)

    Kristensson, Per Ola; Dahlbäck, Nils; Anundi, Daniel; Björnstad, Marius; Gillberg, Hanna; Haraldsson, Jonas; Mårtensson, Ingrid; Nordvall, Mathias; Ståhl, Josefine

    2009-01-01

    Space time cube representation is an information visualization technique where spatiotemporal data points are mapped into a cube. Information visualization researchers have previously argued that space time cube representation is beneficial in revealing complex spatiotemporal patterns in a data set to users. The argument is based on the fact that both time and spatial information are displayed simultaneously to users, an effect difficult to achieve in other representations. However, to our knowledge the actual usefulness of space time cube representation in conveying complex spatiotemporal patterns to users has not been empirically validated. To fill this gap, we report on a between-subjects experiment comparing novice users' error rates and response times when answering a set of questions using either space time cube or a baseline 2D representation. For some simple questions, the error rates were lower when using the baseline representation. For complex questions where the participants needed an overall understanding of the spatiotemporal structure of the data set, the space time cube representation resulted in on average twice as fast response times with no difference in error rates compared to the baseline. These results provide an empirical foundation for the hypothesis that space time cube representation benefits users analyzing complex spatiotemporal patterns.

  20. Improving the counting efficiency in time-correlated single photon counting experiments by dead-time optimization

    Energy Technology Data Exchange (ETDEWEB)

    Peronio, P.; Acconcia, G.; Rech, I.; Ghioni, M. [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy)

    2015-11-15

    Time-Correlated Single Photon Counting (TCSPC) has been long recognized as the most sensitive method for fluorescence lifetime measurements, but often requiring “long” data acquisition times. This drawback is related to the limited counting capability of the TCSPC technique, due to pile-up and counting loss effects. In recent years, multi-module TCSPC systems have been introduced to overcome this issue. Splitting the light into several detectors connected to independent TCSPC modules proportionally increases the counting capability. Of course, multi-module operation also increases the system cost and can cause space and power supply problems. In this paper, we propose an alternative approach based on a new detector and processing electronics designed to reduce the overall system dead time, thus enabling efficient photon collection at high excitation rate. We present a fast active quenching circuit for single-photon avalanche diodes which features a minimum dead time of 12.4 ns. We also introduce a new Time-to-Amplitude Converter (TAC) able to attain extra-short dead time thanks to the combination of a scalable array of monolithically integrated TACs and a sequential router. The fast TAC (F-TAC) makes it possible to operate the system towards the upper limit of detector count rate capability (∼80 Mcps) with reduced pile-up losses, addressing one of the historic criticisms of TCSPC. Preliminary measurements on the F-TAC are presented and discussed.
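The ~80 Mcps figure can be sanity-checked with the textbook non-paralyzable dead-time model (an assumption here; the authors' own throughput analysis may differ): with dead time τ, a true event rate r is registered at r/(1 + rτ), which saturates at 1/τ.

```python
def measured_rate(true_rate_cps, dead_time_s):
    """Non-paralyzable dead-time model: registered rate saturates at 1/tau."""
    return true_rate_cps / (1.0 + true_rate_cps * dead_time_s)

tau = 12.4e-9            # 12.4 ns dead time, from the abstract
limit = 1.0 / tau        # asymptotic throughput, about 80.6 Mcps
```

1/12.4 ns ≈ 80.6 Mcps, consistent with the stated detector count-rate capability of ∼80 Mcps.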

  1. Efficient computation of spaced seeds

    Directory of Open Access Journals (Sweden)

    Ilie Silvana

    2012-02-01

    Full Text Available Abstract Background The most frequently used tools in bioinformatics are those searching for similarities, or local alignments, between biological sequences. Since the exact dynamic programming algorithm is quadratic, linear-time heuristics such as BLAST are used. Spaced seeds are much more sensitive than the consecutive seed of BLAST, and using several seeds represents the current state of the art in approximate search for biological sequences. The most important aspect is computing highly sensitive seeds. Since the problem seems hard, heuristic algorithms are used. The leading software in the common Bernoulli model is the SpEED program. Findings SpEED uses a hill climbing method based on the overlap complexity heuristic. We propose a new algorithm for this heuristic that improves its speed by over one order of magnitude. We use the new implementation to compute improved seeds for several software programs. We also compute multiple seeds of the same weight as MegaBLAST that greatly improve its sensitivity. Conclusion Multiple spaced seeds are being successfully used in bioinformatics software programs. Enabling researchers to compute high-quality seeds very fast will help expand the range of their applications.
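A toy illustration of why spaced seeds are more sensitive than BLAST's consecutive seed: the '0' (don't-care) positions tolerate mismatches that a contiguous seed would reject. The function name is hypothetical, and this shows only seed matching, not SpEED's overlap-complexity optimization.

```python
def seed_hits(query, target, seed):
    """(q, t) offset pairs where query and target agree on every '1'
    position of the seed -- candidate alignments to extend."""
    ones = [i for i, c in enumerate(seed) if c == '1']
    tkeys = {}
    for p in range(len(target) - len(seed) + 1):
        key = "".join(target[p + i] for i in ones)
        tkeys.setdefault(key, []).append(p)
    hits = []
    for q in range(len(query) - len(seed) + 1):
        key = "".join(query[q + i] for i in ones)
        for t in tkeys.get(key, []):
            hits.append((q, t))
    return hits
```

With the spaced seed "1101", the pair ACGT / ACTT still produces a hit because the single mismatch falls on the don't-care position, whereas the consecutive seed "1111" misses it.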

  2. Space-time wind speed forecasting for improved power system dispatch

    KAUST Repository

    Zhu, Xinxin

    2014-02-27

    To support large-scale integration of wind power into electric energy systems, state-of-the-art wind speed forecasting methods should be able to provide accurate and adequate information to enable efficient, reliable, and cost-effective scheduling of wind power. Here, we incorporate space-time wind forecasts into electric power system scheduling. First, we propose a modified regime-switching, space-time wind speed forecasting model that allows the forecast regimes to vary with the dominant wind direction and with the seasons, hence avoiding a subjective choice of regimes. Then, results from the wind forecasts are incorporated into a power system economic dispatch model, the cost of which is used as a loss measure of the quality of the forecast models. This, in turn, leads to cost-effective scheduling of system-wide wind generation. Potential economic benefits arise from system-wide generation cost savings and from ancillary service cost savings. We illustrate the economic benefits using a test system in the northwest region of the United States. Compared with persistence and autoregressive models, our model suggests that cost savings from integration of wind power could be on the scale of tens of millions of dollars annually in regions with high wind penetration, such as Texas and the Pacific northwest. © 2014 Sociedad de Estadística e Investigación Operativa.
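The paper's comparison against persistence and autoregressive baselines can be illustrated in miniature with a synthetic, seeded AR(1) "wind speed" series (a toy stand-in, not the regime-switching space-time model): a fitted AR(1) one-step forecast beats persistence in mean squared error whenever the autocorrelation is below one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(1) series x_t = phi * x_{t-1} + noise as toy wind data.
phi, n = 0.8, 5000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

train, test = x[:4000], x[4000:]
# Least-squares estimate of the AR(1) coefficient on the training part.
phi_hat = np.dot(train[:-1], train[1:]) / np.dot(train[:-1], train[:-1])

persist_mse = np.mean((test[1:] - test[:-1]) ** 2)        # "no change" forecast
ar_mse = np.mean((test[1:] - phi_hat * test[:-1]) ** 2)   # AR(1) forecast
```

In the paper the analogous comparison is done in economic terms, by feeding each forecast into the dispatch model and comparing the resulting system costs.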

  3. Optimal Time-Space Trade-Offs for Non-Comparison-Based Sorting

    DEFF Research Database (Denmark)

    Pagh, Rasmus; Pagter, Jacob Illeborg

    2002-01-01

    We study the problem of sorting n integers of w bits on a unit-cost RAM with word size w, and in particular consider the time-space trade-off (product of time and space in bits) for this problem. For comparison-based algorithms, the time-space complexity is known to be Θ(n²). A result of Beame shows that the lower bound also holds for non-comparison-based algorithms, but no algorithm has met this for time below the comparison-based Ω(n lg n) lower bound. We show that if sorting within some time bound T̃ is possible, then time T = O(T̃ + n lg* n) can be achieved with high probability using space S = O(n²/T + w), which is optimal. Given a deterministic priority queue using amortized time t(n) per operation and space n^O(1), we provide a deterministic algorithm sorting in time T = O(n(t(n) + lg* n)) with S = O(n²/T + w). Both results require that w ≤ n^(1-Ω(1)). Using existing priority ...

  4. Charged fluid distribution in higher dimensional spheroidal space-time

    Indian Academy of Sciences (India)

    A general solution of Einstein field equations corresponding to a charged fluid distribution on the background of higher dimensional spheroidal space-time is obtained. The solution generates several known solutions for superdense star having spheroidal space-time geometry.

  5. Constant scalar curvature hypersurfaces in extended Schwarzschild space-time

    International Nuclear Information System (INIS)

    Pareja, M. J.; Frauendiener, J.

    2006-01-01

    We present a class of spherically symmetric hypersurfaces in the Kruskal extension of the Schwarzschild space-time. The hypersurfaces have constant negative scalar curvature, so they are hyperboloidal in the regions of space-time which are asymptotically flat

  6. A stochastic space-time model for intermittent precipitation occurrences

    KAUST Repository

    Sun, Ying; Stein, Michael L.

    2016-01-01

    Modeling a precipitation field is challenging due to its intermittent and highly scale-dependent nature. Motivated by the features of high-frequency precipitation data from a network of rain gauges, we propose a threshold space-time t random field (tRF) model for 15-minute precipitation occurrences. This model is constructed through a space-time Gaussian random field (GRF) with random scaling varying along time or space and time. It can be viewed as a generalization of the purely spatial tRF, and has a hierarchical representation that allows for Bayesian interpretation. Developing appropriate tools for evaluating precipitation models is a crucial part of the model-building process, and we focus on evaluating whether models can produce the observed conditional dry and rain probabilities given that some set of neighboring sites all have rain or all have no rain. These conditional probabilities show that the proposed space-time model has noticeable improvements in some characteristics of joint rainfall occurrences for the data we have considered.

  7. A stochastic space-time model for intermittent precipitation occurrences

    KAUST Repository

    Sun, Ying

    2016-01-28

    Modeling a precipitation field is challenging due to its intermittent and highly scale-dependent nature. Motivated by the features of high-frequency precipitation data from a network of rain gauges, we propose a threshold space-time t random field (tRF) model for 15-minute precipitation occurrences. This model is constructed through a space-time Gaussian random field (GRF) with random scaling varying along time or space and time. It can be viewed as a generalization of the purely spatial tRF, and has a hierarchical representation that allows for Bayesian interpretation. Developing appropriate tools for evaluating precipitation models is a crucial part of the model-building process, and we focus on evaluating whether models can produce the observed conditional dry and rain probabilities given that some set of neighboring sites all have rain or all have no rain. These conditional probabilities show that the proposed space-time model has noticeable improvements in some characteristics of joint rainfall occurrences for the data we have considered.

  8. Efficient Computation of Multiscale Entropy over Short Biomedical Time Series Based on Linear State-Space Models

    Directory of Open Access Journals (Sweden)

    Luca Faes

    2017-01-01

    Full Text Available The most common approach to assess the dynamical complexity of a time series across multiple temporal scales makes use of the multiscale entropy (MSE and refined MSE (RMSE measures. In spite of their popularity, MSE and RMSE lack an analytical framework allowing their calculation for known dynamic processes and cannot be reliably computed over short time series. To overcome these limitations, we propose a method to assess RMSE for autoregressive (AR stochastic processes. The method makes use of linear state-space (SS models to provide the multiscale parametric representation of an AR process observed at different time scales and exploits the SS parameters to quantify analytically the complexity of the process. The resulting linear MSE (LMSE measure is first tested in simulations, both theoretically to relate the multiscale complexity of AR processes to their dynamical properties and over short process realizations to assess its computational reliability in comparison with RMSE. Then, it is applied to the time series of heart period, arterial pressure, and respiration measured for healthy subjects monitored in resting conditions and during physiological stress. This application to short-term cardiovascular variability documents that LMSE can describe better than RMSE the activity of physiological mechanisms producing biological oscillations at different temporal scales.
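For contrast with the analytic LMSE, here is the classic coarse-graining MSE computation (standard sample entropy, not the article's state-space method; parameter choices m = 2, r = 0.2·SD are the conventional defaults). For white noise, the complexity estimate should fall as the scale grows, since averaging removes the only structure present.

```python
import numpy as np

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m, r):
    """SampEn(m, r): negative log of the conditional probability that
    templates matching for m points (Chebyshev distance < r) still
    match for m + 1 points."""
    def matches(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        c = 0
        for i in range(len(templ) - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += int(np.sum(d < r))
        return c
    a, b = matches(m + 1), matches(m)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales, m=2, r_frac=0.2):
    r = r_frac * np.std(x)   # tolerance fixed from the original series
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]
```

Keeping the tolerance r fixed from the original series (rather than recomputing it per scale) is what makes white-noise entropy decrease across scales, the canonical MSE signature.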

  9. A Space-Time Periodic Task Model for Recommendation of Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Xiuhong Zhang

    2018-01-01

    Full Text Available With the rapid development of remote sensing technology, the quantity and variety of remote sensing images are growing so quickly that proactive and personalized access to data has become an inevitable trend. One of the active approaches is remote sensing image recommendation, which can offer related image products to users according to their preference. Although multiple studies on remote sensing retrieval and recommendation have been performed, most of these studies model the user profiles only from the perspective of spatial area or image features. In this paper, we propose a spatiotemporal recommendation method for remote sensing data based on the probabilistic latent topic model, which is named the Space-Time Periodic Task model (STPT. User retrieval behaviors of remote sensing images are represented as mixtures of latent tasks, which act as links between users and images. Each task is associated with the joint probability distribution of space, time and image characteristics. Meanwhile, the von Mises distribution is introduced to fit the distribution of tasks over time. Then, we adopt Gibbs sampling to learn the random variables and parameters and present the inference algorithm for our model. Experiments show that the proposed STPT model can improve the capability and efficiency of remote sensing image data services.

  10. The edge of space time

    International Nuclear Information System (INIS)

    Hawking, S.

    1993-01-01

    What happened at the beginning of the expansion of the universe? Did space time have an edge at the Big Bang? The answer is that, if the boundary condition of the universe is that it has no boundary, time ceases to be well defined in the very early universe just as the direction ''north'' ceases to be well defined at the North Pole of the Earth. The quantity that we measure as time has a beginning, but that does not mean spacetime has an edge, just as the surface of the Earth does not have an edge at the North Pole. 8 figs

  11. Holographic analysis of dispersive pupils in space--time optics

    International Nuclear Information System (INIS)

    Calatroni, J.; Vienot, J.C.

    1981-01-01

    Extension of space--time optics to objects whose transparency is a function of the temporal frequency v = c/lambda is examined. Considering the effects of such stationary pupils on white light waves, they are called temporal pupils. It is shown that simultaneous encoding both in the space and time frequency domains is required to record pupil parameters. The space-time impulse response and transfer functions are calculated for a dispersive nonabsorbent material. An experimental method providing holographic recording of the dispersion curve of any transparent material is presented

  12. Holographic analysis of dispersive pupils in space--time optics

    Energy Technology Data Exchange (ETDEWEB)

    Calatroni, J.; Vienot, J.C.

    1981-06-01

    Extension of space--time optics to objects whose transparency is a function of the temporal frequency v = c/lambda is examined. Considering the effects of such stationary pupils on white light waves, they are called temporal pupils. It is shown that simultaneous encoding both in the space and time frequency domains is required to record pupil parameters. The space-time impulse response and transfer functions are calculated for a dispersive nonabsorbent material. An experimental method providing holographic recording of the dispersion curve of any transparent material is presented.

  13. Test Equal Bending by Gravity for Space and Time

    Science.gov (United States)

    Sweetser, Douglas

    2009-05-01

    For the simplest problem of gravity - a static, non-rotating, spherically symmetric source - the solution for spacetime bending around the Sun should be evenly split between time and space. That is true to first order in M/R, and confirmed by experiment. At second order, general relativity predicts different amounts of contribution from time and space without a physical justification. I show an exponential metric is consistent with light bending to first order, measurably different at second order. All terms to all orders show equal contributions from space and time. Beautiful minimalism is Nature's way.

  14. The theory of space, time and gravitation

    CERN Document Server

    Fock, V

    2015-01-01

    The Theory of Space, Time, and Gravitation, 2nd Revised Edition focuses on Relativity Theory and Einstein's Theory of Gravitation, and on the correction of misinterpretations of the Einsteinian Gravitation Theory. The book first offers information on the theory of relativity and the theory of relativity in tensor form. Discussions focus on comparison of distances and lengths in moving reference frames; comparison of time differences in moving reference frames; the position of a body in space at a given instant in a fixed reference frame; and proof of the linearity of the transformation linking two inertial frames

  15. Finite element method for time-space-fractional Schrodinger equation

    Directory of Open Access Journals (Sweden)

    Xiaogang Zhu

    2017-07-01

    Full Text Available In this article, we develop a fully discrete finite element method for the nonlinear Schrödinger equation (NLS) with time- and space-fractional derivatives. The time-fractional derivative is described in Caputo's sense and the space-fractional derivative in Riesz's sense. Stability of the scheme is derived, and the convergence estimate is discussed by means of an orthogonal operator. We also extend the method to the two-dimensional time-space-fractional NLS; to avoid iterative solvers at each time step, a linearized scheme is further constructed. Several numerical examples are implemented, which confirm the theoretical results as well as illustrate the accuracy of our methods.
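
    The Caputo time-fractional derivative at the heart of such schemes is commonly discretised with the L1 formula. The sketch below is not the paper's finite element method: it applies the L1 discretisation to a scalar relaxation problem D_t^α u = -u (assumed unit coefficient), treating the newest value implicitly.

```python
import math

def caputo_l1_relax(alpha, tau, n_steps, u0=1.0):
    """Solve D_t^alpha u = -u with the L1 discretisation of the Caputo
    derivative: past increments are weighted by b_k and the newest
    value is treated implicitly."""
    c = tau ** (-alpha) / math.gamma(2.0 - alpha)
    b = [(k + 1) ** (1 - alpha) - k ** (1 - alpha) for k in range(n_steps)]
    u = [u0]
    for n in range(1, n_steps + 1):
        # memory term: weighted sum over past increments u_{n-k} - u_{n-k-1}
        hist = sum(b[k] * (u[n - k] - u[n - k - 1]) for k in range(1, n))
        # c * (u_n - u_{n-1}) + c * hist = -u_n  =>  solve for u_n
        u.append((c * u[n - 1] - c * hist) / (c + 1.0))
    return u
```

    For α close to 1 the scheme approaches backward Euler for u' = -u, so u(1) lies near exp(-1); the nonlocal memory term is what distinguishes the fractional case.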

  16. Differential Space-Time Block Code Modulation for DS-CDMA Systems

    Directory of Open Access Journals (Sweden)

    Liu Jianhua

    2002-01-01

    Full Text Available A differential space-time block code (DSTBC) modulation scheme is used to improve the performance of DS-CDMA systems in fast time-dispersive fading channels. The resulting scheme is referred to as differential space-time block code modulation for DS-CDMA (DSTBC-CDMA). The new modulation and demodulation schemes are studied in particular for the downlink transmission of DS-CDMA systems. We present three demodulation schemes, referred to as the differential space-time block code Rake (D-Rake) receiver, the differential space-time block code deterministic (D-Det) receiver, and the differential space-time block code deterministic de-prefix (D-Det-DP) receiver, respectively. The D-Det receiver exploits the known information of the spreading sequences and their delayed paths deterministically besides the Rake-type combination; consequently, it can outperform the D-Rake receiver, which employs the Rake-type combination only. The D-Det-DP receiver avoids the effect of intersymbol interference and hence can offer better performance than the D-Det receiver.
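
    The differential encoding idea behind DSTBC can be illustrated with a toy, noiseless two-antenna sketch: each new transmit block is the previous block times an Alamouti-structured unitary matrix, so the receiver can detect without knowing the channel. The BPSK alphabet and flat channel below are assumptions for illustration; the paper's spreading and Rake/deterministic receivers are not reproduced.

```python
import numpy as np

def alamouti_unitary(b1, b2):
    """Alamouti-structured orthogonal matrix for real BPSK symbols +-1."""
    x1, x2 = b1 / np.sqrt(2), b2 / np.sqrt(2)
    return np.array([[x1, x2], [-x2, x1]])

def dstbc_demo(bits, seed=0):
    """Differentially encode bit pairs and detect them non-coherently."""
    rng = np.random.default_rng(seed)
    H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # unknown to receiver
    cand = [(b1, b2) for b1 in (-1, 1) for b2 in (-1, 1)]
    S = np.eye(2)
    Y_prev = H @ S                      # reference (pilot) block
    decoded = []
    for b1, b2 in bits:
        S = S @ alamouti_unitary(b1, b2)  # differential encoding
        Y = H @ S
        # detection: compare Y against Y_prev rotated by each candidate
        est = min(cand, key=lambda c: np.linalg.norm(Y - Y_prev @ alamouti_unitary(*c)))
        decoded.append(est)
        Y_prev = Y
    return decoded
```

    In the noiseless case the candidate that generated the block gives exactly zero residual, so the bit pairs are recovered without any channel estimate.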

  17. Relativistic helicity and link in Minkowski space-time

    International Nuclear Information System (INIS)

    Yoshida, Z.; Kawazura, Y.; Yokoyama, T.

    2014-01-01

    A relativistic helicity has been formulated in the four-dimensional Minkowski space-time. Whereas the relativistic distortion of space-time violates the conservation of the conventional helicity, the newly defined relativistic helicity is conserved in a barotropic fluid or plasma, dictating a fundamental topological constraint. The relation between the helicity and the vortex-line topology has been delineated by analyzing the linking number of vortex filaments, which are singular differential forms representing the pure states of a Banach algebra. Although the dimension of space-time is four, vortex filaments link, because vorticities are primarily 2-forms and the corresponding 2-chains link in four dimensions; the relativistic helicity measures the linking number of vortex filaments that are proper-time cross-sections of the vorticity 2-chains. A thermodynamic force yields an additional term in the vorticity, by which the vortex filaments on a reference-time plane are no longer pure states. However, the vortex filaments on a proper-time plane remain pure states if the thermodynamic force is exact (barotropic); thus, the linking number of vortex filaments is conserved

  18. Mathematical aspects of the discrete space-time hypothesis

    International Nuclear Information System (INIS)

    Sardanashvili, G.A.

    1979-01-01

    The hypothesis that space-time is discrete at microscopic scales is considered from a mathematical point of view. The type of topological space that formalizes the notion of a discrete space-time is determined, and it is explained how such spaces arise in physical models. A physical problem in which a discrete space could arise as one version of its solution is then considered. It is shown that a discrete structure of space can arise for certain types of interaction in a system, for example under considerable self-shielding, which can take place, in particular, in particles or in cosmological and astrophysical singularities

  19. Time-dependent gravitating solitons in five dimensional warped space-times

    CERN Document Server

    Giovannini, Massimo

    2007-01-01

    Time-dependent soliton solutions are explicitly derived in a five-dimensional theory endowed with one (warped) extra dimension. Some of the obtained geometries, everywhere well defined and technically regular, smoothly interpolate between two five-dimensional anti-de Sitter space-times for a fixed value of the conformal time coordinate. Time-dependent solutions containing both topological and non-topological sectors are also obtained. Supplementary degrees of freedom can also be included; in this case, the resulting multi-soliton solutions may describe time-dependent kink-antikink systems.

  20. "Efficiency Space" - A Framework for Evaluating Joint Evaporation and Runoff Behavior

    Science.gov (United States)

    Koster, Randal

    2014-01-01

    At the land surface, higher soil moisture levels generally lead both to increased evaporation for a given amount of incoming radiation (increased evaporation efficiency) and to increased runoff for a given amount of precipitation (increased runoff efficiency). Evaporation efficiency and runoff efficiency can thus be said to vary with each other, motivating the development of a unique hydroclimatic analysis framework. Using a simple water balance model fitted, in different experiments, with a wide variety of functional forms for evaporation and runoff efficiency, we transform net radiation and precipitation fields into fields of streamflow that can be directly evaluated against observations. The optimal combination of the functional forms (the combination that produces the most skillful streamflow simulations) provides an indication of how evaporation and runoff efficiencies vary with each other in nature, a relationship that can be said to define the overall character of land surface hydrological processes, at least to first order. The inferred optimal relationship is represented herein as a curve in efficiency space and should be valuable for the evaluation and development of GCM-based land surface models, which by this measure are often found to be suboptimal.
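
    The joint efficiency idea can be sketched with a minimal bucket model that turns precipitation and net radiation into streamflow. The power-law efficiency forms and parameter values below are assumptions for illustration, not the functional forms tested in the study.

```python
def water_balance(precip, net_rad, w_max=200.0, a=1.0, b=2.0):
    """Minimal bucket model: evaporation efficiency E/Rn = (W/Wmax)**a and
    runoff efficiency Q/P = (W/Wmax)**b (assumed power-law forms).
    Returns the runoff (streamflow) series."""
    w, flows = 0.5 * w_max, []
    for p, rn in zip(precip, net_rad):
        s = w / w_max              # soil wetness, 0..1
        e = rn * s ** a            # evaporation scales with radiation
        q = p * s ** b             # runoff scales with precipitation
        w = min(max(w + p - e - q, 0.0), w_max)
        flows.append(q)
    return flows
```

    As the bucket wets up, both efficiencies rise together, which is exactly the coupled behaviour the efficiency-space framework is designed to diagnose.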

  1. Real Time Space Weather Support for Chandra X-Ray Observatory Operations

    Science.gov (United States)

    O'Dell, Stephen L.; Minow, Joseph I.; Miller, J. Scott; Wolk, Scott J.; Aldcroft, Thomas L.; Spitzbart, Bradley D.; Swartz, Douglas A.

    2012-01-01

    NASA launched the Chandra X-ray Observatory in July 1999. Soon after first light in August 1999, however, degradation in the energy resolution and charge-transfer efficiency of the Advanced CCD Imaging Spectrometer (ACIS) x-ray detectors was observed. The source of the degradation was quickly identified as radiation damage in the charge-transfer channel of the front-illuminated CCDs by weakly penetrating ('soft', 100-500 keV) protons encountered as Chandra passed through the Earth's radiation belts and ring currents. As soft protons were not considered a risk to spacecraft health before launch, the only on-board radiation monitoring system is the Electron, Proton, and Helium Instrument (EPHIN), which was included on Chandra with the primary purpose of monitoring energetic solar particle events. Further damage to the ACIS detector has been successfully mitigated through a combination of careful mission planning, autonomous on-board radiation protection, and manual intervention based upon real-time monitoring of the soft-proton environment. The AE-8 and AP-8 trapped-radiation models and the Chandra Radiation Models are used to schedule science operations in regions of low proton flux. EPHIN has been used as the primary autonomous in-situ radiation trigger, but it is not sensitive to the soft protons that damage the front-illuminated CCDs. Monitoring of near-real-time space weather data sources provides critical information on the proton environment outside the Earth's magnetosphere due to solar proton events and other phenomena. The operations team uses data from the Geostationary Operational Environmental Satellites (GOES) to provide near-real-time monitoring of the proton environment; however, these data do not give a representative measure of the soft-proton environment and are therefore supplemented by real-time data provided by NOAA's Space Weather Prediction Center. This presentation describes the radiation mitigation strategies used to minimize proton damage to the ACIS CCD detectors and the importance of real-time data

  2. Space-time algebra for the generalization of gravitational field

    Indian Academy of Sciences (India)

    The Maxwell–Proca-like field equations of gravitoelectromagnetism are formulated using space-time algebra (STA). The gravitational wave equation with massive gravitons and gravitomagnetic monopoles has been derived in terms of this algebra. Using space-time algebra, the most generalized form of ...

  3. Quantum Dynamics of Test Particle in Curved Space-Time

    International Nuclear Information System (INIS)

    Piechocki, W.

    2002-01-01

    To reveal the nature of space-time singularities of removable type, we examine the classical and quantum dynamics of a free particle in de Sitter-type space-times. The considered space-times have different topologies but are otherwise isometric. Our systems are integrable, and we present analytic solutions of the classical dynamics. We quantize the systems by making use of the group-theoretical method: we find an essentially self-adjoint representation of the algebra of observables, integrable to an irreducible unitary representation of the symmetry group of each considered gravitational system. The massless-particle dynamics is obtained in the zero-mass limit of the massive case. Global properties of the considered gravitational systems are of primary importance for the quantization procedure. Systems of a particle in space-times with removable singularities appear to be quantizable. We give a specific proposal for the extension of our analysis to space-times with singularities of essential type. (author)

  4. Geostatistical validation and cross-validation of magnetometric measurements of soil pollution with Potentially Toxic Elements in problematic areas

    Science.gov (United States)

    Fabijańczyk, Piotr; Zawadzki, Jarosław

    2016-04-01

    Field magnetometry is a fast method that has previously been used effectively to assess potential soil pollution. One of the most popular devices for measuring soil magnetic susceptibility at the soil surface is the Bartington MS2D. A single reading of soil magnetic susceptibility with the MS2D is quick but often carries considerable errors related to the instrument or to environmental and lithogenic factors. Consequently, measured values of soil magnetic susceptibility usually have to be validated against more precise, but also much more expensive, chemical measurements. The goal of this study was to analyze methods of validating magnetometric measurements using chemical analyses of element concentrations in soil. Additionally, validation of surface measurements of soil magnetic susceptibility was performed using selected parameters of the distribution of magnetic susceptibility in a soil profile. Validation was performed using selected geostatistical measures of cross-correlation, and the geostatistical approach was compared with validation performed using classical statistics. Measurements were made at selected areas in the Upper Silesian Industrial Area in Poland and in selected parts of Norway. In these areas soil magnetic susceptibility was measured on the soil surface with a Bartington MS2D and in the soil profile with a Bartington MS2C. Additionally, soil samples were taken in order to perform chemical measurements. Acknowledgment: The research leading to these results received funding from the Polish-Norwegian Research Programme operated by the National Centre for Research and Development under the Norwegian Financial Mechanism 2009-2014 in the frame of Project IMPACT - Contract No Pol-Nor/199338/45/2013.

  5. TU-AB-BRC-07: Efficiency of An IAEA Phase-Space Source for a Low Energy X-Ray Tube Using Egs++

    Energy Technology Data Exchange (ETDEWEB)

    Watson, PGF; Renaud, MA; Seuntjens, J [McGill University, Montreal, Quebec (Canada)

    2016-06-15

    Purpose: To extend the capability of the EGSnrc C++ class library (egs++) to write and read IAEA phase-space files as a particle source, and to assess the relative efficiency gain in dose calculation when using an IAEA phase-space source to model a miniature low-energy x-ray source. Methods: We created a new ausgab object to score particles exiting a user-defined geometry and write them to an IAEA phase-space file. A new particle source was created to read from IAEA phase-space data. With these tools, a phase-space file was generated for particles exiting a miniature 50 kVp x-ray tube (the INTRABEAM System, Carl Zeiss). The phase-space source was validated by comparing calculated PDDs with a full electron-source simulation of the INTRABEAM. The dose calculation efficiency gain of the phase-space source was determined relative to the full simulation, and the efficiency gain as a function of (i) depth in water and (ii) job parallelization was investigated. Results: The phase-space and electron-source PDDs were found to agree to 0.5% RMS, comparable to the statistical uncertainties. The use of a phase-space source for the INTRABEAM led to a relative efficiency gain of greater than 20 over the full electron-source simulation, with an increase of up to a factor of 196. The efficiency gain was found to decrease with depth in water, due to the influence of scattering. Job parallelization (across 2 to 256 cores) was not found to have any detrimental effect on the efficiency gain. Conclusion: A set of tools has been developed for writing and reading IAEA phase-space files, which can be used with any egs++ user code. For simulation of a low-energy x-ray tube, the use of a phase-space source was found to increase the relative dose calculation efficiency by a factor of up to 196. The authors acknowledge partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant No. 432290).
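
    Relative efficiency gains of this kind are normally quoted using the standard Monte Carlo figure of merit, epsilon = 1/(sigma^2 * T), where sigma^2 is the variance of the scored quantity and T the CPU time. A minimal sketch (the numbers in the test are hypothetical, not the paper's):

```python
def efficiency(variance, cpu_time):
    """Standard Monte Carlo figure of merit: 1 / (sigma^2 * T)."""
    return 1.0 / (variance * cpu_time)

def relative_gain(var_a, time_a, var_b, time_b):
    """Efficiency of simulation A relative to simulation B."""
    return efficiency(var_a, time_a) / efficiency(var_b, time_b)
```

    At equal variance, a phase-space source that reaches the same statistics 200 times faster than the full simulation has a relative efficiency gain of 200.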

  6. Classical field theory in the space of reference frames. [Space-time manifold, action principle

    Energy Technology Data Exchange (ETDEWEB)

    Toller, M [Dipartimento di Matematica e Fisica, Libera Universita, Trento (Italy)

    1978-03-11

    The formalism of classical field theory is generalized by replacing the space-time manifold M by the ten-dimensional manifold S of all the local reference frames. The geometry of the manifold S is determined by ten vector fields corresponding to ten operationally defined infinitesimal transformations of the reference frames. The action principle is written in terms of a differential 4-form in the space S (the Lagrangian form). Densities and currents are represented by differential 3-forms in S. The field equations and the connection between symmetries and conservation laws (Noether's theorem) are derived from the action principle. Einstein's theory of gravitation and Maxwell's theory of electromagnetism are reformulated in this language. The general formalism can also be used to formulate theories in which charge, energy and momentum cannot be localized in space-time and even theories in which a space-time manifold cannot be defined exactly in any useful way.

  7. Generalized Runge-Kutta method for two- and three-dimensional space-time diffusion equations with a variable time step

    International Nuclear Information System (INIS)

    Aboanber, A.E.; Hamada, Y.M.

    2008-01-01

    An extensive knowledge of the spatial power distribution is required for the design and analysis of different types of current-generation reactors, and that requires the development of more sophisticated theoretical methods. Therefore, the need to develop new methods for multidimensional transient reactor analysis still exists. The objective of this paper is to develop a computationally efficient numerical method for solving the multigroup, multidimensional, static and transient neutron diffusion kinetics equations. A generalized Runge-Kutta method has been developed for the numerical integration of the stiff space-time diffusion equations. The method is fourth-order accurate, using an embedded third-order solution to arrive at an estimate of the truncation error for automatic time step control. In addition, the A(α)-stability properties of the method are investigated. The analyses of two- and three-dimensional benchmark problems as well as static and transient problems, demonstrate that very accurate solutions can be obtained with assembly-sized spatial meshes. Preliminary numerical evaluations using two- and three-dimensional finite difference codes showed that the presented generalized Runge-Kutta method is highly accurate and efficient when compared with other optimized iterative numerical and conventional finite difference methods
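
    The embedded-pair idea (a lower-order solution reused as a local error estimator for automatic step-size control) can be sketched with a simple Heun(2)/Euler(1) pair; the paper's fourth-order method with an embedded third-order solution follows the same pattern, and the equation below is an assumed scalar stand-in for the diffusion kinetics system.

```python
import math

def adaptive_heun(f, y0, t0, t1, tol=1e-6):
    """Embedded Heun(2)/Euler(1) pair: the difference between the two
    solutions estimates the local truncation error and drives the
    automatic time-step control."""
    t, y, h = t0, y0, (t1 - t0) / 100
    while t < t1:
        h = min(h, t1 - t)
        k1 = f(t, y)
        y_euler = y + h * k1                    # 1st-order solution
        k2 = f(t + h, y_euler)
        y_heun = y + 0.5 * h * (k1 + k2)        # 2nd-order solution
        err = abs(y_heun - y_euler)             # local error estimate
        if err <= tol or h < 1e-12:
            t, y = t + h, y_heun                # accept the step
        # grow or shrink the step, with safety factor and clamps
        h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return y
```

    Rejected steps are simply retried with a smaller h, so accuracy is enforced without a fixed time step.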

  8. Reservoir Characterization using geostatistical and numerical modeling in GIS with noble gas geochemistry

    Science.gov (United States)

    Vasquez, D. A.; Swift, J. N.; Tan, S.; Darrah, T. H.

    2013-12-01

    The integration of precise geochemical analyses with quantitative engineering modeling into an interactive GIS system allows for a sophisticated and efficient method of reservoir engineering and characterization. A Geographic Information System (GIS) is utilized as an advanced technique for oil field reservoir analysis by combining field engineering and geological/geochemical spatial datasets with the available systematic modeling and mapping methods, integrating the information into a spatially correlated first-hand approach to defining surface and subsurface characteristics. Three key methods of analysis include: 1) geostatistical modeling to create a static and volumetric 3-dimensional representation of the geological body, 2) numerical modeling to develop a dynamic and interactive 2-dimensional model of fluid flow across the reservoir, and 3) noble gas geochemistry to further define the physical conditions, components, and history of the geologic system. Results thus far include using engineering algorithms for interpolating electrical well-log properties across the field (spontaneous potential, resistivity), yielding a highly accurate, high-resolution 3D model of rock properties, and using numerical finite difference methods (Crank-Nicolson) to solve the equations describing the distribution of pressure across the field, yielding a 2D simulation model of fluid flow across the reservoir. Ongoing noble gas geochemistry work will also include determination of the source, thermal maturity, and the extent/style of fluid migration (connectivity, continuity, and directionality). Future work will include developing an inverse engineering algorithm to model permeability, porosity, and water saturation. This combination of new and efficient technological and analytical capabilities is geared to provide a better understanding of the field geology and hydrocarbon dynamics system with applications to determine the presence of hydrocarbon pay zones (or

  9. Momentum and angular momentum in the H-space of asymptotically flat, Einstein-Maxwell space-time

    International Nuclear Information System (INIS)

    Hallidy, W.; Ludvigsen, M.

    1979-01-01

    New definitions are proposed for the momentum and angular momentum of Einstein-Maxwell fields that overcome the deficiencies of earlier definitions of these terms and are appropriate to the new H-space formulations of space-time. The definitions are made in terms of the Winicour-Tamburino linkages applied to the good cuts of future null infinity. The transformations between good cuts then correspond to translations and Lorentz transformations at points of H-space. For the special case of Robinson-Trautman type II space-times, it is shown that the definitions of momentum and angular momentum yield previously published results. (author)

  10. Time-Dependent Networks as Models to Achieve Fast Exact Time-Table Queries

    DEFF Research Database (Denmark)

    Brodal, Gert Stølting; Jacob, Rico

    2003-01-01

    We consider efficient algorithms for exact time-table queries, i.e. algorithms that find optimal itineraries for travelers using a train system. We propose to use time-dependent networks as a model and show advantages of this approach over space-time networks as models.
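
    A minimal time-dependent-network query can be sketched as Dijkstra's algorithm in which the cost of an edge is evaluated at the arrival time at its tail node. The toy timetables below are hypothetical, assumed sorted by departure time, and the network is assumed FIFO (a later departure never arrives earlier).

```python
import heapq

def earliest_arrival(graph, source, target, t0):
    """Earliest-arrival query on a time-dependent network.
    graph[u][v] is a list of (departure, arrival) connections, sorted
    by departure time."""
    best = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue                      # stale queue entry
        for v, timetable in graph.get(u, {}).items():
            # first connection departing at or after the current time
            for dep, arr in timetable:
                if dep >= t:
                    if arr < best.get(v, float("inf")):
                        best[v] = arr
                        heapq.heappush(heap, (arr, v))
                    break
    return None
```

    Under the FIFO assumption this retains Dijkstra's optimality, which is the advantage of the time-dependent model over expanding every (station, time) pair into a space-time network.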

  11. Spatial distribution of Aedes aegypti (Diptera: Culicidae) in the rural area of two municipalities of Cundinamarca, Colombia

    Directory of Open Access Journals (Sweden)

    Laura Cabezas

    2017-03-01

    Conclusion: This study shows the importance of geostatistics for the surveillance of vector-borne diseases and the analysis of time and space dynamics of vector insects and of diseases transmitted by them.

  12. Development of Adiabatic Doppler Feedback Model in 3D space time analysis Code ARCH

    International Nuclear Information System (INIS)

    Dwivedi, D.K.; Gupta, Anurag

    2015-01-01

    An integrated 3D space-time neutron kinetics code system with thermal-hydraulic feedback is being developed for transient analysis of the Compact High Temperature Reactor (CHTR) and the Advanced Heavy Water Reactor (AHWR). ARCH (a code for Analysis of Reactor transients in Cartesian and Hexagonal geometries) has been developed with an IQS module for efficient 3D space-time analysis. Recently, an adiabatic Doppler (fuel temperature) feedback module has been incorporated in this ARCH-IQS version of the code. In the adiabatic model of fuel-temperature feedback, the transfer of excess heat from the fuel to the coolant during a transient is neglected. The viability of Doppler feedback in ARCH-IQS with adiabatic heating has been checked against the AER benchmark (Dyn002). Analyses of anticipated transient without scram (ATWS) cases in the CHTR as well as in the AHWR have been performed with adiabatic fuel-temperature feedback. The methodology and results are presented in this paper. (author)

  13. Assessment of nitrate pollution in the Grand Morin aquifers (France): Combined use of geostatistics and physically based modeling

    Energy Technology Data Exchange (ETDEWEB)

    Flipo, Nicolas [Centre de Geosciences, UMR Sisyphe, ENSMP, 35 rue Saint-Honore, F-77305 Fontainebleau (France)]. E-mail: nicolas.flipo@ensmp.fr; Jeannee, Nicolas [Geovariances, 49 bis, avenue Franklin Roosevelt, F-77212 Avon (France); Poulin, Michel [Centre de Geosciences, UMR Sisyphe, ENSMP, 35 rue Saint-Honore, F-77305 Fontainebleau (France); Even, Stephanie [Centre de Geosciences, UMR Sisyphe, ENSMP, 35 rue Saint-Honore, F-77305 Fontainebleau (France); Ledoux, Emmanuel [Centre de Geosciences, UMR Sisyphe, ENSMP, 35 rue Saint-Honore, F-77305 Fontainebleau (France)

    2007-03-15

    The objective of this work is to combine several approaches to better understand nitrate fate in the Grand Morin aquifers (2700 km²), part of the Seine basin. CAWAQS results from the coupling of the hydrogeological model NEWSAM with the hydrodynamic and biogeochemical river model PROSE. CAWAQS is coupled with the agronomic model STICS in order to simulate nitrate migration in basins. First, kriging provides a satisfactory representation of aquifer nitrate contamination from local observations, used to set initial conditions for the physically based model. The associated confidence intervals, derived from the data using geostatistics, are then used to validate the CAWAQS results. Results and their evaluation obtained from the combination of these approaches are given (period 1977-1988). CAWAQS is then used to simulate nitrate fate over a 20-year period (1977-1996). The mean increase of nitrate concentration in the aquifers is 0.09 mgN L⁻¹ yr⁻¹, resulting from an average infiltration flux of 3500 kgN km⁻² yr⁻¹. - Combined use of geostatistics and physically based modeling allows assessment of nitrate concentrations in aquifer systems.

  14. Assessment of nitrate pollution in the Grand Morin aquifers (France): Combined use of geostatistics and physically based modeling

    International Nuclear Information System (INIS)

    Flipo, Nicolas; Jeannee, Nicolas; Poulin, Michel; Even, Stephanie; Ledoux, Emmanuel

    2007-01-01

    The objective of this work is to combine several approaches to better understand nitrate fate in the Grand Morin aquifers (2700 km²), part of the Seine basin. CAWAQS results from the coupling of the hydrogeological model NEWSAM with the hydrodynamic and biogeochemical river model PROSE. CAWAQS is coupled with the agronomic model STICS in order to simulate nitrate migration in basins. First, kriging provides a satisfactory representation of aquifer nitrate contamination from local observations, used to set initial conditions for the physically based model. The associated confidence intervals, derived from the data using geostatistics, are then used to validate the CAWAQS results. Results and their evaluation obtained from the combination of these approaches are given (period 1977-1988). CAWAQS is then used to simulate nitrate fate over a 20-year period (1977-1996). The mean increase of nitrate concentration in the aquifers is 0.09 mgN L⁻¹ yr⁻¹, resulting from an average infiltration flux of 3500 kgN km⁻² yr⁻¹. - Combined use of geostatistics and physically based modeling allows assessment of nitrate concentrations in aquifer systems
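
    Kriging from scattered observations, as used here to set the model's initial conditions, can be sketched as ordinary kriging. The linear variogram with negligible nugget below is an assumption for illustration (the study's fitted variogram is not given in this record).

```python
import numpy as np

def ordinary_kriging(xy, z, x0, slope=1.0):
    """Ordinary kriging estimate at x0 with a linear variogram
    gamma(h) = slope * h and no nugget. The kriging system is the
    variogram matrix bordered by the unbiasedness constraint."""
    n = len(z)
    d = lambda a, b: np.hypot(*(np.asarray(a, float) - np.asarray(b, float)))
    A = np.ones((n + 1, n + 1))
    A[n, n] = 0.0
    for i in range(n):
        for j in range(n):
            A[i, j] = slope * d(xy[i], xy[j])   # gamma between data points
    b = np.ones(n + 1)
    b[:n] = [slope * d(p, x0) for p in xy]      # gamma to the target point
    w = np.linalg.solve(A, b)                   # weights plus Lagrange multiplier
    return float(w[:n] @ np.asarray(z, dtype=float))
```

    Ordinary kriging is an exact interpolator: at a data location it returns the observed value, which is why kriged initial conditions honor the measurements.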

  15. Energy in the Kantowski–Sachs space-time using teleparallel ...

    Indian Academy of Sciences (India)

    Energy in the Kantowski–Sachs space-time using teleparallel geometry ... Kantowski–Sachs metric; teleparallelism; gravitational energy. Abstract. The purpose of this paper is to examine the energy content of the inflationary Universe described by Kantowski–Sachs space-time in quasilocal approach of teleparallel gravity ...

  16. Topology and isometries of the de Sitter space-time

    International Nuclear Information System (INIS)

    Mitskevich, N.V.; Senin, Yu.E.

    1982-01-01

    Spaces of constant four-dimensional curvature which are locally isometric to the de Sitter space-time but differ from it in topology are considered. The de Sitter spaces are considered in the coordinates best suited for introducing the topology of three cross-sections: S³, S¹ x S², S¹ x S² x S³. It is shown that the de Sitter space-time, covered by a family of layers each of which is topologically identical, may also be covered by another family of topologically identical layers, but the layers in the two families will have different topology

  17. On quantization of free fields in stationary space-times

    International Nuclear Information System (INIS)

    Moreno, C.

    1977-01-01

    In Section 1, the structure of the infinite-dimensional Hamiltonian system described by the Klein-Gordon equation (free real scalar field) in stationary space-times with closed space sections is analysed; an existence and uniqueness theorem is given for the Lichnerowicz distribution kernel G¹, together with its proper Fourier expansion, and the Hilbert spaces of frequency-part solutions defined by means of G¹ are constructed. In Section 2, a similar analysis, theorem and construction are formulated for the free real field of spin 1 and mass m>0 in a class of static space-times. (Auth.)

  18. Space-Time Geometry of Quark and Strange Quark Matter

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    We study quark and strange quark matter in the context of general relativity. For this purpose, we solve Einstein's field equations for quark and strange quark matter in spherically symmetric space-times. We analyze strange quark matter for different equations of state (EOS) in the spherically symmetric space-times, and are thus able to obtain the space-time geometries of quark and strange quark matter. We also discuss the features of the obtained solutions. The obtained solutions are consistent with the results of Brookhaven Laboratory, i.e. the quark-gluon plasma has vanishing shear (the quark-gluon plasma is perfect).

  19. A numerical study of adaptive space and time discretisations for Gross–Pitaevskii equations

    Science.gov (United States)

    Thalhammer, Mechthild; Abhau, Jochen

    2012-01-01

    As a basic principle, the benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross–Pitaevskii equation arising in the description of Bose–Einstein condensates in dilute gases. The performance of the Fourier pseudo-spectral method constrained to uniform meshes versus the locally adaptive finite element method, and of higher-order exponential operator splitting methods with variable time stepsizes, is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations, with an enhancement either in efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross–Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task: due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0 < ε ≪ 1 for both the Fourier pseudo-spectral and the finite element method. Nevertheless, for smaller parameter values, locally adaptive time discretisations make it possible to choose the time stepsizes sufficiently small in order that the numerical approximation correctly captures the behaviour of the analytical solution. Further illustrations are given for Gross–Pitaevskii equations with a focusing nonlinearity or a sharp Gaussian as initial condition, respectively
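
    The time-splitting Fourier schemes referred to can be sketched as one Strang splitting step for a 1D Gross-Pitaevskii equation i ψ_t = -½ ψ_xx + κ|ψ|²ψ on a uniform periodic grid (uniform and non-adaptive; the coefficient κ and the grid are assumptions for illustration).

```python
import numpy as np

def gpe_strang_step(psi, x, dt, kappa=1.0):
    """One Strang splitting step: half-step of the nonlinear part (exact
    in physical space), full step of the kinetic part (exact in Fourier
    space), then a second nonlinear half-step."""
    n, dx = len(x), x[1] - x[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)          # angular wavenumbers
    psi = psi * np.exp(-0.5j * dt * kappa * np.abs(psi) ** 2)
    psi = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(psi))
    psi = psi * np.exp(-0.5j * dt * kappa * np.abs(psi) ** 2)
    return psi
```

    Every sub-step is a pointwise phase multiplication or a unitary Fourier transform, so the discrete L² norm (the particle number) is conserved to machine precision, a useful sanity check for such schemes.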

  20. Space-Time Dependent Transport, Activation, and Dose Rates for Radioactivated Fluids.

    Science.gov (United States)

    Gavazza, Sergio

    Two methods are developed to calculate the space- and time-dependent mass transport of radionuclides, their production and decay, and the associated dose rates generated from the radioactivated fluids flowing through pipes. The work couples space- and time-dependent phenomena, treated as only space- or time-dependent in the open literature. The transport and activation methodology (TAM) is used to numerically calculate space- and time-dependent transport and activation of radionuclides in fluids flowing through pipes exposed to radiation fields, and volumetric radioactive sources created by radionuclide motions. The computer program Radionuclide Activation and Transport in Pipe (RNATPA1) performs the numerical calculations required in TAM. The gamma ray dose methodology (GAM) is used to numerically calculate space- and time-dependent gamma ray dose equivalent rates from the volumetric radioactive sources determined by TAM. The computer program Gamma Ray Dose Equivalent Rate (GRDOSER) performs the numerical calculations required in GAM. The scope of conditions considered by TAM and GAM herein includes (a) laminar flow in straight pipe, (b) recirculating flow schemes, (c) time-independent fluid velocity distributions, (d) space-dependent monoenergetic neutron flux distribution, (e) space- and time-dependent activation of a single parent nuclide and transport and decay of a single daughter radionuclide, and (f) assessment of space- and time-dependent gamma ray dose rates, outside the pipe, generated by the space- and time-dependent source term distributions inside it. The methodologies, however, can easily be extended to include all the situations of interest for solving the phenomena addressed in this dissertation. A comparison is made between results obtained by the described calculational procedures and analytical expressions. The physics of the problems addressed by the new technique and the increased accuracy versus non-space- and time-dependent methods are discussed.

  1. Gauge fields in algebraically special space-times

    International Nuclear Information System (INIS)

    Torres del Castillo, G.F.

    1985-01-01

    It is shown that in an algebraically special space-time which admits a congruence of null strings, a source-free gauge field aligned with the congruence is determined by a matrix potential which has to satisfy a second-order differential equation with quadratic nonlinearities. The Einstein-Yang-Mills equations are then reduced to a scalar and two matrix equations. In the case of self-dual gauge fields in a self-dual space-time, the existence of an infinite set of conservation laws, of an associated linear system, and of infinitesimal Baecklund transformations is demonstrated. All the results apply for an arbitrary gauge group.

  2. Demonstration of a geostatistical approach to physically consistent downscaling of climate modeling simulations

    KAUST Repository

    Jha, Sanjeev Kumar; Mariethoz, Gregoire; Evans, Jason P.; McCabe, Matthew

    2013-01-01

    A downscaling approach based on multiple-point geostatistics (MPS) is presented. The key concept underlying MPS is to sample spatial patterns from within training images, which can then be used in characterizing the relationship between different variables across multiple scales. The approach is used here to downscale climate variables including skin surface temperature (TSK), soil moisture (SMOIS), and latent heat flux (LH). The performance of the approach is assessed by applying it to data derived from a regional climate model of the Murray-Darling basin in southeast Australia, using model outputs at two spatial resolutions of 50 and 10 km. The data used in this study cover the period from 1985 to 2006, with 1985 to 2005 used for generating the training images that define the relationships of the variables across the different spatial scales. Subsequently, the spatial distributions for the variables in the year 2006 are determined at 10 km resolution using the 50 km resolution data as input. The MPS geostatistical downscaling approach reproduces the spatial distribution of TSK, SMOIS, and LH at 10 km resolution with the correct spatial patterns over different seasons, while providing uncertainty estimates through the use of multiple realizations. The technique has the potential not only to bridge issues of spatial resolution in regional and global climate model simulations, but also to sharpen features in remote sensing applications through image fusion, fill gaps in spatial data, evaluate downscaled variables against available remote sensing images, and aggregate/disaggregate hydrological and groundwater variables for catchment studies.
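
A drastically simplified sketch of the downscaling idea follows. It matches each coarse cell against a coarsened training image and pastes the corresponding fine-scale block; all data are hypothetical toys, and real MPS (e.g. direct sampling) matches multi-cell spatial patterns across several variables rather than single values.

```python
import numpy as np

rng = np.random.default_rng(0)

def coarsen(fine, f):
    """Block-average a fine grid by a factor f (e.g. 10 km -> 50 km analogue)."""
    n = fine.shape[0] // f
    return fine.reshape(n, f, n, f).mean(axis=(1, 3))

def mps_downscale(coarse, ti_fine, f):
    """For each coarse cell, find the closest-valued cell in the coarsened
    training image and paste its fine-scale block (a crude one-cell 'pattern')."""
    ti_coarse = coarsen(ti_fine, f)
    flat = ti_coarse.ravel()
    n = coarse.shape[0]
    out = np.zeros((n * f, n * f))
    for i in range(n):
        for j in range(n):
            k = np.argmin(np.abs(flat - coarse[i, j]))
            ci, cj = divmod(k, ti_coarse.shape[1])
            out[i*f:(i+1)*f, j*f:(j+1)*f] = ti_fine[ci*f:(ci+1)*f, cj*f:(cj+1)*f]
    return out

ti_fine = rng.random((32, 32))       # stand-in training image (e.g. 1985-2005 data)
target_fine = rng.random((32, 32))   # fine truth we try to recover (e.g. 2006)
coarse = coarsen(target_fine, 4)     # its coarse-resolution analogue
recon = mps_downscale(coarse, ti_fine, 4)
```

Running the sampler several times with perturbed matches would yield the multiple realizations used for uncertainty estimation in the record.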

  3. Image correlation spectroscopy: mapping correlations in space, time, and reciprocal space.

    Science.gov (United States)

    Wiseman, Paul W

    2013-01-01

    This chapter presents an overview of two recent implementations of image correlation spectroscopy (ICS). The background theory is presented for spatiotemporal image correlation spectroscopy and image cross-correlation spectroscopy (STICS and STICCS, respectively) as well as k-(reciprocal) space image correlation spectroscopy (kICS). An introduction to the background theory is followed by sections outlining procedural aspects for properly implementing STICS, STICCS, and kICS. These include microscopy image collection, sampling in space and time, sample and fluorescent probe requirements, signal to noise, and background considerations that are all required to properly implement the ICS methods. Finally, procedural steps for immobile population removal and actual implementation of the ICS analysis programs to fluorescence microscopy image time stacks are described. Copyright © 2013 Elsevier Inc. All rights reserved.
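
The core spatial ICS computation, the normalized autocorrelation of intensity fluctuations, can be sketched with FFTs on a synthetic image (real analyses additionally handle non-periodic boundaries, background, and immobile-population removal):

```python
import numpy as np

def spatial_ics(img):
    """Normalized spatial autocorrelation r(xi, eta) = <di di> / <i>^2,
    computed via FFT (periodic boundary conditions assumed)."""
    d = img - img.mean()
    f = np.fft.fft2(d)
    corr = np.fft.ifft2(f * np.conj(f)).real / d.size
    return np.fft.fftshift(corr) / img.mean() ** 2

rng = np.random.default_rng(1)
img = rng.poisson(10, (64, 64)).astype(float)   # synthetic intensity image
r = spatial_ics(img)
peak = r[32, 32]                                # zero-lag value: var(i)/<i>^2
```

The zero-lag amplitude equals the relative intensity variance, which is the quantity ICS relates to particle density; STICS extends the same correlation over time lags.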

  4. Voluble: a space-time diagram of the solar system

    Science.gov (United States)

    Aguilera, Julieta C.; SubbaRao, Mark U.

    2008-02-01

    Voluble is a dynamic space-time diagram of the solar system, designed to help users understand the relationship between space and time in the motion of the planets around the sun. Voluble is set in virtual reality to relate these movements to our experience of immediate space. Beyond just the visual, understanding dynamic systems is naturally associated with the articulation of our bodies as we perform a number of complex calculations, albeit unconsciously, to deal with simple tasks. Such capabilities encompass spatial perception and memory. Voluble investigates the balance between the visually abstract and the spatially figurative in immersive development to help illuminate phenomena that are beyond the reach of human scale and time. While most diagrams, even computer-based interactive ones, are flat, three-dimensional real-time virtual reality representations are closer to our experience of space. The representation can be seen as if it were "really there," engaging a larger number of cues pertaining to our everyday spatial experience.

  5. Space-Time Water-Filling for Composite MIMO Fading Channels

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available We analyze the ergodic capacity and channel outage probability for a composite MIMO channel model, which includes both fast fading and shadowing effects. The ergodic capacity and exact channel outage probability with space-time water-filling can be evaluated through numerical integrations, which can be further simplified by using approximated empirical eigenvalue and maximal eigenvalue distribution of MIMO fading channels. We also compare the performance of space-time water-filling with spatial water-filling. For MIMO channels with small shadowing effects, spatial water-filling performs very close to space-time water-filling in terms of ergodic capacity. For MIMO channels with large shadowing effects, however, space-time water-filling achieves significantly higher capacity per antenna than spatial water-filling at low to moderate SNR regimes, but with a much higher channel outage probability. We show that the analytical capacity and outage probability results agree very well with those obtained from Monte Carlo simulations.
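
The water-filling rule underlying both allocation schemes can be sketched for a single set of channel eigenvalue gains (illustrative numbers; space-time water-filling applies the same rule jointly across fading states and time):

```python
import numpy as np

def water_filling(gains, total_power):
    """Allocate p_i = max(mu - 1/g_i, 0) with sum p_i = total_power,
    finding the water level mu by bisection."""
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + 1.0 / gains.min()
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - 1.0 / gains, 0.0)
        if p.sum() > total_power:
            hi = mu
        else:
            lo = mu
    return p

gains = np.array([2.0, 1.0, 0.1])            # hypothetical eigenvalue gains
p = water_filling(gains, total_power=1.0)
cap = np.sum(np.log2(1.0 + p * gains))       # resulting capacity (bits/use)
```

Note that the weakest eigenmode (gain 0.1) receives no power: its inverse gain lies above the water level, which is exactly the behavior that differentiates water-filling from uniform allocation at low SNR.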

  6. Joint Time-Frequency-Space Classification of EEG in a Brain-Computer Interface Application

    Directory of Open Access Journals (Sweden)

    Molina Gary N Garcia

    2003-01-01

    Full Text Available Brain-computer interface is a growing field of interest in human-computer interaction, with diverse applications ranging from medicine to entertainment. In this paper, we present a system which allows for classification of mental tasks based on a joint time-frequency-space decorrelation, in which mental tasks are measured via electroencephalogram (EEG) signals. The efficiency of this approach was evaluated by means of real-time experiments on two subjects performing three different mental tasks. To do so, a number of protocols for visualization, as well as training with and without feedback, were also developed. The obtained results show that it is possible to obtain good classification of simple mental tasks, in view of command and control, after a relatively small amount of training, with accuracies around 80%, and in real time.

  7. Discrete random walk models for space-time fractional diffusion

    International Nuclear Information System (INIS)

    Gorenflo, Rudolf; Mainardi, Francesco; Moretti, Daniele; Pagnini, Gianni; Paradisi, Paolo

    2002-01-01

    A physical-mathematical approach to anomalous diffusion may be based on generalized diffusion equations (containing derivatives of fractional order in space or/and time) and related random walk models. By space-time fractional diffusion equation we mean an evolution equation obtained from the standard linear diffusion equation by replacing the second-order space derivative with a Riesz-Feller derivative of order α is part of (0,2] and skewness θ (moduleθ≤{α,2-α}), and the first-order time derivative with a Caputo derivative of order β is part of (0,1]. Such evolution equation implies for the flux a fractional Fick's law which accounts for spatial and temporal non-locality. The fundamental solution (for the Cauchy problem) of the fractional diffusion equation can be interpreted as a probability density evolving in time of a peculiar self-similar stochastic process that we view as a generalized diffusion process. By adopting appropriate finite-difference schemes of solution, we generate models of random walk discrete in space and time suitable for simulating random variables whose spatial probability density evolves in time according to this fractional diffusion equation
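
A minimal sketch of such a finite-difference/random-walk scheme for the symmetric case (θ = 0) uses shifted Grünwald-Letnikov weights for the Riesz-Feller derivative; for α = 2 it reduces term by term to the classical random walk. Grid sizes and the scaling parameter μ are illustrative choices.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov coefficients (-1)^k binom(alpha, k), by recursion."""
    w = np.zeros(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def frac_diffusion_step(p, alpha, mu):
    """One explicit step of the symmetric (theta = 0) space-fractional
    diffusion scheme on a periodic grid; alpha = 2 gives the classical walk."""
    n = p.size
    w = gl_weights(alpha, n)
    c = -mu / (2.0 * np.cos(np.pi * alpha / 2.0))
    upd = np.zeros_like(p)
    for k in range(n):
        # shifted Grunwald sums approaching from the left and the right
        upd += w[k] * (np.roll(p, k - 1) + np.roll(p, 1 - k))
    return p + c * upd

p0 = np.zeros(64)
p0[32] = 1.0                                   # point-mass initial condition
p1 = frac_diffusion_step(p0, alpha=1.5, mu=0.1)  # heavy-tailed jumps
p2 = frac_diffusion_step(p0, alpha=2.0, mu=0.1)  # classical 3-point walk
```

For α = 2 the weights truncate to (1, -2, 1), so one step spreads exactly μ of the mass to each neighbor; for α < 2 every cell receives a positive transition probability, reproducing the long jumps of the stable process.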

  8. Space-time least-squares Petrov-Galerkin projection in nonlinear model reduction.

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Youngsoo [Sandia National Laboratories (SNL-CA), Livermore, CA (United States). Extreme-scale Data Science and Analytics Dept.; Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Carlberg, Kevin Thomas [Sandia National Laboratories (SNL-CA), Livermore, CA (United States). Extreme-scale Data Science and Analytics Dept.

    2017-09-01

    Our work proposes a space-time least-squares Petrov-Galerkin (ST-LSPG) projection method for model reduction of nonlinear dynamical systems. In contrast to typical nonlinear model-reduction methods that first apply Petrov-Galerkin projection in the spatial dimension and subsequently apply time integration to numerically resolve the resulting low-dimensional dynamical system, the proposed method applies projection in space and time simultaneously. To accomplish this, the method first introduces a low-dimensional space-time trial subspace, which can be obtained by computing tensor decompositions of state-snapshot data. The method then computes discrete-optimal approximations in this space-time trial subspace by minimizing the residual arising after time discretization over all space and time in a weighted ℓ2-norm. This norm can be defined to enable complexity reduction (i.e., hyper-reduction) in time, which leads to space-time collocation and space-time GNAT variants of the ST-LSPG method. Advantages of the approach relative to typical spatial-projection-based nonlinear model reduction methods such as Galerkin projection and least-squares Petrov-Galerkin projection include: (1) a reduction of both the spatial and temporal dimensions of the dynamical system, (2) the removal of spurious temporal modes (e.g., unstable growth) from the state space, and (3) error bounds that exhibit slower growth in time. Numerical examples performed on model problems in fluid dynamics demonstrate the ability of the method to generate orders-of-magnitude computational savings relative to spatial-projection-based reduced-order models without sacrificing accuracy.
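
For a linear toy problem the construction can be sketched end to end: build a space-time trial basis from vectorized training trajectories (a plain SVD standing in for the tensor decompositions of the report) and minimize the full space-time residual by least squares. All dimensions and matrices below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, k = 6, 20, 6                       # state dim, time steps, basis size
A = 0.9 * np.eye(N) + 0.05 * rng.standard_normal((N, N))

def trajectory(x0):
    """Full-order model: x_{n+1} = A x_n, returned as an N x T snapshot block."""
    X = np.empty((N, T))
    x = x0
    for n in range(T):
        x = A @ x
        X[:, n] = x
    return X

# space-time trial subspace from vectorized training trajectories
train = np.column_stack([trajectory(rng.standard_normal(N)).ravel(order="F")
                         for _ in range(12)])
Pi = np.linalg.svd(train, full_matrices=False)[0][:, :k]

# space-time residual: r = (I - kron(S, A)) vec(X) - [A x0; 0; ...; 0]
S = np.eye(T, k=-1)                      # subdiagonal time-shift matrix
B = np.eye(N * T) - np.kron(S, A)
x0 = rng.standard_normal(N)
d = np.zeros(N * T)
d[:N] = A @ x0

# ST-LSPG: minimize the whole space-time residual over the trial subspace
y = np.linalg.lstsq(B @ Pi, d, rcond=None)[0]
X_rom = (Pi @ y).reshape(N, T, order="F")
err = np.linalg.norm(X_rom - trajectory(x0)) / np.linalg.norm(trajectory(x0))
```

Because the problem is linear and the basis here spans the trajectory space, the single least-squares solve recovers the full trajectory; in the nonlinear setting of the report the same minimization becomes an iterative solve, optionally hyper-reduced in time.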

  9. Re-examination of globally flat space-time.

    Directory of Open Access Journals (Sweden)

    Michael R Feldman

    Full Text Available In the following, we offer a novel approach to modeling the observed effects currently attributed to the theoretical concepts of "dark energy," "dark matter," and "dark flow." Instead of assuming the existence of these theoretical concepts, we take an alternative route and choose to redefine what we consider to be inertial motion as well as what constitutes an inertial frame of reference in flat space-time. We adopt none of the features of our current cosmological models except for the requirement that special and general relativity be local approximations within our revised definition of inertial systems. Implicit in our ideas is the assumption that at "large enough" scales one can treat objects within these inertial systems as point-particles having an insignificant effect on the curvature of space-time. We then proceed under the assumption that time and space are fundamentally intertwined such that time- and spatial-translational invariance are not inherent symmetries of flat space-time (i.e., observable clock rates depend upon both relative velocity and spatial position within these inertial systems and take the geodesics of this theory in the radial Rindler chart as the proper characterization of inertial motion. With this commitment, we are able to model solely with inertial motion the observed effects expected to be the result of "dark energy," "dark matter," and "dark flow." In addition, we examine the potential observable implications of our theory in a gravitational system located within a confined region of an inertial reference frame, subsequently interpreting the Pioneer anomaly as support for our redefinition of inertial motion. As well, we extend our analysis into quantum mechanics by quantizing for a real scalar field and find a possible explanation for the asymmetry between matter and antimatter within the framework of these redefined inertial systems.

  10. ADM Mass for Asymptotically de Sitter Space-Time

    International Nuclear Information System (INIS)

    Huang Shiming; Yue Ruihong; Jia Dongyan

    2010-01-01

    In this paper, an ADM mass formula for asymptotically de Sitter (dS) space-time is derived from the energy-momentum tensor. We take the vacuum dS space as the background and investigate the ADM mass of the (d + 3)-dimensional sphere-symmetric space with a positive cosmological constant, and find that the ADM mass of asymptotically dS space is based on the ADM mass of the Schwarzschild field, with the cosmological background contributing some small additional mass as well. (general)

  11. Changing Words: Time and Space in Electronic Literature

    Directory of Open Access Journals (Sweden)

    Paola Di Gennaro

    2015-05-01

    Full Text Available Printed literature and electronic literature, especially hypertexts, bring into play diverse issues of time and space. When approaching them, we should use different critical frameworks, at least in one respect: the analysis of a hypertext cannot forget considerations about time and space in the act of reading – or performing – the text. Hypertexts generate many different possible readings thanks to the changing and shifting links which move in hyperspace. Therefore, if in considering these issues in electronic literature we can obviously apply all the critical categories we use with printed works, here we cannot avoid considering the time and the space that are not “inside” the text but “outside” the text. This essay tries to explain the relationship between these external and internal time-space issues in electronic literature, how they interlink and mutually change, and how the act of reading both modifies and is modified by them. In particular, we will consider the web-based poetry When the Sea Stands Still (1997), by John Cayley and Yang Lian, and Rice (1998), by the artist known as Geniwate, basing the analysis on the studies by Espen Aarseth, Wolfgang Iser, Frank Kermode, Ted Nelson, and Edward Said.

  12. Investigation on high efficiency volume Bragg gratings performances for spectrometry in space environment

    Science.gov (United States)

    Loicq, Jérôme; Stockman, Y.; Georges, Marc; Gaspar Venancio, Luis M.

    2017-11-01

    The special properties of Volume Bragg Gratings (VBGs) make them good candidates for spectrometry applications where high spectral resolution, low level of straylight and low polarisation sensitivity are required. Therefore it is of interest to assess the maturity and suitability of VBGs as enabling technology for future ESA missions with demanding requirements for spectrometry. The VBGs suitability for space application is being investigated in the frame of a project led by CSL and funded by the European Space Agency. The goal of this work is twofold: first the theoretical advantages and drawbacks of VBGs with respect to other technologies with identical functionalities are assessed, and second the performances of VBG samples in a representative space environment are experimentally evaluated. The performances of samples of two VBGs technologies, the Photo-Thermo-Refractive (PTR) glass and the DiChromated Gelatine (DCG), are assessed and compared in the Hα, O2-B and NIR bands. The tests are performed under vacuum condition combined with temperature cycling in the range of 200 K to 300 K. A dedicated test bench experiment is designed to evaluate the impact of temperature on the spectral efficiency and to determine the optical wavefront error of the diffracted beam. Furthermore the diffraction efficiency degradation under gamma irradiation is assessed. Finally the straylight, the diffraction efficiency under conical incidence and the polarisation sensitivity are evaluated.

  13. Spinors, superalgebras and the signature of space-time

    CERN Document Server

    Ferrara, S.

    2001-01-01

    Superconformal algebras embedding space-time in any dimension and signature are considered. Different real forms of the $R$-symmetries arise both for usual space-time signature (one time) and for Euclidean or exotic signatures (more than one time). Applications of these superalgebras are found in the context of supergravities with 32 supersymmetries, in any dimension $D \\leq 11$. These theories are related to $D = 11, M, M^*$ and $M^\prime$ theories or $D = 10$, IIB, IIB$^*$ theories when compactified on Lorentzian tori. All dimensionally reduced theories fall in three distinct phases specified by the number of (128 bosonic) positive and negative norm states: $(n^+,n^-) = (128,0), (64,64), (72,56)$.

  14. Spatialization of Time and Temporalization of Space: A Critical ...

    African Journals Online (AJOL)

    Ngozi Ezenwa-Ohaeto

    changing, and this made some people take time to be equivalent to ... and these facts are seen as the very essence of time. He argued that ... against our conventional belief about time. Is there no time? ... space and whatever is false of space is also false of time ... them as co-existing in orderly manner with a simple.

  15. MEST- avoid next extinction by a space-time effect

    Science.gov (United States)

    Cao, Dayong

    2013-03-01

    The Sun's companion, a dark hole, seasonally brought its dark comet belt and much dark matter to impact near our Earth, and some of them probably hit the Earth. This model thus kept triggering periodic mass extinctions on our Earth every 25 to 27 million years. After every impact, many dark comets with very special tilted orbits were captured and lurked in the solar system. When the dark hole (Tyche) comes near the solar system again, they will impact near the planets. The Tyche, the dark comets and the Oort Cloud have their own space-time center, because space-time is the frequency and squared amplitude of a wave. Because a wave (space-time) can make a field, and a gas has more waves and fluctuations, they are like a dense gas ball and a dark dense field. They can absorb space-time and waves, so they are ``dark,'' like the dark matter which can break the genetic codes of our lives by a dark space-time effect. The upcoming next impact will therefore contribute to the current ``biodiversity loss.'' Dark matter can change dead plants and animals into coal, oil and natural gas, which are used as energy but damage our living environment. According to our experiments, in which consciousness can use thought waves remotely to change the systemic model between electron clouds and electron holes of a P-N junction and can change the output voltages of solar cells by a life information technology and a space-time effect, we hope to find a new method to determine the orbit of the Tyche and avoid the next extinction. (see Dayong Cao, BAPS.2011.APR.K1.17 and BAPS.2012.MAR.P33.14) Support by AEEA

  16. On the minimum uncertainty of space-time geodesics

    International Nuclear Information System (INIS)

    Diosi, L.; Lukacs, B.

    1989-10-01

    Although various attempts at systematic quantization of the space-time geometry ('gravitation') have appeared, none of them is considered fully consistent or final. Inspired by a construction of Wigner, the quantum relativistic limitations of measuring the metric tensor of a certain space-time were calculated. The result is suggested to be an estimate for fluctuations of g_ab whose rigorous determination will be a subject of a future relativistic quantum gravity. (author) 11 refs

  17. The JPL space photovoltaic program. [energy efficient so1 silicon solar cells for space applications

    Science.gov (United States)

    Scott-Monck, J. A.

    1979-01-01

    The development of energy efficient solar cells for space applications is discussed. The electrical performance of solar cells as a function of temperature and solar intensity and the influence of radiation and subsequent thermal annealing on the electrical behavior of cells are among the factors studied. Progress in GaAs solar cell development is reported with emphasis on improvement of output power and radiation resistance to demonstrate a solar cell array to meet the specific power and stability requirements of solar power satellites.

  18. A method of evaluating efficiency during space-suited work in a neutral buoyancy environment

    Science.gov (United States)

    Greenisen, Michael C.; West, Phillip; Newton, Frederick K.; Gilbert, John H.; Squires, William G.

    1991-01-01

    The purpose was to investigate efficiency as related to the work transmission and the metabolic cost of various extravehicular activity (EVA) tasks during simulated microgravity (whole body water immersion) using three space suits. Two new prototype space station suits, AX-5 and MKIII, are pressurized at 57.2 kPa and were tested concurrently with the operationally used 29.6 kPa shuttle suit. Four male astronauts were asked to perform a fatigue trial on four upper extremity exercises during which metabolic rate and work output were measured and efficiency was calculated in each suit. The activities were selected to simulate actual EVA tasks. The test article was an underwater dynamometry system to which the astronauts were secured by foot restraints. All metabolic data was acquired, calculated, and stored using a computerized indirect calorimetry system connected to the suit ventilation/gas supply control console. During the efficiency testing, steady state metabolic rate could be evaluated as well as work transmitted to the dynamometer. Mechanical efficiency could then be calculated for each astronaut in each suit performing each movement.

  19. Time and Memory Efficient Online Piecewise Linear Approximation of Sensor Signals.

    Science.gov (United States)

    Grützmacher, Florian; Beichler, Benjamin; Hein, Albert; Kirste, Thomas; Haubelt, Christian

    2018-05-23

    Piecewise linear approximation of sensor signals is a well-known technique in the fields of Data Mining and Activity Recognition. In this context, several algorithms have been developed, some of them with the purpose to be performed on resource constrained microcontroller architectures of wireless sensor nodes. While microcontrollers are usually constrained in computational power and memory resources, all state-of-the-art piecewise linear approximation techniques either need to buffer sensor data or have an execution time depending on the segment’s length. In the paper at hand, we propose a novel piecewise linear approximation algorithm, with a constant computational complexity as well as a constant memory complexity. Our proposed algorithm’s worst-case execution time is one to three orders of magnitude smaller and its average execution time is three to seventy times smaller compared to the state-of-the-art Piecewise Linear Approximation (PLA) algorithms in our experiments. In our evaluations, we show that our algorithm is time and memory efficient without sacrificing the approximation quality compared to other state-of-the-art piecewise linear approximation techniques, while providing a maximum error guarantee per segment, a small parameter space of only one parameter, and a maximum latency of one sample period plus its worst-case execution time.
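
The flavor of a constant-time, constant-memory online PLA can be conveyed with a feasible-slope ("cone intersection") sketch that maintains only an anchor point and a slope interval, with a hard per-sample error bound eps. This is a generic textbook-style variant, not the specific algorithm of the paper.

```python
import math

def pla_online(ts, ys, eps):
    """Online piecewise linear approximation with a max-error guarantee eps,
    O(1) work and memory per sample: keep an anchor point and the feasible
    slope interval [lo, hi]; when it empties, emit a segment and restart."""
    segs = []                                  # (t_start, y_start, slope, t_end)
    t0, y0 = ts[0], ys[0]
    lo, hi = -math.inf, math.inf
    t_prev = t0
    for t, y in zip(ts[1:], ys[1:]):
        nlo = (y - eps - y0) / (t - t0)        # slopes passing within eps of (t, y)
        nhi = (y + eps - y0) / (t - t0)
        if max(lo, nlo) > min(hi, nhi):        # no single slope fits: close segment
            segs.append((t0, y0, 0.5 * (lo + hi), t_prev))
            t0, y0, lo, hi = t, y, -math.inf, math.inf
        else:
            lo, hi = max(lo, nlo), min(hi, nhi)
        t_prev = t
    segs.append((t0, y0, 0.0 if math.isinf(lo) else 0.5 * (lo + hi), t_prev))
    return segs

ts = list(range(200))
ys = [0.3 * abs(t - 100) for t in ts]          # V-shaped test signal
segs = pla_online(ts, ys, eps=0.5)
```

Every sample is processed with a bounded number of operations and only five scalars of state, which is the property the paper targets for microcontroller-class sensor nodes.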

  20. Quaternionic formulation of tachyons, superluminal transformations and a complex space-time

    Energy Technology Data Exchange (ETDEWEB)

    Imaeda, K [Dublin Inst. for Advanced Studies (Ireland)

    1979-04-11

    A theory of tachyons and superluminal transformations is developed on the basis of the quaternionic formulation. A complex space-time and a complex transformation group which contains both Lorentz transformations and superluminal transformations are introduced. The complex space-time, ``the biquaternion space,'' which is closed under the superluminal transformations, is introduced. The principle of special relativity, such as the conservation of the quadratic form of the metric of the space-time, and the principle of duality are extended to the complex space-time and to bradyons, luxons and tachyons under the complex transformations. Several characteristic features of the superluminal transformations and of tachyons are derived.

  1. Use of geostatistics in high level radioactive waste repository site characterization

    Energy Technology Data Exchange (ETDEWEB)

    Doctor, P G [Pacific Northwest Laboratory, Richland, WA (USA)

    1980-09-01

    In evaluating and characterizing sites that are candidates for use as repositories for high-level radioactive waste, there is an increasing need to estimate the uncertainty in hydrogeologic data and in the quantities calculated from them. This paper discusses the use of geostatistical techniques to estimate hydrogeologic surfaces, such as the top of a basalt formation, and to provide a measure of the uncertainty in that estimate. Maps of the uncertainty estimate, called the kriging error, can be used to evaluate where new data should be taken to affect the greatest reduction in uncertainty in the estimated surface. The methods are illustrated on a set of site-characterization data; the top-of-basalt elevations at the Hanford Site near Richland, Washington.
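
The kriging estimate and its error measure (the kriging variance) described above can be sketched with ordinary kriging under a linear, nugget-free variogram of the kind identified in such studies. Coordinates, head values and the variogram slope below are hypothetical.

```python
import numpy as np

def ordinary_krige(xy, z, x0, slope=1.0):
    """Ordinary kriging with a linear variogram gamma(h) = slope * h and no
    nugget. Returns the estimate and the kriging (error) variance at x0."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = slope * d
    A[n, n] = 0.0                               # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = slope * np.linalg.norm(xy - x0, axis=1)
    w = np.linalg.solve(A, b)
    return w[:n] @ z, w[:n] @ b[:n] + w[n]

rng = np.random.default_rng(3)
xy = rng.random((12, 2)) * 1000.0               # hypothetical well coordinates
z = 100.0 + 0.01 * xy[:, 0] + rng.normal(0.0, 1.0, 12)   # hypothetical elevations
est, var = ordinary_krige(xy, z, xy[0])         # at a data point: exact, zero variance
est2, var2 = ordinary_krige(xy, z, np.array([500.0, 500.0]))
```

Mapping `var` over a grid produces exactly the kriging-error surface the paper uses to decide where new data would most reduce uncertainty: the variance is zero at wells and grows with distance from them.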

  2. Time and space: undergraduate Mexican physics in motion

    Science.gov (United States)

    Candela, Antonia

    2010-09-01

    This is an ethnographic study of the trajectories and itineraries of undergraduate physics students at a Mexican university. In this work, learning is understood as being able to move oneself, and other things (cultural tools), through the space-time networks of a discipline (Nespor in Knowledge in motion: space, time and curriculum in undergraduate physics and management. Routledge Farmer, London, 1994). The potential of this socio-cultural perspective allows an analysis of how students are connected through extended spaces and times with an international core discipline as well as with cultural features related to local networks of power and construction. Through an example, I show, from an actor-network-theory perspective (Latour in Science in action. Harvard University Press, Cambridge, 1987), that in order to understand the complexities of undergraduate physics learning you have to break classroom walls and take into account students' movements through complex spatial and temporal traces of the discipline of physics. Mexican professors do not give classes following a single textbook but teach in a moment-to-moment open dynamism, tending to include undergraduate students as actors in classroom events and extending the teaching space-time of the classroom to the disciplinary research work of physics. I also find that Mexican undergraduate students show initiative and display some autonomy and power in the construction of their itineraries, as they are encouraged to examine a variety of sources including contemporary research articles and unsolved physics problems, and even to participate in several physicists' spaces, for example as speakers at the national congresses of physics. Their itineraries also open up new spaces of cultural and social practices, creating more extensive networks beyond those associated with a discipline. Some economic, historical and cultural contextual features of this school of sciences are analyzed in order to help understand the particular

  3. Entropy of space-time outcome in a movement speed-accuracy task.

    Science.gov (United States)

    Hsieh, Tsung-Yu; Pacheco, Matheus Maia; Newell, Karl M

    2015-12-01

    The reported experiment was set up to investigate the space-time entropy of movement outcome as a function of a range of spatial (10, 20 and 30 cm) and temporal (250-2500 ms) criteria in a discrete aiming task. The variability and information entropy of the movement spatial and temporal errors, considered separately, increased and decreased on the respective dimension as a function of an increment of movement velocity. However, the joint space-time entropy was lowest when the relative contribution of spatial and temporal task criteria was comparable (i.e., mid-range of space-time constraints), and it increased with a greater trade-off between spatial or temporal task demands, revealing a U-shaped function across space-time task criteria. The traditional speed-accuracy functions of spatial error and temporal error considered independently mapped to this joint space-time U-shaped entropy function. The trade-off in movement tasks with joint space-time criteria is between spatial error and timing error, rather than movement speed and accuracy. Copyright © 2015 Elsevier B.V. All rights reserved.
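
The joint space-time entropy measure can be sketched from binned error distributions. The synthetic Gaussian errors and bin counts below are illustrative, not the experimental data.

```python
import numpy as np

def entropy(counts):
    """Shannon entropy (bits) of a histogram of outcome counts."""
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(4)
sp_err = rng.normal(0.0, 1.0, 5000)     # hypothetical spatial errors (cm)
t_err = rng.normal(0.0, 20.0, 5000)     # hypothetical temporal errors (ms)

h_sp = entropy(np.histogram(sp_err, bins=30)[0])
h_t = entropy(np.histogram(t_err, bins=30)[0])
joint = entropy(np.histogram2d(sp_err, t_err, bins=30)[0])  # space-time entropy
```

Repeating this across speed conditions traces the U-shaped joint-entropy function of the paper: tightening one dimension shrinks its marginal entropy while inflating the other's, and the joint measure captures the combined outcome uncertainty.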

  4. Space-time modeling of electricity spot prices

    DEFF Research Database (Denmark)

    Abate, Girum Dagnachew; Haldrup, Niels

    In this paper we derive a space-time model for electricity spot prices. A general spatial Durbin model that incorporates the temporal as well as spatial lags of spot prices is presented. Joint modeling of space-time effects is necessarily important when prices and loads are determined in a network...... in the spot price dynamics. Estimation of the spatial Durbin model show that the spatial lag variable is as important as the temporal lag variable in describing the spot price dynamics. We use the partial derivatives impact approach to decompose the price impacts into direct and indirect effects and we show...... that price effects transmit to neighboring markets and decline with distance. In order to examine the evolution of the spatial correlation over time, a time varying parameters spot price spatial Durbin model is estimated using recursive estimation. It is found that the spatial correlation within the Nord...
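
The structure of such a space-time (spatial Durbin) price model and the direct/indirect impact decomposition can be sketched on a toy ring of price areas. The parameters ρ, φ, θ are illustrative, and maximum-likelihood estimation is omitted.

```python
import numpy as np

n = 8
W = np.zeros((n, n))
for i in range(n):                       # ring of neighboring price areas,
    W[i, (i - 1) % n] = 0.5              # row-standardized weights
    W[i, (i + 1) % n] = 0.5

rho, phi, theta = 0.4, 0.6, -0.1         # hypothetical SDM parameters
Ai = np.linalg.inv(np.eye(n) - rho * W)  # reduced-form multiplier (I - rho W)^-1

# simulate y_t = rho W y_t + phi y_{t-1} + theta W y_{t-1} + eps_t
rng = np.random.default_rng(5)
y = np.zeros(n)
for _ in range(200):
    y = Ai @ (phi * y + theta * (W @ y) + rng.normal(0.0, 1.0, n))

# impact decomposition of a contemporaneous shock (LeSage-Pace style)
direct = np.mean(np.diag(Ai))            # own-market effect
indirect = np.mean(Ai.sum(axis=1) - np.diag(Ai))  # spillover to neighbors
```

With a row-standardized W, direct plus indirect effects sum to 1/(1 - ρ), so a larger spatial lag coefficient implies stronger total price transmission across the network; replacing W's entries with inverse-distance weights reproduces the decline of spillovers with distance noted in the record.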

  5. Black holes in loop quantum gravity: the complete space-time.

    Science.gov (United States)

    Gambini, Rodolfo; Pullin, Jorge

    2008-10-17

    We consider the quantization of the complete extension of the Schwarzschild space-time using spherically symmetric loop quantum gravity. We find an exact solution corresponding to the semiclassical theory. The singularity is eliminated but the space-time still contains a horizon. Although the solution is known partially numerically and therefore a proper global analysis is not possible, a global structure akin to a singularity-free Reissner-Nordström space-time including a Cauchy horizon is suggested.

  6. Time-Lapse Analysis of Methane Quantity in the Mary Lee Group of Coal Seams Using Filter-Based Multiple-Point Geostatistical Simulation.

    Science.gov (United States)

    Karacan, C Özgen; Olea, Ricardo A

    2013-08-01

    Coal seam degasification and its success are important for controlling methane, and thus for the health and safety of coal miners. During the course of degasification, the properties of coal seams change. Thus, the changes in coal reservoir conditions and in-place gas content, as well as methane emission potential into mines, should be evaluated by examining time-dependent changes and the presence of major heterogeneities and geological discontinuities in the field. In this work, time-lapsed reservoir and fluid storage properties of the New Castle coal seam, Mary Lee/Blue Creek seam, and Jagger seam of the Black Warrior Basin, Alabama, were determined from gas and water production history matching and production forecasting of vertical degasification wellbores. These properties were combined with isotherm and other important data to compute gas-in-place (GIP) and its change with time at borehole locations. Time-lapsed training images (TIs) of GIP and GIP difference corresponding to each coal and date were generated by using these point-wise data and Voronoi decomposition on the TI grid, which included faults as discontinuities for expansion of Voronoi regions. Filter-based multiple-point geostatistical simulations, preferred in this study due to anisotropies and discontinuities in the area, were used to predict time-lapsed GIP distributions within the study area. The performed simulations were used for mapping spatial time-lapsed methane quantities as well as their uncertainties within the study area. The systematic approach presented in this paper is the first in the literature to combine history matching, GIP TIs, and filter simulations for degasification performance evaluation and for assessing GIP for mining safety. Results from this study showed that production history matching of coalbed methane wells to determine time-lapsed reservoir data could be used to compute spatial GIP and representative GIP TIs generated through Voronoi decomposition

  7. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions.

    Science.gov (United States)

    Tao, Guohua; Miller, William H

    2011-07-14

    An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be generally applied to sampling rare events efficiently while avoiding becoming trapped in a local region of the phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.
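    As a generic illustration of the importance-sampling idea described above (choosing a sampling density that concentrates where the contributions actually are), here is a minimal sketch for a rare-event integral. The Gaussian tail problem and all parameter values are illustrative stand-ins, not the paper's SC-IVR method:

    ```python
    import math
    import random

    def tail_probability(n=200_000, a=3.0, seed=1):
        """Estimate P(X > a) for X ~ N(0, 1) two ways: naive Monte Carlo, and
        importance sampling with a proposal N(a, 1) shifted onto the rare region.
        The likelihood ratio phi(y)/phi(y - a) = exp(a^2/2 - a*y) weights each
        proposal draw back to the target density."""
        rng = random.Random(seed)
        naive = sum(rng.gauss(0.0, 1.0) > a for _ in range(n)) / n
        acc = 0.0
        for _ in range(n):
            y = rng.gauss(a, 1.0)                    # sample where the event happens
            if y > a:
                acc += math.exp(a * a / 2.0 - a * y)  # importance weight
        return naive, acc / n
    ```

    With the proposal matched to the rare region, the importance-sampling estimator's variance is orders of magnitude below the naive one for the same sample count, which is the same payoff the abstract reports for trajectory sampling.
    
    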

  8. Geostatistical analysis of potentiometric data in Wolfcamp aquifer of the Palo Duro Basin, Texas

    International Nuclear Information System (INIS)

    Harper, W.V.; Furr, J.M.

    1986-04-01

    This report details a geostatistical analysis of potentiometric data from the Wolfcamp aquifer in the Palo Duro Basin, Texas. Such an analysis is a part of an overall uncertainty analysis for a high-level waste repository in salt. Both an expected potentiometric surface and the associated standard error surface are produced. The Wolfcamp data are found to be well explained by a linear trend with a superimposed spherical semivariogram. A cross-validation of the analysis confirms this. In addition, the cross-validation provides a point-by-point check to test for possible anomalous data
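    The spherical semivariogram model mentioned above has a standard closed form; a minimal sketch follows, where the nugget, sill, and range values in the example call are hypothetical, not the report's fitted Wolfcamp parameters:

    ```python
    import numpy as np

    def spherical_semivariogram(h, nugget, sill, rng):
        """Spherical model: gamma rises from the nugget to the sill at lag
        h = rng (the range) and is flat beyond; gamma(0) = 0 by convention."""
        h = np.asarray(h, dtype=float)
        gamma = np.where(
            h < rng,
            nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
            sill,
        )
        return np.where(h == 0, 0.0, gamma)

    # Hypothetical parameters for illustration only.
    g = spherical_semivariogram([0.0, 500.0, 2000.0], nugget=0.0, sill=1.0, rng=1500.0)
    ```

    A fitted model of this form, plus the linear trend, is what drives both the kriged surface and the standard-error surface the report describes.
    
    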

  9. Vacuum polarization on black hole space times

    International Nuclear Information System (INIS)

    Jensen, B.P.

    1985-01-01

    The effects of vacuum polarization in black hole space-times are examined. Particular attention is given to the vacuum physics inside the event horizon. The analytic properties of the solutions to the radial wave equation in Schwarzschild space-time as functions of argument, frequency, and angular momentum are given. These functions are employed to define the Feynman Green function G_F(x,x') for a scalar field subject to the Hartle-Hawking boundary conditions. An examination of the Schwarzschild mode functions near r = 0 is provided. This work is necessary background for a future calculation of ⟨φ²⟩ and the quantum stress-energy tensor for small r. Some opinions are given on how this calculation might be performed. A solution of the one-loop Einstein equations for Schwarzschild Anti-de Sitter (SAdS) space-time is presented, using Page's approximation to the quantum stress tensor. The resulting perturbed metric is shown to be unphysical, as it leads to a system of fields with infinite total energy. This problem is believed to be due to a failure of Page's method in SAdS. Suggestions are given on how one might correct the method

  10. Time step length versus efficiency of Monte Carlo burnup calculations

    International Nuclear Information System (INIS)

    Dufek, Jan; Valtavirta, Ville

    2014-01-01

    Highlights: • Time step length largely affects efficiency of MC burnup calculations. • Efficiency of MC burnup calculations improves with decreasing time step length. • Results were obtained from SIE-based Monte Carlo burnup calculations. - Abstract: We demonstrate that efficiency of Monte Carlo burnup calculations can be largely affected by the selected time step length. This study employs the stochastic implicit Euler based coupling scheme for Monte Carlo burnup calculations that performs a number of inner iteration steps within each time step. In a series of calculations, we vary the time step length and the number of inner iteration steps; the results suggest that Monte Carlo burnup calculations get more efficient as the time step length is reduced. More time steps must be simulated as they get shorter; however, this is more than compensated by the decrease in computing cost per time step needed for achieving a certain accuracy
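    The inner-iteration structure described above can be illustrated on a toy problem. This is a hypothetical scalar stand-in for the transport/depletion coupling (the reaction rate depends on the composition itself), not the stochastic implicit Euler scheme of the paper:

    ```python
    def deplete(n0=1.0, t_end=1.0, steps=100, inner=8):
        """Toy implicitly coupled burnup step: nuclide density N obeys
        dN/dt = -lam(N) * N, where the rate lam depends on N (a stand-in for
        flux/composition feedback). Each time step solves the implicit Euler
        equation N_new = N_old - dt*lam(N_new)*N_new by a few inner fixed-point
        iterations, loosely analogous to the scheme's inner iterations."""
        lam = lambda n: 1.0 / (1.0 + n)   # hypothetical composition-dependent rate
        dt = t_end / steps
        n = n0
        for _ in range(steps):
            m = n                          # initial guess: end-of-step value
            for _ in range(inner):
                m = n / (1.0 + dt * lam(m))
            n = m
        return n
    ```

    For this toy ODE the exact solution at t = 1 satisfies ln N + N = 0, i.e. N ≈ 0.5671; refining the time step drives the implicit Euler result toward it, mirroring the accuracy-per-cost trade the abstract studies.
    
    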

  11. Time-Space Trade-Offs for the Longest Common Substring Problem

    DEFF Research Database (Denmark)

    Starikovskaya, Tatiana; Vildhøj, Hjalte Wedel

    2013-01-01

    The Longest Common Substring problem is to compute the longest substring which occurs in at least d ≥ 2 of m strings of total length n. In this paper we ask whether this problem allows a deterministic time-space trade-off using O(n^(1+ε)) time and O(n^(1-ε)) space for 0 ≤ ε ≤ 1. We give a ...... a positive answer in the case of two strings (d = m = 2) and 0 can be solved in O(n^(1-ε)) space and O(n^(1+ε) log^2 n (d log^2 n + d^2)) time for any 0 ≤ ε ...
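    As a reference point for these trade-offs, the classic dynamic program for the two-string case (d = m = 2) runs in quadratic time and linear extra space; a minimal sketch (this is the textbook baseline the paper's trade-off improves upon, not its algorithm):

    ```python
    def longest_common_substring(s, t):
        """Baseline DP for two strings: O(|s|*|t|) time, O(min(|s|,|t|)) extra
        space. cur[j] holds the length of the longest common suffix of
        s[:i] and t[:j]; the maximum over all (i, j) is the answer."""
        if len(t) > len(s):
            s, t = t, s                     # keep the DP rows on the shorter string
        best, best_end = 0, 0
        prev = [0] * (len(t) + 1)
        for i in range(1, len(s) + 1):
            cur = [0] * (len(t) + 1)
            for j in range(1, len(t) + 1):
                if s[i - 1] == t[j - 1]:
                    cur[j] = prev[j - 1] + 1
                    if cur[j] > best:
                        best, best_end = cur[j], j
            prev = cur
        return t[best_end - best:best_end]
    ```
    
    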

  12. Flat synchronizations in spherically symmetric space-times

    International Nuclear Information System (INIS)

    Herrero, Alicia; Morales-Lladosa, Juan Antonio

    2010-01-01

    It is well known that the Schwarzschild space-time admits a spacelike slicing by flat instants and that the metric is regular at the horizon in the associated adapted coordinates (Painlevé-Gullstrand metric form). We consider this type of flat slicing in an arbitrary spherically symmetric space-time. The condition ensuring its existence is analyzed, and then we prove that, for any spherically symmetric flat slicing, the densities of the Weinberg momenta vanish. Finally, we deduce the Schwarzschild solution in the extended Painlevé-Gullstrand-Lemaître metric form by considering the coordinate decomposition of the vacuum Einstein equations with respect to a flat spacelike slicing.

  13. Quantum universe on extremely small space-time scales

    International Nuclear Information System (INIS)

    Kuzmichev, V.E.; Kuzmichev, V.V.

    2010-01-01

    The semiclassical approach to the quantum geometrodynamical model is used for the description of the properties of the Universe on extremely small space-time scales. Under this approach, the matter in the Universe has two components of a quantum nature which behave as antigravitating fluids. The first component does not vanish in the limit ℏ → 0 and can be associated with dark energy. The second component is described by an extremely rigid equation of state and goes to zero after the transition to large space-time scales. On small space-time scales, this quantum correction turns out to be significant. It determines the geometry of the Universe near the initial cosmological singularity point. This geometry is conformal to a unit four-sphere embedded in a five-dimensional Euclidean flat space. During the subsequent expansion of the Universe, on reaching the post-Planck era, the geometry of the Universe changes into that conformal to a unit four-hyperboloid in a five-dimensional Lorentz-signatured flat space. This agrees with the hypothesis about the possible change of geometry after the origin of the expanding Universe from the region near the initial singularity point. The origin of the Universe can be interpreted as a quantum transition of the system from a region in the phase space forbidden for the classical motion, but where a trajectory in imaginary time exists, into a region where the equations of motion have the solution which describes the evolution of the Universe in real time. Near the boundary between the two regions, from the side of real time, the Universe undergoes an almost exponential expansion which passes smoothly into the expansion under the action of radiation dominating over matter, as described by the standard cosmological model.

  14. Space-Time Data Fusion

    Science.gov (United States)

    Braverman, Amy; Nguyen, Hai; Olsen, Edward; Cressie, Noel

    2011-01-01

    Space-time Data Fusion (STDF) is a methodology for combining heterogeneous remote sensing data to optimally estimate the true values of a geophysical field of interest, and to obtain uncertainties for those estimates. The input data sets may have different observing characteristics, including different footprints, spatial resolutions and fields of view, orbit cycles, biases, and noise characteristics. Despite these differences, all observed data can be linked to the underlying field, and therefore to each other, by a statistical model. Differences in footprints and other geometric characteristics are accounted for by parameterizing pixel-level remote sensing observations as spatial integrals of true field values lying within pixel boundaries, plus measurement error. Both spatial and temporal correlations in the true field and in the observations are estimated and incorporated through the use of a space-time random effects (STRE) model. Once the model's parameters are estimated, we use it to derive expressions for optimal (minimum mean squared error and unbiased) estimates of the true field at any arbitrary location of interest, computed from the observations. Standard errors of these estimates are also produced, allowing confidence intervals to be constructed. The procedure is carried out on a fine spatial grid to approximate a continuous field. We demonstrate STDF by applying it to the problem of estimating CO2 concentration in the lower atmosphere using data from the Atmospheric Infrared Sounder (AIRS) and the Japanese Greenhouse Gases Observing Satellite (GOSAT) over one year for the continental US.
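    The simplest instance of a minimum-MSE unbiased combination of heterogeneous observations is inverse-variance weighting of two measurements of one quantity. This one-pixel sketch conveys only the spirit of the optimal-estimation step; the actual STDF method additionally models footprints and space-time correlation through the STRE model:

    ```python
    def fuse(y1, var1, y2, var2):
        """Minimum-variance unbiased combination of two unbiased, independent
        observations of the same quantity: weight each by its inverse variance.
        The fused variance is always below either input variance."""
        w1, w2 = 1.0 / var1, 1.0 / var2
        est = (w1 * y1 + w2 * y2) / (w1 + w2)
        var = 1.0 / (w1 + w2)
        return est, var
    ```

    For equal variances this reduces to the plain average with half the variance; a noisier instrument is automatically down-weighted rather than discarded.
    
    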

  15. 1.114-Gb/s time/space division switch system

    Science.gov (United States)

    Pawelski, Robert L.; Nordin, Ronald A.; Huisman, R. F.; Kelly, S.; Payne, William A.; Veach, R. S.

    1990-10-01

    Advanced digital communication services such as Broadband ISDN, High-Definition Television (HDTV), and enhanced data networking are expected to require the high-bandwidth, fast-reconfiguration-time switching centers becoming available in the 1990's. Digital GaAs ICs can allow the implementation of switching centers providing these services efficiently and at low cost. The low cost arises from the reduction in hardware, power, maintenance, etc., when the switch is designed to operate at the incoming data rate instead of at a lower rate. In order to utilize the capacity of a high-bandwidth data link, time-division multiplexing is employed. This is a technique where multiple digital signals are interleaved (by bit, byte, or block) on one data link. Clearly it is advantageous to have a switch that not only has a large bandwidth but can reconfigure at the data rate so as to provide bit, byte, or block switching functions, thus being compatible with many different transmission formats. We present an experimental Time/Space Division Switch System capable of operating at over 1 Gb/s. Both custom and commercial Gallium Arsenide (GaAs) devices are used in the design of the various system functional blocks. These functional blocks include a Time Slot Interchanger (TSI), Time Multiplexed Switch (TMS), TMS Controller, Multiplexer, and Demultiplexers. In addition to the system overview, we discuss such issues as printed circuit board microwave interconnections and CAD tools for high speed

  16. Space Network Time Distribution and Synchronization Protocol Development for Mars Proximity Link

    Science.gov (United States)

    Woo, Simon S.; Gao, Jay L.; Mills, David

    2010-01-01

    Time distribution and synchronization in the deep space network are challenging due to long propagation delays, spacecraft movements, and relativistic effects. Further, the Network Time Protocol (NTP), designed for terrestrial networks, may not work properly in space. In this work, we consider a time distribution protocol based on time message exchanges similar to NTP. We present the Proximity-1 Space Link Interleaved Time Synchronization (PITS) algorithm that can work with the CCSDS Proximity-1 Space Data Link Protocol. The PITS algorithm provides faster time synchronization via two-way time transfer over proximity links, improves scalability as the number of spacecraft increases, lowers the storage space requirement for collecting time samples, and is robust against packet loss and duplication which underlying protocol mechanisms provide.
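    The two-way time-transfer algebra that NTP-style exchanges rely on (the abstract describes PITS as based on similar time message exchanges) computes clock offset and round-trip delay from four timestamps:

    ```python
    def ntp_offset_delay(t1, t2, t3, t4):
        """Standard two-way time-transfer estimates: t1 = client send,
        t2 = server receive, t3 = server send, t4 = client receive
        (t1, t4 on the client clock; t2, t3 on the server clock)."""
        offset = ((t2 - t1) + (t3 - t4)) / 2.0   # server clock minus client clock
        delay = (t4 - t1) - (t3 - t2)            # round-trip path delay
        return offset, delay
    ```

    The offset estimate assumes symmetric path delays, which is exactly what long, asymmetric deep-space links can violate and what motivates protocol adaptations like PITS.
    
    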

  17. Mapping aboveground woody biomass using forest inventory, remote sensing and geostatistical techniques.

    Science.gov (United States)

    Yadav, Bechu K V; Nandy, S

    2015-05-01

    Mapping forest biomass is fundamental for estimating CO₂ emissions, and for planning and monitoring of forests and ecosystem productivity. The present study attempted to map aboveground woody biomass (AGWB) by integrating forest inventory, remote sensing and geostatistical techniques, viz., direct radiometric relationships (DRR), k-nearest neighbours (k-NN) and cokriging (CoK), and to evaluate their accuracy. A part of the Timli Forest Range of Kalsi Soil and Water Conservation Division, Uttarakhand, India was selected for the present study. Stratified random sampling was used to collect biophysical data from 36 sample plots of 0.1 ha (31.62 m × 31.62 m) size. Species-specific volumetric equations were used for calculating volume, which was multiplied by specific gravity to get biomass. Three forest-type density classes, viz. 10-40, 40-70 and >70% of Shorea robusta forest, and four non-forest classes were delineated using on-screen visual interpretation of IRS P6 LISS-III data of December 2012. The volume in different strata of forest-type density ranged from 189.84 to 484.36 m³ ha⁻¹. The total growing stock of the forest was found to be 2,024,652.88 m³. The AGWB ranged from 143 to 421 Mg ha⁻¹. Spectral bands and vegetation indices were used as independent variables and biomass as the dependent variable for DRR, k-NN and CoK. After validation and comparison, the k-NN method with Mahalanobis distance (root mean square error (RMSE) = 42.25 Mg ha⁻¹) was found to be the best method, followed by fuzzy distance and Euclidean distance with RMSEs of 44.23 and 45.13 Mg ha⁻¹, respectively. DRR was found to be the least accurate method, with an RMSE of 67.17 Mg ha⁻¹. The study highlighted the potential of integrating forest inventory, remote sensing and geostatistical techniques for forest biomass mapping.
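    A minimal sketch of the k-NN step with Mahalanobis distance and RMSE validation. The data here are synthetic; the study's actual predictors were spectral bands and vegetation indices, and its full workflow also includes band selection and a proper validation design:

    ```python
    import numpy as np

    def knn_predict(X_train, y_train, X_query, k=3):
        """k-nearest-neighbour estimate under squared Mahalanobis distance,
        using the inverse feature covariance of the training predictors."""
        X_train = np.asarray(X_train, dtype=float)
        y_train = np.asarray(y_train, dtype=float)
        VI = np.linalg.inv(np.cov(X_train, rowvar=False))  # inverse covariance
        preds = []
        for q in np.asarray(X_query, dtype=float):
            d = X_train - q
            dist2 = np.einsum('ij,jk,ik->i', d, VI, d)     # squared Mahalanobis
            nn = np.argsort(dist2)[:k]
            preds.append(y_train[nn].mean())               # mean of k neighbours
        return np.array(preds)

    def rmse(y_true, y_pred):
        err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
        return float(np.sqrt(np.mean(err ** 2)))
    ```

    Swapping `VI` for the identity matrix recovers the Euclidean-distance variant the study compares against.
    
    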

  18. Time-space trade-offs for lempel-ziv compressed indexing

    DEFF Research Database (Denmark)

    Bille, Philip; Ettienne, Mikko Berggren; Gørtz, Inge Li

    2017-01-01

    Given a string S, the compressed indexing problem is to preprocess S into a compressed representation that supports fast substring queries. The goal is to use little space relative to the compressed size of S while supporting fast queries. We present a compressed index based on the Lempel-Ziv 1977...... compression scheme. Let n and z denote the size of the input string and the compressed LZ77 string, respectively. We obtain the following time-space trade-offs. Given a pattern string P of length m, we can solve the problem in (i) O(m + occ lg lg n) time using O(z lg(n/z) lg lg z) space, or (ii) (m (1...... best space bound, but has a leading term in the query time of O(m(1 + lg^ε z / lg(n/z))). However, for any polynomial compression ratio, i.e., z = O(n^(1-δ)) for constant δ > 0, this becomes O(m). Our index also supports extraction of any substring of length ℓ in O(ℓ + lg(n/z)) time. Technically, our...

  19. On maximal surfaces in asymptotically flat space-times

    International Nuclear Information System (INIS)

    Bartnik, R.; Chrusciel, P.T.; O Murchadha, N.

    1990-01-01

    Existence of maximal and 'almost maximal' hypersurfaces in asymptotically flat space-times is established under boundary conditions weaker than those considered previously. We show in particular that every vacuum evolution of asymptotically flat data for the Einstein equations can be foliated by slices maximal outside a spatially compact set, and that every (strictly) stationary asymptotically flat space-time can be foliated by maximal hypersurfaces. Amongst other uniqueness results, we show that maximal hypersurfaces can be used to 'partially fix' an asymptotic Poincaré group. (orig.)

  20. Geostatistical modelling of the spatial life history of post-larval deepwater hake Merluccius paradoxus in the Benguela Current Large Marine Ecosystem

    DEFF Research Database (Denmark)

    Jansen, T; Kristensen, K; Fairweather, T. P.

    2017-01-01

    paradoxus are not reflected in the current assessment and management practices for the Benguela Current Large Marine Ecosystem. In this study, we compiled data from multiple demersal trawl surveys from the entire distribution area and applied state-of the-art geostatistical population modelling (Geo...

  1. The space-time cube revisited it potential to visualize mobile data

    DEFF Research Database (Denmark)

    Kveladze, Irma; Kraak, Menno-Jan

    2010-01-01

    and analyse the complex movement patterns (COST - MOVE, 2009; Keim et al., 2008). This results in the development of new visual analytical and exploratory tools, while existing solutions receive new attention (Andrienko et al., 2007). Among the latter, the Space-Time Cube (STC) can be counted. It has the ability...... to provide information about spatial and temporal relationships. The original idea of the STC was introduced by Hägerstrand (1970). It represents an elegant framework to study the spatio-temporal characteristics of human activity (Kraak and Koussoulakou, 2005). The vertical dimension of the cube represents time (t......), while the horizontal axes represent space (x, y). Basic elements represented in the cube are the Space-Time Path (STP), Stations, and the Space-Time Prism (STP). The STP represents the continuous activities of movements undertaken in space and time, displayed as a trajectory. It has been studied......

  2. The evolution of conceptions about space and time in literary theory

    Directory of Open Access Journals (Sweden)

    Lazić Nebojša J.

    2012-01-01

    Full Text Available. This work considers the function of space and time in the poetics of the literary text from antiquity to the theory of deconstruction, that is, from Aristotle to Jacques Derrida and Paul de Man. The science of literature has not treated the problem of space and the problem of time equally as elements of the literary work's structure. This imbalance has been to the detriment of the study of space, since there is already a significant number of monographs on time. Since the categories of space and time are also areas of study in the physical sciences and the humanities, it was necessary to consider how these questions are treated in exact sciences such as physics and mathematics. Further development of the science of literature is not possible without describing the role of space and time in the writing and shaping of a literary text.

  3. A geostatistical approach to identify and mitigate agricultural nitrous oxide emission hotspots.

    Science.gov (United States)

    Turner, P A; Griffis, T J; Mulla, D J; Baker, J M; Venterea, R T

    2016-12-01

    Anthropogenic emissions of nitrous oxide (N₂O), a trace gas with severe environmental costs, are greatest from agricultural soils amended with nitrogen (N) fertilizer. However, accurate N₂O emission estimates at fine spatial scales are made difficult by their high variability, which represents a critical challenge for the management of N₂O emissions. Here, static chamber measurements (n=60) and soil samples (n=129) were collected at approximately weekly intervals (n=6) for 42 d immediately following the application of N in a southern Minnesota cornfield (15.6 ha), typical of the systems prevalent throughout the U.S. Corn Belt. These data were integrated into a geostatistical model that resolved N₂O emissions at a high spatial resolution (1 m). Field-scale N₂O emissions exhibited a high degree of spatial variability and were partitioned into three classes of emission strength: hotspots, intermediate, and coldspots. Rates of emission from hotspots were 2-fold greater than from non-hotspot locations. Consequently, 36% of the field-scale emissions could be attributed to hotspots, despite these representing only 21% of the total field area. Variations in elevation caused hotspots to develop in predictable locations, which were prone to the nutrient and moisture accumulation caused by terrain focusing. Because these features are relatively static, our data and analyses indicate that targeted management of hotspots could efficiently reduce field-scale emissions by as much as 17%, a significant benefit considering the deleterious effects of atmospheric N₂O. Copyright © 2016 Elsevier B.V. All rights reserved.
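    The hotspot bookkeeping described above can be sketched as labelling grid cells above a flux threshold and reporting the emission and area shares they carry. The quantile threshold here is an illustrative stand-in for the paper's model-based classification:

    ```python
    import numpy as np

    def hotspot_share(flux, q=0.79):
        """Label cells above the q-th flux quantile as hotspots; return the
        fraction of total emissions they carry and the fraction of total area
        they occupy. A large emission share on a small area share is the
        signature of a hotspot-dominated field."""
        flux = np.asarray(flux, dtype=float)
        thresh = np.quantile(flux, q)
        hot = flux > thresh
        emission_share = flux[hot].sum() / flux.sum()
        area_share = hot.mean()
        return emission_share, area_share
    ```

    In the study's numbers, the analogous shares were 36% of emissions on 21% of the area, which is what makes targeted management of those cells efficient.
    
    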

  4. How to use the cosmological Schwinger principle for energy flux, entropy, and 'atoms of space-time' to create a thermodynamic space-time and multiverse

    International Nuclear Information System (INIS)

    Beckwith, Andrew

    2011-01-01

    We make explicit an idea by Padmanabhan in DICE 2010 as to finding 'atoms of space-time' permitting a thermodynamic treatment of emergent structure similar to Gibbs' treatment of statistical physics. That is, an ensemble of gravitons is used to give an 'atom' of space-time congruent with relic GW. The idea is to reduce the number of independent variables to get a simple emergent space-time structure of entropy. An electric field, based upon the cosmological Schwinger principle, is linked to relic heat flux, with entropy production tied in with candidates for inflaton potentials. The effective electric field links with Schwinger's 1951 result of an E field leading to pairs of e⁺e⁻ charges nucleated in a space-time volume V · t. Note that in most inflationary models, the assumption is of a magnetic field, not an electric field. An electric field permits a kink-anti-kink construction of an emergent structure, which includes Glinka's recent pioneering approach to a Multiverse. Also, an E field allows for an emergent relic particle frequency range between one and 100 GHz. The novel contribution is a relic E field, instead of a B field, in relic space-time 'atom' formation and vacuum nucleation of the same.
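    For reference, Schwinger's 1951 result cited above gives the vacuum pair-production rate per unit volume in a constant electric field E (natural units, with electron mass m and charge e):

    ```latex
    \Gamma \;=\; \frac{(eE)^{2}}{4\pi^{3}} \sum_{n=1}^{\infty} \frac{1}{n^{2}} \exp\!\left(-\frac{n\pi m^{2}}{eE}\right)
    ```

    The rate is exponentially suppressed unless E approaches the critical field E_c = m²/e, which is why the strength of any effective relic E field controls how readily pairs nucleate in the space-time volume V · t.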

  5. Theorizing Space-Time Relations in Education: The Concept of Chronotope

    Science.gov (United States)

    Ritella, Giuseppe; Ligorio, Maria Beatrice; Hakkarainen, Kai

    2016-01-01

    Due to ongoing cultural-historical transformations, the space-time of learning is radically changing, and theoretical conceptualizations are needed to investigate how such evolving space-time frames can function as a ground for learning. In this article, we argue that the concept of chronotope--from Greek chronos and topos, meaning time and…

  6. Stringy Fuzziness as the Custodial of Time-Space Noncommutativity

    CERN Document Server

    Barbón, José L F

    2000-01-01

    We study aspects of obtaining field theories with noncommuting time-space coordinates as limits of open-string theories in constant electric-field backgrounds. We find that, within the standard closed-string backgrounds, there is an obstruction to decoupling the time-space noncommutativity scale from that of the string fuzziness scale. We speculate that this censorship may be string-theory's way of protecting the causality and unitarity structure. We study the moduli space of the obstruction in terms of the open- and closed-string backgrounds. Cases of both zero and infinite brane tensions as well as zero string couplings are obtained. A decoupling can be achieved formally by considering complex values of the dilaton and inverting the role of space and time in the light cone. This is reminiscent of a black-hole horizon. We study the corresponding supergravity solution in the large-N limit and find that the geometry has a naked singularity at the physical scale of noncommutativity.

  7. Stringy fuzziness as the custodian of time-space noncommutativity

    CERN Document Server

    Barbón, José L F

    2000-01-01

    We study aspects of obtaining field theories with noncommuting time-space coordinates as limits of open-string theories in constant electric-field backgrounds. We find that, within the standard closed-string backgrounds, there is an obstruction to decoupling the time-space noncommutativity scale from that of the string fuzziness scale. We speculate that this censorship may be string-theory's way of protecting the causality and unitarity structure. We study the moduli space of the obstruction in terms of the open- and closed-string backgrounds. Cases of both zero and infinite brane tensions as well as zero string couplings are obtained. A decoupling can be achieved formally by considering complex values of the dilaton and inverting the role of space and time in the light cone. This is reminiscent of a black-hole horizon. We study the corresponding supergravity solution in the large-N limit and find that the geometry has a naked singularity at the physical scale of noncommutativity. (23 refs).

  8. Geostatistical characterisation of geothermal parameters for a thermal aquifer storage site in Germany

    Science.gov (United States)

    Rodrigo-Ilarri, J.; Li, T.; Grathwohl, P.; Blum, P.; Bayer, P.

    2009-04-01

    The design of geothermal systems such as aquifer thermal energy storage (ATES) systems must account for a comprehensive characterisation of all relevant parameters considered in the numerical design model. Hydraulic and thermal conductivities are the most relevant parameters, and their distribution determines not only the technical design but also the economic viability of such systems. Hence, knowledge of the spatial distribution of these parameters is essential for the successful design and operation of such systems. This work shows the first results obtained when applying geostatistical techniques to the characterisation of the Esseling Site in Germany. At this site a long-term thermal tracer test (> 1 year) was performed. In this open system the spatial temperature distribution inside the aquifer was observed over time in order to obtain as much information as possible to yield a detailed characterisation of both the relevant hydraulic and thermal parameters. This poster shows the preliminary results obtained for the Esseling Site. It has been observed that the common homogeneous approach is not sufficient to explain the observations obtained from the TRT and that parameter heterogeneity must be taken into account.

  9. Space, time, matter

    CERN Document Server

    Weyl, Hermann

    1922-01-01

    Excellent introduction probes deeply into Euclidean space, Riemann's space, Einstein's general relativity, gravitational waves and energy, and laws of conservation. "A classic of physics." - British Journal for Philosophy and Science.

  10. Efficient O(N) recursive computation of the operational space inertial matrix

    International Nuclear Information System (INIS)

    Lilly, K.W.; Orin, D.E.

    1993-01-01

    The operational space inertia matrix Λ reflects the dynamic properties of a robot manipulator to its tip. In the control domain, it may be used to decouple force and/or motion control about the manipulator workspace axes. The matrix Λ also plays an important role in the development of efficient algorithms for the dynamic simulation of closed-chain robotic mechanisms, including simple closed-chain mechanisms such as multiple manipulator systems and walking machines. The traditional approach used to compute Λ has a computational complexity of O(N³) for an N degree-of-freedom manipulator. This paper presents the development of a recursive algorithm for computing the operational space inertia matrix (OSIM) that reduces the computational complexity to O(N). This algorithm, the inertia propagation method, is based on a single recursion that begins at the base of the manipulator and progresses out to the last link. Also applicable to redundant systems and mechanisms with multiple-degree-of-freedom joints, the inertia propagation method is the most efficient method known for computing Λ for N ≥ 6. The numerical accuracy of the algorithm is discussed for a PUMA 560 robot with a fixed base

  11. Space can substitute for time in predicting climate-change effects on biodiversity.

    Science.gov (United States)

    Blois, Jessica L; Williams, John W; Fitzpatrick, Matthew C; Jackson, Stephen T; Ferrier, Simon

    2013-06-04

    "Space-for-time" substitution is widely used in biodiversity modeling to infer past or future trajectories of ecological systems from contemporary spatial patterns. However, the foundational assumption--that drivers of spatial gradients of species composition also drive temporal changes in diversity--is rarely tested. Here, we empirically test the space-for-time assumption by constructing orthogonal datasets of compositional turnover of plant taxa and climatic dissimilarity through time and across space from Late Quaternary pollen records in eastern North America, then modeling climate-driven compositional turnover. Predictions relying on space-for-time substitution were ∼72% as accurate as "time-for-time" predictions. However, space-for-time substitution performed poorly during the Holocene, when temporal variation in climate was small relative to spatial variation, and required subsampling to match the extent of spatial and temporal climatic gradients. Despite this caution, our results generally support the judicious use of space-for-time substitution in modeling community responses to climate change.

  12. They Make Space and Give Time

    Indian Academy of Sciences (India)

    They Make Space and Give Time: The Engineer as Poet. Book review by Gangan Prathap. Resonance – Journal of Science Education, Volume 3, Issue 3. Author affiliation: National Aerospace Laboratories and the Jawaharlal Nehru Centre for Advanced Scientific Research, Bangalore.

  13. Space-time and Local Gauge Symmetries

    Indian Academy of Sciences (India)

    Symmetries of Particle Physics: Space-time and Local Gauge Symmetries. Sourendu Gupta. General Article, Resonance – Journal of Science Education, Volume 6, Issue 2, February 2001, pp. 29-38.

  14. Late time solution for interacting scalar in accelerating spaces

    Energy Technology Data Exchange (ETDEWEB)

    Prokopec, Tomislav, E-mail: t.prokopec@uu.nl [Institute for Theoretical Physics, Spinoza Institute and EMMEΦ, Utrecht University, Postbus 80.195, 3508 TD Utrecht, The Netherlands]

    2015-11-01

    We consider stochastic inflation of an interacting scalar field in spatially homogeneous accelerating space-times with a constant principal slow-roll parameter ε. We show that, if the scalar potential is scale invariant (which is the case when the scalar has a quartic self-interaction and couples non-minimally to gravity), the late-time solution on accelerating FLRW spaces can be described by a probability distribution function (PDF) ρ which is a function of φ/H only, where φ = φ(x) is the scalar field and H = H(t) denotes the Hubble parameter. We give explicit late-time solutions ρ → ρ∞(φ/H), and thereby find the order-ε corrections to the Starobinsky-Yokoyama result. This PDF can then be used to calculate e.g. various n-point functions of the (self-interacting) scalar field, which are valid at late times in arbitrary accelerating space-times with ε = constant.
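
    For context, the Starobinsky-Yokoyama result referred to above is, in the exact de Sitter limit (ε → 0), the equilibrium distribution of stochastic inflation (up to normalization):

```latex
% Starobinsky-Yokoyama late-time PDF in the de Sitter limit (epsilon -> 0),
% for a scalar with potential V(phi) and constant Hubble rate H.
\rho_\infty(\varphi) \;\propto\; \exp\!\left( -\frac{8\pi^2\, V(\varphi)}{3 H^4} \right)
```

    The paper's ρ∞(φ/H) can then be read as the order-ε deformation of this exponential for accelerating space-times with ε ≠ 0.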

  15. A geometric renormalization group in discrete quantum space-time

    International Nuclear Information System (INIS)

    Requardt, Manfred

    2003-01-01

    We model quantum space-time on the Planck scale as dynamical networks of elementary relations or time-dependent random graphs, the time dependence being an effect of the underlying dynamical network laws. We formulate a kind of geometric renormalization group on these (random) networks, leading to a hierarchy of increasingly coarse-grained networks of overlapping lumps. We provide arguments that this process may generate a fixed limit phase, representing our continuous space-time on a mesoscopic or macroscopic scale, provided that the underlying discrete geometry is critical in a specific sense (geometric long-range order). Our point of view is corroborated by a series of analytic and numerical results, which allow us to keep track of the geometric changes taking place on the various scales of the resolution of space-time. Of particular conceptual importance are the notions of dimension of such random systems on the various scales and the notion of geometric criticality.
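
    A toy illustration of one coarse-graining step of the kind the abstract describes (this is a simplified stand-in, not the paper's construction): nodes of a graph are grouped into "lumps" (here, closed neighbourhoods), and two lumps become adjacent in the coarser graph when they overlap.

```python
def coarse_grain(adj):
    """One coarse-graining step on an undirected graph.

    adj: dict mapping node -> set of neighbours.
    Returns the coarse adjacency dict, one lump per original node.
    """
    lumps = {v: {v} | adj[v] for v in adj}          # closed neighbourhood of each node
    coarse = {v: set() for v in lumps}
    for u in lumps:
        for v in lumps:
            if u != v and lumps[u] & lumps[v]:      # overlapping lumps -> edge
                coarse[u].add(v)
    return coarse

# 6-cycle: 0-1-2-3-4-5-0
cycle = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
coarser = coarse_grain(cycle)
# Lumps of a cycle overlap with nearest and next-nearest neighbours,
# so each coarse-graining step densifies the graph.
```

    Iterating such a map and tracking how quantities like dimension behave under it is the sense in which the construction resembles a renormalization group flow.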

  16. A bivariate space-time downscaler under space and time misalignment.

    Science.gov (United States)

    Berrocal, Veronica J; Gelfand, Alan E; Holland, David M

    2010-12-01

    Ozone and particulate matter PM(2.5) are co-pollutants that have long been associated with increased public health risks. Information on concentration levels for both pollutants comes from two sources: monitoring sites and output from complex numerical models that produce concentration surfaces over large spatial regions. In this paper, we offer a fully model-based approach for fusing these two sources of information for the pair of co-pollutants which is computationally feasible over large spatial regions and long periods of time. Due to the association between concentration levels of the two environmental contaminants, it is expected that information regarding one will help to improve prediction of the other. Misalignment is an obvious issue since the monitoring networks for the two contaminants only partly intersect and because the collection rate for PM(2.5) is typically less frequent than that for ozone. Extending previous work in Berrocal et al. (2009), we introduce a bivariate downscaler that provides a flexible class of bivariate space-time assimilation models. We discuss computational issues for model fitting and analyze a dataset for ozone and PM(2.5) for the ozone season during the year 2002. We show a modest improvement in predictive performance, not surprising in a setting where we can anticipate only a small gain.
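
    The core "downscaler" idea, stripped of the bivariate and spatial-random-effect machinery of the actual model, is to calibrate the numerical-model surface against the sparse monitor data. A heavily simplified, synthetic sketch (all data below are made up for illustration):

```python
import numpy as np

# Univariate downscaler skeleton: regress monitor observations on the
# numerical-model output at monitored sites, then apply the fitted
# correction to the model surface everywhere.
rng = np.random.default_rng(2)

grid = rng.uniform(20.0, 80.0, size=500)            # numerical-model concentration surface
truth = 5.0 + 0.9 * grid                            # unknown true surface (synthetic)
sites = rng.choice(500, size=40, replace=False)     # monitor locations on the grid
obs = truth[sites] + rng.normal(0.0, 2.0, size=40)  # noisy monitor data

# Least-squares fit of obs = b0 + b1 * model_output
X = np.column_stack([np.ones(40), grid[sites]])
(b0, b1), *_ = np.linalg.lstsq(X, obs, rcond=None)

downscaled = b0 + b1 * grid                         # calibrated surface on the full grid
rmse_raw = np.sqrt(np.mean((grid - truth) ** 2))
rmse_cal = np.sqrt(np.mean((downscaled - truth) ** 2))
```

    The actual model replaces the constant coefficients b0, b1 with spatially and temporally varying Gaussian processes, and couples the two pollutants so each informs the other.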

  17. Real Time Space Weather Support for Chandra X-ray Observatory Operations

    Science.gov (United States)

    O'Dell, S. L.; Miller, S.; Minow, J. I.; Wolk, S.; Aldcroft, T. L.; Spitzbart, B. D.; Swartz, D. A.

    2012-12-01

    NASA launched the Chandra X-ray Observatory in July 1999. Soon after first light in August 1999, however, degradation in the energy resolution and charge transfer efficiency of the Advanced CCD Imaging Spectrometer (ACIS) x-ray detectors was observed. The source of the degradation was quickly identified as radiation damage in the charge-transfer channel of the front-illuminated CCDs, caused by weakly penetrating ("soft", 100-500 keV) protons as Chandra passed through the Earth's radiation belts and ring currents. As soft protons were not considered a risk to spacecraft health before launch, the only on-board radiation monitoring system is the Electron, Proton, and Helium Instrument (EPHIN), which was included on Chandra with the primary purpose of monitoring energetic solar particle events. Further damage to the ACIS detector has been successfully mitigated through a combination of careful mission planning, autonomous on-board radiation protection, and manual intervention based upon real-time monitoring of the soft-proton environment. The AE-8 and AP-8 trapped radiation models and Chandra Radiation Models are used to schedule science operations in regions of low proton flux. EPHIN has been used as the primary autonomous in-situ radiation trigger, but it is not sensitive to the soft protons that damage the front-illuminated CCDs. Monitoring of near-real-time space weather data sources provides critical information on the proton environment outside the Earth's magnetosphere due to solar proton events and other phenomena. The operations team uses data from the Geostationary Operational Environmental Satellites (GOES) to provide near-real-time monitoring of the proton environment; however, these data do not give a representative measure of the soft-proton environment, so the team also relies on real-time data provided by NOAA's Space Weather Prediction Center. This presentation will discuss radiation mitigation against proton damage, including the models and real-time data sources used to protect the ACIS detector.

  18. The Thaayorre think of Time Like They Talk of Space.

    Science.gov (United States)

    Gaby, Alice

    2012-01-01

    Around the world, it is common to both talk and think about time in terms of space. But does our conceptualization of time simply reflect the space/time metaphors of the language we speak? Evidence from the Australian language Kuuk Thaayorre suggests not. Kuuk Thaayorre speakers do not employ active spatial metaphors in describing time. But this is not to say that spatial language is irrelevant to temporal construals: non-linguistic representations of time are shown here to covary with the linguistic system of describing space. This article contrasts two populations of ethnic Thaayorre from Pormpuraaw - one comprising Kuuk Thaayorre/English bilinguals and the other English-monolinguals - in order to distinguish the effects of language from environmental and other factors. Despite their common physical, social, and cultural context, the two groups differ in their representations of time in ways that are congruent with the language of space in Kuuk Thaayorre and English, respectively. Kuuk Thaayorre/English bilinguals represent time along an absolute east-to-west axis, in alignment with the high frequency of absolute frame of reference terms in Kuuk Thaayorre spatial description. The English-monolinguals, in contrast, represent time from left-to-right, aligning with the dominant relative frame of reference in English spatial description. This occurs in the absence of any east-to-west metaphors in Kuuk Thaayorre, or left-to-right metaphors in English. Thus the way these two groups think about time appears to reflect the language of space and not the language of time.

  19. Lag space estimation in time series modelling

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1997-01-01

    The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...
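
    One standard way to search a lag space (illustrative only; the truncated abstract does not specify the article's actual techniques) is to fit autoregressive models of increasing order and keep the order that minimizes an information criterion such as AIC:

```python
import numpy as np

def select_ar_order(x, p_max=8):
    """Fit AR(p) by least squares for p = 1..p_max; return the AIC-minimizing p."""
    best_p, best_aic = None, np.inf
    n = len(x)
    for p in range(1, p_max + 1):
        # Regressor matrix of lagged values: row for target t is [x[t-1], ..., x[t-p]]
        X = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
        y = x[p:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sigma2 = np.mean((y - X @ coef) ** 2)
        aic = len(y) * np.log(sigma2) + 2 * p      # AIC up to an additive constant
        if aic < best_aic:
            best_p, best_aic = p, aic
    return best_p

# Synthetic AR(2) process: x[t] = 0.6 x[t-1] - 0.3 x[t-2] + noise
rng = np.random.default_rng(1)
x = np.zeros(2000)
for t in range(2, len(x)):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()
p_hat = select_ar_order(x)
```

    For nonlinear models the same lag-selection question arises, but the chosen regressor vector then also fixes the input layer of, e.g., a neural network model.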

  20. Numerical simulation of a cabin ventilation subsystem in a space station oriented real-time system

    Directory of Open Access Journals (Sweden)

    Zezheng QIU

    2017-12-01

    An environment control and life support system (ECLSS) is an important system in a space station. The ECLSS is a typical complex system, and real-time simulation technology can help to accelerate its research process by using a distributed hardware-in-the-loop simulation system. An implicit fixed-time-step numerical integration method is recommended for a real-time simulation system with time-varying parameters. However, its computational efficiency is too low to satisfy real-time data interaction, especially for the complex ECLSS system running on a PC cluster. The instability problem of an explicit method strongly limits its application in ECLSS real-time simulation, although it has high computational efficiency. This paper proposes an improved numerical simulation method, based on the explicit Euler method, to overcome the instability problem. A temperature and humidity control subsystem (THCS) is first established, and its numerical stability is analyzed by using eigenvalue estimation theory. Furthermore, an adaptive operator is proposed to avoid the potential instability problem. The stability and accuracy of the proposed method are investigated carefully. Simulation results show that the proposed method provides a good way for complex time-variant systems to run real-time simulations on a PC cluster. Keywords: Numerical integration method, Real-time simulation, Stability, THCS, Time-variant system
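
    The stability issue at the heart of this abstract is easy to demonstrate on a scalar test problem (this sketch is illustrative, not the paper's THCS model or its exact adaptive operator): for x' = λ(t)·x with λ(t) < 0, explicit Euler is stable only when |1 + hλ| < 1, i.e. h < 2/|λ|. An eigenvalue-based adaptive step caps h accordingly:

```python
import math

def euler_fixed(lam, h, t_end, x0=1.0):
    """Explicit Euler with a fixed step; unstable whenever h >= 2/|lam(t)|."""
    x, t = x0, 0.0
    while t < t_end:
        x += h * lam(t) * x
        t += h
    return x

def euler_adaptive(lam, h_max, t_end, x0=1.0, safety=0.8):
    """Explicit Euler with the step capped by the current eigenvalue estimate."""
    x, t = x0, 0.0
    while t < t_end:
        h = min(h_max, safety * 2.0 / abs(lam(t)))   # stability-limited step
        x += h * lam(t) * x
        t += h
    return x

# Time-varying, stiff, always-negative eigenvalue: lam in [-120, -40].
lam = lambda t: -(80.0 + 40.0 * math.sin(t))

x_fixed = euler_fixed(lam, h=0.05, t_end=5.0)     # h=0.05 violates h < 2/|lam|
x_adapt = euler_adaptive(lam, h_max=0.05, t_end=5.0)

# The true solution decays toward 0: the fixed step blows up,
# the adaptive step stays bounded and decays.
```

    In the paper's setting the scalar λ is replaced by an estimate of the dominant eigenvalue of the time-varying system matrix, which is why the eigenvalue estimation analysis comes first.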